DroneTechVision · Edge AI · Field-Proven Detection

From raptor tracking to drone detection.

DroneTechVision applies battle-tested avian detection AI to the urgent problem of unauthorized drone identification, classification, and threat assessment — delivered on edge hardware at a fraction of incumbent cost.

121K+
Lines of Proven Code
53K+
Training Images
<2s
Detection to Alert
≈$150
Hardware Cost Per Node
Built on proven airborne detection.
DroneTechVision began as the SkyGuard project — an open-source raptor alert system protecting backyard poultry. That same AI pipeline — real-time object segmentation, species classification, and instant alerting — now targets a far larger threat.

The original SkyGuard system (now DroneTechVision) was developed as a Texas A&M capstone project to solve a real problem: 49% of chicken owners cite aerial predators as their top concern, yet no affordable detection solution existed.

We built a full-stack detection platform: YOLO-based segmentation, custom species classification via transfer learning on 53,000+ images, edge deployment on Raspberry Pi, and real-time alerting — all for under $150 per node.

That foundation — detecting small, fast-moving airborne objects against variable sky conditions on constrained hardware — is precisely the problem set that drone detection demands. The pivot is natural. The technology is ready.

01
Real-Time Object Detection
YOLO v11 segmentation at 25 FPS on edge hardware
02
Species Classification
Custom CNN via transfer learning — 400+ species, 85%+ top-1 accuracy
03
Instant Alert Pipeline
Sub-2-second response — audio, SMS, email, push
04
Edge-First Architecture
Full operation on $150 Raspberry Pi 5 node
05
Field Validated
Multi-site pilot deployment in operational farm environments
Four phases to full threat awareness.
Each phase builds on proven SkyGuard-derived technology, extending detection into classification, reporting, and threat assessment.
I
Phase 1 Detection
Distinguish drones from birds, aircraft, and environmental clutter in real-time using edge-deployed computer vision. Leverage existing YOLO segmentation pipeline retrained on drone imagery datasets.
Binary drone/non-drone classification
Sub-2-second detection latency
Multi-weather, multi-lighting robustness
False positive suppression via temporal filtering
Edge inference on Raspberry Pi 5 / Jetson
Tiling (SAHI) for small-object detection
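The false-positive suppression bullet above can be sketched as a sliding-window vote: a detection is only confirmed once it persists across enough recent frames, so a single spurious YOLO hit never triggers an alert. The class name, window size, and hit threshold below are illustrative, not the production values.

```python
from collections import deque

class TemporalFilter:
    """Suppress single-frame false positives by requiring a detection
    to persist across a sliding window of recent frames."""

    def __init__(self, window: int = 10, min_hits: int = 6):
        self.window = window              # frames of history to keep
        self.min_hits = min_hits          # hits required to confirm
        self.history = deque(maxlen=window)

    def update(self, detected: bool) -> bool:
        """Feed one frame's drone/no-drone result; return True once the
        detection is confirmed over the window."""
        self.history.append(detected)
        return sum(self.history) >= self.min_hits
```

In practice the per-frame boolean would come from the YOLO pipeline's class output; tuning `window` against frame rate trades alert latency against robustness.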
II
Phase 2 Classification
Determine the nature and type of detected drones — categorize by platform class, size profile, and flight characteristics using a multi-model classification pipeline mirroring the proven species-identification architecture.
Platform type identification (multirotor, fixed-wing, hybrid)
Size and altitude estimation via visual geometry
Manufacturer/model fingerprinting where possible
Behavioral signature analysis (hover, transit, orbit)
Transfer learning from drone image datasets
Confidence scoring with multi-class probability
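The multi-class confidence scoring bullet amounts to a softmax over classifier logits with an abstention fallback: if the top class is not confident enough, report "unknown" rather than guess. A minimal sketch, assuming the three platform classes named above and an illustrative confidence threshold:

```python
import math

PLATFORM_CLASSES = ["multirotor", "fixed-wing", "hybrid"]

def classify(logits, min_confidence=0.6):
    """Turn raw classifier logits into (label, confidence); fall back
    to 'unknown' when the top class is not confident enough."""
    # Numerically stable softmax over the logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    top = max(range(len(probs)), key=probs.__getitem__)
    if probs[top] < min_confidence:
        return "unknown", probs[top]
    return PLATFORM_CLASSES[top], probs[top]
```

The same pattern served the species classifier: abstaining below a threshold keeps downstream threat logic from acting on low-confidence labels.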
III
Phase 3 Reporting & Tracking
Transmit detection metadata, flight vector, and classification data to a central command and control system. Enable multi-node sensor fusion for coordinated airspace awareness across distributed sites.
REST API for centralized data ingestion
Flight vector estimation (heading, speed, altitude)
Multi-node sensor fusion and correlation
Geospatial event mapping and track history
Real-time dashboard with detection feeds
Encrypted, authenticated data transmission
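The flight-vector estimation bullet reduces to geometry over consecutive track points: given two timestamped positions, heading and ground speed fall out of the displacement. A sketch under the assumption of local east/north coordinates in metres (the function name and coordinate convention are illustrative):

```python
import math

def flight_vector(p1, p2):
    """Estimate heading (degrees clockwise from north) and ground speed
    (m/s) from two timestamped positions (east_m, north_m, t_s)."""
    (x1, y1, t1), (x2, y2, t2) = p1, p2
    dx, dy, dt = x2 - x1, y2 - y1, t2 - t1
    heading = math.degrees(math.atan2(dx, dy)) % 360  # 0 = north, 90 = east
    speed = math.hypot(dx, dy) / dt                   # metres per second
    return heading, speed
```

In a multi-node deployment, each node would attach this vector to the detection payload it POSTs to the central API, letting the fusion layer correlate tracks across sites.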
IV
Phase 4 Threat Assessment
Classify threat probability based on drone type, behavior pattern, proximity to protected assets, and contextual intelligence. Provide actionable threat levels to enable proportional response decisions.
Multi-factor threat scoring algorithm
Behavioral anomaly detection (loitering, approach vectors)
Geofence violation alerting
Tiered threat levels (LOW / MEDIUM / HIGH / CRITICAL)
Integration hooks for countermeasure systems
Operator decision-support dashboards
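The multi-factor scoring and tiered levels above can be sketched as a weighted sum of normalized risk factors mapped onto the four named tiers. The weights, thresholds, and proximity normalization below are illustrative assumptions, not the production algorithm:

```python
# Illustrative weights and tier thresholds, not production values.
WEIGHTS = {"proximity": 0.4, "behavior": 0.3, "geofence": 0.2, "platform": 0.1}
LEVELS = [(0.75, "CRITICAL"), (0.5, "HIGH"), (0.25, "MEDIUM"), (0.0, "LOW")]

def threat_score(proximity_m, loitering, geofence_violation, platform_risk):
    """Combine normalized risk factors into a 0-1 threat score."""
    proximity = max(0.0, 1.0 - proximity_m / 1000.0)  # closer = riskier
    factors = {
        "proximity": proximity,
        "behavior": 1.0 if loitering else 0.0,
        "geofence": 1.0 if geofence_violation else 0.0,
        "platform": platform_risk,  # 0-1, e.g. from Phase 2 classification
    }
    return sum(WEIGHTS[k] * factors[k] for k in WEIGHTS)

def threat_level(score):
    """Map a 0-1 score onto the tiered levels."""
    for threshold, label in LEVELS:
        if score >= threshold:
            return label
    return "LOW"
```

A loitering drone 100 m from a protected asset inside a geofence scores near the top of the scale, while a distant transiting platform stays at LOW, which is what proportional-response decisions need.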
Edge-native. AI-first. Field-proven.
Every component was selected for reliability, cost, and deployability in austere environments.
CV

Computer Vision

YOLOv11 real-time object detection with custom-trained classification models. SAHI tiling for small-object detection at distance.

ML

Transfer Learning

Purpose-built CNNs trained on 53K+ images. Architecture supports rapid retraining on drone-specific datasets with minimal compute.

HW

Edge Compute

Raspberry Pi 5 and Jetson Nano deployment. Full detection pipeline at ~$150/node with quantized models for real-time inference.

API

REST Architecture

Flask-based web portal with REST API for monitoring, configuration, and centralized data reporting across distributed nodes.

DB

Event Storage

SQLite metadata with configurable retention. Annotated detection images, track histories, and audit-ready event logs.
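The event store described above maps naturally onto a single SQLite table of detection metadata with image paths referencing annotated snapshots on disk. A minimal sketch; the table and column names are illustrative, not the shipped schema:

```python
import sqlite3

# Illustrative schema: one row per detection event per node.
SCHEMA = """
CREATE TABLE IF NOT EXISTS detections (
    id          INTEGER PRIMARY KEY,
    ts          REAL NOT NULL,   -- unix timestamp of detection
    node_id     TEXT NOT NULL,   -- which edge node reported it
    label       TEXT NOT NULL,   -- classifier output, e.g. 'multirotor'
    confidence  REAL NOT NULL,   -- 0-1 classifier confidence
    image_path  TEXT             -- annotated snapshot on disk, if kept
)
"""

def open_store(path: str = ":memory:") -> sqlite3.Connection:
    """Open (or create) the detection event store."""
    con = sqlite3.connect(path)
    con.execute(SCHEMA)
    return con
```

Configurable retention then becomes a periodic `DELETE FROM detections WHERE ts < ?` plus cleanup of the referenced image files, keeping the on-node footprint bounded.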

OS

Open Architecture

Python/PyTorch stack. Modular, extensible codebase at 121K+ lines. Designed for integration with existing C2 infrastructure.

Built by engineers. Proven in the field.
JD

John Daughtridge

Co-Founder & Technical Lead

M.Eng. Technical Management candidate at Texas A&M University. Designed, built, and field-deployed the SkyGuard detection platform (now DroneTechVision) — 121K+ lines of code, custom-trained AI models, and multi-site pilot operations. Bridging applied AI research with practical edge deployment.

→ LinkedIn Profile
Computer Vision Edge AI YOLOv11 PyTorch Systems Engineering Texas A&M
JM

Joseph McCoy

Co-Founder & AI Strategy Lead

Microsoft Modern Workplace Architect with deep expertise in cloud infrastructure, AI-driven enterprise solutions, and large-scale technology deployment. Texas A&M graduate bringing years of experience managing complex technical projects from design through delivery — ensuring DroneTechVision scales from prototype to production.

→ LinkedIn Profile
AI Strategy Cloud Architecture Project Management Enterprise IT Microsoft Texas A&M
Ready to discuss partnership
or pilot deployment.
We're actively seeking defense industry partnerships, integration opportunities, and pilot deployment sites.