Real-Time Detection and Classification of Drones, Vehicles, and Humans from Radar Data Using Deep Learning

Ahmet Güney Şenocaklı, S. E. Yüksel

Published 2025 in the International Conference on Image Processing Theory, Tools and Applications

ABSTRACT

This paper proposes and systematically compares four deep-learning architectures for real-time detection and classification of drones, vehicles, and humans using range-Doppler radar data from the RAD-DAR dataset. The proposed methods are (i) a lightweight Convolutional Neural Network (CNN) baseline, (ii) a temporally aware CNN-LSTM network augmented with attention, (iii) an adapted YOLOv8 object detector, and (iv) RT-DETR-Large, an end-to-end Transformer detector tuned for real-time radar streams. All models share an identical preprocessing pipeline (power normalisation and clutter suppression), so that performance differences arise solely from network design. On the held-out RAD-DAR test split, the attention-enhanced CNN-LSTM raises the macro $F_1$ by 0.7 percentage points over the static CNN (95.0% vs. 94.3%), demonstrating the value of temporal context. Moving to detection, YOLOv8 delivers high localisation accuracy with a macro $F_1$ of 98.9% (99.6% precision, 98.2% recall), while RT-DETR sets a new benchmark: 99.3% macro $F_1$, 99.7% precision, and 98.8% recall, consistently above 97% for each class, at over 30 FPS on a single GPU. These results show that Transformer-based detectors can match or exceed their convolutional counterparts across all object categories, offering a robust, real-time solution for security-critical radar-surveillance applications.
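
As a rough illustration of the shared preprocessing step described above, the sketch below normalises the power of a range-Doppler map and notches out near-zero-Doppler clutter. This is a minimal NumPy sketch under assumptions of our own: the dB floor, the min-max scaling, and the notch_half_width parameter are illustrative choices, since the abstract does not specify the paper's exact normalisation or clutter-suppression settings.

    import numpy as np

    def preprocess_rd_map(rd_map, notch_half_width=2):
        """Hypothetical preprocessing for one complex range-Doppler map
        (rows = range bins, columns = Doppler bins).

        1. Power normalisation: convert to dB, then min-max scale to [0, 1].
        2. Clutter suppression: zero the bins around the zero-Doppler column,
           where returns from stationary objects concentrate.
        """
        # Power in dB, with a small floor to avoid log(0).
        power_db = 10.0 * np.log10(np.abs(rd_map) ** 2 + 1e-12)

        # Min-max normalisation to [0, 1] per map.
        power_db -= power_db.min()
        power_db /= power_db.max() + 1e-12

        # Notch out stationary clutter around the zero-Doppler (centre) column.
        centre = rd_map.shape[1] // 2
        power_db[:, centre - notch_half_width : centre + notch_half_width + 1] = 0.0
        return power_db

    # Usage on a synthetic 256 x 128 complex map.
    rd = np.random.randn(256, 128) + 1j * np.random.randn(256, 128)
    x = preprocess_rd_map(rd)
    assert x.shape == (256, 128) and x.min() >= 0.0 and x.max() <= 1.0

Applying one shared pipeline of this kind before every model, as the paper does, is what lets the comparison isolate network design as the only varying factor.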

PUBLICATION RECORD

  • Publication year: 2025
  • Venue: International Conference on Image Processing Theory, Tools and Applications
  • Publication date: 2025-10-13
  • Fields of study: Computer Science, Engineering
  • Source metadata: Semantic Scholar

CLAIMS

  • No claims are published for this paper.

CONCEPTS

  • No concepts are published for this paper.

CITED BY

  • No citing papers are available for this paper.