Object detection is a fundamental task in autonomous systems, including self-driving vehicles, unmanned aerial systems, intelligent manufacturing environments, and robotic bin-picking platforms. Achieving high detection accuracy under strict real-time constraints and limited computational resources remains a central challenge. This study investigates lightweight deep learning architectures optimized for real-time object detection in settings where processing power and memory are constrained. We examine and compare representative compact detection models, including YOLOv4-tiny, MobileNet-SSD, and EfficientDet-D0, focusing on their architectural trade-offs, inference speed, parameter efficiency, and deployment feasibility. The models are evaluated not only on standard performance metrics but also on compatibility with the embedded and edge hardware commonly found in autonomous platforms. The paper further discusses model compression techniques such as quantization and pruning, emphasizing their role in preserving accuracy while reducing model complexity and power consumption. A dedicated case study on robotic bin picking illustrates how lightweight models can be integrated into practical applications, covering end-to-end performance from object localization to real-time decision-making under dynamic and partially structured conditions. Insights into task-specific tuning and deployment strategies are provided to guide future implementations. This work addresses the growing need for efficient vision systems by outlining practical solutions that balance speed, accuracy, and computational demand, thereby supporting the development of responsive, resource-aware autonomous agents.
Keywords: Real-Time Object Detection, Lightweight Deep Learning, Autonomous Systems, YOLOv4-tiny, Model Quantization, Robotic Bin Picking
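The abstract names quantization and pruning only at a high level, so a minimal sketch may help make the compression step concrete. The snippet below is an illustration under stated assumptions, not the paper's actual pipeline: it applies PyTorch's magnitude pruning (torch.nn.utils.prune) and post-training dynamic quantization (torch.quantization.quantize_dynamic) to a hypothetical toy backbone standing in for a compact detector; the layer sizes, the 50% sparsity level, and the int8 target are placeholder choices.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Hypothetical toy backbone standing in for a lightweight detector
# (layer sizes are placeholders, not a model from the paper).
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),
)

# Unstructured magnitude pruning: zero out the 50% smallest-magnitude
# weights in each conv layer, then make the pruning permanent.
for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")

# Post-training dynamic quantization of the linear layer to int8.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# Smoke test on a dummy 8x8 RGB input.
dummy = torch.randn(1, 3, 8, 8)
with torch.no_grad():
    out = quantized(dummy)
print(out.shape)  # torch.Size([1, 10])
```

Note that dynamic quantization here affects only the linear layer; quantizing the convolutional layers would typically require static quantization with calibration data, which this sketch omits for brevity.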