Journal of Scientific Innovation and Advanced Research (JSIAR)

Peer-reviewed | Open Access | Multidisciplinary

Published: November 2025 | Volume: 1, Issue: 8 | Pages: 398-410

Adaptive Multimodal AI Framework for Robust Perception and Accident Avoidance in Autonomous Vehicles

Review Article
Adarsh Gupta1*, Neha1, Laleen1, Danish1, Lakshay1, Mustakim1
1Department of Computer Science and Engineering, Noida International University, Greater Noida, India
*Author for correspondence: Adarsh Gupta
Department of Computer Science and Engineering, Noida International University, Greater Noida, India
E-mail: adarshgupta1524@gmail.com

ABSTRACT

Autonomous vehicles (AVs) represent a transformative advance in intelligent transportation, yet their safe operation under complex and unpredictable driving conditions remains a challenge. Adverse weather, low illumination, sensor noise, and dynamic road environments often degrade the perception accuracy of unimodal systems that depend solely on camera, LiDAR, or radar data. Such single-sensor frameworks struggle with contextual uncertainty, leading to false detections, missed obstacles, and compromised decision-making. To address these limitations, this research introduces an Adaptive Multimodal AI Framework that integrates camera, LiDAR, and radar modalities through a mid-level fusion approach. The system employs an attention-based weighting mechanism that dynamically adjusts the contribution of each modality to the fused representation according to environmental context, preserving perceptual robustness in rain, fog, and night-time scenarios. The proposed model was evaluated on the nuScenes and KITTI benchmark datasets, achieving a mean Average Precision (mAP) of 92.6% and a 44.8% reduction in False Negative Rate (FNR) relative to unimodal detection baselines. Experimental results show more consistent object detection and trajectory prediction, particularly in safety-critical edge cases. Moreover, the attention weights make the sensor fusion decisions interpretable, supporting explainable AI practices. This work advances the dependability of, and human trust in, AVs by providing a context-aware perception pipeline that strengthens safety margins and establishes a scalable foundation for next-generation autonomous driving intelligence.
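As a concrete illustration of the attention-based mid-level fusion summarized above, the following PyTorch sketch shows one way per-modality feature vectors from camera, LiDAR, and radar encoders could be re-weighted per sample. The module structure, feature dimension, and gating network are illustrative assumptions for exposition, not the authors' implementation.

# Minimal sketch of attention-weighted mid-level sensor fusion.
# All names and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Fuse per-modality feature vectors with learned softmax weights."""
    def __init__(self, feat_dim=256):
        super().__init__()
        # One scalar reliability score per modality, conditioned on that
        # modality's own features (a stand-in for environmental context).
        self.score = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, feats):
        # feats: (batch, num_modalities, feat_dim), e.g. camera/LiDAR/radar
        scores = self.score(feats)              # (batch, M, 1)
        weights = torch.softmax(scores, dim=1)  # attention over modalities
        fused = (weights * feats).sum(dim=1)    # (batch, feat_dim)
        return fused, weights.squeeze(-1)       # weights expose the fusion decision

if __name__ == "__main__":
    # Pretend each modality encoder emits a 256-d feature for a batch of 4.
    camera, lidar, radar = (torch.randn(4, 256) for _ in range(3))
    feats = torch.stack([camera, lidar, radar], dim=1)  # (4, 3, 256)
    fused, w = AttentionFusion()(feats)
    print(fused.shape, w[0])  # torch.Size([4, 256]) and per-modality weights

Because the softmax weights are explicit per-modality scalars, they can be logged and inspected directly, which is the property the abstract cites when it links the attention mechanism to explainable AI.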

Keywords: Autonomous Vehicles, Multimodal AI, Sensor Fusion, Deep Learning, Accident Avoidance, Road Safety, Attention Mechanism