Sensor Fusion Using Machine Learning for Robust Object Detection in Adverse Weather Conditions for Self-Driving Cars

Authors

DOI:

https://doi.org/10.22399/ijcesen.3589

Keywords:

Sensor Fusion, Object Detection, Adverse Weather, Machine Learning, Autonomous Vehicles, Deep Learning

Abstract

Autonomous vehicles must navigate safely through a spectrum of environmental conditions, from clear skies to dense fog, torrential rain, and blinding snow. This study investigates a holistic sensor-fusion framework that leverages the complementary strengths of camera, LiDAR, and radar modalities to achieve robust 3D object detection under adverse weather. We begin by characterizing the failure modes of each sensor: vision systems suffer contrast loss and occlusion in precipitation; LiDAR range returns scatter when mist or raindrops intrude; and radar, while inherently resilient to airborne particulates, provides coarser spatial resolution. Building on both classical probabilistic fusion and recent deep-learning paradigms, we propose a multi-level fusion network that integrates raw data, mid-level features, and high-level detection outputs. The architecture employs modality-specific backbones (ResNet for images, VoxelNet for point clouds, and range-Doppler CNNs for radar scans) merged through attention-driven feature fusion. To counteract training bias toward clear weather, we curate and augment a diverse training corpus drawn from nuScenes, the Waymo Open Dataset, Oxford RobotCar (including its radar extension), A*3D, and synthetically fogged and raindrop-augmented KITTI sequences. Extensive experiments show that the fusion model retains over 80% of its clear-weather detection performance in heavy fog and rain, yielding a mean Average Precision (mAP) improvement of 25–40% over camera-only or LiDAR-only baselines. Ablation studies quantify the incremental contribution of each sensor combination, revealing that LiDAR+radar fusion counters extreme particulate interference, while camera+LiDAR excels at fine-grained classification. Against state-of-the-art fusion methods, our approach sets new benchmarks on adverse-weather subsets, reducing missed detections by up to 30%. We also discuss the scalability and modularity of the framework, emphasizing its extensibility to emerging sensor modalities such as thermal imaging and 4D radar; the flexible backbone architecture allows rapid integration of future sensors and fusion algorithms. By systematically dissecting sensor behaviors, introducing data augmentation for inclement conditions, and designing a flexible fusion backbone, this work advances sensor-fusion methodology and lays practical groundwork for deploying safer, more reliable self-driving vehicles capable of operating across the full spectrum of environmental challenges.
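For illustration, the sketch below shows one way the attention-driven feature fusion described in the abstract could be realized. It is a minimal, hypothetical PyTorch example, not the authors' released code: the class name AttentionFusion, the shared bird's-eye-view (BEV) grid, and all tensor shapes are assumptions made here for clarity. Each modality-specific backbone (e.g., ResNet for camera, VoxelNet for LiDAR, a range-Doppler CNN for radar) is assumed to have already produced a feature map with matching channel width on a common BEV grid; the module then learns per-cell, per-modality weights so that a degraded sensor can be suppressed locally.

```python
# Hypothetical illustration only: a minimal attention-driven fusion head in PyTorch.
# AttentionFusion, the shared BEV grid, and all shapes are assumptions, not the
# authors' released implementation.
from typing import List

import torch
import torch.nn as nn


class AttentionFusion(nn.Module):
    """Fuse camera, LiDAR, and radar feature maps with learned per-modality weights.

    Each input is assumed to be a (B, C, H, W) feature map from its own backbone
    (e.g. ResNet, VoxelNet, a range-Doppler CNN), already projected onto a common
    bird's-eye-view grid. A softmax over modalities at every grid cell lets the
    network down-weight a degraded sensor (e.g. the camera branch in dense fog).
    """

    def __init__(self, channels: int, num_modalities: int = 3):
        super().__init__()
        self.num_modalities = num_modalities
        # 1x1 conv scores each modality's feature map at every BEV cell.
        self.score = nn.Conv2d(channels, 1, kernel_size=1)
        # Light refinement of the fused map before it feeds a detection head.
        self.out = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, feats: List[torch.Tensor]) -> torch.Tensor:
        assert len(feats) == self.num_modalities
        stacked = torch.stack(feats, dim=1)                # (B, M, C, H, W)
        b, m, c, h, w = stacked.shape
        scores = self.score(stacked.view(b * m, c, h, w))  # (B*M, 1, H, W)
        weights = torch.softmax(scores.view(b, m, 1, h, w), dim=1)
        fused = (weights * stacked).sum(dim=1)             # (B, C, H, W)
        return self.out(fused)


if __name__ == "__main__":
    # Toy check with random tensors standing in for backbone outputs.
    cam, lidar, radar = (torch.randn(2, 64, 128, 128) for _ in range(3))
    print(AttentionFusion(channels=64)([cam, lidar, radar]).shape)  # [2, 64, 128, 128]
```

In heavy fog, for example, such a module can shift its softmax weights away from the camera branch toward LiDAR and radar on a per-cell basis, which is consistent with the sensor degradation analysis summarized in the abstract.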

Author Biographies

Krunal Panchal, Research Scholar

University of Massachusetts, Boston, USA

Arpan Shaileshbhai Korat

Jersey City, NJ, USA

Saurav Rajanikant Pathak

Bentonville, AR, USA

References

[1] Chen, L., Li, J., & Wang, Y. (2023). Adaptive Sensor Fusion for Autonomous Vehicles in Adverse Weather Conditions.

[2] Zhang, H., & Kumar, S. (2022). Deep Learning-Based Multi-Modal Sensor Fusion for Robust Object Detection.

[3] Li, X., & Zhao, Q. (2021). A Survey on Sensor Fusion Techniques for Autonomous Driving in Adverse Weather.

[4] Wang, J., & Lee, D. (2024). Real-Time Sensor Fusion with Edge Computing for Autonomous Vehicles.

[5] Kim, S., Park, J., & Choi, H. (2023). Multi-Level Fusion Networks for Robust Object Detection in Fog and Rain.

[6] Pang, S., Morris, D., & Radha, H. (2020). CLOCs: Camera-LiDAR Object Candidates Fusion for 3D Object Detection.

[7] Yoon, J., Kim, D., & Kim, J. (2022). WeatherNet: Deep Learning-Based Weather Classification for Autonomous Driving. https://doi.org/10.1109/LRA.2022.3146860

[8] Garg, S., & Kumar, M. (2021). Robust Object Detection in Adverse Weather for Autonomous Vehicles: A Review.

[9] Liu, Z., Wu, Z., & Tóth, R. (2023). Transformer-Based Sensor Fusion for Autonomous Vehicles.

[10] He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep Residual Learning for Image Recognition.

[11] Redmon, J., & Farhadi, A. (2018). YOLOv3: An Incremental Improvement. arXiv preprint arXiv:1804.02767.

[12] Bochkovskiy, A., Wang, C.-Y., & Liao, H.-Y. M. (2020). YOLOv4: Optimal Speed and Accuracy of Object Detection. https://arxiv.org/abs/2004.10934

[13] Wang, C.-Y., Bochkovskiy, A., & Liao, H.-Y. M. (2022). YOLOv7: Trainable Bag-of-Freebies Sets New State-of-the-Art for Real-Time Object Detectors.

[14] Zhou, Y., & Tuzel, O. (2018). VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection.

[15] Philion, J., & Fidler, S. (2020). Lift, Splat, Shoot: Encoding Images from Arbitrary Camera Rigs by Implicitly Unprojecting to 3D. ECCV.

[16] Arnold, E., et al. (2019). A Survey on 3D Object Detection Methods for Autonomous Driving Applications. IEEE Transactions on Intelligent Transportation Systems.

[17] Wortsman, M., et al. (2021). Robustness to Out-of-Distribution Inputs via Unsupervised Pre-Training. NeurIPS.

[18] Gandhi, V. C., Siyal, D. K., Patel, S. P., & Shah, A. V. (2024). A Survey: Emotion Detection Using Facial Recognition Using Convolutional Neural Network (CNN) and Viola–Jones Algorithm. In Modeling and Optimization …

Published

2025-07-28

How to Cite

Krunal Panchal, Arpan Shaileshbhai Korat, & Saurav Rajanikant Pathak. (2025). Sensor Fusion Using Machine Learning for Robust Object Detection in Adverse Weather Conditions for Self-Driving Cars. International Journal of Computational and Experimental Science and Engineering, 11(3). https://doi.org/10.22399/ijcesen.3589

Issue

Vol. 11 No. 3 (2025)

Section

Research Article