An Intelligent Vehicle Detection and Recognition Framework for Traffic Cyber-Physical Systems
DOI: https://doi.org/10.22399/ijcesen.3180

Keywords: Vehicle Detection, Vehicle Recognition, Traffic Monitoring, Cyber-Physical Systems (CPS), Intelligent Transportation Systems (ITS)

Abstract
The rapid growth of urban traffic has necessitated the integration of intelligent systems within Cyber-Physical Systems (CPS) for real-time monitoring and management. This paper presents an efficient approach to vehicle detection and recognition tailored for traffic CPS, aimed at enhancing situational awareness, safety, and traffic flow optimization. Leveraging a combination of computer vision techniques and deep learning models, the proposed framework accurately detects vehicles under diverse environmental conditions and recognizes their types and license plates. The system incorporates real-time image acquisition from roadside cameras, pre-processing pipelines, and a convolutional neural network (CNN)-based architecture for robust vehicle classification and identification. Experimental results demonstrate high accuracy and reliability across a range of traffic scenarios, including low-light and high-occlusion conditions. The proposed model is scalable, suited to edge-based deployment, and contributes to the development of responsive, intelligent traffic CPS infrastructure. This research lays the foundation for more comprehensive intelligent transportation systems, facilitating autonomous decision-making and efficient traffic control.
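To make the frame-to-label path the abstract describes concrete (camera frame, pre-processing, CNN-based classification), the following is a minimal illustrative sketch in PyTorch. It is not the authors' implementation: the network (TinyVehicleCNN), the input size, and the VEHICLE_CLASSES label set are hypothetical placeholders standing in for the unspecified architecture and training details.

```python
# Illustrative frame -> pre-processing -> CNN classification pipeline.
# All names here (TinyVehicleCNN, VEHICLE_CLASSES, classify_frame) are
# hypothetical; the paper's actual backbone and label set are not given.
import torch
import torch.nn as nn
from torchvision import transforms

VEHICLE_CLASSES = ["car", "bus", "truck", "motorcycle"]  # assumed label set

# Pre-processing: resize camera frames to a fixed input size and normalize.
preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((128, 128)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

class TinyVehicleCNN(nn.Module):
    """Small CNN classifier standing in for the paper's (unspecified) backbone."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 128 -> 64
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 64 -> 32
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

@torch.no_grad()
def classify_frame(model: nn.Module, frame) -> str:
    """frame: HxWx3 uint8 array, e.g. one image from a roadside camera."""
    x = preprocess(frame).unsqueeze(0)       # add batch dimension
    probs = model(x).softmax(dim=1)
    return VEHICLE_CLASSES[int(probs.argmax(dim=1))]

if __name__ == "__main__":
    import numpy as np
    model = TinyVehicleCNN(num_classes=len(VEHICLE_CLASSES)).eval()
    dummy_frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in camera frame
    print(classify_frame(model, dummy_frame))
```

In a deployed traffic CPS, classify_frame would run on frames pulled from a live roadside camera stream, with trained weights loaded in place of the randomly initialized model shown here; license-plate recognition would be a separate downstream stage.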
License
Copyright (c) 2025 International Journal of Computational and Experimental Science and Engineering

This work is licensed under a Creative Commons Attribution 4.0 International License.