Application of Convolutional Neural Networks and Rolling Guidance Filter in Image Fusion for Detecting Brain Tumors

Authors

  • S. Karthikeyan, Kalasalingam Academy of Research and Education, Tamil Nadu, India
  • P. Velmurugadass

DOI:

https://doi.org/10.22399/ijcesen.621

Keywords:

Guided filter, Convolutional neural network, Image fusion, Rolling guidance filter, Quantitative evaluation

Abstract

Medical image fusion is the technique of integrating images from several medical imaging modalities without causing distortion or information loss. By preserving every feature in the fused image, it increases the value of medical imaging for the diagnosis and treatment of medical conditions. This paper proposes a novel fusion mechanism for multimodal image data sets. In the first step, each source image is smoothed with a cross-guided filter. The guided-filter output is then further smoothed with a rolling guidance filter to remove fine structures. The details (high-frequency components) of each source image are extracted by subtracting the rolling guidance filter output from the corresponding source image. These details are fed to a convolutional neural network to obtain decision maps. Finally, the source images are fused according to the decision maps using the maximum combination rule. We assessed the performance of the proposed method on several pairs of publicly available medical image datasets. According to the quantitative evaluation, the proposed fusion strategy improves the average IE by 12.4%, MI by 41.8%, SF by 21.4%, SD by 22.81%, MSSIM by 31.1%, and  by 39% compared to existing methods, which makes it suitable for accurate diagnosis in the medical field.
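The pipeline described above can be illustrated with a minimal sketch. The snippet below is an assumption-laden reading of the abstract, not the paper's implementation: it uses opencv-contrib's cv2.ximgproc.guidedFilter and cv2.ximgproc.rollingGuidanceFilter for the two smoothing stages, replaces the trained CNN with a hypothetical cnn_decision_map placeholder based on local detail activity, and uses illustrative filter parameters (radius, eps, sigmaColor, sigmaSpace, iteration count). It also assumes the two source images are co-registered and of equal size.

```python
# Minimal sketch of the fusion pipeline outlined in the abstract.
# Assumes opencv-contrib (cv2.ximgproc); the CNN that produces the decision
# map is stubbed out with a hypothetical `cnn_decision_map` placeholder,
# since the trained network from the paper is not reproduced here.
import cv2
import numpy as np

def cnn_decision_map(detail_a, detail_b):
    """Hypothetical stand-in for the paper's CNN: returns a binary map that
    selects, per pixel, whichever detail layer has higher local activity."""
    act_a = cv2.GaussianBlur(np.abs(detail_a), (7, 7), 0)
    act_b = cv2.GaussianBlur(np.abs(detail_b), (7, 7), 0)
    return (act_a >= act_b).astype(np.float32)

def fuse(src_a, src_b, radius=8, eps=0.04, rgf_iter=4):
    # Co-registered, same-size grayscale inputs assumed; work in [0, 1] floats.
    a = src_a.astype(np.float32) / 255.0
    b = src_b.astype(np.float32) / 255.0

    # Step 1: cross-guided filtering (each image smoothed using the other as guide).
    base_a = cv2.ximgproc.guidedFilter(guide=b, src=a, radius=radius, eps=eps)
    base_b = cv2.ximgproc.guidedFilter(guide=a, src=b, radius=radius, eps=eps)

    # Step 2: rolling guidance filtering removes remaining fine structures.
    smooth_a = cv2.ximgproc.rollingGuidanceFilter(
        base_a, d=-1, sigmaColor=0.05, sigmaSpace=3, numOfIterations=rgf_iter)
    smooth_b = cv2.ximgproc.rollingGuidanceFilter(
        base_b, d=-1, sigmaColor=0.05, sigmaSpace=3, numOfIterations=rgf_iter)

    # Step 3: detail (high-frequency) layers = source minus smoothed output.
    detail_a = a - smooth_a
    detail_b = b - smooth_b

    # Step 4: decision map from the (stubbed) CNN, then max-rule fusion:
    # each output pixel is taken from the source with the stronger detail response.
    dmap = cnn_decision_map(detail_a, detail_b)
    fused = dmap * a + (1.0 - dmap) * b
    return np.clip(fused * 255.0, 0, 255).astype(np.uint8)

# Usage (illustrative file names):
# fused = fuse(cv2.imread("ct.png", 0), cv2.imread("mri.png", 0))
```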

Published

2025-02-04

How to Cite

S. Karthikeyan, & P. Velmurugadass. (2025). Application of Convolutional Neural Networks and Rolling Guidance Filter in Image Fusion for Detecting Brain Tumors. International Journal of Computational and Experimental Science and Engineering, 11(1). https://doi.org/10.22399/ijcesen.621

Issue

Vol. 11 No. 1 (2025)
Section

Research Article