Human Activity-Based Machine Learning and Deep Learning Techniques
DOI: https://doi.org/10.22399/ijcesen.1368

Keywords: Human Activity Recognition, Machine Learning, Deep Learning, Sensors, Healthcare, Activity Monitoring

Abstract
Human activity recognition (HAR) has been a hot research topic in recent years, with studies differing in data types, data processing, feature description, and other respects. HAR is a fundamental component of intelligent health monitoring systems, wherein the underlying intelligence of the services is derived from and enhanced by sensor data. Researchers have proposed numerous HAR systems designed to infer physical activities from smartphone sensor readings. This review synthesizes current methodologies for smartphone-based HAR with a focus on healthcare applications. For this purpose, we systematically searched for peer-reviewed articles on the use of smartphones for HAR, collecting information on smartphone body placement, sensors, the types of physical activities examined, and the data transformation methods and classification frameworks employed for activity recognition. From the selected articles we delineated the diverse methodologies used for data gathering, preprocessing, feature extraction, and activity classification, highlighting the predominant practices and their alternatives. We conclude that smartphones are well suited to HAR research within the health sciences. Future studies should prioritize improving the quality of the data gathered, addressing data gaps, incorporating a more diverse array of participants and activities, relaxing phone placement requirements, providing comprehensive documentation for study participants, and sharing the source code of the employed methods and algorithms in order to achieve population-level impact.
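To make the surveyed pipeline concrete (data gathering, preprocessing via windowing, feature extraction, and activity classification), the sketch below shows a minimal smartphone-accelerometer HAR workflow in Python. It is illustrative only, not the method of any reviewed study: the window length, the hand-crafted feature set, the random-forest classifier, and the synthetic `accel` array standing in for real tri-axial recordings are all assumptions chosen for brevity.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical recording: tri-axial accelerometer, e.g. 50 Hz (samples x 3 axes),
# with one activity label per sample. A real study would load sensor logs here.
rng = np.random.default_rng(0)
accel = rng.normal(size=(10_000, 3))
labels = rng.integers(0, 3, size=10_000)  # e.g. 0=walking, 1=sitting, 2=stairs

WIN = 128   # assumed window length (~2.5 s at 50 Hz)
STEP = 64   # 50% overlap between consecutive windows

def extract_features(window: np.ndarray) -> np.ndarray:
    """Hand-crafted per-window features: mean, std, min, max per axis, plus
    mean signal magnitude. These are common choices in the smartphone-HAR
    literature, not a specific paper's feature set."""
    mag = np.linalg.norm(window, axis=1)
    return np.concatenate([
        window.mean(axis=0), window.std(axis=0),
        window.min(axis=0), window.max(axis=0),
        [mag.mean()],
    ])

# Sliding-window segmentation; each window takes its majority-vote label.
X, y = [], []
for start in range(0, len(accel) - WIN + 1, STEP):
    seg = accel[start:start + WIN]
    X.append(extract_features(seg))
    y.append(np.bincount(labels[start:start + WIN]).argmax())
X, y = np.asarray(X), np.asarray(y)

# Train and evaluate a simple classifier on the windowed features.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```

Deep-learning variants surveyed in the review replace the hand-crafted `extract_features` step with learned representations (e.g., CNN or LSTM layers over the raw windows), but the segmentation-then-classification structure is the same.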
License
Copyright (c) 2025 International Journal of Computational and Experimental Science and Engineering

This work is licensed under a Creative Commons Attribution 4.0 International License.