Real-time Emotion Recognition Using the MobileNetV2 Architecture

  • Triyani Hendrawati, Universitas Padjadjaran
  • Anindya Apriliyanti Pravitasari, Universitas Padjadjaran
Keywords: CNN, deep learning, facial recognition, MobileNetV2, TensorFlow

Abstract

Facial recognition technology is advancing quickly and is used extensively across industries, including banking, business, security systems, and human-computer interaction. However, existing facial recognition models face significant challenges in real-time emotion classification, particularly in computational efficiency and in adaptability to varying environmental conditions such as lighting and occlusion. To address these challenges, this research proposes a lightweight yet effective deep learning model based on MobileNetV2 that predicts human facial emotions from a camera feed in real time. The model is trained on the FER-2013 dataset, which covers seven emotion classes: anger, disgust, fear, joy, sadness, surprise, and neutral. The methodology combines deep learning-based feature extraction, convolutional neural networks (CNN), and optimization techniques that enhance real-time performance on resource-constrained devices. Experimental results show that the proposed model achieves an accuracy of 94.23% while ensuring robust real-time emotion classification at a significantly reduced computational cost. The model is also validated on real-world camera data, confirming its effectiveness beyond static datasets and its applicability in practical real-time scenarios. These findings contribute to efficient emotion recognition systems and enable their deployment in interactive AI applications, mental health monitoring, and smart environments.
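
The article does not publish its source code, so the sketch below only illustrates the kind of pipeline the abstract describes: a MobileNetV2 backbone with a seven-class softmax head in TensorFlow/Keras, followed by a webcam loop that detects faces and classifies their expressions. The 96x96 RGB input size, the frozen ImageNet-pretrained backbone, the Haar-cascade face detector, and all layer and training choices are illustrative assumptions, not details taken from the paper.

# Minimal sketch of a MobileNetV2-based emotion classifier, assuming
# TensorFlow/Keras, OpenCV, and FER-2013 faces resized to 96x96 RGB.
# Input size, detector, and head design are assumptions, not the paper's.
import cv2
import numpy as np
import tensorflow as tf

EMOTIONS = ["anger", "disgust", "fear", "joy", "sadness", "surprise", "neutral"]

def build_model(input_shape=(96, 96, 3), num_classes=7):
    # MobileNetV2 backbone with ImageNet weights; the original top is
    # replaced by a 7-way softmax for the FER-2013 emotion classes.
    base = tf.keras.applications.MobileNetV2(
        input_shape=input_shape, include_top=False, weights="imagenet")
    base.trainable = False  # freeze the backbone; fine-tune later if desired
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def run_webcam(model):
    # Haar-cascade face detection followed by per-face emotion prediction,
    # roughly matching the real-time camera evaluation the abstract mentions.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(0)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
            face = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2RGB)
            face = cv2.resize(face, (96, 96)).astype("float32")
            face = tf.keras.applications.mobilenet_v2.preprocess_input(face)
            probs = model.predict(face[np.newaxis], verbose=0)
            label = EMOTIONS[int(np.argmax(probs))]
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.putText(frame, label, (x, y - 10),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)
        cv2.imshow("emotion", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()

In a transfer-learning setup like this, the frozen backbone keeps per-frame inference cheap, which is what makes MobileNetV2 attractive on resource-constrained devices; the 94.23% accuracy reported in the paper reflects its own training configuration, not this sketch.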



Published
2025-07-17
How to Cite
Hendrawati, T., & Apriliyanti Pravitasari, A. (2025). Real-time Emotion Recognition Using the MobileNetV2 Architecture. Jurnal RESTI (Rekayasa Sistem Dan Teknologi Informasi), 9(4), 714-720. https://doi.org/10.29207/resti.v9i4.6158
Section
Computer Science Applications