An Optimal Solution to the Overfitting and Underfitting Problem of Healthcare Machine Learning Models

  • Anil Kumar Prajapati, Institute of Computer Science, India
  • Umesh Kumar Singh, Institute of Computer Science, India
Keywords: Machine learning, Underfitting, Overfitting, Bias-Variance, Cross-validation, Data Splitting, Parameter Tuning, Loss Function


In the current technological era, artificial intelligence is becoming increasingly popular. Machine learning, a branch of AI, is taking charge in every field, such as healthcare, the stock market, automation, robotics, and image processing. In the current scenario, machine learning and deep learning are becoming very popular in medical science for disease prediction, and much research is underway on disease-prediction models built with machine learning. To ensure the performance and accuracy of a machine learning model, several basic issues must be addressed during training so that the model works efficiently, such as model selection, parameter tuning, dataset splitting, cross-validation, the bias-variance tradeoff, overfitting, and underfitting. Underfitting and overfitting are the two main issues that affect machine learning models. This research paper focuses on minimizing and preventing overfitting and underfitting in machine learning models.
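Among the remedies the abstract lists, dataset splitting and cross-validation are the most direct diagnostics: a large gap between training and validation performance indicates overfitting, while poor performance on both indicates underfitting. As a minimal sketch (in plain Python, not the paper's own implementation), k-fold cross-validation can be expressed as an index generator that places every sample in the validation set exactly once:

```python
from typing import Iterator, List, Tuple

def k_fold_indices(n_samples: int, k: int = 5) -> Iterator[Tuple[List[int], List[int]]]:
    """Yield (train_indices, val_indices) pairs for k-fold cross-validation.

    Each of the k folds serves as the validation set exactly once,
    with the remaining samples used for training.
    """
    indices = list(range(n_samples))
    # Distribute any remainder across the first (n_samples % k) folds.
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        val_idx = indices[start:start + size]
        train_idx = indices[:start] + indices[start + size:]
        yield train_idx, val_idx
        start += size
```

Averaging a model's score over the k validation folds gives a more stable performance estimate than a single train/test split, which is why cross-validation is commonly paired with parameter tuning when diagnosing over- and underfitting.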




How to Cite
Prajapati, A. K., & Singh, U. K. (2023). An Optimal Solution to the Overfitting and Underfitting Problem of Healthcare Machine Learning Models. Journal of Systems Engineering and Information Technology (JOSEIT), 2(2), 77-84.