Comparison of Transfer Learning Model Performance for Breast Cancer Type Classification in Mammogram Images
Abstract
Globally, breast cancer is the most common cancer among women. Early detection is crucial because it greatly increases the chance of cure, and mammography screening makes such early detection possible. Research on computer-assisted breast cancer diagnosis is therefore gaining increasing attention. Breast tumors occur in two forms, benign and malignant. This study leverages advances in deep learning (DL) for medical imaging and applies several transfer learning models to classify breast tissue in mammograms as malignant, benign, or normal. We conducted a thorough comparative analysis of eight widely used pre-trained CNN architectures (VGG16, ResNet50, AlexNet, MobileNetV2, ShuffleNet, EfficientNet-b0, EfficientNet-b1, and EfficientNet-b2) for breast cancer classification. The data were divided into training, testing, and validation sets, and the proposed architecture was evaluated on the publicly accessible mini-DDSM dataset using classification accuracy (Acc) as the performance metric. Based on the test results, the best performance was obtained with EfficientNet-b2, which achieved an accuracy of 94% on the training data and 98% on the test data for mammogram images.
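To make the transfer learning setup concrete, the sketch below shows how a pre-trained EfficientNet-b2 can be fine-tuned for three-class mammogram classification (benign, malignant, normal). It is a minimal illustration rather than the authors' exact pipeline: the torchvision model weights, the mini_ddsm/ directory layout, the image size, batch size, learning rate, and epoch count are all assumptions made for the example.

```python
# Minimal sketch (assumed setup, not the authors' exact pipeline): fine-tune a
# pre-trained EfficientNet-b2 from torchvision for 3-class mammogram
# classification (benign / malignant / normal) and report accuracy (Acc).
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Standard ImageNet preprocessing; grayscale mammograms are loaded as RGB
# (3 replicated channels) by ImageFolder.
preprocess = transforms.Compose([
    transforms.Resize((260, 260)),          # native EfficientNet-b2 input size
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Hypothetical directory layout: mini_ddsm/{train,val}/{benign,malignant,normal}/
train_set = datasets.ImageFolder("mini_ddsm/train", transform=preprocess)
val_set = datasets.ImageFolder("mini_ddsm/val", transform=preprocess)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)
val_loader = torch.utils.data.DataLoader(val_set, batch_size=32)

# Load ImageNet weights and replace the classifier head with a 3-class output.
model = models.efficientnet_b2(weights=models.EfficientNet_B2_Weights.IMAGENET1K_V1)
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 3)
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # illustrative values

for epoch in range(10):                      # illustrative epoch count
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

    # Validation accuracy (Acc), the metric used for comparison in the study.
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for images, labels in val_loader:
            images, labels = images.to(device), labels.to(device)
            preds = model(images).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.size(0)
    print(f"epoch {epoch + 1}: val accuracy = {correct / total:.3f}")
```

The same loop applies unchanged to the other seven backbones (VGG16, ResNet50, AlexNet, MobileNetV2, ShuffleNet, EfficientNet-b0/b1): only the model constructor and its classifier head need to be swapped, which is what makes a side-by-side accuracy comparison straightforward.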