Classification of Red Foxes: Logistic Regression and SVM with VGG-16, VGG-19, and Inception V3
Abstract
Deep learning models achieve high accuracy in image classification. Distinguishing among sources of red fox images (authentic photographs, game-captured images, hand-drawn illustrations, and AI-generated images) raises important considerations of realism, texture, and style. This study evaluates three deep learning architectures, Inception V3, VGG-16, and VGG-19, on red fox images. Silhouette graphs, Multidimensional Scaling (MDS), and t-Distributed Stochastic Neighbor Embedding (t-SNE) are used to assess clustering and classification quality, while Support Vector Machines (SVM) and Logistic Regression are used to compute the Area Under the Curve (AUC), Classification Accuracy (CA), and Mean Squared Error (MSE). The MDS and t-SNE plots show that all three deep learning models can separate the image categories. For game-captured images, VGG-16 and VGG-19 perform strongly, with silhouette scores of 0.398 and 0.315, respectively, while Inception V3 achieves a silhouette score of 0.244 on AI-generated images. The study also examines how refining decision boundaries for overlapping categories improves the classification accuracy of logistic regression and SVM. The findings highlight the challenges posed by heterogeneous datasets and suggest that combining deep learning feature extractors with machine learning classifiers, such as logistic regression and SVM, may improve classification accuracy.
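The pipeline the abstract describes (deep features fed into logistic regression and SVM, scored with AUC, CA, MSE, and a silhouette score) can be sketched as follows. This is a minimal illustration only: the 512-dimensional feature vectors and the two-class setup are synthetic stand-ins for the VGG-16/VGG-19/Inception V3 embeddings and image categories used in the study.

```python
# Minimal sketch: classify pre-extracted deep features with Logistic
# Regression and SVM, then report AUC, CA, MSE, and a silhouette score.
# The features below are synthetic stand-ins; in the study they would be
# embeddings taken from the networks' penultimate layers.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import (roc_auc_score, accuracy_score,
                             mean_squared_error, silhouette_score)

rng = np.random.default_rng(0)
# Two hypothetical classes (e.g. "photograph" vs "AI-generated"),
# 200 samples each, 512-D feature vectors with a small mean shift
X = np.vstack([rng.normal(0.0, 1.0, (200, 512)),
               rng.normal(0.8, 1.0, (200, 512))])
y = np.array([0] * 200 + [1] * 200)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

results = {}
for name, clf in [("logreg", LogisticRegression(max_iter=1000)),
                  ("svm", SVC(kernel="linear", probability=True))]:
    clf.fit(X_tr, y_tr)
    proba = clf.predict_proba(X_te)[:, 1]   # class-1 probabilities
    pred = clf.predict(X_te)
    results[name] = {
        "AUC": roc_auc_score(y_te, proba),       # ranking quality
        "CA": accuracy_score(y_te, pred),        # classification accuracy
        "MSE": mean_squared_error(y_te, proba),  # Brier-style error
    }

# Silhouette score: how well the feature space separates the labels
sil = silhouette_score(X, y)
print(results, round(sil, 3))
```

Feature extraction itself (e.g. via a pretrained VGG-16 with its classification head removed) would replace the synthetic `X` above; the classifier and metric steps stay the same.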
Copyright (c) 2025 Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi)

This work is licensed under a Creative Commons Attribution 4.0 International License.
Copyright in each article belongs to the author
- The author acknowledges that Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) is the first publisher of the work, under a Creative Commons Attribution 4.0 International License.
- Authors may enter into separate, additional non-exclusive distribution arrangements for the published version of the manuscript (e.g., depositing it in an institutional repository or publishing it in a book), with an acknowledgment that it was first published in Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi).