Classification of Food Objects Using Deep Convolutional Neural Network Using Transfer Learning


Author(s)

Dipta Gomes 1,*

1. University of Ulster, Belfast, Northern Ireland, UK

* Corresponding author.

DOI: https://doi.org/10.5815/ijeme.2024.02.05

Received: 17 Jun. 2023 / Revised: 4 Aug. 2023 / Accepted: 17 Sep. 2023 / Published: 8 Apr. 2024

Index Terms

Food classification, Deep Learning, Convolutional Neural Network, Data Augmentation

Abstract

With advances in Deep Learning, Convolutional Neural Networks are increasingly applied to image-based food classification, since recognizing food ingredients is important both for understanding eating habits and for reducing food waste. This research builds on previous work by clearly illustrating deep learning approaches and showing how classification accuracy can be maximized, yielding a more robust framework for food ingredient classification. A fine-tuned model based on the Xception Convolutional Neural Network, trained with transfer learning, is proposed and achieves a promising accuracy of 95.20%, indicating strong potential for accurately classifying food objects with the Xception deep learning model. This higher accuracy opens the door to further research on identifying new types of food objects through a robust approach. The main contribution of the research is improved fine-tuning of features for food classification. The dataset used is the Food-101 dataset, which contains images of 101 classes of food objects.
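For illustration, the following is a minimal Keras sketch of the kind of transfer-learning setup the abstract describes: an ImageNet-pre-trained Xception backbone, light data augmentation, and a new 101-way classifier head for Food-101. This is not the author's released code; the image size, augmentation settings, dropout rate, and optimizer settings are assumptions chosen only to make the example runnable.

# Minimal sketch (assumed hyperparameters, not the paper's exact configuration):
# transfer learning with an Xception backbone for 101-class food classification.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 101          # Food-101 has 101 food categories
IMG_SIZE = (299, 299)      # Xception's default input resolution

# Light data augmentation, in line with the paper's "Data Augmentation" index term.
augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])

# Xception backbone pre-trained on ImageNet, without its original classifier head.
base = tf.keras.applications.Xception(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,)
)
base.trainable = False     # freeze the backbone for the initial transfer-learning stage

inputs = layers.Input(shape=IMG_SIZE + (3,))
x = augment(inputs)
x = tf.keras.applications.xception.preprocess_input(x)
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.3)(x)                      # assumed regularization
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Fine-tuning stage: unfreeze the backbone and continue training with a lower learning rate.
# base.trainable = True
# model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
#               loss="sparse_categorical_crossentropy", metrics=["accuracy"])

Training the new head first with the backbone frozen, then unfreezing for fine-tuning at a smaller learning rate, is the standard two-stage transfer-learning recipe the abstract alludes to; the exact schedule used in the paper may differ.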

Cite This Paper

Dipta Gomes, "Classification of Food Objects Using Deep Convolutional Neural Network Using Transfer Learning", International Journal of Education and Management Engineering (IJEME), Vol.14, No.2, pp. 53-60, 2024. DOI:10.5815/ijeme.2024.02.05
