A comprehensive survey of image-based food recognition and volume estimation methods for dietary assessment

Document Type

Article

Publication Date

12-1-2021

Abstract

Dietary studies have shown that dietary problems such as obesity are associated with other chronic conditions, including hypertension, irregular blood sugar levels, and an increased risk of heart attack. The primary causes of these problems are poor lifestyle choices and unhealthy dietary habits, which can be managed using interactive mHealth apps. However, traditional dietary monitoring systems that rely on manual food logging suffer from imprecision, underreporting, time consumption, and low adherence. Recent dietary monitoring systems tackle these challenges by automatically assessing dietary intake through machine learning methods. This survey discusses the best-performing methodologies developed so far for automatic food recognition and volume estimation. First, the paper presents the rationale for visual-based methods of food recognition. The core of the study is then the presentation, discussion, and evaluation of these methods based on popular food image databases. In this context, the study also discusses the mobile applications that implement these methods for automatic food logging. Our findings indicate that around 66.7% of the surveyed studies use visual features from deep neural networks for food recognition. Similarly, all surveyed studies employed a variant of convolutional neural networks (CNNs) for ingredient recognition, reflecting recent research interest in deep learning. Finally, the survey concludes with a discussion of potential applications of food image analysis, existing research gaps, and open issues in this research area. Learning from unlabeled image datasets in an unsupervised manner, catastrophic forgetting during continual learning, and improving model transparency using explainable AI are potential areas of interest for future studies.
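For illustration only (this sketch is not taken from the surveyed paper): a minimal example of the CNN-based recognition approach the abstract refers to, fine-tuning an ImageNet-pretrained ResNet-50 as a food classifier with PyTorch/torchvision. The class count (101, as in Food-101), the image path, and the predict helper are assumptions made for the example.

# Illustrative sketch: a pretrained CNN backbone repurposed as a food classifier.
# The dataset, class count, and helper function are placeholders.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

NUM_FOOD_CLASSES = 101  # e.g., Food-101; placeholder value

# Load an ImageNet-pretrained ResNet-50 and replace its final layer
# with a classifier sized for the food dataset.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_FOOD_CLASSES)

# Standard ImageNet preprocessing for the pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def predict(image_path: str) -> int:
    """Return the predicted food-class index for a single image."""
    img = Image.open(image_path).convert("RGB")
    x = preprocess(img).unsqueeze(0)  # add batch dimension
    backbone.eval()
    with torch.no_grad():
        logits = backbone(x)
    return int(logits.argmax(dim=1).item())

In practice the final layer (or the whole network) would be fine-tuned on a labeled food image dataset before prediction; this snippet only shows the structure of such a pipeline.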

Keywords

Food recognition, Feature extraction, Automatic diet monitoring, Image analysis, Volume estimation, Interactive segmentation, Food datasets

Divisions

fsktm

Funders

UM Partnership [RK012-2019]; International Collaboration Fund for the project Developmental Cognitive Robot with Continual Lifelong Learning [IF0318M1006], MESTECC, Malaysia; ONRG grant [(ONRG-NICOP-N62909-18-1-2086)/IF017-2018], Office of Naval Research Global, UK

Publication Title

Healthcare

Volume

9

Issue

12

Publisher

MDPI

Publisher Location

St. Alban-Anlage 66, CH-4052 Basel, Switzerland
