2022 2nd International Conference on Big Data Analysis and Computer Science (BDACS2022)

Prof. Lei Meng

Shandong University, China


Title:

Learning with Multimodal Interactions for Visually-Aware Diet Management


Abstract:

Visual food logging, usually embedded in mobile phone apps, is an emerging tool for diet management. It allows users to upload photos of their daily food intake and provides personalized services that encourage them to maintain a healthy eating style. To this end, food recognition and recommendation are two key functionalities. However, the performance of learning to recognize food content, such as the food name and its ingredients, from images is usually limited by the diverse appearances of food images, which also makes modeling users' eating preferences from these images more difficult. This talk presents our recent achievements in learning representations of food images for improved recognition and recommendation. Both are achieved by leveraging another view of food, i.e., the tagged ingredients, to regularize the encoding of image features. In food recognition, the multimodal assumption allows the use of transfer learning to map the representations of images to those of ingredients, thereby taking advantage of their stronger discriminative power. Food recommendation is more challenging, since users typically eat food across different categories, requiring the image features to capture, beyond semantics, what is referred to as collaborative similarity. We will show how to encode both the semantic and collaborative similarities in image representations via a continual multi-task learning approach. In addition to the technical details, the background, key challenges, and experimental findings will be discussed.
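
The PyTorch sketch below is only a minimal illustration of the cross-modal regularization idea described in the abstract, not the models presented in the talk: image features are pulled toward an embedding of the tagged ingredients so that the image encoder can borrow the stronger discriminative power of the ingredient view. All module names, feature dimensions, the one-way MSE alignment term, and the loss weighting are assumptions made for this example.

import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossModalFoodModel(nn.Module):
    """Hypothetical two-branch model: an image branch and an ingredient branch
    projected into a shared space, plus a food-category classifier."""

    def __init__(self, img_feat_dim=2048, num_ingredients=300,
                 num_classes=172, shared_dim=512):
        super().__init__()
        # Projects pre-extracted image features into the shared space.
        self.img_proj = nn.Linear(img_feat_dim, shared_dim)
        # Encodes the multi-hot ingredient vector into the same space.
        self.ing_proj = nn.Linear(num_ingredients, shared_dim)
        # Food-category classifier operating on the image embedding.
        self.classifier = nn.Linear(shared_dim, num_classes)

    def forward(self, img_feats, ing_multihot):
        z_img = F.relu(self.img_proj(img_feats))
        z_ing = F.relu(self.ing_proj(ing_multihot))
        logits = self.classifier(z_img)
        return z_img, z_ing, logits


def training_loss(model, img_feats, ing_multihot, labels, align_weight=0.5):
    """Classification loss plus an alignment term that regularizes the image
    embedding toward the (detached) ingredient embedding."""
    z_img, z_ing, logits = model(img_feats, ing_multihot)
    cls_loss = F.cross_entropy(logits, labels)
    align_loss = F.mse_loss(z_img, z_ing.detach())  # one-way alignment
    return cls_loss + align_weight * align_loss


if __name__ == "__main__":
    model = CrossModalFoodModel()
    imgs = torch.randn(8, 2048)                   # batch of image features
    ings = torch.randint(0, 2, (8, 300)).float()  # multi-hot ingredient tags
    labels = torch.randint(0, 172, (8,))          # food-category labels
    print(training_loss(model, imgs, ings, labels).item())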


Biography:

Lei Meng, a Qilu Young Scholars Distinguished Professor and doctoral supervisor, has been working at the School of Software, Shandong University since 2020. He received his bachelor's degree in engineering from Shandong University in 2010 and his doctorate from the School of Computer Science and Engineering, Nanyang Technological University (NTU), Singapore in 2015, under the supervision of Professor Ah-Hwee Tan. In 2015, he joined the Joint NTU-UBC Research Center of Excellence in Active Living for the Elderly (LILY) as a Research Fellow, co-supervised by Professor Chunyan Miao of Nanyang Technological University and Professor Cyril Leung of the University of British Columbia. In 2018, he joined the NUS-Tsinghua-Southampton Centre for Extreme Search (NExT++) as a Senior Research Fellow, supervised by Professor Tat-Seng Chua of the National University of Singapore.

Focusing on scientific problems in multimedia computing and data mining driven by Internet big data, he has long been engaged in research on machine learning theory and techniques for multimedia knowledge mining and content representation. He has carried out research on key smart-home technologies for health big data analysis, independently built a dietary health dataset containing tens of millions of records, and developed and deployed application systems for aging-friendly search, non-intrusive risk assessment, and healthy diet management. In response to strategic needs, he focuses on digital perception and intelligent decision-making in multi-scale social governance scenarios and conducts pioneering research in multimedia understanding, cross-modal reasoning, digital twins, and the metaverse. His main research topics include (1) self-organizing clustering algorithms based on adaptive resonance theory (ART); (2) image representation algorithms based on cross-modal enhancement; (3) deep learning methods for imbalanced data; (4) cross-modal causal inference methods combined with knowledge graphs; and (5) image and 3D scene generation based on multi-source data.