File Download

There are no files associated with this item; links to the full text (which may require a subscription) are provided instead.
Conference Paper: Multi-modal multi-task learning for automatic dietary assessment

Title: Multi-modal multi-task learning for automatic dietary assessment
Authors: Liu, Qi; Zhang, Yue; Liu, Zhenguang; Yuan, Ye; Cheng, Li; Zimmermann, Roger
Issue Date: 2018
Publisher: Association for the Advancement of Artificial Intelligence
Citation: 32nd AAAI Conference on Artificial Intelligence (AAAI 2018), New Orleans, 2-7 February 2018. In 32nd AAAI Conference on Artificial Intelligence, AAAI 2018, 2018, p. 2347-2354
Abstract: We investigate the task of automatic dietary assessment: given meal images and descriptions uploaded by real users, our task is to automatically rate the meals and deliver advisory comments for improving users' diets. To address this practical yet challenging problem, which is multi-modal and multi-task in nature, an end-to-end neural model is proposed. In particular, comprehensive meal representations are obtained from images, descriptions and user information. We further introduce a novel memory network architecture to store meal representations and reason over the meal representations to support predictions. Results on a real-world dataset show that our method outperforms two strong image captioning baselines significantly.
Persistent Identifier: http://hdl.handle.net/10722/321829
ISBN: 9781577358008
ISI Accession Number ID: WOS:000485488902052
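The abstract describes an end-to-end model that fuses image, description and user features into a meal representation, stores such representations in a memory network, and attends over them to support a rating task and a comment task. The sketch below only illustrates that general shape, assuming PyTorch; every module name, dimension and head is a placeholder, not the authors' implementation (their comment head, for instance, would be a full sequence decoder rather than a single projection).

import torch
import torch.nn as nn

class DietaryAssessmentSketch(nn.Module):
    """Illustrative stand-in for a multi-modal multi-task meal model."""

    def __init__(self, img_dim=2048, txt_dim=300, usr_dim=32,
                 hid=256, mem_slots=64, vocab=10000):
        super().__init__()
        # Fuse image, description and user features into one meal vector.
        self.fuse = nn.Linear(img_dim + txt_dim + usr_dim, hid)
        # A bank of stored meal representations to reason over.
        self.memory = nn.Parameter(torch.randn(mem_slots, hid))
        # Two task heads: a scalar meal rating and a comment predictor
        # (a single projection here, standing in for a sequence decoder).
        self.rating_head = nn.Linear(hid, 1)
        self.comment_head = nn.Linear(hid, vocab)

    def forward(self, img_feat, txt_feat, usr_feat):
        meal = torch.tanh(self.fuse(torch.cat([img_feat, txt_feat, usr_feat], dim=-1)))
        # Attend over the memory and mix the retrieved context back in.
        attn = torch.softmax(meal @ self.memory.T, dim=-1)  # (batch, mem_slots)
        context = attn @ self.memory                        # (batch, hid)
        h = meal + context
        return self.rating_head(h), self.comment_head(h)

# Toy usage with random features in the assumed dimensions.
model = DietaryAssessmentSketch()
rating, comment_logits = model(torch.randn(4, 2048), torch.randn(4, 300), torch.randn(4, 32))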


DC Field: Value
dc.contributor.author: Liu, Qi
dc.contributor.author: Zhang, Yue
dc.contributor.author: Liu, Zhenguang
dc.contributor.author: Yuan, Ye
dc.contributor.author: Cheng, Li
dc.contributor.author: Zimmermann, Roger
dc.date.accessioned: 2022-11-03T02:21:44Z
dc.date.available: 2022-11-03T02:21:44Z
dc.date.issued: 2018
dc.identifier.citation: 32nd AAAI Conference on Artificial Intelligence (AAAI 2018), New Orleans, 2-7 February 2018. In 32nd AAAI Conference on Artificial Intelligence, AAAI 2018, 2018, p. 2347-2354
dc.identifier.isbn: 9781577358008
dc.identifier.uri: http://hdl.handle.net/10722/321829
dc.description.abstract: We investigate the task of automatic dietary assessment: given meal images and descriptions uploaded by real users, our task is to automatically rate the meals and deliver advisory comments for improving users' diets. To address this practical yet challenging problem, which is multi-modal and multi-task in nature, an end-to-end neural model is proposed. In particular, comprehensive meal representations are obtained from images, descriptions and user information. We further introduce a novel memory network architecture to store meal representations and reason over the meal representations to support predictions. Results on a real-world dataset show that our method outperforms two strong image captioning baselines significantly.
dc.language: eng
dc.publisher: Association for the Advancement of Artificial Intelligence
dc.relation.ispartof: 32nd AAAI Conference on Artificial Intelligence, AAAI 2018
dc.title: Multi-modal multi-task learning for automatic dietary assessment
dc.type: Conference_Paper
dc.description.nature: link_to_OA_fulltext
dc.identifier.doi: 10.1609/aaai.v32i1.11848
dc.identifier.scopus: eid_2-s2.0-85060435252
dc.identifier.spage: 2347
dc.identifier.epage: 2354
dc.identifier.isi: WOS:000485488902052
dc.publisher.place: Washington, DC
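The dc.identifier.doi field above can be turned into a formatted reference programmatically: doi.org supports content negotiation, returning BibTeX (or CSL JSON) instead of redirecting to the landing page, assuming the DOI's registration agency supports it (Crossref does). A minimal sketch:

import urllib.request

# Ask the DOI resolver for BibTeX rather than the landing page.
req = urllib.request.Request(
    "https://doi.org/10.1609/aaai.v32i1.11848",
    headers={"Accept": "application/x-bibtex"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))  # BibTeX entry for this paper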

Export: via the OAI-PMH interface in XML formats, or to other non-XML formats.
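As a sketch of what the OAI-PMH export involves: a GetRecord request for the item in unqualified Dublin Core, matching the DC fields listed above. The base URL and the OAI identifier below are assumptions for illustration; the repository's actual endpoint and identifier scheme may differ.

import urllib.parse
import urllib.request

# Placeholder endpoint and identifier -- check the repository's actual
# OAI-PMH base URL and identifier scheme before relying on these.
BASE_URL = "https://hub.hku.hk/oai/request"
params = {
    "verb": "GetRecord",
    "metadataPrefix": "oai_dc",                   # unqualified Dublin Core
    "identifier": "oai:hub.hku.hk:10722/321829",  # reuses the handle suffix
}

with urllib.request.urlopen(BASE_URL + "?" + urllib.parse.urlencode(params)) as resp:
    print(resp.read().decode("utf-8"))  # raw oai_dc XML for the record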