Multimodal and mobile personal image retrieval: A user study

Details

Serval ID
serval:BIB_DA7A2C8BC2DC
Type
Inproceedings: an article in a conference proceedings.
Collection
Publications
Title
Multimodal and mobile personal image retrieval: A user study
Title of the conference
Proceedings of the International Workshop on Mobile Information Retrieval (MobIR’08)
Author(s)
Anguera X., Oliver N., Cherubini M.
Address
Singapore
Publication state
Published
Issued date
2008
Pages
17–23
Language
English
Abstract
Mobile phones have become multimedia devices, and it is now common to see users capturing photos and videos on them. As the amount of digital multimedia content grows, finding a specific image on the device becomes increasingly difficult. In this paper, we present our experience with MAMI, a mobile phone prototype that allows users to annotate and search for digital photos on their camera phone via speech input. MAMI is implemented as a mobile application that runs in real time on the phone. Users can add speech annotations at the time of capturing photos or at a later time. Additional metadata is also stored with the photos, such as location, user identification, date and time of capture, and image-based features. Users can search for photos in their personal repository by means of speech without the need for connectivity to a server. In this paper, we focus on our findings from a user study aimed at comparing the efficacy of search, as well as the ease of use and desirability, of the MAMI prototype against the standard image browser available on mobile phones today.
Keywords
Mobile Camera Phones, Speech Annotations, Multimedia Retrieval, User Experience, Digital Image Management
Create date
29/11/2016 14:35
Last modification date
20/08/2019 15:59