Ognibene, D., Chinellato, E., Sarabia, M., & Demiris, Y. (2013). Towards Contextual Action Recognition and Target Localization with Active Allocation of Attention. In Living Machines 2012: First International Conference, Barcelona, Spain, July 9-12, 2012, Proceedings (pp. 192-203). Springer. doi:10.1007/978-3-642-31525-1_17
Towards Contextual Action Recognition and Target Localization with Active Allocation of Attention
Ognibene, D. (first author); 2013
Abstract
Exploratory gaze movements are fundamental for gathering the most relevant information about the partner during social interactions. We have designed and implemented a system for dynamic attention allocation that actively controls gaze movements during a visual action recognition task. While observing a partner's reaching movement, the robot contextually estimates the goal position of the partner's hand and the locations in space of the candidate targets, moving its gaze so as to optimize the gathering of task-relevant information. Experimental results in a simulated environment show that active gaze control provides a significant advantage over typical passive observation, both in terms of estimation precision and of the time required for action recognition.
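The abstract describes active gaze control as optimizing the gathering of task-relevant information about the reach goal. The sketch below is an illustrative toy example only, not the authors' system: it greedily picks the next fixation that minimizes the expected entropy of a discrete belief over candidate reach targets, under an assumed foveated observation model. All positions, noise parameters, and function names are hypothetical.

```python
import numpy as np

# Illustrative sketch (not the paper's implementation): greedy gaze selection
# that picks the fixation expected to most reduce uncertainty about which
# candidate target the partner's hand is reaching for.
rng = np.random.default_rng(0)

targets = np.array([[0.2, 0.6], [0.5, 0.9], [0.8, 0.4]])  # candidate target locations (assumed)
belief = np.ones(len(targets)) / len(targets)              # uniform prior over targets

def noise_sigma(point, gaze, base=0.05, blur=0.5):
    # Observation noise grows with distance from the current fixation (foveation assumption).
    return base + blur * np.linalg.norm(point - gaze)

def likelihood(hand_obs, target, gaze):
    # Likelihood of observing the hand at `hand_obs` if it is reaching for `target`.
    s = noise_sigma(hand_obs, gaze)
    d = np.linalg.norm(hand_obs - target)
    return np.exp(-0.5 * (d / s) ** 2) / s

def entropy(p):
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def expected_posterior_entropy(gaze, n_samples=200):
    # Monte-Carlo estimate of the belief entropy after one observation taken at `gaze`.
    total = 0.0
    for _ in range(n_samples):
        true_t = targets[rng.choice(len(targets), p=belief)]
        hand_obs = true_t + rng.normal(0.0, noise_sigma(true_t, gaze), size=2)
        post = np.array([likelihood(hand_obs, t, gaze) for t in targets]) * belief
        post /= post.sum()
        total += entropy(post)
    return total / n_samples

# Active allocation: fixate where the expected residual uncertainty is lowest,
# rather than passively tracking the hand.
next_fixation = min(targets, key=expected_posterior_entropy)
print("next fixation:", next_fixation)
```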
| File | Description | Attachment type | License | Access | Size | Format |
|---|---|---|---|---|---|---|
| Ognibene-2013-Living Machines 2012-VoR.pdf | Conference paper | Publisher's Version (Version of Record, VoR) | All rights reserved | Archive managers only | 439.14 kB | Adobe PDF |