
A semi-automatic annotation tool for cooking video

Bianco, Simone; Ciocca, Gianluigi; Napoletano, Paolo; Schettini, Raimondo
2013

Abstract

To build a cooking assistant application that guides users through the preparation of dishes suited to their dietary profiles and food preferences, the recipe videos must be accurately annotated by identifying and tracking the foods handled by the cook. These videos pose particular annotation challenges, such as frequent occlusions and changes in food appearance. Manually annotating the videos is a time-consuming, tedious, and error-prone task. Fully automatic tools that integrate computer vision algorithms to extract and identify the elements of interest are not error-free, and false positive and false negative detections need to be corrected in a post-processing stage. We present an interactive, semi-automatic tool for the annotation of cooking videos that integrates computer vision techniques under the supervision of the user. Annotation accuracy is increased with respect to fully automatic tools, and human effort is reduced with respect to fully manual ones. The performance and usability of the proposed tool are evaluated on the basis of the time and effort required to annotate the same video sequences.
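The record does not include the implementation, but the abstract describes a detect-track-correct workflow in which automatic proposals are supervised by the user. The following is a minimal, hypothetical sketch of such a semi-automatic loop, not the tool described in the paper; it assumes opencv-contrib-python, and the function name annotate_video, the review_every interval, and the choice of the CSRT tracker are illustrative assumptions:

# Illustrative sketch only, NOT the paper's tool: a generic semi-automatic
# annotation loop in which an automatic tracker proposes bounding boxes and
# the user periodically confirms or corrects them.
# Assumes opencv-contrib-python; annotate_video, review_every and the CSRT
# tracker are hypothetical choices made for illustration.
import cv2


def annotate_video(path, init_box, review_every=30):
    """Track one food item and pause every `review_every` frames for review."""
    cap = cv2.VideoCapture(path)
    ok, frame = cap.read()
    if not ok:
        raise IOError("cannot read video: %s" % path)

    tracker = cv2.TrackerCSRT_create()      # generic short-term tracker
    tracker.init(frame, init_box)           # init_box = (x, y, w, h)

    annotations = {0: tuple(init_box)}
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame_idx += 1
        ok, box = tracker.update(frame)

        # On tracking failure, or at a fixed review interval, hand control
        # back to the annotator, who accepts the proposed box or redraws it.
        if not ok or frame_idx % review_every == 0:
            box = cv2.selectROI("correct box", frame, showCrosshair=False)
            cv2.destroyWindow("correct box")
            tracker = cv2.TrackerCSRT_create()
            tracker.init(frame, box)        # re-initialise after correction

        annotations[frame_idx] = tuple(int(v) for v in box)

    cap.release()
    return annotations

In this kind of loop the accuracy/effort trade-off described in the abstract is governed by how often the user is asked to intervene: smaller review intervals approach fully manual annotation, larger ones approach fully automatic behaviour.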
Presentation type: slide + paper
Keywords: Video annotation, object recognition, interactive tracking
Language: English
Conference and proceedings: Image Processing: Machine Vision Applications VI; Burlingame, CA; United States; 5-6 February 2013
Year of publication: 2013
ISBN: 978-081949434-4
Volume: 8661
Article number: 866112
Bianco, S., Ciocca, G., Napoletano, P., Schettini, R., Margherita, R., Marini, G., et al. (2013). A semi-automatic annotation tool for cooking video. In Image Processing: Machine Vision Applications VI; Burlingame, CA; United States; 5-6 February 2013 [10.1117/12.2003878].
Files in this product:
There are no files associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/52671
Citations
  • Scopus: 6
  • Web of Science (ISI): 2