Bianco, S., Ciocca, G., Cusano, C. (2016). CURL: Image Classification using co-training and Unsupervised Representation Learning. COMPUTER VISION AND IMAGE UNDERSTANDING, 145, 15-29 [10.1016/j.cviu.2016.01.003].

CURL: Image Classification using co-training and Unsupervised Representation Learning

BIANCO, SIMONE (first author);
CIOCCA, GIANLUIGI (second author);
2016

Abstract

In this paper we propose a strategy for semi-supervised image classification that leverages unsupervised representation learning and co-training. The strategy, called CURL (Co-training and Unsupervised Representation Learning), iteratively builds two classifiers on two different views of the data. The two views correspond to different representations learned from both labeled and unlabeled data, and differ in the fusion scheme used to combine the image features. To assess the performance of our proposal, we conducted several experiments on widely used data sets for scene and object recognition. We considered three scenarios (inductive, transductive and self-taught learning) that differ in the strategy followed to exploit the unlabeled data. As image features we considered a combination of GIST, PHOG, and LBP, as well as features extracted from a Convolutional Neural Network. Moreover, two embodiments of CURL are investigated: one using Ensemble Projection as unsupervised representation learning coupled with Logistic Regression, and one based on LapSVM. The results show that CURL clearly outperforms other state-of-the-art supervised and semi-supervised learning methods.
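The co-training loop summarized in the abstract can be sketched as follows. This is a minimal, hypothetical illustration of generic co-training on two precomputed feature views, not the authors' implementation: it uses a simple nearest-centroid base learner (the paper uses Logistic Regression or LapSVM on learned representations), and each view's classifier labels its most confident unlabeled points for the other view's training set.

```python
import numpy as np

class NearestCentroid:
    """Minimal nearest-centroid classifier used as a stand-in base learner."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict_conf(self, X):
        # Distance to each class centroid; confidence is the margin between
        # the two closest centroids (larger margin = more confident).
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        order = np.argsort(d, axis=1)
        labels = self.classes_[order[:, 0]]
        margin = d[np.arange(len(X)), order[:, 1]] - d[np.arange(len(X)), order[:, 0]]
        return labels, margin

def co_train(X1, X2, y, labeled, n_iter=5, k=2):
    """Generic co-training on two feature views X1, X2.
    `y` holds labels for indices in `labeled`; other entries are ignored.
    Each round, every view's classifier labels its k most confident
    unlabeled points and adds them to the OTHER view's training set."""
    lab1, lab2 = set(labeled), set(labeled)
    y = y.copy()
    for _ in range(n_iter):
        if len(lab1 | lab2) == len(y):
            break
        # Retrain one classifier per view on its current labeled pool.
        c1 = NearestCentroid().fit(X1[sorted(lab1)], y[sorted(lab1)])
        c2 = NearestCentroid().fit(X2[sorted(lab2)], y[sorted(lab2)])
        for clf, X, dst in ((c1, X1, lab2), (c2, X2, lab1)):
            pool = [i for i in range(len(y)) if i not in (lab1 | lab2)]
            if not pool:
                break
            labels, conf = clf.predict_conf(X[pool])
            for j in np.argsort(conf)[::-1][:k]:
                y[pool[j]] = labels[j]
                dst.add(pool[j])
    # Final classifiers trained on the augmented labeled sets.
    c1 = NearestCentroid().fit(X1[sorted(lab1)], y[sorted(lab1)])
    c2 = NearestCentroid().fit(X2[sorted(lab2)], y[sorted(lab2)])
    return c1, c2, y, lab1 | lab2
```

The two views here are assumed to be given; in CURL they instead come from representations learned on labeled plus unlabeled data with different feature-fusion schemes.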
Journal article - Scientific article
Image classification; Machine learning algorithms; Pattern analysis; Semi-supervised learning
English
2016
Volume 145, pp. 15-29
Files in this product:
No files are associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/106762
Citations
  • Scopus 5
  • Web of Science 5