Playing monotone games to understand learning behaviors

ZOPPIS, ITALO FRANCESCO
2010

Abstract

We deal with a special class of games against nature corresponding to subsymbolic learning problems in which we know a local descent direction in the error landscape but not the amount gained at each step of the learning procedure. Namely, Alice and Bob play a game in which the probability of victory grows monotonically, by unknown amounts, with the resources each employs. For a fixed effort on Alice's part, Bob increases his resources on the basis of the outcomes of the individual contests (victory, tie, or defeat). Quite unlike the usual aims in game theory, his goal is to stop as soon as the defeat probability falls below a given threshold with high confidence. We adopt this game policy as an archetypal remedy to the general overtraining threat faced by learning algorithms. Namely, we recast the original game in a computational learning framework analogous to the Probably Approximately Correct formulation. There, a judicious use of a special inferential mechanism (known as the twisting argument) highlights relevant statistics for managing different trade-offs between observability and controllability of the defeat probability. With similar statistics we discuss an analogous trade-off underlying the stopping criterion of subsymbolic learning procedures. In conclusion, we propose a principled stopping rule based solely on the behavior of the training session, hence without diverting examples into a test set.
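
To make Bob's stopping policy concrete, here is a minimal sketch. Everything in it is an illustrative assumption rather than the paper's construction: the contest model (defeat_prob), the resource-increase rule, and the Hoeffding-style confidence bound are hypothetical stand-ins for the twisting-argument statistics the abstract refers to.

```python
import math
import random

def defeat_prob(alice_effort: float, bob_resources: float) -> float:
    """Hypothetical contest model: Bob's defeat probability decreases
    monotonically (by amounts unknown to the players) in his resources."""
    return alice_effort / (alice_effort + bob_resources)

def play_until_confident(alice_effort=5.0, eps=0.1, delta=0.05,
                         step=1.0, max_rounds=100_000, seed=0):
    """Bob grows his resources after every defeat and stops as soon as a
    one-sided Hoeffding-style upper bound on the defeat frequency falls
    below eps at confidence level 1 - delta.

    Caveat: the contests here are not i.i.d. (the defeat probability drops
    as resources grow), so this bound is only an illustrative proxy for
    the paper's statistics, not a faithful reproduction of them."""
    rng = random.Random(seed)
    resources, defeats = 1.0, 0
    for n in range(1, max_rounds + 1):
        if rng.random() < defeat_prob(alice_effort, resources):
            defeats += 1
            resources += step  # invest more only when beaten
        upper = defeats / n + math.sqrt(math.log(1 / delta) / (2 * n))
        if upper < eps:
            return {"rounds": n, "resources": resources,
                    "defeat_rate": defeats / n}
    return None  # confidence never reached within the round budget

if __name__ == "__main__":
    print(play_until_confident())
```

In the same spirit, the test-set-free stopping rule the abstract proposes would monitor a statistic of the training session itself (e.g., the sequence of per-step error decreases) in place of the contest outcomes above.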
Journal article - Scientific article
Algorithmic inference; Computational learning; Monotone games; Overtraining control; Subsymbolic learning; Training stopping rule;
Language: English
Year: 2010
Volume: 411
Issue: 25
Pages: 2384-2405
Apolloni, B., Bassis, S., Gaito, S., Malchiodi, D., & Zoppis, I. (2010). Playing monotone games to understand learning behaviors. Theoretical Computer Science, 411(25), 2384-2405. doi:10.1016/j.tcs.2010.02.011
Files in this item:
There are no files associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/27998
Citations
  • Scopus 0
  • Web of Science 0