Manessi, F., Rozza, A., Bianco, S., Napoletano, P., Schettini, R. (2018). Automated Pruning for Deep Neural Network Compression. In Proceedings - International Conference on Pattern Recognition (pp. 657-664). Institute of Electrical and Electronics Engineers Inc. doi: 10.1109/ICPR.2018.8546129.

Automated Pruning for Deep Neural Network Compression

Bianco, S.; Napoletano, P.; Schettini, R.
2018

Abstract

In this work we present a method to improve the pruning step of the current state-of-the-art methodology for compressing neural networks. The novelty of the proposed pruning technique lies in its differentiability, which allows pruning to be performed during the backpropagation phase of network training. This enables end-to-end learning and substantially reduces the training time. The technique is based on a family of differentiable pruning functions and a new regularizer specifically designed to enforce pruning. The experimental results show that jointly optimizing the pruning thresholds and the network weights yields a higher compression rate, reducing the number of weights of the pruned network by a further 14% to 33% compared to the current state-of-the-art. Furthermore, we believe this is the first study to analyze the generalization capabilities, in transfer learning tasks, of the features extracted by a pruned network. To this end, we show that the representations learned with the proposed pruning methodology retain the same effectiveness and generality as those learned by the corresponding non-compressed network on a set of different recognition tasks.
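The abstract describes the approach only at a high level. As a purely illustrative aid, the following is a minimal PyTorch sketch of how a differentiable magnitude-pruning function with a learnable, jointly trained threshold could be implemented. The sigmoid gate, the softplus threshold parameterization, and the exponential pruning regularizer are assumptions made for this sketch, not the paper's actual formulation.

```python
# Hypothetical sketch of differentiable threshold pruning
# (illustrative only; not the paper's exact formulation).
import torch
import torch.nn as nn
import torch.nn.functional as F


class SoftPrunedLinear(nn.Module):
    """Linear layer whose weights are gated by a differentiable pruning
    function: weights with magnitude below a learnable threshold are
    smoothly driven toward zero, so the threshold can be trained by
    backpropagation together with the weights."""

    def __init__(self, in_features, out_features, sharpness=50.0):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        # Learnable per-layer pruning threshold, stored in raw form and
        # mapped through softplus so the effective threshold stays > 0.
        self.raw_threshold = nn.Parameter(torch.tensor(-3.0))
        self.sharpness = sharpness

    @property
    def threshold(self):
        return F.softplus(self.raw_threshold)

    def pruned_weight(self):
        w = self.linear.weight
        # Smooth gate: close to 0 for |w| below the threshold, close to 1
        # above it; differentiable in both the weights and the threshold.
        gate = torch.sigmoid(self.sharpness * (w.abs() - self.threshold))
        return w * gate

    def forward(self, x):
        return F.linear(x, self.pruned_weight(), self.linear.bias)


def pruning_regularizer(model, strength=1e-3):
    """Penalty that rewards larger thresholds, i.e. more pruning.
    This specific exponential form is an assumption for illustration."""
    reg = 0.0
    for m in model.modules():
        if isinstance(m, SoftPrunedLinear):
            reg = reg + torch.exp(-m.threshold)
    return strength * reg
```

During training, such a regularizer would simply be added to the task loss, e.g. `loss = criterion(model(x), y) + pruning_regularizer(model)`; after training, weights whose gate is effectively zero can be removed to obtain the compressed network.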
Type: slide + paper
Keywords: deep neural networks, compression
Language: English
Conference: 24th International Conference on Pattern Recognition, ICPR 2018
Conference year: 2018
Published in: Proceedings - International Conference on Pattern Recognition
ISBN: 9781538637883
Publication year: 2018
Pages: 657-664
Article number: 8546129
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/215420
Citations
  • Scopus: 48
  • Web of Science (ISI): 41