Understanding and Leveraging the Learning Phases of Neural Networks


Reference

Schneider, J., & Prabhushankar, M. (2024). Understanding and Leveraging the Learning Phases of Neural Networks. Paper presented at the AAAI Conference on Artificial Intelligence.

Publication type

Contribution in conference proceedings

Abstract

The learning dynamics of deep neural networks are not well understood. The information bottleneck (IB) theory posited separate fitting and compression phases, but these phases have since been heavily debated. We comprehensively analyze the learning dynamics by investigating a layer's ability to reconstruct the input and its prediction performance, based on the evolution of parameters during training. We empirically show the existence of three phases using common datasets and architectures such as ResNet and VGG: (i) near-constant reconstruction loss, (ii) decreasing loss, and (iii) increasing loss. We also derive an empirically grounded data model and prove the existence of these phases for single-layer networks. Technically, our approach leverages classical complexity analysis. It differs from IB by measuring reconstruction loss rather than information-theoretic quantities to relate intermediate layers to the inputs. Our work implies a new best practice for transfer learning: we show empirically that the pre-training of a classifier should stop well before its performance is optimal.
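The probing idea described in the abstract can be sketched as follows: train a small classifier, and after each epoch fit a linear decoder from the hidden activations back to the inputs, recording the reconstruction error over training. This is a minimal NumPy sketch under assumptions of ours (synthetic data, a single tanh hidden layer, a least-squares linear decoder); all names and hyperparameters are illustrative, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic two-class data (stand-in for the paper's datasets).
n, d, h = 200, 10, 16
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true > 0).astype(float)

# One-hidden-layer network: tanh hidden units, logistic output.
W1 = rng.normal(scale=0.1, size=(d, h))
w2 = rng.normal(scale=0.1, size=h)

def forward(X, W1, w2):
    H = np.tanh(X @ W1)                      # hidden activations
    p = 1.0 / (1.0 + np.exp(-(H @ w2)))      # predicted class probability
    return H, p

def reconstruction_mse(H, X):
    # Fit a least-squares linear decoder from hidden activations back to X.
    B, *_ = np.linalg.lstsq(H, X, rcond=None)
    return float(np.mean((H @ B - X) ** 2))

lr, epochs = 0.5, 100
recon_curve = []
for _ in range(epochs):
    H, p = forward(X, W1, w2)
    err = p - y                              # gradient of logistic loss w.r.t. logits
    grad_w2 = H.T @ err / n
    grad_pre = np.outer(err, w2) * (1.0 - H**2)  # backprop through tanh
    grad_W1 = X.T @ grad_pre / n
    w2 -= lr * grad_w2
    W1 -= lr * grad_W1
    recon_curve.append(reconstruction_mse(H, X))
```

Plotting `recon_curve` over epochs is one way to inspect for the phases the paper describes; whether all three appear in this toy setting depends on the data and hyperparameters.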

Contributors

Institutions

  • Liechtenstein Business School

Original Source URL

Link

DOI

https://doi.org/10.1609/aaai.v38i13.29408