Abstract

Understanding the asymptotic behavior of wide neural networks is of considerable interest in machine learning. In the large width limit, the number of layers of a network is held fixed while the number of neurons in each layer is sent to infinity. We present a general method for deriving scaling laws of correlators in this limit. The method is an adaptation of 't Hooft's large N expansion, where the number of neurons plays the role of N. We apply our method to study training dynamics during gradient descent, improving on existing results in the strict large width limit and computing the 1/N correction to network evolution. The talk is based on https://arxiv.org/abs/1909.11304.
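
As a schematic illustration of the kind of scaling law in question (the notation here is assumed for exposition and not taken from the abstract): writing $f_\theta(x)$ for the network output and $n$ for the width, correlation functions at initialization admit an expansion in $1/n$,

$$
\langle f_\theta(x_1)\, f_\theta(x_2) \rangle
= K^{(0)}(x_1, x_2) + \frac{1}{n}\, K^{(1)}(x_1, x_2) + O\!\left(\frac{1}{n^2}\right),
$$

where $K^{(0)}$ is the kernel of the Gaussian process reached in the strict infinite-width limit and the $1/n$ term is the leading finite-width correction. In the analogy with 't Hooft's expansion, $n$ plays the role of $N$.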