Scaling Limits of Neural Networks

Boris Hanin, Princeton
Fine Hall 214

Neural networks are often studied analytically through scaling limits: regimes in which structural network parameters, such as depth, width, and the number of training datapoints, are taken to infinity, yielding simplified models of learning. I will survey several such approaches with the goal of illustrating the rich and still not fully understood space of possible behaviors when some or all of a network's structural parameters are large.
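A concrete instance of such a scaling limit (my own illustrative sketch, not necessarily one of the examples from the talk) is the infinite-width limit at initialization: at a fixed input, the output of a randomly initialized one-hidden-layer ReLU network converges in distribution to a Gaussian as the hidden width grows. The function and parameter names below are my own choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_net_output(width, x_norm, n_samples=10000):
    """Sample f(x) = (1/sqrt(width)) * sum_j v_j * relu(w_j . x) over random
    standard-normal initializations. Since w_j . x ~ N(0, |x|^2), we sample
    the pre-activations directly rather than materializing the weights w_j."""
    pre = x_norm * rng.standard_normal((n_samples, width))  # w_j . x
    v = rng.standard_normal((n_samples, width))             # readout weights
    return (v * np.maximum(pre, 0.0)).sum(axis=1) / np.sqrt(width)

x_norm = np.sqrt(5.25)  # |x| for some fixed input x
for width in [1, 10, 500]:
    out = random_net_output(width, x_norm)
    # Excess kurtosis is 0 for a Gaussian; it shrinks as width increases.
    kurt = (out**4).mean() / out.var()**2 - 3.0
    print(f"width={width:4d}  var={out.var():.3f}  excess_kurtosis={kurt:+.3f}")
```

With the 1/sqrt(width) scaling, the output variance stays of order one while the higher cumulants vanish, which is exactly the simplification that makes the wide limit tractable.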