Description
In this talk, I will discuss recent results in Bayesian deep learning and how they may provide a new theoretical perspective that unifies several seemingly distinct functional interpretations of the role of noise in the brain. Specifically, I will show that the following are near-equivalent properties of (artificial) deep neural networks: (a) multiplicative noise in neural network units; (b) Bayesian inference over neural network parameters; (c) "data augmentation"; (d) robustness of the network to structured input corruptions; and (e) the ability to generalize to different settings. In other words, noise in neural networks can simultaneously serve multiple functional roles, yielding increased robustness, data efficiency, and generalization, all desirable properties for biological networks as well. Key points open for discussion with the audience are whether and how this proposed unified perspective can lead to actionable empirical insights and improve our understanding of information processing in the brain. The talk builds upon results from this Bayesian deep learning paper, adapted for a neuroscience audience: https://aaltopml.github.io/node-BNN-covariate-shift/
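
To make point (a) concrete, here is a minimal PyTorch sketch of a layer whose hidden units are perturbed by multiplicative Gaussian noise, in the spirit of the node-based Bayesian neural networks in the linked paper. The class name, the noise scale `sigma`, and the `mc_predict` helper are illustrative choices for this sketch, not the paper's implementation.

```python
import torch
import torch.nn as nn

class MultiplicativeNoiseLinear(nn.Module):
    """Linear layer with multiplicative Gaussian noise on its hidden units,
    z ~ N(1, sigma^2). A minimal illustration of node-level noise; not the
    exact architecture or training procedure from the paper."""

    def __init__(self, in_features, out_features, sigma=0.1):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.sigma = sigma

    def forward(self, x):
        h = self.linear(x)
        # Each noise sample corresponds to one member of an implicit
        # ensemble of networks, linking (a) to (b) in the abstract.
        z = 1.0 + self.sigma * torch.randn_like(h)
        return torch.relu(h * z)

def mc_predict(model, x, num_samples=32):
    """Average several stochastic forward passes: a Monte Carlo
    approximation to averaging over the induced network ensemble."""
    with torch.no_grad():
        return torch.stack([model(x) for _ in range(num_samples)]).mean(0)
```

Averaging predictions over noise samples at test time, as in `mc_predict`, is one simple way such noise can buy robustness to input corruptions without any change to the training data.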