Speaker
Description
Discriminating distinct objects and concepts from sensory stimuli is essential for survival. Our brains perform this processing in deep sensory networks shaped by plasticity. However, our understanding of the underlying plasticity mechanisms remains limited. First, I will present recent work on Latent Predictive Learning (LPL), a plausible normative theory of representation learning based on predicting future sensory inputs. I will show that LPL allows sensory networks to disentangle object representations while accounting for key findings from plasticity experiments.
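To give a concrete flavor of this kind of objective, the sketch below shows one way a latent predictive loss can be written: representations of temporally adjacent inputs are pulled together, while a variance term prevents collapse and a decorrelation term spreads information across units. This is a minimal illustration loosely following published LPL-style formulations, not necessarily the exact objective presented in the talk; the weights lambda_var and lambda_decorr and all variable names are illustrative assumptions.

# Minimal, illustrative sketch of a latent predictive objective
# (not the speaker's exact formulation).
import numpy as np

def lpl_style_loss(z_t, z_next, lambda_var=1.0, lambda_decorr=0.1, eps=1e-6):
    """z_t, z_next: (batch, units) representations of consecutive inputs."""
    # Predictive term: penalize change of the representation over time.
    predictive = np.mean((z_next - z_t) ** 2)

    # Variance term: discourage collapsed, constant units.
    var = np.var(z_next, axis=0)
    variance = -np.mean(np.log(var + eps))

    # Decorrelation term: penalize off-diagonal covariance between units.
    centered = z_next - z_next.mean(axis=0, keepdims=True)
    cov = centered.T @ centered / (z_next.shape[0] - 1)
    off_diag = cov - np.diag(np.diag(cov))
    decorrelation = np.mean(off_diag ** 2)

    return predictive + lambda_var * variance + lambda_decorr * decorrelation

# Example usage on random representations of 128 consecutive input pairs.
rng = np.random.default_rng(0)
z_t = rng.normal(size=(128, 64))
z_next = z_t + 0.05 * rng.normal(size=(128, 64))  # temporally adjacent views
print(lpl_style_loss(z_t, z_next))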
Second, I will discuss recent ideas on how biological neural networks could solve the credit assignment problem, thereby improving representation learning in deep biological networks.
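As background for the credit assignment discussion, one widely studied, biologically motivated alternative to exact backpropagation is feedback alignment (Lillicrap et al., 2016), in which errors reach earlier layers through fixed random feedback weights rather than the transpose of the forward weights. The sketch below is illustrative only and is not necessarily among the ideas covered in the talk; the network sizes, learning rate, and variable names are assumptions.

# Illustrative sketch of feedback alignment: errors are sent backwards
# through a fixed random matrix B instead of W2.T, so no weight transport
# is required. Not necessarily the scheme discussed in the talk.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hidden, n_out = 10, 32, 1

W1 = 0.1 * rng.normal(size=(n_in, n_hidden))
W2 = 0.1 * rng.normal(size=(n_hidden, n_out))
B = 0.1 * rng.normal(size=(n_out, n_hidden))     # fixed random feedback weights

X = rng.normal(size=(256, n_in))
y = np.tanh(X @ rng.normal(size=(n_in, n_out)))  # toy regression target

lr = 0.05
for step in range(500):
    h = np.tanh(X @ W1)          # hidden activity
    y_hat = h @ W2               # network output
    e = y_hat - y                # output error

    # Credit assignment via fixed random feedback instead of W2.T:
    delta_h = (e @ B) * (1.0 - h ** 2)

    W2 -= lr * h.T @ e / len(X)
    W1 -= lr * X.T @ delta_h / len(X)

print("final MSE:", float(np.mean((np.tanh(X @ W1) @ W2 - y) ** 2)))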