Speaker
Description
Deep Convolutional Neural Networks (deep CNNs) are currently unsurpassed as models of the object-recognition pathway in the ventral visual stream of macaque monkeys. However, the extent to which these models generalize to the rodent ventral visual stream is disputed. In this talk I will recap recent attempts to use CNNs to model the rat ventral stream and present a novel, ecologically grounded comparison. I will stress that, in the data-restricted regime of neuroscience, realistic image preprocessing is critical for meaningful comparisons, and that it leads to the following insights: (1) mid-to-late layers of these hierarchical architectures offer the best match for several rat behaviors; (2) probing the underlying visual strategy reveals that rats rely on strategies that make more efficient use of visual cues. Finally, these findings highlight the role of CNNs as a common language for comparing different biological visual systems.