The NeuroAILab's work is founded on two mutually reinforcing hypotheses:
By studying how the brain solves computational challenges, we can learn to build better artificial intelligence algorithms.
By improving artificial intelligence algorithms, we will discover better representations of how the brain operates.
We investigate these hypotheses using techniques from computational modeling and artificial intelligence, high-throughput neurophysiology, functional brain imaging, behavioral psychophysics, and large-scale data analysis.
Three of our main lines of research are:
Humans (and other animals) are astonishingly good at solving the hugely difficult computational problem of processing noisy and complex real-world stimuli (for example, sounds and images) into meaningful high-level representations. But how do brains do this? Using recent advances in artificial intelligence, we've been exploring how the detailed neural mechanisms of visual, auditory, and somatosensory brain areas are shaped by the need to optimize performance on high-level recognition tasks. We're also interested in questions like: What are the dimensionality and capacity of our visual system? What do neurons in higher visual cortex really represent? We're investigating these questions using a combination of computational modeling, data analysis, and neurophysiological and psychophysical experiments, including model-to-brain comparisons like the sketch below.
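To make that kind of comparison concrete, here is a minimal Python sketch of one standard scoring procedure: fitting a cross-validated ridge regression from the features of a task-optimized network layer to recorded neural responses, neuron by neuron. All data, sizes, and variable names here (`model_features`, `neural_responses`) are synthetic placeholders, not actual lab data or code.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# Synthetic stand-ins: features from a task-optimized network layer and
# neural responses to the same images (all simulated for illustration).
rng = np.random.default_rng(0)
n_images, n_features, n_neurons = 500, 256, 50
model_features = rng.standard_normal((n_images, n_features))
weights = rng.standard_normal((n_features, n_neurons))
neural_responses = model_features @ weights \
    + 0.5 * rng.standard_normal((n_images, n_neurons))

# Cross-validated ridge regression from model features to each neuron:
# a common way to score how well a model layer predicts a neural site.
scores = [
    cross_val_score(Ridge(alpha=1.0), model_features,
                    neural_responses[:, i], cv=5).mean()
    for i in range(n_neurons)
]
print(f"median cross-validated R^2 across neurons: {np.median(scores):.3f}")
```

In practice this kind of score can be computed for every layer of a network, so that the layer-by-layer fit to different brain areas becomes a testable signature of the model.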
State-of-the-art neural networks learn their parameters via heavily supervised methods involving huge numbers of high-level semantic labels (e.g., category labels for thousands of examples in each of thousands of categories). Viewed as technical tools for tuning algorithm parameters, such procedures are perfectly acceptable. As models of biological learning, however, they are highly unrealistic, because, among other reasons, infants simply do not receive millions of category labels during development. Discovering deep neural network learning rules that are computationally powerful yet psychologically and neurally accurate is a key challenge driving our work.
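The contrast can be made concrete with a toy PyTorch sketch that puts two objectives on the same encoder: a supervised cross-entropy loss that consumes semantic category labels, and an autoencoding loss whose teaching signal is the input itself. The autoencoder is just one illustrative label-free objective, not a claim about which learning rule is the right one; the architecture and sizes are arbitrary assumptions.

```python
import torch
import torch.nn as nn

# A shared toy encoder; all shapes are illustrative, not a real model.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU())
images = torch.randn(32, 1, 28, 28)   # a fake batch of images
labels = torch.randint(0, 10, (32,))  # semantic labels: the costly supervision

# Heavily supervised route: a classifier head trained on category labels.
classifier = nn.Linear(64, 10)
supervised_loss = nn.functional.cross_entropy(
    classifier(encoder(images)), labels)

# One label-free alternative: an autoencoding objective, where the
# teaching signal is the input image itself rather than a semantic label.
decoder = nn.Linear(64, 28 * 28)
reconstruction = decoder(encoder(images))
self_supervised_loss = nn.functional.mse_loss(
    reconstruction, images.flatten(1))

print(supervised_loss.item(), self_supervised_loss.item())
```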
Visual perception plays a crucial role in enabling such amazing human abilities as describing a scene in sentences, answering novel queries about the scene, acting on the scene given a command, and imagining a given scene modified in some specific way. These tasks are intimately visual but bridge to cognitive phenomena beyond the visual domain. A key direction in our work is building neural network models that can solve these tasks, and using them to understand the neural circuits that bridge sensory areas with other domains.
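As a toy illustration of what such a bridge can look like computationally, the sketch below wires a tiny image encoder and a bag-of-words question encoder into a joint readout over candidate answers, in the spirit of a visual question answering task. Every name, size, and design choice here is an illustrative assumption, not the lab's actual architecture.

```python
import torch
import torch.nn as nn

# A toy model bridging a visual encoder and a linguistic query
# (all components are illustrative assumptions).
class ToyVQA(nn.Module):
    def __init__(self, vocab_size=1000, n_answers=20):
        super().__init__()
        self.vision = nn.Sequential(            # tiny image encoder
            nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.question = nn.EmbeddingBag(vocab_size, 16)  # bag-of-words question encoder
        self.readout = nn.Linear(32, n_answers)          # joint visual+linguistic readout

    def forward(self, image, question_tokens):
        joint = torch.cat([self.vision(image),
                           self.question(question_tokens)], dim=1)
        return self.readout(joint)

model = ToyVQA()
logits = model(torch.randn(4, 3, 64, 64), torch.randint(0, 1000, (4, 7)))
print(logits.shape)  # (4, 20): one score per candidate answer
```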