Computational Models for Auditory Scene Analysis (2015 - 2019)
Auditory Attention via Gated Recurrent Neural Networks (2018 - 2019)
Theoretical models of how top-down attention could facilitate auditory scene analysis.
Real-Time Speech Enhancement on Android (May 2018 - August 2018)
Working with the Machine Perception team at Google, we experimented with running deep learning models in real time on Android phones using TFLite, a new framework.
Generalization Challenges for Neural Network Architectures (2017 - 2018)
In this work we studied the generalization performance of different neural network architectures on the task of audio source separation.
Discrete Music Generation with GANs (May 2017 - August 2017)
Working with the Magenta team at Google Brain, we built Generative Adversarial Networks (GANs) to improve the quality of our RNN-based generative models of music.
Unsupervised Learning of Auditory Stimuli (2017)
Working with Professor Olshausen, we studied how the principles of efficient coding could be used to explain the spike-based cochlear response of mammals.
Voice Conversion (2015 - 2016)
Working with Professor Bruna, we applied convolutional neural networks and generative adversarial networks to transform the voice in an audio sample to sound like a different speaker.
Modeling the Syrinx of Birds for Call Synthesis (Dec 2015 - Feb 2016)
Working with Professor Theunissen, we applied machine learning algorithms to syrinx models to better understand how zebra finches produce their calls.
Learning Transformational Invariants (2014 - 2015)
Working with Professor Olshausen, we studied the dynamics of sparse representations of images using recurrent neural networks.
Information-based learning by agents in unbounded state spaces (2014)
Working with Professor Sommer, we built non-parametric Bayesian models to study how animals might explore unknown and unbounded environments.
Paper (NeurIPS ’14)