I’m Will Whitney, a PhD student working with Kyunghyun Cho at NYU. I’ve done internships at DeepMind with Martin Riedmiller and at FAIR with Abhinav Gupta. Before this I worked with Josh Tenenbaum and Tejas Kulkarni at MIT for my Master’s. In a past life I started a company and went through Y Combinator. I’m also the creator of Hydrogen, an interactive coding environment for the Atom text editor, which has been downloaded more than 2,000,000 times.
My research focuses on the problem of sample-efficient learning, mostly in the domain of reinforcement learning for continuous control.
Previously I worked on generative models with interpretable latent spaces. My interests include the inductive biases of neural networks, learned structure in latent spaces, and robotics. I’ve also been interested in questions of sample complexity and expressivity: what makes a function easy or hard to approximate with a neural network?
William F. Whitney, Michael Bloesch, Jost Tobias Springenberg, Abbas Abdolmaleki, and Martin Riedmiller. Rethinking Exploration for Sample-Efficient Policy Learning. Working paper. arXiv preprint arXiv:2101.09458, 2021.
David Brandfonbrener, William F. Whitney, Rajesh Ranganath, and Joan Bruna. Offline Contextual Bandits with Overparameterized Models. In submission, 2021.
William F. Whitney, Min Jae Song, David Brandfonbrener, Jaan Altosaar, and Kyunghyun Cho. Evaluating Representations by the Complexity of Learning Low-Loss Predictors. In submission, 2021.
William F. Whitney, Rajat Agarwal, Kyunghyun Cho, and Abhinav Gupta. Dynamics-aware Embeddings. In International Conference on Learning Representations, 2020.
William F. Whitney and Abhinav Gupta. Learning Effect-Dependent Embeddings for Temporal Abstraction. In Structure & Priors in Reinforcement Learning at ICLR 2019.
William F. Whitney and Rob Fergus. Understanding the Asymptotic Performance of Model-Based RL Methods. 2018.
William F. Whitney and Rob Fergus. Disentangling Video with Independent Prediction. In Learning Disentangled Representations: from Perception to Control at NeurIPS 2017.
Mikael Henaff, William F. Whitney, and Yann LeCun. Model-Based Planning with Discrete and Continuous Actions. arXiv preprint arXiv:1705.07177, 2017.
Vlad Firoiu, William F. Whitney, and Joshua B. Tenenbaum. Beating the World’s Best at Super Smash Bros. with Deep Reinforcement Learning. arXiv preprint arXiv:1702.06230, 2017.
William F. Whitney. Disentangled Representations in Neural Models. Master’s thesis, Massachusetts Institute of Technology, 2016.
William F. Whitney, Michael Chang, Tejas Kulkarni, and Joshua B. Tenenbaum. Understanding Visual Concepts with Continuation Learning. In International Conference on Learning Representations, Workshop Track, 2016.
Tejas D. Kulkarni*, William F. Whitney*, Pushmeet Kohli, and Joshua B. Tenenbaum. Deep Convolutional Inverse Graphics Network. In Advances in Neural Information Processing Systems, 2015. Spotlight presentation given by William Whitney. (* equal contribution)