I’m Will Whitney, a research scientist at DeepMind on the Control team. I got my PhD working with Kyunghyun Cho at NYU. Before that I worked with Josh Tenenbaum and Tejas Kulkarni at MIT for my Master’s. In a past life I started a company and went through Y Combinator. I also created Hydrogen, an interactive coding environment for the Atom text editor, which has been downloaded more than two million times.
My research focuses on the problem of sample-efficient learning, mostly in the domain of robotics. What kinds of algorithms can take advantage of the structure of our 3D physical world to make learning more efficient, and how do they scale with additional data?
These days, this mostly takes the form of learned simulation. It is much easier to encode structure into a dynamics model than into a policy, and simulators unlock capabilities beyond policy training, such as policy evaluation and safety checks.
This page is often out of date, so refer to my Semantic Scholar profile instead.
William F. Whitney*, Tatiana Lopez-Guevara*, Tobias Pfaff, Yulia Rubanova, Thomas Kipf, Kimberly Stachenfeld, and Kelsey R. Allen. Learning 3D Particle-based Simulators from RGB-D Videos. In International Conference on Learning Representations, 2024.
Kelsey R. Allen*, Yulia Rubanova*, Tatiana Lopez-Guevara, William F. Whitney, Alvaro Sanchez-Gonzalez, Peter Battaglia, and Tobias Pfaff. Learning rigid dynamics with face interaction graph networks. Spotlight in International Conference on Learning Representations, 2023.
David Brandfonbrener, William F. Whitney, Rajesh Ranganath, and Joan Bruna. Offline RL Without Off-Policy Evaluation. In Advances in Neural Information Processing Systems, 2021.
William F. Whitney, Michael Bloesch, Jost Tobias Springenberg, Abbas Abdolmaleki, and Martin Riedmiller. Decoupled Exploration and Exploitation Policies for Sample-Efficient Reinforcement Learning. arXiv:2101.09458, 2021.
David Brandfonbrener, William F. Whitney, Rajesh Ranganath, and Joan Bruna. Offline Contextual Bandits with Overparameterized Models. In International Conference on Machine Learning, 2021.
William F. Whitney, Min Jae Song, David Brandfonbrener, Jaan Altosaar, and Kyunghyun Cho. Evaluating representations by the complexity of learning low-loss predictors. arXiv:2009.07368, 2021.
William F. Whitney, Rajat Agarwal, Kyunghyun Cho, and Abhinav Gupta. Dynamics-aware Embeddings. In International Conference on Learning Representations, 2020.
William F. Whitney and Rob Fergus. Disentangling video with independent prediction. In the Learning Disentangled Representations: From Perception to Control workshop at NeurIPS, 2017.
Mikael Henaff, William F. Whitney, and Yann LeCun. Model-Based Planning with Discrete and Continuous Actions. arXiv:1705.07177, 2017.
Vlad Firoiu, William F. Whitney, and Joshua B. Tenenbaum. Beating the world’s best at Super Smash Bros. with deep reinforcement learning. arXiv:1702.06230, 2017.
William F. Whitney. Disentangled Representations in Neural Models. Master’s thesis, Massachusetts Institute of Technology, 2016.
William F. Whitney, Michael Chang, Tejas Kulkarni, and Joshua B. Tenenbaum. Understanding visual concepts with continuation learning. In International Conference on Learning Representations, Workshop Track, 2016.
Tejas D. Kulkarni*, William F. Whitney*, Pushmeet Kohli, and Joshua B. Tenenbaum. Deep convolutional inverse graphics network. Spotlight in Advances in Neural Information Processing Systems, 2015.
* Equal contribution.