I’m Will Whitney, a research scientist at DeepMind. I got my PhD working with Kyunghyun Cho at NYU. Before that I worked with Josh Tenenbaum and Tejas Kulkarni at MIT for my Master’s. In a past life I started a company and went through Y Combinator. I also created Hydrogen, an interactive coding environment for the Atom text editor, which has been downloaded more than 2,000,000 times.
Currently my research is about learning models of the physical world. I want to build generative models good enough to live inside of, whether for play, for work, or for training agents. This touches diffusion and video generation, 3D physics modeling, NeRF, and much more.
I got into working on world models through robotics. Every problem in robotics that we can simulate sufficiently well is now relatively easy to solve. Every problem that we can’t simulate remains extremely hard, requiring thousands of hours of human demonstrations to get results. To me, that implies that scaling simulation, and making it cheap to author huge, near-perfect simulators of new scenes, is plausibly the best way to make progress in robotics. This is especially true given how fast video generative models are improving.
This page is often outdated, so refer to my Semantic Scholar profile instead.
William F. Whitney*, Tatiana Lopez-Guevara*, Tobias Pfaff, Yulia Rubanova, Thomas Kipf, Kimberly Stachenfeld, and Kelsey R. Allen. Learning 3D Particle-based Simulators from RGB-D Videos. In International Conference on Learning Representations, 2024.
Kelsey R. Allen*, Yulia Rubanova*, Tatiana Lopez-Guevara, William F. Whitney, Alvaro Sanchez-Gonzalez, Peter Battaglia, and Tobias Pfaff. Learning rigid dynamics with face interaction graph networks. Spotlight in International Conference on Learning Representations, 2023.
David Brandfonbrener, William F. Whitney, Rajesh Ranganath, and Joan Bruna. Offline RL Without Off-Policy Evaluation. In Neural Information Processing Systems, 2021.
William F. Whitney, Michael Bloesch, Jost Tobias Springenberg, Abbas Abdolmaleki, and Martin Riedmiller. Decoupled Exploration and Exploitation Policies for Sample-Efficient Reinforcement Learning. arXiv:2101.09458, 2021.
David Brandfonbrener, William F. Whitney, Rajesh Ranganath, and Joan Bruna. Offline Contextual Bandits with Overparameterized Models. In International Conference on Machine Learning, 2021.
William F. Whitney, Min Jae Song, David Brandfonbrener, Jaan Altosaar, and Kyunghyun Cho. Evaluating representations by the complexity of learning low-loss predictors. arXiv:2009.07368, 2021.
William F. Whitney, Rajat Agarwal, Kyunghyun Cho, and Abhinav Gupta. Dynamics-aware Embeddings. In International Conference on Learning Representations, 2020.
William F. Whitney and Rob Fergus. Disentangling video with independent prediction. In Learning Disentangled Representations: from Perception to Control at NeurIPS’17. 2017.
Mikael Henaff, William F. Whitney, and Yann LeCun. Model-Based Planning with Discrete and Continuous Actions. arXiv:1705.07177, 2017.
Vlad Firoiu, William F. Whitney, and Joshua B. Tenenbaum. Beating the world’s best at Super Smash Bros. with deep reinforcement learning. arXiv:1702.06230, 2017.
William F. Whitney. Disentangled Representations in Neural Models. Master’s thesis, Massachusetts Institute of Technology, 2016.
William F. Whitney, Michael Chang, Tejas Kulkarni, and Joshua B. Tenenbaum. Understanding visual concepts with continuation learning. In International Conference on Learning Representations, Workshop Track, 2016.
Tejas D. Kulkarni*, William F. Whitney*, Pushmeet Kohli, and Joshua B. Tenenbaum. Deep convolutional inverse graphics network. Spotlight in Advances in Neural Information Processing Systems, 2015.
* Equal contribution.