I’m a machine learning researcher, particularly interested in social multi-agent reinforcement learning, open-endedness, and AI safety. I’m currently a fellow at OpenAI, where my work focuses on open-endedness and reinforcement learning. I previously participated in the OpenAI Scholars program, and before that I worked as a machine learning engineer at Coinbase and as an algorithms research scientist at Fitbit. I have a B.S. in mathematics and physics from MIT.


  • November 2020: Learning Social Learning won a best paper award at the 2020 NeurIPS Cooperative AI Workshop!
  • September 2020: Fellowship on the OpenAI open-endedness team, working with Joel Lehman and Ken Stanley
  • February 2020: Scholar at OpenAI, studying social multi-agent reinforcement learning with Natasha Jaques


Multi-agent Social Reinforcement Learning Improves Generalization
Learning Social Learning (arXiv)
Kamal Ndousse, Douglas Eck, Sergey Levine, Natasha Jaques

We find that an auxiliary unsupervised prediction task helps model-free reinforcement learning (RL) agents learn social policies. These social policies let agents learn from experts present in a shared environment, and social learners outperform solitary learners on the same hard-exploration, sparse-reward task. The social policies also enable agents to perform well on zero-shot transfer tasks when experts are present.
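To make the combined objective concrete, here is a minimal sketch, not the paper's implementation: the agent's usual policy-gradient loss is augmented with an auxiliary next-observation prediction loss. The names `aux_weight`, `rl_loss`, and `aux_prediction_loss` are hypothetical, and the batch data is random stand-in data.

```python
import numpy as np

rng = np.random.default_rng(0)

def rl_loss(advantages, log_probs):
    # Standard policy-gradient surrogate: -E[advantage * log pi(a|s)].
    return -np.mean(advantages * log_probs)

def aux_prediction_loss(predicted_next_obs, actual_next_obs):
    # Unsupervised auxiliary task: predict the next observation.
    # Here a simple mean-squared error stands in for the real loss.
    return np.mean((predicted_next_obs - actual_next_obs) ** 2)

# Dummy rollout batch (32 transitions, 8-dim observations).
advantages = rng.normal(size=32)
log_probs = rng.normal(size=32)
predicted = rng.normal(size=(32, 8))
actual = rng.normal(size=(32, 8))

aux_weight = 0.1  # hypothetical coefficient balancing the two terms
total_loss = rl_loss(advantages, log_probs) \
    + aux_weight * aux_prediction_loss(predicted, actual)
print(total_loss)
```

The point of the sketch is only the shape of the objective: a single scalar loss whose gradient trains both the policy and the auxiliary predictor.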

Marlgrid (github)

Marlgrid is an open-source gridworld environment built for multi-agent reinforcement learning (MARL). Its design is based on Minigrid.
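Assuming a Gym-style multi-agent interface where reset and step take and return one entry per agent (the actual Marlgrid API may differ), an interaction loop might look like the following sketch. A stub class stands in for a real Marlgrid environment so the example is self-contained.

```python
import random

class StubMultiAgentEnv:
    """Stand-in for a Marlgrid-style environment: a Gym-like API where
    reset/step take and return one entry per agent (an assumed interface)."""
    def __init__(self, n_agents, episode_len=5):
        self.n_agents = n_agents
        self.episode_len = episode_len
        self.t = 0

    def reset(self):
        self.t = 0
        return [0 for _ in range(self.n_agents)]  # one observation per agent

    def step(self, actions):
        assert len(actions) == self.n_agents
        self.t += 1
        obs = [self.t for _ in range(self.n_agents)]
        rewards = [random.random() for _ in range(self.n_agents)]
        done = self.t >= self.episode_len
        return obs, rewards, done, {}

env = StubMultiAgentEnv(n_agents=3)
obs = env.reset()
done = False
total_rewards = [0.0] * env.n_agents
while not done:
    actions = [random.randrange(4) for _ in obs]  # random policy per agent
    obs, rewards, done, info = env.step(actions)
    total_rewards = [t + r for t, r in zip(total_rewards, rewards)]
print(total_rewards)
```

The lists-of-per-agent-values convention is the main difference from single-agent Gym loops; everything else follows the familiar reset/step pattern.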