Research
The mission of the EMERGE lab is to make it easy and efficient to develop capable, safe, and intelligent multi‑agent systems through learning, simulation, and data. We emphasise use‑inspired basic research, turning serious efforts to solve problems in autonomy, systems, and transportation into new algorithms, insights, and discoveries. If there's a unifying thread to our research, it's that the fastest way to understand something is to start building.
In practice, our toolkit leans heavily on scaled reinforcement learning, fast and diverse simulation, human data, and solid engineering. A smattering of current interests:
- Scaling up search in multi-agent RL
- Getting meta-reinforcement learning working
- Designing accurate reactive models of human behavior
- Building an open-source planning stack for a self-driving car
News
- We’ve built and open‑sourced highly reliable RL‑based driving agents! Read the paper.
- Daphne Cornelisse is off to Waymo for the summer and Kevin Joseph to Applied Intuition! Congrats folks!
- New scalable benchmarks support the hypothesis that self‑play policy‑gradient methods excel at two‑player zero‑sum games. Paper · Play our agents.
- In collaboration with Apple, we present the first simulated driving agent that achieves excellent zero‑shot performance on every planning benchmark while using zero human data. Details.
- We just open‑sourced GPUDrive, a simulator that runs at over a million FPS!
- Our paper on KL‑regularisation for human‑compatible driving agents is out!