NeuroAI for AI safety
11-14, 10:58–11:06 (Asia/Bangkok), Breakout 3

Powerful unaligned AIs pose risks to humans. This talk will explore how neuroscience-inspired AI, or NeuroAI, can lead to a deeper understanding of the human brain and help us build safer AI. I'll connect these ideas to d/acc, arguing that NeuroAI can play an enabling role in creating technologies that are inherently defense-favoring and promote human well-being.

I'm a NeuroAI researcher at the Amaranth Foundation, which funds ambitious research in neuroscience. I focus on the intersection of neuroscience and AI safety. I did my B.Sc. in Math and Physics and my PhD in neuroscience at McGill. Previously, I was a software engineer at Google, a research scientist in brain-computer interfaces at Meta, and a machine learning researcher at Mila.