Dr. Kimberly Stachenfeld is a research scientist at Google DeepMind. After completing her PhD on reinforcement learning in the hippocampus with Prof. Matthew Botvinick at Princeton, she joined DeepMind, where she continues her academic work alongside industry projects.

There are many bright young people, but few are as reflective and eloquent as Kimberly. She has been working with outstanding scientists in stimulating environments throughout her training, and she can articulate the lessons she gleaned with admirable clarity.

In this episode, we discuss:

- why reinforcement learning is so successful
- what graduate students can learn from school teachers
- the differences between the subcultures of neuroscience and of engineering & math
- how machine learning can be used to advance neuroscience, and vice versa



… and there is a fascinating segment on the process of writing this breathtaking paper

Since Kimberly walks the line between academia and industry so gracefully, I asked her to answer some of our traditional closing questions in relation to both.


1) Which skills do you wish you had picked up earlier in your career?
Writing clearly and reading efficiently.

2) What is the most successful theory in neuroscience today?
Efficient coding. 

What is the most successful technique in deep learning today?

Generative adversarial networks, variational autoencoders, and meta-reinforcement learning.

3) What is a recent piece of data you are most excited about?

Orbitofrontal cortex (OFC) work from Erin Rich and John Wallis.

Christian Doeller’s integration of thinking about the function of grid cells.

Source image: Mike Dodd

Happy listening!