From New York University: Do we need to take AI to kindergarten?
Cristina Savin, associate professor in neural science and data science, says AI needs to start learning more like humans.
Cristina Savin is an Associate Professor of Neural Science and Data Science at NYU and the Director of Graduate Studies (PhD) at the Center for Data Science. After obtaining a PhD in computational neuroscience from Goethe University in Frankfurt and completing postdoctoral research at Cambridge University, ENS Paris, and IST Austria, Dr. Savin joined NYU in 2017. Her lab studies neural principles of adaptive computation by combining machine learning, computational neuroscience theory, and statistical analyses of experimental data from neuroscience collaborators.
Taking AI to Kindergarten
What if, out of the blue, someone asked you to juggle three balls while riding a unicycle? Unless you already have extensive experience with juggling and unicycle riding, you would not set out to achieve that goal from the get-go. Instead, you'd work your way towards it. If you already know how to ride a bicycle, you'd tap into that skill to figure out unicycle riding. Separately, you'd start training the basics of juggling: first try to keep one ball in the air, then add a second, and so on. Only after mastering these skills individually would you attempt to string them together.
This gradual path to mastery seems intuitive to us humans, but it is not, in general, how artificial intelligence systems learn to do complicated things. Instead, AI systems simply get huge amounts of experience trying to achieve the end goal. This approach works most of the time, but it is slow and very inefficient. In our work we argue that AI agents should learn more like humans do. AI learning becomes a matter of breaking complex problems into more manageable subproblems (like riding a unicycle and juggling), reusing knowledge of how to solve related tasks (using bicycle-riding skills to figure out how to ride a unicycle), and gradually increasing task difficulty (adding an extra ball while juggling). We call this kindergarten training to reflect the fact that these subproblems teach the AI system basic skills that can then be used for further learning. Learning complex tasks becomes much faster with this approach. Moreover, the AI agents come up with better solutions. Finally, modeling the entire learning process teaches us about the nature of learning and explains the cognitive strategies that animals adopt when trained in neuroscience experiments.
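To make the idea concrete, here is a deliberately simplified sketch of curriculum-style training, not the models or tasks from the paper. A toy agent masters one "skill" at a time, and the parameters learned on each easier subtask are reused as the starting point for the next, harder one, until the full task (both skills at once) is reached. The reward function, the hill-climbing learner, and the curriculum itself are all illustrative assumptions.

```python
import random

random.seed(0)

# Toy "agent": a vector of skill parameters, one per subskill.
# A task is a target skill profile the agent should match.

def reward(target, params):
    """Negative squared distance to the task's target profile (higher is better)."""
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def train(params, target, steps=60, step_size=0.5):
    """Simple hill climbing: propose a random perturbation of the
    parameters and keep it only if it improves the reward on the task."""
    params = list(params)
    for _ in range(steps):
        candidate = [p + random.uniform(-step_size, step_size) for p in params]
        if reward(target, candidate) > reward(target, params):
            params = candidate
    return params

# The full task requires both subskills at full strength.
FULL_TASK = [1.0, 1.0]

# "Kindergarten" curriculum: first master skill 1 at increasing difficulty,
# then add skill 2 on top of it, then attempt the full task.
CURRICULUM = [[0.5, 0.0], [1.0, 0.0], [1.0, 0.5], FULL_TASK]

# Curriculum training: each stage starts from the parameters
# learned on the previous, easier stage.
params = [0.0, 0.0]
for task in CURRICULUM:
    params = train(params, task)

print(reward(FULL_TASK, params))  # close to 0 means the full task is mastered
```

The key design choice the sketch illustrates is the warm start: knowledge is carried forward between stages rather than learned from scratch on the end goal alone.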
Read More:
[Nature] – Compositional pretraining improves computational efficiency and matches animal behaviour on complex tasks

