Matt Taylor, Washington State University – Knowledge Transfer

Computers are very useful in the classroom, but in the near future, they might be conducting the class!

Matt Taylor, assistant professor in the School of Electrical Engineering and Computer Science at Washington State University, is teaching computers how to teach.

Matthew E. Taylor is Washington State University’s Allred Distinguished Professor in Artificial Intelligence and an assistant professor in the WSU School of Electrical Engineering and Computer Science. His research interests include intelligent agents, multi-agent systems, reinforcement learning, and transfer learning, and he is a recipient of a prestigious National Science Foundation CAREER award. Taylor holds a PhD from the Department of Computer Sciences at the University of Texas at Austin and an AB in computer science and physics from Amherst College, where he graduated magna cum laude. He completed a two-year postdoctoral research position at the University of Southern California and was an assistant professor at Lafayette College before joining the WSU faculty in 2013.

Matt Taylor – Knowledge Transfer: Computers teach each other to play Pac-Man

 

I work on artificial intelligence, and my goal is to help robots become common and useful in our homes and businesses.

The people who build and program robots can’t anticipate every possible situation — robots must know how to adapt and handle new tasks. We also want them to be able to transfer their learned knowledge to newer models, or even to humans.

Our research has created a student-teacher framework to allow very different types of robots to teach one another.

We recently taught virtual robots, or agents, to play video games. After an agent learned to play a game, it became a teacher, providing advice to a new student agent. In fact, the student agent learned to play the game of Pac-Man faster than if it had to learn just on its own.

The goal of the teacher is to give advice when it would make the biggest difference. If you’ve ever taught in a classroom, you’ll understand that the trick is knowing when to advise and when to hold back. It’s the same with the robots: too little advice and there’s no improvement in learning. Too much, and the robot just mimics the teacher.
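To make the idea concrete, here is a minimal sketch, in Python, of one way such an advising rule could work. It is only an illustration, not the code from our experiments: it assumes the teacher is a Q-learning agent with a fixed advice budget, measures how much the action choice matters in a state by the gap between its best and worst action values, and speaks up only when that gap crosses a threshold. The names and numbers are placeholders.

# Illustrative sketch only: a teacher that advises on a limited budget.

class Teacher:
    def __init__(self, q_table, budget=100, threshold=0.5):
        self.q = q_table            # learned action values: q_table[state][action]
        self.budget = budget        # how many pieces of advice the teacher may give
        self.threshold = threshold  # minimum "importance" that triggers advice

    def importance(self, state):
        # How much the choice of action matters in this state.
        values = list(self.q[state].values())
        return max(values) - min(values)

    def advise(self, state):
        # Return the teacher's preferred action, or None to stay silent.
        if self.budget > 0 and self.importance(state) >= self.threshold:
            self.budget -= 1
            return max(self.q[state], key=self.q[state].get)
        return None

# Example: the student asks for advice in each state it visits and
# follows it when given; otherwise it keeps learning on its own.
teacher = Teacher({"corridor":  {"left": 0.1, "right": 0.9},
                   "open_area": {"left": 0.5, "right": 0.5}})
print(teacher.advise("corridor"))   # "right" -- the choice matters here
print(teacher.advise("open_area"))  # None -- the teacher holds back

Spending the budget only on high-importance states is what keeps the balance described above: the student gets help where a mistake is costly, but is still left to learn, and eventually improve on the teacher, everywhere else.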

Our student agents were able to benefit from their coaching — learning better with advice than without it. Equally important was that the students could eventually outperform the teacher. After all, we don’t want students to be limited by imperfect teachers — robot students should be able to use and then improve on a teacher’s knowledge.

Our future plans include applying our methods to physical robots. We’re also expanding our framework to allow for three types of teaching: robots teaching robots, humans teaching robots, and robots teaching humans. Our goal is to let humans and robots work better together, learn from each other, and solve real-world tasks.

 
