On Rensselaer Polytechnic Institute Week: There are always obstacles in our way.
Brett Fajen, associate dean for academic affairs and professor, looks at how we navigate around them by tracking our eye movements.
Brett Fajen conducts research on perception and action. His main interests are the visual control of locomotion and perceptual-motor learning and adaptation. His research on these topics contributes to the development of the ecological and dynamical systems approaches to perception and action.
Vision
Getting from Point A to Point B in the real world is not simply a matter of moving along a straight path. Usually, the environments we occupy contain obstacles that must be circumvented or dodged. For humans, vision plays an essential role in allowing us to follow safe and efficient routes.
Scientists in fields like vision science, psychology, and neuroscience study how the ability to see and the ability to move are related. Such research has led to the formulation of control strategies that capture how actions are coupled to visual information, enabling us to steer cars, intercept moving targets, and catch fly balls.
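To give a flavor of what such a control strategy can look like, here is a minimal sketch of one classic example from this literature: intercepting a moving target by turning so that the target's bearing direction stops drifting. This is an illustration of the general idea only; the function, gain, and time step are assumptions for the sketch, not values from this research.

```python
import numpy as np

def constant_bearing_step(pos, heading, speed, target_pos, prev_bearing, gain=2.0, dt=0.05):
    """Advance the pursuer one time step, turning to null drift in the target's bearing."""
    to_target = target_pos - pos
    bearing = np.arctan2(to_target[1], to_target[0]) - heading   # target direction relative to heading
    bearing = np.arctan2(np.sin(bearing), np.cos(bearing))       # wrap to [-pi, pi]
    bearing_rate = (bearing - prev_bearing) / dt                 # how fast the bearing is drifting
    heading += gain * bearing_rate * dt                          # turn so that drift is cancelled
    pos = pos + speed * dt * np.array([np.cos(heading), np.sin(heading)])
    return pos, heading, bearing

# Example: chase a target that drifts steadily to the right.
pos, heading = np.array([0.0, 0.0]), 0.0
target = np.array([5.0, 5.0])
bearing = np.arctan2(target[1] - pos[1], target[0] - pos[0]) - heading   # initial bearing
for _ in range(200):
    target = target + np.array([1.0, 0.0]) * 0.05                        # target velocity * dt
    pos, heading, bearing = constant_bearing_step(pos, heading, 1.5, target, bearing)
```

The appeal of strategies like this is that they require no internal map or prediction of the target's future path; the actor simply moves so as to keep a visual quantity in a particular state.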
Our research builds upon this work by exploring how people blend control strategies together to satisfy multiple goals over different time horizons. For example, a glade skier may need to alter how they approach a gap between two trees in anticipation of having to avoid a boulder a bit farther ahead.
One task that is well suited to studying this behavior is first-person-view drone racing in a cluttered environment. Drone piloting is ideal because it requires people to negotiate goals and obstacles that lie at different time horizons.
In my lab, we use a custom-designed virtual-reality simulator of the task of flying a drone through a dense forest. The simulator is equipped with an eye tracker, which allows us to study how skilled and novice drone pilots coordinate steering and eye movements. We use the data to build mathematical models of high-speed steering, obstacle avoidance, and path following.
Such models could inspire new solutions for autonomous navigation in aerial robots that rely on input from a camera to steer through densely cluttered environments.
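To make the idea of such models concrete, here is a minimal sketch, in the spirit of published steering-dynamics models from this research area, in which the goal direction acts as an attractor on heading and each obstacle acts as a repeller, so that multiple goals are blended into a single turning command. The functional forms and constants below are illustrative assumptions, not the lab's fitted model.

```python
import numpy as np

def heading_acceleration(phi, phi_dot, goal, obstacles,
                         b=3.0, k_g=8.0, c1=0.4, c2=0.4,
                         k_o=200.0, c3=6.0, c4=0.8):
    """Angular acceleration of the heading phi (radians).

    goal: (psi_g, d_g) -- direction and distance of the goal
    obstacles: iterable of (psi_o, d_o) pairs, one per obstacle
    """
    psi_g, d_g = goal
    # Damping resists rapid turning; the goal term pulls heading toward the
    # goal direction, more strongly as the goal gets closer.
    acc = -b * phi_dot - k_g * (phi - psi_g) * (np.exp(-c1 * d_g) + c2)
    # Each obstacle pushes heading away from its direction; its influence
    # falls off with angular offset and with distance, so near, head-on
    # obstacles dominate the blend.
    for psi_o, d_o in obstacles:
        acc += k_o * (phi - psi_o) * np.exp(-c3 * abs(phi - psi_o)) * np.exp(-c4 * d_o)
    return acc
```

Integrating this acceleration over time yields a heading trajectory that swings wide of nearby obstacles while still converging on the goal. In practice, the terms and parameters of any such model would be fit to behavioral data like the steering and eye-tracking records described above.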
Comments
One response to “Brett Fajen, Rensselaer Polytechnic Institute – Vision”
I am fascinated, as an artist who teaches drawing and painting, with the jumps between the rapid saccades used to scan our field of vision, and how incredibly fast we interpret cues about depth and distance, tilt and rotation, and so much else to navigate. This informs both representations as well as pure abstractions. We have a UX lab with good monitors on campus that I’d love to hook our art majors up to, to help them grasp this concept and better apply it to their hand-drawn and painted work. Playing video games, especially VR, seems to naturally engage them in such awareness, but they stumble or omit it when working in dry old paper or canvas. Any suggestions?