John Christian, West Virginia – Space Rendezvous


As you can likely imagine, space rendezvous is a highly complex process.

But John Christian, an aerospace engineer at WVU, is working to improve the computer imaging systems necessary for such a difficult task.

Dr. John Christian is an aerospace engineer with expertise in spacecraft navigation and space systems. He is presently an assistant professor in the Department of Mechanical and Aerospace Engineering in the Benjamin M. Statler College of Engineering and Mineral Resources at West Virginia University, where he directs a research program focused on spacecraft relative navigation, attitude estimation, and spacecraft design. Prior to joining WVU’s faculty, Christian worked as an engineer in the Guidance, Navigation, and Control Autonomous Flight Systems Branch at the NASA Johnson Space Center in Houston, TX. He has experience with navigation system design, flight tests of relative navigation sensor hardware (STORRM experiment on STS-134), parachute drop tests, Inertial Measurement Unit data processing, system requirements definition, and space systems analysis. He holds a BS and MS in aerospace engineering from the Georgia Institute of Technology and a Ph.D. in aerospace engineering from the University of Texas at Austin.

Space Rendezvous


For more than 50 years, aerospace engineers have worked to perfect the delicate dance required for two spacecraft to rendezvous in orbit. Our ability to complete such rendezvous has allowed us to send humans to the Moon, repair the Hubble Space Telescope, and assemble the International Space Station.


But spacecraft rendezvous is not easy, and this complex technological feat requires mastery of the laws of nature and the practice of engineering. One particularly challenging aspect is the navigation of one spacecraft relative to the other – a task that many modern spacecraft achieve using pictures taken by onboard cameras.

Most humans use their eyes to understand and navigate through their surroundings. When we teach a computer (like the one on a spacecraft) to do these things using pictures from a camera, we call the result computer vision.

A variety of factors affect what can be discerned in any particular image. When something is very far away, it may only appear to be one or two pixels across. Conversely, as an object becomes closer, it spans many pixels and we can begin to pick out specific features on the object’s surface.
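
To make this concrete, a simple pinhole camera model predicts how many pixels an object spans at a given range. In the sketch below, the spacecraft size, lens focal length, and pixel pitch are illustrative assumptions, not values from this research.

```python
# Sketch: how many pixels an object spans under a pinhole camera model.
# All numbers here are illustrative assumptions, not research parameters.

def apparent_size_px(diameter_m, range_m, focal_length_mm, pixel_pitch_um):
    """Approximate width, in pixels, of an object seen by a pinhole camera."""
    focal_length_px = (focal_length_mm * 1e-3) / (pixel_pitch_um * 1e-6)
    # Small-angle approximation: width (px) ~ focal length (px) * angular size (rad).
    return focal_length_px * diameter_m / range_m

# A hypothetical 5 m spacecraft imaged by a 50 mm lens with 10-micron pixels.
for range_km in (100.0, 10.0, 2.0, 0.5):
    px = apparent_size_px(5.0, range_km * 1e3, 50.0, 10.0)
    print(f"range {range_km:6.1f} km -> about {px:5.1f} px across")
```

With these assumed numbers, the spacecraft is a sub-pixel dot at 100 km, spans roughly a dozen pixels near 2 km, and only becomes well resolved at close range.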

But what happens in between these two extremes? What do we do when something appears to be 10 to 20 pixels across?

Our research team is developing new ways to squeeze the maximum amount of information out of one or more images of a partially resolved object. We do this by adapting tools originally developed for multi-scale image processing – particularly the concept of Scale Space Theory, which is a framework for describing a particular feature at varying spatial resolutions.
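
As a rough sketch of that idea (not the team's actual implementation), a Gaussian scale space can be built by smoothing an image with Gaussians of increasing standard deviation, so that each level describes the scene at a coarser spatial resolution. The parameter choices below are arbitrary and for illustration only.

```python
# Sketch: a Gaussian scale space, the basic construct of scale space
# theory. Illustrative only; all parameter choices here are assumptions.

import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_scale_space(image, n_levels=5, sigma0=1.0, k=2.0):
    """Return `image` smoothed at geometrically increasing scales.

    Level i is the image convolved with a Gaussian of standard deviation
    sigma0 * k**i, i.e., the same scene at a coarser spatial resolution.
    """
    return [gaussian_filter(image.astype(float), sigma=sigma0 * k**i)
            for i in range(n_levels)]

# Toy example: a synthetic blob roughly 15 pixels across, standing in
# for a partially resolved object.
y, x = np.mgrid[0:64, 0:64]
image = np.exp(-((x - 32.0)**2 + (y - 32.0)**2) / (2.0 * 7.0**2))

for i, level in enumerate(gaussian_scale_space(image)):
    print(f"level {i}: sigma = {2.0**i:4.1f}, peak intensity = {level.max():.3f}")
```

Features that persist across several levels of such a stack tend to be the stable ones, which is part of what makes this framework attractive for objects spanning only 10 to 20 pixels.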

Although still in its early stages, the research shows great promise. We hope that the new techniques being developed could allow us to more efficiently perform the delicate dance of on-orbit spacecraft rendezvous.
