Raven (STP-H5 Raven) - 04.26.17



Science Objectives for Everyone
Future robotic spacecraft operating thousands of miles from Earth need advanced autopilot systems to help them safely navigate and rendezvous with other objects. The Raven investigation studies a real-time spacecraft navigation system that provides the eyes and intelligence to see a target and steer toward it safely. The investigation enables future exploration missions near Earth and beyond, including satellite servicing and repair, asteroid exploration and redirect missions, and the Orion program.
Science Results for Everyone
Information Pending

The following content was provided by Ross M. Henry, Ph.D., and is maintained in a database by the ISS Program Science Office.
Experiment Details

OpNom:

Principal Investigator(s)
Ross M. Henry, Ph.D., Goddard Space Flight Center, Greenbelt, MD, United States

Co-Investigator(s)/Collaborator(s)
Information Pending

Developer(s)
Goddard Space Flight Center, Greenbelt, MD, United States

Sponsoring Space Agency
National Aeronautics and Space Administration (NASA)

Sponsoring Organization
Technology Demonstration Office (TDO)

Research Benefits
Space Exploration

ISS Expedition Duration
September 2016 - September 2017; September 2017 - February 2018

Expeditions Assigned
49/50, 51/52, 53/54

Previous Missions
Raven is developed, integrated, and operated by the Satellite Servicing Capabilities Office (SSCO), the same team that built and conducts the multi-phased Robotic Refueling Mission (RRM) experiment on the ISS. A previous, single-sensor version of Raven technology flew as the Relative Navigation Sensor (RNS) Payload on STS-125 during the Hubble Space Telescope Servicing Mission 4. The Raven visible camera is a repurposed flight unit from the STS-125 demonstration. Raven also reuses the flash lidar flown as part of the Sensor Test for RelNav Risk Mitigation (STORRM) demonstration on STS-134. Raven uses a version of the infrared camera manifested on STS-128, STS-131, and STS-135 as part of the Neptec TriDAR demonstrations. Both RRM and Raven are part of SSCO’s ongoing campaign to develop and test advanced technologies, tools and techniques for the Restore-L servicing mission.


Experiment Description

Research Overview

  • Many future exploration missions must robotically or autonomously dock with, capture, or land on exploration sites, whether those sites are other spacecraft, orbital outposts, asteroids, comets, or planetary moons. With such a breadth of missions ahead, NASA desires a single solution that meets each mission's relative navigation needs. Raven seeks to demonstrate that a single suite of sensors, avionics, and algorithms can indeed navigate a spacecraft to cooperative objects (those equipped with visual aids used for navigation purposes), as well as to legacy satellites that were not designed to be serviced, or to natural planetary bodies.
  • Raven uses a complex and compact system to image and track the many visiting vehicles that journey to the space station each year. It contains three separate sensors that span multiple wavelengths, coupled with high-performance, reprogrammable avionics that process imagery. Raven’s vision processing and navigation algorithms convert the imagery collected by the sensors into an accurate relative navigation solution between Raven and the vehicle. Raven also uses a two-axis gimbal to point its sensors at the vehicle to increase the time it has to follow the vehicle into docking or berthing.
  • Successful Raven operations allow NASA to demonstrate that a similar system can be used to pilot a satellite-servicing vehicle to a client spacecraft, verifying a key technology for the Restore-L servicing mission. Additionally, Raven demonstrates that NASA can move to a common specification on autonomous rendezvous and docking technologies, which means that a single set of hardware could meet the needs of multiple missions. This approach saves the government money over time, as it provides a standard specification that mission managers can order from, and that vendors can build to.
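The geometry behind converting imagery into a relative navigation solution, as described above, can be illustrated with a toy pinhole-camera calculation. This is only a sketch of the underlying principle, not Raven's actual flight algorithms (which are not published here); the function name and all numeric values are illustrative assumptions.

```python
import math

def relative_range_and_bearing(focal_px, target_width_m, apparent_width_px,
                               center_offset_px):
    """Estimate range and bearing to a target of known physical size
    using a simple pinhole-camera model.

    focal_px          -- camera focal length expressed in pixels
    target_width_m    -- known physical width of the target (meters)
    apparent_width_px -- measured width of the target in the image (pixels)
    center_offset_px  -- horizontal offset of the target centroid from the
                         image center (pixels)
    """
    # Similar triangles: apparent size shrinks linearly with range.
    range_m = focal_px * target_width_m / apparent_width_px
    # Bearing of the target relative to the camera boresight.
    bearing_rad = math.atan2(center_offset_px, focal_px)
    return range_m, bearing_rad

# Illustrative values: a 4 m wide vehicle imaged at 100 px across by a
# camera with a 2000 px focal length, offset 50 px from the boresight.
rng, brg = relative_range_and_bearing(2000.0, 4.0, 100.0, 50.0)
print(rng, math.degrees(brg))  # 80.0 m range, ~1.43 deg bearing
```

A real system fuses many such cues, across visible, infrared, and lidar sensors, into a full six-degree-of-freedom relative pose; this sketch shows only the single-camera range/bearing case.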

Description

The next generation of missions undertaken by the National Aeronautics and Space Administration (NASA) requires the ability to autonomously rendezvous and mate space vehicles in a variety of Earth orbits and interplanetary locations. These future missions focus on exploring beyond Earth orbit with humans, building large-scale science observatories, returning samples from ancient asteroids or comets, and rescuing aging and ailing satellites. As such, NASA continues to invest in both commercial- and government-owned Autonomous Rendezvous and Capture (AR&C) technologies that enable rendezvous and capture across a wide spectrum of applications: from capturing man-made objects to landing on primitive bodies; from navigating to cooperative targets to tracking natural features; from operating under ground control supervision to reacting autonomously to a dynamically changing environment.
 
Raven is NASA’s newest flight experiment to develop and validate these new AR&C technologies. Specifically, Raven demonstrates the necessary components—next-generation sensors; vision processing algorithms; and high-speed, space-rated avionics—of an autonomous navigation system. Over its two-year mission on the ISS, Raven estimates in real time the relative navigation state of the various visiting vehicles (VVs) that come and go from the station each year: the Progress and Soyuz vehicles from Russia, the H-II Transfer Vehicle (HTV) from Japan, and the Cygnus and Dragon spacecraft from the United States.
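Estimating a relative navigation state in real time, as Raven does for visiting vehicles, typically means fusing a stream of noisy sensor measurements into a smoothed estimate of position and velocity. Raven's flight software is far more sophisticated, but the core idea can be sketched with a one-dimensional constant-velocity Kalman filter; the state layout, noise values, and measurement sequence below are all illustrative assumptions, not Raven's design.

```python
def kalman_track(measurements, dt=1.0, meas_var=4.0, accel_var=0.01):
    """Track range and range-rate from noisy range measurements with a
    constant-velocity Kalman filter (all parameters illustrative)."""
    # State x = [range, range_rate]; P is its 2x2 covariance.
    x = [measurements[0], 0.0]
    P = [[meas_var, 0.0], [0.0, 1.0]]
    q = accel_var
    estimates = []
    for z in measurements:
        # Predict: propagate with a constant-velocity motion model,
        # inflating covariance with process noise Q for unmodeled accel.
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1]
              + q * dt ** 4 / 4,
              P[0][1] + dt * P[1][1] + q * dt ** 3 / 2],
             [P[1][0] + dt * P[1][1] + q * dt ** 3 / 2,
              P[1][1] + q * dt * dt]]
        # Update: fold in the range measurement z (observation H = [1, 0]).
        S = P[0][0] + meas_var          # innovation covariance
        K = [P[0][0] / S, P[1][0] / S]  # Kalman gain
        innov = z - x[0]
        x = [x[0] + K[0] * innov, x[1] + K[1] * innov]
        P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
             [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
        estimates.append(tuple(x))
    return estimates

# A vehicle closing at 2 m/s, sampled once per second (noise-free ranges
# for clarity): the filter gradually learns the negative range-rate.
track = kalman_track([100.0, 98.0, 96.0, 94.0, 92.0])
```

An operational tracker would carry a full six-degree-of-freedom state and fuse measurements from multiple sensors at once; the scalar case above just shows the predict/update cycle that produces a real-time state estimate from imagery-derived measurements.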


Applications

Space Applications
Future exploration spacecraft may dock with, capture, or land on many different objects, from other spacecraft to asteroids, moons and planets. An autonomous navigation system that could handle any of these tasks would be more versatile than systems designed for one type of use. The Raven investigation demonstrates that a single set of sensors, controls, and computer software can be used to rendezvous with both human-built satellites and celestial objects.

Earth Applications
Raven uses a tracking system to image the many crew and cargo ships that visit the International Space Station (ISS) each year. Its vision processing hardware and software convert images into a relative navigation solution between Raven and the object. This software could also potentially inform and educate teams working on robotic navigation technology on Earth, including autonomous underwater vehicles and unmanned aerial vehicles.


Operations

Operational Requirements and Protocols

  • Operations of the experiment are conducted from the ground by the Principal Investigator via TReK workstations.
  • Perched on an external platform of the ISS, Raven is a technology demonstration of the real-time, relative navigation portion of an autonomous rendezvous system.
  • As vehicles approach and depart from the space station, Raven tracks them in action and sends collected data to Earth.
  • NASA operators then evaluate how Raven's technologies work as a system and make system adjustments to increase Raven's tracking performance.
  • Over the course of Raven’s nominal on-orbit lifespan, Raven is expected to monitor approximately 50 individual rendezvous or departure trajectories.


Decadal Survey Recommendations

Information Pending


Results/More Information

Information Pending


Related Websites
Satellite Servicing Capabilities Office, GSFC



Imagery

The Raven payload, prior to its integration on STP-H5. Image courtesy of NASA/Chris Gunn.


NASA Image: ISS050E052652 - Space Test Program-H5 (STP-H5). Photo taken during Expedition 50.