Siri in space? Virtual assistants may help NASA improve astronaut safety

Image: NASA

Future space exploration missions will call for a higher degree of autonomy for astronauts, due to longer communication delays and mission durations. If an emergency arises, a communications delay might prevent ground control intervention. Providing crew members with onboard support to identify and resolve issues in a timely manner is crucial to ensure their safety. A team of researchers at Texas A&M University is proposing the use of virtual assistants (VAs) to provide such support.

The team, which comprises Daniel Selva, Ana Diaz Artiles and Bonnie J. Dunbar from the Department of Aerospace Engineering in the College of Engineering, along with Raymond Wong from the Department of Statistics in the College of Science, has been awarded more than $1 million from NASA to study the impact of using VAs to support crew members in treating spacecraft anomalies during long-duration exploration missions (LDEMs).

During an LDEM, such as a mission to Mars, ground support from Earth will be less effective for urgent tasks, such as identifying and resolving time-sensitive anomalies that appear during flight operations. Should the spacecraft develop a leaky fuel line or valve, for example, the communication delay might leave the crew without enough time to wait for instructions. “Imagine having a space-related Siri or Alexa specifically for astronauts to call on, that is essentially what this would be,” said Selva, assistant professor in the department.

The team will develop a VA called Daphne-AT, based on similar software previously developed by Selva. Daphne-AT will be designed with three main skills: detection, diagnosis and recommendation. The detection skill answers questions about whether an anomaly has occurred. The diagnosis skill helps the user characterize an anomaly and identify its root cause, answering questions such as “What do you think is causing the increase in temperature?” or “What have been the root causes of similar anomalies in the past?” Finally, the recommendation skill helps the user devise a course of action to deal with the anomaly, answering questions such as “How can I stop the leakage on the water line?” or “How do I replace the temperature sensor on the battery?” The VA uses various machine learning and artificial intelligence techniques, along with databases of past anomalies and procedures, to answer these questions.
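The three-skill structure can be pictured as a small dispatcher over a knowledge base of past anomalies. The sketch below is purely illustrative and is not the team's design: Daphne-AT's actual skills rely on machine learning over NASA anomaly and procedure databases, whereas this toy version uses simple limit checks and keyword lookup, and every name in it (`Assistant`, `AnomalyRecord`, the example telemetry) is an assumption.

```python
# Hypothetical sketch of a three-skill assistant (detection, diagnosis,
# recommendation). All names and the lookup logic are illustrative
# assumptions, not the Daphne-AT implementation.
from dataclasses import dataclass, field


@dataclass
class AnomalyRecord:
    """A past anomaly stored in the assistant's knowledge base."""
    symptom: str
    root_cause: str
    procedure: str


@dataclass
class Assistant:
    history: list = field(default_factory=list)

    def detect(self, telemetry: dict, limits: dict) -> list:
        """Detection skill: flag parameters outside their nominal limits."""
        return [name for name, value in telemetry.items()
                if not (limits[name][0] <= value <= limits[name][1])]

    def diagnose(self, symptom: str) -> str:
        """Diagnosis skill: recall root causes of similar past anomalies."""
        for record in self.history:
            if symptom in record.symptom:
                return record.root_cause
        return "unknown"

    def recommend(self, symptom: str) -> str:
        """Recommendation skill: suggest a procedure from past cases."""
        for record in self.history:
            if symptom in record.symptom:
                return record.procedure
        return "no procedure on file"


va = Assistant(history=[
    AnomalyRecord("cabin temperature high", "failed coolant pump",
                  "switch to backup coolant loop"),
])

flagged = va.detect({"cabin_temp_C": 31.0},
                    {"cabin_temp_C": (18.0, 27.0)})   # -> ["cabin_temp_C"]
cause = va.diagnose("cabin temperature high")          # -> "failed coolant pump"
action = va.recommend("cabin temperature high")        # -> "switch to backup coolant loop"
```

In a real system each skill would be backed by statistical models and curated databases rather than substring matching; the point here is only the division of labor among the three skills.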

Once Daphne-AT is developed, Diaz Artiles will assess its impact on performance, cognitive workload, situational awareness and trust through a set of three experiments with human subjects in a laboratory environment. Each experiment will use 12 subjects, each holding a degree in engineering or the sciences.

The first experiment will measure the impact of a baseline VA that can answer questions but cannot engage in dialogue with the user or take initiative. “Our hypothesis is that the VA increases human performance in the treatment of anomalies, while also reducing cognitive workload and increasing situational awareness,” said Diaz Artiles, assistant professor in the department.

The next experiment will measure the impact of adding the VA’s ability to provide explanations and take initiative. “Here, the main idea is to show that the initiative and explanations lead to a significant improvement in trust, and hopefully an additional improvement in performance,” said Selva.

Finally, the system will be deployed in NASA’s Human Exploration Research Analog (HERA), where the 12 subjects will be tested over three 45-day missions. HERA is a high-fidelity space analog that simulates aspects of living in a space station and is commonly used to perform scientific experiments and validate technologies before they go into space.
