Behind Artemis Mission Control

Season 1, Episode 241, Apr 22, 2022

Shawn Gano and Richard Garodnick lay out the work that’s been done to prepare NASA’s Mission Control Center in Houston for future Moon missions. HWHAP Episode 241.

Houston We Have a Podcast Ep. 241: Behind Artemis Mission Control

From Earth orbit to the Moon and Mars, explore the world of human spaceflight with NASA each week on the official podcast of the Johnson Space Center in Houston, Texas. Listen to in-depth conversations with the astronauts, scientists and engineers who make it possible.

On Episode 241, Shawn Gano and Richard Garodnick lay out the work that’s been done to prepare NASA’s Mission Control Center in Houston for future Moon missions. This episode was recorded on March 3, 2022.

Transcript

Gary Jordan (Host): Houston, we have a podcast! Welcome to the official podcast of the NASA Johnson Space Center, Episode 241, “Behind Artemis Mission Control.” I’m Gary Jordan, and I’ll be your host today. On this podcast we bring in the experts, scientists, engineers, and astronauts, all to let you know what’s going on in the world of human spaceflight and more. I promised a lot of Artemis content in 2022, didn’t I? If you haven’t been tuning into our recent episodes, we’re talking about the Artemis program, returning humans to the Moon sustainably to do unique and interesting science and to prepare for the journey to Mars. In the past few months we’ve talked about the Space Launch System rocket, we’ve talked about the Orion capsule, the one that’s taking humans into deep space; we’ve highlighted a few payloads, experiments in particular, that are being brought on the first mission under the Artemis program, Artemis I. And most recently, we’ve talked about Moon science. On this episode, we’re going to go inside mission control and explore what changes had to be made to support human missions to the Moon aboard the Orion capsule. There is a difference between running a low-Earth orbit mission and running a deep space mission. And that comes with challenges, like bandwidth and commanding software and working on a deep space network. To walk us through some of the changes to get mission control ready to support Artemis I, we’re talking with Shawn Gano and Richard Garodnick of the Mission Control Center engineering and development group here at the Johnson Space Center. They have been working hard to build the future mission control, and they walk us through its infrastructure. They’ll also introduce us to a payload onboard Orion called Callisto and how they’re working to support that. Artemis mission control, behind the scenes; let’s get into it. Enjoy.

[Music]

Host: Shawn and Richard, thanks so much for coming on Houston We Have a Podcast today.

Shawn Gano: Thanks. This should be fun.

Richard Garodnick: Great to be here.

Host: All right. For the both of you, I’m, I’m, I’ll be honest, kind of nervous to be talking to both of you today. Your, your technical knowledge of, of mission control is, is, I’ll say, intimidating. But I think we’re going to have fun just talking to you guys both, ahead of time, you both seem like really engaging people to talk to. And I wanted to start with just that, just to get our, our listeners to know you and what got you into a role where you are the engineer for mission control, for making human spaceflight work. What a cool job. Shawn, let’s start with you, a quick background that, that, of yourself that got you to where you are today working in mission control.

Shawn Gano: So yeah, my, my entire NASA career actually has been in the MCC (Mission Control Center) engineering group. So I’ve, I’ve worked in various different subsystems to make mission control kind of function. And after the shuttle retired I worked on a project called MCC-21, which was a redesign of how the MCC architecture worked. And the, kind of the, the idea behind that re-architecture was to prepare for new missions. And like, so Artemis I is a kind of a new mission for us, and we’re excited to support that from mission control. And kind of one of the neat things that our team does is, we kind of feel like Q from the James Bond kind of series, where our group makes a lot of the technologies, the hardware, and the software that, that make mission control run, while the flight control team is, is kind of like the James Bonds, they’re using these tools and their engineering expertise to go execute the mission. So, so we get to make the technology and that’s kind of what I like to do. I, I love to make things and build things. And so, that’s how I got involved here.

Host: Ah, that’s fantastic. Is your background more in like hardware, software, or, you know, a little bit of your background and then, also your interests, where’s, where’s, what really, you know, makes you excited about working in, as in mission control engineering?

Shawn Gano: I really have kind of a mixed hardware and software background. So I just, I love how they come together and, are able to kind of accomplish, kind of, human spaceflight missions.

Host: Very cool. All right. Well, Rich, what about you, quick background that got you to working on all these cool things?

Richard Garodnick: Yeah. So my background is actually a little bit different. So I originally came to NASA as a space station flight controller. So I was working in the environmental and thermal operations group, where we were basically managing all of the life support systems for crew members on space station. And I was also a crew instructor working a lot in the mockups. So that’s kind of where my background started, until a couple of years ago where I transitioned to work with Shawn in the mission control engineering and development group. So I’ve kind of moved over from the user end of the spectrum more to the engineering and development side of MCC, kind of working a little bit more behind the curtain. So that’s, that’s a little bit more of my background.

Host: Ah, OK. Very interesting. I’m super excited to have you both on today to talk about this. What we’re really going to dive into today is just mission control, maybe getting an understanding of how mission control works behind the scenes normally. Let’s, let’s, I think we should start there, just understanding what, what’s churning in the background to actually make a flight controller that’s sitting at a desk actually talk with something in space. And then what, what we’re really going to get into today, after we set that background, is what’s different to do that with the Moon missions coming up: Artemis program, it’s huge, you guys have put in a lot of hard work to prepare for that, and that’s really what we’re going to dive into. But Shawn, we’ll start with you: help us to understand the foundation of mission control. What’s happening in the background that makes everything work?

Shawn Gano: Yeah. Then exactly like you said, a, so, when you see mission control on TV, you generally think of the flight controllers and the, and the flight, flight director and, and they’re, they’re definitely the key portion of what happens in mission control, but below that there’s a lot of systems that, that make mission control possible. And kind of how we like to think of mission control and the, and its systems, it is sort of like an extension of the spacecraft itself. So it is really expensive and hard to launch a large amount of mass into space. And so, via communication networks, the MCC can sort of be like that extension, the ground portion of the spacecraft. So we add a lot of capabilities that aren’t necessarily on board that we can provide on the ground. So we can have teams of specialized engineers that can go tackle certain problems as they come up, where those whole teams can’t be on board the spacecraft. We also have a lot more computer processing capabilities on the ground. And so, that is, can be applied to a lot of different capabilities that, that you really just couldn’t do on board itself. So, and each spacecraft and each mission has their own unique challenges. And so, the difference between ISS and kind of going towards the Moon, toward, with Orion, is Orion has a lot more dynamic phases of flight. So ISS has a fairly static, it’s going around the Earth, there’s, there’s no launches, there’s, there’s not re-entries necessarily with ISS itself. There are other missions that provide those launches and re-entries. And so that, that adds a kind of a new, aspect to the mission. Also, Orion and SLS and the kind of, there’s an upper stage called ICPS (interim cryogenic propulsion stage), so they start their lives really as three separate spacecraft. And unlike shuttle, which, where all those communications were combined, it, it really looks like three spacecraft to the MCC. 
So we have unique communications protocols and formats and pathways for each of those, the Space Launch System, the ICPS and Orion. And so we have to take all that data in from the different networks, integrate it and present it to the flight controllers so they can make good decisions based on the data across all three of those parts of the spacecraft. And then also, Orion is going beyond low-Earth orbit. So we’re going, so you can think of it sort of like a cell phone coverage: when you go out, maybe out, way out into the wilderness, you, you kind of lose your cell phone coverage, and that’s kind of like what we’re doing. We’re going beyond low-Earth orbit where we have all these geostationary orbits that allow, allow us this nice continuous comm[unication] coverage, and we’re going way out by the Moon. So we have to rely on different networks to provide that communications to Orion. And we’re going also beyond GPS (Global Positioning System). So we have to have a better tracking and, and other navigation methods that are simply different than, than ISS.
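Shawn’s picture of three spacecraft on different networks feeding one set of flight control displays can be sketched, very loosely, as a timestamp-ordered merge. This is an illustrative toy in Python with made-up packet fields, not the real MCC data formats:

```python
import heapq

# Illustrative toy: merge telemetry packets from three separate downlink
# streams (SLS, ICPS, Orion) into one time-ordered feed for the displays.
# Packet fields ("t", "src", "val") are hypothetical, not real MCC formats.

def merge_telemetry(*streams):
    """Merge per-vehicle packet streams, each already sorted by timestamp."""
    return list(heapq.merge(*streams, key=lambda pkt: pkt["t"]))

sls   = [{"t": 0.0, "src": "SLS", "val": 101}, {"t": 2.0, "src": "SLS", "val": 102}]
icps  = [{"t": 0.5, "src": "ICPS", "val": 201}]
orion = [{"t": 1.0, "src": "Orion", "val": 301}, {"t": 3.0, "src": "Orion", "val": 302}]

feed = merge_telemetry(sls, icps, orion)
print([p["src"] for p in feed])  # ['SLS', 'ICPS', 'Orion', 'SLS', 'Orion']
```

The design point is only that downstream users see one coherent, time-ordered stream no matter which vehicle or network a packet came from.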

Host: Yeah. A lot of things are different, and that’s what I wanted to talk about. I definitely want to talk about the communication networks too because that’s a big one, right? We’re, we’re, we’re not in the communication networks…

Shawn Gano: Yeah.

Host:… that we’re used to, so we, so we got to rely on something different. Rich, help us to understand these, these differences, just, just a, a tad bit more. When we’re talking about, Shawn was alluding to, you know, there, there’s an infrastructure of how we work things, but you guys are working hardware and software that is fundamentally and, and Shawn gave a couple of examples, there’s, there’s some different stuff, so you guys have to have to build the infrastructure. So give us an understanding of how mission control is working, just a general outline of all the different components of how it’s actually going to support this Artemis mission and Orion mission.

Richard Garodnick: Yeah, absolutely. So, to just kind of extend a little bit what Shawn was talking about, there’s a lot of big differences when it comes to the MCC architecture depending on which vehicle that you’re working with. So when it came to shuttle, you can think of mission control as more the, you know, monitoring the, the health and status of both the crew members and of the vehicle, making sure the mission timeline is, is going well, that they’re working on capture recovery, everything like that. Then you move to station where everything is commanded from the ground and the crew is more focused on science and that’s kind of how mission control has been evolving to more and more being taken care of on the ground. For Orion, it’s completely different. It’s a, it’s more of a hybrid system. So for Artemis I, you’re going to have the ground that’s, that’s working on commanding completely independently versus for Artemis II and beyond, it’s kind of a hybrid between the two. When it comes to the command system, it’s very different than what it was for station. We kind of had to adapt our own, and make it really work for Orion because it speaks a very different language than ISS. So all of that had to be written from the ground up. The other big difference that we really deal with for Orion is the amount of bandwidth. If you think about it, space station has about 300 to 600 megabits per second, so you can think that, that bandwidth is close to what you get on a high-end internet connection on the ground. But for Orion it’s all over S band rather than Ku-band, and S band is only maybe one to two megabits at most. So we have a very, very small bandwidth compared to ISS that we have to work with, which really impacts voice, video, commanding, telemetry, all of that has to fit into that very small pipe. 
The last part, as far as MCC is concerned, is when it comes to web tools and displays, a lot of that has all been customized for Orion, completely different than ISS. Same thing with training, when training all of our flight controllers and doing system tests on the ground and, you know, really getting ready for mission, that is completely different on how we simulate those failures than how we do it with ISS. So you can think of it as kind of an independent system completely within the same control center, but just for a different vehicle.
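To make the bandwidth gap Richard describes concrete, here is a quick back-of-the-envelope calculation using the rough numbers from the conversation. It assumes an idealized link with no protocol overhead or dropouts, so real transfers would be slower:

```python
# Rough numbers from the conversation: ISS Ku-band is on the order of
# 300-600 Mbit/s; Orion's S-band link is maybe 1-2 Mbit/s. How long would a
# 1 GB video file take to downlink at each rate? (Idealized: no protocol
# overhead, no dropouts -- real links do worse.)

def downlink_seconds(size_bytes, rate_mbps):
    """Transfer time in seconds at a given rate in megabits per second."""
    return size_bytes * 8 / (rate_mbps * 1_000_000)

one_gb = 1_000_000_000
print(round(downlink_seconds(one_gb, 600), 1))       # ~13.3 s over Ku-band
print(round(downlink_seconds(one_gb, 2) / 3600, 1))  # ~1.1 hours over S band
```

Seconds versus hours for the same file is why, as Richard notes, downlinks have to be prioritized and much of the video stored on board.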

Host: Yeah. I find that so interesting. It’s, it’s really, you guys are, are working on a lot of unique things to support this mission, and let me, and, and Rich, let me zoom in on a couple of those just to, to clarify. So the command system, you talked about, it speaks an entirely different language. What are some of the reasons behind that? Why not just, you know, unify it, everybody speaks, if you’re flying to space everybody speaks the same language. What’s the, what’s the primary purpose there?

Richard Garodnick: Yeah. So the, the way the commands are processed on board for station versus Orion are, are really different. It, it would be, as you say, it would be a little bit easier if we were to make it all kind of compatible, but as things have changed with spacecraft over time, and the way that we interface with the vehicle is different, we had to be able to adapt that on the ground. So if the flight software is different and the network on board is very different than it is for station, then from the MCC position we have to be able to adapt that on the ground with our command system to be able to make it work with Orion. So that’s, that’s kind of what our role is, is that we’re, we’re really trying to integrate and be compatible with the vehicle.

Host: I see. And now, now the bandwidth was another one you, that you called out. Is that, is that more of, you know, a limitation of just we’re going far away, so that’s, that’s what it comes with? Are you using, you know, does Orion only support, S band or, or, you know, maybe the mission requirements were as such? What are, what are the reasons behind, the lower bandwidth being a consideration for supporting the mission?

Richard Garodnick: So there’s a couple differences as far as what the onboard hardware is versus what communication networks we have on the ground. So the types of antennas that are on Orion are going to be very different than the ones that are on station. The other piece of it is, like you said, the distance, that does have a lot to do with it. But the communication networks, which I know we’re going to go into, shortly, the, Deep Space Network is very different than what we use with the TDRS (Tracking and Data Relay Satellite) network. So Deep Space Network is a very point-to-point communication system where we’re communicating with the spacecraft as long as we have line of sight. And because we’re using S band it is a very small amount of bandwidth, versus if we’re using Ku the distance really does make a big difference because you have to be able to be a certain distance away to utilize that maximum amount of bandwidth. And that’s what we’re using on station, more for less critical video and downlink and things like that, whereas we still use critical voice and critical commanding over S band for station. So that part is a little bit similar. We just don’t have necessarily the same kind of capability that we would on station for Orion because we’re only utilizing the Deep Space Network once we get to a certain distance.

Host: Once we get to a certain distance; OK, I understand. Go ahead, Shawn.

Shawn Gano: And, yeah, and part of, part of that is, at, and early on in the design of Orion the, the Orion program itself had to make a lot of trade studies, or trades on things, and, and one of those was mass. They had to really reduce the mass of the spacecraft. And so, they had, I, at one point they were trying to do a, just essentially get rid of anything that was not necessary for, for the mission to reduce mass. And one of those things that was cut was a high-gain antenna. And so, when that, when that was cut, the, the ability to have much higher bandwidths kind of was lost from the spacecraft.
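The trade Shawn mentions can be seen in a rough link-budget sense: free-space path loss grows with the square of distance, and a high-gain dish is one way to buy that loss back. The sketch below uses the textbook free-space path loss formula with illustrative numbers, not an actual Orion link budget:

```python
import math

# Rough sketch of why distance dominates: free-space path loss grows with
# the square of distance, so the achievable data rate falls off as 1/d^2
# unless antenna gain (e.g. a high-gain dish) buys the loss back.
# Numbers are illustrative, not an actual Orion link budget.

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB between isotropic antennas."""
    c = 299_792_458.0  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

S_BAND = 2.2e9                      # Hz, roughly S band
leo = fspl_db(1_000e3, S_BAND)      # ~1,000 km slant range in low-Earth orbit
moon = fspl_db(384_400e3, S_BAND)   # mean lunar distance, 384,400 km

# Every extra dB of path loss must be paid for with gain or a lower rate.
print(round(moon - leo, 1))  # ~51.7 dB more loss at lunar distance
```

Fifty-plus dB is a factor of well over 100,000 in received power, which is the gap a high-gain antenna would have helped close.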

Host: Wouldn’t it be funny to imagine Orion with a big old dish, right? That’d be nice. You get, get some decent, decent amounts of data, but understand, they had to make a lot, a lot of hard decisions. Rich, you alluded to the fact that we were going to talk about communication networks, let’s get right into it. Shawn, I’ll toss it over to you to help us start, because one of the things Rich was talking about was, you’re, you’re using different communication networks through different phases of flight, and it’s unique to what we see with ISS and the Deep Space Network being one of those big ones, of course, but let’s start from the ground. What are some of the, what are some of the different communication networks that we’re going to be seeing and what, what has to be compatible in mission control and with all the different components of an Artemis mission to actually make it work?

Shawn Gano: Yeah. And, and that’s, that’s really one of the more complicated parts of mission control that probably most people are not necessarily aware of. And it, that’s where if you start drawing mission control, kind of on a diagram and you’re trying to figure out where they’re all kind of connected to, this is where the, kind of the spaghetti diagrams come from. It’s because we really do have to connect to a lot of different places. And that’s because as the mission progresses it, it has to switch between different communication networks. And that’s just because of the different regimes that the spacecraft is in during the mission. So pre-launch, when it’s sitting on the pad in Florida, we can, it can be connected to the umbilical, and so we can get lots of data from that. And so that’s kind of like a hardline ground connection. And then early ascent there are ground sites kind of along the, the East Coast, there’s quite a few of them in Florida nearby, nearby the launch facility, and also out on Bermuda, the island of Bermuda, so the SLS will also talk to a ground station there. And then after it gets, kind of, Orion gets into orbit with ICPS, they transition to using TDRS, which is the Tracking Data Relay Satellite system, and that is the same system that is used for ISS. So, and that, and that kind of region that it’s more closer to, to ISS. But then once the, the TLI burn, that’s the trans-lunar injection burn, has occurred, and Orion then is on its journey towards the Moon and it gets past the, the TDRS network or beyond that kind of an, in altitude, then we transition to the Deep Space Network. And so the Deep Space Network really hasn’t been used for, for human spaceflight for, for decades now, especially as the, the primary communication method. 
And so, then we use quite a few of their sites, but the, the major DSN sites we use is one is in Australia, at Canberra, another one is in Spain, Madrid, Spain, and then the third one is in Goldstone, Goldstone, California. And so we’re on really that, that Deep Space Network for a majority of, of the mission. Then on the way back in, when, when Orion is returning to Earth, there’s a short amount of time that we switch back to the, the TDRS network. And so during that mission we have to seamlessly transition between all these networks and making sure we’re processing the data from all three of the aspects of the mission, which is the SLS, again, ICPS, Orion, so we’re monitoring these three spacecraft across all these different networks and then processing that data and then kind of getting it ready for all the flight control team and all the other users that, that use the data on the ground. So that, that definitely is a, a big challenge for us.
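The way the three DSN complexes hand off coverage can be illustrated with some very simplified geometry: three sites spaced roughly a third of the globe apart, each "seeing" a distant spacecraft whenever its side of the Earth faces within 90 degrees of it. This ignores elevation masks, spacecraft declination, and scheduling, so it is only a sketch of the idea:

```python
# Simplified geometry: three DSN complexes spaced roughly a third of the
# globe apart (Goldstone ~117 W, Madrid ~4 W, Canberra ~149 E). Model a
# distant spacecraft as a fixed direction in space and call a site "visible"
# when it faces within 90 degrees of that direction as the Earth rotates.
# (Real scheduling also handles elevation masks, declination, and sharing.)

SITES = {"Goldstone": -117.0, "Madrid": -4.0, "Canberra": 149.0}

def visible_sites(earth_rotation_deg, target_dir_deg=0.0):
    out = []
    for name, lon in SITES.items():
        facing = (lon + earth_rotation_deg) % 360.0
        sep = abs((facing - target_dir_deg + 180.0) % 360.0 - 180.0)
        if sep < 90.0:
            out.append(name)
    return out

# Sampling a full rotation hour by hour, some complex is always in view.
assert all(visible_sites(hour * 15.0) for hour in range(24))
print(visible_sites(0.0))  # ['Madrid']
```

Because no two adjacent sites are more than about 150 degrees of longitude apart, their 180-degree visibility windows overlap, which is the property Richard describes: as the Earth spins, coverage hands off from one complex to the next with no gap.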

Host: Big challenge. And then, and then for, to the flight controller, the, the ultimate goal. Let, let, let me read this back and, and make sure I’m, I’m saying it right. The ultimate goal is that it’s seamless to the flight controller who, who needs to look at the data; that despite wherever the, wherever the vehicle is, they still have the data that they need, and that it’s seamless. That’s really your job.

Shawn Gano: Yep, correct. Yeah. We try to get it to them seamlessly and we try to get to as fast as possible. So they’re seeing the, the most recent data on their displays. And we also try to keep the data in sync. So all of it is coming down and displayed to them kind of at the same time when it was sent, sent on board.

Host: Ooh, yeah, we’ll get into that a little bit later. I know, I know one of our topics today is the, we’re talking about the clocks and everything and how that’s going to work. But let’s stay on networks for a little bit. Rich, I’ll toss to you. Shawn mentioned that the Deep Space Network is now being used in a human spaceflight mission. We haven’t used that for a human spaceflight mission for, for quite some time. So let, let’s dive into that and just how that’s going to work for this type of mission. What, what is the Deep Space Network and, and how does that work?

Richard Garodnick: Yeah, so as Shawn mentioned, we have three facilities that we’re utilizing in Australia, Spain and California. The way that they’re, that they’re spaced apart is actually really interesting. They’re spaced equidistant from each other around the world. And so that way we always have one that’s be, that’s able to have line of sight with the vehicle. So as the Earth spins we’re kind of, we’re transitioning from one site to the other, and we’re always able to see Orion, which is, which is actually really interesting. And this is really the first time that we’re using DSN to relay telemetry from, from a spacecraft that’s, that’s meant for humans in a long time. So the, that we’re using it for this mission is, is actually pretty interesting. The way that it’s scheduled is actually a little bit different from the way that ISS is scheduled. So for ISS, as Shawn mentioned, we use TDRS predominantly, and although there are a number of TDRS satellites used for other things we generally have around three TDRS satellites that are used for station that on, on any given day, we are the primary user. So we’re using TDRS for all of our commanding, voice and, and everything is going through that network. However, for DSN, we have to share with all of these other missions. So whether it’s the Mars rover or any other deep space satellite that’s going, you know, very far away from Earth, all of those are using the same network. So we have to be able to schedule around them and try to figure out which dish we are using and when, which dictates how much bandwidth we’re going to have in, in different phases of the mission.

Host: OK. Yeah. There’s a lot of work happening in the background there. One thing I was curious about, in order to support the Deep Space Network and, not to forget that Shawn was mentioning, there’s a lot of, there’s a lot of other networks that are, that are in play here, you got the, you got Near Space, TDRS, there’s, there’s, there’s a lot of other things; what hardware, what capability is inside Orion to enable, you know, to actually receive, to, to actually be a part of these networks? Does, does Orion need very specific hardware to, to talk on these networks?

Richard Garodnick: Yeah. So, the way that it works for Orion is that Orion uses four phased array antennas on the crew module, and then there’s also two phased array antennas on the service module. So all of those are used for all of the video, data, voice with the spacecraft, as well as a command uplink and telemetry downlink to different ground stations, TDRS and DSN, as soon as it leaves Earth orbit. So that’s kind of how it works on the vehicle side.

Host: Got it, got it. Now, you know, Mission Control Houston, obviously, one of the primary recipients of a lot of this data, particularly through, through the lunar aspect around the Moon, it’s, it’s, it’s definitely going to be one of the primary customers of this, of this data, the primary receivers. But I know, you know, Orion was built by Lockheed [Martin], Lockheed’s over in Denver, the SLS is being built over at Marshall [Space Flight Center], there are a lot of other, a lot of other interesting customers to, to receive some of this data. And so I wonder if, if we can dive into the infrastructure there and how, how we’re all connected and sharing data, make sure everybody’s got eyes on, on supporting this mission. Shawn, we’ll go over to you for that.

Shawn Gano: Sure. Yeah, I’ll start off, especially for the Orion, Orion side. So yeah, as you mentioned, kind of, Orion is being built by Lockheed Martin. And so they are going to have a team of engineers in, in Denver and we need to ship them some of the data during, during the mission so they can analyze how the spacecraft is performing, and as they might notice things or if anomalies come up we can, we can use them as support. But also, before the mission ever flies, they’re, they’re developing their flight software and we’re developing our ground software to kind of, to, to work with them. And so we, we do a lot of testing. And so, before the mission, we are connecting to their labs. They have a lab in Denver as well as they have one here in Houston, called the Houston Orion Test Hardware, the HOTH lab. And so, as they iterate through their software and as we iterate through our software, we do plenty of communication tests where we test out our command systems, we’re testing out telemetry systems and we’re just, we’re going through all those aspects. And, and during those times, also the flight control team can follow along and they’re, they’re learning about different aspects and how, how the hardware behaves. And so it’s, it’s definitely not a, something that we can test in space, and so that’s why these labs are, are highly, highly needed. And we make good, good use of them. And I, I think we’re a benefit to them because we can help, kind of troubleshoot some of their flight hardware or flight software as, as much as they’re a benefit to us because they help, they help us do kind of the same thing in, in reverse. And so, that, we do a lot more testing on the ground than we ever will do in, in space, but that’s, that’s to ensure that we have a successful mission. 
And, and then we can talk during all those different phases of the flight and, and we can work through different anomaly resolutions while it’s on the ground so we can prepare for, for the mission.

Host: Nice. Yeah. Well, you got a lot more time on the ground to do the testing, right?

Shawn Gano: And yeah, that’s for sure. And you, you can fix up problems much easier when it, when it’s still on the ground. And they’re, a very similar thing and there’s, some definitely some uniqueness, of the, the rocket itself. And yeah, Rich, Rich might be able to fill us in on some of the SLS ground capabilities.

Richard Garodnick: Yeah, sure. So, as Shawn mentioned, a lot of the Orion test labs are how we conduct a lot of, a lot of our testing on the ground. We also do a lot of testing with the SIL, which is the SLS System Integration Lab at the NASA Marshall Space Flight Center. So that’s how we can verify that they can receive data from the SLS rocket itself. We also do a lot of voice tests between Marshall and JSC and we ensure that all the teams across the country, including MCC and LCC (Launch Control Center) in Florida, as well as engineering teams all over different locations, including in Huntsville, can communicate together. So while you would think a lot of that testing is more localized within MCC, there’s a lot of joint testing that goes on, which is really, really critical to being able to figure out which problems we can solve on the ground before we ever do a rehearsal or get ready for the mission.

Host: Makes sense. Now, Rich, on the bandwidth, I want to talk about bandwidth for just a second, because I’m wondering how you are, how you guys are dealing with that. When, when Orion is in the Deep Space Network and you have these bandwidth concerns, but a lot of people are looking for, for certain data, what are you guys doing, what infrastructures have you put in place to support being able to get everybody what they need with such a low bandwidth?

Richard Garodnick: Yeah, great question. There’s, there’s a lot of data on the spacecraft that we have to return to the ground both in real time and that we need to downlink for later, and the largest amount of data really includes video. So you can imagine the amount of cameras that are in Orion, both inside the cabin as well as on the outside, generate a lot of video on board. And it’s really difficult to try to get a lot of this on the ground to the point where we’re trying to store as much of it on board as we can. But whenever we want to try to downlink, whether it’s video or other telemetry on board or other things that we’re trying to, to do during the mission, we have a very, very narrow pipe to be able to do that as you mentioned. So what we try to do is we try to prioritize which data that we’re going to send down when. There’s a specific flight control discipline that handles this, they’re called the, the CDH console, so they’re the command and data handling. And they take care of, the, the downlink from the vehicle using a technology standard that was developed at Goddard Space Flight Center called CFDP (CCSDS File Delivery Protocol), which is the file delivery protocol from the CCSDS (Consultative Committee for Space Data Systems) space standard group. So a lot of, a lot of acronyms in there but really it’s, it’s a protocol that can deliver files from the vehicle down to the ground. So that’s what that console is, is taking care of during the mission.
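The core idea behind a CFDP-style file transfer, as Richard outlines it, can be sketched as segmented delivery plus gap-driven retransmission. This toy mimics only the concept, not the actual CCSDS protocol details:

```python
# Toy sketch of the idea behind CFDP-style file delivery: cut the file into
# numbered segments, let the receiver reassemble what arrives, then report
# the gaps (a NAK list) so the sender retransmits only the missing pieces.
# This mimics the concept, not the actual CCSDS protocol details.

SEG = 4  # bytes per segment, tiny just for the example

def segment(data):
    return {i: data[i * SEG:(i + 1) * SEG]
            for i in range((len(data) + SEG - 1) // SEG)}

def missing(received, total):
    return [i for i in range(total) if i not in received]

data = b"FLIGHT DATA RECORD"
sent = segment(data)

# First downlink pass drops segments 1 and 3 (hypothetical dropouts).
received = {i: s for i, s in sent.items() if i not in (1, 3)}
nak = missing(received, len(sent))   # receiver reports the gaps: [1, 3]
for i in nak:                        # sender retransmits only those segments
    received[i] = sent[i]

assert b"".join(received[i] for i in sorted(received)) == data
print("complete after retransmitting", nak)
```

Retransmitting only the lost segments instead of the whole file is exactly the behavior you want on a narrow, lossy pipe.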

Host: Got it. OK. Now, now there’s a lot of, you obviously are working with the files too, but then there’s also, you know, one of the things we’re talking about is how things are talking to each other. And I know this is a challenge that you guys are working right now with Orion and this, and the clock, the actual clock timer. And I wonder what, what that issue is — maybe Shawn, we can go to you for this — what that issue is, and then what you guys are doing from the engineering side to try to address that?

Shawn Gano: Sure. Yeah, yeah. You, normally, people don’t think about time being complicated, but it is, the more you kind of dig into “what time is it?”, that, that, that simple question, the kind of the, the harder you realize everybody kind of is on a slightly different time, whether you have leap seconds or not leap seconds, and your timers can drift and things like that, and so the, when you’re in space and you, you need to hit targets, like, say the, the Moon and you know, where the Moon will be at certain times, you need really accurate time and that’s because that’s really important for navigation. And so on board Orion we got to make sure that that clock stays within a kind of a box of the, the actual time. And as we leave low-Earth orbit we, we don’t have the benefit of having GPS satellites anymore, which have, have a really good time source. And so there, we had to develop on the ground some tools to help support the CDH flight controller position, who really is the, the experts on the onboard, time, timing for Orion. And so, they use these tools, and we use a method called RDD, which is return data delay. And what that does is, essentially is it’s looking at the, kind of the round trip time and how long it takes data to get back from the spacecraft and go up to the spacecraft and, from a number of back and forths and monitoring kind of the ground network, we can see how far off the time is on board. And then the CDH officer will then command Orion to kind of change the speed of the clock. So if they notice the clock on board is a little bit behind, then they’ll, they’ll, kind of slew the clock to be a little bit faster until it gets kind of back to where they want it to be. And if, if they notice kind of the inverse, if they see, hey, we’re a little bit ahead of what the time really should be, they’ll kind of slew the clock backwards. 
And that slewing helps keep the time continuous, because you don't really want to jump in time; if you're doing integrations for burns or trajectory calculations, you don't want big time jumps. Slewing makes things much smoother. That was one of the more challenging tools to develop for Artemis I, and we're still finishing off the last verifications of that tool as we prepare for launch here soon.
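The round-trip timing idea Shawn describes can be sketched in a few lines. This is a minimal illustration only, not Orion's flight software or NASA's actual ground tooling; in particular, it assumes the uplink and downlink delays are symmetric, which the real tools would not take for granted:

```python
# Minimal sketch of the return data delay (RDD) idea: estimate the onboard
# clock offset from a timestamped round trip, then slew rather than jump.
# Hypothetical illustration -- not Orion's actual algorithm.

def estimate_onboard_offset(t_sent, t_onboard_stamp, t_received):
    """Estimate how far the onboard clock is from ground time (seconds).

    t_sent          -- ground time the request left the ground
    t_onboard_stamp -- time the spacecraft clock reported in its reply
    t_received      -- ground time the reply arrived
    """
    one_way = (t_received - t_sent) / 2.0      # assume symmetric link delay
    expected_onboard = t_sent + one_way        # what the clock *should* read
    return t_onboard_stamp - expected_onboard  # positive = clock runs ahead

def slew_rate(offset, correction_window):
    """Rate multiplier that gently removes the offset over a window,
    keeping time continuous instead of jumping it."""
    return 1.0 - offset / correction_window    # slightly above or below 1.0

# Example: a reply stamped 2.6 s arrives after a 2.0 s round trip,
# so the onboard clock is estimated to be 1.6 s ahead.
offset = estimate_onboard_offset(0.0, 2.6, 2.0)
rate = slew_rate(offset, correction_window=3600.0)  # run slightly slow
```

With a positive offset (clock ahead), the rate multiplier comes out just under 1.0, so the clock is slowed until the offset drains away; a clock running behind gets a multiplier just over 1.0.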

Host: Yeah. No big deal, you guys just have to be the clock for the mission. I wonder though, at the core of it, is it a physics thing that you're working to address from a software perspective to make up that time? Is the reason this issue exists really just physics? And I'm not going to assume I know the physics here, but some combination of the vehicle traveling fast and being far away?

Shawn Gano: Yeah. And that’s just that all clocks drift over time. And so, you have to kind of make sure you can make up for that or, or understand that drift and, and correct it. And why it’s so difficult is it’s far away. Yeah, we have now this big, this, you know, one, one light-second delay, and so it’s unlike computer networks that are attached to each other, it’s pretty easy to synchronize time or it’s, it’s easier, to synchronize time across all of them because the delays between them are, are really small and, and kind of predict, predictable, or more predictable. And so, we have this big, this RF (radio frequency) communication link that have, can have losses, the, the kind of the latency involved can jump around, and so how, how do you know exactly what to time it is on board when, because of all these different uncertainties when it comes down to the ground. And so, yeah, there’s a lot of, lot of physics involved, a lot of, kind of statistics and trying to, to really understand what, what time is it on board, and also how far is that time off from the, kind of the universal truth of time, and then how do we, we, how do we fix it? That’s, that’s the kind of the, the main challenge.

Host: Amazing, amazing explanation.

Shawn Gano: And — yeah. The CDH flight controllers definitely have their work cut out for them; they're really the experts of that system. So if you see them in mission control during the mission, they'll be the ones tasked with making sure that's done properly.

Host: I’ll, if I need to reset my watch in mission control, I’ll walk over to them and ask them for the time. How’s that?

Shawn Gano: Yeah. Yeah, definitely.

Host: Awesome. All right. You guys are working a lot on Orion, and of course all of the systems on board to support that, and the infrastructure: it's just fascinating, the infrastructure you're trying to build to support these future Moon missions. Focusing on Artemis I, one of the unique things on there is a payload inside of Orion called Callisto. I wanted to explore this a little bit just to understand what it is, and I think it's a good topic for this episode, because I know you guys are continuing to work on the infrastructure to support this payload. So let's start high level; Rich, we'll start with you. What is Callisto?

Richard Garodnick: Yeah, so Callisto is really serving two major purposes on Artemis I. Because the mission is uncrewed, we're really trying to look at what kinds of future technologies we can use to operate crewed spacecraft in the future, such as Artemis II and beyond. The other big part is that we're really trying to engage the public and make them feel like they're more a part of this mission. So what Callisto is, as a whole, is a tech demo, similar to a lot of the tech demos that we have on ISS, but in this case it's a tech demo where we fly a digital voice assistant that works with no internet connection at all, completely on board, and a video conferencing function that enables interaction from Earth. It's both interaction from users in mission control who are interfacing with it, as well as being able to interface with it from home. So we have a multifaceted way the public can get engaged with the payload.

Shawn Gano: The onboard payload that will be in the Orion capsule has been built by Lockheed Martin, and they're also integrating it into the spacecraft. As part of that, a couple of other companies have provided a lot of expertise. Amazon has provided their digital assistant Alexa capabilities to be on board Orion, and Cisco has provided their WebEx video capabilities to allow people to see themselves inside the Orion spacecraft, to sort of feel like they're a part of the mission. Our MCC team here has been responsible for making sure that all the communications with the payload are properly set up and that we can interface with it from the ground. It's been a really fun project, because we get to work with companies we don't generally interface with on the ground. So…

Host: Yeah, it’s a little bit different. Rich, we’ll go to you. What, what do you think, I mean, I, I can understand the, the, it’s the technology demonstration and all of this, but what, what would be nice about this kind of technology, why, why are we exploring this now on Artemis I? What’s nice about this technology to have for future crews?

Richard Garodnick: Yeah, great question. When it comes to controlling a spacecraft, you can think back to when we had shuttle: the crew was really the instrumental commander, the ones who were throwing all of the switches on the vehicle. They were really the ones in control of the craft, and there was a lot going on. Station is even more complex, and now you have a lot of that being taken by the ground so the crew can focus more on science, but again, it's a massive vehicle, a massive mission, a lot of things to do. So we're trying to see how we can use voice technology to interface with the vehicle, to try to increase situational awareness and maybe improve efficiency for those on board the spacecraft. For Artemis II, the idea is that maybe there will be some future way to use voice to help out the crew, so they won't necessarily have to do everything manually with switches, with their hands; they can use their voice to help them fly the spacecraft.

Shawn Gano: Yeah, a few things there. It's quite interesting that we get to use these technologies that many people are familiar with. Many people have digital assistants in their house, and they use video conferencing a lot nowadays. However, making them viable for space travel is another thing entirely, because there are all these different delays involved in going to the Moon; there are no real light delays on Earth. So adapting these technologies and trying them out on Artemis I, where there isn't crew, definitely gives us a leg up in understanding what's possible and what we can learn to make future missions better and to support astronauts, to allow them to accomplish a lot more science while they're out there and manage the spacecraft better. I think it will be kind of a diving board into some new ways of doing things on this spacecraft.

Host: Yeah. Super-interesting stuff. Shawn, what are you guys doing? I know you've been doing a lot of work to support this payload. I can understand it's an interesting thing to explore, but from your perspective, what are you doing to support that? What infrastructure are you working on?

Shawn Gano: One of the main things we faced with this payload is that the bandwidth is so limited it's hard to have a really good dialogue, with a lot of video coming down and voice. So to have a better experience, we worked with the Orion program to effectively double the bandwidth. The way we did that: our return link from the spacecraft uses something called, in coding, rate one-half low-density parity-check (LDPC) coding. That's added to the communication link, and it actually takes up about half of the bandwidth itself, but it allows you to correct errors. As the signal travels through space and we detect it, there are errors involved, and that encoding allows you to correct those errors on the ground, so you effectively get a better communication link. For this payload we wanted more bandwidth, so what we did is say, hey, let's get rid of that encoding. When that happens, we now have the chance of having more errors. To get around that, we're going to use the DSN's three 70-meter dishes that are spread around the world; they're humongous dishes. They're used by some of the spacecraft that are really far out, because they can detect those small signals. With those 70-meter dishes we get a much better signal-to-noise ratio, so we can get rid of that encoding. And by doing that we can now stream two different video channels at the same time and have some voice going up and down to support the payload. It really opens up a lot more options, having quite a bit more bandwidth.
Another interesting thing is that on Artemis I, since there are no people on board, there really was no thought to adding voice in the cabin of the spacecraft. But voice is an important aspect of this payload, so we worked with all the partners and developed a way to add voice to this mission, especially how to transmit it and how to get that data back from the spacecraft. As someone is talking to this payload from mission control, their voice can be heard in the cabin, and then that has to be relayed back down to the ground and distributed once it's on the ground. So we had to add that capability. And then the challenge with those 70-meter dishes is that since there are only three of them in the world, a lot of people need them. So we have to schedule them at special times and work around other missions as needed, to make sure we can do our payload but also be good stewards of the time on those 70-meter dishes, because a lot of science is also using them.
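The coding trade-off Shawn describes can be put in rough numbers. This is a hypothetical back-of-the-envelope sketch, not Orion's actual link budget: a rate-1/2 code spends half the channel symbols on parity, so dropping it roughly doubles the usable data rate, at the price of needing a much stronger received signal, which is where the 70-meter DSN dishes come in.

```python
# Hypothetical throughput sketch for the rate-1/2 LDPC trade described above.
# The channel rate here is made up for illustration, not Orion's real number.

def information_rate_kbps(channel_rate_kbps: float, code_rate: float) -> float:
    """Usable data rate after channel-coding overhead.
    A rate-1/2 code devotes half of the transmitted symbols to parity."""
    return channel_rate_kbps * code_rate

channel = 4000.0                                  # assumed channel rate, kbps
with_ldpc = information_rate_kbps(channel, 0.5)   # half the link is parity
uncoded = information_rate_kbps(channel, 1.0)     # drop the code: all data

# Dropping the coding doubles throughput but removes error correction,
# so the link needs a better signal-to-noise ratio to stay clean.
print(f"coded: {with_ldpc} kbps, uncoded: {uncoded} kbps")
```

The extra capacity is what lets the mission carry two video streams plus voice at once, as described above.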

Host: Awesome. Yeah. Rich, have you guys been testing this out? How have things been going?

Richard Garodnick: Absolutely. Yeah, it's definitely been a journey when it comes to testing, very early on. As Shawn mentioned, we were testing in the Houston Orion Test Hardware facility, or the HOTH, here in Houston, and we had some early prototype hardware, some Raspberry Pis that were given to us by some of the commercial partners. That's actually where the hybrid engine for the Alexa voice assistant ran, and the other half of it was the intercom that interfaces with it. So very early on we were connecting between the HOTH and mission control to figure out, can we talk between the two, can we actually send commands, how are we going to interface with Alexa, and how are we going to have the WebEx piece of it integrated? There were a lot of early things we were testing, and as that evolved we started to get closer and closer to what the real hardware is going to be on Orion. So you can imagine there's been a lot of testing we participated in at the HOTH, as well as moving over to the CTIL (Communications and Tracking Integration Lab) and ITL (Integrated Test Lab) in Denver. That was where a lot of that testing came into play.

Host: A lot of work for you guys and the teams you're working with. But if you pull back and think about it, what you're doing is mission control, right? It's such a historic thing; it has such a big imprint on a lot of our lives, really on the world, especially with the Apollo Moon landing. Now you guys are building the infrastructure to support not just the next Moon landing but a sustainable human program. You're building the next generation, guys. So, reflecting on that, and Shawn, we'll start with you, how does that feel, to just be a part of that?

Shawn Gano: Busy, it's busy. But it's also extremely rewarding. As a little kid growing up in a more rural part of Michigan, I remember daydreaming, man, it would be awesome to work in space exploration someday; so this is really kind of a dream come true. And as I come on-site I'm still, you know, awed every day when I look at the rockets right when you enter the front gates, and it's exciting to be part of building technology that will really be part of tomorrow's history.

Host: It’s a big deal. Rich, what about you? Same question.

Richard Garodnick: Yeah. I’ve, I’ve always wanted to work, you know, with, with spacecraft, it’s been an early dream of mine. It’s, it’s the reason why I went to school to do what I do. And, you know, when I first came to NASA, being able to be a user in, in the Mission Control Center for, for the International Space Station was, was such a dream come true. And now, being able to build out, you know, a new ops suite, a new control room, where we can actually interface with this new vehicle for the next set of missions is, you know, I, I really wouldn’t replace it with anything. It’s, it’s just unimaginable.

Host: It was so awesome to talk with both of you. I wanted to end with one more question, and Shawn, we'll toss it to you. Like I said, you're building the infrastructure for Moon missions, but of course your primary focus right now is Artemis I, which is right around the corner. And I expect it doesn't stop there, right? I expect you're going to continue to improve, especially as we start integrating humans into these missions and working through some of that. So I wonder what you're looking forward to: of course, you have a lot to look forward to for Artemis I, but for Artemis II and beyond, what do you guys have that we can get a sneak preview on?

Shawn Gano: Yeah, you’re right. It’s really, we have yet another leap in capabilities that are required of the MCC to support sending humans back to the Moon. We really need to add primary voice capabilities, so that’s beyond what we were developing for, for this tech demo. We also have to build tools to help manage life support systems. And because humans are on board future missions, we really need to provide a lot more system redundancy. So we have to have more emergency communication methods and also have a, more robust backup control center functionalities that support Orion. So there’s a lot of big things coming and, we’re really excited to be a part of the mission.

Host: Yeah. I’m excited for both of you. Thank you both, Shawn and Rich for, for coming on and, and talking about behind the scenes of, of, of Artemis mission control. So, so cool. What a fascinating topic and I, and I appreciate your time for, for, for walking me through it. Definitely learned a lot. Thank you both for coming on.

Shawn Gano: Thank you.

Richard Garodnick: Thank you very much for having us here.

[Music]

Host: Hey, thanks for sticking around. I had such a good time talking with Shawn and Rich today. Such smart people to walk us through what's happening behind the scenes of mission control to get ready to support Artemis missions. We have a lot of Artemis content you can check out, and of course, you always have NASA.gov/Artemis to learn more about the mission as a whole. If you want to check out our podcasts, specifically on the Artemis topic, go to NASA.gov/podcasts. We are there with many other great NASA podcasts you can listen to, but if you click on our name, we have a collection of Artemis episodes specifically, and you can listen to any of them in no particular order; just pick out the topics that interest you most. If you want to talk to us, we're on the Johnson Space Center pages of Facebook, Twitter, and Instagram. Just use the hashtag #AskNASA on your favorite platform to submit an idea or a question to the show, and make sure to mention it's for Houston We Have a Podcast. This episode was recorded on March 3rd, 2022. Thanks to Alex Perryman, Pat Ryan, Heidi Lavelle, and Belinda Pulido for their role in making this podcast possible, as always. And thanks to Laura Rochon, Rad Sinyak, and Erika Peters for their help in getting this episode scheduled. And of course, thanks again to Shawn Gano and Richard Garodnick for taking the time to come on the show. Give us a rating and feedback on whatever platform you're listening to us on, and tell us what you think of our podcast. We'll be back next week.