[SFX: Galactic Center sonification]
Andres Almeida (Host): What does the universe sound like? Sound doesn’t really travel through space because sound waves need a medium, such as air molecules. But there are creative ways to convert data into sound. For example, the chiming you hear is an audio translation of an image of the center of our Milky Way galaxy. It’s real data turned into sound. How?
Let’s hear about it on this episode of “Small Steps, Giant Leaps.”
Welcome to Small Steps, Giant Leaps, your podcast from NASA’s Academy of Program/Project & Engineering Leadership, or APPEL. I’m your host, Andres Almeida. In each episode, we dive into the lessons learned and experiences from NASA’s many missions and projects.
From black holes to star clusters, scientists are turning space data into sounds with a process called sonification. Dr. Kimberly Arcand, visual scientist with the Harvard-Smithsonian Center for Astrophysics, joins us to explore how data sonification lets more people experience the cosmos and gives researchers a new way to interpret science, one note at a time.
Host: Hi, Kimberly, welcome to the podcast.
Dr. Kimberly Arcand: Yeah, it’s great to be here.
Host: Can you give us some background on data sonification?
Dr. Kimberly Arcand: Sonification is just the process of translating data into sound. So, we can visualize information, representing it in an image, a graph, or a mathematical model.
We can also represent it through sound. For the Chandra X-ray Observatory and for other telescopes like Hubble and Webb, we’re taking observed data, such as the position, the brightness, the energy of, say, X-rays in Chandra’s case, and mapping that information to sound. We’re using pitch, volume or instrument type to help represent the information.
So, you can think of one of my favorite sonifications. It’s the Chandra image of Tycho’s supernova remnant. And it’s really lovely. It’s very multi-colored and, sort of, I don’t know, energetic feeling. It has these fingers, or protrusions, of ejected material that seem like they’re coming at the viewer. And we’ve been able to use spectroscopy, which is just like the DNA or the fingerprinting of light, to understand that some of that material is silicon and some of it is iron and some of it is sulfur.
So, we’re able to color code that into an image, but we can also take that and represent it through sound. So, we can have some of the sounds essentially playing iron and some playing silicon, or, for example, sulfur, and you just get the sense of the data in a very interesting way. And it’s quite different than just looking at it.
Host: So how do you decide exactly which data gets translated into sound? What is that process like?
Dr. Arcand: Yeah, there’s a lot of really great data, and my goal, I think, would be to translate all of it into sound. I think it’s a real worthwhile process.
But when we’re trying to make decisions on what to sonify, we really start off by just asking questions. So, what is the scientific story that we have to tell? What is the specific data set telling us? Is there structure? Is there some kind of motion? Is there interaction happening between different cosmic objects, right? So, some data sets will lend themselves particularly well to audio format, because there’s already some kind of natural, I don’t know, spatial or temporal flow, perhaps a jet coming out of a black hole, or the rapid expansion of a supernova.
And so, really, the process begins with selecting the data set and then selecting the features that need to be highlighted, like how to tell that scientific story best. And then you just have to define the rules, so to speak, right? You’re going to select the mapping information, so you might be mapping energy level to pitch or intensity to volume or a specific kind of light to a type of musical instrument.
And the idea is to preserve the scientific meaning while also creating something that is musically pleasant and engaging. And I think there is a dance between the precision of the data and the creativity that’s needed to create something that will be pleasing, because we work very closely, particularly with people who are blind or low vision, to make sure that what we’re creating makes sense, especially if you are experiencing the data only through sound.
So, we want to make sure that story is being told, and being told well, and there are rules in there, right? You don’t want sounds that are too high or too low, or too soft. You don’t want sounds that occur too quickly, because then you can’t capture all of that information across time.
You need to make sure, if you’re incorporating different kinds of light, that the sound profile is distinct enough that you can tell the difference. Say, in the Galactic Center, the lowest energy is attached to a soft piano. The medium energies, the near-infrared and optical [light waves], are attached to a violin. And then the highest energies, the really highly energetic particles, those are a glockenspiel. So those three types of instruments, you can very distinctly tell them apart. And that helps you sort of understand the differences between the different kinds of light, and where some of them have moments of these little solos, and then there are moments of really beautiful harmonies where they’re interacting together.
So, being able to understand sound well enough to sort of help set some of those rules, and to have them make sense for the listener is a really, really fun part of the job that my collaborators and I get to work on.
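To make those mapping rules concrete, here is a minimal sketch in Python, with synthetic random arrays standing in for real telescope data; the actual Chandra and SYSTEM Sounds pipeline is more sophisticated and is not reproduced here. The scan moves left to right, vertical position sets pitch, brightness sets volume, and each energy band gets its own instrument, as in the Galactic Center piece.

```python
import numpy as np

# A minimal sketch of the mapping described above, with synthetic arrays
# standing in for real telescope data. Time flows left to right across the
# image; vertical position sets pitch, brightness sets volume, and each
# energy band is assigned its own instrument, as in the Galactic Center piece.
rng = np.random.default_rng(42)
HEIGHT, WIDTH = 24, 8
bands = {
    "piano (low energy)": rng.random((HEIGHT, WIDTH)),
    "violin (mid energy)": rng.random((HEIGHT, WIDTH)),
    "glockenspiel (high energy)": rng.random((HEIGHT, WIDTH)),
}

LOW_NOTE, HIGH_NOTE = 36, 84  # MIDI note range the image rows map onto
THRESHOLD = 0.95              # only the brightest pixels become notes

events = []  # (time step, instrument, MIDI note, volume 0-127)
for col in range(WIDTH):                          # scan left to right
    for instrument, data in bands.items():
        for row in range(HEIGHT):
            brightness = float(data[row, col])
            if brightness < THRESHOLD:
                continue                          # skip faint pixels
            # Smaller row index = higher in the image = higher pitch.
            note = HIGH_NOTE - round((HIGH_NOTE - LOW_NOTE) * row / (HEIGHT - 1))
            events.append((col, instrument, note, round(127 * brightness)))

for step, instrument, note, volume in events:
    print(f"t={step}: {instrument:26s} note={note:3d} vol={volume}")
```

Feeding the resulting note events into any MIDI or synthesis library would produce the audible version; the essential design decision is the mapping itself.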
Host: That’s wonderful. And I, myself, am a choir singer and I love that there are even chorale sonifications.
Dr. Arcand: Oh, me too! Well, yes, there is. There is a choral-inspired version of the Whirlpool Galaxy that we could insert here, perhaps.
[SFX: Whirlpool Galaxy sonification]
It’s a really beautiful piece because it incorporates four different kinds of light, from infrared light all the way up to X-ray light, with ultraviolet and optical in between. So, it’s the Spitzer Space Telescope, the Chandra X-ray Observatory, the GALEX telescope, and then also Hubble. And so, we have, well, digitized voices, if you will. So, it sounds like a very intense choral piece as you’re scanning through the data. And I particularly like that one because I am a former choir geek (or once a choir geek, always a choir geek).
But I love the very interesting sort of depth you get when you can layer sound, even through a digitized voice. And it provides a really interesting moment to be able to scan through all of that data. It’s a busy data set. There’s a lot happening. You know, you’ve got these beautiful spiral arms, you’ve got younger stars, you’ve got mature stars, you’ve got exploding stars, you’ve got hot gas, you’ve got black holes and binary stars. There’s so much happening and being able to differentiate that through the sound of voices makes for a really scientifically interesting but also beautiful-sounding moment.
Host: Do you consistently apply the same instrument to a black hole, for example?
Dr. Arcand: So, I think every object, or every object type, is pretty unique. There are some similarities, but there’s also uniqueness to them, and so you really have to pay attention to the parameters you’re setting each time.
And I should mention here, this is not a “me” project. This is a “we” project. I’m working with some fantastic collaborators at SYSTEM Sounds. I’m working with the very talented Christine Malec and other folks around the world who have contributed to this project over time. And I just want to, like, give some credit that this is an intricate project that takes a lot of creative minds contributing to it.
But what I appreciate about the data is that for things that are specific, like supermassive black holes, there’s some really cool stuff that we can provide when we’re working on a sonification, right? So, for example, one of our most popular sonifications is of the Perseus Cluster. And there is this supermassive black hole at the very core, in the center of a galaxy, and it is burping out tremendously into the hot gas around it and causing pressure waves, which are sound waves.
[SFX: Perseus Cluster Sonification]
And about 20 years ago, scientists calculated that that sound wave was essentially a B-flat, about 57 octaves below middle C, so a very deep, very strong note that humans could never hope to hear.
A.) We are way too far away, and there’s no medium between us and it.
B.) Our human ears could never pick up a sound that low. So, for that sonification, it was just about taking that sound wave and bringing it back up into the register of human hearing, and the resulting sound was very lovely, but also kind of intense, eerie.
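As a rough back-of-the-envelope version of that figure (taking the B-flat just below middle C, about 233 Hz, as the reference; the exact pitch standard isn’t specified here), dropping a note 57 octaves means halving its frequency 57 times:

$$
f \approx \frac{233~\text{Hz}}{2^{57}} \approx 1.6 \times 10^{-15}~\text{Hz},
\qquad
T = \frac{1}{f} \approx 6 \times 10^{14}~\text{s} \approx 20~\text{million years per oscillation.}
$$

Bringing it back into the register of human hearing is the inverse operation: scaling the observed frequency up by a factor of 2^57, roughly 1.4 × 10^17.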
Host: Oh, social media went wild with that one! Let’s take a listen.
Dr. Arcand: Yes. It was rhythmic, almost. But I mean, we had people saying, “Oh, it sounds like a horror movie soundtrack,” or, “It sounds like Hans Zimmer was busy creating this sound.”
And the interesting thing is, you know, black holes have a bad rap, cosmically speaking, as these cosmic monsters, these cosmic vacuum cleaners, but they’re actually doing a lot of good. They kind of sit there in their locations, and they’re responsible for, in the case of supermassive black holes, the care and feeding of a galaxy. They are the ultimate cosmic recycling centers, right? So, there’s an immense amount of power that these black holes are giving off, but also an immense amount of responsibility, if you want to think about it that way. And so, being able to take that sonification and make sure it really situated the data accurately, authentically, but also helped get across that sort of, you know, immense power.
And you know, between you and me, I kind of love (as choir geeks), I kind of love that there are these cosmic divas sprinkled throughout the universe that are singing these powerful songs that humans could never hear directly, because it’s not just Perseus that has these sound waves. It’s others, like the M87 supermassive black hole. It is an even deeper and lower note.
These black holes are unique, right? They have different environments. They’re different sizes, they have different amounts of intensity, and so you get different resulting sound waves from them. And I just think that’s a really lovely bit of poetry to think about when you’re talking about sonification.
Host: What has the general response been?
Dr. Arcand: The response has been all over the place, and I’ve really enjoyed hearing from people on how much they’ve appreciated the sonifications.
The project was a bit of a shot. We were just like, “Oh, I’m not sure if this will catch on,” but I feel like this could be useful, and particularly, again, because people who are blind or low vision need to access data in a very specific way. And that was kind of the inspiration for this project. I had worked with Dr. Wanda Diaz, who is an astronomer and computer scientist who is also blind, and she has described, you know, sitting in a math class not being able to see the board as a professor is teaching math, and that sort of thing sticks with me.
So, for this project, it was very intentional to create something that, particularly if you can’t see the image, would help convey all of that science-y goodness that’s caught up inside those pixels, that’s, you know, displayed to us via those photons, those packets of energy that eventually traveled to our telescopes. And so, we’ve had people say that, you know, they felt like they were part of the universe, that they understood that there is some connection. And I’ve had students who are blind say it was the first time they felt included in how space is, you know, presented. I’ve had someone say that this sounds like the universe breathing. I loved, loved that. I loved that quote. But there’s been a true emotional connection through the data, and that, to me, feels a bit rare, but also quite profound.
You know, my colleague Christine Malec, whom we work with, has talked about how, since she’s been blind since birth, she has never had that experience of being directly able to witness cosmic phenomena, but hearing it through sound made her feel empowered, as if she had. And that’s something I would like everybody to have, right?
And it’s just like a curb cut: when you take the time to do something for one group, it always benefits others, right? It’s not just someone in a wheelchair who appreciates a curb cut. It is someone with a cane. It’s someone on crutches, it’s a parent with a stroller. It’s, you know, someone on a bicycle cutting across illegally, whatever the case might be, right? It’s helpful for more people.
Host: What have been some lessons learned from both the research and the process?
Dr. Arcand: Hmm, yeah, we’ve learned a lot! And the whole process I think is wonderful when you can keep learning like that.
My colleagues Matt Russo and Andrew Santaguida, who work on this project, have been really dedicated to dealing with the complexity across, for example, the data that I’m often bringing to the table: many wavelengths composited into one, huge dynamic ranges, data changing over time in time lapses, entire catalogs of data of, say, the X-ray sky. Each time there’s a new data set to work on, it is a new challenge, because there’s no set of, like, perfect standards that you can apply to all astrophysical data sets that will give you a perfect result of a sonification, or an image for that matter, right?
All of the things that I’m talking about today, by the way, if you take out the music part of it, the sound part of it, they also apply to how we actually process the data for imagery as well. And so, there’s a lot of, sort of, symmetry between how we think about our data and how we deliver our data, whether it’s the typical way, through an image, or a different way, through sound or through haptic, vibrational response and whatnot.
And so, we’ve been learning a lot, and I think, you know, there’s always a challenge in that you always want to preserve all of the scientific integrity, of course. But when you have a lot of data, you do actually have to make decisions on what to keep and what not to keep, and that is true for an image as well. If you show all of the data, sometimes you’re showing nothing, because if you stack up all the data, you essentially turn your data into a white swath, right? It’s too much data stacked up, so you do have to be careful to not over-stack your data, to not provide too much data at once. It makes it incredibly difficult as a visual sometimes, but also as layers of sound.
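A tiny illustration of that stacking problem, as a sketch with random arrays standing in for real image layers (not any actual pipeline code):

```python
import numpy as np

# Ten toy "wavelength" layers, each already normalized to the 0-1 display range.
rng = np.random.default_rng(0)
layers = [rng.random((4, 4)) for _ in range(10)]

naive = np.clip(sum(layers), 0.0, 1.0)   # adding layers blows past 1.0 (white)
averaged = sum(layers) / len(layers)     # rescaling preserves the contrast

print(f"naive stack: {np.mean(naive == 1.0):.0%} of pixels saturated to white")
print(f"averaged stack: {np.mean(averaged == 1.0):.0%} of pixels saturated")
```

Nearly every pixel in the naive stack pins at pure white, which is the “white swath” she describes; the same budgeting applies when layering sounds.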
And so, being able to understand how you make that audio meaningful, listenable, so that people can, you know, take away their perspectives and understand it in the way that they need to, from their own backgrounds and from their own experiences, I think, is really useful.
So, you have to think about it as a process or a translation, just like if you’re translating, you know, Spanish into Mandarin, right? You make decisions because there are contextual clues that only make sense in certain situations. You can’t do a perfect A-to-B translation.
Host: Right, often it’s more an interpretation rather than a translation.
Dr. Arcand: Exactly, you have to have that interpretation. And it’s not saying the Mandarin version is incorrect, right, from what the Spanish was. But in order to translate that from one language medium into another, you have to understand enough contextual clues to make it relevant for the end user. The same thing goes for sound, right?
And then, of course, there’s also the question of aesthetics, and balancing that fidelity with an emotional resonance that makes sense. We’re not trying to sort of force-feed emotion to people, but we want it to be pleasurable, so that it’s an experience that you don’t mind having, right? And you could, of course, select really high-pitched noises if you wanted to, if that had some very specific meaning. But we’ve learned, especially working with our blind and low vision community members, that there are some very high sounds and some very low sounds that are just difficult. And sometimes they can, like, provoke a sort of physiological response. Certain high-pitched sounds can cause seizures in certain people. So, we are really careful to select the sounds in a way that makes sense for the majority of the audience.
And I think I mentioned earlier, tempo really matters, right? In an image, if you’re looking at it, you get all of the data at once. Your brain might take moments to process different sections at different times, but you’re presented with all of the information at one moment. It’s different through sound. Sound is a journey across time, right? And so, there is that ability for tempo to play an important part. You can’t go too fast, because you can’t absorb that. You don’t want to go too slow, because then it’s sort of plodding. So, you have to find a balance that makes sense for your listeners and really works, right? And it’s really all about collaboration.
So, we collaborate. We are astrophysicists, we are musicians, we are sound designers and engineers and accessibility experts. And, you know, people, essentially, who come with different experiences in order to provide something that hopefully makes sense.
Host: In the papers you’ve co-authored and authored, did you find there were communities of people who had been exposed to sonification in different science fields?
Dr. Arcand: So, we did do some research on sonification because I’m a scientist and that’s what I want to do. I want to make sure I’m not, you know, tossing spaghetti at a wall. I really want to get as much data as possible so that I can understand how these are being used and whether this is a worthwhile thing. And that’s essentially what our survey research was meant to do.
And what we found is that people really wanted to be engaged with sonifications. They enjoyed listening to them. They enjoyed getting the data that way. They wanted to learn more about the NASA science. They wanted to be a part of that, you know, what was happening.
And then, also, there were some cool things, like people realizing that other folks don’t always get their data the same way that they might, right? So, some people might just listen to information if they are blind or low vision, for example. And I thought that was kind of unexpected and lovely.
And then, also, there was the emotional connection. We had a sort of open field where people could put down how these sonifications made them think, or made them react, and what they thought about them, and the amount of emotion-evoking adjectives was amazing. It was just a lot of awe and wow. Some were peaceful, some were scared, some were, you know, energized, others were calm, depending probably on the sonification and also, you know, on people’s perspectives. So, being able to document that was very cool.
And particularly the blind and low vision community found the overall value of them to be very high; they, you know, wanted more, and that was really exciting. We didn’t ask specifically whether people had come with pre-existing knowledge of sonifications or experiences with them. But of course, most people have, right? Most people have interacted with data through sound at some point, whether it’s the garbage truck backing up and making a really loud, you know, beeping, honking noise, or whether it’s hearing that blip of sound when you’re getting an EKG, right? And so I think there’s just a lot of potential.
There are exciting opportunities in science and in the world, really, to use sound as a valid means of scientific analysis and scientific expression, right? You could think of researchers who have used sonification to understand protein folding, to study heart cells, or to research star seismology, right? Those are all really interesting examples of when you can use sonification as a valid scientific tool, which is what is being done, and kind of what this project was inspired by. But it’s also a really lovely way to talk about the universe.
We live in a Technicolor universe that we’ve only recently begun to see, right? We’re like Dorothy when she stepped out of her little house into Oz, and everything went from black and white to being lit up around her in, you know, candy-colored beauty, right? Human eyesight is incredibly limited. We can access just a sliver of what’s available via the optical light reaching us from various objects, right? The universe radiates in so many more kinds of light and other types of information, such as gravitational waves and cosmic rays, and we’re only just becoming able to capture that full picture, right? We are stepping out of that little black-and-white house, and we are part of that Technicolor universe.
Host: And that makes me think of the sonification of the Pillars of Creation, arguably one of the most famous objects in astronomy. I’m going to play the sonification. Can you walk us through what we’re hearing?
Dr. Arcand: Yeah, that is absolutely one of my favorites, though I feel like I’ve said that of all of them. [Laughter]
[SFX: Pillars of Creation Sonification]
In the Pillars of Creation piece, we are essentially capturing data of stellar formation, right? Baby stars that are forming in these really tall pillars of gas and dust, and we’re using two different kinds of light in the data set. We’ve got this incredibly beautiful and rich data from the Hubble Space Telescope that shows those beautiful finger-like protrusions, if you will, where those little baby stars are nested and cocooning. And then all around it, we have this sprinkling of the high energy X-ray data that’s showing the slightly older stars that are busy throwing these little X-ray temper tantrums, right? And so, the data set is wonderful, and it really gives us a fantastic glimpse of how stars evolve, how they are born and kind of grow up.
But in the sonification, when we’re taking that and translating it into sound, we’re doing a scan from essentially left to right of the data, and so the vertical position of the recorded light is controlling the pitch. But we have, like, two basic sounds for those two different kinds of light that I mentioned, right? We have the optical light, and then we have the X-ray light. And the X-ray light is, like, these sort of, I don’t know, beeps and boops, these short, pure tones, right? Those are what all those little compact sources are.
And for the optical light, we really wanted to emphasize the dimensionality of those towers of gas and dust. And so, there’s this sort of synthesized sound that, like, consists of combinations of sine waves that are controlled by the data in the image.
And so, together, you kind of get this sweeping synthesized sound across that three-dimensional structure, those tall pillars of gas and dust, and then overlaid on it are all of those beeps and boops, which is the best way I can think to describe it, those short bursts of tone that are showing where all of those slightly older stars are forming and kind of, you know, making their way out in that local part of the universe.
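For a hands-on sense of that scan, here is a short, self-contained Python sketch under the same assumptions: vertical position controls pitch, a synthetic “optical” layer becomes stacked sine waves, and sparse synthetic “X-ray” sources become short pure-tone blips. The arrays are random stand-ins for the Hubble and Chandra data, and the real sonification’s synthesis is not reproduced here.

```python
import wave

import numpy as np

# A toy rendering of the scan described above: synthetic arrays stand in for
# the Hubble (optical) and Chandra (X-ray) data. Vertical position sets pitch,
# the optical layer becomes stacked sine waves, and sparse X-ray point sources
# become short pure-tone blips when the scan reaches their column.
SAMPLE_RATE = 22050
HEIGHT, WIDTH = 16, 10
SECONDS_PER_COLUMN = 0.25

rng = np.random.default_rng(7)
optical = rng.random((HEIGHT, WIDTH))        # diffuse gas-and-dust structure
xray = rng.random((HEIGHT, WIDTH)) > 0.97    # sparse compact sources

def row_to_freq(row: int) -> float:
    """Map vertical position to pitch: the top of the image is the highest note."""
    top_hz, bottom_hz = 880.0, 110.0
    return top_hz * (bottom_hz / top_hz) ** (row / (HEIGHT - 1))

samples = int(SAMPLE_RATE * SECONDS_PER_COLUMN)
t = np.arange(samples) / SAMPLE_RATE
chunks = []
for col in range(WIDTH):                     # scan left to right
    chunk = np.zeros(samples)
    for row in range(HEIGHT):
        # Optical light: one sine wave per row, weighted by pixel brightness.
        chunk += optical[row, col] * np.sin(2 * np.pi * row_to_freq(row) * t)
        # X-ray sources: a louder 50 ms pure-tone blip at the column start.
        if xray[row, col]:
            blip = t < 0.05
            chunk[blip] += 2.0 * np.sin(2 * np.pi * row_to_freq(row) * t[blip])
    chunks.append(chunk)

signal = np.concatenate(chunks)
signal /= np.max(np.abs(signal))             # normalize to avoid clipping
pcm = (signal * 32767).astype(np.int16)

with wave.open("pillars_sketch.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)                        # 16-bit samples
    f.setframerate(SAMPLE_RATE)
    f.writeframes(pcm.tobytes())
print(f"wrote {len(pcm) / SAMPLE_RATE:.1f}s to pillars_sketch.wav")
```

Opening pillars_sketch.wav in any audio player gives a crude analog of the sweep-plus-blips texture described above.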
Host: Wow, thank you for that journey. Kim, what do you consider to be your personal giant leap?
Dr. Arcand: For me, I’m always taking smaller steps of learning. I am just on a journey to understand as much as I can. I want to understand the data. I want to understand the science. I want to understand what’s happening with all of this fantastic cosmic stuff that we’re capturing. But then I also want to understand, like, how people process it themselves, how people respond to it, what kind of meaning do people make from it?
And so, for me, I don’t know that I can say it’s a giant leap, but I just want to constantly be moving forward and constantly be learning more about how to do things better. How to, you know, play in that cosmic sandbox and figure out ways that we can really make the most of our universe.
Host: This is a noble effort and it shows there’s space for everybody.
Dr. Arcand: Oh, thank you. I’m very proud of the whole team, and happy to be part of the broader NASA team as well. I think this type of work is just really, you know, inspiring, empowering, amazing, you name it. So, thanks for having me on.
Host: It’s our pleasure. Thank you, Kim.
That wraps up another episode of Small Steps, Giant Leaps. For more on Dr. Kimberly Arcand, and more sonifications, visit our resource page at appel.nasa.gov. That’s A-P-P-E-L dot NASA dot gov. And don’t forget to check out our other podcasts like Houston, We Have a Podcast, Curious Universe and Universo curioso de la NASA.
As always, thanks for listening.
Outro: Three. Two. One. This is an official NASA podcast.