Interview with Mark Ballora at ICMC 2015

Released Thursday, 4th August 2016

Please enjoy this interview from the 2015 International Computer Music Conference (ICMC) in Denton, Texas. The conference brought together scientists and artists from all around the world to share their latest projects in the field of computer music.

In this interview, Jef Tonks speaks with Dr. Mark Ballora. Ballora holds joint appointments in the School of Music and School of Theatre at Penn State University. He is the author of Essentials of Music Technology and The Science of Music. His compositions have been played at electro-acoustic music festivals around the world. He has also written articles describing uses of sonification in the areas of cardiology and computer network security. His sonifications have been used in a collaborative effort with musician Mickey Hart and cosmologist George Smoot in the film Rhythms of the Universe.

Tonks: To begin can you tell us what is computer music and how did you first become interested in it?

Ballora: What is computer music? I mean you could say that now everything is computer music if you wanted to because just about all of the music that we hear is recorded which means that it’s digitized. So you could make the argument that just about everything we hear is computer music.

But traditionally it's meant to describe how people used the computer as a music-making machine. The term dates back to the fifties and sixties, when it was a pretty far-out thing to do. Computers were high-end calculators, so the idea of using one to make music was pretty science fiction, but it's democratized quite a bit since then; now any laptop you buy is going to come with music-making capabilities. Multimedia and computers are pretty mainstream now, so computer music, like I said, can be just about anything. Everybody does it now.

At a conference like this, people are talking about developing new tools for the computer. Creating new kinds of software. Creating new approaches to creating music, new ways of understanding music or processing music or analyzing music through a computer. So the term is much broader than it used to be. The community is a lot more diverse than it was when this conference got started, which was, I think, in the late seventies. It was a pretty small group of people; now it really includes everybody from all walks of the arts.

Tonks: Very interesting so I’ve heard the word sonification being used at this conference. Now what is sonification?

Ballora: Yeah, it's actually Carla Scaletti, one of our keynote speakers here, who coined the term, I think, or she's one of the people who did. She's the one whose definition we use, which is a mapping of numerical relations onto sound.

Sonification means representing information with sound. I tell people it's just like visualization, really, where you map information to something your eyes can take in. In sonification you do the same thing, but you do it for the ears. When you're doing visualization, you're mapping information to things like color or height or size, and when you're doing sonification, you're mapping information to things like pitch or volume or stereo pan position or different types of timbre. So that's what it is in broad strokes.
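
A minimal sketch of that mapping idea, in Python. Everything here is hypothetical: the function names, the ranges, and the choice to map one data value redundantly onto pitch, volume, and pan are invented for illustration, and a real tool would hand these parameters to a synthesizer.

```python
def linmap(value, lo, hi, out_lo, out_hi):
    """Linearly rescale value from [lo, hi] into [out_lo, out_hi]."""
    return out_lo + (value - lo) * (out_hi - out_lo) / (hi - lo)

def sonify_point(x, lo, hi):
    """Map one data value onto three redundant sound parameters."""
    return {
        "pitch_hz": linmap(x, lo, hi, 220.0, 880.0),  # two octaves starting at A3
        "volume":   linmap(x, lo, hi, 0.2, 1.0),      # quiet to full
        "pan":      linmap(x, lo, hi, -1.0, 1.0),     # hard left to hard right
    }

data = [3.0, 7.5, 12.0]
params = [sonify_point(x, min(data), max(data)) for x in data]
```

The redundancy is deliberate: as Ballora notes later with the galaxy bells, more than one cue describing the same quantity makes the pattern easier to hear.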

For someone like me it's a compelling compositional and artistic pursuit: how can you make music out of scientific information?

Tonks: That’s very interesting. How does your recent work involve sonification?

Ballora: Well, I go around and I meet scientists and I ask them if I can sonify their data. Those that let me give me data, and then I come up with a way of making sound out of it.

So, I think you mentioned in the introduction, I did some work with Mickey Hart and George Smoot. That was basically taking everything we could find having to do with the cosmos and translating it into sound. I guess I ought to be kind of careful about this. There are a number of ways of going about it, and I wouldn't want you thinking that it's literal, like recordings of things. It's not like we went up to a galaxy and stuck a microphone in it and recorded it and said, "here's the sound of the galaxy." It's more that you can go online and get information about the spectrum of different galaxies, and the spectrum is a bunch of numbers, and you can translate numbers into sound.

The spectrum lends itself well to sound because sound also has a spectrum. So you can easily translate a light spectrum to a sound spectrum, but it's a matter of transposition. It's a matter of coming up with a sound model using a synthesis program, synthesizing a sound that is intuitively suggestive of the phenomenon you're working with, and then playing that instrument with the numbers of the data set, transposing the numbers in such a way that what you hear is informative to somebody who wishes to study it. It's kind of nice: there's the artistic side and there's the informatics of it.

Like for the galaxies. We did a bunch of galaxies for this film that I contributed to. I had this idea that galaxies are sort of like wind chimes up there in the sky, so I wanted a wind-chimey type of sound, and I came up with something that was kind of bell-like. Then I mapped the spectrum wavelength values to pitch values. So it was this sequence of pitches, and when I first played it, it sounded kind of like a machine gun. It was like, "oh, okay, that won't do. What I want is a nice bell sound, but it really sounds kind of dreadful, so what can I do about this?" And then I thought, well, rather than play them at regular intervals in time, time it so that the timing of a particular bell depends on the difference in intensity value between this data point and the last data point.

So then I got a kind of irregular bell sound, and that had the wind-chime quality that I wanted. I thought it worked nicely. Not that we've used it to study the spectra, but it was important that we could if we wanted to. So there's a kind of redundancy there. You could tell how the spectral data behaves by the succession of pitches, but also by the timing differences, the rhythm. They reinforce each other, and that works nicely when you have more than one cue describing the same thing. I think that was about the first thing I did for them, and that was the first time Mickey [Hart] was like, "oh, this is great, this is great, thank you."
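
That timing trick can be sketched as a small scheduler. This is not the actual patch from the film; the wavelength-to-pitch transposition and the gap constants below are made up, but the structure follows what he describes: pitch comes from the spectral wavelength, and the gap before each bell depends on how much the intensity changed since the previous data point.

```python
def bell_schedule(wavelengths, intensities, min_gap=0.05, gap_per_unit=0.01):
    """Return (onset_time, pitch_hz) pairs: pitch encodes wavelength,
    and the irregular spacing between bells encodes intensity changes."""
    onsets = []
    t = 0.0
    for i, wl in enumerate(wavelengths):
        pitch = 200.0 + wl  # placeholder transposition into an audible range
        onsets.append((round(t, 3), pitch))
        if i + 1 < len(intensities):
            delta = abs(intensities[i + 1] - intensities[i])
            t += min_gap + gap_per_unit * delta  # bigger change, longer wait
    return onsets

# Three data points: a big intensity jump, then a small one.
sched = bell_schedule([400.0, 450.0, 500.0], [10.0, 30.0, 35.0])
```

The redundancy he mentions falls out of the structure: the same data drives both the pitch sequence and the rhythm.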

Tonks: So how has your work with sonification changed the way that you think about music?

Ballora: I don't know. I don't know that it's changed the way I think about it, because I'm just following in the footsteps of other people that did this kind of thing. It goes back to composers like Iannis Xenakis, who was using probability theory in the 1950s to generate musical material. He called it stochastic music. As the number of events approaches infinity, what kind of behavior do you get? He would liken it to a swarm of insects or the sound of rain on a roof, something like that. What are the characteristics of this sound cloud that results when you're generating a lot of events? You're not focused on the individual event; you're focused on the overall sequence of a whole bunch of events occurring at about the same time.

It led to an approach to music called granular synthesis, where you use the computer to generate hundreds or thousands of really short sound events per second. So you're not so much composing each event; you're using the computer to calculate events that fall at random within a range of, let's say, pitches or volumes or wave types.

So as a composer you're dealing with the overall shape, and you're letting the computer do the really fine-grained work of coming up with individual events. I knew about those things before I started doing sonification. Sonification just seemed like a new way into those approaches to composition, which I had already found pretty interesting.
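
A granular-style event generator can be sketched like this. It's a hypothetical illustration with invented names and ranges: the composer sets the ranges, and the computer fills in a thousand short grains at random within them.

```python
import random

def make_grains(n, pitch_range=(200.0, 2000.0), dur_range=(0.005, 0.05), seed=0):
    """Generate n grains as (start_time, duration, pitch_hz) tuples,
    drawn at random within the composer-specified ranges."""
    rng = random.Random(seed)  # seeded so the cloud is reproducible
    grains = []
    t = 0.0
    for _ in range(n):
        grains.append((t, rng.uniform(*dur_range), rng.uniform(*pitch_range)))
        t += rng.uniform(0.001, 0.01)  # dense, slightly irregular onsets
    return grains

cloud = make_grains(1000)  # a few seconds of dense texture
```

Note that nothing here specifies any individual grain; the composer's decisions are entirely at the level of the ranges, which is the shift in perspective Ballora is describing.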

Tonks: Do you see sonification as a way to understand data in a new way or as a tool for music composition?

Ballora: Yeah. Both. I mean, there is an auditory display community. We're here at the International Computer Music Conference, but there's also an International Conference on Auditory Display that meets each year, and they tend to be focused on understanding data in new ways through sound, which makes a lot of sense.

I mean, when people started using the term, and this was in the early nineties, they were beginning to recognize that this was the information age, and we're putting up new sensors all the time, giving us more information. We're at a point where we have more information than we know what to do with. We get more data than we can meaningfully comprehend.

Some of my friends in the Information Sciences and Technology College back at Penn State call it "cogminutiae fragmentossa." I think that's the term they've coined for it. They say we get all this information and we just create digital landfills out of it, because we don't know what to do with it once we get it. We don't have time to go through all of it and try to make sense out of it, so there's more than we can handle.

That's what Gregory Kramer was writing about in the early nineties; he's the guy who organized the first auditory display conference. His take on it was that the ear is also a pretty important sense. I mean, we're used to visualizing data. In fact, we visualize it even when there's no clear need for it; we come up with new ways to visualize stuff just because we get our kicks out of it or we think it's fun. But as we go through life, if we're healthy we have both eyes and ears, and they serve complementary and supplementary functions as we navigate our way through our environment. If we're getting more and more information from different sources, it seems only natural that we should rely on more than one sense to make sense of it.

Tonks: Do you think our auditory system has any kind of advantage over the visual system for interpreting and finding meaning in data?

Ballora: Yeah, well, that's interesting. I mean, that's what perceptual psychologists get into. Why do we have ears and why do we have eyes? Why do we have both? What do they mean? They both give us a picture of our environment, and we tend to rely on the eyes. But there's a lot of information that the ears get, and we tend to disregard it.

You see blind people who walk around and they seem to know where they're going. People have told me that if you blindfold yourself, you learn to see with your ears pretty quickly. You start relying on that information that the ears are getting, the information we usually don't pay much attention to, but it's there. You can hear subtle differences in things. Right now I'm talking to you and I can hear my voice bouncing off that wall over there. I can hear it bouncing back at me. You can hear reflections off of things. It's kind of like sonar, kind of like what dolphins and other underwater creatures do. They hear reflections off of things. We do that. That's how you can hear where the door is. That's what a bat does.

Tonks: Echolocation.

Ballora: Echolocation, yeah. I was in a house not too long ago and a bat got stuck in there. It's like, "just open the door; he's going to find his way out pretty soon." And he did, because he's doing echolocation. He hears "over there, over there, it's not bouncing back at me" and flies out that way. You can learn to do that. That's what blind people are doing as they tap their canes. They're listening for echoes. They're listening for reflections. You can hear where the staircases are, where the door is, pretty easily.

The eyes are good at giving us pretty static information about things you know like size, color, texture, shape.

Tonks: Very rudimentary?

Ballora: Rudimentary, if you want. But mostly, static things. That wall hasn’t changed much since we started talking. It’s pretty much the same as it was. But the ears are really good at dynamics. At things that change.

I just heard somebody on NPR a week or so ago saying that the ear processes information about 20 times faster than the eyes do. So the ears really set the stage for everything else that our senses bring to us. It's all based on that impression we get with our ears.

My line I've been using for the past few years is: if you don't believe me, there's a simple test you can do at home. Next time you rent a DVD, turn the sound down and turn on the subtitles so that you know what they're saying and can follow the plot. But I guarantee you're not going to enjoy the movie as much, because the soundtrack is really what embraces you. It's really what brings you into the world of that movie. When you see the movie you may be more struck by the visuals and the special effects and everything like that, but it's really the sound that captures your attention and puts you in that environment.

So in sonification that’s what we’re trying to do with data. We’re trying to create this sonic environment that you can live in. That you can interact with. So like the galaxy I was talking about a minute ago. I had redundant cues for spectral components of it. So there was a pitch component and there was a rhythmic component. Our ears are really good at those things. Our ears are really good at detecting rhythms. At detecting small changes in pitch. You don’t need any musical training to sense those things. We’re born with that.

It's probably something that's hardwired into us from evolution, and it probably goes back to the days when we had predators and it was fight or flight. Our ears developed into very sensitive sensors of change in the environment. You know, "here's something over there. Let's do something about it. Let's run, or let's fire an arrow." The ears are really good at dynamic changes.

The ears also happen to be really good at following multiple patterns of information at the same time. That's what we do when we listen to chamber music, to multi-part counterpoint, and we can hear all these simultaneous melodies.

There have been tests that people have done; Gregory Kramer did one. They did a simulated operating room. They had a virtual patient that they were taking through surgery, and they had people doing different roles: you're the nurse, you're the doctor, you're the anesthesiologist. They had to follow the vital signs of the patient. They ran it where people only had visual cues and where they only had auditory cues.

With the visual cues there's the fact that you've got to be watching something. You've got to have your eyes fixed on a screen, and that's a disadvantage if you're in an environment where there's a lot going on; staying fixed on the screen shuts you off from everything else.

But aside from that, they found that people reacted more quickly and more accurately to the sound cues when the patient was having a change in vital signs. There's evidence to show that we can follow multiple streams of information, and that we respond more quickly and accurately to multiple streams of sound information than we can to multiple visual streams of information.

So sonification seems only common sense when we're talking about highly dimensional data. Data is hard to visualize above about four dimensions, so if you need to go higher than that, it's hard to do it visually. The ears are the natural sense to turn to in that case. You turn it into a piece of chamber music, you have five instruments playing, and then you've got a better chance of following it.

Tonks: So you can't get much more compelling than sonification of the universe, but what are some of the most compelling sonifications that you've listened to?

Ballora: Well, I could tell you about some other things I've worked on that have been kind of fun. Last year I met somebody in meteorology at Penn State who studies hurricanes. She was going to an international hurricane workshop and conference in Korea last December. You can see satellite videos of hurricanes at the NOAA website, the National Oceanic and Atmospheric Administration. It's like watching a silent movie: you see this thing spinning and you see different colors, and that's what they use.

She was curious about what it would sound like if I put a soundtrack to those. So she gave me some data sets. The datasets had information about air pressure and latitude and longitude and the symmetry of the storm system. So I made these things and put them on there, and she liked it, and then she asked for a few more. I wound up doing 11 of these storms, and that's what she took to Korea.
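
As a sketch of what a four-dimensional mapping like that could look like. The assignments, ranges, and constants below are guesses for illustration, not the mapping actually used for the storms:

```python
def sonify_storm_frame(pressure_hpa, lat, lon, symmetry):
    """Map one frame of storm data onto four sound parameters.
    symmetry is taken as 0.0 (asymmetric) to 1.0 (perfectly symmetric)."""
    return {
        "pitch_hz": 880.0 - (pressure_hpa - 900.0) * 4.0,  # lower pressure, higher pitch
        "pan": (lon + 180.0) / 180.0 - 1.0,                # longitude as stereo position
        "volume": 0.3 + 0.7 * (abs(lat) / 90.0),           # louder toward the poles
        "roughness": 1.0 - symmetry,                       # asymmetric storm, rougher timbre
    }

# One frame of a hypothetical Atlantic storm.
frame = sonify_storm_frame(pressure_hpa=950.0, lat=25.0, lon=-75.0, symmetry=0.8)
```

With a mapping along these lines, intensity (pressure) and symmetry land on different perceptual dimensions, which is what let the listener notice the storm intensifying without becoming more symmetrical.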

I said, you know, "How'd it go over?" and she said, "You know, pretty good. They liked it; they said they found it engaging." I said, "Well, did you get anything from the sound that you didn't get from the visuals?" She said, "Yeah, actually. When the storm gets more symmetrical, it becomes more intense. It's like when an ice skater brings her arms in and then she spins more quickly. It's the same kind of phenomenon, and we can see that. But there were some times when the storm got more intense when it didn't get more symmetrical. I wouldn't have known that by watching the video, but by listening to it I could hear it. It was very interesting to hear the storm getting more intense but also to watch the shape of the storm and notice, when it's not symmetrical, look at that, it's more intense."

I was like, okay, good, there's my golden ticket. We got something out of the sound that we couldn't have gotten from just the visuals. That was pretty nice. I don't know how useful it's going to be for hurricane researchers. I mean, they may enjoy it. I don't know if it's going to teach them anything.

Maybe it will. That was a pretty nice line, that we could hear the intensity when we couldn't see it. But I think she was most interested in it for the freshmen in her intro to meteorology class. It's the engagement thing that I was talking about before. If they hear a soundtrack to it that they find compelling, they'll find it more interesting and they'll study it a little bit more.

That was a fun one because it was multi-dimensional. I had these four data dimensions that I had to work into sound somehow. That was kind of fun.

I met somebody in forestry, and he studies Arctic squirrels, squirrel body temperatures. I was like, "Really, you study squirrel body temperatures, do you?"

“Yes we do. We look at this population of squirrels up by the arctic circle. In this environment, we know how many squirrels there are, and we know where they live. We know what they’re doing.”

They capture them and implant them with a little sensor, and then they let them run around for a year. Then they capture as many of them as they can a year later, take the sensor out, and then they've got a year's worth of body temperature data for the squirrels. So they can hear when they give birth. They can hear when they go underground in preparation for hibernation. They can hear when they hibernate. They can hear when they come out of hibernation.

This is interesting because of all of these interlocking cycles that make up an ecosystem. When they breed affects the predators who eat the squirrels, and whether they have the food supply they need to reproduce.

The example he gives in his paper is: birds go to a particular breeding ground each year to lay their eggs, and they go there when the temperatures are right. But let's say, because of climate change, it gets warmer a little earlier in a particular year. The birds get there before the plants have grown that they need to eat, so they don't have the food supply that they need. That affects their population the next year, and that affects the population of the animals that feed on the birds. So it's phenology, or phrenology; I never get the term right, but it has to do with the life cycles of animals and plants.

This is what this fellow goes around teaching. He focuses on the squirrels. He goes into schools and tells grade school kids about squirrels and their body temperature cycles. He sometimes found that they don't seem fully engaged by the graphs he shows them of the squirrels' body temperature over the course of a year.

What I'm interested in is sonification as educational enhancement, as a way of introducing people to a field of study that is new to them, making sound part of the doorway into it. This is what gets them interested in it.

So we're thinking, what if there were a kind of soundtrack, a kind of groove that they could hear? You get a regular body temperature cycle; it's a daily cycle as it gets light and dark, light and dark. Then they go underground. Depending on whether they're male or female, they're there for a certain period of time before they go into torpor, when they're in hibernation.

So they're conscious, but they're in sensory deprivation. They don't know when it's light and when it's dark, and the body temperature gets irregular sounding. It's kind of interesting. Then when they go into hibernation, it plunges. I mean, you can hear that, but I have a feeling the interesting part is going to be the change from active to preparation-for-hibernation, because that's when it's kind of regular, and then it gets kind of drunk and irregular before it just plunges.

If we make this a part of how kids study science now, we'll get a generation of kids who grow up with the idea. I grew up with the idea that you look at science. I mean, I was making graphs in about the 4th grade; that's what they had us do, you know. If you brought kids up with the idea that you also listen to science, then we're going to have a different kind of scientist in the next generation. We're gonna have a scientist who's used to listening to stuff as well as looking at stuff.

It's not going to be like this sudden thing where somebody invents a sonification that suddenly changes everything. That's what I was hoping for for about 10 years, and I don't see that happening. You know, maybe it will. That would be dandy. But I would be happy enough to plant some seeds and let the next generation of people develop scientific tools that incorporate sound in new ways that help them understand the cosmos and the earth differently than I grew up doing it.

Tonks: So would you say that probably the most compelling application of sonification is education?

Ballora: Yeah, I would. At least, that's what I'm making my focus right now. I go to the ICAD conference, and it's mostly scientists and psychologists there. I'm kind of the artsy music guy who is usually there. Sometimes there are a few others, but I mean, I've done all these weird and wacky things with, you know, George Smoot and Mickey Hart, and this meteorology person, and this forestry person. I think the range of things I've sonified is unusual.

What I liked about working with Mickey Hart was that he wanted it to have integrity, but I also knew that it was going to have to grab him musically right away. If he didn't like it right away, it wouldn't matter how well I could explain why it was valid. If it didn't sound good, if it didn't grab him in the gut right away, he wasn't going to be interested in using it. So it had to be musically compelling.

So it had two bottom lines that I needed to address: it had to be musically compelling and it had to be scientifically informative. Even if we haven't gotten around to studying it scientifically, there has to be that potential there.

It's got to have the integrity so that if you wanted to, you could study it. Using music as a scientific education tool, I think, is a really compelling idea right now. That's what I'm working toward with some colleagues back at Penn State. I've got a colleague in music education who wants to work with me on this, to help create some educational programs.

I've gone around and I've talked to some people about the film. They're visualization people and science people. They come up to me and they say, "So what assessment data do you have on this?" and I say, "I don't have any. I mean, it's a movie. What assessment data should I have?" I'm not trying to be snarky. You're making a movie, you're making an album; what assessment do you have?

I mean, people buy it or they don't buy it. But how do you assess how good that is? I understand you have to do that, but I don't know what the right way of doing it is. Rather than getting together a focus group, having them listen to some things, and doing a bunch of statistical gymnastics with it. I mean, I could do that, I guess. But I'm more interested in putting a nice thing online, having teachers be able to get to it from all over the place, and having them tell me how it works out for them.

That, to me, would be a more interesting kind of assessment. More qualitative. Not such a quantitative assessment. But I think that’s the direction I see it going in now.

She's done some work with Smithsonian Folkways. So the other day I got together with her and she was showing me, you know, the mariachi site. You can bring up this little animation of a mariachi band: you can click on this guy and hear the trumpet, click on this guy and hear the guitar, click on this guy and hear the vocals. So you can hear the different components of the mariachi band. It's a really nice, engaging introduction to this form of music. If we could do something analogous to that in an educational setting, I think that could be really valuable.

I think museum settings would be valuable. The Earth and Mineral Sciences Museum at Penn State was interested in these hurricane sonifications, so we're just about ready to put a little exhibit together where people can go in, and there will be a touch screen, and you can pick your hurricane and you can listen to it and watch it. Then you can go to another screen that breaks down what the sonification is. You can hear, okay, here's how we did pressure, here's how we did latitude, here's how we did longitude. Then you can go back and listen to another hurricane. You can hear all of these things at once. The director of the museum there is interested in doing more of this. He's interested in the squirrel data. He's interested in something I did two years ago with Antarctic ice data.

He had Antarctic ice data that went back four hundred thousand years: the volume of the ice, the surface area of the ice, and the solar energy reaching the ice. All this stuff from 400,000 years ago. I worked on this with a grad student, and people enjoyed that. They enjoyed hearing all of these things working in combination. There were these kind of quasi-cyclic patterns you could hear, and so he would like to have that in the museum also. You know, a museum exhibit where people can go and explore these data sets in that way would be pretty nice.

If we also found a way to put it online, so that it isn't site-specific, you could listen to it anywhere too. And then if you can get to a museum, there's a bigger and better display with better loudspeakers. Just like the Smithsonian Folkways site is in association with the Smithsonian Museum: you can go there if you can get to DC, but you can also look at it online anywhere.

If we could do that with sonification, it would be really interesting to see how people like it. How do people take to it? How does that help people approach studying squirrels or studying the Antarctic in different ways? It's just a different lens through which to experience this kind of information.

Tonks: Dr. Ballora, it has been such an honor to talk to you.
