Episode Transcript
Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.
0:00
This podcast is sponsored by
0:02
Queen's University Belfast, one
0:04
of the UK's leading research universities.
0:07
This historic Russell Group University has over 99%
0:09
of research assessed as
0:12
world leading or internationally excellent
0:14
in REF 2021. Their
0:16
researchers are making critical breakthroughs
0:19
in food, ionic liquids, cohesive
0:21
societies, and data-driven healthcare. Visit
0:24
qub.ac.uk and see how they are
0:26
bringing research to reality. This podcast
0:28
is supported by the Icahn School
0:31
of Medicine at Mount Sinai, the
0:33
academic arm of the Mount Sinai
0:35
Health System in New York City
0:38
and one of America's leading research medical
0:40
schools. How will advances
0:42
in artificial intelligence transform medical
0:45
research and medical care? And
0:47
what will this mean for patients? To find
0:50
out, we invite you to read a special
0:52
supplement to Science Magazine prepared by
0:54
Icahn Mount Sinai in partnership
0:56
with Science. Just visit our
0:59
website at www.science.org and search
1:01
for the frontiers of medical research
1:03
dash artificial intelligence, the
1:06
Icahn School of Medicine at Mount Sinai.
1:09
We find a way. This is the
1:11
Science Podcast for January 5th, 2024.
1:13
I'm Sarah Crespi.
1:20
Welcome to a new year at the Science
1:22
Podcast. We start out with a look back
1:24
at 2023 and
1:26
the top online news stories with
1:29
editor David Grimm. There'll be cats,
1:31
but also electric cement and mind
1:33
reading. Next, can a machine tell
1:35
a tuberculosis cough from other kinds
1:38
of coughs? Researcher Manuja
1:40
Sharma joins me to talk about her
1:42
work collecting a cough data set to
1:44
prove this kind of discrimination is possible
1:47
with just a smartphone. Every
1:53
year, our online news team publishes hundreds of stories
1:55
from all sectors of science. We talk about cats,
1:57
but we also talk about the science. We hear
2:00
about lots of physical science and
2:02
kind of news of the weird on
2:05
occasion. And at the
2:07
end of the year, Dave, our online news editor,
2:09
rounds it all up with some
2:11
staff favorites and fan favorites,
2:13
reader favorites. So Dave, what
2:15
have you brought us today? Could I guess that there's
2:17
a cat story in the mix? Yeah, you
2:19
know, Sarah, every year I feel like we've got a
2:22
cat or a dog story that may be mandated
2:24
at this point by me. But
2:28
actually, it's usually because, you know,
2:30
a lot of these stories are some of our
2:32
most popular of the year and that always ends up
2:34
either being a cat or a dog
2:36
story. Yeah, we have to watch out for the internet.
2:38
Right. Influencing everybody to become
2:41
cat content sites. That's right. Okay, so
2:43
I'm going to go with cats first
2:45
then. Okay. This one is
2:47
about the number of facial expressions that cats
2:49
make. Did you write the story? I
2:51
did not write this story. Actually, it was a remarkable
2:53
year because we had a lot of cat and dog
2:56
stories that I did not write. What's
2:58
cool about this story, which is true of a lot of
3:00
our top 10 list, it not only was a popular
3:02
story, but it's also an exclusive story. So we were
3:04
the ones to break the story and
3:06
the only ones who had the story. And
3:08
so this is a story about facial expressions and
3:10
cats and how many they make and how they
3:13
actually make a remarkable number. Researchers
3:16
have found that cats make
3:18
276 facial expressions. That does seem like a
3:20
lot. I don't know how many people make.
3:23
I don't know how many dogs make. I don't really have
3:25
any point of reference, but that cat makes
3:27
faces at me all the time. Right. Was
3:29
it surprising? I mean, I guess they're social
3:32
with us, but is that enough
3:34
to give them such a diverse repertoire
3:37
of facial expressions? Well, the thing I really
3:39
like about the study is people kind of
3:41
think of cats as sort of these antisocial
3:43
creatures, which is clearly not the case. I
3:45
mean, they live in our homes, they sleep
3:47
in our beds, and also they have to
3:49
often interact with other cats. They don't
3:52
have language like we do. So
3:54
how are they communicating affection, anger,
3:58
other types of communication if they don't have
4:00
language? And it turns out, you know, the things that cat
4:03
lovers notice with their cats, the squinty
4:05
eyes, the whiskers that sometimes go in
4:07
different directions, the ears that go in
4:09
many different directions. The dangerous direction
4:12
back. The dangerous direction, right? And
4:14
all of these can combine in
4:16
various different combinations to produce all
4:19
of these different expressions. What
4:21
did the researchers do to figure out
4:23
this quantity of cat expressions? Well, I
4:25
was kind of jealous. They spent a
4:27
lot of time in a cat cafe
4:29
in Los Angeles recording, video recording all
4:31
of these cats interacting with each other.
4:33
They actually got 194 minutes of cat
4:38
facial expressions. And that's how they
4:40
got this very large number. You know,
4:42
Sarah, you had mentioned earlier, how
4:44
do we know how this compares? And
4:46
we know that chimpanzees, obviously very
4:48
closely related to us, produce 357 facial
4:50
expressions. So it's actually not too far
4:53
from chimpanzees. But the numbers surprisingly
4:55
haven't been tallied for humans or for
4:57
dogs. So we don't know how that
4:59
compares to some of these other animals.
5:01
We should film some people. That's right.
5:05
Seems like that would be pretty straightforward to do. That's
5:09
our mandatory cat story. Now
5:11
we're going to talk about cement batteries.
5:13
I don't usually think
5:16
of cement as something that can hold
5:18
charge, right? Or pass charge. It's like
5:20
a big insulator. So
5:23
how can you turn cement into a battery? Cement
5:26
is not a very good conductor.
5:28
But what the study did
5:30
was it started with not what
5:32
can cement do, but imagine if
5:34
it could do these things, right?
5:36
So imagine if the foundation of
5:38
your home, which is probably made of
5:41
cement, could store power. The foundation
5:43
could power the home or roads
5:45
made of cement. What if those could
5:48
transfer that power to cars in the
5:50
roadway and therefore be a source of
5:52
power for, let's say, electric vehicles, or
5:55
cement storing the power that's
5:58
generated by green technology, like
6:00
wind and solar. So that's
6:03
the possibilities. That's the possibilities.
6:05
Of the future. So
6:07
what do we do to get there? We start
6:09
with something really cool, which I'd never heard of. It's
6:12
a powdered form of carbon known
6:14
as carbon black. And what's cool
6:16
is we're talking about a very
6:18
futuristic technology that's reliant on a
6:20
technology that goes back a long
6:22
time. Since antiquity, carbon black has
6:24
been used to make black pigment.
6:26
But what you can do, what
6:28
the researchers did, was they found
6:30
when they mixed this with cement,
6:33
particles of carbon black repel
6:35
water. So they come together and
6:37
they actually form these long interconnected
6:39
tendrils within the hardening cement that
6:41
can act like a network of
6:44
wires. Maybe you see where we're
6:46
going here. Now if you've got wires, you've
6:48
got something that can all of a
6:50
sudden conduct electricity. Yeah, so you can
6:53
move and store charge. And so then
6:55
you can store power into cement. Exactly.
6:58
How big is this? I'm assuming it's
7:00
not the size of my garage
7:02
floor. No, it's not. And there was
7:05
no powering of homes or cars done
7:07
in the study. It was actually just
7:09
powering up a few LED lights. And
7:11
the idea is now that they were able to show that
7:14
it actually works on a small scale, can
7:16
they power it up to a larger scale
7:18
to do some of these Gee-Whiz things we've been
7:20
talking about. Very interesting. All right,
7:22
Dave, we're gonna move on to the
7:25
next story, which is about how mad
7:27
scientists can help the public better understand
7:30
the practice of science rather than
7:32
the results of science. Who
7:34
is your favorite mad scientist? I already have one picked
7:36
out. My favorite mad
7:39
scientist is from my favorite movie, Back to
7:41
the Future. So it's gotta be Doc Brown.
7:43
Doc Brown. Doc Brown. Doc
7:45
Brown, unfortunately not in this story, but who
7:47
is your favorite mad scientist, Sarah? Okay, so
7:49
I actually have a tie between Young Frankenstein.
7:52
Okay, that's a good one. And The
7:54
Man with Two Brains, have you seen that
7:56
movie? Steve Martin. Yes, either way,
7:58
there's a brain transplant involved with
8:00
my mad scientists, my top mad scientists.
8:03
Not to date myself, but I saw
8:05
that movie in theaters. Nice. I'm
8:07
going to say these brain transplants, they don't
8:09
really seem ethical. They don't. And nor
8:11
does probably some of the stuff that Doc
8:14
Brown was doing, or the other mad scientists
8:16
discussed in the story: Professor Frink from
8:18
The Simpsons. Oh yeah. Cave
8:21
Johnson from the video game Portal.
8:23
So we got mad scientists in
8:26
all types of media: TV, video
8:28
games, movies, everything. And
8:30
some people would say that gives people a
8:32
bad impression of what science is like. It
8:35
does. And actually the point of this story, yeah,
8:37
or the point of the researchers that
8:40
we will get to eventually, uh, that we interviewed
8:42
for the story is to actually do the opposite
8:44
to show not that mad scientists
8:46
are actually good, but to actually show that
8:48
discussing this stuff can actually be good for
8:50
science. It can help the public understand things
8:53
like ethical oversight and review boards, all
8:55
the things that we have in place
8:58
to ensure that science is
9:00
conducted not only sort of
9:02
feasibly, but also ethically and
9:05
humanely. Yeah. So this
9:07
is, this had to do with a panel that was conducted
9:09
at a research conference
9:12
and then they had an audience of 450 participants that
9:14
were sort of asked
9:16
all these questions, you know, if mad scientists
9:18
did X, so developed a freeze ray that
9:21
could potentially maybe save people's lives. You
9:23
know, we might be freezing a child to
9:25
prevent him or her from crossing the street.
9:28
But what if the freeze ray like peeled
9:30
their skin off, you know, that would be too
9:32
gruesome, but you know, like, how do you
9:34
sort of, how do you sort of deal
9:36
with ethical implications of testing and developing a
9:38
technology like that? Yeah. So this is one
9:40
of my favorite stories of the year because
9:43
we don't often get to do pop
9:45
culture in our stories. We also often
9:47
don't get to run a still from
9:49
The Simpsons in our stories. And this
9:51
one has one of those as well.
9:53
Yes. And I know that's a severe
9:55
hardship for you. So
9:58
I like this story because it makes the IRB fun.
10:00
Yes, exactly. I think our ethical review
10:02
boards need a shiny new reputation. That's
10:04
right. We pride ourselves on writing these
10:07
stories that we're trying to
10:09
translate a scientific discovery to the general public
10:11
so they understand what does it mean and
10:13
why is it important. And this is sort
10:15
of a part of science we don't often
10:17
talk about the review boards, but we also
10:19
want to give the public a sense for
10:21
how these work. And this is a fun
10:23
way to do it. Definitely. All right.
10:25
Next, we have a story with some sound.
10:28
And the question these researchers, I think, were asking
10:30
was, can you take recordings from the brain and
10:33
figure out what is happening in there? Like
10:36
what people are listening to or seeing?
10:38
And in this case, the
10:40
researchers try to reconstruct a song from
10:43
brain recordings. I'm going to play
10:45
the song, the reconstructed song now.
11:00
Okay, Dave, would you have
11:02
been able to identify what song this is? Probably
11:05
not. And for a couple of reasons. One,
11:07
as you can tell from the recording, or
11:09
maybe not tell from the recording, it's a
11:11
little, it doesn't sound like
11:13
any song you've probably heard.
11:15
But the other reason is that the
11:18
song is actually Another Brick in the
11:20
Wall, Part 1, which is the
11:22
lesser known of the Another Brick
11:25
in the Wall songs from Pink
11:27
Floyd, the much more famous
11:29
version being Part 2. So
11:31
listeners out there shouldn't feel bad if they
11:34
couldn't get it. But it is still
11:36
very remarkable, because this is basically the
11:39
case of researchers taking
11:41
brain readings from a number of participants
11:43
who had heard a variety
11:45
of songs, but Another Brick in the Wall Part 1
11:47
being one of them, and training
11:49
an AI to sort of
11:52
be able to see could it piece
11:54
these recordings together? Could it interpret them?
11:56
Could it sort of spit out what
11:59
these participants had actually heard? And
12:01
while it's not a perfect rendering
12:04
of the song, it's also pretty
12:06
remarkable. There are some elements there. There
12:08
are some elements there, especially if you
12:10
know the song. Of course we
12:13
can't play the song because copyright, so
12:15
go Google it. Oh no, too
12:17
bad, too bad. But yeah, I
12:19
do hear elements if you listen
12:21
to them one after the other that are
12:23
similar. So how do you
12:25
get recordings like this from somebody's brain? Yeah,
12:27
right. Well, this was not easy. I mean,
12:30
this is from something that was done almost
12:32
a decade ago. There were
12:34
electrodes inserted into the brains of people
12:36
with epilepsy. The purpose of
12:38
the study was to record brain activity during a seizure,
12:41
but there was also part of the study that was,
12:43
you know, they were playing music. And
12:45
so many years later, scientists took these
12:47
readings for this new study to try
12:49
to figure out, can we reconstruct this
12:51
song? But it's not just that. It's
12:54
because that's maybe too fun for a scientific
12:56
study, right? Mind reading is important, but maybe not
12:58
the goal here. What was really cool about the
13:00
study was not just the reconstruction of the song,
13:02
but that the researchers were actually able to pinpoint
13:05
a new area of the brain that
13:07
seems to be involved in the perception
13:10
of musical rhythm. And the
13:12
other cool thing is the application. So you could say,
13:14
well, why are we even doing this? But you can
13:16
imagine we have obviously a lot
13:18
of people in the world that can't speak as
13:20
a result of strokes or injuries or degenerative diseases. And
13:22
so if there was a way to sort of
13:24
be able to read their minds as it were
13:26
and be able to sort of kind of spit
13:29
out what they're trying to say. Now, obviously we're
13:31
not there with this study, but
13:33
studies like this are the first steps
13:35
in being able to do something like
13:37
that. Okay. All right, Dave. Last
13:40
story we're going to talk about, you have to check out the rest of
13:42
the top ten online after this one. And this
13:44
one, I think, I feel must have garnered a
13:47
lot of interest from the headline alone. So I'm
13:49
going to just read it. Crocodiles
13:51
are alarmingly attuned to
13:53
the cries of human infants. Key
13:56
word there being alarmingly, right? Exactly.
14:00
All right, so this seems like it might have been
14:02
a difficult study to do. How did
14:04
researchers decide that Crocs had
14:06
a preference for stressed out babies? Well,
14:09
Sarah, I want to let you know
14:11
that no babies were harmed in the
14:13
making of this study. Instead, researchers went
14:15
to a zoo in Morocco that houses
14:18
more than 300 Nile crocodiles. And
14:22
what they did was they played on loudspeakers
14:24
a number of different
14:27
cries, some from human babies,
14:29
some from other primate
14:31
babies. And the
14:33
babies cry sort of differed in what sparked
14:35
the cry. Some of these cries were just
14:38
like, they were kind of upset because they were
14:40
getting a bath, you know, and they didn't want to get
14:42
a bath. But some of them were much more frantic cries,
14:44
you know, they're getting a shot and they don't really know
14:46
what's going on. So there's a lot more panic in
14:48
the cry. And the question was, would
14:50
the crocs respond, really respond at
14:53
all to these cries? And would they
14:55
respond especially well to a cry of
14:58
an infant in distress? The
15:00
big reveal is yeah, they're alarmingly
15:03
attuned to the cries. Alarmingly
15:05
attuned, right. Okay. But
15:07
I was also surprised that people weren't so
15:10
good at this. Like even with babies, not
15:12
like bonobos or whatever, but like chimpanzees. No,
15:14
people weren't good at it with babies, human
15:16
infants. Right. That was the other thing.
15:18
When the researchers played these sounds for people,
15:21
they knew that they were all cries,
15:23
but they couldn't really make that distinction
15:25
between like, is the baby especially kind
15:27
of panicked, you know, worried
15:29
in this particular sound or is
15:31
this kind of just kind of a more normal
15:33
run-of-the-mill cry? Should I play them? For
15:36
me, it's hair raising. I can play them though.
15:38
Yeah. Well, we got, this
15:40
is all about the sound files, right? Well, let's
15:43
do distressed baby. And
15:47
then complaining, but not really upset
15:50
baby. I
15:54
can tell the difference, but I don't know. You
15:56
can tell the difference, right? So you would make
15:59
a great Nile crocodile. Okay,
16:02
so when we say that crocodiles responded
16:04
to these cries, what was the response
16:06
like? What did they do? Well, there
16:08
were speakers again playing these cries and the
16:11
crocs tended to approach them much more likely
16:13
to approach the speakers when they were playing
16:15
these cries, sometimes fight the speakers. Alarmingly,
16:18
Sarah, they would sometimes fight
16:20
the speakers. So this is
16:23
a pretty substantial response from these
16:25
crocs. So the next
16:27
leap in logic is that they were trying to eat
16:29
the baby? I mean, could they have been trying to
16:31
help the baby? Right. And you
16:33
know, let's give these crocs the benefit of the
16:35
doubt. One of the sources we spoke to in
16:37
the story said, you know, we
16:40
know that they're responding, but we don't really know
16:42
what they're trying to do. Because actually,
16:45
we know that these particular species
16:47
of crocodiles, that they respond to
16:49
the distress calls from their own
16:51
young. And so possibly, this is
16:53
just a sort of a nurturing
16:56
reaction, you know, they're concerned, you know, it
16:58
doesn't quite explain the biting of the
17:00
speakers. Well, hey,
17:02
don't some crocodiles put babies in their
17:04
mouth and carry them around? I don't
17:06
know. But that's possible. Yeah, I guess
17:09
that may be true. So okay, so
17:11
I'm going to go for altruistic. Okay,
17:13
crocodile in this. All right, Dave,
17:15
thanks so much for bringing us some really
17:17
fun stories this year. There are five more
17:19
faves on the site. There's five
17:21
more. So we got, you know, we've got our
17:24
top 10 list. So we've discussed five, there's a
17:26
bunch of other ones that are very popular, very
17:28
loved by either our staff or our readers or
17:30
both. So be sure to go to the site
17:33
and check the rest of those stories out. All
17:35
right, thanks so much, Dave. Thanks, Sarah. Dave Grimm
17:37
is the online news editor for Science. Visit
17:40
science.org/podcast for a link to the list. Up
17:43
next, listening for tuberculosis coughs
17:45
with Manuja Sharma. Before
17:54
we get to the next part of the show, I'd like
17:56
you to consider subscribing to News from
17:59
Science. Every week, we share
18:01
stories from our news site, News from Science.
18:04
Science journalists and editors kindly come on here
18:06
and tell a story for our ears that
18:09
they've been spending sometimes weeks or even
18:11
months reporting and writing. If
18:13
we were counting, our award-winning journalists publish
18:16
as many as 20 stories a
18:18
week, from tracking policy to
18:20
investigations, international science news,
18:22
and yes, when we find
18:24
new secrets about mummies, we report on that too.
18:27
It's an unbelievably valuable service. If
18:30
you were here with us during early COVID days,
18:33
you must have heard how plugged in
18:35
and devoted our news team truly is.
18:38
Please consider supporting non-profit science
18:40
journalism by becoming a
18:42
subscriber for around 50 cents
18:44
a week. To subscribe,
18:46
go to science.org/news, scroll
18:49
down a little bit, and click subscribe on
18:51
the right side. What can
18:53
you tell from a cough besides, oh, that person is unwell?
19:07
This week in Science Advances, Manuja Sharma and colleagues describe
19:10
a method of screening for
19:13
tuberculosis infection using coughs and a smartphone. Hi,
19:17
Manuja. Welcome to the Science Podcast. Hi, Sarah.
19:19
Nice to meet you. Thanks for
19:22
having me. Thank you. This is super
19:24
interesting. Tuberculosis is a big killer. More than
19:26
10 million people per year fall ill with this disease and
19:28
it's on the rise lately. Manuja, what do we do
19:30
now to screen for active TB? What
19:33
kind of screens are there? The
19:36
current gold standards for TB diagnosis include sputum
19:38
culture or GeneXpert molecular
19:42
tests, and they are basically
19:44
done on the sputum that's collected
19:46
from patients after they cough,
19:49
and that is sent for either a
19:51
PCR analysis or a culture
20:00
to see if there is any TB
20:02
bacteria growth in it. These two
20:05
are the most common ones though
20:07
they are not available at every
20:09
level of clinics. Right, so not
20:11
everybody's gonna have the PCR machine.
20:13
Yes, these are not really available
20:15
at the peripheral health centers and
20:17
also it's not very easy to
20:19
collect the sputum especially for kids.
20:21
Oh yeah. So that is another
20:23
thing that adds complexity. So availability
20:25
of tests is one issue, and also
20:27
some of the tests are not
20:29
that easy to do. There's a
20:31
need here and your study actually aimed
20:34
to use coughing sounds as a screening
20:36
tool. Do you know how people have
20:42
attempted to use coughing sounds before
20:40
for disease diagnosis? Looking at cough,
20:42
both the cough counts as well as
20:44
the cough features goes way back. Cough
20:46
count is more popular you know when
20:49
you go into the clinic, doctors ask
20:51
you know have you been coughing, how
20:53
often are you coughing? There are studies
20:55
showing that those subjective methods are not
20:57
that accurate. People really don't remember you know
20:59
how often they are coughing and so we need
21:01
counters to help with that. Beyond
21:04
just the frequency of coughing it's
21:06
also the features of the cough
21:08
that's been of interest lately. There
21:10
have been studies done even
21:12
in the 1990s to understand, you know,
21:14
different kind of pulmonary diseases and
21:17
trying to understand if the cough sounds different
21:19
and what they have seen is
21:21
there is some differences because of
21:24
how the cough is produced and
21:26
essentially because of that difference in
21:28
the source of sound, we get
21:30
different kind of frequency
21:32
mapping of coughs and that's
21:34
been recently studied, COVID-19 made
21:36
it quite popular and so
21:39
there has been a lot of interest in
21:42
the field of tuberculosis also because cough is
21:44
one of the primary symptoms that's used
21:46
for symptom screening. So if we can add
21:48
more objective analysis in there, can we add
21:51
more analysis to the features and try to
21:53
understand from the sound of the cough if
21:55
it sounds different from any other pulmonary disease
21:57
that would be an easy-to-use screening
22:00
tool at peripheral health centers. So what makes
22:02
it such a hard problem to solve,
22:04
you know, using cough sounds as
22:06
data for a screening tool? Well,
22:09
the first is actually establishing the
22:11
fact that there is signature in
22:13
cough. That's not yet established. So
22:15
we are still at that research
22:17
question whether there is signal in
22:19
cough, disease-specific signal in cough. The
22:22
first hurdle is actually getting access
22:24
to the dataset of
22:26
subjects coughing because they have TB
22:28
and subjects naturally coughing because they
22:30
have some other health issues and
22:33
trying to understand if we see
22:35
a difference there. How were you
22:37
able to capture coughs from people
22:39
with tuberculosis? So this dataset was
22:41
collected at Kenya Medical Research Institute
22:43
in Nairobi where patients were coming
22:46
in with TB symptoms and they
22:48
were screened for TB symptoms using
22:50
the standard screening protocol. And
22:52
then they were asked whether they want
22:54
to sit for a two-hour study in
22:57
a room where they'll be just, you
22:59
know, sitting and passively coughing, not
23:01
asked to produce coughs voluntarily. So all
23:03
these subjects sat in a room and
23:06
their cough was recorded using three
23:08
different devices. And we made sure
23:10
that the room did not have
23:12
a lot of ambient noise and
23:14
also there was less talking in
23:16
the room so that we get
23:18
clean cough. And similarly, we had
23:20
a lot of subjects who did
23:22
not have TB, or who
23:24
were incorrectly screened for TB and
23:27
they were also enrolled in the study
23:29
to get some coughs from non-TB subjects.
23:31
How many coughs did you end up using in
23:34
your model? We have a cough dataset of around
23:36
33,000 coughs from 149 subjects. That's a lot of
23:40
coughs. Yeah, that's a lot of coughs.
23:43
For the balanced
23:45
dataset for training and testing, we used
23:47
around 21,000 coughs. And we're
23:50
also releasing that dataset with the paper.
23:52
Right. So I actually asked you for cough
23:55
sounds and we can play them here. We
23:57
have some that are from TB patients. Yes.
24:02
And some that are from non-TB
24:04
patients. And
24:07
we can't tell the difference. Doctors can't
24:09
tell the difference. And actually, for
24:11
your study, you transform those piece
24:14
of audio into basically a visual,
24:16
you know, like a graph. And
24:18
that was important to get a computer
24:21
or machine learning to tell the difference between them.
24:24
Yes. Like you said, anyone with
24:26
normal hearing abilities will not be able to
24:28
differentiate between the two coughs. And
24:30
so the idea was to look at it in
24:32
a different domain, try to understand and see
24:35
how the frequency and energy is changing over
24:37
time. And can we see
24:39
any features that are specific to TB
24:42
or other pulmonary health issue. And
24:44
so we transformed that into scalograms,
24:47
which is essentially a plot of
24:50
how the frequency is changing over
24:52
time. And we took
24:54
images of those plots and sent
24:56
them through a ResNet-18 classifier
24:58
and looked at the results of the
25:00
binary classifier, whether it's classifying that as TB
25:03
or not TB. And how good
25:05
was the classifier at making that distinction? So
25:07
we used cross-validation with our
25:09
dataset. And with that, we
25:12
got an accuracy of around 79%. This
25:16
obviously needs further validation using independent
25:18
test sets. And that's true for
25:21
every machine learning model we need
25:23
for the validation. So
25:25
one thing I noticed is that you used different
25:27
kinds of recorders to capture the coughs.
25:30
And maybe I'm interested in this because I
25:32
work in audio. But I thought it was
25:34
interesting that the smartphone that you used turned out
25:36
to be the best way to record these
25:39
coughs. Is that surprising to you? It
25:41
was a little surprising because we used
25:44
three kinds of microphones. One was a
25:46
microphone similar to what's being used for
25:48
this podcast recording, a high-end microphone.
25:51
Then there was a smartphone microphone. And
25:53
then the third one was a conference
25:55
microphone that we see in conference rooms
25:57
on the table. The worst. The
26:00
worst. My hunch was that the
26:02
worst would be the conference microphone,
26:04
then the smartphone, and the high-end microphone the
26:06
best recording, but what we saw
26:08
was the smartphone did the best
26:10
and one of the reasons could
26:12
be that when the subjects are
26:14
coughing, the smartphone had automatic gain
26:16
control so it was able to
26:18
adjust the amplitude based on how loud
26:20
the subject is coughing which was not
26:23
the case for the higher-end microphone and
26:26
we got a lot of output that
26:28
was saturated, so essentially clipping, so
26:31
that was I think one of the reasons
26:33
for not getting good coughs from there. Oh
26:35
that was some good audio chat. Yeah
26:37
for sure so
26:39
yeah if you have a microphone that doesn't compress the
26:42
sound when it gets too loud you end up with
26:44
yeah it's just a wall of sound that you can't
26:46
see any features in and so here you go the
26:49
smartphone is good for that. So another
26:51
question about your data real quick before we move on
26:53
to how you can turn this into an
26:55
application. This is unsourced information
26:57
but I do feel like people
27:00
from different places sneeze differently, do
27:02
they also cough differently place to place?
27:05
So that could be true so all
27:07
of this data was collected in Kenya
27:09
at a research facility there so it
27:11
was a study just done in that
27:14
demographic and we saw the difference there.
27:16
There could be differences in how people
27:18
are coughing for example here versus there
27:21
and that's just because you know it's
27:23
part of the sound people sound different
27:25
and so the coughs can also look
27:27
different and that's where the machine learning
27:30
comes in you train the model you
27:32
need more data and you also need
27:35
various transformations that you can do to map
27:37
models from one data set to the other
27:39
so yes there could be differences there and
27:41
that's something to work on when you know
27:44
the screening tools come out actually is used
27:46
in the public. Yeah so it'll be interesting
27:48
to find out how much coughs vary place
27:51
to place and how this can be adapted
27:53
to that if they do it
27:55
might be that these features are universal
27:57
regardless of you know your vocalization preference.
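Earlier in the conversation, Sharma attributes the smartphone's surprisingly good recordings to automatic gain control: the gain backs off before a loud cough saturates, whereas a fixed-gain chain clips. Here is a toy sketch of that effect; the feedback rule, thresholds, and signal are my assumptions for illustration, not any phone's actual firmware.

```python
# Toy illustration of why automatic gain control (AGC) can help:
# it turns the gain down when output approaches full scale, so loud
# coughs stop clipping; a fixed-gain chain saturates instead.
import numpy as np

def record(signal, gain, agc=False, limit=1.0, duck_rate=0.01):
    """Simulate a recording chain; optionally duck the gain near clipping."""
    out = np.empty_like(signal)
    g = gain
    for i, x in enumerate(signal):
        y = g * x
        if agc and abs(y) > 0.9 * limit:
            g *= (1 - duck_rate)       # back the gain off smoothly
        out[i] = np.clip(y, -limit, limit)
    return out

# Quiet background, then a sustained loud "cough" burst.
burst = np.concatenate([np.full(200, 0.1), np.full(800, 1.5)])

fixed = record(burst, gain=1.0)              # fixed gain: clips hard
ducked = record(burst, gain=1.0, agc=True)   # AGC: gain ducks, clipping stops

print(np.mean(np.abs(fixed) >= 1.0))   # large fraction of samples clipped
print(np.mean(np.abs(ducked) >= 1.0))  # far fewer once the gain has ducked
```

The fixed-gain chain clips the entire loud burst; the AGC version clips only the first few dozen samples before the gain settles, which is the "no saturated output" behavior Sharma credits for the cleaner smartphone coughs.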
28:00
Do you know anything about the relationship
28:03
between what you're seeing in your scalogram,
28:05
what you're hearing in the cough and what is going
28:07
on with the person? Why does it
28:10
sound different? One of the
28:12
theories is that TB impacts the
28:14
lung tissues, and because
28:16
of that impact on the lung tissues,
28:19
the cough sounds different.
28:22
And we also played
28:24
around with different frequency ranges, trying
28:26
to understand which frequency range was
28:29
having the most impact on the
28:31
model. And we saw that
28:34
data in the range of 10 to
28:36
4 kilohertz was impacting the model
28:38
the most. Okay, that's super interesting.
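The preprocessing Sharma describes — turning a cough recording into a scalogram, a picture of how frequency content changes over time, which is then fed as an image to a binary classifier such as ResNet-18 — can be sketched in a few lines. This is a hypothetical illustration, not the authors' code: the Morlet wavelet, the 100 Hz–4 kHz band grid, and the synthetic "cough" are all my assumptions.

```python
# Sketch of a scalogram: magnitude of a continuous wavelet transform,
# one row per frequency band, one column per sample. In the study the
# resulting images went to a CNN; here we only build the representation.
import numpy as np

def morlet_scalogram(signal, fs, freqs):
    """|CWT| of `signal` using simple Morlet wavelets at `freqs` (Hz)."""
    out = np.empty((len(freqs), len(signal)))
    for i, f in enumerate(freqs):
        width = 5.0 / (2 * np.pi * f)              # ~5 cycles per wavelet
        wt = np.arange(-3 * width, 3 * width, 1 / fs)
        wavelet = np.exp(2j * np.pi * f * wt) * np.exp(-wt**2 / (2 * width**2))
        out[i] = np.abs(np.convolve(signal, wavelet, mode="same"))
    return out

fs = 16_000
rng = np.random.default_rng(0)
# A fake half-second "cough": a windowed burst of noise.
cough = rng.standard_normal(fs // 2) * np.hanning(fs // 2)

freqs = np.geomspace(100, 4000, 32)   # 32 log-spaced bands, 100 Hz - 4 kHz
S = morlet_scalogram(cough, fs, freqs)
print(S.shape)                        # (32, 8000): bands x time samples
```

Saved as an image, each `S` would be one training example for the binary TB/not-TB classifier the interview mentions.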
28:40
And so perhaps there's something about
28:42
the way the lung tissue has
28:44
changed that's like affecting that frequency
28:46
of the cough. Could be. Now,
28:49
if this smartphone is good enough
28:51
to record your sound for your
28:53
data set, is it also good
28:55
enough to take sound from people
28:58
and use it to diagnose or
29:00
at least say this person might have
29:02
TB based on their cough? Yes,
29:04
and that's what the next step for
29:06
the project would be. And even
29:08
the model that we have built is a lightweight model
29:10
that can go on the smartphone. The
29:13
idea would be to capture the sound with the mic on the
29:15
phone and run it on the phone and
29:17
then see the analysis in the clinic. So
29:19
there is a challenge that comes in with
29:21
a screening tool based on passive coughs:
29:23
when the patient comes in, we'll
29:26
have to wait for the patient to cough
29:28
and then do the analysis. There have been
29:30
studies using forced coughs, where the
29:32
patient comes into the clinic and you ask them to
29:35
cough, and then you take those coughs
29:37
and analyze them. But what we
29:39
saw in a study is that forced
29:42
and natural coughs differ a
29:44
lot, and a model trained
29:46
on passive coughs is not translatable
29:48
to voluntary coughs. Yeah.
29:51
So there are some challenges there,
29:53
but we think generally patients having
29:55
TB do cough a lot and
29:57
so maybe, if not in the
29:59
clinic, they could be given that app on
30:01
the phone if they have a phone, or they
30:03
could be given a phone to take home for
30:06
a day, and it can record night coughs, and
30:08
then that can be used later on to
30:10
analyze and screen for TB. Okay.
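Passive capture of night coughs, as suggested here, requires the phone to pick cough-like events out of hours of audio before any classification. A hypothetical energy-threshold detector; a real app would use something more robust, and all thresholds here are made up:

```python
# Flag candidate cough events in a long recording: frames whose RMS
# energy far exceeds the recording's median frame energy.
import numpy as np

def find_bursts(x, fs, frame_ms=50, thresh_ratio=4.0):
    """Return (start, end) sample ranges of high-energy frames."""
    n = int(fs * frame_ms / 1000)                 # samples per frame
    frames = x[: len(x) // n * n].reshape(-1, n)
    rms = np.sqrt(np.mean(frames**2, axis=1))
    thresh = thresh_ratio * np.median(rms)        # robust noise floor
    return [(i * n, (i + 1) * n) for i, r in enumerate(rms) if r > thresh]

fs = 8000
rng = np.random.default_rng(1)
x = 0.01 * rng.normal(size=5 * fs)                # 5 s of quiet noise
burst = np.sin(2 * np.pi * 500 * np.arange(1600) / fs)
x[2 * fs : 2 * fs + 1600] += burst                # loud 0.2 s event at t = 2 s

events = find_bursts(x, fs)
print(len(events), events[0][0] / fs)             # → 4 2.0
```

Only the flagged segments would then be stored and analyzed, which keeps both battery use and privacy exposure low.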
30:13
Are there other diseases, other human body
30:15
sounds that we could listen for,
30:18
analyze and help with disease diagnosis?
30:20
There have been studies done in
30:22
general on overall voice analysis, because
30:24
you know how people sound worse
30:27
when they are down with something.
30:29
So we do think different sounds
30:31
can be important, not just cough,
30:33
just in general sound analysis of
30:36
how the person is talking or
30:38
even like the wheezing sound that
30:40
you know at times is produced.
30:42
So all of that can also
30:45
help in analysis. But for the purpose of
30:47
this study, we only looked at cough, and
30:49
one of the reasons was that cough is
30:51
used in symptom screening for tuberculosis, and
30:54
we wanted to see if there's any
30:56
way to make that symptom screening more
30:58
robust. So if
31:00
people in the clinic, or, you know, at home,
31:02
have this app and they come in
31:04
and they say, oh, my app says I have TB,
31:07
what does the clinician do differently? Do
31:09
they still, you know, do those next
31:11
tests, but this kind of helps
31:14
rule in or rule out TB quicker
31:16
and easier? So it's essentially
31:18
a screening tool to help facilitate
31:20
the present screening tool, which is
31:23
just, you know, asking patients whether
31:25
they have fever, cough, or night
31:27
sweats. So just adding
31:29
on to that with objective analysis
31:31
of cough. And not just at the patient
31:34
level, it can also be used
31:36
to understand hotspots of TB transmission,
31:38
because it is one of the
31:40
infectious diseases causing the most deaths, right
31:42
after COVID-19. So it's
31:44
very critical to understand where the spread
31:46
of TB is. So if a
31:48
lot of people in the community have the app and
31:51
we get the data, and we are getting
31:53
a lot of TB-sounding coughs
31:55
in this area, that could help
31:57
bring in interventions to prevent that
32:00
transmission. So we think it
32:02
can be used at two levels, individual
32:04
and community levels. That's wonderful. What's
32:06
next for this research? What else do
32:09
you want to learn about coughs or
32:11
how to do screening with them? So
32:13
the first step is validating this result.
32:15
We have done a cross-validation study and
32:17
we want to evaluate our models on
32:19
an independent test set. And
32:22
with that, we still have
32:24
ongoing cough recordings happening in
32:26
the area. So we want
32:28
to switch now to
32:30
smartphone and do all the cough collection
32:32
on that and try to see if
32:34
we can use the model
32:37
to do some real-time prediction and how
32:39
that behaves. Yeah, that's great. Thank
32:42
you so much, Manuja. Thank you, Sarah, for
32:44
having me here. Manuja Sharma was
32:46
a PhD student in the Department
32:48
of Electrical and Computer Engineering at
32:50
the University of Washington when she
32:53
worked on this paper. You can find a
32:55
link to the Science Advances paper at
32:57
science.org/podcast. And
33:00
that concludes this edition of the Science
33:02
Podcast. If you have any comments or
33:04
suggestions, write to us
33:07
at sciencepodcast at aaas.org.
33:09
To find us on podcasting apps,
33:11
search for Science Magazine, or
33:14
you can listen to the show on
33:16
our website, science.org/podcast. The
33:18
show was edited by me, Sarah Crespi,
33:20
and Kevin McLean, with production help from
33:22
Megan Tuck at Podigy. Jeffrey
33:25
Cook composed the music. On
33:27
behalf of Science and its publisher, AAAS,
33:29
thanks for joining us.