Episode Transcript
0:04
It's predicted that this year we'll see artificial intelligence transform industries and redefine human interaction with machines, but it's also facing challenges to meet ethical commitments. I'm Mike Landis. To further discuss the benefits and potential downsides of AI, UT Tyler Radio connects with University Assistant Professor of Communication, Dr. Eric Gustafson. Endless possibilities with AI and huge ethical challenges, am I right?
0:29
Absolutely, and thanks for having me on, Mike. I think when we get to the conversation of ethics, oftentimes we've already had those changes happen, right? And the questions surrounding ethics, I think, really are at a flashpoint, on college campuses particularly, and then sort of leaking out into other industries as well, where we first need to figure out what even are those challenges. What are the questions that we need to ask? People in a number of different industries, very brilliant people all around, have isolated some of those questions. We still have them tinted with the different emotions that come from something so new, and we've yet to completely remove ourselves from the equation, or at least do the best we can to assess these things on their own merits as opposed to on our own fears or excitements about them.
1:32
Perhaps the flashpoint of this international conversation came with the demonstrations of what ChatGPT was capable of. Tell us more about the upside and downside of this free-to-use AI system.
1:39
Well, one, ChatGPT, if you've ever peeked into it, is immensely helpful. It's a great tool and a great aid for helping to find information, to conduct research, to learn. It's a great tool, but it also poses a lot of different issues because it replaces those skills that we used to have to do on our own: those research skills, those of identifying what is credible and what's not. And while with every new technological development we often get a new way of knowing, or a new way of coming to know or creating knowledge, it also means we lose a pathway. But I guess we are in the stage of figuring out: is that new way that ChatGPT proposes, or allows us to sort of come to knowledge and create knowledge, is that good, bad, ugly, somewhere in between? What do we do with it?

2:35
Sounds like it can be all of those things.

2:41
Yeah, I had a great colleague who once said, yes, but. Or in communication studies we always say it depends, it's contingent on context. All the time, it depends.
2:53
In a day and age in which we are bombarded daily with all kinds of information that may or may not have been professionally vetted, how will we know for sure, going forward, what is true and what isn't?
3:05
I'm not sure we will, especially with the upcoming election, and we think about the different sort of campaign messages, and that can exacerbate those difficulties of identifying what's true and what isn't. I'm not sure we will for a little bit. Right now we have different fact-checkers or AI checkers that have developed rapidly in concert with these technologies, but we also have programs that actually strip AI-generated content of the markers that would be caught by the detectors. So in concert we have all these technologies and software specifically sort of running together, and they're leapfrogging well ahead of our questions right now.
3:59
I'm thinking of the 1950s and 60s sci-fi movies in which machines would take over the world. They'd start talking to each other, and then they didn't need humans anymore. I mean, is that even technically possible?
4:14
I think some of the most interesting predictions, or sort of explorations into what the future will be, seem so far-fetched that we can't fully grasp them. They seem so fantastic, so far off, and yet if you transported someone from the 50s or 60s to here, they'd probably go, what is all of this? And I think we will have the same realization in another 50 and another 60. So I think all of those at the time of their writing seemed so far-fetched. And then, all of a sudden, in the 50s and 60s we were already there. You know, if we talk about the Turing machine and think about sort of this birth of computing, this birth of sort of the first artificial intelligence, if you will, we already had it then, and then it's finally exploding now, 70 years later. I think we'll see that with other hosts of technologies that are running alongside this.
5:19
It's said that security and privacy are essential requirements of developing and deploying AI systems, but that's also the biggest problem facing AI. It would feel a little bit like the foxes are essentially guarding the henhouse at this point in time. Is that an overestimation of it?
5:36
I don't think it's an overestimation. I think it may be a way of characterizing exactly what's always been the case, which is, when we look at technological developments, especially those that push the frontiers of our understanding of the world, we often see war and security at the forefront. One of the technologies running alongside artificial intelligence that is said to be going to supercharge it in the next decade or two is quantum computation, which represents a fundamentally different way of computing from classical computation. But our original impetus for developing quantum theory and quantum mechanics was to create the atom bomb. It also helped us create MRI machines, helped us create the transistor, which was the fundamental unit for classical systems. But oftentimes we do see the foxes are the ones funding the research that makes these things possible, and oftentimes pushing those boundaries. And we get to find out about it later.
6:59
We're just months away from what promises to be a very contentious presidential election. You mentioned this a moment ago. In theory, a voter in Smith County could get a phone call with Joe Biden's voice asking them to vote for Donald Trump. It may sound ridiculous, but technically that's possible, right?

7:05
Last semester, I had a student create an AI-generated podcast, and he used the voices of Joe Biden and Donald Trump for the two podcast members, and you would be shocked at how good this sounded. It's absolutely a possibility. It's more than a possibility. It's most likely a probability. And to the extent that, to your earlier question, can we tell the difference? Sometimes not. Sometimes, yes. It'll develop along with that human touch of the people in these campaigns saying, oh, if we tweak that, this will sound better, and maybe no one will know. It's really hard to tell, but it'd be very interesting. Yeah, exactly.
7:46
Deepfakes with video are a little scary. I see some of them being used for comedy bits. You see those on the Internet. They involve those two men, the president and Donald Trump. But someone with enough money and a determined enough agenda could certainly get a lot of myths and disinformation out there.
8:03
Absolutely. I think with prior elections we've seen that explosion of this idea of fake news, and to our college students, to younger individuals, it's second nature to them now to question these things, whereas some of us who are older had a touch more faith in them. With video and audio, what we're seeing is fake news sort of pushed to its extreme, and it might reverse to its opposite, to use sort of a phrase from Marshall McLuhan, a media scholar from the 20th century. If we take this so far, we push it to its opposite. Instead of simulating these voices and crafting a message that creates trust, we put all these messages out there and none of them create trust, and so we accomplish the opposite of what we thought.
8:53
There's been a call for international collaboration and ethical standards to take place. How quickly could safeguards be established and put into place, should that happen?
9:03
For the election, I'm unsure. For, um, the legislative system in general, there have been talks that have started. The European Union just pushed an act through that levies significant penalties around AI transparency and sort of attempts to safeguard these things, but legislation always travels slower than technological development. So whether or not the right safeguards were put in place prior to the election will be something we'll see soon.
9:43
Just over 30 years ago, the World Wide Web went into the public domain, a decision that fundamentally altered the entire past quarter century. Are we fretting over the unknowns about AI in the same way that some did about the Internet years ago, or are the concerns about AI more substantial?
10:01
I think if you look to any technological development, you're going to find anxieties with it. If you go back to the ancient Greeks, Socrates bemoaned literacy because it removed knowledge out of the human mind, and how do we know if someone's smart if they can't remember it? Try to tell a college student that today. And the Internet is a great example too, because we find the roots of it first being developed by the military to share documents in the late 1960s, and then in 1983 they switched a protocol, which was what we consider sort of the birth of the internet. And then, probably 20, 30 years later, we see it integrated into every single facet of our lives, right? So I don't think the concerns about AI are unwarranted. I think we're just now realizing that this has been a long time coming, and oftentimes we just need to... if only we could learn about these things before they exploded onto the scene, I guess.
11:06
Any final thoughts you'd like to share about AI?
11:10
You know, AI, it's the next big scary thing. When we ask ourselves questions about it, it's not productive to say this is awful or this is amazing. It's more productive to weigh both of them. AI represents the tip of the iceberg for me.
11:25
Thanks for listening as UT Tyler Radio connects with Dr. Eric Gustafson of the University's Department of Communication. For UT Tyler Radio News, I'm Mike Landis.