Episode Transcript
0:00
ABC Listen, podcasts,
0:02
radio, news, music
0:04
and more. From
0:11
the first rudimentary programs of the 1950s to
0:14
the sophisticated algorithms of today, the
0:16
evolution of artificial intelligence has been
0:19
rapid and exciting to watch. From
0:21
navigation apps to ChatGPT, artificial
0:24
intelligence has the potential to change the way
0:26
we all live. But this week on Download
0:28
This Show, what does the future look like?
0:31
This is your guide to the week in
0:33
media, technology and the future. My name is Mark
0:35
Fennell and welcome to Download This Show. Yes,
0:47
indeed. It is a very special episode of Download This
0:49
Show and a very big thank you to our guests
0:51
talking all about AI. I want to
0:53
confirm that they are real people. They are definitely not AI. Joining
0:56
us this week, special guest, Kylie Walker, the
0:58
CEO of the Australian Academy of Technological Sciences
1:01
and Engineering. Welcome to Download This Show. Thanks
1:03
so much, Mark. It's a pleasure to be here. And
1:05
good friend of the program, reporter with the Guardian, Josh
1:07
Taylor. Welcome. Good to be back. I don't think I've
1:10
been able to deepfake myself yet. I
1:12
mean, give it time. The night is young. We're
1:14
not that far off, I assume. All right.
1:16
I want to start off with how we got here,
1:18
right? There's been so much discussion over the last couple
1:20
of years, really, about
1:23
what AI can do now. But let's talk
1:25
about how we got here, right? 10,
1:28
15 years ago, what did AI actually look
1:30
like, Kylie? Look, it
1:32
was very, very different. It was probably a
1:35
lot less sophisticated than you would imagine an
1:37
AI to be right now. It was more
1:39
of a concept that many, many computer scientists
1:41
were working on and a goal. But
1:44
it had started to become applied in
1:46
ways that probably weren't quite so visible
1:48
to the public eye. So things like
1:50
ChatGPT, obviously, which burst onto the
1:53
scene last year and really brought AI
1:55
up into the public domain. The
1:58
icing on the cake, if you like. There
2:00
have been applications in
2:02
robotics, in industry and in that sort
2:04
of everyday work, just keeping society ticking, for a
2:06
while now. So what happened? Has it been
2:09
a sort of a slow development behind the scenes
2:11
or was there sort of a major
2:13
development that shifted? Because it feels like there was
2:15
quite a defined moment where AI was something that
2:17
was sort of used on us and then suddenly
2:20
there was this suite of services that we could
2:22
now use it, Kylie. Yeah, absolutely.
2:24
I think that that big change
2:26
was the large language model generative
2:29
AI. So ChatGPT is obviously the
2:31
most well-known of those. And
2:33
that, yes, it hasn't just suddenly happened. Of
2:36
course, these things take a lot of time and a
2:38
lot of people and a lot of build up to
2:40
get to the point where suddenly it seems to
2:42
be everywhere. This research has been going
2:44
on for years. It's been applied for years, this technology.
2:47
It's the generative models, the large language
2:49
models and those creative kind of applications that
2:51
have suddenly made it very prominent to the public
2:53
eye because all of a sudden we now
2:55
have technology, machine learning technology
2:57
that everyday citizens can interact with in
3:00
a way that's fun, in a way
3:02
that's interesting and intriguing and a way
3:04
that's also creating some risks and some
3:06
questions about the application of technology. So
3:09
we've gone from things like, you know, council
3:11
trucks looking for potholes and reporting them in
3:13
an automatic way through to the
3:15
point where now you can create those deepfakes.
3:17
You can mash up images and
3:19
create sort of interesting poetry using
3:22
these generative models. With technology,
3:24
it often is about the everyday.
3:26
People are paying more attention to
3:28
technologies that they understand, that they
3:30
use, that they can see in their
3:33
everyday life. But really these advances are going on
3:35
across a whole range of sectors all of the
3:37
time. Josh, what do you think the psychological
3:39
impact is? We go from AI being a
3:41
thing being sort of used on and around
3:43
us to something that we can now engage
3:45
with. People can make AI images, people can,
3:47
you know, chuck their school reports
3:49
into ChatGPT. How
3:51
do you think it's changing us knowing that that
3:53
technology is there and accessible? I think it is
3:55
just making it a lot more present in people's
3:57
lives and making people feel like they can
4:00
actually be a part of it. Whereas, as
4:02
you were saying before, it was things being done
4:04
to us before in terms of organizations and governments
4:06
had huge amounts of data. They were building up
4:08
these pieces of technology. They could actually use that
4:10
data and learn things. We talk about social media
4:12
algorithms and things like that quite a bit. I
4:15
think we're still at the early phases of it. We'll probably
4:17
get into this later but I think that probably
4:19
one of the clear and present dangers is it
4:22
does make things harder to figure out what's real
4:24
and what's not as we go forward. I
4:27
think for now, it's just really about
4:29
the democratization of AI for everyone at
4:31
the moment. What do you think, Kylie?
4:33
You've actually been there. Take a beat.
4:36
I was. I was. At the same
4:38
time, I think that's a really important
4:40
point there about large data sets because
4:42
really the size of the data sets
4:45
on everyday citizens has been rapidly evolving.
4:47
It's been growing very, very quickly over
4:50
the last decade or so. We are
4:52
as citizens, we're giving a whole lot
4:54
of companies and governments our data for
4:56
free really when we sign up for
4:58
social media platforms, when we sign up
5:00
to marketing websites, when we sign up
5:03
to programs that give us points for purchasing
5:05
things. All of those really
5:07
are us saying: here, have all of the information about
5:10
the way that I behave, the things that I
5:12
think, what I look at, what I research, what
5:14
I buy. Now those
5:16
powerful data sets are really an
5:18
absolutely crucial foundation for AI to
5:20
work really, really effectively because machine
5:22
learning, yes, you set the parameters
5:24
for it early. The researchers or
5:26
the computer scientists who are building
5:28
it will create rules,
5:31
if you like, for that machine to think
5:33
and to learn. But the data sets are
5:35
like the food that they feed on. They
5:37
need that information in order to be
5:39
able to evolve, in order to be able
5:41
to make decisions and then act on them.
5:43
That's really been a crucial step. Where
5:46
to next? Well, it's got some really interesting questions
5:49
for privacy. It's got some really interesting questions
5:51
for security. It's got some really interesting questions
5:53
for inclusion and social
5:55
responsibility as well. We're going to
5:57
get into all of that, but I'm going to do something
5:59
uncharacteristic for this show. I'm going to start with
6:01
unbridled optimism, Kylie. It's very unlike the show. Because
6:05
I know how it's going to end. It's going to end, it
6:07
always ends with some dystopic vision of the future. Oh,
6:10
I'm always optimistic about the future, Mark. Well,
6:12
good. Then I'm glad we've got you on
6:14
the show. So let's, before we get into
6:16
risks and security and privacy issues, let's just
6:18
talk about what you're optimistic about. What as
6:20
you look at the technology as it sits
6:22
now, what's coming down the pipeline, what are
6:25
you most excited about that this technology can
6:27
change about people's lives? Oh,
6:29
so much. So much. It's
6:31
almost, I don't even know where
6:33
to start. So I'll give you
6:35
a couple of examples. So research,
6:37
scientific and technological research requires a lot
6:40
of work to go through data. And in
6:42
order to sort through and sift through those
6:44
meta-analyses, imagine if we can do
6:46
that almost automatically in the background. So
6:48
you don't need people in there to
6:50
do things like the Citizen Science Project
6:52
that the ABC collaborated on a couple
6:54
of years ago, where people were looking
6:56
at pictures of the underwater landscape at
6:58
the Great Barrier Reef and clicking on
7:00
the crown-of-thorns starfish that they
7:02
could see there, because it's an invasive
7:05
species and scientists were trying to work
7:07
out, you know, how quickly it was
7:09
spreading. Now, you don't need
7:12
thousands and thousands of people to sift
7:14
through images and click on those invasive species
7:16
images when you've got an AI to do
7:18
it. So that can happen very, very quickly.
7:20
And far fewer person-hours means that we
7:22
can speed up that pace of research. We
7:24
can do that modeling much more quickly. And
7:27
that means that we can engage in novel
7:29
design for solutions much more quickly. And that's
7:31
particularly exciting when it comes to drug design.
7:33
So there'll be many more options coming up
7:35
much more quickly. And if you put that
7:38
together with the way that AI
7:40
can promote divergent thinking
7:42
by facilitating the creation of novel
7:44
ideas by making connections between what we
7:47
might think are completely disparate concepts, then
7:50
you can go into whole new directions
7:52
of inquiry and potential problem solving. So I
7:54
think that's really, really exciting. What about for
7:56
you, Josh? I think if you take the
7:59
sort of high level view and don't
8:01
think about the potential for job losses that could
8:03
come from this. But if
8:05
it makes it easier for people to do
8:07
their jobs, takes out a lot of the
8:09
routine manual processing work that
8:12
they might do. I was thinking
8:14
about, you know, I use AI now to transcribe
8:16
my interviews for me. I obviously check them and
8:18
things like that. But stuff like that,
8:20
that would just take so many hours, it frees me up
8:23
to do other things as well like that. Just more things
8:25
like that where the job isn't necessarily
8:27
being replaced, but it just makes life so much easier
8:29
for a lot of people to take out some of
8:31
that sort of more mundane work they don't necessarily need
8:33
to do. And it doesn't really add anything to what they're
8:35
doing. And yet, I mean, Kylie, we
8:38
talked about this a lot at the time,
8:40
but there was this moment not that long
8:42
ago last year where we saw a number of
8:44
very high profile people calling for a slowdown
8:46
of development of AI. So
8:48
Elon Musk, Steve Wozniak, co-founder of Apple. Like
8:51
when that moment happened, what
8:54
was going through your head? Did you think
8:56
it was the right call at the time
8:58
or did it not make sense? Well, there
9:01
are two potential interpretations of that call for
9:03
a slowdown. One, you know, the cynical interpretation
9:05
is that they just wanted a pause so
9:07
that they could get themselves ahead, you know,
9:09
commercial competitiveness. But the
9:11
less cynical interpretation is a concern
9:13
for privacy and the issue of
9:16
data sovereignty, which essentially translates
9:18
as a concern that people may not
9:20
know, we may not have a really
9:22
good understanding of the ways in which
9:24
we're giving up our own information and
9:27
therefore we're giving up our right to
9:29
control our own information. And that's a
9:31
really important piece that regulators and societies
9:34
need to think about and get right
9:36
if we want AI to support a
9:38
healthy, thriving and inclusive society. You
9:40
know, there's also a lack of regulation and control.
9:43
Deepfakes are probably the most
9:45
obvious example there: using deepfake
9:47
images and videos to harass
9:49
people, to spread misinformation, for
9:51
example. So that is something that we
9:53
do need to think about. Technology always
9:55
moves much more quickly than regulation can move. But in this
9:58
case in particular,
10:00
it is evolving more quickly than
10:02
many technologies that we've seen in the
10:04
past. So there's a real sense of
10:07
urgency around that. Josh, there
10:09
are a whole range of risk factors with this
10:11
technology and it isn't just one set of technology,
10:14
right? It's a whole suite of
10:16
technologies that we're talking about here. But are
10:18
there risks that you don't think are being
10:20
talked about enough? Yeah, I mean, I think
10:23
ultimately we're still seeing it sort
10:25
of play out at the moment. It's still
10:27
very much: this AI is having unintended consequences.
10:29
Oh, there's risk to jobs. Oh, there's copyright
10:31
issues. Oh, there's election integrity
10:33
issues. There's obviously like the doomsday scenario of
10:35
like AGI, sort of a conscious AI that's
10:38
doing its own thing and ending the world
10:40
as we've seen in so many movies and
10:42
TV shows. But yeah, I think it
10:44
is one of those things where I think it's not
10:47
really recognized a lot that we are essentially now
10:49
in an AI arms race. And this was something
10:51
that came up towards the end of last year
10:53
when there was sort of the board turmoil at
10:55
OpenAI, where Helen Toner, who was the only
10:57
Australian member of the board, she'd written a paper
10:59
basically saying, among other things, OpenAI
11:02
potentially brought out
11:04
ChatGPT too quickly. And that
11:06
forced all these other companies, including Google and
11:08
things like that to start rushing out their
11:11
own AI to basically compete. And
11:13
we can say it's good because a lot of companies
11:15
like Google and things like that are doing relatively responsible
11:17
things, trying to do the right thing, putting in guardrails
11:19
and things like that. But there will be a lot
11:21
of companies and
11:24
countries who don't really think about doing
11:26
sort of the responsible thing here. And I think
11:28
that's the risk at the moment as well. If
11:30
everyone's sort of developing this technology, not everyone's going
11:32
to develop in the same responsible way. I know
11:35
there's a lot of panicky sort of
11:37
articles, and I should say some
11:39
of it quite justified, about AI. But in the commentary
11:41
that's happened, are there areas you don't think are talked
11:43
about enough? The one that I
11:45
would like to highlight is the risk that
11:48
marginalized people become even more marginalized. So people
11:50
who are underserved, people who are in minority
11:52
groups, people who have barriers to participating in
11:55
democracy, in society, in education, in
11:57
the workforce, that they are
12:00
edged further out. And there are a
12:02
couple of ways that we can mitigate that risk.
12:05
One is around the guardrails
12:07
that governments set for AI
12:09
development. And you would hope that
12:11
corporates are setting their own responsible guardrails as
12:13
well. But we also know that we need to
12:15
keep an eye on each other and make sure
12:17
that we hold each other to account on that
12:19
ethical and moral responsibility. But the
12:21
other really, really important way that
12:24
we can mitigate that risk is
12:26
to ensure that there is a
12:28
diversity of voices at the table,
12:30
at the building table for these
12:32
technologies. So it's absolutely crucially important.
12:34
And I think really urgent that
12:36
we have a really, genuinely diversified technology
12:39
workforce. And I say it's really
12:41
urgent because this stuff is moving
12:43
so quickly. Those parameters, those technologies
12:45
are being built every
12:47
day as we speak. And the longer
12:50
we go, the more likely they are
12:52
to be applied to managing things like
12:54
health systems, social systems, education, transportation, and
12:56
all of those other big picture systems
12:59
that we rely on to function as
13:01
a society. And if they are built
13:03
by people who all think the same
13:06
way, who all look the same way,
13:08
I'm not suggesting that
13:10
those scientists and those programmers
13:13
have ill intention. But if they don't know
13:15
about the unintended exclusions that they're building into
13:17
the system, then we've got no way to
13:19
rein that back in. So we
13:22
need that diversity of perspective simply so
13:24
that we understand the diversity of potential effects
13:26
of the systems that they're building. Kylie,
13:29
we're entering a year where there'll be a
13:31
whole host of elections all around the world,
13:33
and of course, the big one in the
13:35
US. How confident are you that there are
13:38
enough regulations and guidelines around AI that it
13:40
won't have a significant impact on what
13:42
people think, and on misinformation? There's actually
13:45
no doubt that AI can and will
13:47
be used to create misinformation during elections.
13:49
We've already seen that. It can
13:51
be used to create all kinds of campaign
13:53
materials. We can expect to see
13:55
it being used to discredit or smear opponents. And
13:58
in highly polarized conditions, people are more likely
14:00
to believe and share misinformation that
14:03
matches their identity, or that
14:05
appeals to people's existing beliefs as well.
14:07
And that can be spread on
14:10
social media as well as the news media
14:12
and community connections. So those
14:14
kinds of AI misinformation campaigns are a
14:16
real possibility, and really the only antidote
14:18
to that is to equip
14:21
people with the critical thinking skills
14:23
and the technological awareness to understand that
14:25
what they're looking at may not be
14:28
what it seems. The other flip side to that, though,
14:30
is that people learn to be distrustful
14:32
of absolutely everything, but that also creates its
14:34
own problems, where people genuinely don't trust
14:36
anything anymore. Such
14:39
a challenge! Absolutely. You know, it's a
14:41
really fine line to walk, isn't it,
14:43
and we need to find a
14:45
way to build trust in evidence. And I
14:47
mean, maybe we're going to go a little
14:49
bit retro with the analog, because people are
14:51
going to want to see people in the
14:54
flesh. That would be sort
14:56
of nice. Maybe we'll go back to
14:58
little paper letters they post out to
15:00
us, given that you can't trust what
15:02
you're reading in the social... in the social media. Um,
15:05
Josh, back in October, the
15:07
Biden administration in the US made labeling
15:09
and detection of AI-generated content the
15:11
focus of an executive order.
15:14
Do you think that was decisive, and will regulations
15:16
like that help? Yeah,
15:18
but I think we're already starting to
15:20
see companies be a little bit proactive on
15:22
that as well, obviously realizing that, given
15:25
it's such a big election year, they
15:27
need to be on top of it, you know. We
15:29
had Meta, Google, Adobe and a few others say
15:31
they're building in watermarking into their
15:33
tools that support AI-generated images, that
15:36
will make it easier for them to detect
15:38
and label it properly on social networks, which
15:40
is very helpful. And they're also in
15:42
the process of developing technology that will automatically detect
15:44
when you've uploaded an AI-generated photo to post, because
15:47
people might not bother to say that it's AI. So
15:49
they are doing it, but it
15:51
kind of comes back to what I was
15:53
saying before, in terms of while some companies
15:55
will be doing the right thing, there will
15:57
be others that are not. And although we
15:59
report a lot on what gets
16:01
posted on Facebook or Twitter or Instagram
16:04
or any of the other social networks. A lot of this
16:07
stuff, the highly volatile fake
16:09
news stuff, ends up going on those group
16:12
chats, in less-public-view
16:14
kinds of circulation that we don't necessarily
16:16
see. While they might end up on
16:18
TikTok or something else at some point, a lot
16:21
of the damage might be already done if people are not
16:23
really thinking about, is this real or
16:25
is this not? In that space, I think government regulation
16:28
will help, but it's a matter of enforcement as
16:31
well. I think some companies will probably be more
16:33
cooperative than others. Kylie, there's a
16:35
question that has bubbled up in my head listening
16:37
to the two of you talk. I just
16:39
need you to promise not to laugh at me when I ask. I
16:42
don't think I can promise that.
16:44
Fair enough. I walked into that
16:46
one. For decades, we've had this
16:49
fear instilled in us by popular
16:51
culture of AI achieving
16:53
sentience. Is that actually
16:55
a thing? Is there a
16:57
circumstance under which AI actually can achieve
17:00
sentience or is that purely the domain
17:02
of popular culture? The
17:04
Terminator thing. I'm trying to approach
17:08
it with some sobriety here, but I don't know
17:10
how to phrase it. Is that going to
17:12
happen, Kylie? Tell me now. No,
17:15
is the short answer. I
17:18
think it's really dangerous to
17:20
anthropomorphize technology. This isn't a
17:22
person. It's not going to
17:24
become sentient. It's not going
17:27
to be an independent
17:29
entity that does its own thing
17:31
and stops listening to the rules.
17:34
We absolutely have the responsibility, and
17:36
I guess the way that it's
17:38
been built means there's no other
17:40
option: this thing does what
17:42
it's told. Any technology responds to
17:44
the rules that it's been given by the people who
17:46
built it. If there was some
17:48
kind of Skynet emerging, it wouldn't be because a
17:51
computer kind of gained sentience and decided
17:53
to rule the world. It would be
17:55
because people were driving it. Look,
17:58
technology is neutral. Whether it's used for good
18:00
or evil is down to the people using it.
18:03
I can't say that eases my mind.
18:05
There are plenty of people capable
18:07
of evil at the moment. So
18:09
should that make me feel more
18:11
or less confident? Josh? I think the thing
18:14
is that we've already seen, with a
18:16
lot of the gen AI stuff already,
18:18
that the companies put rules in place and
18:21
people figure out ways to make the
18:23
AI bend the rules to give the
18:25
response that they want. So companies can
18:27
see risks and put these guardrails in, but
18:29
people will ultimately find a way to get
18:31
around it. I wouldn't be surprised, if
18:33
AI got so intelligent, that it found a way
18:35
around. And yeah, obviously if you've got
18:37
nefarious players in there too. I wonder
18:39
if this ultimately does come down
18:42
to it being treated like,
18:44
you know, nuclear weapons, with rules
18:46
around who can have what piece of technology,
18:48
and enforcing that. Because we're now fast
18:50
getting to the stage where there
18:53
are sort of responsible players who potentially have
18:55
access to it, but how do you actually decide who
18:57
gets access to it or not? It's
19:00
very sort of murky territory, and I'm not
19:02
completely sure. I just want
19:04
a direct commitment from you, and I'm,
19:06
I'm going to hold you to this,
19:08
Kylie: Skynet, unlikely? You
19:11
can come and find me if it happens.
19:13
Yeah, it's highly
19:15
unlikely.
19:17
It's always tempting to think of, say, the
19:19
worst-case scenario. I think we like
19:21
to do that, and it's useful to do that
19:24
because it provides a warning bell, so
19:26
we can act now to ensure, to
19:28
the best of our ability, that
19:30
we don't have to go to the
19:32
place that we dread. We have a societal
19:35
moral imperative for responsible use of these
19:37
technologies. Like, we build the technology, and part
19:39
of that is the regulation and the rules around
19:41
the use of it, and the
19:44
public hopefully guiding these things and using these
19:46
things. So part
19:48
of it is also down to that education piece. And
19:50
I think it's really, really important that
19:52
we provide that training and education, not
19:54
just at school but across the community,
19:57
so that people can really equip themselves with
19:59
the critical thinking, the understanding of
20:01
what these technologies can do,
20:03
and how to interact with them responsibly and apply
20:05
them responsibly as citizens.
20:07
We were talking about the fact that
20:09
there are already applications in
20:11
military contexts and cybersecurity contexts where AI
20:13
is big news. I'm taking the 'no Skynet'
20:15
prediction from you, but we
20:18
do know that AI is being used in theatres
20:20
of war, Josh. But those are often areas
20:22
where we don't have a lot of
20:24
transparency, is my understanding. Is that
20:26
your understanding? I mean, like, we don't
20:28
know how it's being used, and in a defence scenario,
20:30
defence has got to keep it secret, or the enemies
20:32
would know what it was doing. Yeah, I
20:34
think there needs to be way more transparency
20:36
around AI, and not just in defence. It needs to
20:38
be everywhere. One of the things
20:41
that frequently happens is that when gen
20:43
AI in particular is doing something that
20:45
it's not supposed to be doing, the
20:47
companies involved will say, "Oh, we've fixed
20:49
that, we've done something," when we don't
20:51
really have a lot of visibility into
20:53
that. But a lot of the time, the
20:55
actual data that it's been trained on, we don't
20:57
actually have access to that to see for ourselves,
20:59
and that means there's always going to be sort
21:01
of some level of bias or something missing
21:04
from it, or something like that. And I think
21:06
that ultimately comes down to: no matter how much
21:08
we use this technology, there always need to be
21:10
humans involved, and ultimately someone has to
21:12
hold responsibility for it, making the decision about
21:14
what's being put out and what's allowed. I don't
21:17
think you can ultimately leave
21:19
it up to the AI; you need to have a
21:21
human involved in the process to take responsibility
21:23
as things come. Coming back to that theme of
21:25
transparency and humans being involved:
21:28
is that the level at which
21:30
regulation has to happen, or is it
21:32
like broader, our top-level
21:35
language around, ah, what we
21:37
expect as a society from
21:39
transparency with this technology, Kylie?
21:42
Absolutely. Um, and I think that
21:44
we can probably get a little bit more
21:46
specific about potential applications as well. I think
21:48
there should be something like AI-specific
21:50
regulations when it comes to the human
21:53
element. Say if you're looking at, ah,
21:55
defence, that's a really tricky one,
21:57
as Josh said, and we know that
22:00
the R&D budget for defence globally
22:02
is by far
22:04
the biggest R&D budget that exists. So
22:06
we know that these technologies will continue
22:09
to evolve much quicker in that domain
22:11
than potentially in others. But
22:14
when it comes to things like
22:16
social services, community sector, transportation, looking
22:19
at how these technologies are applied
22:22
in education, I think
22:24
that we can absolutely apply the same
22:26
kind of moral guidelines as we do
22:28
already in those domains. So
22:31
it is about making sure that we
22:33
continue to be responsible, that we continue
22:35
to pay attention to the way that
22:37
the technology has been changed and used,
22:40
and that we have mechanisms for
22:42
people to report and whistle blow
22:44
on bad players and potentially, whether
22:47
intended or unintended, potentially bad consequences
22:50
for people. And then there
22:52
do have to be, I think, some
22:54
consequences available for governments and
22:56
courts to use as well. I
22:58
know this is very general, but what
23:00
sort of consequences would be meaningful in that
23:02
realm? Well, I think if
23:05
the work that you're doing and the technology
23:08
that you're building and applying is creating harm
23:10
to people, then you ought to be held
23:12
responsible in the same way that you might
23:14
be held responsible for causing harm to people
23:16
in other ways. Just because
23:18
it's a new technology doesn't mean that
23:21
you are absolved of that responsibility. The
23:23
examples are all around us, and new technology is
23:25
just a new tool. So the
23:27
people behind it ought always
23:29
to be held responsible for using it
23:31
appropriately and responsibly. What if it's not
23:33
obvious who's behind it? No, that's tricky,
23:36
isn't it? That's the challenge
23:38
for law enforcement, right? They're going to have to
23:40
keep pace too. I mean,
23:42
like we're seeing that already, you know, you look
23:45
at in the area of cybersecurity and you're seeing
23:47
it being much easier to launch malware attacks and
23:49
things like that. And once sort of AI gets
23:51
involved in that, it's going to be even
23:54
bigger, and there's going to be little that
23:56
enforcement can do except to sort of try and disrupt
23:58
it at the source. It's much harder to hold
24:00
anyone to account for that. We've talked
24:03
a lot about things like ChatGPT and
24:05
military applications, but AI has filtered its way
24:07
into a whole host
24:09
of areas we interact with,
24:11
healthcare, finance. Kylie, as
24:13
we look forward to the next year or
24:15
so, is there a particular area where you
24:17
expect its use to massively explode? I mean,
24:21
I think we've got some
24:23
really interesting opportunities around modern
24:26
manufacturing, bespoke manufacturing and logistics,
24:28
which doesn't sound terribly exciting when
24:30
you say that, but think about
24:32
the efficiencies that it might bring,
24:34
and that will open up the
24:36
capability to establish a genuinely circular
24:38
economy, so an economy in which
24:40
we have zero waste. That's
24:42
where we're hoping it will head. So when
24:44
you think about it, if you're building
24:47
a product, you can embed sensors into
24:49
that product that will track the product over
24:51
its lifetime. You can create it in such
24:53
a way that it can be dismantled and
24:55
reassembled as a different product, and
24:57
so that you look at not recycling,
25:00
but reusing in new ways, repurposing. So
25:02
you don't just have a life cycle,
25:04
you have multiple life cycles. And then
25:06
at the end, because you've embedded those
25:08
sensors, you can make the manufacturer of
25:11
that product responsible for its decommissioning at
25:13
the end of its multiple life cycles.
25:16
And so that opens up possibilities for
25:18
a really much more responsible approach to
25:20
environmental management when it comes to waste.
25:22
For example, we can look at, again,
25:24
the efficiencies in transportation coordination. I
25:27
think that's a really tricky
25:29
problem: managing congestion on roads
25:31
and designing the most effective bus
25:33
routes and train timetables and making
25:36
sure that public transport
25:38
is appropriate for the ways
25:42
in which people want to use it, which we know it's
25:44
not in many places around the world. So
25:46
bringing AI to solving
25:49
that problem could not
25:51
only help people get to places faster
25:53
and more efficiently, but it'll help save
25:55
money for councils and local governments as
25:58
well. What
26:00
are the applications coming in the next
26:02
couple of years, and not necessarily the things
26:04
that you might think? I'm highlighting this
26:06
because, just in the time that
26:08
we've been sitting here
26:10
talking, I got not one but two separate
26:12
emails about ChatGPT, say, being used for
26:14
legal advice. What other areas where AI
26:16
could be used in the next couple
26:19
of years do you think people should keep their eye
26:21
on, Josh? I mean, I think the legal
26:23
one is quite an interesting question, because
26:25
there are, in law, pretty standardized forms they
26:27
need to fill out, and for that, that
26:29
process can be automated. But we've already seen
26:31
people getting in trouble where they've had
26:33
AI try to develop the
26:35
legal argument put in front of a
26:38
judge, and it's quoted some case law
26:40
that doesn't exist, because of
26:42
the AI hallucination problem. So I
26:44
think that's probably, that's
26:46
probably the sort of area we probably
26:48
need to most keep an eye on.
26:50
Look at where are
26:53
we going to see people integrating it into their lives
26:55
without particularly thinking about, ah, the ramifications
26:57
and the consequences of it, and being
26:59
caught out for using AI when they shouldn't
27:01
have been using it. It ultimately comes
27:04
down to this: this is all happening
27:06
very quickly and we don't really have the
27:08
rules in place yet, so it's going to
27:10
be sort of muddling our way through a
27:12
little bit. We are, unfortunately, out of time.
27:15
Thank you to our guests this week. Josh
27:17
Taylor, reporter with the Guardian, thanks for
27:19
joining us. Thanks, Mark. And the CEO
27:21
of the Australian Academy of Technological
27:23
Sciences and Engineering, Kylie Walker. It was
27:25
an absolute pleasure. Thanks so much.
27:27
Thank you, Mark. If you enjoyed
27:29
the program, make sure you leave a review on
27:31
whichever podcasting app you happen to listen to
27:33
us on. And of course you can find
27:36
Download This Show on ABC Podcasts and
27:38
the ABC Listen app. My name is
27:40
Mark Fennell, and thank you for listening
27:42
to another episode of Download This Show. You've
28:29
been listening to an ABC
28:31
podcast. Discover more great ABC
28:33
podcasts, live radio and exclusives
28:36
on the ABC Listen app.