Episode Transcript
Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.
0:00
This Washington Post Live podcast is
0:02
sponsored by Intel. Experience AI
0:05
PCs built for business. Get
0:07
powerful AI performance, hardware-based security,
0:09
and remote manageability with real
0:11
sustainability benefits. Upgrade to
0:14
the power of Intel Core Ultra Processors
0:16
and Intel vPro. You're
0:18
listening to a podcast from Washington Post
0:20
Live. Bringing the newsroom to
0:22
you live. Hello and welcome
0:24
to Washington Post Live. I'm Danielle Abril, Tech
0:26
at Work writer here at the Post. Today
0:29
I'm joined by Microsoft's Corporate Vice
0:32
President of AI at Work, Jared
0:34
Spataro, to talk about how technology
0:36
is changing the workplace. Jared,
0:39
it's so good to see you again and welcome
0:41
to Washington Post Live. Well,
0:43
it's great to be with you. Thanks for having me. Absolutely.
0:47
Let's get started. So I want to
0:49
start with obviously the big
0:51
thing that's going on at Microsoft, which
0:53
is Microsoft Copilot. You guys
0:55
launched it last February. For
0:58
those of you that don't know, that's
1:00
Microsoft's AI assistant. As
1:03
you know, I tested it. But
1:05
for our viewers, please explain how
1:07
Copilot works and how you
1:09
believe using it impacts
1:11
or using AI impacts worker
1:13
productivity. Oh boy.
1:16
You could get me started here. Well, let
1:18
me just start by saying AI can do
1:20
some amazing things, but it needs to be
1:22
in the right places. So with Copilot, what
1:24
we've done is we have created an assistant
1:26
that takes the magic of generative
1:28
AI and puts it in the places that people work
1:30
today. So that means places like
1:32
your email in Outlook. It means Word, Excel,
1:34
and PowerPoint. It also means in new interfaces
1:37
where you can chat about all of your
1:39
work data, everything related to your job. So
1:41
you can ask questions like, hey, look at
1:43
my calendar over the last month and tell
1:45
me how I spent my time and give me
1:47
suggestions for how I could improve. So it both
1:49
helps people on what they do today in the
1:51
apps that they're familiar with and opens new vistas
1:54
for them. And then if I just get to
1:56
the heart of what we see, we think of
1:58
it not just as a tool, some sort
2:00
of incremental improvement, but it's a whole new
2:02
way to work. It takes different habits, takes
2:05
different skills, but it also gives
2:07
you outsized rewards. We're really excited about
2:09
that and about some of the studies that we've
2:11
done with Copilot users. So
2:14
we did try out Copilot, as
2:16
well as Gemini for Google Workspace, which
2:18
is Google's AI assistant. At the Help
2:20
Desk, we wanted to see really, are
2:23
these things easy to use? And
2:25
we found that although the AI tools really
2:28
help complete tasks, you
2:30
probably shouldn't rely on them entirely to
2:32
do your job. How should
2:34
workers think about when to use AI
2:37
and what they should consider or keep in
2:39
mind if they're considering turning to these tools
2:41
for work? Well,
2:43
it's a good observation. We use the
2:45
name Copilot on purpose to indicate that the
2:48
Copilot shouldn't do your job for you, that
2:50
you still are the pilot, you're in the
2:52
driver's seat and it's your job to do
2:54
your job. But we recommend that
2:56
if you get the right skills, you can use
2:58
Copilot to advance, move the ball down the field
3:01
faster than ever before. Now, one
3:03
really important thing I'll say, Danielle, that's
3:05
really just key for people to understand
3:07
about this new generation of technology is
3:09
it's unlike anything that they've experienced with
3:11
computers before. Most people think of a
3:13
computer the way they do a calculator.
3:15
I'm gonna punch in a question in
3:17
a calculator, it's in the form of numbers and
3:20
I'm gonna get the right answer back. That's
3:22
not the way generative AI works. In fact,
3:24
sometimes it's not even right, it gets things
3:26
wrong, almost like a person can get things
3:28
wrong. But when it gets things
3:30
wrong, we tend to find that it's
3:32
what we call usefully wrong. It actually
3:35
helps you, again, move your work forward.
3:37
But you have to trust and at the
3:39
same time verify. So you have to
3:41
learn a new way of working. In many ways, it's
3:43
kind of like working with a colleague that is learning
3:45
your job and trying to help you do it better.
3:49
So I wanna pick up on something you
3:51
actually just talked about. AI doesn't
3:53
always get it right. I mean, it tries
3:55
to give you what it thinks is probably
3:57
the answer. And, as we also learned, it depends
4:00
on how you ask the question; if you
4:02
turn a question around in different ways, you might actually
4:04
get the right answer. But
4:06
when we think about AI, a lot
4:08
of users are really worried about those
4:10
times it gets it wrong, it
4:13
hallucinates, which is again, kind of making up
4:15
things, or just misinterprets your question or what
4:17
you're trying to get at. And
4:20
I wonder in terms of
4:22
being able to identify the
4:24
errors, my understanding from
4:26
experts I've spoken to is that as generative
4:28
AI continues to advance, it
4:30
gets harder and harder to identify where
4:32
it's wrong because it's so believable. The
4:35
answers come out, they look wonderful, and
4:37
you're like, yeah, that sounds right. And
4:39
you go ahead and push it through.
4:42
What are your thoughts around these issues, especially if
4:44
they relate to higher risk work, work
4:46
that could have medical, financial, or
4:49
legal consequences? It's
4:51
a great question. And you have to
4:54
understand the technology, at least just fundamentally,
4:57
what it's doing is it's reasoning statistically,
4:59
the actual term is stochastically. So much
5:01
like a human does, we don't reason
5:03
perfectly about questions. We take the available
5:06
facts, sometimes the facts aren't enough, sometimes
5:08
our judgment isn't quite correct. And then
5:10
we kind of reason. And
5:12
you're exactly right. I think people are
5:15
worried that when the answers get presented,
5:17
it feels so reasonable that it's easy
5:19
to believe. There kind of are two things
5:21
that I have seen in my own usage
5:23
that have made a really big difference. Number
5:25
one, there is a new technique that we
5:27
call grounding, that allows you to actually make
5:29
sure you give the AI tool the latest
5:31
information related to a question, a prompt that
5:34
someone asks. And that grounding
5:36
technique actually reduces the
5:38
hallucination significantly. So if you're asking,
5:40
for instance, a question about a
5:43
recent event, the Copilot will
5:45
actually go out to the internet, collect
5:47
information on that event from reliable sources,
5:49
and then use that as its fact base
5:51
to reason over when it gives you the
5:54
answer. And then the second
5:56
thing, which seems small, but is incredibly
5:58
important, is we actually provide references,
6:00
literal explicit references in the Copilot
6:02
answer to the source data that
6:04
the Copilot used to provide the
6:06
answer to you. And
6:09
it prompts a new skill. You have to
6:11
read what's there and then spend a little
6:13
bit of time making sure that you're
6:15
looking at those sources so that you
6:17
have a fuller understanding of them.
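The grounding-plus-references workflow Spataro describes can be sketched in code. This is a minimal illustration under assumptions of my own (a toy keyword retriever over an in-memory corpus and a made-up prompt format), not Copilot's actual pipeline:

```python
# Sketch of "grounding": retrieve relevant documents before asking the model
# a question, pin the prompt to those sources, and ask for explicit [n]
# citations. Corpus, scoring, and prompt shape are illustrative assumptions.

def retrieve(corpus: dict[str, str], question: str, k: int = 2) -> list[tuple[str, str]]:
    """Rank documents by naive keyword overlap with the question."""
    words = set(question.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda item: len(words & set(item[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(corpus: dict[str, str], question: str) -> str:
    """Assemble a prompt that restricts the model to retrieved sources and
    asks for the kind of explicit references described above."""
    sources = retrieve(corpus, question)
    lines = ["Answer using ONLY the sources below. Cite them as [1], [2], ..."]
    for i, (name, text) in enumerate(sources, start=1):
        lines.append(f"[{i}] ({name}) {text}")
    lines.append(f"Question: {question}")
    return "\n".join(lines)

corpus = {
    "q3_budget.docx": "Q3 travel budget was cut by 10 percent across all teams.",
    "kickoff_notes.txt": "Project kickoff moved to Thursday per the customer.",
    "lunch_menu.txt": "The cafeteria serves tacos on Tuesdays.",
}

prompt = build_grounded_prompt(corpus, "What happened to the Q3 travel budget?")
print(prompt)
```

The point of the pattern is the one made in the conversation: the model reasons over a supplied fact base, and the numbered sources give the human something to verify against.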
6:19
Now some people ask, well Jared, shouldn't you
6:21
just do it yourself? My own
6:24
experience has been, no, not really. It's
6:26
like having a very competent research assistant
6:28
that is pulling together relevant information. At
6:30
the end of the day though, that name is so
6:33
important. It's the copilot; you are still meant to do your
6:35
job. So it's a new way of working, Danielle. I
6:37
think that's, I couldn't stress that enough. As
6:39
you learn it, we are finding that users are faster. You
6:41
know, in our battery of tests, for instance, people are almost
6:43
30% faster on common
6:46
information worker tasks with no
6:48
change in their accuracy in
6:50
the responses of the tests that we ran. So
6:53
don't get lazy. You gotta go look and
6:55
make sure everything's exactly where it
6:57
should be. Got it. Well,
7:00
you know, obviously a lot of
7:02
workers see the value here in
7:04
expediting mundane tasks with AI, things
7:06
like drafting emails or organizing their
7:09
inbox. But what do you
7:11
think about AI for more complex tasks?
7:14
And in what scenarios might it be helpful? And
7:16
what scenarios might we want to steer clear of
7:18
when it comes to those complex tasks? Sure
7:21
thing. It's
7:23
not good at math yet, it
7:25
turns out. We can augment it with mathematical skills.
7:28
We are, for instance, wiring it up to
7:30
Microsoft Excel. And there it's learning
7:32
how to use a calculation engine together with its
7:34
large language model. And it's still learning. We're still
7:37
learning how the tool can be used. So there
7:39
are some domains where I would say make sure
7:41
you understand its strengths and weaknesses. When
7:43
it comes to complicated tasks, things that I
7:46
would call long running chains or sequences of
7:48
tasks that need to be done, oftentimes
7:51
what's best is to use the tool in its form
7:54
today to help you complete portions of that, but for
7:56
the human to be the one that is putting them
7:58
all together. In a budgeting
8:00
process, for instance, pulling together
8:02
the latest information, looking
8:04
at that information and analyzing, perhaps even
8:06
looking at it from different angles, all of
8:08
that is fantastic. But make
8:10
sure you're the one that is going from
8:12
end to end to have the budget make
8:15
sense. And that's true of almost all long-running
8:17
kind of complicated, sophisticated tasks today.
8:19
But the great news is, Danielle, it's
8:21
like it's getting better every week, week
8:23
in and week out, as we
8:25
are learning more about how people use it. And as
8:27
the technology is improving, I can really see the improvements
8:29
in my own usage. And
8:32
in terms of where it's kind of doing
8:34
well in those complex tasks? Sure
8:36
thing. The types of things
8:39
that it's particularly good at are summarizing. So
8:41
it does a great job when you have
8:43
lots of information, you need to get the
8:45
key points. It does a really
8:47
nice job, for instance, in meeting settings. It
8:49
can be a meeting you attended, where it's providing notes,
8:51
or it can even be a meeting, in my case,
8:53
that I don't attend anymore. Lots of meetings I don't
8:55
attend, and I just ask it to take notes for
8:57
me so I understand what the key points are. I
8:59
can even query that after the
9:01
meeting and ask questions about what happened. It
9:04
does particularly well in email, as a simple example.
9:06
So I use it every day to summarize long
9:08
email threads. We all get them. I hate to
9:10
read from the bottom up. I don't have to
9:12
any longer. It does great in drafting replies; in
9:14
fact, saves me a lot of time there. And
9:17
it's particularly good in a set of what
9:19
I call sophisticated information
9:21
retrieval scenarios. Things like,
9:24
hey, it looks like I'm gonna meet
9:26
with this customer on Thursday. I remember that there's
9:28
been lots of emails. We even had a meeting.
9:31
Someone wrote me a document to get me ready.
9:33
Can you pull all that together for me and
9:35
give me, essentially, an information pack? It's
9:38
incredible, that type of work. And again, if you
9:40
think about what that can save you, it's not
9:42
just minutes, but for me, oftentimes, hours. I'm
9:45
gonna squeeze in an unexpected question here,
9:47
because you just mentioned something, and
9:50
I just recently remember that Zoom is
9:52
also kind of getting into the game
9:54
and trying to release some AI capabilities
9:56
that go across different apps that they've
9:58
started to release. And
10:00
you mentioned, obviously, one of the things that I
10:02
found really helpful in our Copilot test was that
10:04
I'm going to have a meeting with my boss.
10:06
What are the last conversations and emails and documents
10:08
we've collaborated on? And it could quickly kind of
10:10
give me an update and I, oh yeah, this
10:12
is what we need to talk about in the
10:14
meeting. But I do wonder, a
10:16
lot of times, a lot of
10:18
workers work across apps, right? And we
10:21
don't necessarily work on solely Microsoft or
10:23
solely Google or solely Zoom. We're kind
10:25
of using probably a mix of things.
10:28
People just like different features from
10:31
different providers. Would Microsoft
10:33
ever consider opening up its AI to
10:36
work cross-functionally, across apps? We
10:40
not only consider it, we've done it. It turns
10:42
out that out of the box, so without any
10:44
configuration, you certainly can reach into your email inbox
10:46
and to your documents and the places that are
10:48
already a part of what we call Microsoft 365.
10:52
But the product itself has
10:54
the ability to use connectors. In fact, over 1200
10:57
connectors that we provide that plug
10:59
into everything from SAP to
11:01
any other system you can imagine: HR systems,
11:04
workflow systems, any
11:06
type of system that you have out there
11:09
to allow you to grab that information and
11:11
what we say is to reason over it.
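The connector pattern described here (plug each external system into a common interface, then let the assistant reason over the combined data) can be sketched like this. The `Connector` protocol and the fake CRM and finance records are assumptions for illustration, not the actual Microsoft 365 connector API:

```python
# Sketch of the connector idea: each connector exposes records from an
# external system in a common shape, so an assistant can gather everything
# into one briefing and reason over it. All names and data are made up.

from typing import Protocol

class Connector(Protocol):
    name: str
    def fetch(self, query: str) -> list[str]: ...

class CRMConnector:
    name = "crm"
    def fetch(self, query: str) -> list[str]:
        # A real connector would call the CRM system's API here.
        return [f"CRM: {query} is a key account, renewal due next quarter."]

class FinanceConnector:
    name = "finance"
    def fetch(self, query: str) -> list[str]:
        return [f"Finance: {query} revenue up 12% year over year."]

def gather_context(connectors: list[Connector], query: str) -> str:
    """Combine records from every connected system into one briefing,
    e.g. to get ready for a customer meeting."""
    lines = [f"Briefing for: {query}"]
    for c in connectors:
        for record in c.fetch(query):
            lines.append(f"- [{c.name}] {record}")
    return "\n".join(lines)

print(gather_context([CRMConnector(), FinanceConnector()], "Contoso"))
```

The design choice worth noting is the shared interface: once every system speaks it, adding a new source (SAP, HR, workflow tools) means writing one small adapter rather than changing the assistant.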
11:13
And that's incredibly powerful. So when you combine
11:15
the ability to pull financial data together, for
11:17
instance, with unstructured data and have that give
11:19
you a full view, maybe even with your
11:21
CRM data of a customer to get you
11:23
ready for a meeting, there's just nothing out
11:25
there like it. And you hit on a
11:28
very key point. People live in very, what
11:30
we would call heterogeneous environments that will persist
11:32
for a very long time. We recognize that we're
11:34
not the center of the world, but the advantage
11:36
we think we have for individual users is we
11:38
can be where they do spend a lot of
11:40
their time every day in the tools that they're
11:43
really familiar with and that they choose. Well,
11:46
Jared, we're kind of running low on time and I
11:48
have so many questions I wanna get in, so I'm
11:50
gonna kind of bounce around here. But
11:53
late last year, Microsoft found out that
11:56
digital natives, basically Generation Z, the youngest workers
11:58
in the workforce right now, are
12:01
falling behind in adopting AI at
12:03
work. That's kind of
12:06
shocking to me given that we see how
12:08
often Gen Z is using AI in their
12:10
personal life, in breakup messages and things
12:13
that seem a little bit frivolous for
12:15
usage of AI. What's
12:18
happening here? Well,
12:20
I think it's a combination of a bunch
12:23
of factors, but one that I've spoken of
12:25
previously, one that really
12:27
has captured my imagination (I've realized, wow,
12:29
there's something there), is that the people
12:31
doing best with AI today are those
12:34
who have managerial experience. And
12:36
the reason that that's true, if we go back
12:38
to what we've already talked about is that you
12:40
really do best with the technology when you interact
12:42
with it the way that you'd interact with a
12:44
direct report. When you work with someone
12:46
who reports to you, who you're responsible for, you have
12:49
to give them a lot of context. You
12:51
get to know kind of their strengths and weaknesses.
12:53
You don't settle for what we call kind of
12:55
a one shot type of approach. I'm gonna give
12:57
you this assignment. I hope you get it right.
13:00
No, instead they come back to you, you give them
13:02
some coaching, you guide them in different ways. All
13:05
of those types of skills are really, really
13:07
important for using AI. And what I've observed
13:09
in our tests, even anecdotally, is that often
13:11
when you're just new in the workforce, you're
13:13
kind of learning those things. It's not the
13:15
first set of things you've learned. So
13:18
managers, people who've been managing real people
13:21
are doing very well. And that's a really
13:23
interesting thing to think about. I think of it this
13:25
way, what types of skills will new graduates need
13:27
in the near future? Well, I actually think we're
13:29
gonna need to help new graduates know how to
13:31
manage people and manage AI. And
13:34
that's a really interesting thing to think about
13:36
education and entering the workforce. Well,
13:39
and that also tells us a little bit
13:41
about how organizations should think about maybe
13:43
their workforce and getting people ready to use
13:46
AI. If they're already starting to adopt
13:48
these things, is that something we should be
13:50
thinking about from a management level? Absolutely,
13:53
skilling and entering into a new era
13:55
requires kind of a new way of
13:57
thinking about what people need to know.
14:00
And I think it's important if I could frame
14:02
this up not to think of this technology as
14:04
just incremental. If you think of it as like,
14:06
oh yeah, this is kind of like a
14:08
new incremental productivity improvement, you're going to
14:10
approach it the wrong way, or at
14:13
least that's my observation with customers. Those
14:15
who realize at the individual, the personal
14:17
level, and at the organizational level, this
14:20
really is a brand new era as
14:22
we approach work. Those people are doing
14:24
the best because they have a lot of imagination and
14:26
they quickly lead as you indicate to new skills. They
14:28
realize, oh, I'm going to have to teach my people.
14:31
No one's born knowing how to use it, but
14:33
you can learn it fairly quickly and then it can make
14:36
a really big difference. So
14:38
we only have a few minutes left, but this
14:40
is a question that keeps coming up in my
14:42
work. And no matter how much I talk about
14:44
AI, people are very
14:46
concerned about the future of AI
14:49
and where it brings us in terms of jobs
14:51
and the workplace. You know,
14:53
generative AI is only expected to get better.
14:56
As you mentioned, it's getting better every day.
14:58
And I know right now, as you started
15:00
off this conversation, Copilot is exactly that. It really
15:02
needs a person in the driver's seat to
15:05
really not only tell it what to do,
15:07
but make sure it's doing the thing that
15:09
we ask it to do properly. But
15:12
as this advances, what
15:15
does this mean for jobs in the future? Could
15:17
we see a future, we're talking to a lot of
15:19
young people who are worried about entry level jobs disappearing,
15:22
or even the case where AI
15:25
isn't necessarily taking
15:27
a job away per se, like
15:29
a full job, but it's taking
15:31
away so many tasks, so many
15:34
repetitive tasks from people, that
15:36
instead of needing five people to do
15:38
a job, maybe you only need three
15:41
because they are a lot more productive
15:43
now because AI is taking away some
15:45
of the stuff that's been bogging them
15:47
down for so long. So
15:50
in that respect, you would still
15:52
see some sort of decrease in jobs
15:56
because it's making people so much more productive.
15:58
What is your take on where this
16:00
heads and what this means for jobs? And
16:04
I'm incredibly hopeful and optimistic, but let
16:06
me explain why. You know, whether it's
16:08
electricity, whether it's a steam engine,
16:10
it could be a word processor or a PC, we
16:13
always have this immediate reaction to new technology. Uh-oh,
16:15
like this is gonna do stuff that people do
16:17
today. What are the people gonna do? Does that
16:19
mean they'll be out on the streets? It
16:22
is true that there's displacement, meaning that
16:24
jobs shift, that things that people
16:26
used to do may now be done by
16:28
machines. There's no doubt about that. As
16:31
you look at the old typists of yesteryear,
16:33
you saw people whose job was to literally
16:35
type on behalf of other people. And then
16:37
the introduction of the PC changed all of that.
16:40
But what we always see, what we've seen
16:42
since the beginning of the Industrial Revolution and
16:44
recorded history, when you really get into what
16:47
science and innovation does, is that it creates
16:49
new opportunities. So my advice for people
16:51
who get worried about that, is I say, look, it's
16:54
very natural to be worried. But
16:56
instead of taking that energy and funneling
16:59
it into anxiety, funnel
17:01
it into innovation, funnel it into
17:03
the future, and learn about
17:05
the technology, embrace the technology, you'll see
17:07
that it creates more opportunity both on
17:09
the grand scale, the macro scale, as
17:11
well as on the individual scale. And
17:13
we're seeing that already. People who have
17:16
AI skills, they are already getting ahead,
17:18
because in many ways, they kind of
17:20
have the equivalent of a backhoe when
17:22
many of the rest of us are
17:24
still using shovels. It's exciting that there's
17:26
a bright future and we just have to embrace it. And
17:29
we only really have one minute left, but I
17:32
wanna squeeze in this question so if we could
17:34
get a brief answer. Obviously,
17:36
we're seeing the development of new collaboration
17:38
tools and new ways to work. AI
17:40
is really changing the modern workplace. What
17:43
do you expect to happen next? Where do we go
17:45
from here? Well,
17:47
I expect AI to be
17:49
so woven into the way that we do
17:52
things, that it will sit alongside humans, and
17:54
going forward, our teams, the
17:56
people we work with, will include not
17:58
just humans, but also these AI
18:00
systems, it will become a natural part of the
18:03
way we do things. And that skill set
18:05
will be incredibly important for every worker no matter
18:07
where they sit in an organization. Wonderful,
18:10
Jared, thank you so much for your time.
18:13
Unfortunately, we are out of time, so we'll
18:15
have to leave it there. Jared
18:18
Spataro, thank you so much again for joining
18:20
us. And we'll be right back in just
18:22
a few minutes with our next guest. Please stay with us.
18:25
The following segment was produced and paid for
18:27
by a Washington Post Live event sponsor.
18:30
The Washington Post newsroom was not involved in
18:32
the production of this content. I'm
18:34
Kathleen Koch, a longtime Washington
18:36
correspondent. Every business and
18:39
individual wants to protect their most
18:41
sensitive data, right? But that's becoming
18:43
more challenging by the day as
18:45
cybercrime increases, leading often
18:47
to both loss of revenue and
18:50
customers. But AI is giving companies
18:52
a new incredibly effective tool. Well,
18:54
here to talk with me about
18:56
that today is Todd Kramer. Todd
18:58
is Director of Security Ecosystem Business
19:01
Development at Intel. Thanks for joining
19:03
me, Todd. Great to be here,
19:05
Kathleen. Todd, how
19:07
would you describe the threat landscape
19:09
that businesses face today? We know
19:11
that cyberattacks are more frequent, but
19:13
how have they evolved? Yeah,
19:16
sure. I mean, obviously, people have seen the
19:18
news in the headlines, right? There's no shortage
19:21
of factoids that show that this
19:23
is having an impact. Take ransomware.
19:25
Health care, school districts, even
19:28
personal individuals are getting devastated by
19:30
these types of attacks. And so
19:33
we know it's top of mind. And so
19:35
you've seen the security industry elevate in importance
19:38
and increase in budget spend by companies
19:40
of all sizes to better prepare to
19:42
protect their workers. And so that's
19:45
the first part. The second part
19:47
is these attackers are constantly evolving.
19:49
They know how security software like
19:51
antivirus that we run at home
19:53
or endpoint detection and response that
19:55
enterprises run, how they look for
19:57
these threats. So they're constantly obfuscating
19:59
and trying to hide. And so I think
20:01
some of the things we can talk about here
20:03
today is how AI is going to help uncover
20:05
those hidden areas that help bolster
20:08
protection overall. So are
20:10
the criminals using new tools? They
20:13
are. You have these crime
20:15
syndicates, actually syndicating
20:17
ransomware-as-a-service tools, other
20:19
things, you know, the
20:21
cost to actually run a ransomware
20:23
attack, I think is down to
20:26
$1,000. So anybody, not just
20:28
nation-states but hackers, even kids, can
20:30
get ahold of these tools, learn
20:33
how to get access to a credential,
20:35
learn how to launch their favorite attack
20:37
and get paid in Bitcoin behind the
20:40
scenes. So there's lots going on that
20:42
are showing these attackers are moving more
20:45
rapidly and having success. So
20:47
how is AI so
20:49
important in
20:51
thwarting these types of attacks? Yeah,
20:54
sure. The AI itself, so you think
20:56
about all these alerts that are generated
20:58
in the security software, it's approaching a
21:01
point, or has already reached one, where humans can't
21:03
process all this. What's a false positive?
21:05
What's a real attack? Do I respond
21:07
to this alert from this user's desktop?
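The triage problem described here, too many alerts for humans to process, can be sketched roughly as follows. The features, weights, and ransomware-like signals are illustrative assumptions of mine, not any vendor's real detection model, which would use trained classifiers rather than hand-set rules:

```python
# Sketch of AI-assisted alert triage: score each alert on a few behavioral
# features and surface the riskiest first, so an analyst isn't wading
# through every false positive. Features and weights are made-up examples.

from dataclasses import dataclass

@dataclass
class Alert:
    host: str
    process: str
    connects_to_rare_domain: bool
    encrypts_many_files: bool
    signed_binary: bool

def triage_score(alert: Alert) -> float:
    """Higher score = more likely a real attack (e.g. ransomware-like behavior)."""
    score = 0.0
    if alert.encrypts_many_files:
        score += 0.6   # mass file encryption is a strong ransomware signal
    if alert.connects_to_rare_domain:
        score += 0.3   # unusual command-and-control-style traffic
    if alert.signed_binary:
        score -= 0.2   # signed code is weak evidence of legitimacy
    return score

alerts = [
    Alert("desk-01", "backup.exe", False, False, True),
    Alert("desk-02", "updater.exe", True, True, False),
    Alert("desk-03", "browser.exe", True, False, True),
]

# Analyst queue: most suspicious alerts first.
queue = sorted(alerts, key=triage_score, reverse=True)
for a in queue:
    print(f"{a.host} {a.process} score={triage_score(a):.1f}")
```

The ranking, not the individual scores, is what matters: the human still decides, but spends their time on the alerts most likely to be real.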
21:10
So AI, from a digital assistant
21:12
standpoint, will enable the security analysts to
21:15
better triage all of these alerts. The
21:17
second part is what I'll call AI
21:19
for security. And these are unique things
21:21
that we're working on today, to use
21:24
AI itself to help uncover these attacks
21:26
so they can't hide, like I talked
21:29
about previously. So
21:31
how has Intel then been incorporating
21:33
AI into your products to boost
21:35
security for businesses and individuals? Sure.
21:39
On the server side, you know, the
21:41
security software has been using cloud-side
21:43
AI and deep learning. So everything's gathered
21:45
off an endpoint agent on your
21:47
workers' machines, and all that data is sent to the cloud,
21:49
and the AI is done in the cloud itself
21:52
to discover the threats. This
21:54
year, we launched Intel Core Ultra,
21:56
and it has an NPU,
21:58
a neural processing unit, alongside
22:00
a GPU and the CPU. All
22:02
three of those do AI optimization.
22:04
So now what you have is the
22:06
ability for the security software industry to
22:08
apply AI, to run it
22:10
directly on your machine using
22:13
your laptop. And when you put that
22:15
AI closer to the detection source, you
22:17
can do interesting things, right? You can't
22:19
send all that user data to the
22:22
cloud, it's impractical from a cost standpoint.
22:24
But if you put the AI there to do
22:27
its job, now we're seeing these
22:29
novel use cases of increased ability to
22:31
classify malware on the fly, and
22:34
all sorts of other things. So we're
22:37
not only accelerating the ability
22:39
of these vendors to find
22:41
threats, but we're actually detecting
22:43
more of those threats with this AI. And
22:46
I take it this is faster and more scalable
22:48
than anything that humans can do. That's
22:51
right. And so you're going to see
22:53
not only the bad guys
22:56
launching these attacks at scale, but also this
22:58
AI for good: the
23:00
ability of hardware to enable security
23:03
software to do AI
23:05
faster, better, and at scale,
23:08
and those types of things are possible here.
23:10
So that's where we're headed. I
23:13
know that really, this is just
23:16
the tip of the iceberg, we're just beginning to see
23:18
what AI can do. Point
23:20
us on the horizon, and from a security
23:23
perspective, what would you say is the next
23:25
big innovation that we can expect? Before
23:29
I answer where we're headed, I'll just
23:31
point out that for the last five
23:33
years, with Intel Threat Detection Technology on our vPro
23:35
laptops, we have had AI for security
23:37
already working on that. It
23:39
offloads to our GPU, does a
23:41
CPU detection assist for ransomware and for
23:44
cryptojacking. That's rolled out on
23:46
a billion PCs today, and you may
23:48
not know that on your personal computer,
23:51
there's an extra assist from the hardware
23:53
that you get. So that's here today.
23:56
And then this year, it's all about the year of
23:58
this AI PC. We're working with the likes
24:00
of CrowdStrike, the defenders, all of those
24:02
types of vendors to unleash
24:04
these novel use cases with this
24:06
new horsepower down
24:09
on the client. So you're seeing
24:11
ISVs release software, put out blogs
24:13
on new techniques. And then in
24:16
the longer term, you're gonna see
24:19
the tools
24:21
for security analysts to be able to triage
24:23
things and heal infrastructure on
24:25
the fly. So it's their ability to respond
24:27
that's gonna be automated. And that's where we're gonna
24:29
be a couple of years out. That's
24:32
really exciting, especially if it keeps
24:35
us and our data safer. We hope
24:37
so. Well, thank you so much. Todd
24:39
Kramer, Director of Security Ecosystem Business Development
24:42
at Intel. Really enjoyed
24:44
the fascinating conversation. Thank
24:46
you. And now back to Washington Post
24:48
Live. Welcome back.
24:50
To those of you who are just joining
24:52
us, I'm Danielle Abril, Tech at Work writer
24:54
for the Washington Post. My next
24:57
conversation will examine how the workforce will
24:59
need to be educated and trained in
25:01
new ways with the rise of technologies
25:03
like AI. I'm
25:05
joined now by Dr. Ayanna Howard.
25:07
She's the Dean of Ohio State
25:10
University's College of Engineering. Dean Howard,
25:12
welcome to Washington Post Live. Thank
25:15
you, thank you. So
25:19
you are a roboticist, an entrepreneur
25:21
and an educator, and
25:23
you use an interesting term
25:25
called humanized intelligence instead of
25:28
artificial intelligence. Can you tell
25:30
us, you know, what's the distinction there? Well,
25:33
so when we think about intelligence, just in
25:35
general, when we put it in computing or
25:38
artificial, really what we're thinking about
25:40
is how do we use AI to
25:43
augment our human functions? And so humans
25:45
are fundamental to thinking about the data,
25:47
thinking about the outcomes, thinking about what
25:50
it is that we wanna do. And
25:52
so when I think about humanized intelligence,
25:54
it's really about thinking of the next
25:56
generation of intelligence as AI,
25:58
but coupled tightly with humans, to
26:00
engage and enable us, improve
26:02
our quality of life and incorporate
26:05
things around workforce development. And so
26:07
that's why I've used humanized intelligence really for
26:09
the last 15 years. That
26:13
makes a lot of sense. In assessing
26:15
the impact of AI on
26:17
jobs, you have said that
26:19
while the technology will change existing
26:21
jobs, there could be whole new
26:23
fields as well as
26:25
new opportunities. How so,
26:27
can you explain that a little bit? And
26:30
what role will the human emotional quotient you've
26:32
spoken of play? Yeah,
26:34
so one of the things we think about computer
26:36
science and everyone's like, oh, I wanna be a
26:38
computer scientist and I wanna go into coding. But
26:41
computer science as a discipline, like I
26:43
can enroll and major in computer science.
26:45
If you look at engineering, which
26:47
existed in the 1850s, 1860s, there
26:51
was no discipline called computer science. And yet
26:53
computer science is one of the fastest growing
26:55
jobs even today. Jobs
26:57
change as we expand and
27:00
increase our technology footprint. And
27:03
so when I say we don't even know what jobs
27:05
are for the next year, for the next 10 and
27:07
next 20, it's because as
27:09
we grow, as we advance, we
27:12
are required to think differently about how we
27:14
train our students, how we train our next
27:16
generation. I like to think
27:18
about robotics. When we do have
27:20
self-driving cars on the road, who knows
27:22
when that will happen, but it will. Do
27:25
we need robotic mechanics? What about the
27:27
gas station attendants? Are there gonna be
27:29
a new breed of gas station attendants
27:31
that can deal with robotic cars? We
27:34
don't know, but we really need to train
27:36
the next generation so that they can adapt
27:38
to the new requirements in the new jobs.
27:42
You know, it's funny, we're already seeing
27:44
that in San Francisco where I'm based.
27:46
We're seeing the self-driving cars
27:48
run around and trying to deal with
27:50
the issues there. So yeah, a
27:52
reality already coming to life. That
27:55
being said, you mentioned this, we saw it a little
27:57
bit in the intro, how
28:00
students are gonna need to be educated in new
28:03
ways to adapt to this
28:05
new workplace and economy. I
28:07
want you to expand on that. How
28:09
do we need, you talked a little bit about
28:12
prompt engineering and things like that, but what are
28:14
ways that we need to think about how students
28:17
are gonna be educated, especially when you're
28:19
talking about jobs that we don't even know
28:21
are gonna exist? How do you educate
28:23
that workforce and are you already seeing
28:25
these changes happen? I
28:28
am seeing the changes happen and
28:30
I am a proponent of college
28:32
education. And the reason is
28:34
it's not necessarily the discipline, but
28:37
when you go to college, even if
28:39
it's two year college, you're given the
28:41
tools to think, ask questions and figure
28:43
out things. And so when
28:45
we provide the tools and I always
28:47
think about, can we create
28:50
a computer scientist that really
28:52
is fundamentally a humanities student?
28:56
Can we add those together so that I love
28:58
English, I love language, but can I figure out
29:00
how to use large language models to
29:04
create better writing styles,
29:06
to tease out editorials?
29:08
Those are the kinds of things we have to think
29:10
about as the next generation of the tools. And
29:13
so in education, what we do is we teach
29:15
students how to learn so
29:19
that when they change their jobs in 20, 30,
29:21
40 years, they are comfortable with, oh wait,
29:23
it's a new job, new requirements. Oh
29:25
wait, I know how to learn, I know how
29:27
to figure this out, I can adapt as the jobs adapt. So
29:32
it's really about the skillset and less about
29:34
necessarily like, training
29:37
to be a computer scientist or a coder or
29:40
a software engineer and specifically like
29:42
just knowing how to learn and
29:44
knowing the skills you might need going in to
29:47
sort of adapt to those new jobs. Exactly,
29:50
exactly. And that, believe it
29:52
or not is difficult. Learning how to learn, learning
29:56
how to ask questions, learning to be
29:58
curious
30:00
always and all the time, even learning
30:02
to question the things that come out
30:04
in terms of AI and artificial intelligence,
30:07
those are hard skills. Traditionally, if you
30:09
think about it, as students, a teacher
30:11
comes, they say, you believe them. So
30:13
now when you're in the workforce, when you're
30:15
interacting with AI as your co-pilot, for example,
30:18
it's like, oh, I'm just going to
30:20
take the guidance. I'm just going to
30:22
understand. And so that whole skill set
30:24
of being curious and questioning and expanding
30:26
our own knowledge, liking to
30:28
learn is actually not always natural
30:31
when we think about it. I
30:34
want to read an audience question we have
30:37
from Indra Klein in Washington, DC.
30:40
Indra asks, given the perceived
30:42
gap with respect to technical skills
30:45
needed for the future of work,
30:47
any thoughts on better integration of
30:49
tech-related skills in K through 12
30:52
curriculum with
30:54
traditional basic subjects? Yes,
30:57
I truly believe that compute, and
31:00
I'll call it computational skills versus
31:02
computer science, but computational skills, which
31:04
also links to AI, should
31:07
be one of the elements. So you think about K through
31:09
12. We just assume that
31:11
you will learn how to read. We
31:14
just assume that you will learn how to do
31:16
basic math skills. And we've designed our curriculum from
31:18
K through 12 so that at
31:20
the end of high school, you are able
31:22
to read at some grade level. You are
31:24
able to do some basic math. We need
31:26
to do the same thing around computational
31:29
skills. When you have a
31:31
student in kindergarten and they are
31:34
using their apps, which they are, actually
31:36
asking them, OK, let's think about the data.
31:39
When you're providing and you're playing your game,
31:42
do you know what data is? Let's
31:44
define data. What do we think about
31:46
when we're collecting your data? You can
31:48
start as simple as kindergarten and continue
31:50
going through that. And so by the
31:52
time you're in high school or graduate,
31:54
you will have a basic understanding. And
31:56
the skill sets of computation, whether it's
31:58
early, very basic, like Python coding
32:00
or maybe it's just like I know how
32:03
to prompt my chat agent,
32:05
my chat bot so that I can get better
32:07
answers for my homework. We have
32:09
to integrate that deeply. I
32:11
think of it as AI literacy, computer
32:13
science literacy. Now, that's
32:16
a total change from the way
32:18
that probably you or I were educated
32:20
growing up. So that would be amazing
32:23
to see what comes out of that
32:25
if you're educated at kindergarten with that
32:27
knowledge and then you go through college.
32:30
I bet you have a totally different mindset. I
32:33
want to move to another question. We saw in the
32:35
intro video, you emphasized coding will
32:37
be a key skill for future generations. I
32:40
want to talk about who specifically will benefit
32:42
most from this and how should we think
32:44
about coding for future generations? I mean, you
32:46
talked a little bit about it in this
32:49
last question. Can you expand on that a
32:51
little bit? Yeah.
32:53
So when I think about coding,
32:55
it really is how do we give students
32:57
and next generation of workers, the future
33:00
workforce, the ability to think in a
33:02
logical fashion, identify a problem, figure out
33:04
the steps to solve that problem, and
33:06
then come up with a solution. I
33:09
call it engineering know-how, but when everyone says
33:11
engineering, it's like, oh my gosh, that's like
33:13
too hard. But when you say
33:15
coding, people are like, oh, okay, I can
33:17
learn how to code. But coding really gives
33:19
you that ability to process in that very
33:21
logical sequential problem-solving way.
33:24
And so that's really what I say coding
33:27
skills. It's the ability to think in that
33:29
way, think in that logical sequence, problem-solve. And
33:32
it really doesn't matter what the code is. I
33:34
need to figure out how to make you think
33:36
differently. And that's why that skill is
33:38
so important when you go out. I
33:41
know people who learn how to code in
33:43
one language, say, 10 years ago, you
33:45
know, BASIC or Pascal or assembly. Now
33:47
they can program in any language that comes
33:49
up, whatever the code language of the
33:51
day is. It's very easy to pick up because you
33:53
have that basic understanding of how to
33:55
think about the basic building blocks, the
33:58
sequences, and... what are the outcomes
34:00
that are expected? So
34:03
we're already seeing generative AI
34:05
tools gradually being incorporated into
34:07
the workplace, from ChatGPT
34:09
to, as our
34:11
previous speaker talked about, Co-Pilot. What
34:14
are the basic skills that employees need
34:17
to know as they engage
34:19
with these AI tools and how do we
34:21
go about learning them? So
34:24
I believe in trial
34:26
and error. So
34:29
just use it, just think about what is it that
34:31
you wanna solve? What is it that you wanna do?
34:34
If you are a journalist and
34:37
you would like to write an article, ask
34:39
your agent, whether it's co-pilot or
34:42
Gemini or Anthropic Claude,
34:45
oh, please write an article about the future
34:47
of work for the Washington Post. Ask
34:49
it three different ways and you as a
34:51
person can see, oh wait, I get a
34:54
better response if I ask or prompt it
34:56
in one way versus the other. And
34:59
that allows you to basically learn by doing.
35:01
I think that's really the only way that most
35:04
people can become comfortable using these tools and
35:06
not being afraid that they're using it wrong. It's
35:08
just like you would ask a parent or a
35:10
child and you don't like the answer, you ask
35:13
again and you ask again until you get the
35:15
answer that you want. You should do the same
35:17
thing where you're using these tools that
35:19
are augmenting our work and
35:22
making us more efficient, but we also have
35:24
to learn how to get it
35:26
to make us more efficient. So
35:30
obviously one of the big barriers to
35:32
adoption is trust, people not trusting these tools
35:34
for various reasons, maybe they're worried about
35:36
it, maybe they're scared of it or
35:38
maybe they've just had some experiences with
35:41
hallucinations and errors. What
35:44
role does the private and public
35:46
sectors play in building trust in
35:48
a technology that could disrupt jobs
35:50
and society in other ways? So
35:54
I think of trust in really two
35:56
tranches. So one, a lot of
35:58
individuals say, oh, I don't trust AI
36:00
or don't trust the companies. And
36:03
you look at the surveys and oh yeah, okay. But
36:05
then if you look at the behavior, the behavior
36:08
doesn't match what people say. People will still use
36:10
the tools. People will still use the AI. When
36:13
it hallucinates, it's not like they cut it off, like oh,
36:15
I'm never going to use it again. It's
36:17
like oh, okay, I'm going to try it again and see what it says.
36:20
And so what we call behavior is
36:23
so much different. The behavior of people shows
36:25
that we actually trust the use of these
36:27
tools. Because the surveys say
36:29
no, I don't trust it. I
36:32
think one of the things is that we
36:34
are not being truthful to ourselves. We
36:36
are not really thinking about the fact that
36:39
we like a lot of these AI tools
36:41
because when it works, it does make us
36:43
more efficient. It does make us better. It
36:45
does improve our learning capabilities. And when it's
36:48
wrong, that's when we're like oh, we don't
36:50
trust it, but we're going to still use
36:52
it. And so that's kind of when I think about trust
36:54
in those two tranches in
36:56
those aspects. And so for companies, I
36:59
think that one, companies should take a
37:01
little more responsibility about thinking
37:04
of how do I build in
37:06
distrust of the behaviors of the AI
37:09
so that people won't start over-trusting
37:11
it in terms of its use. I
37:14
always think about maybe we should have
37:16
denial of service at some point. Like today we're going
37:18
to have an off day for AI. I
37:21
mean, could we have companies actually
37:23
think about that as a possible
37:25
way of building distrust, but
37:28
then also trust in that
37:30
aspect? You're
37:33
one of 27 experts appointed to
37:35
the National Artificial Intelligence Advisory Committee.
37:38
Recently, prominent academics have called
37:41
on the Biden administration to
37:43
fund AI researchers to help
37:45
them keep up with the tech giants. Do
37:48
you also share this concern? I
37:51
do. So one of the problems with
37:54
current versions of AI and really it's
37:56
generative AI is that you need a
37:58
lot of compute, which basically means
38:00
you need back ends, you need to figure
38:03
out how do you have these large servers,
38:06
which means that it's very, very costly.
38:08
As an academic researcher, most researchers aren't
38:11
able to afford to take a billion
38:13
parameters or learn around a billion types
38:15
of input data to come up with
38:17
a solution, whereas companies can. But
38:19
if you think about the fundamental things
38:21
that have moved our entire society forward,
38:24
that has come from basic research. That
38:26
has come from researchers working together, trying
38:28
to figure out sometimes working 20 years
38:30
and then, oh my gosh, we have an mRNA
38:33
vaccine. This is what's
38:35
required. And so really
38:37
thinking about what can we do when
38:40
we can have government support in terms
38:42
of open source resources, in terms of
38:45
compute that is freely available to researchers,
38:47
and basic researchers, both in academia and
38:49
in the K through 12 system. It
38:51
allows us to work on problems that
38:53
might take 20 or 30 years to
38:56
solve. There is not necessarily a commercial
38:58
benefit in the now, but it
39:00
can solve things that are really
39:03
important in terms of education, health
39:05
care, water scarcity, manufacturing, mobility, all
39:07
the things that we care about
39:09
as a society, but it's not
39:11
necessarily financially profitable in the now.
39:15
So as I mentioned in your intro, you're
39:17
also an author, along
39:20
with everything else, being an entrepreneur, being an
39:22
educator. And you wrote the book
39:24
Sex, Race, and Robots: How to Be
39:26
Human in the Age of AI. How
39:29
does AI inherit human biases? And can
39:32
it be circumvented? I kind of wonder,
39:34
is the genie out of the bottle?
39:36
We've seen so many of these cases
39:38
and situations where AI
39:40
has gone wrong and just been really
39:43
biased in their answers. Where
39:45
are we at? Can we pull some
39:47
of that back? We'd love to
39:49
hear your opinion on that. Yeah,
39:52
so the reason why AI in its
39:54
current rendition has bias
39:56
is this element of what
39:58
I call humanized intelligence, learning from
40:00
our data. And the fact is, like, as
40:02
people, we are biased. We have historical biases.
40:05
And if you think about collecting data in
40:07
the last year versus the last 10 years
40:09
versus the last 20 or 30, there
40:12
are extreme cases of bias against pretty
40:14
much any group you can think about. And
40:17
so AI inherits this bias.
40:19
Now, I will say that
40:21
AI is better than biases
40:24
in people. But the
40:26
problem is, because we over
40:28
trust AI, when it says something,
40:31
we very rarely question. And so
40:33
that bias gets amplified if it's
40:35
making decisions around, again, housing, policing,
40:37
surveillance, it's inheriting that bias. So
40:39
how can we circumnavigate? It is
40:41
out of the bottle. The genie
40:43
is out of the bottle. It's
40:45
out there. It's being deployed. I
40:47
think one of the things we can do to deal
40:49
with this is, one, putting in
40:52
more safeguards for the things that
40:54
are around our liberties, around things
40:56
that really can cause harm to
40:58
us as a society. And when
41:00
I say safeguards, safeguards are about human
41:03
oversight. Safeguards could be about rules and
41:05
regulations such that they're designed not to
41:07
impede progress, but to ensure
41:09
society does not have any issues. There's
41:12
no harm to society. And we can
41:14
look at the UN to say,
41:16
OK, what are these things that are really, really
41:18
dangerous for AI to be in and make
41:21
sure that we have human oversight till
41:23
we get to the point that the
41:25
bias in AI is mitigated. It will
41:27
never be eradicated because we as people
41:29
will always have some biases, but at
41:31
least mitigate the bias from AI based
41:33
on their decisions. And
41:36
you recently wrote, technologists aren't
41:39
trained to be social scientists
41:41
or historians. We're in
41:43
this field because we love it. And
41:45
we're typically positive about technology because it's
41:47
our field. That's a problem. We're not
41:50
good at building bridges with others who
41:52
can translate what we see as positive
41:54
and what we know are some of
41:57
the negatives as well. As we
42:00
kind of wrap up this segment, where do
42:02
you think those bridges need to be built
42:05
better in terms of where we are today? Well,
42:08
one, I think that as
42:11
technologists, we should be
42:13
learning about things like ethics
42:15
and social science. Even
42:18
though I got into engineering and coding because that's what
42:20
I want to do. I'm not
42:22
really a great reader. And so it's like, oh my gosh,
42:24
I have to read more? No, I want to do math.
42:26
I want to code. So it's also
42:28
anti what I enjoy. But
42:30
I think the thing is, we are becoming
42:33
so much more reliant on technology. And
42:36
in order to contribute to the good,
42:39
we also need to think about what is the harm that
42:41
we are instituting and what is the
42:43
harm that we are perpetuating. I
42:46
can think about coding. I can think about my
42:48
tools. I can think about AI. And I'm able
42:50
to put in guardrails in
42:52
my own code when I think
42:54
about it. It's so much harder
42:56
after the fact. And so I
42:58
think it's not just technologists. We should
43:00
be self-aware and start learning ourselves. But
43:02
I think companies should also help their
43:04
technology workforce to become more ethical. And
43:06
it's not good enough just to put
43:08
an ethicist on a team because
43:10
as technologists, we tend to respect other
43:12
technologists. I'll just say that. And
43:15
so it really is helping us
43:17
become better, better humans, a better
43:20
society. It's
43:24
hard because it's based on a human problem,
43:26
not on a technical problem. But I think
43:28
it's one of the ways to get to
43:30
the other side. Got
43:32
it. Yeah, that makes a lot of
43:34
sense. Unfortunately, we're actually out of time. So we're
43:36
going to have to leave it there. Dean
43:39
Ayanna Howard, thank you so much for joining
43:41
us today. Thank you. Thank
43:43
you for having me. Thanks for
43:45
listening. For more information on our
43:48
upcoming programs, go to
43:50
washingtonpostlive.com. Is
43:52
there anything more satisfying than finding something that
43:54
perfectly lines up with your taste and checks
43:56
all the boxes? Like getting
43:58
the perfect fit with a suit from Indochino?
44:01
Their suits are made to measure and
44:03
totally customizable with endless options. From
44:06
timeless classics to bold statements, you can
44:08
express your style exactly how you want.
44:11
Choose your own cut, fabric, lining,
44:13
buttons, lapels, and more to create the
44:16
suit of your dreams. All
44:18
at a surprisingly affordable price. They
44:20
also offer fully customizable blazers,
44:23
pants, outerwear, womenswear, and more.
44:26
Every Indochino piece is made to your exact
44:28
measurements, and they make getting measured easy. Simply
44:31
set up your measurement profile in less than 10
44:33
minutes. You can send your measurements online
44:35
from the comfort of your home or make an appointment
44:38
at one of their showrooms. Find
44:40
the perfect fit with Indochino. Go
44:42
to indochino.com and use code PODCAST
44:44
to get 10% off any
44:46
purchase of $399 or more. That's
44:49
10% off with code PODCAST.