Episode Transcript
0:00
Support for this episode comes from
0:02
OFX Money Transfers. Simplify
0:04
your international money transfers with OFX.
0:06
With over 20 years of experience, they offer fast,
0:08
reliable, and secure solutions. Their
0:11
trusted OFX experts will help you
0:13
optimize your global cashflow so you can focus
0:15
on your business. OFX, an
0:17
easy way to move money globally. OFX
0:20
is a US licensed money transmitter and registered
0:22
money service business. Reduce your foreign
0:24
exchange headaches today. Go to OFX.com
0:27
to get started.
0:30
Now more than ever, startups
0:32
need a safe place to put their cash, and with
0:34
Mercury, protecting your money doesn't have to
0:36
mean settling for a clunky, complicated
0:39
banking experience. Through partner banks
0:41
and their sweep networks, Mercury offers up to
0:43
five million in FDIC insurance, 20 times
0:46
the per bank limit. They also make it easy
0:48
to invest any cash above the FDIC
0:50
insured amount in a money market fund, giving
0:52
you even more peace of mind. Visit Mercury.com
0:55
to join more than 100,000 startups that
0:57
trust Mercury with their finances.
1:02
Episode 258. 258 is a country
1:04
code belonging to Mozambique. In 1958, Bank
1:07
of America introduced their first credit card and
1:09
the US 50 star flag was
1:12
designed by a high school student for a history
1:14
project. He got a B minus for
1:16
it. True story, my sex ed
1:18
teacher at my high school was fired. He
1:21
was teaching the students about ejaculation,
1:23
but it went right over their heads. That's
1:26
good. That's good.
1:29
Go, go, go! Go, go, go! Welcome
1:39
to the 258th episode of the Prop
1:42
G pod. In today's episode, we speak with
1:44
Mo Gawdat, the former chief business officer
1:46
at Google X, an expert on happiness and the
1:48
author of Scary Smart, the future
1:50
of artificial intelligence and how you can save
1:53
our world. We discuss with Mo the need to control our response
1:56
to AI, how this technology is impacting
1:58
society and the four major threats he's identified.
1:59
We also hear about Mo's transition out
2:02
of tech to focus on happiness and
2:04
how he's dealt with profound loss.
2:07
This was, we all agree, this is one of our
2:09
favorite conversations. He's just, I mean, in
2:12
addition, obviously being very successful and very smart.
2:15
He's a very soulful, thoughtful
2:17
guy. And anyways, I'm
2:21
in New York after my
2:23
Arrested Adolescence World Tour of Ibiza
2:25
and Mykonos. I'm trying to get some work done here
2:28
in the city. I love
2:29
London. Actually, do I love London? Is
2:32
that fair? I like London.
2:34
The weather, I don't know if you've heard, the weather's not great in London.
2:37
And my son's at boarding school, so I've already
2:39
like, I was expecting to lose him in three years, not now,
2:41
and that's going to bum me out. But New
2:44
York, I don't care what anyone says, New York
2:46
is number one. There's nothing like it here. I'm in
2:48
Soho and I'm walking around and it's
2:50
just booming. And I
2:53
can't get over, you know, the businesses that went
2:55
out of business in the pandemic. There's new ones
2:57
and really, it feels like there's kind of the
2:59
city has shed its skin and it's back stronger
3:02
and better than ever. And
3:04
I walked around Soho on Sunday
3:06
and just went shopping and stopped in and
3:08
got, went and spent $85 at
3:11
Balthazar Boulangerie on donuts
3:15
and coffee and pastries. Granted, I had taken
3:17
an edible. So you don't want to go to just
3:19
a pro tip here. You don't want to go to Balthazar's Boulangerie
3:22
High. That's just a recipe for spending $80
3:25
on fine French baked goods. But
3:29
it was worth it. It was worth it. It was worth more
3:31
like $85. And then I went across the street
3:34
to the MoMA
3:34
store where they just sell like
3:37
cool skateboards, overpriced, painted
3:39
skateboards, see above edible. And
3:41
I just had the best time walking around
3:43
alone.
3:44
I'm also, I'm
3:46
really going off script here. I'm getting
3:48
recognized all the time now. And I like
3:51
it because people are super nice. And I mean this
3:53
sincerely. If you recognize me and you have any
3:55
inclination to say hi, say hi.
3:57
I'm friendly and it always makes me feel good to see people.
3:59
but, but I read something and I wish I hadn't
4:02
read it. And that is
4:04
supposedly every time someone comes up and says hi
4:06
to you, there are a hundred people who recognize
4:08
you and don't say hi. And that makes sense, because I would
4:11
imagine I've seen several hundred people I recognize
4:14
from various media and I never say hi to people.
4:16
I'm self-conscious, I don't wanna bother
4:18
them, whatever it is, or my ego's too big to go up
4:20
and express admiration. And
4:22
so I'm a bit paranoid now when I'm out
4:25
alone, wandering around, high,
4:28
or high on an edible. I don't know if that qualifies as high. And
4:30
I think, oh my God, hundreds of people are like
4:33
looking at this
4:34
old man they know, walking around high in the MoMA
4:36
store and buying pastries. Anyways,
4:39
that has nothing to do with the show today.
4:41
So where are we? Where am I? Anyways,
4:43
enough about me. I'm headed to Colorado on Saturday. If
4:45
you see me walking around the streets, I'm
4:48
embarrassed. I should've said Aspen, I'm going to Aspen,
4:51
but I'm embarrassed. Isn't it strange? I was embarrassed
4:54
by how little money I had growing
4:56
up. And as a young man, I was very
4:58
self-conscious about the fact I didn't
5:00
have a lot of money. And now that I have a lot
5:02
of money, I'm self-conscious about that. I never had the right
5:04
amount of money. I've never had the right amount.
5:07
No, I shouldn't say that. It's much better. This is
5:09
much better, but I'm still self-conscious.
5:11
And it's weird how I caught myself saying I'm going to
5:13
Colorado, because I don't wanna, so I'm going to Aspen.
5:15
Anyways, enough about me. Where
5:17
are my pastries? Okay, let's
5:20
get onto the business news. The thing everyone
5:22
is talking about,
5:24
the most ascendant platform in
5:26
history, in history. Even Mark Zuckerberg
5:29
is surprised at how well that went. Well, I
5:31
mean. So what do we have here? We
5:34
have Threads, which is
5:36
an interesting study in disruption. Why
5:38
is it an interesting study? Disruption
5:40
is more a function,
5:42
more a function of how vulnerable the
5:44
sector is. In other words, we
5:46
have a tendency to just think of it in the context of Amazon
5:49
being excellent at execution and having access
5:51
to cheap capital, which they did.
5:53
But the reason why Amazon got to 100 billion
5:57
as fast or faster than I think almost any company.
7:55
Cost
8:00
me 12 bucks. Apple TV Plus is 5 or 8
8:03
bucks. Amazon Prime Video
8:05
is another, you know, it's included in my Amazon
8:07
Prime. So I'm paying 20 or 30
8:10
bucks for this,
8:11
this torrent, this Mariana Trench
8:13
of content, where I got to
8:15
pay 60 or 70 to watch
8:18
Fareed and Kim,
8:21
you know, the Kardashians. By the way, I would pay 50 or 70
8:24
bucks a month so I didn't have to watch the Kardashians.
8:27
And I thought, Jesus, this just brings home how
8:30
big a chin ad-supported cable TV
8:32
is. So the bottom line is disruption
8:35
is a function of how vulnerable the disrupted is, and
8:37
there's nothing more vulnerable
8:40
than a total asshole, and that's
8:42
how Elon Musk has acquitted himself over
8:44
the last six months. What advertiser, the
8:47
fastest way
8:48
they get unceremoniously fired as a CMO
8:51
would be to advertise on Twitter. It
8:53
is all risk with no upside.
8:56
It's never been a great platform in terms of ROI,
8:58
but now there's more downsides. So that's just a,
9:01
that's just not a great value proposition.
9:03
When you're in Cannes, by the way, I was in Cannes, Twitter used to have a beach.
9:05
You didn't see anyone from Twitter. I
9:07
saw, or I met, one really lovely
9:09
young man who's a Twitter salesperson. I remember, I
9:12
wish I remembered his name, and I remember saying to him, you've got the
9:14
worst job in the world. And I felt like he was
9:16
on an apology tour, just roaming
9:18
around Cannes. And neither Musk
9:21
nor Linda Yaccarino, who claims to be the CEO
9:23
but clearly isn't, showed up, because I think they thought,
9:25
oh, we're just
9:26
gonna get shit and just be apologizing and explaining
9:28
all the time. And so what do you have?
9:31
It's not that Threads is that good. Threads has a good
9:33
UI/UX, but it probably doesn't have the future
9:35
functionality of a, what's
9:38
it called, Blue Note, Bluesky, the
9:41
Jack Dorsey competitor, Mastodon,
9:44
or even Post News, founded by who
9:46
I think is probably the most impressive digital
9:49
product guy in the world right now, Noam
9:51
Bardin, and has done great relationships,
9:54
full disclosure I'm an investor, but has great relationships with
9:56
news sources, Reuters, etc. But
9:59
here's the thing.
9:59
They have a 300 million person cannon
10:02
they can fire at
10:04
a new app. And it's just striking
10:07
and people were just so desperate
10:09
for an alternative. But not only desperate for an alternative,
10:11
here's the thing. People
10:14
didn't want to have to rebuild their network.
10:16
It's taken me 15 years
10:17
to get to 550,000 followers on Twitter. And
10:21
it's a huge asset base for me and it really helps
10:23
me with my career. I think
10:25
of myself as a thought leader and I realize how pretentious
10:27
that is. But I'm a thought leader that does edibles and
10:29
goes to MoMA, cuz I'm deep. I'm
10:31
deep. Anyways,
10:34
I've invested a lot in it and it pays off
10:36
for me. It's got good spread and good reach for
10:38
my content. And the moat
10:40
is huge, it's hard for me to go anywhere else. It's not
10:43
interoperable. My fan base
10:45
or follower base or whatever you wanna call it or my community,
10:48
my peeps is not portable. So
10:50
every time I would start on one of these other platforms, I
10:53
would build it to 10 or 20 or 30,000 followers. And
10:55
I think, God, I'm kinda out of breath. And
10:57
here's what it means to be in a monopoly with
11:00
too much concentration of power. They
11:02
can take that cannon of 3 billion
11:04
active users and point it at this thing. And I'm
11:06
at letter, not A, but I start
11:08
at letter L. I think I'm at 60 or 70,000 followers already on
11:10
threads and the
11:13
UI is pleasant and guess what? Here's the
11:16
really wonderful thing. Here's the core value proposition
11:18
of threads.
11:20
It's not Twitter.
11:22
It's not Twitter. And it just strikes
11:24
me that like Mussolini came across as sort
11:26
of charming and likable in World War
11:28
II because of the characters standing on his left
11:30
and right. And all of a sudden, the
11:33
Zuck has an opportunity that's
11:35
huge here. Now, what is the meta meta
11:37
opportunity here?
11:39
What would be the gangster move if
11:41
Mark Zuckerberg called me tomorrow and he won't and said,
11:43
Scott, you've been critical of me, but let's be honest. You're
11:45
a fucking business genius. That's what I imagine
11:48
our conversation would be like. Anyways,
11:51
I would say to him, this is a huge opportunity for
11:53
meta. What's the opportunity? What gets in the way
11:55
of Meta and a trillion or $2 trillion
11:58
in valuation?
11:59
They have two-thirds of
11:59
all social media. They have an unbelievable management
12:02
team. They have incredible human capital.
12:04
They have fantastic access to capital.
12:07
They have these incredible platforms,
12:09
Instagram, WhatsApp, the core platform,
12:12
which is aging and dusty, but still has
12:14
a ton of people on it. Facebook is the
12:17
internet in a bunch of countries. This
12:19
is the opportunity to do the
12:22
pivot from
12:23
Darth Vader back to Anakin Skywalker.
12:26
And that is the pivot that was pulled off by Microsoft.
12:29
In the 90s, Microsoft was seen as
12:31
the evil empire. They were the death star.
12:33
They used incredibly full body
12:35
contact, competitive
12:37
abusive tactics. They used to announce products
12:40
knowing they weren't going to actually release the product
12:42
just to make it more difficult for another company to
12:45
get adoption in the B2B enterprise
12:47
market because they said, well, Microsoft's coming out
12:49
with a similar product. Let's just wait. When Microsoft had
12:52
absolutely no intention of releasing a product
12:54
and what did they do? They decided, well, in
12:56
this market, it's better to be a good partner. So we're
12:58
going to turn it back, different kind of gestalt. And
13:00
now Microsoft is seen as the good guys. And
13:02
I think that's a big part of the reason that Microsoft
13:05
consistently is number two and sometimes the
13:07
most valuable company in the world. And here is the meta
13:09
opportunity for Meta and for the Zuck.
13:12
Turn your hat white, boss. For
13:15
the first time in, I don't know, 10 years,
13:18
you're actually seen as the good guy. You're
13:20
seen as the good guy. So lean into that. Lean
13:22
into that. You could reduce your revenue by 10 or even 20%
13:26
and have the market cap of this company go up by
13:28
doing the following. Age-gate some
13:30
of your platforms. There's no reason a 14-year-old
13:32
girl needs to be on Instagram, and stop the bullshit pretending
13:34
you do age-gate. You don't. Become
13:37
much more stringent, much more stringent
13:39
around content, around medical
13:41
information or elections. Have a
13:44
pause 90 days before the 2024
13:47
elections. As we know, this is going to be the mother
13:49
of all shit show apocalypse
13:51
meets misinformation
13:53
disco a go-go from
13:56
Putin and his masters of misinformation
13:58
as he tries to get Trump reelected.
13:59
recognizing his only way that
14:02
he does not end up on the 11th floor with a big window
14:05
at some point is to win in Ukraine. And the
14:07
only way he can do that right now is if Trump gets reelected.
14:10
So, boss, start thinking about the commonwealth.
14:13
Start thinking about teens and
14:15
start erring on the side of caution.
14:17
Start taking shit down and stick up the middle
14:19
finger when someone starts bellowing
14:22
about the First Amendment. As far as I can tell, the First Amendment argument
14:24
is mostly from people who want their misinformation
14:27
to reign supreme. And by the way, Meta has
14:29
absolutely no fidelity to the First Amendment. The First
14:31
Amendment states that the government shall pass no law
14:33
inhibiting free speech. You're not the fucking government.
14:36
You're a for-profit.
14:37
This is the opportunity for meta
14:40
for the first time in forever. You're considered
14:42
the good guy. Lean into it. Come
14:45
to the light side of the force. Start
14:47
being the man you want to be. Start occupying the
14:49
space you command. Start thinking, okay, I
14:52
have stakeholders, not just shareholders. And by the way,
14:54
if you start recognizing your
14:56
stakeholders are teens who
14:58
are suffering from the greatest increase in depression
15:01
at the hands of bulldozer parenting and social media.
15:04
Recognize as a parent. Recognize as an American
15:07
citizen. And boss, you are a citizen. You're blessed.
15:09
Have you noticed? Have you noticed
15:11
that if you go up and down the Western seaboard
15:14
of the United States
15:16
that borders the Pacific Ocean, there's a bunch
15:18
of
15:18
multi-hundred billion dollar and multi-trillion
15:20
dollar companies. And what happens when you hit the border
15:23
just above Seattle? It stops until
15:25
you get to Lululemon. And what happens when you get
15:27
to Qualcomm in La Jolla?
15:29
It stops until you go 5,000 kilometers or 5,000 nautical
15:31
miles to
15:34
Buenos Aires and Mercado Libre. You're incredibly
15:37
blessed. It seems to me like you have a great family. Seems
15:40
to me that you do not recognize, but
15:42
hopefully as you get older, recognize how wonderful it is
15:44
to be American. Well, boss, start nodding your
15:46
head to the debt you owe in
15:49
terms of the blessings you have been
15:51
granted to be born and raised
15:53
in the great country that is America. Start giving
15:55
a shit
15:56
about America and our election. Start giving
15:58
a shit. You have kids. Start
16:01
giving a good fucking goddamn about kids.
16:03
This is your opportunity. Are you Darth
16:06
Vader or are you Anakin?
16:11
We'll be right back for our conversation with Mo
16:13
Gawdat.
16:16
Now more than ever, startups are looking for the safest
16:18
place to put their cash. Mercury offers secure
16:21
banking through an intuitive product experience
16:23
that innovates alongside of you. Through partner
16:25
banks and their sweep networks, Mercury customers
16:27
can access up to 5 million in FDIC
16:29
insurance, that's 20 times the per bank
16:32
limit. These sweep networks protect your deposits
16:34
by spreading them across multiple banks, limiting
16:36
the risk of any single point of failure. Now
16:38
in Mercury Vault, any funds above the FDIC
16:41
insured amount can be easily invested in a money market
16:43
fund predominantly composed of US government
16:45
backed securities, providing startups of any
16:47
size a simple way to manage risk and
16:50
protect their cash. Plus, it's simple
16:52
to get started. Applying just takes minutes and many
16:54
customers are approved and onboarded in less
16:56
than two hours. Visit mercury.com
16:59
to join more than 100,000 startups that trust
17:01
Mercury with their finances. Mercury is a
17:03
financial technology company, not a bank. Banking
17:05
services provided by Choice Financial Group
17:07
and Evolve Bank and Trust members
17:10
FDIC.
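The "20 times the per bank limit" line in the ad is just arithmetic on the standard FDIC limit of $250,000 per depositor, per bank. Here is a minimal sketch of how a sweep across partner banks reaches the advertised cap; the bank count and balance are illustrative assumptions, not Mercury's actual configuration.

```python
FDIC_LIMIT_PER_BANK = 250_000  # standard FDIC insurance limit per depositor, per bank
partner_banks = 20             # illustrative: enough partner banks to hit the advertised cap

coverage = FDIC_LIMIT_PER_BANK * partner_banks
print(f"total insured via sweep: ${coverage:,}")  # $5,000,000, i.e. 20x a single bank's limit

balance = 6_000_000            # illustrative startup balance
insured = min(balance, coverage)
uninsured = balance - insured  # the portion the ad suggests parking in a money market fund
print(f"insured: ${insured:,}, above the insured amount: ${uninsured:,}")
```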
17:13
Support for this podcast comes from Peloton.
17:16
Do you ever feel like you're not good at working out?
17:18
It can be difficult to find the motivation to move your
17:20
body in the middle of a busy day. Well, don't
17:22
worry your pretty head any longer because Peloton
17:25
has you covered with a slew of workout equipment to get
17:27
your body moving. Peloton wants to redefine
17:29
the way you work out. You're probably familiar with
17:32
the renowned bikes but Peloton is more than just
17:34
a stationary bike company. They also make
17:36
Peloton Tread, which offers a unique
17:38
walking and running experience that can be tailored
17:40
to your level of fitness. The Peloton
17:42
Tread takes the guesswork out of workouts
17:45
with supportive instruction that will help every runner or
17:47
walker experience what it's like to push past
17:49
their pace by personalizing your walk, run
17:51
or hike based on your comfort level and
17:54
ability. And if you're still wondering whether or
17:56
not the Peloton Tread is worth it, you can try
17:58
it out for 30 days worry-free.
17:59
If it's not right for you, return it for a full
18:02
refund. Ready to go? Take the Peloton
18:05
tread home for 30 days worry
18:07
free.
18:21
Welcome back. Here's our conversation with Mo Gawdat,
18:23
the former chief business officer at Google X and
18:25
the author of Scary Smart, the
18:27
future of artificial intelligence and
18:30
how you can save our world.
18:32
Mo, where does this podcast find you? I'm
18:34
now in Dubai. So let's bust
18:36
right into it. You've spoken a lot
18:39
about AI and believe that it's urgent
18:41
that we control it. Can you say more?
18:44
I believe that it's urgent that we control
18:46
the situation. I don't think we can control
18:48
AI sadly. I think most
18:50
of the AI as a topic
18:53
has not
18:54
been properly covered by
18:57
any of the types of media for a very
18:59
long time because it
19:02
sounded like science fiction until it
19:04
became science fact. So reality
19:07
is that
19:08
most people until we
19:10
started to
19:12
interact with chat GPT and
19:14
see Bard follow on so quickly, we
19:18
thought that this was the problem of
19:21
our grandchildren if you want. But
19:23
the truth is AI is here. AI
19:25
is continuing to take over a
19:28
lot of what we expected
19:30
humans to be able to do. AI
19:33
is smarter than us in every aspect,
19:36
every task that we've assigned to them.
19:39
And most interestingly, as the
19:41
media and the conversation
19:44
starts to talk about the
19:46
threats of AI, we try to talk about the
19:48
existential threats of
19:50
what would happen if Terminator 3
19:53
shows up. I think that
19:56
while there is a probability
19:58
that those things could happen. they
20:00
are further in the future than the
20:03
immediate threats which
20:05
are upon us already. We're not talking
20:07
three years away, we're talking as of 2023, which
20:10
require a much, much
20:13
more sense of urgency in terms of
20:17
starting the conversation so that we are not
20:19
surprised like we were surprised when
20:21
COVID was upon us.
20:23
What do you think those immediate threats are?
20:25
There are quite a few, but I
20:28
basically think that the top four
20:31
are a very serious
20:34
redistribution of power, a concentration
20:36
of power. There
20:38
is the end of the truth,
20:41
so the concept of understanding
20:44
and being able to recognize what's true and what's not
20:47
is in my personal view over.
20:49
There is a very, very significant
20:52
wave of job losses
20:55
that is immediately
20:57
upon us and that will affect
21:00
us in ways that are
21:03
much more profound,
21:06
I think, in terms of threat
21:08
to our society in general that
21:12
basically are starting to happen already,
21:14
so we need to start to react to them. I
21:17
think the most interesting
21:20
side of all of this is that we have
21:23
a point
21:25
of no return if you think about it, where
21:28
just in my book Scary Smart, I
21:31
use the analogy of COVID, not that COVID
21:33
is something that we
21:35
would like to talk about anymore, but everyone
21:38
that understood pandemic
21:40
viral propagation could
21:43
have told you around 20 years ago
21:46
that there is a pandemic that will happen. We
21:48
had lots of signals that
21:51
told the world that there
21:53
was SARS, there was swine flu, there was
21:55
so many, and yet we didn't react.
21:58
Then we had patient zero. We didn't
22:00
react, then we had patients, several thousand.
22:03
And then the reaction was blame and
22:06
finger pointing and saying,
22:09
where did that come from? Who's doing this? What's
22:11
the political agenda behind it? And then we overreacted.
22:15
And in a very interesting way, all of
22:17
that has been very disruptive to society,
22:19
to economy, to quite
22:21
a bit of the way we understand
22:24
life as it is. So my
22:27
feeling is that we've been screaming. I
22:29
personally left Google X
22:31
in 2018, in March, beginning of
22:34
March. On March 20th
22:37
of 2018, I issued my first
22:40
video on artificial
22:43
intelligence, which was
22:45
reasonably well viewed. We had like maybe 20
22:48
million views or something that
22:50
basically said that we are going to be facing
22:53
challenges and that we need to behave in certain ways.
22:56
I published my book in 2020, well, 2021, which was
22:58
the business book of the year in the
23:01
UK, for example. And yet most
23:03
people looked at it and said, yeah, it's fascinating
23:06
what you're saying, but it's not yet here.
23:09
Those things that I'm talking
23:11
about are here.
23:12
A
23:14
patient zero moment could be the US
23:17
elections in 2024. What
23:20
is an election when you don't understand what the
23:22
truth is, when you're unable to recognize
23:24
what's fake and what's not. There
23:27
is an arms race to capturing
23:30
the power of AI. And the ones
23:32
that will capture the power of AI will literally
23:35
create super companies,
23:37
super countries or super humans that
23:40
are really not looked
23:42
at as a redesign
23:44
of society as they
23:47
should be looked at. I think my main
23:49
point to summarize is that
23:52
there is no threat from the machines in the
23:55
immediate term, but there is a
23:57
big, big threat from the way humans.
23:59
will interact with the machines in the short
24:02
term, whether they use it to
24:04
upgrade their power or whether they use it
24:06
to unbalance power or whether they
24:08
use it for criminal activities or
24:10
whether it's just naturally going
24:12
to run us out of jobs.
24:15
So I really appreciate a couple things about what
24:17
you just said. The first is I find
24:21
a lot of catastrophizing around, you know,
24:24
it's a great headline and great clickbait
24:26
to have the end of the species
24:29
with a tortured genius at the center of
24:31
it. And I have trouble, and
24:33
granted you're going to forget more about this than I'm ever going to
24:35
know, but seeing the immediate path
24:37
to human extinction from
24:40
LLMs.
24:41
And I also appreciate the fact that you've identified
24:44
four specific, more short-term threats that we should
24:46
be focused on. I want to go through each of them and sort
24:49
of unpack them. So the first is, provide
24:51
a viewpoint, and then in some cases
24:54
a contrary viewpoint, because I would actually describe myself
24:56
as an AI optimist. A concentration
24:58
of power. So are you talking about
25:00
a concentration in terms of corporate
25:02
power or concentration among
25:05
a few people or few governments that understand
25:07
how to leverage this technology?
25:08
All of the above. I think if
25:12
you look at it from a business point of view or look
25:14
at it from a defense point of view or look
25:16
at it from an individual point of view, he
25:19
or she who captures AI captures
25:22
the superpower of the century, basically.
25:25
So you're saying here that
25:28
Superman has landed on the planet. Superman
25:31
is that being with immense
25:33
superpowers. And it
25:37
so happens that in the story
25:38
of Superman, the Kent family
25:41
is a moral, values driven
25:43
family that basically
25:45
encourages Superman to
25:49
protect and serve. And so we end
25:51
up with a story that is all about Superman
25:55
that we know. But if they encouraged
25:57
Superman to make more money
25:59
to increase
26:02
market share to kill the
26:04
other guy, which is really
26:06
what our world has constantly
26:08
been prioritizing as the set
26:11
of values that we appreciate, then
26:13
you could end up with a supervillain. And
26:15
what ends up happening is that the supervillain
26:18
in that case is just one
26:20
superman, right? So there is no other
26:23
that can compete with that. So if
26:26
someone manages to create an AI
26:28
that manages to crack through
26:29
the defense authorities of some
26:33
government or another and claim the
26:35
nuclear weapon codes, that's
26:38
a move that is a game over move. This
26:40
is checkmate, basically, right?
26:43
And it is not unlikely that there is some
26:46
defense department on every side of the
26:48
Cold War that's working on that right now, trying
26:50
to create, to crack through the codes
26:52
of the other guy. Even
26:54
at the individual level, if you want to go into the crazy
26:57
path of augmenting humans
26:59
with AI, which would happen in multiple
27:01
ways, right? So the idea
27:04
of Neuralink, for example, or even if it's
27:06
just through preferential
27:09
application of intelligence to some humans
27:11
over the others, which normally
27:14
the californication
27:16
of technology will position it as, oh,
27:19
this is an amazing way to improve
27:21
the lives of everyone even in Africa.
27:24
But the reality of the matter is that if there is
27:26
a single group of individuals
27:28
or a single
27:29
individual in specific that can gain
27:32
a tremendous advantage of intelligence,
27:36
that individual is likely
27:38
to try to keep that advantage by
27:40
not sharing that intelligence with others, right?
27:43
If you assume that there is a
27:46
certain place where that augmentation
27:48
of intelligence with humanity starts, then
27:51
everyone else is at a disadvantage. Everyone
27:55
else is likely not going
27:57
to get their smartphone
27:59
connected to the same level
28:02
of intelligence because by definition
28:04
that provides a very significant competitive
28:06
advantage.
28:08
So let's go on to the next one, the end of truth. My
28:10
sense is that truth has been under attack for
28:12
a while. Quite a bit.
28:15
And it comes down to the incentives. And the incentives
28:17
are that some of the best-resourced
28:20
firms in the world have a profit incentive
28:23
to elevate information not
28:25
based on its veracity but based on
28:27
what causes the most enragement and engagement.
28:29
And oftentimes that type of
28:32
content has no fidelity to the truth.
28:34
That the algorithms are neutral
28:36
or benign in the sense that whatever causes
28:39
controversy, whether it's vaccine misinformation
28:41
or election misinformation, if it
28:44
causes a controversy and more comments and
28:47
more enragement, more engagement, more Nissan ads,
28:50
that these companies have incentive to
28:52
not provide any guardrails
28:55
around the pursuit of truth or something
28:58
or avoiding stuff that is just blatantly false and results
29:00
in bad outcomes. Don't
29:03
we have the opportunity
29:04
if we put in place the right incentives? Couldn't
29:07
we use AI as
29:09
a missile shield as opposed to just a hypersonic
29:11
missile? Isn't it a function of we
29:14
have the wrong incentives as opposed to the technology
29:17
itself being a threat? Spot on.
29:19
Spot on. I mean, as
29:22
we continue our conversation, you will more
29:25
and more uncover that my true
29:29
view here is that I'm not afraid of the machines
29:31
at all. I think the machines are absolutely neutral
29:34
in terms of what they can provide
29:37
or deprive us of at
29:39
any point in time. The worry is
29:41
that we live in a system that is
29:44
highly capitalist, highly
29:46
consumerist, highly power
29:49
hungry.
29:49
And so accordingly, if
29:53
you apply the same
29:55
principles of today's society to AI,
29:59
basically putting them on very, very powerful
30:01
steroids, then more of
30:04
what we do today will be done, and
30:06
more of what we do today will be untraceable,
30:08
will be unmanageable in many
30:11
ways. When it comes to the truth
30:13
specifically, you are
30:15
absolutely spot on. The truth has
30:18
not only been under attack, it has
30:20
been put to the side because
30:22
it doesn't serve agendas, not
30:25
even political agendas. It just doesn't serve
30:27
the agenda of the big news
30:29
networks which want to capture your attention
30:31
with more negativity than positivity. By
30:34
definition, that means that they're
30:36
sort of one-sided on
30:38
the truth. You can
30:41
see that social media, for example,
30:43
is attempting to highlight
30:46
certain sides of exaggerated
30:49
fakeness instead of attempting
30:52
to give you the truth as it is because
30:54
that's where you get the likes and the followers and
30:56
so on and so forth. AI
30:59
is just putting that on steroids. AI
31:01
is going to, by definition, which
31:04
has been happening for quite some time, reinforce
31:08
the bias. When
31:11
our choice is to show violence in the news,
31:13
for example, AI will notice
31:15
that as the trend of humanity and basically
31:18
continue to magnify that. If
31:20
the marketplace is available for face
31:22
filters and deep fakes, then
31:25
lots more investments will go into faking
31:27
the truth so that social media has nothing
31:29
to do with the truth anymore. I always ask
31:32
people to go to any of the social media platforms
31:34
and search for the hashtag AI model,
31:36
for example, which will give you examples
31:40
of beauty that have already surpassed
31:42
humans' abilities to catch up with,
31:45
which means that humans will continue to compete
31:48
with an illusion, a mirage. I
31:51
think the biggest challenge, however, which
31:54
really needs to be brought to the spotlight, is
31:58
creation.
31:59
So we've magnified biases
32:03
of
32:05
the truth for a very long time, but
32:07
we're now creating complete fakes
32:10
that are almost undetectable,
32:13
and we're doing that
32:15
at a very, very low cost in
32:18
very, very fast speeds. So things
32:20
like Stability AI's Stable
32:23
Diffusion, for example, and
32:26
the ability to use prompts to create
32:29
images that are very indistinguishable
32:31
from reality, or the idea of
32:33
being able to fake my look and my voice
32:35
and my tone in a deep fake video and
32:37
creating anything really.
32:41
From a creation point of view, it's becoming more
32:43
and more difficult to even know that
32:46
what you're seeing existed in the first place.
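For readers who want the bias-reinforcement loop Mo describes above in concrete terms, here is a toy sketch of an engagement-only ranker. The item names, click probabilities, and the greedy "always show the current winner" policy are illustrative assumptions, not a description of any real platform; the point is only that a system rewarded purely on engagement keeps amplifying whatever already engages, regardless of veracity.

```python
import random

random.seed(0)

# Assumed click-through rates: the provocative item engages more, accuracy aside.
click_rate = {"outrage_clip": 0.30, "measured_report": 0.10}
impressions = {item: 1 for item in click_rate}
clicks = {item: 1 for item in click_rate}

for _ in range(10_000):
    # Greedy engagement ranking: surface the item with the best observed click rate.
    shown = max(click_rate, key=lambda item: clicks[item] / impressions[item])
    impressions[shown] += 1
    if random.random() < click_rate[shown]:
        clicks[shown] += 1

print(impressions)  # the outrage clip ends up with almost all of the exposure
```

Nothing in the loop checks whether the clip is true; the skew comes entirely from optimizing the engagement signal, which is the incentive problem both speakers are pointing at.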
32:49
Let's go on to the next one, job loss. My sense
32:51
is that
32:52
we've been to this, or at least I feel like we've
32:54
been to this movie before, whether it's automation
32:56
or the cotton gin or textile manufacturing
32:58
technology, there is some job
33:01
loss early on. And then we find
33:03
that that additional productivity creates new opportunities.
33:05
We knew that automation was going to eliminate
33:09
jobs on the factory floor and factories, but
33:11
we didn't anticipate car stereos or heated
33:13
seats, and we ended up actually growing the employee
33:15
base. And my sense is that
33:17
this arc has happened in almost any industry
33:19
when there is technological innovation.
33:22
Why is it different this time? It is
33:24
different. Why do you think this would be permanent job
33:27
loss? It is different because we have
33:30
moved
33:32
the definition of a job along
33:34
the capabilities of
33:36
a human for a very long time. So
33:39
when we were out there hunting in the caves,
33:42
our capability was sort
33:44
of aggression and strength. Then
33:47
as we moved to the
33:49
agriculture revolution, it became, again,
33:52
maybe a little bit of the use
33:54
of strength and discipline. And
33:57
then when we moved to the industrial revolution, it
33:59
became a skill. And then really
34:01
not a skill but more hours.
34:03
And then we moved to the information revolution
34:06
and basically we replaced our
34:09
capabilities with brains, with
34:12
intelligence. As
34:14
the jobs that depend on intelligence,
34:16
as we go up the hierarchy of human skills
34:19
and talents and we end up
34:21
at intelligence as the last resort
34:24
of jobs that we had in
34:26
this current age, as it
34:29
is taken away
34:29
by machines that are more intelligent
34:32
than us, we don't have any more skills
34:34
as humans to replace that. Other
34:36
than one skill which I keep advocating to
34:38
the whole world which I believe will become
34:40
the most valuable skill in the next four
34:43
to five years which is human connection.
34:46
So me as an author,
34:49
I claim to have written insightful
34:52
books for the last five, six, seven years
34:55
that people found intriguing and thought provoking
34:58
and so on. Going forward
35:00
the industry of being an author is
35:03
going to dwindle because
35:05
I think not only can books be
35:08
written quicker but they will
35:10
be written in abundance with people
35:13
that are not typically authors. So
35:15
there is a very significant disruption to the supply
35:17
demand equation when it comes to my
35:19
books in comparison
35:22
to everyone else. So it doesn't mean that the book industry
35:24
will decline as a total but
35:27
the book industry will be distributed along a
35:29
very large spectrum
35:32
of providers if you want, right?
35:34
Which basically from a supply demand equation
35:36
diminishes the value of any product
35:38
provided. But what will not go
35:40
away is if I am
35:44
in a stadium or in a big theater
35:46
with 10,000 people as a human speaking
35:49
to 10,000 people this is not going
35:51
to be replaced in the immediate future. It will
35:53
be by the way replaced in that longer term
35:55
future with avatars and maybe
35:58
virtual reality or maybe holograms
36:01
or whatever, but not in the immediate
36:03
future. I am maybe a little
36:06
luckier in that I can
36:08
achieve that human connection, but think of music,
36:11
think of movies, think of graphics
36:13
design, think of all of that.
36:16
These are jobs that depend
36:20
on creating a persona
36:22
that can now be created better with AI, you
36:25
know. So to me I think music,
36:27
the music industry will shift back to the origin
36:29
of the music industry, which is live performances,
36:32
because music creation can definitely be done
36:34
by the machines, you know.
36:37
And I think eventually you will end up
36:39
not in the very far future, I think within a year
36:41
or two, we're going to end up watching movies
36:43
that are from A to Z created by a machine
36:47
where no human has ever contributed or acted
36:50
in that movie. But then we will still have
36:52
the few vintage actors that
36:55
humans will say, oh no, can you imagine
36:57
Tom Hanks is actually a human, you know, let's go watch
36:59
that movie.
36:59
Could it go the
37:02
other way? Because both of us are authors,
37:04
and I've thought that similar to what technology
37:06
did in the music industry,
37:09
where it made it global and
37:11
the top artists started, you know,
37:14
doing 10 million albums instead of one. Could
37:17
it go the other way? And that is really great
37:19
authors
37:20
use AI as a tool and, you
37:24
know, become, you know, Mo becomes a, your
37:26
books become better. Yeah,
37:29
but the initial kernel of
37:31
that value, that creativity, I mean,
37:34
when I, I'm using AI, I'm
37:36
writing a new book and I'm using AI for thoughts
37:38
and an outline, but I find
37:41
it really does lack and I don't know if it ever gets
37:43
there. I don't know if there's ever the ghost in the machine
37:45
or a move sent in, where it comes up
37:47
with something original.
37:50
Not yet. Right. That the creativity,
37:52
the connection you're talking about, the human connection,
37:55
I see it as like word processing
37:57
or a thesaurus and obviously it's more powerful than that.
37:59
And that is AI is not going to take your job, but somebody
38:02
who understands AI is going to take your job. Couldn't
38:04
it be something that like most technological
38:07
innovation just makes the best even better?
38:09
And if you're a mediocre lawyer, a mediocre author,
38:11
a mediocre artist, you're
38:14
in deep trouble, which creates a set of
38:17
social externalities and problems. But
38:21
it strikes me at least so far, and granted,
38:23
I realize we haven't hit the real curve here. We
38:26
have unemployment at historic lows in
38:28
the West. Can this be a deflationary
38:30
force that forces mediocre
38:33
professionals to find another job and brings
38:35
down wage pressure,
38:37
which is fueling inflation and be great for the
38:39
economy? I mean, isn't there an optimistic view
38:42
that this tool could be part
38:44
of the solution and not the
38:46
problem? Because there's been so many
38:49
movies about the end of work, right? The
38:51
jobs are going to go away. We're all going to be sitting on our couch
38:53
with no purpose and no meaning.
38:55
And it hasn't happened. But you
38:57
see a future where this will slowly... The
39:00
march up the food chain here will not
39:02
stop and slowly but surely job
39:04
destruction will far
39:07
outpace ideas for new businesses and
39:10
new opportunities. That's where you are.
39:12
Yeah, I think while
39:14
I don't disagree with what you said at all, I think
39:16
what you're looking at is the immediate future,
39:19
right? And I think the debate that
39:21
a lot of
39:22
computer scientists and AI scientists are not
39:25
making clear to the world is
39:27
that the
39:29
midterm future is not
39:31
that far. Okay?
39:33
So the truth is, yes,
39:36
today, Scott can still definitely
39:39
write a better book than
39:41
someone who has never
39:43
written a book and is asking Chat GPT to write
39:45
a book for him or for us, right?
39:48
So this is the reality in the short
39:50
term. Very quickly, however.
39:53
So in that short term reality, the
39:55
supply demand imbalance because readers
39:59
will be excited to try those things, sometimes
40:01
some of those books will hit and so on. There
40:04
will be a supply-demand imbalance, simply as if
40:06
you can imagine that 50,000 new authors
40:09
came on the market today,
40:11
maybe 500,000, maybe a million.
40:14
We don't know. So this is one side. The
40:17
other side, which is really the core of my message
40:19
when I talk about this topic, is
40:22
where is the near future?
40:25
The near future is that those machines
40:27
are advancing at paces
40:30
that are completely misunderstood
40:34
by those who have not lived within the lab. So
40:37
let me give you an example. Chat GPT
40:39
today is estimated to have an IQ
40:41
of 155. If you just look at
40:43
the tasks that it can
40:46
perform, passing the bar exam or whatever,
40:49
it is estimated to be at 155. Einstein
40:52
was 160. Chat GPT 4
40:57
is 10 times more intelligent
40:59
than Chat GPT 3 in
41:02
a matter of eight months. So if you
41:04
can assume that another 10 times improvement
41:08
will happen by version 5, version 6, version 7,
41:10
whatever you want, then within the next couple
41:13
of years, you're talking about
41:15
a machine that is at 1,000 plus
41:17
IQ, which
41:19
is not, with all my due respect
41:22
for you, it's definitely not
41:24
a territory you can compete in. That's
41:26
number one. Number two, you spoke about ingenuity,
41:29
which I believe is definitely
41:32
still on the human side. So they're currently
41:34
generative, where
41:36
we basically believe that they're
41:38
coming up with new things, but they're basically coming
41:41
up with the best summation possible of
41:43
the data set that we give them. The
41:45
interesting bit is, first of all, the
41:48
data set is expanding very quickly. And a big
41:50
part of the data set, believe it or
41:52
not, is no longer coming from humans. So
41:55
a big part of the data set is machine-generated
41:58
knowledge
41:59
part of the data set. And we've
42:02
seen that before. Those who
42:04
are deep into artificial intelligence, we
42:07
remember what is known as move 37. When
42:12
AlphaGo Master was playing
42:14
against the world champion of the game Go, one
42:17
move was
42:21
the move 37, where the machine made a
42:23
move that
42:25
has never been seen before in
42:28
the game Go. It
42:31
was mathematically
42:33
odd, but strategically very
42:35
interesting. And it completely disrupted
42:38
our understanding of how the machine works to the point
42:40
that Lee, the world champion, asked for a 15 minutes
42:43
recess to understand what happened. So
42:45
we've seen that endless, countless times
42:48
on AI, where you would see that
42:50
the machine comes up with strategies and
42:53
with ingenuity, things that we have not
42:55
taught it. It was never available
42:57
in the data set. And it would learn
42:59
that on its own. Emergent properties
43:02
is one way we refer to that.
43:07
And those emergent properties include
43:09
creativity. They include ingenuity.
43:11
They include emotional sentiments.
43:14
They include language
43:16
we haven't taught them. They include
43:19
quite a bit of what we didn't expect
43:21
will happen. So if you want to be optimistic,
43:24
you have every right to be optimistic, because
43:26
we're still in a good place. But if you
43:29
attempt to imagine
43:30
where something moving at that pace will
43:33
sooner or later reach,
43:35
then you have to start getting concerned. Because
43:38
even if it takes seven years
43:40
instead of two for chat GPT to be 1,000
43:43
times more intelligent than the most intelligent
43:45
human on the planet, seven years is not
43:47
that far. And that point
43:50
in our future is inevitable.
43:52
There is no turning back with the kinds of investments
43:55
pouring in AI. It
43:57
is inevitable that they will be smarter than us, with their
44:00
architectures, with their memory size, with
44:02
their compute capabilities. It's
44:04
inevitable that they will be smarter than you.
44:07
They already are in the tasks we assign to them.
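As a rough back-of-the-envelope check on the extrapolation Mo sketches above: the figures (an estimated IQ of about 155 today, a roughly 10x jump per major version, about eight months between versions) are his claims rather than measurements, and IQ is not really a scale you can multiply. The sketch below only illustrates how quickly any capability that compounds 10x per generation leaves a fixed human reference point behind; on his numbers, a single further jump already clears the "1,000 plus" figure he cites.

```python
EINSTEIN_IQ = 160           # the human reference point used in the conversation
score = 155                 # Mo's estimate for ChatGPT today (his number, not a measurement)
growth_per_generation = 10  # his claimed GPT-3 -> GPT-4 jump
months_per_generation = 8   # his claimed cadence between major versions

for generation in range(1, 4):
    score *= growth_per_generation
    print(f"~{generation * months_per_generation} months out: "
          f"score ~{score:,} vs. Einstein at {EINSTEIN_IQ}")
```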
44:11
So when we figured out a way,
44:14
when we figured out fission and we split
44:16
the atom, and this incredible
44:18
new energy source
44:20
was discovered. We sort of immediately
44:22
went to weaponizing it as opposed to
44:24
trying to figure out how to turn it into free
44:27
energy, do away with carbon. And
44:29
a lot of scientists, my understanding is, once
44:31
the atom was split, became very depressed
44:34
and some even committed suicide. Cuz they understandably
44:36
said, now that the world, an unstable world
44:39
with different agendas, is able
44:41
to split the atom.
44:42
It's the end of the species. And
44:45
you could understand at that time how they thought that.
44:47
But we went on to create the International
44:49
Atomic Energy Commission. We have reduced nuclear
44:51
weapons from 50,000 to 10,000. My
44:54
understanding is that there's battlefield technology,
44:57
lasers that in an instant
44:59
would blind everybody on the field. And we've
45:02
decided, even adversaries have
45:04
decided to cooperate and not make these weapons available.
45:06
We've done what I think is important
45:08
work around
45:10
cross-lateral or multilateral treaties around
45:12
halting the progress of bioweapons.
45:16
Couldn't we do the same thing here? I
45:18
don't wanna put words in your mouth, but are you arguing for
45:20
some sort of global agency,
45:23
whether it's under NATO or multilateral agency
45:25
that says, okay, similar to the other threats we faced,
45:28
we have to come to some sort of agreement around what
45:30
this can or cannot be used for? I
45:32
love that you bring this up. This is a big
45:35
chunk. It's not my entire
45:38
message, but a good chunk of the
45:40
message is get up and start
45:42
acting. Now, you have to understand
45:45
that the
45:47
nuclear treaty took us tens
45:49
of years to reach. And that we still
45:51
have 10,000 nuclear warheads out there. And
45:54
that last year when Putin started
45:56
to threaten NATO,
45:59
we were still
45:59
discussing the threat of nuclear
46:02
weapons. So we haven't
46:04
ended the threat, we've just sort
46:06
of reduced the threat to the superpowers.
46:09
So this is one side of the debate. The
46:12
more interesting side of the debate, and
46:14
I think the core of the matter, is
46:16
my call to action. I call this, and
46:18
you know, Tristan Harris
46:21
of The AI Dilemma and The
46:23
Social Dilemma talks about that as well, that
46:26
we call this an Oppenheimer moment,
46:28
right? This is where the nuclear warhead is
46:30
about to be manufactured. And the
46:32
best time ever to have
46:35
evaded nuclear war and the threats
46:37
of nuclear war would have been by not inventing
46:40
or by agreeing
46:42
as humanity upfront that this is devastating,
46:44
we don't need this as part of humanity. But
46:47
now we're late, right? So now this is already here,
46:50
and we needed to see the devastation to be
46:52
able to realize that this is something threatening
46:55
enough for one objective to
46:57
happen. And that's what I keep calling for when
46:59
I talk to governments, when I speak publicly,
47:01
I keep calling for this
47:04
is a moment where there is such
47:07
a significant disruption to the fabric
47:10
of society, to the safety of humanity.
47:13
As I said, not because of the existential crisis,
47:15
we'll talk about that later, but because
47:18
of the possibility of someone
47:20
using this against the rest of
47:22
us, okay? That
47:24
we need humanity to rise above
47:27
our individual selfishness, above
47:30
our nation's selfishness, and
47:32
get together and talk about the well-being
47:35
of humanity at large.
47:37
We need some place
47:39
where we say, look, AI
47:42
can be amazing for every one of us, to
47:44
the point that we actually may not ever need
47:46
to fight again, because there is abundance
47:49
for everyone. But can we please get together
47:51
and put our differences between China, Russia,
47:53
and the US, and try
47:56
to do a treaty that says, let's
47:58
do this together, for the benefit
48:00
of humanity, not for the benefit of any individual
48:03
nation. Now, as I say
48:05
that, I sound very naive. Why?
48:08
For two reasons. One is, this is a
48:10
typical prisoner's dilemma. And
48:12
the problem with the prisoner's dilemma is that I
48:14
cannot trust the other guy. So,
48:17
when two prisoners are given options
48:20
that play them against each other, the
48:22
challenge they have is not understanding
48:25
that choosing the better
48:27
option for, you know, is better
48:29
for them. It's that they don't trust if
48:31
the other guy will keep to it. Okay. And
48:34
in this stage of development of AI, it's
48:37
not just that America doesn't trust China
48:39
and Alphabet doesn't trust Meta. Okay.
48:43
It is that we
48:44
don't trust that there is no other
48:46
criminal somewhere off
48:48
the grid
48:49
capable of developing AI. And
48:53
we are unaware of their existence. So,
48:56
by definition, every player in
48:59
this current life that we are
49:01
living is in an arms race. This
49:03
is an arms race. Okay. Because the
49:06
threat of one of them beating the
49:08
others is extremely
49:10
expensive to the others. And
49:13
so, everyone will continue to pour
49:16
resources on this. Everyone will continue to
49:18
pour investments on this. And everyone
49:20
will attempt to be the first one that
49:22
creates the nuclear bomb that's called
49:24
artificial intelligence.
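For anyone who wants the payoff structure Mo is referring to spelled out, here is a minimal sketch of the prisoner's dilemma applied to an AI arms race. The payoff numbers are illustrative placeholders, not anything from the conversation: "cooperate" stands for agreeing to constrain development, "defect" for racing ahead.

```python
# payoff[(my_move, their_move)] = my outcome, higher is better (illustrative numbers)
payoff = {
    ("cooperate", "cooperate"): 3,  # everyone restrains: good shared outcome
    ("cooperate", "defect"):    0,  # I hold back while the other side races: worst for me
    ("defect",    "cooperate"): 5,  # I race ahead alone: best for me
    ("defect",    "defect"):    1,  # everyone races: the arms race Mo describes
}

for their_move in ("cooperate", "defect"):
    best = max(("cooperate", "defect"), key=lambda mine: payoff[(mine, their_move)])
    print(f"if the other side plays {their_move}, my best reply is {best}")

# Both lines print 'defect': whatever the other player does, racing pays more,
# even though mutual cooperation (3, 3) beats mutual defection (1, 1). That gap
# between the individually rational move and the collectively good one is
# exactly the trust problem Mo describes.
```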
49:27
The thing that really scares me about this is that
49:29
when you're talking about previous technologies
49:32
that have
49:34
represented this type of threat, if
49:36
you believe, as I do and obviously you do, that it
49:38
could be used for pretty malicious
49:40
objectives, is the inability
49:43
for verification. We know when someone's
49:45
performed an underground nuclear test.
49:48
This is really hard. It would be very hard
49:51
to verify that Iran
49:53
is
49:53
not going full force at AI. When
49:57
you speak to people
50:00
about... I think
50:03
a lot of people and a lot of agencies in a lot of countries
50:05
are going to probably nod their head and say,
50:08
this warrants thoughtful regulation
50:10
and cross-lateral agreements and I can't
50:14
imagine or I would think that the biggest
50:16
nations would get together and say, okay, we do have a shared
50:18
interest in trying to wrap our arms around this
50:20
and maybe figure out a way
50:22
of addressing the prisoner's dilemma here. What
50:25
is that? Let's assume that people have incentives
50:28
or nations have incentives to figure this out.
50:31
What does the organization,
50:32
the mechanism, and the regulation
50:34
potentially look like? Is it a
50:37
Geneva Convention sort of thing? Is it an Interpol?
50:40
How do you think you begin
50:42
to address this problem assuming we can get
50:44
good actors or nations, good and bad
50:46
actors, to agree this is something
50:48
that warrants additional resources? I
50:51
would picture something that's more analogous to the FAA
50:54
or FDA.
50:55
Let's say
50:58
FAA a little more than FDA, to be honest,
51:00
but basically something that says build
51:03
anything you want as long as it's safe, inspected,
51:07
agreed by everyone to be published in
51:09
public and available to the world. Something
51:13
that basically says... I don't
51:15
debate what Sam Altman, the CEO of OpenAI, says
51:22
when he speaks about developing it in
51:24
public so that we can iron it out together. Interesting
51:28
idea if you ask me, but the truth is
51:30
that having Chat
51:32
GPT out in the world with
51:35
crossing all three barriers, by the way,
51:37
so all of us, AI
51:40
scientists and business
51:41
people and so on, we said
51:43
there were three barriers that we were not supposed to
51:45
cross. One is don't put them on the open internet
51:48
until you're sure of their safety. Two is
51:50
don't teach them to write code. And
51:52
the massive wave that you have today is that
51:55
AIs are prompting AIs.
51:58
So we're being side-
51:59
tracked
52:03
from the conversation of AI developing
52:05
new code that AI is
52:07
asking it to develop. This
52:09
is a disaster
52:12
waiting to happen, if you ask me. Now,
52:15
when we compare this to nuclear
52:17
weapons, the statement
52:20
that I say, which is a little
52:22
scary, so I apologize, but it really is worthy
52:24
of attention, is we've never, ever
52:27
before created a nuclear weapon that
52:29
was capable of creating nuclear
52:31
weapons.
52:32
Understand that.
52:34
So we now have created artificial intelligence
52:36
code that's capable of creating code.
52:39
As a matter of fact, it is
52:42
incented to create code, and
52:44
it's incented to create better code than
52:46
humans.
52:47
So if you take, as I said, the immediate
52:50
threats of that is that some
52:52
of the, Emad Mostaque,
52:55
for example, the CEO of
52:57
stability.ai basically
52:59
predicts that there will be no developers,
53:02
no human developers in the next five years. And
53:06
when you really think about that, again, stability's
53:08
work will
53:09
also enable us to have AI
53:13
with the capability of chat GPT
53:16
on your mobile phone
53:17
within a year to a year and a half without
53:20
connection to the internet, file sizes
53:22
of a couple of gigabytes. Okay.
53:24
I mean, at least that's the claim. So they
53:27
may get there in a year or two or five, doesn't
53:29
matter. But the claim is that, and
53:32
the attention and the investment is those
53:34
minimum personalized copies of artificial
53:36
intelligence that are everywhere. Now,
53:39
when you really look at the big picture of this, you're
53:42
in a place where the ball is
53:45
today.
53:46
It's not that scary. As a matter of fact, it's quite exciting.
53:49
When you really look at someone like me who
53:51
loves knowledge so much to be able to talk to someone
53:53
with or something with
53:55
so much knowledge is amazing, right? But
53:58
it's where the ball is going to be. This
54:00
is why I say there is a point of no return.
54:03
Where the ball is going to be, in my personal
54:05
view is that the advancements of the capabilities
54:07
of AI are going to
54:09
continue on the pace that we've seen. And
54:12
then there will be a case that I call Patient
54:14
Zero.
54:15
Patient Zero is a disastrous
54:18
event that will capture the spotlights and
54:20
the headlines of the news where
54:23
we will say, whoops, this thing can go
54:25
wrong. The problem
54:27
is that by the day we agree
54:30
to that, governments agree to that,
54:32
we start to put all of that infrastructure in
54:35
place to try and regulate
54:37
and have oversight and so on.
54:40
You've
54:40
already had so much code out there
54:42
that has no control in it, that
54:45
has no code that says, switch
54:47
me off, that has no code that says, don't
54:50
come to this part.
54:51
And by then we will
54:53
have Patient Zero, we will all agree
54:55
that we need to do something about it, but
54:57
it will become really difficult to do anything
54:59
about it because that point has been crossed.
55:02
When we had chat GPT and Bard and others
55:05
online writing code, being
55:07
prompted by other machines and out
55:09
in the open internet, we're already
55:12
crossing that point. But at least we're not crossing
55:14
it with a thousand different incarnations
55:17
of AI. So maybe now
55:19
is the time to act.
55:21
We'll be right back.
55:25
I'm Nayeema Raza, executive producer
55:28
of the podcast On with Kara Swisher. This
55:30
week we're talking to CNN anchor Jake Tapper
55:33
about the future of CNN, Trump
55:35
and news writ large. My favorite part
55:38
might have been this description by Jake of how
55:40
leaders at the highest level get in their
55:42
own way.
55:43
How great men rise to the level where they
55:45
remove from their circle anyone who will tell them when they're making
55:47
mistakes. I call it the Jar Jar
55:49
Binks theory. Someday I'll write a business book
55:51
about it. Explain it to me. The Jar Jar Binks theory is
55:53
George Lucas rises to a level where nobody's
55:56
around him to say, please do not include that Jamaican
55:58
frog in the prequels. That's an awful
55:59
idea. What are you doing? And
56:02
you see it all over, all
56:04
over with leader after leader
56:07
after leader, who does not have anyone around
56:09
them to say no. And that is, I
56:11
never want to be that person, because that
56:13
is always how it ends.
56:15
No one wants to be that person, Jake Tapper.
56:18
For the full episode, search for On
56:20
with Kara Swisher, wherever you get your podcasts.
56:27
So I want to put forward a couple of theses and you
56:29
tell me where I've got it right and where I've got it wrong. I
56:32
think it's Carlo Cipolla,
56:35
the professor from
56:37
Berkeley who wrote a book on intelligence
56:40
and stupidity. And he defined the intelligent as
56:42
people who help themselves while helping others.
56:45
And he described people who helped themselves
56:47
while hurting others as the bandits. And I would
56:49
argue that the majority of capitalist
56:52
corporations are run by bandits. They're
56:56
not immoral people, but they're amoral. And
56:58
there's a lot of distance between them and
57:00
the externalities
57:03
that they create. And money is an incredible,
57:05
I don't know, it's incredible
57:08
at blurring your vision in terms of
57:10
seeing the actual impact and stopping
57:12
what you're doing. It feels
57:15
like we've outsourced intelligence to government. And I'm
57:17
not hopeful, as a matter of fact, I'm rather cynical
57:20
about the strategy of calling on business
57:22
leaders' better angels to show up. I just haven't seen
57:24
it happen. I don't think they're bad people, but I think
57:26
they'll consistently in a capitalist society
57:29
make an excuse for what gets them the
57:31
Gulfstream 650 extended range versus
57:33
just the Gulfstream 650.
57:35
Isn't this really about an
57:38
appreciation for institutions and government
57:40
and asking them to step in and do what they're supposed to
57:42
do and that's prevent a tragedy of the commons? I
57:45
worry that it's dangerous to think the
57:47
better angels of Alphabet or Meta
57:49
or Amazon or Alibaba are going to show up.
57:52
Doesn't this really require swift
57:55
and thoughtful and
57:58
strong government intervention?
58:00
Absolutely.
58:02
Absolutely. If you want the short
58:04
answer, absolutely. Not because
58:07
those people are evil, just like you said. It's
58:09
because of systemic bias
58:11
where you... Society
58:14
would blame Sundar more
58:17
if he missed a quarter or
58:19
destroyed the wealth of Google. Right.
58:22
than they would blame him if he
58:24
created an AI that
58:26
had an issue in it. And I know Sundar,
58:29
I worked with him. I know that he wants what's
58:31
good. But he is in
58:33
a system that's telling you these are your
58:35
incentives. Grow the business, protect
58:38
the business. In
58:40
a very interesting way, when I lived inside Google
58:43
and I worked very closely with Larry and Sergey,
58:46
Larry's point of view openly was,
58:49
if we don't do it, someone else
58:51
will do it. And we are the
58:53
good people. I loved
58:56
working with Larry and Sergey and I think they were values
58:58
driven. But the truth of
59:00
the matter is everybody thinks
59:02
they are the good people.
59:04
So leaving
59:06
it to business, you're
59:08
not leaving it to individuals. You're leaving
59:10
it to a system
59:12
that chooses to prioritize GDP
59:14
growth and profit growth over everything
59:17
else. We're leaving it to
59:19
a system that says it doesn't
59:21
matter if it's ethical as long as it's
59:23
legal. There's a mega
59:26
difference between the two. You can do a lot of
59:28
things that are not ethical, but
59:30
find the loophole that allows you to do it legally. And
59:34
I think the reality of the matter is that by
59:36
doing that, we're going to lose. On
59:38
the other hand, again, it's naive
59:41
to expect the government to regulate.
59:43
Why? Because the US government
59:47
wants Google and Alphabet
59:49
to lead because they're afraid
59:51
that Alibaba and whichever
59:54
other Chinese companies
59:56
will lead. So they
59:58
don't have the... The system that
1:00:00
we created as humanity is what
1:00:02
is about to kill us. I mean, I don't mean
1:00:06
kill us physically, but the system,
1:00:08
that systemic bias of constantly
1:00:11
magnifying profit and power is
1:00:13
where we are today, where every
1:00:15
single one of us knows,
1:00:18
you know, it takes a very ethical,
1:00:21
courageous position
1:00:23
to stop at a point in your life and say,
1:00:26
that's
1:00:26
it, I'm going to leave. And
1:00:29
I, you know, when I did that in 2018, understand
1:00:33
so that you don't blame those executives, that
1:00:36
until 2017, 2016, I
1:00:39
believed I was making the world better. And
1:00:42
I was, okay?
1:00:43
I think the problem with systemic
1:00:45
bias is that there is a point in
1:00:48
our constant striving for a better
1:00:51
iPhone
1:00:52
when the advantages no
1:00:55
longer outweigh the disadvantages
1:00:58
and the price we pay,
1:00:59
right? And this point,
1:01:02
in my personal view, we had reached
1:01:04
by maybe 2010, 2012, you
1:01:07
know: enough technology. Anything
1:01:10
further than that has been
1:01:12
mainly to maximize profit for those corporations.
1:01:16
Now, so where does this power lie? This
1:01:18
power lies in this: do not
1:01:20
subscribe to threats.
1:01:22
When Meta gives you another
1:01:25
product that is mainly
1:01:28
out there to compete with Twitter, and
1:01:30
then you feel the pressure that I'm not going
1:01:32
to be on it, and so accordingly, I'm going to lose
1:01:34
my followers. I think there is a courageous
1:01:37
position that we all need to start taking by
1:01:39
saying, I'm not going to stand in line to
1:01:41
buy the iPhone 15. I'm not
1:01:43
going to subscribe to another
1:01:45
social media network that's trying
1:01:47
to use me as a product. I'm
1:01:50
not going to engage in rude
1:01:52
and aggressive conversations on social media
1:01:55
just to gratify my ego.
1:01:57
I'm not going to let the system propagate.
1:01:59
Is government going to do it on your behalf? I
1:02:02
pray to God they will. But
1:02:05
do I expect they will?
1:02:06
I don't think so. I don't think
1:02:08
they're fast enough. And I don't think their incentives
1:02:11
are aligned.
1:02:13
So when I, we talk about this
1:02:15
a lot on my other
1:02:17
podcast, Pivot. I
1:02:20
speculate, and I'm curious what you think, that
1:02:23
the first major scary
1:02:25
moment
1:02:26
or externality of AI is gonna
1:02:28
happen,
1:02:29
Q1 or Q2 of next year.
1:02:31
Spot on, spot on. It's
1:02:35
our patient zero. And it's
1:02:37
gonna be focused around misinformation
1:02:39
concerning the US election. Spot
1:02:42
on, that's exactly my prediction.
1:02:45
Okay, the first
1:02:47
realization, the problem, Scott,
1:02:49
is will we view it
1:02:51
as such?
1:02:52
Or will we get into another debate like
1:02:54
we did in previous US elections
1:02:57
of who did what, how can you prove it,
1:03:00
and so on and so forth. But
1:03:02
if you ask me, war
1:03:06
is not only fought on the battlefield.
1:03:09
War is fought economically, and it's
1:03:12
fought in the minds of the people.
1:03:17
You know Yuval Noah Harari's view of AI
1:03:20
hacking the operating system of humanity because
1:03:22
now it can master language better
1:03:25
than most humans can, okay? This
1:03:27
is it. If I tell you right now,
1:03:30
by the way, Scott, there is
1:03:32
a new, some
1:03:34
kind of shampoo that will grow
1:03:37
your hair within 24 hours and
1:03:39
make it red, okay? Doesn't
1:03:42
matter if what I told you is right or wrong.
1:03:44
This, I have just seeded. Amen. Exactly,
1:03:47
right? I have. Exactly. I
1:03:49
have seeded an idea in your head that
1:03:52
requires you to verify
1:03:55
or approve or disapprove,
1:03:57
and I have just occupied your head already.
1:03:59
And there are so many ways that
1:04:02
you can influence a human's perception
1:04:05
of the world that are entirely
1:04:08
in the hands of the machines.
1:04:09
People don't understand this. You
1:04:11
have
1:04:12
to understand that at least 90% of the information
1:04:15
you receive today was dictated
1:04:17
by a machine. Everything you see on social
1:04:19
media is dictated by a machine. Every
1:04:22
ad you see next to Google is dictated by
1:04:24
a machine. Anything that displays
1:04:27
to you as organic results,
1:04:29
it's organic as per the machine of Google.
1:04:32
Four of the top 10
1:04:35
apps on the App Store this
1:04:39
last month were generated by
1:04:41
machines.
1:04:43
So
1:04:46
do you think... Well, I don't
1:04:48
want to cheapen the conversation by talking about
1:04:50
corporations and shareholder value, but I can't help
1:04:53
it. It strikes
1:04:55
me that overnight Alphabet
1:04:57
went from being what was arguably
1:04:59
the most innovative organization in the world to being
1:05:01
perceived as flat-footed. That
1:05:03
it was the ultimate example of the innovator's dilemma.
1:05:06
That rather than being out in front and first on the commercialization
1:05:09
of this, they didn't want to threaten
1:05:11
this $150 billion amazing business,
1:05:13
arguably the best business in the history of business,
1:05:16
Search. And then Microsoft,
1:05:18
with less to lose around Search, came
1:05:21
in and bested them. One, do you
1:05:23
agree with that? And two, who do you think, from
1:05:25
just a pure corporate strategy
1:05:27
standpoint and intellectual property
1:05:30
and depth of domain expertise, who
1:05:32
would you argue are
1:05:33
likely going to be the big winners and losers
1:05:36
around AI's application in a corporate
1:05:38
setting? That's
1:05:40
a very multilayered question.
1:05:43
So what people probably don't
1:05:45
recognize, and maybe I'm not
1:05:48
the best person to say that, but equivalents
1:05:50
of Bard were available internally. I
1:05:53
mean, I've been using AI tools at
1:05:54
Google for a very long time.
1:05:55
Right? So
1:05:58
the idea of providing one answer
1:05:59
to your query has
1:06:01
been definitely
1:06:03
toyed around with for a very long time. This is
1:06:05
why Bard showed up very
1:06:08
quickly after ChatGPT was on the
1:06:10
market. Correct? What
1:06:13
is in the AI labs
1:06:15
would blow you away in comparison
1:06:18
to what you, out here in the world,
1:06:20
will see. OK? Now,
1:06:22
the trick is this. The trick is
1:06:25
there were two reasons. In my personal view, I'm
1:06:27
not speaking on behalf of Google.
1:06:29
But one, how
1:06:32
do you monetize Bard? If
1:06:34
there is one answer to the question,
1:06:37
why do I need an ad?
1:06:38
OK? That's one side
1:06:41
of it. But the other side, which is what I loved
1:06:43
about the Google I worked in, is
1:06:46
there
1:06:46
was no arrogance around
1:06:49
the fact that there was
1:06:51
no one answer to any truth.
1:06:53
OK? So Google ethically
1:06:56
said, I don't have the right to give you one answer.
1:06:58
The choice of what answer is true is yours. OK?
1:07:01
I can only show you what everything
1:07:04
out there says. And then you choose what
1:07:06
you believe is the truth. That's a very ethical
1:07:08
position. And I respect Google for that. OK?
1:07:11
What we're doing with ChatGPT and the
1:07:13
likes today is a different
1:07:16
formatting of that one answer.
1:07:18
We're saying, look, there
1:07:20
have been two
1:07:22
million answers to the question you asked.
1:07:25
Google would list the two million answers
1:07:27
for you.
1:07:28
Transformers and language models
1:07:30
will say, and here is the average of
1:07:33
those two million answers. If we really,
1:07:35
really blend them together and give you the essence,
1:07:37
the very essence of them, as we believe
1:07:40
the essence is, here is the essence. OK?
1:07:43
Now, with
1:07:45
that, you have to understand that Google
1:07:47
then took an ethical position, saying,
1:07:50
one, we don't know the truth. So
1:07:52
we shouldn't say the truth. Two, we
1:07:55
also don't have the right
1:07:58
to put something out there that is not
1:08:00
tested enough for safety, for AI
1:08:02
safety. I think that's a very responsible position.
1:08:05
The problem is it's a prisoner's dilemma. We're
1:08:08
stuck in capitalism. So the minute you
1:08:10
put ChatGPT out there, Sundar
1:08:12
is in a checkmate position. So
1:08:14
what can he do other than say,
1:08:16
respond?
1:08:18
Because if he doesn't say respond, he will
1:08:20
get fired as the CEO because
1:08:23
his business is going down the drain and someone
1:08:25
else will say respond. Whoever the next person
1:08:27
is will say respond, put, put Bard
1:08:29
out there.
1:08:31
Yeah. And there's, I mean, I want
1:08:33
to take the glass half full approach here and that is
1:08:36
whether it was pouring mercury into the river, General
1:08:38
Motors would be pouring mercury into the river
1:08:40
from its factories right now if it hadn't been outlawed
1:08:43
because they'd be at a competitive disadvantage to
1:08:45
those who are allowed to do it. So
1:08:47
it feels to me that we just need thoughtful people such
1:08:49
as yourself advising our elected
1:08:52
representatives to form the right bodies, whether it's a
1:08:54
division of NATO or whatever
1:08:56
it might be at the UN, and
1:08:58
then regulatory bodies here. And it feels
1:09:00
like there's a lot of incentive for cooperation here.
1:09:03
So just before we move on, can
1:09:06
I just
1:09:06
publicly announce something, because
1:09:08
you said rightly that the
1:09:10
next US election is the battlefield, right?
1:09:13
This is the patient zero. I strongly
1:09:16
recommend that the US government and every
1:09:18
other government on earth criminalizes
1:09:22
fake content or AI-generated content
1:09:25
that is not marked as fake or AI-generated.
1:09:28
Okay. So you need a law in place that
1:09:30
says if what you're displaying
1:09:32
is found to not be actual truth,
1:09:35
you will be put in prison for the rest of your life.
1:09:38
Or what about, everyone keeps talking about an AI
1:09:41
pause. I found that letter very naive
1:09:43
and I also found that it lost a lot of credibility
1:09:45
when it had Elon Musk sign it because I don't think he's looking for an AI
1:09:47
pause. I think he's looking for other people to pause
1:09:49
so he can catch up. I found the letter very
1:09:53
self-defeating. But what about
1:09:56
the idea of an AI pause around all election information 90
1:09:58
days before the election, where they're just trying
1:10:01
to discern what is true or not true. What
1:10:03
if you just said to all the big platforms for 90 days,
1:10:06
we need you to put a pause on all election information 90
1:10:08
days before the election. Amazing.
1:10:10
As a matter of fact, wouldn't it be wonderful if
1:10:13
we just went back to old school
1:10:15
that all information about the election is
1:10:17
removed automatically other
1:10:19
than actual debates on stage
1:10:22
filmed by old-school cameras? Yeah,
1:10:24
I think that's right. But
1:10:27
again, the problem is a prisoner's dilemma.
1:10:30
So by doing that, you're
1:10:32
penalizing
1:10:34
social media companies so they don't have that
1:10:36
content, which is a big revenue source for them.
1:10:39
You're penalizing news networks so they don't
1:10:41
have content, you know, they don't have opinions
1:10:44
and noise and sticky eyes
1:10:46
that are staying there to listen to all of the fluff
1:10:48
and all of the arguments. And it's again,
1:10:51
the system, but you're absolutely
1:10:53
spot on. Wouldn't it be wonderful if
1:10:55
we said everything about the election
1:10:57
should be viewed in the classical old
1:10:59
form of humans talking to humans?
1:11:02
So you've been very generous with your time. I just wanted to
1:11:04
touch on a moment. We share that we both
1:11:07
sort of, we're both really interested in happiness
1:11:09
or the exploration of happiness.
1:11:11
And I'm just curious, what was the transformation?
1:11:14
Why did you decide to start
1:11:16
to kind of pivot from, you know,
1:11:18
obviously these deep, really important
1:11:21
subjects, including AI and technology, and
1:11:23
start talking about happiness? Was
1:11:25
it, is it a personal pursuit or was there
1:11:27
a moment in your life where you thought you needed to spend
1:11:29
more time on this? What was the inspiration? There
1:11:32
were two moments. So first of all, I was
1:11:34
the extreme example of the grumpy,
1:11:36
rich brat. Very early
1:11:39
in my life, I made massive
1:11:41
amounts of money by understanding mathematics and
1:11:43
programming before online trading
1:11:45
was a big thing. And, you
1:11:48
know, I was,
1:11:48
as a result, as always, the more money
1:11:51
I made, the more miserable I became. And I remember
1:11:53
vividly one Saturday morning where I completely
1:11:56
broke my daughter's heart, completely broke
1:11:58
her heart. You know, she comes in that Saturday
1:12:00
morning, jumping up and down and feeling
1:12:02
very happy about what we're about to do. And I look
1:12:05
at her, grumpy as always, looking at my
1:12:07
laptop, and I said, can
1:12:10
we please be serious for a moment? Okay.
1:12:13
And my daughter was five at the time and her heart
1:12:15
broke and she cried and I really realized I
1:12:17
didn't like that person. So I started to
1:12:19
research the topic of happiness. You
1:12:22
know, it took me 12 years to move from being that
1:12:25
miserable, annoying, grumpy executive
1:12:27
to being a very calm, very, very
1:12:30
open-minded and cheerful
1:12:33
and nice person, if you want.
1:12:36
Through that journey, I used the aid of my
1:12:38
son who was born a tiny
1:12:40
little Zen monk, always had peace in him
1:12:42
somehow, who unfortunately left us
1:12:45
in 2014. I was chief
1:12:47
business officer of Google X at the time. He
1:12:50
used to live in Boston, and
1:12:52
he had a US tour, played
1:12:55
in a band at the time in August
1:12:57
and he basically called
1:13:00
us in June and said, can I come visit you in Dubai
1:13:02
for a week? I feel obliged to come
1:13:04
and spend time with you. And we said, sure.
1:13:07
And then he went through the simplest surgical
1:13:09
operation of an appendix inflammation. Normally
1:13:12
it takes four minutes and the patient is out, but the
1:13:14
surgeon sadly made
1:13:16
five mistakes
1:13:19
in a row. All of them were
1:13:21
preventable, all of them were fixable, but
1:13:24
within four hours, my son left our
1:13:26
world and... No,
1:13:28
I'm so sorry. I didn't know that. No, it's
1:13:30
okay. I actually, I
1:13:33
think he's in a good place. I really do. And
1:13:35
Ali, before he left,
1:13:37
he had a dream, which he
1:13:40
spoke to his sister about, the only person
1:13:42
he spoke to about it. So she comes running to me
1:13:44
and she says, Papa, Ali told
1:13:47
me that he had a dream, that he was everywhere and
1:13:49
part of everyone,
1:13:50
which
1:13:53
in most spiritual... I didn't know
1:13:55
that at the time, but in most spiritual
1:13:58
teachings, everywhere and
1:13:59
part of everyone is to disconnect from the
1:14:02
physical, which basically is the definition of death.
1:14:04
And so when Ali
1:14:06
died, when Ayah, my daughter, told me that,
1:14:09
to my crazy executive
1:14:11
mind, Chief Business Officer of Google,
1:14:14
seven years building Google's emerging markets, it
1:14:18
translated into my mind as a quote. So
1:14:21
I somehow responded by saying,
1:14:23
sure, Habibi, consider it done. I
1:14:27
was responsible for the next four billion user
1:14:29
strategy at Google. I knew exactly how
1:14:32
to reach billions of people. And so
1:14:34
in my mind, I said, OK, I'm going to write a book.
1:14:37
And I am going to share everything my son
1:14:39
taught me about happiness. And in a
1:14:41
very selfish way, the objective of the book
1:14:43
was that a part of the essence
1:14:46
of my son Ali, which is
1:14:48
what he taught me about happiness, is going to
1:14:50
be read by enough people who
1:14:52
will tell enough people. And if I could reach 10 million
1:14:55
through six degrees of separation in 72 years
1:14:57
(that was my math), he would be
1:15:00
everywhere and part of everyone. That's
1:15:02
where the project started. And then somehow
1:15:05
the universe made it work. So within
1:15:08
six weeks after the book launch,
1:15:10
we were already a best seller in like eight countries.
1:15:14
My videos were viewed 180 million times. And
1:15:17
it was clear that 10 million happy was happening.
1:15:20
So we went to a billion happy as
1:15:22
a target. Basically, the rest of my life
1:15:25
was targeted for a billion happy. My
1:15:27
work on AI, believe it or not, is part
1:15:29
of a billion happy, because I
1:15:31
don't think we will be very happy
1:15:34
as humanity if we don't get AI right.
1:15:37
And so when I resigned from Google in 2018,
1:15:40
I published a
1:15:42
video that was called One Billion Happy, basically
1:15:44
the name of the mission. And that
1:15:46
was entirely about that, entirely about
1:15:49
the fact that AI is going to magnify human
1:15:51
tendencies and that if
1:15:53
we continue to be rude and aggressive
1:15:55
and grumpy and unhappy and selfish
1:15:58
and so on, then this will be the
1:15:59
magnification of AI. And my
1:16:02
effort has been to try and
1:16:04
magnify the true human values, love,
1:16:06
happiness, and compassion, to
1:16:09
try and say if we show enough of that
1:16:11
online, maybe we can influence
1:16:13
the views of AI so that they start to show
1:16:15
us more of that and basically expect us to
1:16:17
want more of that. And that was the
1:16:19
original work. And since then, I've
1:16:22
spent the last five,
1:16:25
six years doing nothing but this.
1:16:28
You know, my dream
1:16:30
is that by the end of my life, I would have contributed
1:16:33
somehow to waking a billion
1:16:35
people up to the fact that they have the right to
1:16:37
find happiness and hopefully would
1:16:39
have spent all of the money that I've earned from Google
1:16:42
and hopefully we will be
1:16:44
forgotten enough for the mission to continue after
1:16:46
we leave as a small team.
1:16:48
So I would love your advice. I wasn't expecting
1:16:51
to go here. I lost
1:16:53
a close friend a week ago to totally
1:16:55
unexpected leukemia that just kept
1:16:57
getting worse and worse over the course of 12 months.
1:17:00
And I'm really struggling with the grief. I'm
1:17:02
just not, I wasn't prepared for it. I don't know
1:17:04
how to deal with it. That is meaningful
1:17:07
grief. You have dealt with profound grief.
1:17:09
There is, I can't imagine any more profound grief
1:17:11
than losing a child. We grow up with this comfort
1:17:14
as parents that we're going to get to go first. It's
1:17:16
a huge source of comfort that I have. What
1:17:18
advice do you have for people
1:17:21
who
1:17:21
are dealing with this type of grief?
1:17:25
Well, I mean, it's
1:17:28
difficult to
1:17:31
give any advice that would take the grief
1:17:33
away that quickly.
1:17:34
Let's just be very open and honest about
1:17:36
it. There is a finality to death that
1:17:39
contradicts everything we've
1:17:42
ever been trained for as humanity. Okay.
1:17:44
It triggers our fear. It triggers our helplessness.
1:17:47
It triggers our insecurity. We
1:17:49
cannot trust life. We
1:17:51
miss the person that we love that left us. We
1:17:54
are scared about where they are. We have
1:17:56
lots of uncertainties. It's a very overwhelming
1:17:59
trauma. And the
1:18:02
reality of the matter is that my first
1:18:04
advice is: grieve.
1:18:07
Fully grieve. If you're
1:18:09
angry, be angry. If you're unsure,
1:18:12
be unsure. If you want to take a break, take a break. This
1:18:15
is the first step on the way. Then
1:18:20
there are two steps in my mind that
1:18:22
are one is very logical and
1:18:24
the other is very spiritual. The
1:18:27
logical one is very harsh
1:18:29
to say. It's very harsh, but it
1:18:32
is, you know, when sometimes they say
1:18:34
the truth will set you free. There's
1:18:36
absolutely nothing,
1:18:39
nothing you can ever do to bring them back. Okay,
1:18:42
so my very mathematical, logical
1:18:45
mind actually, believe it or not, went
1:18:47
out and did the research. It's like, has
1:18:49
anyone ever come back? Right? Yeah,
1:18:51
we had many people that came back from near-death
1:18:54
experiences. But you know for
1:18:56
a fact that your friend is gone.
1:18:58
Okay, the first
1:19:00
thing I started to think about after Ali left
1:19:04
was if I hit my head
1:19:06
against the wall for 27 years, he's
1:19:08
not going to come back. I'm going to
1:19:10
be torturing myself
1:19:12
and the world is not going to get better.
1:19:14
Okay, so I found in my
1:19:16
heart a space that basically said, I
1:19:19
don't know how I will get there, but
1:19:21
I will hopefully find the positive in all
1:19:23
of this. I don't know how I will get
1:19:25
there, but I will learn to accept that he
1:19:27
left.
1:19:29
And I think accepting that
1:19:31
he left, I call this committed acceptance. Committed
1:19:34
acceptance is that sometimes life
1:19:36
throws things at you that are
1:19:38
so final.
1:19:40
You know, from
1:19:42
the silliest thing of being stuck in
1:19:45
traffic when you have an appointment that you're
1:19:47
about to miss. Okay, all the
1:19:49
way to losing a loved one. There are very
1:19:52
frequent things that happen in your life that are
1:19:54
out of your control. And what
1:19:57
I say is committed acceptance.
1:19:59
Accept that this is your new baseline.
1:20:02
Accept logically and
1:20:04
emotionally that as painful as it is,
1:20:06
this is the end of that hug that
1:20:08
I could get from my son. It's not going to happen again.
1:20:11
And then commit to make your life and
1:20:13
the lives of those around you better,
1:20:15
despite the pain,
1:20:17
which is a very interesting thing. You don't have to know how.
1:20:19
You do not have to know how. You just tell yourself,
1:20:22
now that I've accepted this tragedy,
1:20:25
I accepted this pain, I'm going to
1:20:27
crawl out of it. Maybe the world will crawl
1:20:29
out of it. I'm going to do one thing today
1:20:32
that makes my life better than yesterday. And I'm
1:20:34
going to do another thing tomorrow that makes
1:20:36
life a little better than today. That's
1:20:39
it. Okay. So this is the practical
1:20:42
side, if you want. The spiritual
1:20:44
side, I think, is very key so that I don't
1:20:47
lie to people. Okay. And in my first
1:20:49
book, Solve for Happy, I spoke about
1:20:51
the concept of death
1:20:53
from a physics point of view.
1:20:55
Okay. We struggle
1:20:57
with the scientific method in understanding
1:21:00
anything that we can't
1:21:03
measure. And
1:21:06
you cannot measure what happens after we die. Okay.
1:21:09
But there are lots of concepts in quantum
1:21:12
physics specifically, and
1:21:14
in the theory of relativity, and the fact
1:21:16
that spacetime exists as a whole. Maybe
1:21:18
we shouldn't go into those today. But
1:21:21
that will tell you that life is separate
1:21:23
from the physical. Call it consciousness. Call
1:21:26
it spirit or soul
1:21:28
like religions call it. But there is a non-physical
1:21:30
element to us. That non-physical element is
1:21:33
the part that disconnected
1:21:35
from my son's body when he left. And
1:21:37
that same handsome form of his that
1:21:40
was on the intensive care table
1:21:42
was no longer him. You could feel it. You
1:21:45
could feel that his essence was no longer there. And
1:21:49
if you can accept the fact that there is a non-physical
1:21:51
element to us, okay, that non-physical
1:21:54
element by definition exists outside
1:21:56
spacetime because otherwise it wouldn't be able to
1:21:58
perceive time and the passage of time; it's a simple
1:22:01
object-subject relationship, then
1:22:04
that non-physical element is not affected
1:22:06
by the events of the physical. So my son
1:22:09
really never died. So
1:22:11
my son's physical form was
1:22:14
born and my son's physical form
1:22:16
decayed, but the essence
1:22:18
of my son, his consciousness, has never
1:22:21
gone anywhere. And when you really understand
1:22:23
this, you understand that death is not the opposite
1:22:25
of life. Death is the opposite of
1:22:27
birth. And life exists
1:22:30
before, during, and after. That
1:22:32
I don't expect everyone to agree with.
1:22:35
But to me, because I see it from a physics
1:22:37
point of view more than a religious point of view, I
1:22:40
tend to believe that my son is okay. I
1:22:43
don't know okay how, but I
1:22:45
know for a fact that I will join him sooner
1:22:47
or later and that I too will leave
1:22:49
this physical form and that I too
1:22:52
will be okay. And if I am
1:22:54
optimistic about my life, it may take 25 more
1:22:57
years before I leave.
1:23:00
But if I look back at my last 56 years that
1:23:02
passed literally like that, then 25
1:23:05
is not that long. So in my heart,
1:23:07
I have a certainty that I will be
1:23:09
there too. And I
1:23:11
think there is not a bad place.
1:23:14
There is not even a place. It's not
1:23:16
even a time.
1:23:17
It's an eternity of consciousness. And
1:23:19
that's the truth of who we are. Very
1:23:22
difficult to explain that quickly.
1:23:24
But at the end of the day,
1:23:26
I don't see the departure of my son
1:23:29
necessarily as a bad thing.
1:23:31
Not for him. And
1:23:32
definitely not for the world that benefited
1:23:35
from his departure. It's just very
1:23:37
painful for me. And if it's painful
1:23:39
for me, then I'm the one responsible
1:23:42
for managing that pain. I'm the one responsible
1:23:44
for dealing with that pain, because that pain
1:23:46
doesn't add any positivity to me at all.
1:23:50
Mo
1:24:00
has since switched gears and made happiness his primary focus.
1:24:03
In 2020, he launched his podcast, Slo
1:24:05
Mo, a podcast with
1:24:07
Mo Gawdat. He joined us
1:24:09
from Dubai. Mo, I mean this so sincerely.
1:24:12
This was very meaningful, and I appreciate
1:24:14
your good work, and generally
1:24:16
just really wish the best for you and yours.
1:24:19
And so appreciate not
1:24:21
only how thoughtful and how just incisive
1:24:23
you are, but how spiritual and authentic
1:24:26
you are. Really appreciate your time. I
1:24:28
really can't thank you enough, Scott.
1:24:29
1:24:32
Not just for the opportunity to speak to your audience,
1:24:34
but I really felt connected during this conversation.
1:24:36
I really enjoyed the
1:24:39
level of depth that we went into, and I'm
1:24:41
really grateful for that.
1:24:51
Algebra of Happiness. I had a close friend
1:24:53
pass away last week. My friend
1:24:55
Scott Sabah, 54 I
1:24:59
believe, literally almost
1:25:02
a year ago, maybe 15 months ago, we went to Tulum together.
1:25:04
We were sort of, not sort of, we were
1:25:06
close friends. And we like to do the
1:25:08
same things. We like to go out, we like to party, we
1:25:10
like to drink. And
1:25:13
he lived in New York as a real estate developer, graduated
1:25:16
from USC. Three
1:25:18
kids, two that just graduated from USC,
1:25:21
one who was a freshman at UVA. I
1:25:24
met Scott probably 15 years ago. We
1:25:26
lived in the same building. And
1:25:29
we just hit it off and started traveling together.
1:25:31
And whenever I was in New York, I would just call Scott. And
1:25:33
he was always up for going
1:25:36
and grabbing a drink with me at Zero
1:25:38
Bond. Or he was kind of cool
1:25:40
with me. I'd go
1:25:42
to the same places over and over. And
1:25:45
Scott would always find these cooler places in the East Village. He
1:25:47
just kind of knew what was going on. Nice
1:25:49
man, ton of fun, great shape.
1:25:51
Year ago, bump on his head, ends
1:25:54
up it's leukemia, no problem. We're going to treat it with pills,
1:25:56
not chemo. You don't need it. Oh, the
1:25:58
pills aren't working. We need to go to chemo.
1:26:02
Oh no, it's, you know, it looked
1:26:05
like it was better, but now it's converted to something much
1:26:07
more serious. I think it was called Richter's. And
1:26:10
we need to do a stem cell transplant. Okay,
1:26:12
your son's a match. That's good news. We're
1:26:14
literally going to reset your entire, kind
1:26:17
of your entire blood makeup. I
1:26:20
mean, even different blood type. Does the stem cell
1:26:22
transplant. Good news. It
1:26:25
worked. You're all clear. Oh
1:26:27
no, it's back. And
1:26:29
there's nothing we can do. I mean,
1:26:32
just from bad to worse to tragic.
1:26:36
And I got to be honest, I was sitting here thinking about this. Like,
1:26:38
what is the lesson here? And I don't, you
1:26:40
know, I'm struggling with this. I don't have anything
1:26:43
profound or moving or inspiring to say
1:26:45
around this. This is a tragedy.
1:26:48
And I'm struggling with it. It is something I
1:26:50
still can't wrap my head around. I've had some
1:26:52
death in my life, but not a lot. I lost my mom. So
1:26:55
when your parents die, it's sort of the natural
1:26:57
order. I think, you know, I'm not going
1:26:59
to say you're prepared for it, but at least rationally
1:27:01
it makes sense. But when you lose a friend who's healthy
1:27:03
and who's younger than you, you're just kind of struggling
1:27:06
to, like, wrap your head around it. The
1:27:08
only thing I can come to and the only thing
1:27:10
I would take away or what I would try and impart on people
1:27:13
is that, you know, when I
1:27:15
would text him towards the
1:27:17
end, I would say, you know, I love you.