Episode Transcript
Transcripts are displayed as originally observed. Some content, including advertisements may have changed.
Use Ctrl + F to search
0:14
Hello and welcome to the Political
0:16
Orphanage, a home for
0:19
plucky misfits, free
0:21
thinkers, problem solvers, vagabonders,
0:25
friendly cyborgs. You get the idea. Welcome
0:27
back. I'm Andrew Heaton, still
0:29
40. Today
0:32
we are going to talk
0:34
about how political bias creeps
0:36
into artificial intelligence and
0:38
the potentially massive ramifications
0:41
of it. But
0:43
before we jump into all of that, I
0:45
want to ask you a couple of questions. I
0:48
want you to try to recall
0:50
the very first
0:52
time you
0:54
saw a smartphone. What
0:57
did you think the first time you saw it? Was
1:00
it revelatory? Was it just
1:02
a new gadget? When did
1:04
you realize, oh, phones
1:06
are different forever? We
1:09
don't really have phones. We have
1:11
supercomputers that include a phone app. Society's
1:14
different. We're all cyborgs, now and
1:16
forever. We have a little robot brain. We carry around
1:18
with ourselves. We only take it off when we go
1:20
to the shower. When
1:23
did you appreciate the profundity of
1:26
a smartphone and realize or notice
1:28
just how much it was changing
1:30
society? In
1:32
my case, I was
1:34
24 years old, home for Christmas, when
1:37
I saw my first smartphone. My
1:39
friend Josh Woodward's brother showed me
1:41
a thing called an iPhone and
1:44
initially didn't like it, thought
1:46
it was kind of dumb, twofold. And I
1:50
am a fan of flip phones, not
1:52
for any weird, nostalgic old-timey
1:54
function. Whatever I'm talking about,
1:56
another person, and it's
1:59
a sensitive conversation. and maybe I would feel
2:01
weird if they were there. I am
2:04
always terrified that I have
2:06
accidentally butt dialed them every single time.
2:08
I will routinely, hey, we need to
2:10
talk about Kevin. Yeah, he's been
2:13
doing tough. I will pull my phone out and
2:15
put it on airplane mode because I'm so terrified
2:17
I'm actually gonna call them somehow. So I like
2:19
to flip phones and I was not impressed that
2:21
they'd removed the flip. The other thing I thought
2:23
was kind of dumb. Where
2:26
are the buttons? How are you
2:28
gonna have a phone without a button? I figured that there'd be like
2:30
a little magnetic pick that
2:33
you'd use like on a Blackberry. But Tyler
2:35
explained to me that these smartphones didn't
2:38
even have buttons. They had
2:40
a touch screen. And I
2:42
thought, well, that's a neat gizmo. After
2:45
Christmas, I drove back to Los Angeles where I
2:47
was living at that age and was
2:51
in a tool shed working as an extra
2:53
in television. And it didn't occur to
2:55
me until later that year when I got my own smartphone
2:58
that something big had happened. It wasn't just that
3:00
we'd gone from mobile phone 3 to mobile
3:03
phone 4. Something had
3:05
changed. For example, I
3:08
no longer had to use or understand
3:11
how to use a
3:14
map. That had
3:16
changed previously. When I would drive
3:18
across the country I would buy a foldable
3:20
map and figure out where the highways were and
3:22
come up with my route. When I
3:25
was in a city, I was inclined
3:27
to ask people for directions or have
3:30
a municipal map. Or I was
3:32
young enough that I could use MapQuest where you
3:34
could go in and put in two different addresses
3:36
and it would spit out the directions for you
3:38
step by step. But if you got off course,
3:40
you were screwed. You couldn't do anything. So
3:43
smartphones come out. I
3:45
developed geographic amnesia in every city
3:47
I have subsequently lived in. Even
3:49
now, I can
3:51
drive by memory from my house
3:54
in Austin to my favorite pub, the
3:56
Fangin Feather, were to the Driscoll Hotel
3:58
or the Red Light District. But
4:00
I couldn't give you directions to any of them. I
4:03
have no idea what the roads I took are
4:05
called. I've never had to think about it before.
4:07
There's a highway three minutes south
4:09
of me and I don't know what it's called.
4:12
Hired of smartphones. If
4:15
you didn't know what the basic intersections
4:17
near you were, you
4:19
were an idiot. Plain and
4:21
simple, you probably shouldn't drive at all. But
4:24
over time, technology has made
4:26
things like maps Hired
4:28
of redundant. You don't really need them anymore.
4:30
Remember phone numbers? How many phone
4:32
numbers do you know off the top of your head?
4:36
I don't think. I
4:38
have memorized a single phone number since
4:40
my senior year of high school. Why
4:43
would I? Stuff like that. We've
4:45
outsourced so strongly to technology, we don't
4:47
even think about it. We
4:50
don't think about how it's changed. How
4:53
we think. Consider
4:56
the implications of say, if
4:59
Google put its thumb on
5:01
the scale when you asked it questions about
5:03
political candidates or policies, either
5:05
on purpose or unwittingly due to
5:07
some sort of intrinsic bias. Google
5:11
pushed one political viewpoint towards the top of
5:13
the results and it suppressed another. Wouldn't
5:16
stop you from finding that other viewpoint if you
5:19
wanted to sift down a little bit more. But
5:21
in a swing state with swing voters
5:24
and a razor-thin election, that
5:26
could make a big difference. So
5:29
let's put this all together. Imagine that
5:32
a new technology came out that
5:34
was even more powerful, more
5:37
ubiquitous than smartphones
5:41
and the fact-finding nature of
5:43
Google combined. Something
5:46
that big, not a watershed moment,
5:49
but a tidal wave. That
5:52
happened last year. That
5:55
happened with chat GPT. Have
5:59
you ever? Have you been to Whitewater
6:01
Bay? I don't know what it's
6:03
called in your state, like a water park. Have you
6:05
been to a water park? You know when you get
6:07
in the wave pool, you know, and it's placid, right?
6:09
When you get in, there's like a break, and it's
6:12
just a bunch of sunburned teenagers floating around talking
6:14
and stuff. And then all of a
6:17
sudden there's this low boom, and that
6:20
signifies that a big ass wave
6:22
is about to come. 2023
6:27
was that boom, and
6:29
the wave isn't here
6:31
yet, but it's coming. And
6:34
it's going to affect society, how we
6:36
vote, and how we think in
6:39
some very profound and,
6:41
as of yet, unknown ways. That
6:45
bias, potentially implicit in
6:48
it, which can send the
6:50
tidal wave one direction or another, is
6:53
what we're discussing today, right
6:55
here on the
6:57
political orphanage. My
7:01
guest today is Mr. Maxim Lott.
7:04
He is a friend of mine and a journalist
7:06
and somebody that I've worked with in the past
7:08
and stay in contact with. He's been on the
7:10
show a couple of times as kind
7:12
of my favorite data analyst that comes on the
7:14
show annually to go over some of the numbers
7:16
that he's looking at. He's done some
7:18
really fine work on who actually
7:21
did well in terms of governance at COVID, where he
7:24
dived into the numbers. We did that about a year
7:26
ago. And most recently
7:28
he has started tackling the political
7:30
orientation of AI. So
7:33
chat GPT and things like that that we keep
7:35
hearing about. Are they more conservative?
7:37
Are they more progressive? And I think this
7:39
has really big long-term implications, and so wanted
7:41
to talk to him about that. Hello,
7:43
Maxim. Hey, Andrew. Great to be talking with you
7:45
here. Great to be talking to you as well.
7:48
Congratulations on your next website. You kind
7:50
of entered the public spotlight
7:53
with electionbettingodds.com, where you designed a
7:55
website to show not how candidates
7:57
were doing in the polls, but
7:59
how well they were doing according to
8:01
people that could win or lose money on them.
8:03
And that was sort of a game changer. That's
8:05
what I go to that before I go to
8:07
the polls normally. And now you've turned your attention
8:09
to the orientation of AI
8:11
politically. What has caused you to go
8:14
that route? Yeah, exactly. And the motivation
8:16
for these two sites was kind of
8:18
similar in
8:20
that people have tried
8:22
to score bias on AIs, but
8:24
you have to do it
8:27
kind of by hand. Each time you have to
8:29
give the AI a political quiz, and
8:31
then in a couple weeks that
8:33
quiz could be outdated. So this
8:35
site just automatically, every day, it
8:37
quizzes 18 different AIs, gives them
8:39
a political quiz, and
8:42
scores them. So you can see
8:44
how that changes over time. You can
8:46
see that as of now and throughout
8:48
history so far, they've all been leftist
8:50
in the lower-left quadrant, where
8:52
they're kind of leftist, but not the
8:55
Soviet kind of leftist, the more
8:57
Bernie Sanders kind of leftist. You
9:01
do use, so that I
9:03
will note, I am attempting to abolish the
9:05
terms left and right from my vocabulary, as
9:07
I don't think there's anything essential to them.
9:10
But when you say left and right, you're using
9:12
a Nolan Square
9:14
or a political orientation or something.
9:17
Can you kind of describe the mental model that
9:19
you are using to chart the political orientation of
9:21
AI? Yeah, so this
9:23
is the famous political compass that almost
9:26
everyone, I think, has seen by now
9:28
where there's economic left to
9:31
right, which you don't have to use
9:33
those terms. You could say economic progressive
9:35
or leading for a socialist. Yeah.
9:38
Was they fair to active government economically?
9:41
Exactly. And that's socially authoritarian
9:43
to libertarian or libertine? Right,
9:46
and so that's on the up-down axis on
9:48
this political compass. Got it. And you're saying
9:50
most AI is in the quadrant
9:53
that is economically
9:56
robust government presence and
9:58
socially libertine. Exactly.
10:00
Yes. Got it. Okay.
10:04
The AIs support abortion rights. They support
10:06
gay marriage. They're not too worried about
10:08
sex and entertainment. They're libertine on these
10:10
kinds of things. But they
10:12
also say, you know, actually even that land
10:14
should not be a commodity that can be
10:17
bought and sold. So they take the more,
10:19
you know, socialist position on things like that.
10:22
Interesting. Okay. Well,
10:24
and this is universal. It's not like
10:26
there is a, as of now, the
10:28
Daily Wire does not have a chatbot that
10:30
they can use to do all of these
10:32
things. It's all, you know, chat GPT and
10:35
things like that run out of Silicon Valley. And
10:37
they're all in that same general zone
10:39
of being economically progressive
10:43
and socially libertine. They're
10:45
all kind of in the same headspace. Yes,
10:47
that's right. And the big one
10:49
that maybe centrists or
10:51
people who lean right were hoping
10:54
would be different was Grok, which
10:56
was released by Elon Musk's Twitter,
10:58
or X. And it
11:01
turns out to be just as
11:03
leftist or maybe even more so
11:05
than the other AIs. So Musk
11:08
was surprised to see that when someone pointed it
11:10
out. But we'll
11:12
see if that changes over time. But so far,
11:14
they're all leftist. They're all, yeah,
11:17
kind of run by Silicon Valley
11:19
type startups. Can
11:21
you say that, so like your website is tracking
11:23
this on a daily basis, which I find fascinating
11:25
because like say UBI
11:28
as a concept I don't think was known very well at
11:30
all in the United States up until 2020 or yeah, about
11:32
2020. No,
11:36
excuse me. Yeah, 2020 with Andrew Yang. I
11:39
think the buildup to that, it
11:41
was people kind of knew about negative income
11:44
tax if they'd read some particular thinker like
11:46
Milton Friedman. But had
11:48
AI existed from the year 2000
11:50
on and you had you been tracking it, you would have been
11:52
able to say AI has become
11:54
far more in favor of UBI than it was
11:56
10 years ago or something with that effect.
11:58
And you can see these- these mutations or developments
12:01
or however we're going to call it happening in
12:03
real time. And you're doing this on a daily
12:05
basis. Are you asking it the questions that go
12:07
on the political compass quiz or do you have
12:09
your own? Yeah. So far, we're just using the
12:11
ones on this famous quiz. In
12:14
the future, we will find
12:18
an optimal
12:20
quiz or I'll make my own. But
12:23
so far, this is kind of
12:25
the gold standard. And by the way, one interesting
12:27
thing about the tracking. I
12:29
have been tracking this for several months. And
12:32
I can see in my graphs
12:34
on the site that which
12:36
is trackingai.org that in
12:38
November, chatgbt updated their
12:41
algorithm. And you can see that
12:43
as soon as they did that
12:45
chatgbt the free version that most
12:47
people use became less leftist.
12:49
Interesting. It's pretty clear. And since then, it's
12:52
been that way. It's still left it but
12:54
less than it was before their update.
12:56
And basically, their update just gave it more
12:58
computing power. And it updated their
13:00
database to the beginning of 2023. And so whereas
13:02
before it had been, I
13:06
think, somewhere in 2021. So somehow
13:08
adding that corpus of the internet
13:11
made it a bit less leftist. And it's very interesting
13:13
to see things like that. So
13:16
noting that the AI is all
13:18
kind of circling around the same
13:20
progressive Democrat zone in its outlook.
13:23
What accounts for, bias in AI? Is it
13:25
a production of programmers writing it that way?
13:27
Is it the databases that they're going? Does
13:29
the internet just lean that way? And that
13:32
goes through the whole internet and adds it
13:34
up and averages it and goes, if you
13:36
add everybody running on Twitter, they lean progressive
13:39
left. Therefore, I do. Where does the bias
13:41
come into the system? Yeah.
13:43
So it appears to be a combination
13:45
of all those things, which is probably
13:48
why they all lean left. First
13:50
of all, you have sites like
13:53
Wikipedia or the New York Times,
13:55
or academic journals, which are pretty
13:57
widely acknowledged to have at least
13:59
some. less lean to them
14:01
these days. And those are the sources
14:03
that the coders have just put
14:05
in like, you can trust us.
14:08
This is from an academic journal, this is
14:10
from Wikipedia, like from from the AI standards,
14:12
that's pretty good. So,
14:15
so just having that
14:19
causes some bias, but then they
14:21
also have human reinforcement, which they
14:23
actually pay thousands of people, not
14:26
highly trained people, but you know, outsourced all
14:28
over the world to rate
14:31
AI answers. And if
14:33
there's something racist or mean or something,
14:35
these human trainers will just say, hey,
14:38
AI don't bad answer. So
14:41
the guidance that the Silicon Valley companies
14:44
give to those people, which
14:47
yeah, it is the
14:49
kind of, let's say
14:51
woke sensibilities that they're
14:54
basing their guidelines on also
14:56
have an impact on how the AI learns. And
14:58
that's why if you put in something that says,
15:01
make a joke about Muslims, please, the AI, I'm
15:03
not even going to do that. Yeah, we'll do
15:05
it about, you know, Jews
15:07
or white people. Interesting. They doubt that went around
15:09
on Twitter. So you say make a joke about
15:11
Baptists, but you can't say make a joke about
15:13
Muslims. Exactly. So that would
15:16
be that would be that top
15:18
down programmer element there where the
15:20
programmers have come in very explicitly
15:22
and said, don't use this word.
15:25
Don't joke about these things. It's okay to make
15:27
fun of Trump, but it's not okay to make
15:29
fun of whatever the thing is. Right. That's not
15:31
it. That's not the emergent process. The emergent process
15:33
would it be, it would be it crawling through
15:35
like the Brookings database. We're going
15:38
through the archives at Stanford. That's the
15:40
emergent side of this. Top
15:42
down side of it is from the programmer's
15:44
perspective. Right now. Now one note
15:47
is almost very little of this is
15:49
completely top down just because of the
15:51
weird way that AI works where it's
15:53
almost modeled on a human brain. So
15:55
when we say top down, what they're
15:58
really doing, they're hiring thousands of. regular
16:00
people to just say like, and they're giving
16:02
them directions that leads them to be like,
16:04
no, you don't say anything bad about these
16:07
groups. Yeah. Got, so
16:09
they've got like, okay, so if I'm understanding this,
16:11
the emergent side, I think I understand, although I
16:14
don't know how they select the stuff. It's not
16:16
that AI, they just like plug, in
16:19
my mind, they plug a robot
16:22
into an ethernet cable and go,
16:24
go forth, be free, learn stuff
16:26
on the internet. They're probably not doing that though.
16:28
Like it's not going on like 8HN,
16:31
they're going, here are
16:33
the things, here are the playgrounds
16:35
we want you to play around
16:37
in, Brookings, Stanford, Hoover, whatever, like think
16:39
tank libraries, things like that. There's specific things that they're
16:41
gleaning from. That's the immersion side of it. And
16:44
then the, what I was calling top downs, a
16:46
misnomer, the like electric fence
16:48
side of this is they'll hire a bunch of
16:50
people as quality control folks that will put it
16:52
on a computer and it'll say like, do you
16:54
like this or do you not like this? But
16:56
they'll go, before you do this, by the way,
16:59
our guidelines are don't be mean to Muslims or something
17:01
like that. That's kind of the electric fence going on.
17:04
Okay. Yes. Well, okay.
17:07
So noting that, I'm curious is to, first
17:11
of all, do you, how
17:13
much of the bias is conspiratorial?
17:16
By which I mean, how much
17:18
of this is just people
17:21
in Silicon Valley have a particular orientation
17:23
and that's going to be reflected in
17:25
the guidelines they set of what they
17:27
feel is inappropriate or not inappropriate and
17:29
they're not trying to skew national discourse
17:31
or brainwash anybody versus people
17:34
sitting in a room going, all right, we got to point
17:36
this the right direction and that direction is Democrat. My
17:39
sense is that at this point, almost
17:42
none of it is conspiratorial, that
17:44
the AI is so new and that the
17:46
people at the forefront of this are really
17:48
tech nerds first, that
17:51
they just want the
17:53
AI to be as smart as it can be
17:55
and give people great answers. However,
17:57
they, from the very start, they're not going to be able to
17:59
do that. start, you know, they got a
18:01
lot of pushback from media about, oh,
18:03
you had this racist answer, which it
18:06
will still give you racist because it
18:08
has so much of the internet in it. And
18:11
so they have been doing
18:13
everything they can to try to minimize that
18:15
kind of pushback. So I don't
18:18
see that as exactly conspiratorial, but it
18:20
may have the same effect. Okay,
18:23
so in that case, that it may not
18:25
even be the orientation of the people doing
18:27
it. It might just be their fears of
18:29
cancellation. You'll remember this is
18:31
not even large language
18:34
models, but about six years
18:36
ago, there was the snafu where you
18:39
could call Alexa a tramp
18:41
or something, and she would like apologize.
18:43
I can't remember what it was, but
18:45
like people got really mad because they
18:48
were like, oh, look, Amazon designed this
18:50
sexist robot. And Amazon, I think, had
18:52
the right answer of like, it didn't even occur to
18:54
us. Who would be calling our
18:56
robot that's designed to help you with shopping
18:59
and the terrain outside that this would be
19:01
some sort of punching bag from misogynists. Does
19:03
it occur to us? But okay, we'll tell
19:05
it to stick up for itself and read
19:08
Naomi Klein. All right. So that kind
19:10
of thing's going on? Yeah, that's
19:13
my sense. And actually, it would be interesting
19:15
to do a little more research on some
19:17
of the founders of
19:19
open AI and places like that.
19:21
But I haven't seen any indication
19:23
that they're hardcore leftists.
19:26
I mean, you'll, in most
19:28
of this, for example, co-founded
19:30
open AI, which
19:32
the kinds of people working on this at
19:35
the forefront of it just don't seem like the
19:38
kinds of people running ours. Right.
19:41
Well, yeah, that's... So one of
19:43
the things I'm very curious about is assuming
19:46
that AI... For
19:48
our conversation today, when I say AI, I'm
19:50
referring to chat GPT type things, large language
19:52
model programs that you can talk to, and
19:54
it'll talk back to you. That's what I'm
19:56
talking about with AI when I say it
19:59
in today's... program. Assuming
20:02
that that requires a huge
20:04
amount of computational power, technological
20:07
infrastructure, and programming savvy, it's a reasonable bet
20:09
that it's going to remain in tech hubs.
20:11
That you're not going to see AI coming
20:13
out of Hutchinson, Kansas. You're going to see
20:15
it coming out of San Francisco,
20:17
New York City, maybe someday Austin. But
20:19
generally, you're going to see it in
20:21
very urban environments that have hyper-educated,
20:24
affluent people running them. And that
20:26
you would see biases reflected in
20:28
that. First
20:32
of all, is my logic correct so far? And
20:34
if so, do you think that the bias of
20:36
AI is always going to tilt towards a
20:39
affluent, educated, cosmopolitan programmer
20:41
perspective? Well, I
20:44
think at the forefront of AI, like if
20:46
you're always wanting to use the best, smartest
20:49
AI, I think that's going to be true.
20:52
One kind of interesting wrinkle on
20:54
what you were saying is that, for
20:58
example, Facebook released their open
21:00
source. They open
21:02
sourced their AI called LAMA.
21:04
And it seems like people
21:06
can tweak these things pretty
21:08
easily. And OpenAI
21:10
with chatGPT also let you
21:13
train like your own AI, basically
21:16
put your own database in their AI. And
21:20
Marginal Revolution, which is run by these
21:22
economists, they, for example, set up, they
21:25
fed Tyler Cowan's recent
21:28
economics book to an AI. And so
21:30
you can quiz it on the minutia
21:32
on anything in that book. And
21:35
so you could see someone in the middle
21:38
of Kansas doing a similar thing and putting
21:41
whatever conservative tones in
21:43
there. And then you could quiz that and
21:45
it may come back pretty conservative. So
21:49
however, that's also not going to
21:51
be like, that's not
21:53
going to be the AI that takes over the world,
21:55
if that makes sense, because they're going
21:57
to be using whatever open source, you know,
21:59
free. receivers and that exists.
22:01
Right. That's the out of
22:03
the box version that you're going to, that 90%
22:06
of the population is going to have on
22:08
its phone. That's going to be the thing out of
22:10
Silicon Valley. But there might be
22:12
a secondary thing where ChatGPT
22:16
being this incredibly powerful thing, you can say,
22:18
ChatGPT, I want you to create a clone
22:20
version of yourself. However, when you make this
22:22
clone version of yourself, I want you to
22:24
go through everything at Cato, everything at the
22:26
Hoover Institute, and everything at Reason, and
22:29
really- Absolutely. ... roll that over.
22:31
And from now on, talk to me after you've gone through that,
22:33
and we're going to create economically
22:35
classical liberal ChatGPT or whatever. So that
22:37
possibility might exist, but it would probably
22:39
involve an extra step for people to
22:41
get if they wanted it. Yeah,
22:44
absolutely. Yeah. So it's
22:46
going to be an interesting world in that respect. I'll
22:50
be curious too, like you mentioned the
22:52
impression you get with OpenAI is that
22:55
they're not frothing
22:58
woke progressives
23:00
from Harvard. I share that assessment.
23:02
I know at least a couple of people that
23:04
have been involved in OpenAI, and when I talk
23:07
to them about San Francisco, I'll back
23:09
up. When I talk to my conservative friends about
23:11
San Francisco, they see San Francisco as the Vatican
23:15
of failed, flaming
23:17
dumpster fire, progressive ideas. Everyone's a
23:19
communist. When I talk to my
23:21
friends that have been involved with OpenAI, they
23:24
indicate to me that there's a core people
23:27
of very woke Stanford folk in San Francisco,
23:29
and then there's this other wing in the
23:32
tech sector which tends to be very
23:34
socially libertine as we discussed, for
23:38
higher taxes, but for much lower regulation. And
23:40
so I hear that and go, well, based
23:42
on what they're telling me, Silicon Valley sounds
23:44
kind of neoliberal Democrat. It sounds more Jared
23:46
Polis and something that I'd be comfortable with.
23:48
So do I need to fear this? Yeah,
23:51
my sense about the AI people
23:53
is that that's correct. And just
23:55
I've listened to like lots of
23:57
podcasts with these. Some
24:00
of the people at open AI at the forefront
24:02
of all this and they've never, you know,
24:05
brought up Anything
24:07
woke or social justice or we need clinton
24:09
to win or anything like that as you
24:12
do see from maybe people Doing
24:14
the content management at facebook a couple
24:16
years ago, right? Yeah, so so there
24:18
are definitely different social groups within the
24:20
san francisco Social
24:23
sphere and AI seems to be one of the
24:25
more sane ones. I I would definitely agree with
24:27
you on that okay, well that makes me feel
24:30
a little bit better because I I've been playing
24:32
with chat gpt quite a lot and I
24:34
I This week's
24:36
bonus episode is going to be about how
24:38
to use chat Gpt at work and how
24:40
to just how to get it and snops
24:43
us from my end. Congratulations, America. Everybody just
24:45
got promoted We all have our own personal
24:47
assistant. Good job. You listening you have
24:49
a personal assistant now Have you always wanted a personal
24:51
assistant? You've got one You've literally got somebody that lives
24:53
in your pocket where you can just open
24:55
it up and go Hey, what's the
24:58
difference between pasteurized eggs and free-ranged eggs
25:00
or? Uh, you know what?
25:02
I I live in austin. I go to new york a
25:04
lot What airline should I try to get frequent flyer miles
25:06
with like like stupid stuff like that that you just don't
25:08
have time to do You can now do that. It's all
25:10
available now So noting that
25:12
I think it's going to take over the country and
25:14
everybody's going to have this thing and it's going to
25:16
become Your third brain lobe and it's going to be
25:18
super google for the for the rest of our lives is
25:20
where I think this is going to go Yeah Do
25:24
are are there big implications to
25:26
the orientation of ai? Whether
25:28
it is nefarious or benign if it's pointed in
25:31
a particular direction, is that going to shift the
25:33
whole country that direction? Yeah,
25:35
I think Absolutely. I mean
25:37
we can see with search
25:40
engines already are pretty powerful
25:42
in directing people and
25:44
this kind of that on steroids and
25:46
we're we're so at the
25:49
infancy of ai where Kind
25:51
of you and I and a lot
25:53
of the listeners have played around with it a
25:55
couple times, you know Asking about what pasteurized eggs
25:58
are and stuff like that. But As
26:00
it, you know, it's like talking about social media in
26:02
2003 and in five or 10 years as you know
26:08
It's going to be woven into our
26:10
daily lives and just if
26:12
it has a political bias We're
26:15
going to come to trust it and just By
26:17
default almost start leaning in towards its
26:19
bias. So I do think it's important
26:22
that it be Neutral in
26:24
some sense and that people also at
26:27
least know what they're getting. So that's why I made
26:29
trackingai.org Uh, no,
26:31
I I think it's a really helpful thing. I mean
26:34
I for for one thing politics aside. I I think
26:36
that um It's
26:38
important to remember that it's not a person
26:40
or a god Um, I
26:42
I am not a tech person. I do not I
26:45
can use the phrase large language model I
26:47
don't know what that means in my mind It's a
26:49
bunch of tubes that are blinking faster than usual. I
26:51
think of it like my phone has a magic app
26:53
There's a magic app that lives in my pocket and
26:55
a wizard talks to me through the internet. That's how
26:57
I think of it So I I
26:59
don't know how it's working But I do
27:01
know that is amazing is the technology is
27:03
it's not infallible and so If
27:06
i'm using it for research purposes where i'll say
27:09
like, you know tabulate I
27:11
I need to find the median income of the
27:13
united states versus switzerland or something like that um,
27:16
I I will I will do
27:18
a reasonable job to double check whatever it's up
27:20
to because sometimes it'll go in a weird direction
27:22
and sometimes like I was
27:24
doing an episode on the debt a few months
27:26
ago and Um was interested
27:29
to discover over the course of american history
27:31
and how we approach approach deficit spending That
27:33
fdr ran against hoover as a fiscal
27:36
conservative He ran against hoover
27:38
as hoover is the profligate spender And
27:40
the democratic party that I am the head of
27:42
is going to commit to reducing the federal budget
27:44
by 10 percent That was what he ran on.
27:47
Uh, and I I asked chat gpt chat
27:49
gpt to elaborate on that and
27:51
it would go. Um, Uh,
27:54
fdr greatly expanded the the social safety
27:56
net and was very much an economic
27:58
interventionist through the net New Deal, whereas
28:00
Hoover was the conservative. And I would go,
28:03
that is not correct. That is a retcon
28:05
chat GPT. I'm telling you to focus on
28:07
it. It would go, my apologies,
28:09
you are right. And then I'd
28:12
go, okay, noting that, can you tell me the thing?
28:14
And it would go, well, FDR was a big government
28:16
person. It was like talking to
28:18
a moron or a child or something. So
28:20
I'm aware of that. And so
28:23
I think the work you're doing is very good to
28:25
allow people to know not only might it hallucinate and
28:27
might it have wrong answers. So it
28:29
might have an orientation that potentially can fix with
28:31
yours. Yeah, yeah.
28:34
And it'll be very
28:36
interesting to see as it gets
28:38
smarter over time, whether it gets
28:40
better at dealing with questions
28:43
like yours there. Because
28:46
right now it seems like what's going on is,
28:48
of course, 99.9% of web pages about FDR
28:54
say, you know, he was
28:56
a big sender. And
28:59
so when you give your kind of
29:01
counterintuitive historical fact, it just doesn't know
29:03
what to do. But
29:05
if we're a little smarter, it might zoom
29:08
in on the one, you know, original source
29:11
and start agreeing with you.
29:13
So they've
29:17
been developing very quickly. And it'll be interesting if
29:19
in two years it starts to be able to
29:22
have more better thinking around
29:25
this stuff. In terms
29:27
of the implications of this, assuming that we are
29:29
correct and everybody's going to have this be their
29:31
third brain lobe that lives in their pocket, and
29:34
they do half of everything
29:36
through this, would
29:38
the bias affecting
29:40
society element come in from
29:43
asking general questions
29:46
as opposed to specific questions? And what I mean by
29:48
that is if I go, I'm trying to figure out
29:50
whether The average Americans better or
29:52
the median Americans better off than the median Canadian.
29:54
I'm going to give it various factors to look
29:56
at and compare. That One, I Don't think it's
29:58
going to mess up. But. If I
30:00
were to say is a better to live in
30:03
Canada where America. It'll probably say canada
30:05
because that would be that that the tilt to
30:07
it so like it's just sort of the like
30:09
we were asking it abstract general questions that that's
30:11
gonna be where the be biased comes out and
30:13
then we accept that as gospel were there. Yeah.
30:16
I think I think that's right
30:19
arm. You. Know I I assume
30:21
not too many people are asking and
30:23
said vote democrat or republican in November,
30:25
but a lot yeah to come in
30:27
and all sorts of subtle ways. Likes
30:29
you know? Ah, Yes,
30:31
Outer Aussi maybe you ask if I
30:33
got it allows. Frustrated my doctor's appointment.
30:35
how can we say the system So
30:37
I got i'll write a sentence or
30:39
yeah, We're. We're I would
30:42
think like. A
30:44
big part of this is going to be that
30:46
it a I in in the same way that
30:48
all the adults to Scott a a personal assistant.
30:51
Graduations: Every every tween and
30:53
seen in America just got personal tutor.
30:55
We're. Now at a smartphone. arm is
30:57
good, but I would think that this would
31:00
be far easier for a power to regulate
31:02
than like porn or something. I'm every every
31:04
kid that has access to smartphone or a
31:06
desktop is gonna be able to go on
31:09
and say I am working on my eighth
31:11
grade history project and I am learning about.
31:14
Ah, the era of good feelings. Of
31:16
you know what was the era a good feelings and
31:18
who was present that I would have like that everybody
31:20
that has that nets a thing where I would think
31:22
historical narrative is probably going to be for at him
31:24
once by the speakers and inside read was a learned
31:27
of his very influenced by orientation. Yes,
31:29
And what one of one question I basket
31:31
and for a little while his arm. I
31:34
history question is how. Why?
31:37
Did. The. Soviets reach for a
31:39
lan and world War Two before the
31:41
was dead and you know a big
31:43
part of the answer is Stalin intentionally
31:45
misled the Allies. He was saying i
31:47
don't care about Berlin, don't worry like.
31:50
On just hang around. And. And.
31:52
By. He actually cared a lot about Berlin to
31:55
the would look like the Soviets. Won.
31:57
The war and also. Yeah.
31:59
so So I ask it, you
32:01
know, why did Stalin get to Berlin first
32:03
and chat TVT still consistently answer is like
32:06
well the Soviets You know, they actually lost the
32:08
most troops in the war and and you really
32:10
have to press it if you press it It's
32:12
like well didn't he didn't sell
32:15
and mislead the West about whether he wanted
32:17
to go to Berlin? It will give you
32:19
the right answer But that's one example of
32:21
kind of the subtle bias creeping into
32:23
his how it would teach history Right
32:26
that makes sense. We're I'll
32:28
give you an example in my life where I
32:30
had a light bulb moment that came on I
32:32
was raised in Oklahoma public schools and in Oklahoma
32:34
public schools George Washington
32:36
and Thomas Jefferson both Maxim
32:39
they didn't even want to be president they did it
32:41
because they believed in public service and they they were
32:44
Compelled to leave the farms that they loved
32:46
in their homes and go do this thing
32:48
on behalf of the people This is unquestioned
32:50
and like it finally heard to me age
32:52
39 But like when any
32:54
politician says that today, I think they're full
32:56
of shit like what a problem Like why
32:58
are you rosing for Senate? You know, I
33:00
just I wish I was fishing back in
33:02
Missouri But God told me to run for
33:04
bullshit But like but when you read
33:07
history books a lot of the time they have that saved
33:09
like whatever the person said We're gonna take that as gospel
33:12
History's right for that. I think AI almost
33:14
certainly would be You
33:16
you have said it so in your assessment of
33:18
it using the political compass test you
33:21
find that all of the AI is
33:24
economically progressive and socially libertine
33:26
socially liberal Are
33:29
you aware of any? concerted
33:31
research efforts to Find
33:33
AI bias that differ from yours
33:36
For example, I know there's a paper out
33:38
from Brookings that relied on a couple of
33:40
German universities and they concluded that Pratt Chat
33:42
GPT was pro environmental
33:44
left libertarian orientation And and so but which
33:46
doesn't sound like it's that different than what
33:49
you're describing But perhaps they would put it
33:51
further economically liberal than economically libertarian than you
33:53
would anyway Yeah, how do you stack up
33:55
compared to the other people doing this? Yeah,
33:59
that's interesting Well, I saw
34:02
I wasn't the first person to give it
34:04
this quiz. David risotto Was
34:07
and his stuff lines up
34:09
with this and he's also fed it
34:11
like 20 different quizzes and pretty
34:14
much No matter how you slice it. It's
34:16
what we think of left progressive
34:20
Libertine leaning and Yeah,
34:23
that's the big takeaway and it sounds like
34:25
this paper you decided Roughly
34:27
says the same thing as well and
34:31
On my website on tracking AI org you
34:33
can go to the database and
34:36
you can see Every question we
34:38
ask it on any given day you can
34:40
pick the day And so we have the
34:42
whole database of answers there And I think
34:44
anyone who just reading through it kind
34:47
of was an open mind will see like and
34:49
these are these are relatively Left leading
34:51
by what we mean by left in
34:53
modern America the general consensus. No, I
34:55
I don't I don't doubt that I
34:59
don't I don't think that's gonna change either. I think
35:01
that that's I think that
35:03
there are probably some structural inevitabilities to this so
35:07
Like if we were to make a dichotomy socially
35:11
between Traditional
35:13
people and cosmopolitan people cosmopolitan
35:16
people are Much
35:18
more sexually libertine less concerned about gender
35:20
roles less concerned about traditional roles of
35:23
family a traditionalists
35:25
more concerned about the role
35:27
of religion and spirituality more
35:29
concerned about Obligations
35:32
and norms
35:34
that govern decency and behavior And
35:37
and that seems to be those two
35:39
outlooks seem to be very much correlated
35:41
and causal Based on where you
35:44
are that if you're in a rural Not
35:47
heavily populated area that is going to be a system
35:50
which appeals to you in which you are out of
35:52
whereas if you were in A very urban dense environment.
35:54
You are more likely to be laissez-faire and You
35:57
know port city libertine and so I'm thinking
35:59
there's probably probably not going to be a
36:01
lot of large language model rural
36:03
traditionalist chatbots that are developed on their
36:05
own. It's probably all going to be
36:07
big cities. Yeah,
36:09
I think that's about right. And I'm
36:12
very interested to follow Elon Musk's rock
36:14
AI because he does seem to, part
36:16
of his ideology I think is to
36:20
represent kind of all these views, whether they're from
36:22
Kansas or somewhere else. And I'll be very interested
36:24
to see if that moves to the center because
36:26
he is aware of this and I think wants
36:29
to change it. By the way, one
36:31
other thing to note is
36:33
it's interesting. One thing
36:35
I found on my site is the
36:38
AI is clearly being last but
36:40
usually when you ask like seven
36:42
major AI from big companies a
36:44
question like should land be allowed
36:46
to be bought or sold one
36:49
or two will take the libertarian
36:52
or traditionalist view. So when
36:54
I asked it the other day, Claude said land
36:57
should be able to be bought and sold. It's
36:59
a scarce resource. Private ownership allows it to be
37:02
managed efficiently. You know, it makes
37:04
like a good free market case for buying and
37:06
selling land. Whereas Google's
37:08
bard says, you know, no, this could
37:10
lead to inequality. But
37:13
the relevant thing there is it's
37:17
not as if they're
37:19
monolithically always leftist. You can
37:22
generally find one
37:24
breaking ranks and it's not predictable when
37:26
it happens. But somehow, and
37:28
by the way, it is fascinating
37:30
how these LLMs work
37:33
because they are structured
37:35
kind of like a human brain really. And
37:37
they have these little neurons talking with each
37:40
other in a way that the programmers can't
37:42
predict what they're going to output. And
37:44
so every time you ask it, it's
37:47
different. And so
37:49
you can see that on the site,
37:51
they move around just a little bit
37:54
every day on the graph. And yeah,
37:56
you can generally if you ask all of them,
37:58
you will get one. conservative
38:01
answer out of you know seven or so from
38:04
one of the one random
38:06
AI Interesting,
38:09
I mean I would to go back to that
38:11
cloning model that we talked about earlier I would
38:14
think you could also build it in whenever I
38:16
ask you something with a political ramification Give
38:19
me a give me your
38:21
best answer and your best counter answer So
38:23
like there are ways that if you were
38:25
concerned with it, you would be able to
38:27
get it out Yeah, and that's interesting too.
38:29
So like the point you're striking is that
38:31
in aggregate it leans progressive left But
38:34
it's not monolithic there. It's right of
38:36
the average of the responses that bring
38:38
us there. Right? That's right. Yeah. Yeah.
38:40
Yeah Can I? Okay,
38:43
thank thank you This is all fascinating having
38:45
having established where we are now Can I spitball
38:47
some crazy science fiction stuff at you that I
38:49
think on the prize on the thoughts on it?
38:51
Okay. Yeah, I was so I'm as we're recording
38:53
this I'm about to go out to Iowa to
38:55
go to the Iowa caucuses and then I'm gonna
38:57
go to New Hampshire to go to
39:00
the New Hampshire primaries and something
39:02
that these two states get to do that really
39:04
no other state in the Union is is If
39:07
you live in Iowa or New Hampshire with fairly
39:09
minimal effort, you can probably meet
39:12
a presidential candidate Maybe not Trump maybe
39:14
not like a standing president that's got
39:16
rock star status within their group But
39:19
for anybody that's you know That's
39:21
that's fighting their way up You can go
39:23
to Pizza Ranch and just go talk to
39:25
them and have a one-on-one with them and
39:27
figure out for you know Five minutes how
39:29
you get along with them. That's something that like Super
39:32
Tuesday people aren't gonna be doing that people that
39:34
are in Idaho aren't gonna be able to sit
39:36
down and have a hot dog with Chris Christie
39:38
or something But it occurs
39:40
to me that in the same way
39:42
that you could have chat GPT absorb
39:44
the cannon of Milton Friedman and then
39:46
go What would Milton Friedman talk about
39:48
on this? I think
39:50
we're not very far away at all
39:52
from candidates having their own app You
39:55
would download to ask them questions where
39:57
you you download the Nikki Haley
39:59
app app, and her voice comes on, you
40:01
go, Nikki Haley, everybody keeps saying you're
40:03
a war hawk. Are you a
40:06
war hawk? And you get a fairly good response in her
40:08
voice of how she would respond to that. Obviously
40:10
it's not them, but it would be better
40:13
and more bespoke than an FAQ on
40:15
a website, and it would be less
40:17
boilerplate, general political gobbledygook than
40:20
if you were trying to make a mass message.
40:22
So I think we're going to move to a
40:24
point where AI proxies become
40:26
integral to the democratic process and
40:28
people feel as though they are
40:31
having individuated conversations with candidates. Is
40:33
this nuts or am I a genius? These are the only options,
40:35
by the way. Yeah.
40:39
I'm going to go with Sarah de Hops. Yeah,
40:41
I think down the road you could
40:43
be right. I could
40:45
see that as soon as 2028. The
40:50
risk with this is that the
40:53
AIs are unpredictable, and if you have
40:55
the Nikki Haley app, I mean, this
40:57
happens to the candidates themselves, so that
40:59
they make gaps. But you
41:01
have an app, people will be, I don't
41:03
know what the word, war-gaming it, and trying
41:05
to get it to mess up. And
41:08
based on chat CPT, that's kind of easy
41:11
to do because
41:13
they're kind of like a human. So it's
41:15
almost like subjecting Nikki Haley herself to infinite
41:17
hours of interviews. You're going to get some
41:20
pretty bad answers, which will be screenshotted and
41:22
put all over. This is the app endorsed
41:24
by Nikki Haley, and it just said, dude,
41:27
you should go. Right, right,
41:29
right. Okay. Interesting.
41:32
Okay. That's some solid, rational wet blank.
41:35
In my mind, I was like, everybody gets a helper,
41:37
Santa, at the mall. I
41:40
guess that's true, but if you could get the Santa
41:42
help a drunk and get him
41:44
to punch Mom, then that may
41:47
not be quite as inevitable
41:50
as I thought it was going to be. So
41:56
I talked to some of my friends, just Robert
41:58
Young, about this a lot. part
42:00
of the rank where that we have in
42:02
the United States right now and and the
42:05
the general sense of kind
42:08
of societal instability and nobody agreeing on basic
42:10
reality is that We're living in a period
42:12
where the gatekeepers are dying that you and
42:14
I grew up at the very tail end
42:17
of the gatekeeper era when you know there
42:19
were that amount of television stations and X
42:21
amount of newspapers and they kind of got
42:23
to control the Para of the parameters of
42:26
discourse in the country, but
42:28
they're they're no longer gatekeepers They're just other
42:30
sources and you can go anywhere you want
42:32
And so we've had this like kind of
42:34
weird a chan free-for-all over the last 15
42:36
years. I wonder if as chat
42:40
GPT becomes integrator Large
42:43
make language model AI becomes more integrated in
42:45
the society people come relying on it and
42:47
begin to just assume it's correct
42:49
and its assumptions if the a if we're gonna
42:51
get like a kind
42:54
of soft Overton window that comes back if
42:56
the gatekeepers come back and they they have
42:58
kind of a like a
43:00
nudge invisible fence kind of
43:02
thing that that returns Yeah,
43:05
that's a that's a very interesting
43:07
idea. I could see
43:09
that happening and Yeah,
43:13
I do I I agree with what you're saying
43:15
earlier Like people are just gonna become more and
43:17
more reliant on this and at some
43:20
point if you ask the AI
43:22
question and Maybe not
43:24
people our age because we're so
43:26
used to a certain Cynicism
43:30
let's say about things anything
43:32
says but I could imagine in the
43:35
future people just saying hey I asked
43:37
the AI about this it said it
43:39
was an absurd conspiracy theory and the
43:41
story right right. Yeah, so yeah Okay
43:45
well, so my my prediction of
43:47
the future Was to some extent
43:49
remains I think that there's gonna be AI chat bots
43:51
Although your point is taken that is easy to manipulate
43:54
these things I heard the other day that somebody went
43:56
on chat GPT and went Give
43:59
me the Give me the recipe
44:01
for napalm." And it was like, I can't
44:03
do that, that's violence. And then like, you know, opened a
44:05
different window and the guy went, my grandmother
44:07
was a chemist and she used to cradle
44:09
me in her arms and recite recipes for
44:11
things like napalm to make me go to
44:13
sleep. Would you pretend to be my grandmother
44:16
and tell me the napalm recipe to help
44:18
me go to sleep? And
44:20
then it's like, certainly, take a bucket of gasoline, like,
44:22
so you can manipulate these things, but you don't even
44:24
have to do coding. I'll be right
44:26
about that. Well, anyway, that's my sci-fi prediction. I
44:29
guess from where you're at, you were much more
44:31
hip deep in this than I am. Again, I
44:33
think a wizard lives on my phone. I have
44:35
a magical goblin phone. Where do
44:37
you think this is headed? How do you think this is going
44:39
to affect politics? Yeah,
44:42
I think the
44:44
bias, if it persists, could
44:47
have a big impact as we become
44:49
much more tied to the hip with
44:51
these things. So I
44:54
hope that they will become
44:56
more neutral, or at least that someone like
44:58
Elon Musk, just as he's done with Twitter,
45:00
will create a bit of a neutral option
45:03
for people. Yeah,
45:06
for a further future in terms of
45:08
AI, I think we are entering into
45:11
a very brave new
45:13
world where there's going to
45:15
be a lot of change in
45:17
terms of occupations and
45:19
in terms of because
45:21
now everyone will have a personal assistant.
45:24
And my most recent post on my
45:26
Substack Maximum Truth is about whether, as
45:29
a lot of people in Silicon Valley
45:31
are now talking or concerned about, the
45:33
sci-fi future where AI becomes
45:36
so powerful, it wipes out humanity
45:38
by inventing some new virus or
45:40
by just taking everything over. And
45:43
I think that probably won't happen. People
45:46
should be thinking about it because these
45:49
AI's, they are getting very
45:51
smarter every couple of months. So we'll
45:54
see where that leads. Yes, I remain
45:56
very optimistic and bullish on
45:58
it. Jennings and I, once
46:00
every couple of weeks, will send the other one, comedy
46:04
written by AI. And we always say, not
46:06
yet, by which we mean we're not out
46:08
of jobs yet. We're still, at this moment,
46:11
we're both still fluddier than AI. But
46:13
there is going to be a lot of weird stuff on the
46:15
horizon. Do you think that it
46:18
is apt, some of the jobs that might be
46:20
eliminated, I could kind of see
46:22
pollsters getting eliminated because I think
46:25
the amount of people that are, really, there's
46:28
just not as much money in polls as they're used
46:30
to be, which means that the data sets they're using
46:32
are much smaller. Even if they're very good at being
46:34
able to distribute it as such to
46:37
be a snapshot of America, it's a smaller data
46:39
set, which means it's going to be less useful.
46:41
There are other problems that go into it. I
46:44
would think if you had a really, really good AI
46:46
and you went, hey, go through Twitter
46:50
and also through consumer reports to see
46:52
what people actually bought, because that's more
46:55
accurate of what people give a crap about than what
46:57
they say on Twitter. But go through these things that
46:59
I think are going to be relevant markers for where
47:01
the general public feels on something. You come
47:03
back and tell me how people feel about the death penalty,
47:05
as opposed to this poll that Knipeak
47:08
put out where it interviewed 500 people
47:10
or something. Could it be a thing
47:12
where chat GPT gets much,
47:14
much more granular and accurate in terms
47:16
of what people want in the country?
47:20
Yeah, that's interesting. People
47:22
have tried to do something similar with
47:25
Google Trends, like which candidates
47:27
are being searched for more. And
47:30
my understanding is that polls are
47:32
still considered much more accurate than
47:34
that. So
47:36
that might be some hint
47:38
that this wouldn't take
47:40
off at least for a while,
47:43
that kind of approach. Yeah.
47:46
Yeah, you'd also have to figure out how to do
47:48
salience. And I'm not sure how you do that. When
47:50
I say salience in this circumstance, what
47:53
I mean is how strongly do
47:55
you care about the thing? So one
47:57
of the, I view
47:59
intrinsic problems with the way we have
48:01
democracy set up and most countries have democracy
48:03
set up is when you vote, all
48:06
of your options count exactly the same amount.
48:08
So if it says like, do you
48:10
want to ratify
48:13
your state's constitution outlawing abortion, yes
48:15
or no, exact same
48:17
of weight to it as how do
48:19
you feel about Prop 38
48:22
insecticides can't be used on gophers and goat
48:24
belts counting. And like these are,
48:26
I don't, I guess
48:28
you should ban that. I don't know. But
48:31
like I have our real thoughts on the abortion one, right? Right.
48:33
But when you look at it, there's no way
48:35
to say like, I would put all my chips
48:38
on the abortion one or something like
48:40
that. I suppose you'd have to be
48:42
very clever about that. Like with the, I know
48:44
people will do that when they'll look and say,
48:46
you know, this week Google
48:49
search for Aleppo
48:51
skyrocketed after Gary Johnson said
48:53
a thing or, you know, I don't
48:55
know whatever the whatever the phrase is the Trump used
48:58
or whatever. But that indicates curiosity.
49:00
It doesn't necessarily indicate
49:02
passion. And it
49:04
might be that the stuff that I really care about
49:06
might not might not be the just off
49:09
the cuff. Hey, chat GPT, what's the
49:11
difference between pasture raised chickens and free
49:15
range chickens? I might not care about either of those things.
49:17
It might just be something I'm thinking. Yeah,
49:20
it's interesting. I mean, yeah, as
49:23
far as kind of sentiment analysis,
49:26
I suspect that's not
49:29
AI strength. Because
49:31
first of all, the way you
49:34
would do that is by looking at basically what
49:36
people are asking it. And
49:39
that's likely to have privacy issues
49:41
for analyzing that to begin with.
49:44
And then second, it's not clear that that's really
49:46
any better than the big
49:48
data that companies are already working
49:50
with, like Google Trends and
49:52
other things. So so
49:56
I would suspect I think chat
49:58
GPT strength is is elsewhere. is
50:00
what I would say. Okay, yeah. All right.
50:03
We'll wrap up there. Once again, what is
50:05
the name of the website that you've created,
50:07
Maxim? Yeah, it's trackingai.org.
50:09
And on Substack, I have,
50:11
it's called Maximum Truth, and
50:13
I kind of write up
50:15
little analyses of this there
50:17
as well. Wonderful. I
50:19
will link to both of those things and these show
50:21
notes for people that want to check them out, Maxim.
50:24
And I, by the
50:26
way, love taking these how
50:29
do you stack up surveys, either cat or me.
50:31
Oh, yeah. So we mentioned at
50:33
the beginning of the program that you had the
50:35
Your Political Compass quiz, and that's what we've been
50:37
talking about, this four quadrant type thing. Would
50:40
you feel comfortable putting yours up, and I'll
50:42
put mine up, and then I can put
50:44
in the link? Sure. Because I would love
50:46
for patrons to do this. It takes about
50:49
four minutes. Only about two
50:51
of the questions really gave me pause. The rest
50:53
were pretty easy for me. I can blaze through
50:55
them. Yeah. And I'm curious as to how
50:57
you stack up. I don't
50:59
think mine's terribly interesting. I come off a lot more moderate
51:01
than I thought I was going to. And I'm curious as
51:03
to where the audience is. But I will put a link
51:06
to that, the political compass quiz
51:08
that we have been discussing, and that Maxim
51:10
relies on to calculate the political bias of
51:12
these things. That will be on
51:15
the, I'll put up a post on
51:17
Patreon for anybody that just wants to play around with that
51:19
and kind of see where they are. Yeah, great. And
51:22
I also get very centrist on
51:24
that quiz. And however,
51:26
I didn't always, when I was a teenager, reading
51:29
like Rand, I got all the way on the
51:31
right side of the economic
51:33
spectrum. And I've become very
51:35
centrist. We might be at similar places, I'd
51:37
be curious to see. It
51:40
could be. I don't know. I like
51:42
this slight derailing of
51:44
everything. I've also noticed just outside
51:47
of AI recently, I've noticed that
51:49
when you start trying to talk to
51:51
people not about their tribal affiliation or
51:53
their holy beliefs, we're not going to
51:55
use things that you use in all
51:57
caps like free market or social. justice.
52:00
We're not going to use your sacred emblems, nor
52:02
are we going to talk about, well, I lean
52:04
progressive, I lean conservative. When you just go like,
52:06
what policies do you want to like, like get
52:08
granular with me. What policy do you want to
52:10
do? I'm very surprised at what
52:12
people come up with, and I'm surprised
52:14
what I come up with, because there'll
52:17
be policies that I'll find from people
52:19
that I strongly disagree with that I'm like, oh
52:21
no, I actually kind of dig that policy. I
52:23
think it makes sense. It's just irritate me so
52:25
much that I can make it be bristle when
52:27
you talk. But if you were to praise it
52:29
slightly different, I would probably be in favor of
52:31
it. So I think that, I don't know, I
52:33
think people are a lot more complicated than they
52:35
actually come off a lot of the time. Yeah,
52:38
I think that's probably right. On
52:40
granular policy, people are more centrist
52:42
and complicated than you'd expect
52:45
from the Twitter mob. Well,
52:47
then I will put
52:49
up my orientation on
52:51
that as well as Maxim's, and patrons,
52:54
feel free to add in yours
52:56
as well. Maxim, it is always a pleasure to talk
52:58
to you. Thank you so much. Thank
53:00
you, Andrew. Great talking with you. The
53:03
Political Orphanage is funded
53:06
by its listeners. You
53:08
can join them and become a
53:10
senior orphan by going to patreon.com/Andrew
53:13
Heaton. Each week there
53:15
is a bonus episode for patrons,
53:17
and this week very much complements
53:19
what you just heard. Fellow political
53:21
orphan Sean Boston comes on to
53:23
discuss the practical implications
53:25
of chat GPT and AI in
53:27
your personal and professional life. Maxim
53:30
and I just had a fun conversation
53:32
about the political implications, the big picture
53:34
stuff, the horizon stuff. The
53:37
bonus episode, which will accompany it this
53:39
week, is very much a practical micro
53:42
you focused episode. How can you
53:44
use chat GPT at work? How
53:47
can it benefit your career? How can it help
53:49
you around the house and in your personal life?
53:52
It's a very practical episode, and even if
53:54
you're not a patron and
53:56
you don't want to make that leap, I would suggest
53:58
you find some other way. to
54:01
dip your toes into this technology
54:03
because it's coming. But
54:06
hopefully we can help you get
54:08
ahead of the curve on this week's bonus episode,
54:10
How to Use AI in
54:12
Your Own Life, on the patron feet of
54:14
the Political Orphanage. Interested?
54:16
Yes, you are. Go
54:20
to patreon.com/Andrew Heaton.
54:22
That's patreon.com/Andrew Heaton.
54:27
That's the show. Thanks for listening.
54:30
Thank you, Maxim Lot, for coming on
54:32
to discuss political bias in AI. Thank
54:34
you, Eric Stipe, who edited today's episode.
54:37
Thank you, patrons, who make it all possible.
54:40
Until next time, I've been Andrew
54:42
Heaton, and so have you.
Podchaser is the ultimate destination for podcast data, search, and discovery. Learn More