UL NO. 427: AI's Predictable Future

Released Wednesday, 10th April 2024

Episode Transcript

0:22

Welcome to Unsupervised Learning, a security,

0:24

AI and meaning focused podcast that

0:26

looks at how best to thrive as humans in

0:28

a post AI world. It combines

0:31

original ideas, analysis, and mental

0:33

models to bring not just the news,

0:35

but why it matters and how to respond.

0:42

All right. So the first thing for this week

0:44

is I finally turned my 9000

0:47

word AI predictions essay into

0:49

an actual video. So if you click on this

0:51

thing... all right, I'm going to try to do

0:53

something crazy right now. I'm going to try.

0:56

This is the whole video I

0:59

go through. I thought it was going to be like 30 minutes.

1:01

It ended up being over an hour,

1:07

plus like two hours of editing, which I never should have done.

1:09

But anyway, it is a full

1:11

walkthrough of the illustrated

1:14

essay, and I even give

1:16

a lot more sort of expansion

1:19

narration on top of the text

1:21

that's in the essay, so it's quite good, if

1:23

I do say so myself. I've had a lot of people

1:25

say they enjoyed it, so I would

1:27

say, go check that out. It gets

1:29

me excited to even think about it. Talk about it. It's

1:32

like the fun stuff. Plus I remember

1:34

like doing the art. It's just a super

1:36

good time. Okay. Security. Yeah. But

1:38

so go check out the video. All right. Security.

1:40

So Israel's top spy,

1:43

chief, Yossi Sariel,

1:45

accidentally revealed his identity

1:47

through an Amazon book sale. So I guess we're

1:49

just doxing him now. That's rude.

1:51

Well, I guess he's not going to be that anymore, because

1:54

they're going to be like, you obviously can't do this. So they

1:56

probably got rid of him and replaced him with someone

1:58

who is still anonymous. So

2:00

now he's no longer the spy chief. He's just

2:03

a guy doing an Amazon book sale. All right,

2:05

the CVE and NVD databases are struggling.

2:08

Um, it's causing gaps and inaccuracies.

2:11

So somebody needs to fix that. Uh,

2:13

criminals in Montreal are using Apple's

2:15

AirTags to track and steal cars.

2:17

And police are using Apple's

2:19

AirTags to track criminals. So

2:22

a lot of AirTag use on

2:24

both sides of the fence there. And this one's

2:26

super cool. This sponsor I want to talk about this one

2:28

because I'm actually advising for them the first

2:30

AI SOC analyst that autonomously

2:33

investigates alerts. This thing will

2:35

pull alerts. I've seen it live.

2:37

And this is a sponsor, by

2:39

the way, and I'm an advisor for them.

2:42

So I'm just excited about them. It's

2:44

called Dropzone AI. So you

2:46

take an alert from anywhere in your stack,

2:48

some tool that you have that produces an alert

2:50

like an endpoint or cloud tool, it doesn't

2:52

matter. And it

2:54

takes alerts from your environment and performs an

2:57

autonomous investigation. It basically goes and starts

2:59

researching exactly like a human. It's

3:01

It starts digging in, boom, gets some results

3:04

from that lookup and that investigation,

3:06

takes that to the next step, and

3:08

just keeps doing multiple steps, just

3:10

like a regular SOC analyst, just like a human.

3:13

And once it gets done, it puts that all

3:15

together and generates a decision

3:17

ready report. So you could

3:19

basically use that. And

3:22

it's basically exactly as if you had gotten

3:24

that from a SOC analyst. And you can make a decision

3:26

based on that. And it's no playbooks,

3:29

no code, no prompts required.

3:31

You just feed it alerts and

3:33

it goes and does investigations and brings

3:35

back a report. It's absolutely insane.

3:38

It's exactly what I've been talking about as

3:40

being like how AI is

3:42

going to most affect security.

3:45

And I've been saying for the longest time

3:47

it's all going to be agent based. And that's exactly

3:49

what this is. And like I said, it's so good.

3:51

I just became an advisor at Dropzone

3:53

AI, and you can actually go and see a demo

3:56

of it working on a real alert.
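What's being described is basically an agentic loop: take an alert, do a lookup, feed the result into the next lookup, repeat, then write it all up for a human to decide on. Here's a minimal sketch of that loop — `lookup` and `investigate` are hypothetical stand-ins, not Dropzone's actual API:

```python
# Hypothetical sketch of an agent-style alert investigation loop:
# take an alert, iteratively look things up, then emit a report.

def lookup(question: str) -> str:
    """Stand-in for one investigation step (log search, IP reputation, etc.)."""
    return f"result for: {question}"

def investigate(alert: dict, max_steps: int = 3) -> dict:
    findings = []
    question = f"What triggered alert {alert['id']}?"
    for _ in range(max_steps):
        result = lookup(question)             # one lookup, like a human analyst
        findings.append(result)
        question = f"Follow up on: {result}"  # the next step builds on the last
    return {"alert": alert["id"], "findings": findings,
            "verdict": "needs human decision"}

report = investigate({"id": "A-1042"})
print(report["verdict"])
```

The point of the shape is that no playbook is hard-coded: each step's input comes from the previous step's output, which is what makes it feel like an analyst rather than a script.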

3:58

Panera's week-long incident

4:00

was, in fact, a ransomware attack.

4:03

CISA's new High-Risk Communities webpage

4:05

offers a bunch of guides and volunteer

4:07

support and discounted tools, thanks

4:10

to Defender Fee for sponsoring as well.

4:12

Israel's military used an AI named

4:14

Lavender to pinpoint 37,000

4:17

potential Hamas targets, and that's

4:19

definitely getting people pretty upset,

4:21

because it's one thing to identify someone

4:23

and do further investigation. It's another thing

4:26

if you're identifying and then like targeting.

4:28

And again, I don't know if that's actually happening.

4:30

I don't know the degree they're trusting

4:32

this targeting that's coming up from Lavender.

4:35

I don't know if they're going directly to

4:37

attacks or killing or anything

4:39

like that. But the point is it's

4:41

worth having a conversation about what they're

4:43

doing with it. Given the fact that

4:46

AI can be super fast and

4:48

super accurate, but it can also have a lot of flaws,

4:50

and the more seriously you're taking the results,

4:53

the more important those flaws are, right? Technology.

4:55

There's a rumor that Sam Altman and Jony Ive

4:58

are building some sort of handheld device

5:00

through a new secret company. Who knows if

5:02

that's real or not. It sounds

5:05

real to me because we know that Jony Ive is

5:07

actually working with them. We know he's a design

5:09

person. We know we need these types

5:11

of devices. So it makes sense to me.

5:13

OpenAI released improved ways

5:16

of doing fine tuning. Uh, new

5:18

paper shows that adding more agents to

5:20

large language models boosts their performance.

5:22

How could it not? How could it not?

5:24

I mean, like I've been telling

5:26

everyone, agents are the way. Agents

5:29

are the way this is all going to move forward. And

5:31

then when each individual agent

5:33

gets smarter because the AI models improve,

5:35

that just makes the whole thing smarter. But the way

5:38

to get ultimately smart and actually pass

5:40

humans is to have that combination

5:42

of agents working together. That system

5:45

is what's smart. It's

5:47

no different than our brain. Our brain has a whole bunch

5:49

of areas that each individually

5:52

only do kind of one thing, and

5:54

they're not very smart by themselves.

5:56

The whole thing that's made it smart over

5:58

all these millions of years is the fact

6:00

that they work together and they're sharing

6:02

information, and it's like it's a system.

6:05

The system is what makes us smart. And

6:07

it's going to be no different with agents.

6:09

Now, eventually you might have an AI

6:12

that's just one

6:14

model or one agent or whatever.

6:16

And it's so smart. It's just smart by itself

6:18

because it's different than a human. But the

6:20

way we're first going to get to AGI, mark

6:23

my words, is going to be through a system

6:25

of AI components working together.
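The "more agents boosts performance" result is easy to see with a toy majority-vote simulation. `agent_answer` here is just a stand-in for sampling one model once — this is an illustration of the voting idea, not the paper's actual method:

```python
# Toy illustration: several individually unreliable agents vote on a
# multiple-choice answer, and the plurality answer wins.
import random
from collections import Counter

def agent_answer(correct: str = "B", accuracy: float = 0.6) -> str:
    """One simulated agent: right 60% of the time, otherwise a wrong choice."""
    if random.random() < accuracy:
        return correct
    return random.choice(["A", "C", "D"])

def ensemble_answer(n_agents: int) -> str:
    """Many agents vote; the plurality answer is the system's answer."""
    votes = Counter(agent_answer() for _ in range(n_agents))
    return votes.most_common(1)[0][0]

random.seed(0)
wins = sum(ensemble_answer(25) == "B" for _ in range(200))
print(wins / 200)  # one agent is right ~60% of the time; the ensemble far more often
```

No single agent got smarter here; the system of agents is what's smart, which is the whole point being made about the brain analogy.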

6:27

And I've got a blog on that somewhere. New paper explores

6:29

how AI might be leading us towards

6:32

knowledge collapse by oversimplifying

6:34

complex information. I like that,

6:36

but I don't think the complex information

6:38

goes away. Just because somebody is

6:40

simplifying or summarizing

6:43

complex information, it doesn't mean

6:45

we have to stop feeding the AI

6:48

the original raw form. One

6:50

doesn't have to supplant the other, and

6:52

we could just ask for the long form. We can

6:54

ask for the short form. I don't see that as a

6:58

trade-off. The US is trying to get South Korea to

6:58

stop exporting chipmaking tools

7:00

to China. The US is testing energy

7:03

storage in heated sand. This

7:05

tech is like trippy to me. Energy

7:07

storage in heated sand: 135 MW

7:12

of power output for five days

7:14

straight. And Oura's rolling

7:16

out Symptom Radar. They're not calling it illness

7:18

detection for obvious reasons. I

7:20

haven't looked at my app to see if I have that yet.

7:23

I do have an Oura, though. Okay, Amazon

7:25

is ditching its cashierless Just Walk Out

7:27

technology, which is kind of sad. I guess

7:29

it's too early, but they're switching to something

7:32

a little more normal, like a hybrid with humans.

7:35

New studies are showing the wealthy are starting to have

7:37

more kids than the poor. Again,

7:39

we need lots of studies to show that's actually

7:41

true, but interesting trend. NASA

7:43

live streamed the eclipse. I watched it,

7:45

it was super cool. It was actually

7:47

moving through multiple cities and you would see

7:50

each city as

7:52

it was moving. It would go to full

7:54

totality and everyone would freak out.

7:56

It was quite cool. It was a very cool stream.

7:58

TSMC did not take much damage

8:01

at all. They were actually only down for like

8:03

a day and then went back to

8:05

full production. And it's because their

8:07

buildings have some really cool stabilization

8:09

tech that allows them to not get too

8:11

messed up from earthquakes. However,

8:14

the earthquake was on the other side of the island,

8:16

so who knows: if it were bigger,

8:18

like an 8, and it was nearby,

8:20

I'm not sure the buildings would be

8:22

able to handle that. But who knows. Israeli

8:24

military dismissed two senior officers

8:26

and reprimanded three others for an

8:28

airstrike that mistakenly killed World

8:31

Central Kitchen volunteers in Gaza.

8:33

The UK's exporting workers

8:35

to fill higher paying US jobs

8:38

and US venture capital investments

8:40

dropped to $36.6 billion

8:43

in Q1 2024. Had a really cool

8:45

conversation with Mike Privette from Return

8:47

on Security about these overall economic

8:50

trends, especially around cybersecurity,

8:52

and that will go up soon. McKinsey

8:54

is offering UK

8:56

employees nine months of pay to voluntarily

8:59

leave. Gen Z is going

9:01

for trades like welding and plumbing over

9:03

college and student debt, and home insurers

9:05

are now using aerial images from

9:08

satellites to decide who

9:10

gets dropped from coverage. I assume that's

9:12

because maybe somebody's building something

9:14

in their backyard, or they have

9:16

too many cars. Like, I'm not sure what would be

9:19

on the policy that they'd be able to see in a satellite

9:21

photo, maybe add ons to the house

9:23

or something, I don't know. And all right,

9:25

got a couple cool ideas here. Another

9:27

view of imposter syndrome. So somebody

9:29

said, "outwork your imposter

9:31

syndrome." That was the post: "outwork your

9:34

imposter syndrome." And I'm like, working

9:36

harder isn't the solution. In my opinion,

9:38

the solution is to work on bigger

9:40

problems that are super important

9:42

to solve. That way your focus

9:44

isn't internal, it's actually external.

9:47

And I actually go into this, so I'm going

9:49

to go ahead and open this up. So I

9:51

say framed this way, imposter syndrome is

9:53

ultimately a problem of thinking too much internally

9:55

versus externally. Because you're thinking how

9:58

do I compare to them? What do they think

10:00

of me? Right. And so the solution I

10:02

think is to focus

10:05

on something

10:08

outside of you, right? Stop

10:10

putting the focus on yourself. How

10:12

do I compare? What do they think of me?

10:15

That's thinking about yourself. Instead,

10:17

focus on the problem. And now you

10:19

won't really have imposter syndrome because

10:21

you're thinking about, like, what am I doing

10:23

that's useful to help me with this really

10:25

big problem, which has nothing to do with me.

10:27

It's about me working on that

10:30

thing. So instead, focus on the problem

10:32

and how to fix it. And this also works

10:34

for happiness. It doesn't come from focusing on self,

10:36

it comes from focusing on other. So

10:38

that was that piece there. And

10:40

this one's really interesting. It's the first time

10:42

I've ever seen Tyler Cowen be wrong. He's

10:44

like one of the smartest people that I know of

10:47

on the planet. And he had

10:49

Jonathan Haidt on, and

10:51

Jonathan Haidt was talking about how bad

10:53

social media is for young

10:56

girls and how

10:58

AI could potentially make that worse and everything.

11:00

And Tyler Cowen is like, don't worry about

11:02

it. AI is going to solve this because

11:05

AI is going to summarize everything, and

11:07

the summarization of social

11:09

media is going to solve this problem and

11:11

fix it. And I'm like,

11:14

how are you not seeing this now?

11:16

Jonathan didn't make this point, but

11:18

I'm going to make this point right now. That is how

11:20

Tyler Cowen is going to use

11:23

AI, right? It's how I'm using AI right

11:25

now, and it's how, you know, Jonathan's probably

11:27

going to use it as well. Summarization of

11:29

content. So you're just getting the thing.

11:31

And that might actually take you away from being

11:34

caught up in like, oh he said she said

11:36

and all the drama or toxicity or whatever.

11:38

Well that's fine. And that again,

11:40

that's how they're going to use it. That's how a

11:42

whole bunch of intellectual people who have made

11:44

it in life and are older are probably

11:46

going to use it, but not for young

11:49

people consuming viral and toxic content,

11:51

because for them, the content itself is the

11:53

point, not the summary. And here's

11:55

my example of this. Does Tyler

11:58

think AI will send people who

12:00

love stand up comedy a summary of

12:02

the jokes made in a given

12:04

standup as a substitute for going

12:06

to actual comedy shows. And I've got

12:09

I've got an AI summary here. Here's your summary

12:11

of this standup. There were three jokes on

12:13

women and stereotypes, four jokes on

12:15

how clumsy the comedian is,

12:18

two playful racist jokes. Two hecklers

12:20

were addressed. Applause was three

12:22

out of five compared to other performers.

12:24

We hope you've enjoyed this hilarious

12:26

AI summary from ComicsAI.

12:29

Does that work for comedy? No,

12:31

that doesn't work for comedy. And it won't work

12:33

for young kids consuming viral or

12:35

toxic content unless you have

12:37

some sort of, like, draconian,

12:40

I guess, massive control

12:42

where they weren't actually allowed

12:44

to go to the real thing. They

12:46

couldn't go to TikTok, they couldn't go to Instagram,

12:49

and all they get is the stupid summary.

12:51

Nobody would. They wouldn't even care about

12:53

the summary. I mean, that's not even going to be a thing.

12:55

Nobody cares. And if

12:58

it existed, they wouldn't use it. And if

13:00

they had the option or if they had both,

13:02

they wouldn't choose the summaries. They would

13:04

not use the summaries at all, and they would just go to the

13:06

original thing. Now that being said,

13:08

it is Tyler Cowen. So there's

13:10

a chance that I didn't understand what he was saying.

13:13

And also Jonathan definitely didn't either.

13:15

So he could be misunderstanding Tyler's

13:17

point. So I want to offer him that.

13:19

There's also the other thing, which is you

13:21

might just be right, and I'm just wrong.

13:23

Now, I wouldn't normally say that because I

13:25

think fairly highly of my thinking

13:27

capabilities, but it's Tyler Cowen,

13:29

so I'm leaving that window a little

13:32

more open than usual. All right. Deep faked

13:34

content summaries. Oh, yeah. This is crazy.

13:36

So I don't know. I don't know if I woke up with this

13:38

idea. I think the main interface

13:41

that we're about to have for content,

13:43

let's say, okay, I got buddies

13:45

who make content. John Hammond makes content.

13:48

Jason Haddix makes content. Clint

13:50

Gibbler makes content. So they're putting out,

13:53

let's say they're putting out videos, they're putting out

13:55

text, they're doing live talks.

13:57

They're doing all these different things, different mediums,

13:59

different formats. I think

14:01

what's going to be happening

14:04

very soon is let's

14:06

do the AR version, even though we don't have

14:08

AR yet. The earlier version will just

14:10

be little videos on your phone. But let's

14:12

do the AR version, or the AR version

14:14

combined with a digital assistant, which

14:16

is AI in your head or in your

14:18

mobile device. So it's essentially you say,

14:20

hey, look, what has Jason been up

14:22

to? What has John been up to? What has Clint

14:24

been up to? And it will actually

14:27

deepfake them, okay? Because it knows

14:29

how much time I have. Let's say

14:31

I want a two minute summary. It will

14:33

take the long presentation

14:35

that Jason is going to do. Jason is getting

14:37

ready to do an AI talk called Red,

14:39

Blue, Purple AI. Actually, it

14:41

is a class. Oh, by the way, you should go sign

14:43

up for this class. His class is called Red,

14:46

Blue, Purple AI. It's

14:48

going to be an amazing class. I've heard a lot about it.

14:50

I'm going to be in it as well. But for

14:52

example, let's say he gives a talk about that

14:54

a public talk. The other one's not public. But

14:56

let's say he gives a talk about that. Actually he's

14:58

doing a space con coming up soon.

15:01

So that's a good example. Let's say I don't have the

15:03

one hour or the two hours to watch that. I only

15:05

have two minutes. I'm about to get on a train where I don't

15:07

have any connection. Whatever. It's going

15:09

to make Jason, and Jason

15:11

is going to teach me what he covered,

15:13

but he's going to do it in 30 seconds, or he's

15:15

going to do it in 60 seconds. It's going

15:17

to be a deepfake of Jason doing

15:20

Jason's own content. Same for John.

15:22

John Hammond has a thing about this new piece of

15:24

malware. Clint has this new thing about

15:26

this new tool that he made or that he

15:28

saw and is talking about. Deepfakes

15:30

are going to allow the actual

15:33

creator to scale their

15:35

delivery of their content as

15:37

a video to any size

15:40

chunk that the consumer wants to see.

15:42

Give me a ten-second summary. Give me a 30-second

15:44

summary. Give me a one-minute summary,

15:47

a two-minute summary, a ten-minute summary, whatever.

15:49

But it will dynamically write the

15:51

content, which is a summary

15:53

of the content, which smashes

15:55

it down to that size, and then it will

15:57

perfectly deepfake that

15:59

creator and the mouth

16:02

will match, the mouth will perfectly

16:04

match. It'll look exactly

16:06

or almost exactly and eventually

16:08

exactly like the creator. Now,

16:10

why would the creator want to do that? Because it's

16:13

still their content, okay? It's still their content.

16:15

It's not the original raw form, but

16:17

in a lot of ways it's going to be better because it's not going

16:19

to have ums and ahs. It's going to be very

16:21

crisp and concise, and most importantly,

16:24

it won't be some third party, right?

16:26

Because my digital assistant can also

16:28

render it using their face. But it'll

16:30

be really cool for the actual

16:32

creator to be still the one

16:34

that's delivering it. And what's

16:37

cool about that? So here's what's really cool

16:39

about that. It'll do multiple languages.

16:41

So now it's still Clint. It's

16:43

still John. It's still Jason. Jason

16:45

is still saying the thing to me.

16:47

He could be saying the content

16:51

in Spanish, so I could listen

16:53

in Spanish, and I won't know the difference.

16:55

It'll look exactly as if Jason

16:58

is speaking in Spanish instead of English.

17:00

And I think that is wonderful. Same with Chinese,

17:02

same with every language. Not any more difficult.

17:05

So now your deepfake content

17:07

is going out to every language

17:09

all at once, as soon as you drop a piece of

17:11

content. So watch this. You drop

17:14

a piece of content. It's one hour long.

17:16

Okay, perfect example. I just released a

17:18

video. It's one hour and ten minutes

17:20

long. I only did it in English because I'm only

17:22

capable of doing it in English. I could do it partially

17:25

in Spanish, but it would be kind of crappy.

17:27

So that one hour

17:29

and ten minute piece of content can now

17:32

be put in Hindi, Spanish,

17:34

Mandarin, Cantonese, Filipino,

17:37

like all these different languages, but not

17:39

just converted one for one

17:41

to the language. Also converted

17:43

to a ten-second version, a 30-second

17:45

version, a one-minute version. And

17:47

it's still me. It's still my face, it's still

17:49

my whatever gestures, all this,

17:52

it's still me delivering the content. That

17:54

is a game changer. An

17:57

absolute game changer because

17:59

it's scalable content delivery in

18:01

every language, at every bite sized

18:03

chunk. So now when I'm on a train or

18:05

I'm exercising, I could have my

18:07

AR interface. I see the person

18:09

is delivering the content and it's just like,

18:12

I'm watching that now. Maybe there will be some

18:14

sort of tag that says: is this raw, or

18:16

is this an AI deepfake of it?

18:18

But it's not even quite a deepfake. It's

18:20

using deepfake technology. But

18:22

a deepfake is like almost unauthorized.

18:24

This will be completely authorized. Um, and

18:27

of course, there will be versions of this that are not authorized,

18:30

and that will be an actual deepfake. But anyway,

18:32

you get the idea. This is going to massively

18:34

change how we consume content,

18:36

because we want that content in different forms,

18:39

different languages, but we still want to see

18:41

the creator doing it. All right. AI and music.

18:43

AI is not going to ruin music. Have

18:46

we forgotten about pop? Pop is,

18:48

you know, a few chords and a hook.

18:50

It's, you know, a lot of people would say it's

18:52

low quality music, like

18:54

low quality food. But it's

18:56

the same with doing customer service calls, sales

18:58

calls. We forget how low the

19:00

bar is for being better

19:03

than an average human, and that's why AI

19:05

music is going to be pretty good. And actually

19:07

I think I have a link to

19:09

one here. Anyway, Discovery section:

19:11

Luke Stephens, a.k.a. hakluke, put out an

19:13

amazing blog talking about his

19:15

evolution of bug bounty automation.

19:18

Talks about going from Bash to Python to Golang,

19:20

and how he eventually ended up at Cloud

19:23

Native. Really great piece. Thomas

19:25

Roccia, he's super awesome, and

19:27

he wrote a piece about applying LLMs to

19:29

Threat Intelligence. Really cool. It's got a

19:31

Jupyter notebook here for the code as well.

19:34

SWE-agent autonomously fixes

19:36

bugs in GitHub repos. Burr

19:38

is a Python framework that simplifies building

19:40

AI apps using building

19:43

blocks. An open source textbook

19:45

makes the art of mathematics accessible.

19:48

Jim Graham turns threat modeling

19:50

into a self-hosted web app. This

19:52

one is super, super cool. ChatGPT

19:54

now lets you tweak images, so you

19:56

can basically have an image that was created

19:59

with Dall-E, and you could just, like, rub

20:01

the face or whatever and say, redo that part

20:03

and it will redo it, but integrate it with

20:05

everything around it. So that is something

20:07

that Midjourney had that Dall-E didn't,

20:10

and now Dall-E has it as well. Claude's

20:12

API now has a new

20:14

tools feature that allows you to, like,

20:16

browse the web and do all kinds of stuff. I

20:18

kind of feel like agent functionality is going

20:21

to blend right into

20:23

the models, so I'm not quite sure

20:25

how long we're going to have, like these

20:28

elaborate agent frameworks,

20:30

because that might just be part

20:33

of using AI. You

20:35

might just say exactly what you want

20:37

to do in the prompt, and that will actually

20:39

be the agent framework. It'll actually spin

20:41

up how many agents are needed to do that.
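That "the prompt is the agent framework" idea might look something like this: a thin wrapper (or eventually the model itself) reads how much parallelism the request calls for and spins up that many agents. `run_agent` is a hypothetical stand-in, not any real API:

```python
# Toy sketch: parse the agent count out of the prompt itself and fan
# the work out, instead of configuring an elaborate agent framework.
import re

def run_agent(task: str) -> str:
    """Stand-in for invoking one model-backed agent on a task."""
    return f"done: {task}"

def handle_prompt(prompt: str) -> list:
    match = re.search(r"use (\d+) agents", prompt)
    n = int(match.group(1)) if match else 1   # default: one agent
    return [run_agent(f"{prompt} (agent {i + 1})") for i in range(n)]

results = handle_prompt("summarize these alerts, use 3 agents")
print(len(results))  # 3
```

The parameters live right there in the prompt; nothing outside the request decides how many agents exist.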

20:43

And maybe you have some parameters right there

20:46

in the prompt that does it. But I kind of feel

20:48

like that's going to just blend right into

20:50

the language of the models

20:52

themselves. And kids are learning math

20:54

from deepfakes. Okay. This is another

20:56

example of this: learning math from deepfakes

20:59

of Taylor Swift and Drake on TikTok. And

21:01

it's going well. I'm excited for this.

21:03

I am super excited for this look.

21:05

There are negatives that are going

21:08

to come from deepfakes. Everyone knows that. Let's

21:10

find the positives. The positives are

21:12

if people love Taylor Swift, let's learn

21:14

math. Let's learn calculus from Taylor

21:16

Swift. I would do that. All right. Recommendation

21:18

of the week. Check out Mozart

21:21

on the Bass. This is an AI track.

21:23

Yeah, go listen to this and tell me that this won't

21:25

be popular. It's a little bit EDM ish.

21:28

So if you don't like that, just compare it to other

21:30

EDM that you don't like, and you'll see that

21:32

AI is actually quite good at making

21:34

stuff you don't like. All right, aphorism of the

21:36

week don't explain your philosophy. Embody

21:39

it. Don't explain your

21:41

philosophy, embody it. Epictetus.

21:44

Unsupervised learning is produced and edited by

21:46

Daniel Miessler on a Neumann U 87

21:48

Ai microphone using Hindenburg.

21:51

Intro and outro music is by Zomby, with

21:53

a Y. And to get the

21:55

text and links from this episode, sign up for the

21:57

newsletter version of the show at danielmiessler.com/newsletter.

22:03

We'll see you next time.
