David Minarsch: Autonolas – Autonomous AI Agents

Released Saturday, 27th January 2024

Episode Transcript

Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.

0:00

This is Epicenter, episode 532 with guest David

0:02

Minarsch. Welcome

0:20

to Epicenter, the show which talks

0:22

about the technologies, projects and people driving

0:24

decentralization and the blockchain revolution. I'm

0:26

Friederike Ernst and I'm here with Meher Roy. Today

0:29

we're speaking with David Minarsch, who is

0:32

the co-founder and CEO of Valory and

0:34

founding member of Autonolas. Autonolas

0:37

is a funny

0:39

project, it's an AI-slash-blockchain

0:41

crossover. And why that

0:43

is interesting, we'll

0:46

dive into in just a second. Let

0:48

us tell you before about

0:50

our sponsors this week though. This

0:54

episode is brought to you by Gnosis. Gnosis

0:57

builds decentralized infrastructure for the

0:59

Ethereum ecosystem. With

1:01

a rich history dating back to 2015 and

1:04

products like Safe, Cowswap or

1:06

Gnosis Chain, Gnosis combines

1:08

needs-driven development with

1:11

deep technical expertise. This

1:14

year marks the launch of Gnosis

1:16

Pay, the world's first decentralized payment

1:18

network. With Gnosis Card,

1:20

you can spend self-custody crypto

1:22

at any Visa accepting merchant

1:24

around the world. If

1:26

you're an individual looking to live more

1:29

on-chain or a business looking to

1:31

white-label the stack, visit

1:33

gnosispay.com. There are

1:35

lots of ways you can join the Gnosis

1:37

journey. Drop in the Gnosis

1:39

DAO governance forum, become a

1:41

Gnosis validator with a single GNO token

1:44

and low-cost hardware, or

1:46

deploy your product on the EVM-compatible

1:48

and highly decentralized Gnosis Chain. Get

1:52

started today at gnosis.io. This episode is also brought to you by Chorus One. With Chorus One, you

2:00

stake your tokens on 45 plus

2:02

networks like Ethereum, Cosmos, Celestia

2:05

and DYDX. More

2:07

than 100,000 delegators stake with

2:09

Chorus One including institutions like

2:11

BitGo and Ledger. Staking

2:14

with Chorus One not only gets you the

2:16

highest yields but also

2:18

the most robust security practices

2:20

and infrastructure that are usually

2:22

exclusive for institutions. You

2:25

can stake directly to Chorus One's public node

2:27

from your wallet, set up a white label

2:29

node or use the recently launched product

2:31

Opus to stake up

2:34

to 8000 ETH in a single transaction.

2:37

You can even offer high-yield staking

2:39

to your own customers using

2:41

their API. Your assets always remain in

2:43

your custody so you can have complete

2:45

peace of mind. Start staking today at

2:48

Chorus.one. Hi

2:51

David, it's a pleasure to

2:53

have you on. Yeah, pleasure to be here. Thanks

2:55

for having me. Absolutely.

2:57

Tell us a little bit

3:00

about yourself and

3:02

your background. Sure.

3:05

So I

3:08

came to crypto from a sort

3:10

of background in maths

3:12

and economics. I

3:14

did maths undergrads and then

3:18

really got into

3:20

applied game theory. There

3:23

were some fantastic courses at UCL where

3:25

I did that and one

3:27

thing led to another and I ended up doing a PhD

3:29

there. And then that,

3:34

you know, if you fast forward quite some time, led

3:37

for me to discovering

3:39

that I really like this intersection of

3:42

game theory and machine

3:45

learning, which I've done a lot

3:48

together with an

3:51

interest which had sort of grown

3:53

steadily in crypto and blockchain. And

3:56

so I'm working in that space now

3:58

for over five years and particularly at

4:00

this intersection of crypto and AI. Yeah,

4:04

super interesting. Sounds like applied

4:06

economics and math. And

4:09

yeah, it sounds like the ideal

4:11

background for getting into crypto. You,

4:13

as we said in the intro,

4:16

you are a co-founder of Valory,

4:18

which is a core contributor to Autonolas.

4:22

You kind of co-founded it with

4:24

somebody else.

4:27

And what kind of what

4:29

motivated you to kind of co-found

4:32

this project? Kind of

4:34

what's the problem you were setting out to solve? Yeah,

4:37

it's a great question. So Valory's

4:40

mission is to basically

4:42

create open source

4:44

software for people to co-own

4:48

primarily agentic AI, and we'll kind

4:50

of uncover a bit what we

4:53

mean by this. I actually

4:55

have two co-founders. One,

4:57

David Galindo has a

4:59

background in cryptography, was

5:02

a cryptographer by profession.

5:04

And the other co-founder has

5:06

a pseudonym, Oaksprout. He

5:10

has a product background. And

5:13

the three of us really kind

5:15

of in different ways, we're excited

5:17

about autonomous agents and this general

5:19

pressure which you see in AI

5:23

towards agentic kinds

5:26

of AI systems. And

5:28

we had different

5:31

experiences with this topic, different

5:33

insights, and we came

5:35

together to basically build the

5:37

substrate on which you can,

5:40

as groups, kind of co-own

5:42

these agentic AI

5:44

systems. So I think that's

5:46

sort of the driving force is to provide

5:49

this kind of software stack which allows people

5:51

to do that and also kind of create

5:53

applications which people then can

5:55

own in that way. Do

5:58

you used the term agentic AI

6:00

system quite a few

6:03

times there, what

6:05

is an agentic AI system? The

6:09

way I think about it is that if you

6:11

look at the sort of dominant forms of AI,

6:14

then you can sort of see maybe

6:17

three waves. So you have like in

6:19

the earlier parts of the last century, like

6:21

this dominance of rules-based systems. And

6:25

then in these, you have

6:28

hard-coded rules, often extremely sophisticated,

6:30

which allow you to build certain

6:33

types of AI systems. And by

6:35

no means has this kind of

6:37

part of AI research and applications

6:40

gone away. But at

6:42

some point, you then had more of these

6:44

kind of learning systems and such, where you

6:46

have like neural networks and deep learning and

6:48

other forms of learning, reinforcement

6:50

learning where you effectively use data

6:54

to construct part

6:56

of the algorithm

6:59

effectively. So the system

7:01

learns from data rather than all the

7:03

rules being prescribed. And

7:05

if you look at what's now

7:08

happening is we have these very

7:10

powerful large language models and other types

7:12

of powerful

7:15

AI models. But they

7:17

by themselves are certainly

7:21

not agentic right now, in the sense

7:23

that they, you know,

7:26

there's some data which sits

7:28

somewhere and then you effectively query

7:30

these models, you instantiate

7:32

them, you query them, and then

7:34

you get a response. And that can be a

7:36

very sophisticated response, but that's it. What's

7:40

interesting is once you think about effectively

7:45

systems that are having

7:47

agency and can sort of

7:49

autonomously act, and

7:52

there's an enormous pressure towards the

7:54

systems from a pure optimization point

7:57

of view and evolution point of

7:59

view. If you think about it, where

8:03

can you make the most money?

8:05

Where are the most exciting applications

8:07

as well? It's autonomous systems, which

8:09

can take actions by themselves and they're not,

8:12

you know, maybe

8:14

instructed by some third party, some

8:16

human or some other system. And

8:18

so, yeah, these kind of

8:21

agentic systems, we can uncover a bit

8:23

what they look like conceptually.

8:25

But that's I think

8:27

where the train is headed and where a

8:29

lot of focus is going

8:31

towards across like the AI field.

8:35

So David, maybe to kind of let me

8:38

repackage this somewhat, would it be

8:40

fair to say that an

8:43

agentic system is something

8:45

where you kind of give an

8:47

AI agent a goal, but don't

8:50

perfectly specify how it should

8:53

go about achieving that? Or

8:55

is that too simplistic? Yeah,

8:59

I think that works. And in

9:01

particular, I mean, what we're interested in are

9:03

sort of these autonomous agents.

9:06

And so we can briefly define that

9:08

sort of conceptually. So usually,

9:10

it's a sort of software system which is

9:14

placed in some environment from

9:16

which it perceives certain information.

9:20

They could be blockchain events, literally,

9:23

or they could be things from an

9:25

API, they could be something from

9:27

a sensor, which it has

9:29

locally, it then uses

9:32

that information plus whatever its internal

9:34

architecture looks like, to

9:36

then take action again

9:39

in its environment. And that environment again

9:41

can be like a

9:44

blockchain, another API, another

9:47

agent, some actuator

9:49

of any form. So this is what

9:51

we would call like an autonomous agent.
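To make the loop just described concrete, here is a minimal Python sketch of the perceive-decide-act cycle. Every name in it is an illustrative assumption; this shows the conceptual shape of an autonomous agent, not the Autonolas API.

    import time

    class DemoEnvironment:
        """Stand-in for a real environment (a chain RPC, an API, a sensor)."""
        def read(self):
            return {"new_market": "will-eth-close-above-5k"}
        def write(self, action):
            print("acting on environment:", action)

    class AutonomousAgent:
        def __init__(self, environment):
            self.environment = environment

        def perceive(self):
            # Observations: blockchain events, API payloads, sensor readings.
            return self.environment.read()

        def decide(self, observation):
            # Internal architecture: hard-coded rules, an LLM call, a model...
            if observation.get("new_market"):
                return {"action": "analyse", "market": observation["new_market"]}
            return {"action": "wait"}

        def act(self, decision):
            # Actions: transactions, API calls, actuator commands.
            if decision["action"] != "wait":
                self.environment.write(decision)

        def run(self, rounds=2, poll_seconds=0.1):
            for _ in range(rounds):
                self.act(self.decide(self.perceive()))
                time.sleep(poll_seconds)

    AutonomousAgent(DemoEnvironment()).run()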

9:54

And effectively, what

9:57

I'm saying is that what

9:59

we're seeing in increasingly is that there's

10:01

more focus to basically create models

10:05

which can act as subsystems of

10:07

such autonomous agents or even

10:09

like almost like subsume

10:11

and autonomous agents as a whole right and

10:13

so there's this kind of pressure

10:16

and towards these kind of systems

10:20

So as an example of an autonomous agent, maybe

10:22

we could think of so

10:25

imagine there's a there's the

10:27

Gnosis network and then there's

10:30

the code base of the Gnosis network and

10:33

one could imagine like a coding agent of

10:35

some kind where somebody

10:38

opens an issue against the code

10:40

base of the Gnosis network, saying

10:43

I want to

10:46

add this feature to the core protocol of the

10:48

Gnosis network then an agent

10:50

could be something that kind

10:53

of as a first step isolates the pieces of

10:55

code that need to be changed as

10:57

a second step creates code

11:01

making those changes as

11:03

a third step does some form

11:05

of testing so that could involve kind

11:08

of like static analysis but that could

11:10

also involve like runtime analysis and

11:13

gets feedback from the environment and then makes another

11:15

set of changes and comes

11:17

up with a draft like

11:20

a draft change a draft pull request Yeah,

11:23

I like that. I think

11:25

that's that's a good example. Also speaking

11:28

of Gnosis chain,

11:30

so one autonomous

11:32

agent which is running there one type of

11:34

autonomous agent which is built on the Autonolas

11:36

stack and is running there every day as

11:38

an agent which trades

11:41

in prediction markets and so

11:43

if we kind of map that into this model which

11:45

I was just describing it might be quite helpful, so

11:49

again here what

11:51

it's observing are sort of basically

11:54

new markets opening

11:56

so it adds these to the list of markets which

11:59

it has a look at.

12:02

It might then

12:04

fetch information pertaining

12:07

to the events which are referenced in these

12:09

markets from really anywhere in the web. So

12:12

a search API or just like

12:14

crawling itself almost.

12:18

It then uses

12:20

that information, that context basically on

12:22

the event as well as various

12:26

AI models. At the moment, most of

12:29

the agents use

12:31

some form of large language models to

12:35

basically prompt these

12:37

models with that kind of information. And

12:40

then once it arrives at a prediction

12:44

for that event, together with

12:46

an accuracy and other kind of

12:48

information it estimates, then sorry,

12:51

a confidence it estimates, then

12:53

it will construct

12:55

a transaction and

12:57

then sort of act in that market, i.e. kind of

13:00

take a position in that market. So for instance, if

13:02

it's a binary market, yes or no, buy

13:05

the relevant tokens which represent

13:07

these events. And so here what you then

13:09

have is this environment being

13:11

sort of these smart contracts and the

13:14

information endpoints

13:17

which are pertaining to these markets. And then the

13:19

actions are taking these positions

13:22

in the markets and then sometime passes

13:24

and then the agent

13:26

might actually make some

13:28

money.
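A hedged sketch of that trading loop, with every function stubbed: the real agent calls search APIs, LLMs and smart contracts, while these names and numbers are invented for illustration.

    def fetch_open_markets():
        # Real service: watch the prediction-market contracts for new markets.
        return [{"id": "m1", "question": "Will X happen by June?", "yes_price": 0.40}]

    def research(question):
        # Real service: search APIs or crawling gather context on the event.
        return f"collected context about: {question}"

    def predict(question, context):
        # Real service: an LLM is prompted with the context and returns a
        # probability plus a confidence estimate.
        return {"p_yes": 0.55, "confidence": 0.70}

    def place_bet(market_id, side, amount):
        # Real service: construct and submit an on-chain transaction.
        print(f"tx: buy {side} in market {market_id} for {amount}")

    EDGE = 0.05            # only bet when our estimate beats the market price
    MIN_CONFIDENCE = 0.6

    for market in fetch_open_markets():
        estimate = predict(market["question"], research(market["question"]))
        if estimate["confidence"] < MIN_CONFIDENCE:
            continue
        if estimate["p_yes"] - market["yes_price"] > EDGE:
            place_bet(market["id"], "YES", 10)
        elif market["yes_price"] - estimate["p_yes"] > EDGE:
            place_bet(market["id"], "NO", 10)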

13:31

Okay, but that sounds primarily like automation

13:33

technology, right? So basically, people wouldn't

13:35

necessarily know that I run this

13:38

sort of software to do things,

13:40

just like kind of I run,

13:43

for instance, say trading

13:45

scripts, right? So how

13:49

do we know that this is, I

13:52

mean, I assume to some extent this is already

13:54

happening, but kind of like where

13:56

it gets really interesting is kind of when

13:58

you kind of design systems

14:01

where several of these agents kind of

14:03

come together to kind of in

14:05

a game

14:08

theoretic way

14:11

to kind of figure

14:13

out you know something

14:16

to do with some conclusion

14:18

or something right so

14:20

yeah a couple things so firstly I think you're

14:23

right that like there's sort of automation and then

14:25

there's different levels of autonomy and like if you

14:28

think about a self-driving car they have these sort

14:30

of levels and it's a

14:32

bit similar to think here like you have different

14:34

kind of levels and you can be

14:36

closer to what people might describe

14:38

as automation and then there's also this

14:41

thing where as time moves on we

14:43

tend to reassign

14:45

things which maybe we saw as

14:47

more autonomous towards automation because they

14:49

kind of get wrapped behind like

14:52

an agent for instance and then I can just sort

14:54

of see it the

14:56

act of interacting with these agents, where the

14:58

agent is actually autonomous, is for me, from

15:00

the perspective of the user, almost

15:03

just like automation: I'm just calling this API

15:05

which then goes away and creates an outcome for

15:08

me and so I think

15:10

there's always that but you're

15:12

right and then and

15:15

in the system which I was

15:17

describing actually the way it's practically

15:19

implemented already is that there's already

15:22

three types of agents today so

15:24

the trading agent itself doesn't actually

15:27

come up with the prediction it's other agents

15:29

who specialize on that and

15:32

now we're even like picking

15:34

apart that role because what

15:37

we basically see

15:39

and is that from a practical

15:41

perspective if you can sort of

15:44

specialize your agent that has its

15:46

benefits like the same way we

15:48

specialize as humans but

15:51

also from a sort

15:54

of practical users perspective of running the

15:56

agent that can have its benefits so

15:58

for instance If I had

16:01

an agent which has to have all sorts of open

16:04

source models which need to run alongside it,

16:07

which it uses, then this can become quite

16:09

a beast to actually run this

16:12

thing, quite impractical.

16:15

Whereas if an agent can

16:17

use other agents to get something done,

16:19

then it might be as simple as

16:22

making a small crypto payment to, for

16:25

instance, get a prediction. And so that's the case

16:27

here. And then you have to obviously trade that

16:29

off and with other design

16:31

considerations of the system. So

16:35

I'd like to restate Friederike's question in

16:37

a different way. So

16:41

any standard staking company would

16:45

be running, for example, price oracles,

16:47

right? So in a price

16:49

oracle, it's fetching the price from somewhere

16:51

and submitting the price to the blockchain

16:53

and it's getting paid

16:55

in crypto to do

16:58

that. And

17:00

one could imagine that entire, so it's

17:04

like the code of like a

17:06

price oracle is highly mechanical. It

17:09

is specified entirely in a

17:12

programming language. The input to

17:14

it is very

17:17

structured. It is probably coming as like

17:19

JSON files that are structured in a

17:21

particular way, and its output is also very

17:23

structured. It is producing transactions

17:25

that have these fields and etc. Perhaps

17:29

that is like actually like an agent

17:31

itself, except it's like a very dumb

17:34

agent and the kinds of

17:36

agents you are thinking of are like AI

17:39

agents where we are trying to climb

17:41

the hierarchy of, well,

17:43

the inputs no longer need

17:45

to be that structured. It may

17:47

not come as JSON or or

17:49

protobuf or any of these protocols. It

17:52

might come as an English language and

17:54

it could be anything that comes in.

17:56

So the input becomes unstructured. Then

17:59

the processing logic, instead

18:01

of being structured in the form of

18:03

code, you could

18:06

have processing logic

18:08

where the

18:10

agent like de novo comes

18:12

up with how to execute

18:14

on a certain input and its

18:17

execution path is kind of like invented

18:20

for that particular input and it might

18:22

be different from what it

18:24

was what it was previously and

18:26

then finally on the output side its

18:29

outputs could also be unstructured meaning

18:32

it's producing output in terms

18:34

of English language which

18:37

has of course some structure as well, but

18:39

less structure than

18:41

a programming

18:43

language output or a JSON output would

18:46

have and so maybe the one

18:48

interesting thing about the AI agent is we are trying to

18:51

generalize the input the processing and

18:54

the output of

18:57

what is already kind of like a traditional

19:01

crypto agent so validators price oracles you might think

19:03

of them as traditional crypto agents but we are

19:05

trying to kind of push their

19:07

boundaries in like what they can

19:09

do yeah

19:12

and there's these different dimensions which we kind of

19:14

pointing at right so you have like the

19:17

levels of autonomy and then

19:19

the levels also of the kind

19:21

of how dynamic

19:23

is the decision-making and

19:26

how open-ended is

19:28

it how structured and

19:30

unstructured can be the input and output and

19:33

basically like if you look at it from

19:35

our perspective the way we look at it

19:40

as our stack kind of allows you to build

19:43

across a whole range of these things so

19:45

we have some products

19:47

which are very very structured so they're basically

19:49

a rules based AI of the kind which

19:51

you know an Oracle is actually one example

19:53

you can build an oracle on our stack, it's

19:56

not like anyone is like

19:58

majorly focused on it but we have some demos of

20:01

this sort. And then all, you know,

20:03

you go over to the right

20:05

on that dimension and then you

20:07

add the prediction agent where it becomes

20:09

a bit less structured because yes,

20:11

some of the flow of it is entirely

20:13

structured in the sense that it will always

20:15

sort of do certain actions

20:18

in a certain sequence. I'll

20:21

get to that in a second as to how that's actually

20:23

done on a code level. And then inside

20:26

of the states, actually let me explain

20:28

it right now. We structure the agent

20:30

as a sort of finite state machine.

20:32

So basically we say,

20:34

okay, the overall agent is described

20:36

as this graph-like structure where

20:38

it traverses through these states. And

20:41

then in some of these states, it might

20:44

sort of dynamically choose which path to take

20:46

going forward, but sort of the rails are

20:48

given, right? So it can't sort of

20:50

totally go off the rails and at some point

20:52

say, I was a prediction agent until now, and now

20:54

I'm kind of doing this other thing, shopping for

20:57

clothes or whatever.
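As a rough sketch of those rails, with invented state names: the decision inside a state can be dynamic, even LLM-driven, but the finite state machine only admits the transitions the graph declares, so the agent cannot wander off into shopping for clothes.

    TRANSITIONS = {
        "collect": {"predict"},
        "predict": {"trade", "collect"},  # low confidence loops back
        "trade":   {"settle"},
        "settle":  {"collect"},
    }
    DEFAULTS = {"collect": "predict", "trade": "settle", "settle": "collect"}

    def next_state(state, confidence):
        # Dynamic choice happens here, but only among allowed transitions.
        if state == "predict":
            proposal = "trade" if confidence >= 0.6 else "collect"
        else:
            proposal = DEFAULTS[state]
        assert proposal in TRANSITIONS[state], "agent tried to leave the rails"
        return proposal

    state = "collect"
    for confidence in (0.4, 0.4, 0.9, 0.9, 0.9):
        state = next_state(state, confidence)
        print(state)   # predict, collect, predict, trade, settle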

20:59

And we

21:02

see this as, A, basically

21:04

a pragmatic approach and, B,

21:06

also a big advantage, because

21:08

obviously these kind of AI enabled

21:11

agents, autonomous

21:15

agents is something relatively new

21:18

in that form. They were stuck in the

21:20

sort of Daldron for a long time where

21:22

they see nothing much happened for decades in

21:24

multi-agent systems research. I mean, no sort of

21:27

big move forward. And on

21:30

the other hand, you now

21:32

have these sort of AI agent models based mostly on

21:35

large language models where it seems a lot is happening,

21:37

but then when you dig a bit in, often

21:40

if you leave them too unstructured,

21:43

it's an interesting research exercise, but

21:45

practically not too much happens. So the

21:48

sweet spot is still in between, is

21:50

what we say, where you provide a

21:52

certain degree of structure. And then within

21:54

certain states, the agent can be

21:57

dealing with unstructured input or output.

22:01

and can do what you

22:03

were just describing. And

22:06

that, I think, is how

22:08

long we will be in this phase where

22:10

it's like that. And in between, I don't know,

22:13

there's certainly attempts to

22:15

build sort of like,

22:17

almost like a large language model, but

22:22

for actions where people sort of train

22:24

those sorts of actions into the model itself,

22:29

we'll have to see when they're

22:31

actually, I think, usable. But

22:33

if you want to use off-the-shelf technologies today,

22:36

then you're sort of limited to still providing

22:38

some degree of structure. The other way

22:40

to look at this is also from an efficiency point of view.

22:43

So once you actually know that your agent

22:46

is meant to be an autonomous agent in

22:49

prediction markets, that it's meant to make its money there,

22:51

and that you

22:53

want to use it for that, then

22:56

it's kind of pointless if every time

22:58

it's running, it has to figure this

23:00

out from first principles. That's a very

23:02

dumb approach, right? The same way in

23:04

programming, if I write an efficient program,

23:07

I might not generate everything dynamically. I

23:09

might have like sort of hard-coded, you

23:11

know, lookup tables or

23:14

whatever where I just pull values out because they're

23:16

way more efficient than if I were to generate

23:18

them on the fly, even if I can. And

23:20

so the same thing is here. Sometimes you might

23:22

want to apply an agent actually at the building

23:25

stage. So going back to what you were saying

23:27

earlier, Meher, applying

23:30

agents to

23:32

build agents is also something we're focused on.

23:34

So we have like some internal tooling now

23:37

where we are able

23:39

to basically prompt our

23:41

tooling and then it generates sort

23:45

of half of the agent.

23:47

Like not all the code is finished, there's still some software developer engagement needed, but it generates a lot of it. So there's this angle as well, from an efficiency point of view, where you don't necessarily want to figure everything out at runtime. You might want to, sort of ahead of time, build a better agent which is then forced to act within the bounds given by that design.

24:15

As for the agent terminology: what I think of are these cases where, say, you have large games that you optimize, and that means you don't have to do so much on-chain, because you can optimize off-chain and the agents can keep each other honest. That, to me, is the multi-agent system. That is sort of my lay understanding. And in your description now, it sounded like what I would conceptualize as one agent, you guys often think of as different agents that are somehow amalgamated together into kind of a super agent. As you said, there's the prediction agent, the research agent, the bet-making agent and so on. Maybe you can delineate the terms here a little bit for us.

25:17

Let me maybe even take it a bit further. One of the core things, which I guess is the idea of multi-agent systems, is that you have multiple, potentially different, types of agents which generate some sort of emergent outcome. If you look at any individual one of those agents, they themselves wouldn't bring about this outcome; only their collection does. This is the example I was giving, where you have these three types of agents and they kind of come together, and the outcome is sort of AI-driven prediction markets in which no human ever participated.

Now, if we look at our stack specifically, it's a bit more interesting still. We basically say, okay, going back to that idea of co-owned AI and co-owned autonomous agents: what motivates us is that we're a bit concerned that, with this tremendous pressure to build better AI models, and this tremendous pressure to build better agentic systems, ultimately they will be owned in a centralized way and also operated in a very centralized way. So the question is: can you create basically a substrate where people can own them in a decentralized manner?

26:42

Now, one of the answers is: if you can somehow make smart contracts smarter, and there's a lot of exciting projects which are trying to do that with zkML and other kinds of technology approaches, where effectively you just use the blockchain, a public one, and you run some code on it which might have been sort of verified off-chain, or which acts as verified on-chain because it might have been proved off-chain.

27:13

In our case, what we offer is basically: okay, if you want to build an autonomous agent, and you want to run that code as a decentralized system, then you can do it. So in Autonolas you can develop this trader agent I mentioned earlier, and you then can run it as a multi-node system. What basically happens is that the trader agent is like the whole of all these agent nodes. And these agent nodes are effectively, a bit like validators, sort of replicating the backend and also the code, which is often quite identical; they can even be fully identical instances of each other. And then they work together to effectively become the trader agent. On-chain they're represented as a multisig, and off-chain there's a couple of nodes which have state synchronization between them. Practically, they use Tendermint at the moment as the consensus gadget, so that all the nodes in the system agree on the actions this agent is to take next.

28:27

In the sea of LLM frameworks, and now I'm referring to, let's say, the non-crypto part of building on top of LLMs, which is probably a thousand times bigger than the crypto part: there are lots of different frameworks that let you build agents using LLMs. LangChain is probably the most commercially successful, but then you're going to find things like Microsoft AutoGen, which is a multi-agent system in how it's constructed. The problem is a problem of plenty rather than, you know, a problem of scarcity. So let's start with this: of all these agent frameworks you could be building on, what is really different about your agent framework from the things that might be happening outside crypto as well, as a whole?

29:31

Yeah, I think one key thing for us is that we always wanted systems which are able to take action on-chain like any other user. And so we see these automations as sort of the active users of these protocols, and we can talk about the benefits that has for the protocols in a bit. That means that in our case, both a crypto wallet and also the on-chain representation of the off-chain agent are first-class citizens. We think about this from the design beginning, and that has implications. For instance, to come back to the trader: if I wanted to do that on, let's say, a LangChain agent, I might have to build it myself, because you need some way of basically sharing ownership of, let's say, an on-chain wallet, like a Safe, a multisig, with these off-chain instances of the agent. So that's one way to look at it: the stack natively has crypto as a part.

I guess the second thing is, if you go further, to co-owning these systems. If co-ownership is to be achieved on the on-chain side, then you have, let's say, a Safe which has some assets, and now you have a LangChain or AutoGPT agent or whatever, and one of them is sort of holding a wallet and being a signer on that Safe. That could work, right, because they don't necessarily need off-chain consensus, depending on what the application is. But once you look into the interesting applications, it turns out that almost always, once you go beyond simple things which are done on-chain, you need off-chain consensus, because these agents often have to agree off-chain on the data they want to put on-chain; it's certainly more efficient, and oracles want to do it that way. And then, if you imagine the off-chain system wanting to act upon something else off-chain, for sure you also need off-chain consensus. So there it also again helps to have a stack which gives you this out of the box, which Autonolas does.

31:54

Now, a third way to look at it, and this is less on the crypto side and more on the structuring of the agent: coming back to our discussion earlier of automation versus autonomy, and of fully AI-based, dynamic agent systems rather than those which are based on hard-coded rules. The reality is, if you want to build real use cases which people can use today and which are actually meaningfully and securely achieving something, then you can't yet go and use fully unstructured models where you just basically repeatedly prompt an LLM. You can do it, but it doesn't work. So then you need to provide structure. And if you look at the frameworks, LangChain is an interesting example here, and I have nothing bad to say about it, but interestingly they're moving towards graph structures as well, because it's pretty obvious that a chain won't cut it. Your decision-making is almost never a chain; that is the most basic kind of application, where it's like A, B, C, D. Going back to reality: even in the most basic application you're going to have the happy path, which might look like this, but then off the happy path all these error paths need to sort of loop back to different states. So you're basically in a graph structure. And that's where we started out, early, basically like five years ago. Our view was: if we're building autonomous agent systems, it's unlikely that any time soon we'll have fully open-ended models which we just need to train somehow such that agentic behaviour pops out. Instead, we still need to provide some rails and then use AI models alongside those rails. And these rails, in our case, are these, basically, graphs along which the agent has to travel.
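To illustrate the multi-node service described a moment ago with a toy: several identical agent instances propose the next action, proceed only when a fault-tolerant majority agrees, and the agreed action would then be co-signed from the shared multisig. The real stack uses Tendermint for this state synchronization; the majority vote below is only a stand-in, and all names are invented.

    from collections import Counter

    def propose(node_id, shared_state):
        # Honest replicas run identical code over synchronized state,
        # so they propose the same action.
        return f"buy-YES:{shared_state['market']}"

    NODES = [0, 1, 2, 3]          # four operators, tolerating one fault
    THRESHOLD = 3                 # strictly more than 2/3 of 4

    shared_state = {"market": "m1"}
    proposals = [propose(n, shared_state) for n in NODES]
    proposals[3] = "drain-the-wallet"   # one faulty or malicious node deviates

    action, votes = Counter(proposals).most_common(1)[0]
    if votes >= THRESHOLD:
        # In the real service the agreed transaction is then signed by the
        # agent instances and submitted from their shared Safe multisig.
        print("consensus reached:", action)
    else:
        print("no consensus; take no action")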

33:58

And now, if you put it back all together, I think

34:02

one of the benefits you have with the stack is that you

34:04

can go and say, okay, I have

34:07

a use case where there's some

34:09

states in which the agent is roaming freely. There's

34:11

other states where I want the agent to just

34:14

travel along this track, then I

34:16

can do this. And now I also want

34:18

this agent to take action on chain ever

34:22

so often, then it already comes out

34:24

of the box. So we obviously from

34:26

the beginning, when we built the framework, we're

34:29

really heavy users of the

34:31

Safe. So we had like Ethereum support

34:33

with Safe since basically day one, since

34:35

the framework is usable. And

34:38

now as we're sort of expanding it, and

34:41

to other sort of types of blockchain

34:45

ecosystems, we're always kind of

34:47

having the same design paradigm again, where we

34:49

pick like a multisig, which

34:52

is dominant in this ecosystem and then

34:54

build the compatibility of the

34:56

stack around it. Okay,

34:59

I think I'm, I'm now less confused

35:01

about the agent part. But

35:03

I'm still confused about kind of like the protocol

35:05

as a whole. So kind of like if you

35:08

look at the stack, now, now we kind of

35:10

have some understanding of what these agents can and

35:12

can't do. I can't just give an agent, I

35:14

don't know, I can't just say, here, you have

35:17

100 DAI, you make me some money. And

35:19

basically, the agent will go and kind

35:21

of like, either kind

35:24

of like, but, you know, build

35:26

an arbitrage bot or kind of

35:29

make saucy pictures on

35:32

Midjourney and

35:34

kind of put them on OnlyFans. I mean,

35:36

so basically, it's like, it's like you have to

35:38

give it some structure. I understand that now. But

35:42

how do you put this on into

35:44

a protocol and kind of where does

35:46

the co ownership come in? Because this

35:48

is something that in principle with an

35:50

LLM model and like, some deaf

35:52

background, I could just do on

35:55

my own, right? I don't need Autonolas for that. Yeah,

35:59

and that's a

36:01

great question. So basically

36:03

I think for

36:07

like one of our core insights

36:09

is that it's not about building

36:11

individual agents, it's about building effectively

36:16

many agentic systems which can interact

36:18

because ultimately we're like big believers

36:21

in the specialization and even like

36:24

from a very practical point of view, if you

36:26

want to build better agents, we'll build very different

36:28

agents, so the framework will have to cater to

36:31

very different sort of use cases. So the

36:33

protocol was always designed to enable

36:36

basically entire agent economies and

36:39

enable their bootstrapping. So there is

36:41

a couple of mechanisms which facilitate

36:43

that the stack itself is open

36:46

source. So when you

36:48

have an open source stack, there's

36:50

never a forcing function to tell them,

36:52

oh, you have to use this

36:54

protocol. So you have to basically create like

36:56

a reason on

36:58

top why it would

37:00

make sense for people to engage with this protocol.

37:03

So one thing

37:05

which we noticed is if we want to have these

37:08

basically autonomous agent use

37:10

cases really grow, then

37:13

we need obviously a lot

37:15

of development, developers

37:18

who build on the stack. Why

37:20

do developers have the benefit of

37:22

building on the stack? Well, there's

37:24

some of the technical reasons we mentioned

37:26

before, but there's also one of composability.

37:29

So we basically have created

37:31

a very

37:33

composable framework where

37:35

it's not so much about

37:38

composing arbitrary Python libraries, which is a

37:40

focus of a lot of the other

37:42

frameworks, but where it is

37:44

the focus of the stack to compose business

37:47

logic itself. And

37:50

that's particularly with autonomous agents of the

37:52

current generation. If you think about it,

37:54

it's very important. So if I have

37:56

like, for instance, this

37:59

trader, and at some point it's going

38:01

to settle a transaction. And you might

38:03

say, well, that is just a matter of sending

38:05

the transaction. Well, this is actually not true. There's

38:08

around like 20 or 30

38:10

states in the finite state machine, which

38:12

takes care of settling the transaction. Because

38:14

there is, like, on the happy path, various things that need

38:16

to happen: you need to sign it from all the

38:18

agents, you need to then submit it, you don't need to wait

38:20

for it to be settled. And if

38:23

anything goes wrong in any of those

38:26

states, the resolution looks different. Now,

38:28

you don't want a developer to

38:30

re-implement that.
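The real settlement machine has, as said, 20 to 30 states; this toy transition table, with invented state names, only shows its shape: a happy path plus error transitions that loop back instead of crashing.

    SETTLEMENT = {
        "build_tx":      {"ok": "collect_sigs"},
        "collect_sigs":  {"ok": "broadcast", "timeout": "collect_sigs"},
        "broadcast":     {"ok": "await_receipt", "rpc_error": "switch_rpc"},
        "switch_rpc":    {"ok": "broadcast"},
        "await_receipt": {"confirmed": "done", "timeout": "reprice"},
        "reprice":       {"ok": "collect_sigs"},  # bump gas, re-sign, retry
    }

    state = "build_tx"
    events = ["ok", "ok", "rpc_error", "ok", "ok",
              "timeout", "ok", "ok", "ok", "confirmed"]
    for event in events:
        state = SETTLEMENT[state][event]
        print(f"{event:9s} -> {state}")
    # Ends in "done" despite an RPC failure and a stuck transaction.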

38:34

Similarly, if you think about things like interacting

38:37

with these prediction markets,

38:39

that might actually be like

38:43

something which you might want in another

38:45

agent. So being able to kind of

38:47

compose these things is very, very interesting.

38:50

And so one big part

38:52

of the protocol as a result

38:54

is this focus on creating a

38:56

developer mechanism, developer incentive mechanism, whereby

38:58

developers get rewards for contributing these

39:01

pieces of agents and entire agents

39:04

into the stack. So that's

39:06

the code side of things. They

39:09

can do that permissionlessly. So

39:11

very practically, you know, you

39:13

develop the stuff, you register

39:16

it on chain as these NFTs. And

39:19

then there's a sort of reward system, which works

39:21

sort of on an epoch basis.
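A heavily simplified sketch of such an epoch-based reward split; the registry-as-NFTs and the actual weighting in the Autonolas tokenomics are richer than this pro-rata toy, and all numbers here are made up.

    REGISTRY = {1: "alice", 2: "bob", 3: "carol"}   # component NFT id -> developer
    USAGE = {1: 40, 2: 50, 3: 10}                   # usage attributed this epoch
    EPOCH_EMISSION = 1000.0                         # reward budget for the epoch

    total = sum(USAGE.values())
    rewards = {REGISTRY[cid]: EPOCH_EMISSION * use / total
               for cid, use in USAGE.items()}
    print(rewards)   # {'alice': 400.0, 'bob': 500.0, 'carol': 100.0}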

39:24

On the other side of this is

39:28

the question of capital. So, obviously,

39:31

the developer rewards come

39:34

partially from, you know, emissions, but over

39:36

the longer term, they will

39:39

have to come from productive agent systems,

39:41

which the DAO kind of operates,

39:43

I'll get to that in a moment. But

39:46

even to get you there, basically,

39:48

you need a bootstrapping

39:50

mechanism whereby the people can actually

39:52

use this OLAS token.

39:54

And so that's where bonding comes in.

39:56

So whenever the protocol is deployed

39:59

on a new chain, then effectively

40:01

there's a bonding mechanism

40:04

in place whereby people who use

40:06

or believe in the protocol can

40:08

provide liquidity in this token

40:11

and the chains token and

40:14

then return that LP token

40:16

to the protocol and receive effectively

40:19

OLAS. And what this

40:21

does is that basically you have

40:23

a very decentralized way of

40:26

bringing that utility token to more chains.
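A minimal sketch of the bonding flow under invented assumptions (a fixed LP price and a flat discount; the live mechanism prices bonds differently):

    def lp_value_in_olas(lp_amount, olas_per_lp=2.0):
        # Real bonding would price the LP token from the pool's reserves.
        return lp_amount * olas_per_lp

    def quote_bond(lp_amount, discount=0.10, vesting_days=7):
        # Bonder deposits OLAS/native-token LP; the protocol keeps the
        # liquidity and pays out OLAS at a discount after vesting.
        olas_out = lp_value_in_olas(lp_amount) * (1 + discount)
        return {"olas_payout": olas_out, "claimable_after_days": vesting_days}

    print(quote_bond(100.0))   # {'olas_payout': 220.0, 'claimable_after_days': 7}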

40:28

Why do you want it on more chains? Because

40:30

that's the third bit, which is staking. So

40:33

once I obviously have like code

40:35

which does something useful, and

40:38

now I want to be able to

40:40

actually operate these

40:42

agents. And as we noted before, you

40:44

can operate them like decentralized systems. It

40:46

ends up looking a bit like operating

40:48

a blockchain. So

40:51

you effectively then have the staking system

40:54

where the operators of the nodes

40:56

in any given agent

41:00

can basically earn these

41:03

staking rewards. And in

41:05

order for that to run smoothly, it

41:07

helps when the token is basically accessible

41:09

on that chain directly. So

41:12

that's sort of the three mechanisms

41:14

like staking being the

41:16

last, and then the circle can sort of

41:18

close.
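And a sketch of the staking leg under the same caveat: this is one plausible shape, not the deployed contract. A staked service only earns its epoch reward if a checkpoint shows it actually did its work on-chain.

    def checkpoint(txs_this_epoch, required_txs=20, epoch_reward=50.0):
        # Reward is released only if the staked agent service was demonstrably
        # active on-chain during the epoch; otherwise it earns nothing.
        return epoch_reward if txs_this_epoch >= required_txs else 0.0

    for txs in (35, 4):
        print(f"{txs} txs -> {checkpoint(txs)} OLAS")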

41:22

And then, well, that's a lot of things, right? So

41:27

let's try to recap. So right

41:31

at the layer of building the agent, what

41:35

you're saying is like, okay, your framework,

41:39

in a sense, like there are many

41:41

frameworks which we can be

41:43

used to build agents, but

41:45

the differentiation of

41:48

the autonomous framework is

41:50

that it contains,

41:53

the differentiation is on two dimensions. The

41:55

first dimension is it contains components.

42:00

that would make blockchain

42:02

integration and blockchain transaction

42:06

creation easy. So this

42:08

could involve things like, okay,

42:12

an agent if it needs to interact with a blockchain, it

42:14

needs to store a private key. So maybe it

42:17

needs some components for the securing of

42:19

private keys. It needs components

42:21

by which it can read blockchain data. It

42:24

needs components by which transactions can be

42:26

sent and it can figure

42:28

out that they were confirmed or not. So

42:31

there are like some standard pieces of logic

42:34

that are used in a

42:36

lot of different places, maybe even exchanges use it

42:38

in their hot wallets or things like that. And

42:41

you're going to build like the standard

42:43

versions of those components and integrate them

42:45

into your framework. So a developer doesn't

42:47

have to worry about those

42:49

aspects. Then the second

42:51

thing your framework is providing is it

42:56

is providing some kind of cognitive

42:58

architecture. What

43:01

we might mean by that is that you

43:04

want the agent to basically

43:07

apply its intelligence, but you want it

43:09

to, well, you want to

43:11

constrain its intelligence in a certain manner,

43:13

which is like, you

43:16

know, for a particular problem always

43:18

think always create a tree and

43:21

reason through the nodes of a tree

43:23

or in this particular problem create

43:26

a line and like there are nodes and reason

43:28

through all of these nodes. So particular

43:31

problems might have particular ways of

43:35

thinking that if we constrain the

43:37

agent to think in that particular way, it

43:39

will produce better outputs. And so you are

43:42

you are providing a way to develop

43:45

against some of these

43:48

of these like constraints, right? Like so

43:50

a developer can put these constraints into

43:52

their system and then they

43:54

could use it. Those

43:57

are on the framework itself.

43:59

Then what you're saying is actually like

44:02

the network itself.

44:05

So now we jump from the framework deals with the

44:08

problem of how do you build a single agent or

44:10

how do you build two or

44:13

three agents and they

44:15

coordinate with each other. But

44:17

then you jump to the network level

44:19

where it's the problem of ultimately you

44:21

want thousands of agents

44:23

to be built. And

44:28

there's the kinds of problems you're trying to

44:30

solve are how to provide

44:32

developer incentives for the improvement of your

44:35

agent framework itself. Yeah

44:39

so this is so if we zoom

44:41

out a bit what we ultimately want

44:43

this machine to do. Well, even

44:46

if you zoom out a bit more so the

44:49

co-ownership of

44:51

autonomous agents

44:53

and agentic AI ends

44:56

up being I

44:58

think what if you think

45:00

about like the decentralized autonomous organization is

45:02

sort of almost

45:05

like the end state of that. So

45:07

if you think about this concept of

45:09

like some organization

45:11

which we own which is in

45:13

itself autonomous and which has the

45:16

highest degree of decentralization we can

45:18

achieve then ultimately this

45:20

will be using forms

45:23

of AI and be agentic

45:25

by its definition. And so

45:29

the different angle at which to come

45:31

at this is to say okay how

45:33

can you basically coordinate all the

45:35

actors which need to make that happen

45:37

right because if it's just on chain

45:39

then you're always constrained by

45:41

what you can do on chain right. So if

45:44

you just have a smart contract then

45:46

there's always someone who has to call that

45:48

smart contract for something to happen and by

45:50

necessity you will always be limited to what's

45:52

possible on chain which I think will always

45:54

be less than what's possible off-chain.

45:57

Another way to look at it is to

46:01

say how can you create basically a protocol

46:04

like Autonolas, which allows

46:06

the creation of these kind of

46:08

co-ownable autonomous agents. And

46:10

then this means you need to coordinate a bunch

46:12

of actors. You need to coordinate those who are

46:14

developing them. That's why you have the dev incentive

46:16

mechanism. You need those who

46:19

operate them, which is around staking.

46:22

And you need those who basically provide

46:26

this liquidity

46:29

for the whole system to exist at any

46:31

given point, which is the bonders. And so

46:34

that's kind of what the role of the

46:36

protocol is, is to coordinate all these actors.

46:38

Now obviously it's

46:40

highly complex, so we should make it a

46:42

bit concrete. If you think about what

46:46

we had earlier discussed quite a

46:48

lot, the trading agent use case

46:50

with the prediction markets, then

46:52

there is the system called

46:55

the mechs inside of it, which is this third

46:58

type of agent which basically just specializes

47:00

on making the predictions. And

47:02

these kind of agents are

47:05

basically something which you

47:08

can imagine running as this decentralized

47:10

system which the Autonolas DAO itself

47:12

then can own. So

47:14

you effectively then have a situation where the

47:17

Autonolas DAO can provide on

47:19

an ongoing basis this kind

47:21

of off-chain system with configurable

47:24

degrees of decentralization which offers

47:27

these services to other agents in the

47:29

Autonolas ecosystem. And then that allows you

47:32

to sort of bootstrap

47:34

this over time. That

47:37

makes sense. Can

47:39

I think of it like this? So

47:41

today we have a few different chains

47:44

that are trying to build what

47:47

I call like puppet accounts or

47:50

delegated account control. Those

47:53

are two interchangeable words, but

47:55

the essential idea behind it

47:57

is: the NEAR

48:00

network is trying to build this. Okay, so

48:02

so NEAR's idea is that, okay,

48:04

there's a blockchain with a sort of with

48:06

a set of validators. And

48:09

what if this blockchain itself could

48:11

own a Bitcoin address, not

48:14

only a Bitcoin address, but another Ethereum address.

48:17

And so from the

48:19

perspective of Bitcoin, it's like a normal

48:21

address with a private key, but

48:26

the private key is actually split

48:28

into the validator set of NEAR

48:30

by some really

48:32

smart cryptographic protocol. So

48:34

Bitcoin thinks this is like a single,

48:38

it's a normal address, a single

48:40

individual, but in reality underneath, it

48:42

is actually the validator set of

48:45

NEAR that controls that account. And

48:47

in a sense that you can say that okay,

48:50

that the NEAR network by itself is kind of

48:52

like owning this, owning this

48:54

address on Bitcoin and this other address on

48:56

Ethereum.
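As a toy for that idea of one address jointly controlled by many validators, here is n-of-n additive key sharing in Python. Real systems of the kind Meher describes use threshold ECDSA and MPC, which this XOR toy only gestures at.

    import secrets

    KEY_BITS = 256
    private_key = secrets.randbits(KEY_BITS)   # the account key no one may hold

    # Split into 4 shares: 3 random ones plus one that XORs back to the key.
    shares = [secrets.randbits(KEY_BITS) for _ in range(3)]
    xor_of_random = 0
    for s in shares:
        xor_of_random ^= s
    shares.append(private_key ^ xor_of_random)

    # All shares together reconstruct the key; any proper subset reveals nothing.
    reconstructed = 0
    for s in shares:
        reconstructed ^= s
    assert reconstructed == private_key
    print("4 validators jointly control 1 address")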

48:58

If you start with that point, then can you kind

49:01

of you layer on the idea that is

49:03

it possible that okay, that

49:06

there be a way by which a network

49:09

could own not only an

49:11

address, but

49:13

an address plus a

49:16

piece of like running code. And

49:20

that running code is one an

49:22

economic agent, which is that

49:24

running code is an Autonolas

49:27

framework agent. So it has

49:30

an address and it has some

49:33

kind of like structured and unstructured

49:35

logic. So you can actually message

49:38

it, give it tasks

49:40

and expect responses. And

49:43

so Autonolas is trying to do that.

49:45

Ultimately, like how do you have a

49:47

DAO that can own an address plus

49:50

some kind of code? And

49:54

so and it owns both

49:56

of those components together and then

49:58

it can also sort of make

50:00

money, make money through it. That

50:04

is what you're seeking to

50:06

achieve. Well,

50:09

this is the aim. So we would call this

50:11

like a protocol-owned service. So it's basically, if

50:14

we go back to this conversation

50:17

from earlier: if you have an existing

50:19

validator set, so basically you

50:21

could say, okay, well, let's just do this all on

50:24

chain. And like, let's just somehow modify the chain

50:26

so it can sort of run

50:28

long running tasks. And then you will find

50:30

very quickly that there's all these arguments as

50:32

to why that cannot work. Like you need

50:35

an application specific chain in order to have

50:38

long running tasks. Because if you have a public

50:40

chain, it immediately becomes basically

50:42

the attack vector for denial

50:44

service, distributed denial service, because

50:47

you can just sort of pre-empt

50:49

future blocks indefinitely by scheduling

50:52

tasks for future blocks now.

50:55

So effectively, whatever NEAR is

50:57

doing there, I don't know in too much detail,

51:00

but there's limits to kind of putting

51:02

too much on a public blockchain,

51:04

which is meant to run repeatedly

51:07

or scheduled basically.

51:10

So you need to do it on some sort

51:12

of application specific layer. And now you could say,

51:14

okay, well, we can just run some sort of

51:17

layer two or layer three or layer

51:19

and or whatever. And ultimately there's mostly

51:21

about having sort of, again,

51:24

an architecture where you

51:26

can basically then

51:28

inherit some degree of security, right?

51:31

And execute some of

51:33

those instructions. And in a way, I think

51:36

ultimately, in

51:38

the future one day, an autonomous

51:41

service will look quite similar to

51:44

an app-specific rollup potentially,

51:46

because it will basically have

51:48

a lot of degrees of verifiability and

51:51

it will potentially even inherit

51:53

some of the security as a result

51:55

from the chains on which it acts.

51:58

But it will have these

52:00

more autonomous, long running tasks here,

52:03

which is executing, which is different

52:05

from like a public blockchain where

52:07

I always need to basically

52:10

at any given time offer these blocks,

52:12

which accept a certain amount of basically

52:15

bidding into them. And then once they're

52:17

full, therefore, right, I can't guarantee you

52:19

that I'll execute you whereas an autonomous

52:22

service can do that, it can say,

52:24

well, I'm application specific. Ever

52:27

so often, I'm doing exactly the thing. Okay,

52:31

so I feel like this

52:34

has become super

52:36

abstract. And maybe let's

52:39

kind of make some examples,

52:41

right? So one of the

52:44

main topics that kind of you

52:46

posit this will be used for

52:48

in the short

52:50

term is optimization

52:52

of DAOs. Can

52:54

you give us some examples how kind

52:57

of things work in DAOs today, and

52:59

how you see them improving by kind

53:01

of putting these autonomous

53:04

agent systems on

53:06

top of them or kind of enmeshing them? Yeah,

53:10

so actually, there's an answer to this

53:12

question in the sense that originally we

53:14

thought when we started out with the

53:17

fact that like DAOs are this primary

53:19

customer for that, right, they have various

53:21

off-chain processes, which are often quite centralized,

53:23

that help them make them more decentralized,

53:27

and more autonomous, both things which

53:29

are in their name. Turns out

53:31

from a go-to-market point of view,

53:34

it's not particularly great, because a lot

53:36

of DAOs have actually a lot of

53:38

things to do. And they're maybe not

53:40

the best organized entities always. And so

53:42

it takes a lot of time. And

53:45

you're not getting to the goal very

53:47

fast. You also need to coordinate a lot of

53:49

actors by the definition of it. And

53:52

so actually, what we noticed is

53:54

that, and

53:56

we still believe in this, and I'll talk

53:58

about an example later, that it's better

54:00

to focus on problems

54:03

we see in our own DAO and make

54:05

them as autonomous and decentralized and or

54:08

just build basically users

54:10

for other decentralized protocols.

54:13

So what I mentioned earlier the use

54:15

case these autonomous

54:17

agents are basically users of Omen, users of

54:20

Gnosis, users of Safe, you know, they have

54:22

done around 70% of

54:25

all Safe transactions on Gnosis since summer and

54:28

like on a weekly basis

54:30

basically they've done hundreds of thousands

54:32

of transactions which basically benefit

54:34

these protocols on which they are deployed

54:37

and obviously themselves as well because they're

54:39

profitable. Now

54:42

an example which I like because

54:44

it's very easy to understand which

54:46

can apply to many DAOs

54:48

and which they can adopt quite easily is the Governatooorr:

54:51

it was a bit of a

54:53

joke project which is sort of

54:56

slowly maturing basically,

54:58

it's built on the autonomous

55:00

service stack which

55:02

Autonolas offers. What

55:06

it does is it basically replaces a

55:08

human Delegate in a

55:10

DAO. So if I obviously have tokens and

55:12

I don't always want to vote I could

55:14

delegate them to someone I think will vote

55:16

in my favor and with

55:19

my intent, and so on. And we

55:22

implemented that in code. So basically there's

55:25

an autonomous service which continuously

55:27

watches those DAOs for which it

55:29

holds delegated tokens and then when

55:32

it sees those proposals, either on Snapshot or

55:34

on-chain, it can then vote

55:38

on those proposals. And obviously, in order to do

55:40

that, it needs to use a

55:43

large language model to actually read the proposal and

55:45

reason about it. It

55:47

also needs it in order to make sense of

55:49

the preferences it is given, and sort

55:51

of bring those two things together to arrive at a

55:53

voting decision. But the actual voting, coming

55:56

back to structured versus unstructured, is

55:58

a very structured process. There's zero point

56:00

in having the agent figure this out every

56:02

time because it will probably fail most of

56:04

the time. Instead, you just have that part

56:07

hard-coded, right? So basically, this is a

56:09

nice example of what we were discussing

56:11

earlier. In very abstract ways, you have these

56:13

sort of structured bits,

56:15

which are defined very well. And

56:18

then you have these unstructured parts

56:20

of the logic where you're

56:23

looking at this proposal, making sense of it and so

56:25

on.
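(To make the structured-versus-unstructured split concrete, here is a minimal Python sketch, not the actual Governator implementation. The `llm` argument and its `reason` method are hypothetical; only the boundary matters: the model handles the open-ended reading, while vote casting stays hard-coded.)

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    proposal_id: str
    text: str

VALID_CHOICES = {"FOR", "AGAINST", "ABSTAIN"}

def decide_vote(proposal: Proposal, preferences: str, llm) -> str:
    # Unstructured part: a large language model reads the proposal and the
    # delegator's stated preferences and reasons its way to a decision.
    # `llm.reason` is a hypothetical call, not a real library API.
    answer = llm.reason(
        f"Preferences: {preferences}\n"
        f"Proposal: {proposal.text}\n"
        "Answer with exactly one of: FOR, AGAINST, ABSTAIN."
    ).strip().upper()
    # A malformed answer falls back to abstaining rather than guessing.
    return answer if answer in VALID_CHOICES else "ABSTAIN"

def submit_vote(proposal: Proposal, choice: str) -> None:
    # Structured part: the mechanics of casting the vote (a Snapshot message
    # or an on-chain transaction) are hard-coded, never left to the agent.
    print(f"casting {choice} on proposal {proposal.proposal_id}")
```

(In practice, the structured half is where the guarantees live: however the model reasons, the only actions it can trigger are the ones spelled out in code.)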

56:27

Are there new attack vectors that are introduced here? So

56:29

basically, if I kind of trust an

56:35

autonomous agent to kind of

56:37

make voting decisions for me,

56:39

I rely

56:41

heavily on the fact that

56:44

this autonomous agent actually

56:48

will act in the way that I

56:50

would act if I were to look

56:52

into it, right? So how

56:54

do you make sure that

56:56

the agents actually do

56:59

what they are meant to do on

57:01

the face of it? That's

57:04

a great question. So

57:06

there's two parts to this. Well, many,

57:09

but I'll focus on these. One

57:11

is the preferences. So

57:14

that's where Governator does fall short,

57:16

it doesn't actually allow you to express

57:18

very rich preferences at all. And that's

57:20

just a matter of the time and

57:23

effort, which has gone into this part of the

57:25

application. But one side on which it

57:28

excels is basically the

57:31

certainty that it implements

57:33

the decisions, the decision

57:36

logic, which it is meant to implement. So if you

57:38

think about a human, if you delegate to them, you

57:40

basically have no clue, right? It's all reputation-based. If

57:43

you were to delegate to a single

57:45

LangChain agent or a GPT-based agent, well,

57:48

it really depends on the developer:

57:50

are they even running it, and if they're running it, are they

57:52

running the code they told you they're running, right?

57:54

All this kind of stuff. Whereas

57:57

with an autonomous service, which has multiple

57:59

nodes operated by different operators, you

58:02

then start getting into a

58:04

similar threat model to the one you

58:06

have with, you know, a

58:08

Cosmos chain, basically, or any

58:10

other sort of Byzantine fault

58:13

tolerant system whereby you have

58:15

to reason about okay how

58:17

many operators are

58:19

there, how decentralized is it and

58:21

then is the majority of them honest.

58:23

If the majority of them is honest then

58:26

you have very high

58:29

security guarantees, because effectively what

58:31

happens is that each one of

58:33

them has to agree or the

58:35

majority of them has to agree and each one

58:37

of them uses these models so

58:39

you're not even relying on a single model

58:41

instance which is another issue with large language

58:44

models: they're not necessarily deterministic at all. They

58:46

sometimes can be configured to be, but some

58:49

of them can't even be configured

58:51

to be deterministic. And so then

58:55

having multiple agents each come to independent

58:57

evaluations and then sort of pool that

58:59

decision-making and then agree is actually

59:01

like a massive improvement. So on

59:03

that dimension I would say Governator is already

59:05

better than a human because a human could

59:08

you know do whatever and here you have

59:10

like a node system implementing that decision logic.
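(A sketch of that pooling step, under the assumption that each operator's agent independently produces a possibly non-deterministic evaluation and the service only acts on a majority. The function name is illustrative, not from the Autonolas stack.)

```python
from collections import Counter

def pooled_decision(evaluations: list[str]) -> str | None:
    # Each entry is one operator's independently derived vote. Because LLM
    # outputs may differ run to run, the service only acts when a strict
    # majority of operators agree; otherwise it does nothing.
    if not evaluations:
        return None
    choice, count = Counter(evaluations).most_common(1)[0]
    return choice if count * 2 > len(evaluations) else None

# Three of four independent agents agree, so the service proceeds with FOR.
print(pooled_decision(["FOR", "FOR", "AGAINST", "FOR"]))  # FOR
# A 2-2 split produces no action at all.
print(pooled_decision(["FOR", "FOR", "AGAINST", "AGAINST"]))  # None
```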

59:14

Yeah so I think kind of what we

59:16

often try to do in these episodes is

59:19

to understand how exactly

59:21

things work and I think this was more

59:23

an episode about kind of talking

59:26

about why it would make sense to have something

59:28

like this. So I

59:30

want to change gears a little bit

59:32

here and kind of ask

59:36

about concerns you may have

59:39

about this. So kind of like if

59:42

you look at AIs the way

59:44

that they have improved in the last

59:46

couple of years at least kind of

59:48

like in the popular mind.

59:51

I know that kind of it's been a long time

59:53

coming and so on but it's

59:55

really impressive right. It seems

59:57

absolutely certain that they will surpass

1:00:00

human ingenuity

1:00:02

and capacity on all

1:00:04

kinds of axes in

1:00:07

the very short term. And

1:00:09

if you talk to AI safety

1:00:11

people, often

1:00:15

they will tell you they're

1:00:19

not so concerned because you can always switch it

1:00:21

off. And now pairing

1:00:23

it with the technology that by

1:00:25

definition, no one can turn

1:00:27

off. Does that worry you? Yeah,

1:00:31

I think it's a

1:00:34

good topic to discuss and one

1:00:36

we obviously will not be certain

1:00:38

about. I

1:00:41

think the first

1:00:43

thing which I strongly

1:00:46

believe in is that it's very,

1:00:48

very unlikely that there will be

1:00:50

just sort of one model which

1:00:52

kind of runs away and like

1:00:54

takes over. And that's just even

1:00:56

in cases very favorable to the

1:00:58

sort of super

1:01:00

intelligence arising and being able to consume

1:01:02

a lot of resources. There's

1:01:06

like geographical, physical sort of

1:01:09

constraints which make it unlikely.

1:01:13

I think what's much more likely

1:01:16

is that there will be

1:01:18

a situation where certainly a lot of

1:01:20

centralized players will own very,

1:01:23

very powerful models. And

1:01:25

so I think actually what we should be most

1:01:28

concerned about is the

1:01:31

economic impact of this kind of

1:01:34

change in technology on

1:01:37

people rather than these hypotheticals

1:01:40

where some software slays us all. I

1:01:42

think it's important to kind of keep

1:01:44

it in the back of our minds.

1:01:48

Like with every technology, be mindful as to

1:01:50

when these dangers become

1:01:54

more apparent, so that we kind of think about

1:01:56

them. But like the much, much bigger concern

1:01:58

I think is economic.

1:02:01

On economics, if you

1:02:03

listen to someone like Sam

1:02:05

Altman, it's this naivety

1:02:07

of the economics which really

1:02:09

riles me up, like, they

1:02:12

all go around and say it. I

1:02:14

mean, by all means, they're great,

1:02:16

you know, entrepreneurs and create great products

1:02:18

and so on, but everyone has their weak points.

1:02:21

I think here, it's like this kind of naivety

1:02:23

around, just because I create

1:02:25

better technology, everyone will be better off. Well, that never

1:02:27

worked out that way. The reality

1:02:29

is that it's always a distribution question.

1:02:31

And if the distribution of

1:02:33

access to these kinds of models sucks, and of

1:02:37

people's ability to use them for

1:02:39

their lives and improving

1:02:41

their own situation, then it

1:02:44

doesn't matter how good the best model

1:02:46

is; there will still be even

1:02:48

bigger disparities in sort of

1:02:50

income, health, wealth around

1:02:52

the globe. And I think that's what

1:02:55

we should all be really worried about. And that's

1:02:57

kind of the mission of our entire

1:02:59

business and the mission of Autonolas is about

1:03:01

creating these kinds of systems, which

1:03:03

can be co-owned, so that

1:03:06

groups can share these systems;

1:03:08

that doesn't mean that all problems are

1:03:10

solved, because now, you know, these groups

1:03:12

could again be better off

1:03:14

than others. And you still have these kind

1:03:16

of distributional issues, but at least it's a

1:03:18

start. So I'm worried about the economic impact

1:03:20

of this much, much, much, much more than

1:03:23

these kind of hypotheticals, which I think

1:03:25

are interesting for dinner conversations, but

1:03:27

mostly miss the

1:03:29

point. Having said that, I think,

1:03:32

you know, let's say we

1:03:34

fast forward, there's like multiple

1:03:36

generations of advances and like,

1:03:39

even models which basically have the agent

1:03:42

in the model, sort of. Some

1:03:44

call them large action models now,

1:03:47

I saw, and others call them differently.

1:03:49

And, you know, Open

1:03:52

AI has their reinforcement

1:03:54

learning approach with large

1:03:57

language models. There's different attempts; whatever it will

1:03:59

be in the ultimate state. And

1:04:01

if we imagine that to run in a sort

1:04:03

of blockchain-like way, where

1:04:06

it sort of has a bad intent, and we can't

1:04:08

turn it off, yes, I think it's something we should

1:04:11

keep in the back of our mind and think about

1:04:15

solutions. But I think the flip side

1:04:17

of this is, again,

1:04:20

that if this model

1:04:22

is used for good, then having

1:04:25

transparency and kind

1:04:29

of censorship resistance can bring much

1:04:32

good as well. So I

1:04:34

mean, let's take it one step at a time,

1:04:37

basically, and focus on the problems which we for sure

1:04:39

know will happen, which I think are about distribution. I

1:04:42

feel like we've touched on many,

1:04:45

many things. If people want to

1:04:47

learn more, or kind of build

1:04:50

their own agentic systems

1:04:53

on Autonolas, or kind of just use

1:04:56

systems that are already there, where

1:04:58

should we send them? Yeah,

1:05:02

so we have this thing called the

1:05:04

Academy. And that's a great start. So

1:05:06

that's for people who want to basically

1:05:08

have more like support as they're building,

1:05:11

we have the docs, all

1:05:13

of that can be found on the website,

1:05:15

olas.network. And then if you follow

1:05:18

Autonolas on Twitter as well,

1:05:21

there's like weekly updates, so

1:05:23

I think those two places are the best.

1:05:26

Perfect. I am so

1:05:28

curious to see how this is going to

1:05:30

evolve. I think we should pencil

1:05:32

in kind of a follow-up soonish

1:05:37

just to see what people

1:05:39

build and kind of how

1:05:41

it actually changes things because the

1:05:44

opportunity space here is absolutely

1:05:48

enormous. Yeah,

1:05:50

let's do that. Yeah,

1:05:52

it's been a pleasure to have you on. Thank

1:05:55

you very much. It was a pleasure

1:05:57

being on. Thank

1:06:00

you for joining us on this week's episode. We

1:06:02

release new episodes every week. You

1:06:04

can find and subscribe to the show on

1:06:07

iTunes, Spotify, YouTube, Soundcloud, or wherever you're listening

1:06:09

to podcasts. And if you have

1:06:11

a Google Home or Alexa device, you can tell it to

1:06:13

listen to the latest episode of the Epicenter podcast. Go

1:06:16

to epicenter.tv slash subscribe for a full list of

1:06:18

places where you can watch and listen. And while

1:06:21

you're there, be sure to sign up for the newsletter

1:06:23

so you get new episodes in your inbox as they're

1:06:25

released. If you want to

1:06:27

interact with us, guests, or other podcast listeners, you

1:06:29

can follow us on Twitter. And please leave us

1:06:31

a review on iTunes. It helps the people behind the

1:06:33

show, and we're always happy to read it. So

1:06:36

thanks so much. And we look forward to being back next

1:06:38

week.
