ChatGPT turns a history book into a game with 400 questions! Techniques: roleplaying, shot prompting

Released Thursday, 27th April 2023

Episode Transcript

0:30

So my name is Lee Chazen and

0:32

I was originally a teacher.

0:34

I've worked in a lot of different professions. I

0:36

got into content strategy because

0:39

a friend told me about seven or eight years ago

0:41

that what I was doing is actually content

0:43

strategy and I wasn't familiar with the term back

0:45

in 2013 or 14. Did

0:48

some work in Silicon Valley

0:50

as a chief content strategist and

0:52

now I do consulting on my own. And

0:55

when ChatGPT came around

0:57

about two months ago, I said, oh my

0:59

God, this is gonna change everything. And so I pivoted

1:02

pretty hard into prompt

1:04

engineering and I think

1:06

I read some article in the New York Times

1:09

that let me know that this was going to be huge

1:11

and it was okay to be a creative

1:14

liberal arts sort of person and

1:16

do this work like you didn't

1:18

have to be necessarily technical

1:21

or have a background in computer science or

1:23

programming, and it's working just

1:26

fine.

1:27

That's awesome. What

1:29

would you say were the first steps

1:31

that you took to learn prompt

1:33

engineering?

1:35

I think it was two and a half months ago I spent

1:37

the next three or four days just experimenting

1:39

with prompts, any type of prompt

1:41

I could think of. I realized

1:44

some people were going a weird direction with

1:46

this. Let's try to undermine

1:48

the system in some way. Let's

1:50

try to fool it into being something that it's not.

1:53

And with me, I was mainly

1:55

trying to get it to finish written

1:57

products that I had started,

2:00

but I just needed that extra kind of boost,

2:02

that personal editor. So

2:05

my first, yeah, my first few

2:07

prompts were I just need help

2:09

with my content. And

2:13

from there I just thought, wow, I don't have enough

2:15

hours in the day. I wanna do this all day long.

2:18

That's amazing. Just give us a little more sense

2:20

of, when you say helping with content,

2:23

what kind of content? Like books or podcasts

2:25

or poems, song lyrics. I

2:27

don't know.

2:28

Yeah all of that. And also scheduling,

2:30

for example, today I wrote down

2:33

all the major categories

2:36

of things I need to get done in a day. And

2:38

then I could easily just create a prompt saying,

2:41

All right. I need some creative time, some administrative

2:43

time, some networking time

2:46

and then some promotional time for my consulting

2:48

operation. I went on and on and I gave each

2:50

one a percentage. I could create a prompt

2:53

now that would say

2:55

divide my day, and divide my week into

2:57

segments according to these percentages.

2:59

And tell me how I can accomplish

3:02

all these things in a day, and I will get that

3:04

as a response. So there's a lot of just

3:06

real practical, functional stuff

3:09

that I was doing initially. So

3:11

if you're like a content strategist

3:13

a teacher, anyone who puts up website

3:15

content or social media content that,

3:18

those were like the first things that I started doing.
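
A minimal sketch of that kind of time-blocking prompt, assembled in Python; the category names and percentages here are illustrative stand-ins, not the ones Lee actually used:

```python
# Build a day-planning prompt from weighted categories (illustrative values only).
categories = {
    "creative work": 40,
    "administrative tasks": 20,
    "networking": 20,
    "promotion of the consulting practice": 20,
}

bullet_lines = [f"- {name}: {pct}% of my working time" for name, pct in categories.items()]
prompt = (
    "Divide my day, and divide my week, into segments according to these "
    "percentages, and tell me how I can accomplish all of these things in a day:\n"
    + "\n".join(bullet_lines)
)
print(prompt)  # paste into ChatGPT, or send it through the API
```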

3:20

For example, as a former teacher

3:23

who became what they call an edupreneur,

3:25

so I produce educational products and

3:28

things, and I needed

3:30

to finish a lot of that. And

3:32

one day sitting there, and this is the prompt

3:34

that I wanted to share with people. I thought

3:37

what is the, just the coolest,

3:40

most amazing, prompt I can come

3:42

up with that will solve, like

3:45

a teacher's problems. And

3:47

what I came up with was, now it's over 500

3:49

words, but initially it was 447

3:51

words. And it was designed

3:54

based around this game that I

3:56

had invented with my students.

3:58

Back when I was teaching social studies, we invented this game

4:01

called Global Challenge. The idea was

4:03

to learn the contents of an entire world

4:05

history textbook, all 800 pages

4:07

in the form of a game. But in order

4:09

to do that, you need game questions.

4:11

So the prompt was, and

4:14

I'm not gonna read the whole thing now because it would take

4:16

too long, but it was,

4:19

in the role, you're this omniscient author

4:22

master of all content related

4:24

to world history, current events, high school curriculum.

4:26

So I gave it a role, that meta prompt.

4:30

And then I got into, you're gonna design

4:32

20 questions, seven categories,

4:34

six different levels of learning to cover

4:36

the span of recorded human history.
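
The full 447-word prompt isn't read out in the episode, so the sketch below only paraphrases the structure Lee describes, a role plus the 20-questions / seven-categories / six-levels specification, as chat messages for the OpenAI Python client. The exact wording is an assumption, not his prompt:

```python
from openai import OpenAI  # assumes the openai package is installed and an API key is configured

client = OpenAI()

role = (
    "In this role, you are the omniscient author and master of all content "
    "related to world history, current events, and high school curriculum. "
    "You are wise, but you are also funny at times."
)
task = (
    "Design 20 questions for the world history game Global Challenge. "
    "Spread them across seven categories and six levels of learning, covering "
    "the span of recorded human history. Start at the beginning of the "
    "textbook and move progressively, 20 questions per chapter."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": role},
        {"role": "user", "content": task},
    ],
)
print(response.choices[0].message.content)
```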

4:40

And it's, at first it told me that

4:42

it can't do this all in the

4:44

character limit, but I would have to just

4:46

keep hitting continue

4:48

and let me pause you for a second. That is actually

4:50

one helpful trick for people

4:53

when it runs out of character output, as

4:55

it so often does. You

4:57

can tell it, 18 different ways,

4:59

but the simplest one is just continue

5:02

and it will usually, mind you, pick

5:04

up where it left off.
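
Driving the model through the API instead of the chat window, the same trick can be automated: check whether the response stopped because it hit the output limit and, if so, ask it to continue. A rough sketch, assuming the openai Python client:

```python
from openai import OpenAI

client = OpenAI()
messages = [{"role": "user", "content": "Generate the first 20 game questions."}]
full_output = ""

while True:
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    choice = response.choices[0]
    full_output += choice.message.content
    if choice.finish_reason != "length":  # "length" means the reply was cut off
        break
    # Keep the partial answer in context and ask the model to pick up where it left off.
    messages.append({"role": "assistant", "content": choice.message.content})
    messages.append({"role": "user", "content": "continue"})

print(full_output)
```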

5:06

Yeah, and I just thought, how

5:08

much time do I have to keep hitting continue

5:10

because this is going to create 400

5:14

multiple choice and short answer questions.

5:17

Across the span of different intelligence

5:19

types. Cause I built that into the prompt too.

5:22

So there'd be something for a math logic

5:24

kind of person, a visual spatial

5:26

sort of person. And,

5:29

this could revolutionize everything. I

5:31

don't know what textbook publishers are

5:33

thinking right now, but this

5:36

could replace the textbook

5:38

because the corpus text, the body of all

5:40

content that goes into ChatGPT

5:43

will cover pretty much

5:46

everything. It's all been

5:48

if it's digital, if it's put out there in some form

5:50

on the internet, apparently ChatGPT

5:52

will find it

5:53

Particularly with this prompt, have you run into

5:56

problems with hallucination?

5:57

Yeah. That can be a problem. And

6:00

so you have to be

6:02

somewhat knowledgeable to

6:05

do this to begin with because you're

6:07

gonna need to fact check things. And

6:12

I don't know what the percentage is, the accuracy

6:14

percentage, but my initial guess

6:16

is like 95% or 97%

6:19

accurate, just based on what I've seen so far.

6:22

But here's the kind of the flaw of

6:24

being a human being is that when

6:26

something sounds authoritative

6:28

and is using a certain kind of

6:30

vocabulary, you just automatically think

6:33

it is correct,

6:38

and so there's a possibility that disinformation

6:40

or misinformation can

6:43

happen. You just gotta be on the lookout

6:45

for it.

6:47

That makes sense. Walk us through this output.

6:49

So I had to create a point

6:51

value based on the

6:54

level of learning. So

6:56

there's this thing in education called Bloom's Taxonomy

6:59

of Learning, where if you do the

7:01

simplest thing where you just recall some information,

7:04

That's like a level one. When

7:06

you're up at the top and synthesizing

7:08

it and applying it, evaluating

7:10

the information, manipulating it into something different,

7:13

that should be the highest point value because now you're

7:15

really doing some heavy thinking. You'll

7:18

see in the right column there, it'll say

7:20

Bloom's level and the number

7:22

of points they're gonna get. So automatically

7:25

this thing has, I can't even

7:27

tell you how long this would take a teacher to

7:29

do. And when I was initially

7:31

doing this with the students, the students wrote the questions

7:33

and we created the point values and the charts and everything.

7:36

This was like a two-month-long project.

7:39

Wow, two

7:41

months.

7:42

To go through and

7:44

curate or cull whatever

7:46

the word is, all of the knowledge

7:48

from the textbook and turn it into questions.
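
The point-value scheme described a moment ago is simple to express: Bloom's revised taxonomy ordered from recall up to creation, with higher levels worth more points. The specific point values below are an assumption for illustration, not necessarily the ones from Lee's game:

```python
# Bloom's revised taxonomy, lowest to highest, with illustrative point values.
BLOOMS_POINTS = {
    "remembering": 1,
    "understanding": 2,
    "applying": 3,
    "analyzing": 4,
    "evaluating": 5,
    "creating": 6,
}

def points_for(level: str) -> int:
    """Return the points awarded for a question at the given Bloom's level."""
    return BLOOMS_POINTS[level.lower()]

print(points_for("Creating"))  # 6: the heaviest thinking earns the most points
```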

7:50

But here's the cool thing, this is why

7:52

I think teachers shouldn't be afraid of ChatGPT

7:55

because I think the question is more

7:57

important than the answer. And if students can form

7:59

good questions and do this themselves and

8:01

create great prompts, then

8:04

that means they're learning. If

8:06

you know what to ask about a subject, it means

8:08

you know the subject. If you don't know what to ask,

8:11

then you haven't read enough.

8:15

That totally makes sense, and that's a nice

8:17

way to flip that discussion people have

8:19

of, every time people freak out

8:21

about a technology in education, it's

8:24

that the answers are provided. It's calculators,

8:26

it's Wikipedia, it's ChatGPT,

8:29

but thinking about you

8:31

need to be able to ask the right question.

8:34

That's really a good point cuz with a calculator,

8:36

yeah, it'll tell you whatever you want and it'll

8:38

do it more accurately than ChatGPT will, but

8:41

if you don't know how to ask the right question, it doesn't matter.

8:44

Yeah. And the effect that I think this is going to

8:46

have on people using ChatGPT and other

8:49

LLMs is that

8:51

it's going to improve our thinking.

8:53

It really will because it'll make you ask

8:56

better questions, which means you're gonna be thinking

8:58

at a higher level. You're

9:00

gonna want to be refining these

9:02

prompts to get better

9:04

answers. So you're not gonna be caught up in the technical

9:06

side, the coding, the getting the

9:09

technology to work for you, you're gonna be caught up in the

9:11

how do I take advantage of this superior

9:13

technology to ask the most

9:16

fascinating, purposeful,

9:19

meaningful, whatever question. And

9:21

so I did another one like this for just general

9:24

content for

9:27

anyone running a business, or

9:29

a content strategist or people working in

9:31

social media, where I

9:34

generated a prompt, so that

9:37

ChatGPT would ask the user

9:39

a series of questions. And in getting

9:42

all those answers, it

9:44

will then create a prompt. So the purpose

9:46

is to, you're writing a prompt to

9:48

create a prompt and

9:50

so based on that, they're gonna have a prompt that they can use

9:52

forever whenever their

9:55

company or the idea changes, all

9:57

they have to do is answer the questions that ChatGPT

9:59

has generated from that original prompt.

10:03

And it will give you all

10:05

of the answers or all of the

10:07

content you need for a website, for social media,

10:09

for a book, whatever it is you're working on. The

10:12

idea was once you have all this

10:15

knowledge, now create content

10:18

for the website. Give me a list of the

10:20

top 50 keywords for SEO,

10:22

so I can use that in social media, use that on

10:24

the site. Or

10:27

the actual product itself, because

10:30

I could have it build out all those different sections

10:32

into the entire book that explains

10:36

the whole concept. And

10:38

so in about 30 days time, like that business

10:40

is formed and ready to go, maybe

10:44

sooner. Imagine if I had a team

10:46

of three or four prompt engineers, we could finish

10:49

any major product

10:51

like this and just, all right guys, we've got two

10:53

weeks to do a book, a website,

10:56

and put a product out.
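
The "prompt that writes a prompt" workflow can be sketched as a two-step exchange: first have the model generate its interview questions, then feed the answers back and ask for a reusable prompt. The wording and the sample answers below are illustrative assumptions, not Lee's actual prompt:

```python
from openai import OpenAI

client = OpenAI()

instruction = (
    "Ask me a series of questions about my business, my audience, and my goals. "
    "After I answer, you will turn my answers into a reusable prompt for "
    "generating website, social media, and book content."
)

# Step 1: ask the model to interview the user.
interview = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": instruction}],
)
questions = interview.choices[0].message.content

# Step 2: supply the answers (collected however you like) and request the final prompt.
answers = "1. An educational world-history game. 2. Teachers and students. 3. ..."
final = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "user", "content": instruction},
        {"role": "assistant", "content": questions},
        {"role": "user", "content": answers + "\nNow write the reusable prompt."},
    ],
)
print(final.choices[0].message.content)
```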

11:00

Very cool. So specifically,

11:03

let's go back to this first prompt.

11:06

Can you call out some of the techniques

11:08

that you're using in this prompt, just

11:11

so the audience can understand and see how they're

11:13

being applied.

11:14

I don't have to speak perfectly

11:17

as I'm speaking to a person. I

11:19

just have to get all of the information

11:21

I want in there. So up top I say, this

11:23

is for a world history game called Global Challenge

11:26

2.0 Metamorphosis.

11:28

Then I explained what the game is. Then

11:31

I said, all right, now we're gonna move on to the

11:33

questions for this grade level, and

11:37

they should move progressively. Meaning I wanted

11:39

to start at the beginning of the book and

11:42

go 20 questions per chapter

11:44

all the way through. So

11:47

I gave it the order of operations.

11:50

With each thing, you're drilling down: what do I need

11:52

next in this process?

11:54

Now I need them to be divided into seven

11:56

categories. And here's

11:58

what those categories are like.

12:00

It needs to divide it into major

12:03

events, vocabulary, people, geography,

12:05

government, current events and trivia.

12:08

And then I said within those 20 questions,

12:10

so I keep drilling down generate

12:13

five questions in category one. Three

12:15

in categories two, three, and four.

12:19

So I break it down like, how do I want

12:21

those 20? And the remarkable thing is that

12:23

it did it perfectly on the first

12:25

run. And I did not expect that.

12:27

It really blew my mind. I've

12:29

had my mind blown so many times in

12:32

the last two months by things. And

12:34

Oh, and then I gave it specific instructions regarding

12:36

the current events because it's not enough to

12:39

just ask a student a current event question

12:41

about what's happening now. I wanted them

12:43

to, wherever possible,

12:45

I wanted ChatGPT to relate that

12:47

question to something that was happening during

12:50

that period of history that we're studying. And

12:52

then I gave it an example, which

12:55

surprisingly helps, if

12:57

you give ChatGPT, an example

12:59

of what you want, it

13:02

somehow models that.

13:09

Yeah, so that technique is called

13:12

shot prompting and not

13:14

doing it is called Zero shot Prompting cuz there's

13:16

no shot. But one shot

13:18

prompting is giving a single example, and

13:20

few-shot, or n-shot (there are a lot of different

13:22

terms for it), is when

13:25

you give it multiple examples.

13:27

And in this prompt, this is the example.
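
In API terms, the difference is just whether an exemplar appears before the real request. A minimal illustration; the exemplar text here is invented for the sake of the example:

```python
instruction = "Write a current-events question tied to the historical period we are studying."

# Zero-shot: only the instruction.
zero_shot = [{"role": "user", "content": instruction}]

# One-shot: the instruction plus a single worked example for the model to imitate.
one_shot = [
    {"role": "user", "content": instruction},
    {"role": "assistant", "content": (
        "There is a territorial dispute in the news today. Which conflict from "
        "this chapter does it most resemble, and why?"
    )},
    {"role": "user", "content": "Now write a new question in the same style for the chapter on the Roman Republic."},
]
```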

13:29

I see where you said connect

13:31

the news to something that happened in the time period

13:33

for each set of 20 questions. E.g.,

13:36

There was a conflict or territorial dispute

13:38

similar to what is happening in today's world. Did

13:41

you include a specific one?

13:42

I didn't because I was in a hurry, number

13:45

one, I also didn't wanna give a

13:47

wrong example because I haven't,

13:49

I'm not actively teaching right now, I may have forgotten

13:51

some things. So I just gave it a

13:53

general example, like

13:55

in history there's a lot, we see a lot of conflicts.

13:58

So if you can relate a current conflict to something that happened

14:00

in the past, that's gonna be really good. And

14:03

then, I asked for it. Now,

14:05

when I initially did this on GPT-3.5

14:08

and I asked for the table, it said, I can't do a table,

14:11

but I can give you the code so that you can create the table.

14:13

In GPT-4 it

14:15

produced the table, which

14:17

I couldn't duplicate on this Google Doc, but it

14:19

is awesome.
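
Requesting the table is itself just another output constraint appended to the prompt. A hedged sketch of how that request might read; the column names and question counts come from the episode, the exact wording is mine:

```python
# Output constraints: distribution of the 20 questions plus a table-format request.
constraints = (
    "Within each set of 20 questions, generate five questions in category one "
    "and three questions in each of categories two, three, and four. "
    "Return the result as a Markdown table with the columns: "
    "Question, Category, Bloom's Level, Points."
)
print(constraints)
```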

14:22

Nice. Yes. Table output is a

14:24

very nice change. One other question actually.

14:27

I am not a history teacher

14:29

or in that sort of branch of education. It

14:32

looks like you didn't define

14:35

Bloom's Taxonomy of Learning; just

14:37

from the name, it was able to go, oh, yes,

14:39

I know what that is. Is that right or was

14:41

there prompting beforehand to tell it?

14:44

No, I capitalized

14:46

it to let it know that it was a proper

14:48

term for something which may

14:50

have helped, I don't know. But

14:53

I found, so when I researched

14:55

it myself, I found there was a version

14:58

of Bloom's Taxonomy, cuz Bloom's Taxonomy goes back

15:00

to, I think to the 1950s. So

15:02

I told it to use a more

15:04

current version, which I just happened to find

15:06

by Anderson and Krathwohl,

15:09

I hope I'm saying that right, which reordered it into remembering,

15:12

understanding, applying, analyzing, evaluating,

15:14

and creating. So creating, if you create

15:16

something with the knowledge you've been given, which

15:20

is really what ChatGPT does,

15:22

that's like the highest level of learning.

15:25

It's synthesizing stuff for us. So it's

15:27

a great student, if you wanna look at these

15:30

LLMs in that way.

16:00

That is very interesting. Okay. I'm

16:04

seeing role playing as one

16:06

technique, cuz you say in this role you are the

16:08

omniscient author and master of all content

16:10

related to world history, current events

16:12

and high school curriculum. You are wise,

16:15

but you are also funny at times, things like

16:17

that. The other thing that I'm

16:19

seeing is output

16:22

constraint. So for example,

16:24

for each set of 20 questions, generate

16:26

five questions in category one.

16:29

Three questions in categories 2, 3,

16:31

4, 5, et cetera. And then the shot

16:33

prompting that I mentioned earlier. These are awesome

16:35

examples of these techniques.

16:37

Thank you. I think in future versions,

16:39

I was just thinking about this, that if

16:42

something is undefined, this is going to

16:44

be a major breakthrough when this finally happens, and I

16:46

don't think it is happening where it

16:48

asks questions like how

16:51

many of these in each category did you want me to produce?

16:53

If I did not define that now,

16:56

that would be brilliant. I don't know

16:58

when we're gonna see that. But

17:00

if it helps you refine your question,

17:02

that's gonna be amazing. Yeah.

17:06

Yeah, definitely. So

17:08

can you tell us more about the process

17:10

you took of iterating on

17:12

it?

17:13

I came up with this idea of a multidisciplinary

17:16

content matrix. So

17:19

that all you have to do is

17:21

answer the questions in each category, and

17:25

that will tell you how to put your prompt together.

17:27

Taking that even a step further, I

17:30

asked ChatGPT. This is something I'm

17:32

probably gonna put on PromptBase once I get it done

17:34

because it's really elaborate:

17:36

I'm going to have it create the

17:39

code for an app so

17:41

that this app prompts you in

17:44

say, 10 different categories. For

17:47

example, what is your product idea or service?

17:49

Who is the intended audience? What style

17:52

of writing do you want to use?

17:55

And then from that it will create

17:57

your prompt. And I know there's prompt generators out there,

17:59

so maybe this already exists. It's moving so quickly.

18:01

And so if you kind of go through a checklist,

18:04

I

18:06

think that will help people come

18:09

up with really great prompts, because

18:11

I think honestly, most people are just doing like the

18:13

one sentence, create

18:16

a poem about McDonald's,

18:18

as though written by Shakespeare or something.
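
A prompt-generator like the one Lee describes could be prototyped as a questionnaire plus a template. The three questions below are the ones he names; the rest of the sketch, including the function and sample answers, is an illustrative assumption rather than his design:

```python
QUESTIONS = [
    "What is your product idea or service?",
    "Who is the intended audience?",
    "What style of writing do you want to use?",
    # ...up to ten categories in the full version
]

def build_prompt(answers: dict) -> str:
    """Turn questionnaire answers into a reusable content-generation prompt."""
    details = "\n".join(f"- {q} {answers[q]}" for q in QUESTIONS if q in answers)
    return (
        "Using the details below, write website copy, social media posts, and a "
        "list of the top 50 SEO keywords.\n" + details
    )

sample_answers = {
    QUESTIONS[0]: "A world-history quiz game for classrooms.",
    QUESTIONS[1]: "High school teachers and students.",
    QUESTIONS[2]: "Friendly and encouraging.",
}
print(build_prompt(sample_answers))
```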

18:20

Yes. Yes. That's part

18:22

of why I created this podcast and the Mastermind

18:25

is for people to be able to go,

18:27

oh, wow, I've never thought of doing it that way. Or,

18:29

oh yeah, let me go read through all of the

18:31

different prompting techniques

18:33

and things like that. That's also

18:35

what learnprompting.org does.

18:38

They're a great resource for this kind of stuff,

18:40

I imagine you're probably one of the first 100

18:43

in this genre, right? Or on the market.

18:47

Yep. Yeah, there's quite a few people selling here

18:49

are my 70 or 500 prompts,

18:52

but actually teaching you how to

18:55

do it and the techniques for it. Yeah,

18:57

I think there's probably around 50 people.

18:59

Yeah. This is super valuable because,

19:02

that's one of the things I love about current culture.

19:05

It's like you go on

19:07

Reddit like the way I found you

19:09

and this group, and it's, let's

19:11

all help each other learn this. It's

19:14

not as dog-eat-dog as

19:16

it could be. It's oh my God, I am, because initially

19:18

the thought is like, oh, I think I've discovered

19:21

something. There's no way I'm gonna share this. But everyone's

19:23

sharing everything. I

19:26

don't know how that's gonna play out exactly.

19:29

Yeah, it's, but it's

19:31

like what I was telling a friend the other day that these are,

19:34

this is simultaneously the scariest time

19:36

ever. But also the most unimaginably

19:39

amazing time ever, and

19:42

those two things are kinda happening

19:45

simultaneously.

19:46

What are some common pitfalls

19:48

you've run into with

19:50

building prompts?

19:53

If anything, it's how to limit the

19:56

ideas, so that you don't get a jumbled

19:59

mess of a response. It's

20:01

like paring things down. I

20:04

think you have to know really exactly

20:06

what the end product is, what you want. And

20:10

it's, I've never seen more of a, like

20:12

a mirror of a tech product where

20:14

you're gonna get exactly out, maybe not

20:16

exactly out, but very close to exactly out

20:18

what you put in. And brilliant

20:21

prompts are gonna get brilliant

20:24

responses, but

20:26

like you said before, hallucinations

20:29

can occur.

20:32

Have you found any good techniques for catching

20:34

the hallucinations? Obviously, you know a lot about

20:36

history. You're gonna be like, no, Julius Caesar

20:38

did not live in Africa. I don't know.

20:41

But have you found any good techniques for that?

20:43

If it's stuff you're not as familiar with?

20:48

Yeah I don't know. I think we need

20:50

good techniques for that. I think there's probably

20:52

apps and programs out there that are going to

20:54

be designed to detect that. Yeah,

20:57

I just recommend to everyone that don't

21:00

just create a document and then post it

21:02

somewhere. Make sure: human beings still need

21:04

to read through things and

21:06

there's gonna be a lot of garbage. Now

21:08

I, this is gonna date myself, definitely.

21:11

But I remember in 2005 when I started

21:13

my blog and I

21:15

quickly realized, wow people

21:18

are reading this and accepting pretty much whatever I'm

21:20

saying here as the truth. And

21:22

I had to self-censor. I had to

21:24

tell myself, you gotta be careful

21:26

what you're saying because they're taking this to be true.

21:28

You don't wanna be found out to be like this guy who

21:30

is a master manipulator and changing

21:33

information around. And

21:35

it's that same thing. I think it's going

21:37

to be up to schools

21:40

or generally, like in

21:42

places like Reddit, Hey, let's monitor ourselves

21:45

here and let's

21:47

try not to deceive each

21:49

other and ourselves.

21:53

Definitely just teaching students critical

21:55

thinking as well will be part of that with,

21:58

here's how you fact check things. Here's

22:00

how you think about, here's a source I've

22:02

never heard of. Are they telling the truth?

22:04

Are they accurate? Whether it's intentional

22:07

or not.

22:08

Like people will be using this

22:10

the way they have been using Google.

22:12

Now here's the difference. Google's gonna give you a set of links.

22:15

I think you can ask ChatGPT-4

22:17

for a source of information. I'm pretty sure

22:20

I know with Bard you can. And

22:22

I haven't tried.

22:23

Bing provides that. I haven't seen

22:25

that in GPT-4, but

22:27

it's certainly possible, and I know I've seen

22:29

it in systems that augment

22:31

GPT-4.

22:33

Yeah, that makes sense. So people with medical conditions

22:36

are just going to inevitably ask

22:38

Hey, what should I do about toenail

22:40

fungus or this repeated

22:43

headache? If you're getting this ongoing headache

22:45

I'm not sure what you're gonna get as advice,

22:48

but I would just say do some follow up questions

22:50

like where is this information coming from? I'm not sure

22:52

about that, because a lot of

22:55

the responses I'm getting are, I am a language model

22:57

and I don't have access to this

22:59

or that medical journal. So you need

23:02

to, I think everything comes with this caveat.

23:05

Which, especially at this stage, that caveat

23:07

is a good thing. Yeah.

23:09

Yeah.

23:09

Can you share an example of a prompt

23:11

that didn't work the way you wanted

23:13

it to and how you

23:16

either learned from that or iterated on it to

23:18

get it to do what you wanted?

23:19

Yeah, that's a good, I did not come prepared for that

23:21

one. I've gone back and forth like where I'll say,

23:23

that's not what I want because it has this,

23:25

oh, I apologize. Perhaps this

23:27

will work. And it's a weird kind of dialogue.

23:31

Yes I've seen one version of

23:33

the "say what again" scene

23:36

from Pulp Fiction where Samuel

23:38

Jackson is like, threatening them. Say it again. Say it

23:40

again. But it's, talking to ChatGPT

23:42

saying, say "I apologize"

23:45

again. Cuz

23:47

yeah. Seeing that over and over, especially

23:50

for me, I do a fair amount of code generation

23:53

and that's frequently what comes up.

23:55

Cuz you're like, If that code doesn't work

23:57

or here's an error, I'm sorry,

24:00

let me try that again. Like over

24:02

and

24:02

I think there's some self-correction

24:04

going on already. Where

24:09

I don't know. I've seen this in a few places where

24:12

it will generate some

24:14

potential errors in what it just produced,

24:17

and then you'll check those, and you'll

24:19

just go through this kind of iterative process

24:21

until you get to what you want.
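
That critique-and-revise loop can be scripted: after each draft, ask the model to list potential errors in what it just produced and correct them. A rough sketch, assuming the openai Python client; the critique wording is mine:

```python
from openai import OpenAI

client = OpenAI()
messages = [{"role": "user", "content": "Write a Python function that parses ISO 8601 dates."}]
answer = ""

for round_number in range(3):  # a few critique-and-revise rounds
    draft = client.chat.completions.create(model="gpt-4", messages=messages)
    answer = draft.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    if round_number < 2:
        messages.append({
            "role": "user",
            "content": "List any potential errors in what you just produced, then give a corrected version.",
        })

print(answer)  # the latest revision after self-correction
```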

24:24

Yeah, absolutely. I've definitely seen that

24:26

in the code generation, where

24:28

you know, I'll ask it, fix this bug

24:30

of, I don't know, missing semicolon

24:32

or it's outputting the wrong thing. It

24:34

does say, I'm sorry, let me, I apologize.

24:37

Let me give you that again. But then it fixes it and it

24:39

keeps the fix at least for a while. That

24:41

does sometimes roll out of the context window

24:44

though, so that can be a challenge. There's a

24:46

tool that I'm building. That

24:48

actually allows you to do iterative testing on

24:50

prompts, and I'll link to that in the show

24:52

notes. It's called the Prompt IDE, but

24:55

the idea is if you have a

24:57

prompt that takes in variables,

24:59

for example, if you're selling it on PromptBase,

25:02

and so it's, give me suggestions for what

25:04

to do on a vacation, maybe then you're gonna

25:06

have a variable for where you're going.

25:08

And another variable for how long? And probably

25:11

some third variable of, I don't know, flying

25:13

or driving, particularly if you're in the US.

25:15

When you're testing that out, you tweak

25:18

the prompt, and then if you want

25:20

to test it, like test the actual output, then you

25:22

have to copy and paste it into ChatGPT,

25:24

replace all the variable names with stuff,

25:27

and then run it, and then do that again with your second test

25:29

case and your third test case. So anyway, what

25:31

this tool does is you have a field

25:34

for the prompt, and then you have some sections

25:36

for each test case with a variable

25:38

for each one. Location for

25:40

test case number one is Paris, location

25:43

for test case number two is New

25:45

York City. And then every time you change the

25:47

prompt, it automatically reruns

25:49

all the test cases with these variables

25:52

placed in it. I built

25:55

it in a combination of Ruby

25:57

and TypeScript. And I know both those languages,

25:59

but I've been enjoying being able to

26:02

try out the code generation. And

26:05

it's had some really interesting things because

26:07

this is too complicated a program to

26:09

keep entirely in memory, so I can only ask

26:11

it for, give me a function

26:14

to call ChatGPT's API and

26:17

take in these things. I can't just say give

26:19

me a page that does all of these

26:21

features at once.

26:22

right.

26:23

but it has actually been pretty

26:26

good at being able to do these things.

26:28

Ironically, one of the places it

26:30

has repeatedly failed is

26:33

anytime I ask it to write an API

26:35

call to OpenAI.

26:37

It's wrong because it's old. Like

26:39

it, it tells me, here's what you want to use

26:42

to connect, and the URL's

26:44

wrong. A bunch of the parameters are wrong. So I have to

26:46

go look those up. But that is

26:48

exactly what people have talked

26:50

about. That's out of the training window.

26:53

The training window ended in 2021,

26:55

I believe. And these are

26:57

API changes that happened three to six

26:59

months ago. So it doesn't surprise me that

27:01

it's wrong, but it catches me sometimes and

27:03

I'm like, oh that is a thing you will not be able to

27:05

do. So let me go, just look up

27:07

the correct API documentation and put

27:10

it in myself. But for the most

27:12

part, it's been pretty powerful.
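
Not the actual tool, but the core of the workflow just described, a prompt template with variables plus a set of test cases that get re-run whenever the template changes, fits in a few lines. The variable names and test values are the illustrative ones from the episode:

```python
from openai import OpenAI

client = OpenAI()

TEMPLATE = (
    "Give me suggestions for what to do on a vacation in {location}, "
    "lasting {duration}, travelling by {transport}."
)

TEST_CASES = [
    {"location": "Paris", "duration": "one week", "transport": "flying"},
    {"location": "New York City", "duration": "a long weekend", "transport": "driving"},
]

def run_all(template: str) -> None:
    """Re-run every test case against the current version of the prompt template."""
    for case in TEST_CASES:
        prompt = template.format(**case)
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
        )
        print(f"--- {case['location']} ---")
        print(response.choices[0].message.content)

run_all(TEMPLATE)  # call again after each tweak to TEMPLATE
```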

27:14

Yeah, absolutely.

27:16

You mentioned selling on PromptBase

27:18

earlier. Have you sold

27:20

any prompts and what do those prompts

27:22

do or what are the ones you're thinking

27:24

of selling if you haven't yet?

27:28

That first one is something

27:30

that I wanna refine first,

27:33

get it to do exactly what I need it to do, and

27:35

then put that one up for sale, which is the one that generates

27:37

all the questions to cover the span of world history. But

27:40

you could do that for every subject area.

27:42

And I might just start doing

27:44

it. It's just that once I start going down that road,

27:46

I don't know how long. Yeah,

27:49

I don't know how long it'll be before I stop. Especially

27:52

if you sell that first prompt, I don't know how successful

27:54

people have been on there, so

27:58

I have to be careful because there's wormholes

28:00

out there that will just suck you in and you'll think,

28:02

wow, it was Monday when I started this. How is

28:05

it Friday afternoon now? Did I, did

28:07

any of this generate income? I don't know.

28:11

Yes, I totally understand the feeling

28:13

about the rabbit hole. I can actually

28:15

tell you. I don't know

28:18

off the top of my head, but

28:20

I did an analysis of PromptBase's

28:23

text-output prompts

28:25

to look at, basically to categorize

28:28

it a little better than they did, and also

28:30

then to look at how many sales does it get, how much

28:32

revenue based on price. Audience,

28:35

I'll put the link as well in the show notes.

28:37

And it's basically it's what I wanted

28:39

when I said I want to start selling

28:42

more prompts on PromptBase. Which categories,

28:44

which niches do you prefer are selling

28:46

well, which ones are selling

28:48

well, but they're really overloaded. Which ones are like,

28:51

they don't have one or two, but those get

28:53

a lot of revenue, et cetera. So

28:55

that's what this analysis is basically looking

28:58

at. It does actually have the titles

29:00

and sales information and all that of the 5,236,

29:05

I think it is prompts that are available

29:07

on there. So if you want to do a different analysis

29:10

of your own, you can do that. I

29:12

also did the analysis of here are the niches I

29:14

would say you probably wanna try and target

29:16

because they're high value, low competition,

29:19

or maybe medium competition, but

29:21

still medium or high value, things like

29:23

that. So yeah, that'll be in the show notes.

29:26

Nice. Yeah, and I haven't even

29:29

fully explored image and graphics

29:32

and art creation. Or

29:34

even music creation if it's out

29:36

there. I also play music but I have not done any

29:38

composing. But I'm

29:40

wondering that, that's probably right around the corner

29:43

too.

29:45

Yeah, the music for this podcast,

29:47

the intro and outro music is a very short

29:49

clip, but it's actually AI generated,

29:51

Oh, wow.

29:53

people will be hearing that, a few minutes

29:55

ago and in a few minutes when we wrap

29:57

the episode.

29:57

Okay.

29:59

So thank you so much for coming.

30:01

Where should people go to stay up to

30:04

date on what you're building and these

30:06

prompts and when you start posting

30:08

on prompt base?

30:09

Yeah. Twitter, at lee underscore chazen.

30:12

Glidercell.com is my

30:14

I did a major pivot and now it's like my prompt

30:17

engineering content strategy site. Or

30:19

My old tiny blog

30:22

the one that I mentioned before, without

30:24

a specific URL. It's right

30:27

brainworld.blogspot.com,

30:30

like the oldest name on the internet possible.

30:34

That's awesome. And all of

30:36

these links will be in the show notes. And

30:38

you also mentioned

30:41

globalchallenge.mixo.io

30:45

Yeah, that is the game

30:47

and people can get on the wait list because

30:49

once I get the list to

30:51

enough people, I will generate

30:53

the final products that they need to start running

30:56

this, and then they can build their

30:58

own versions of the game and

31:00

hopefully we'll create a platform where they can showcase

31:02

all the different versions that are out there so that students

31:04

can learn across the

31:06

entire curriculum by playing this game.

31:09

That's awesome.

31:10

And thanks, I really appreciate you having me on.

31:14

You are very welcome. Thanks

31:17

for coming to the Prompt Engineering Podcast, a podcast

31:19

dedicated to helping you be a better

31:22

prompt engineer. Episodes are

31:24

released every Wednesday. I

31:26

also host weekly masterminds where

31:28

you can collaborate with me and 50 other people

31:30

live on Zoom to improve

31:33

your prompts. Join [email protected]

31:37

for the schedule of the upcoming masterminds. Finally,

31:41

please remember to like and subscribe.

31:43

If you're listening to the audio podcast, rate

31:46

us five stars. That helps us teach more

31:48

people, and if you're listening to

31:50

the podcast, you might want to join us on YouTube

31:52

so you can actually see the prompts. You

31:55

can do that by going to youtube.com/@PromptEngineeringPodcast.

32:03

See you next week.
