Podchaser Logo
Home
Vital.io Translating doctor jargon into regular human English

Vital.io Translating doctor jargon into regular human English

Released Thursday, 24th August 2023
Good episode? Give it some love!
Vital.io Translating doctor jargon into regular human English

Vital.io Translating doctor jargon into regular human English

Vital.io Translating doctor jargon into regular human English

Vital.io Translating doctor jargon into regular human English

Thursday, 24th August 2023
Good episode? Give it some love!
Rate Episode

Episode Transcript

Transcripts are displayed as originally observed. Some content, including advertisements may have changed.

Use Ctrl + F to search

0:00

Welcome to the prompt engineering podcast,

0:03

where we teach you the art of writing effective

0:05

prompts for AI systems like chat,

0:07

GPT, mid journey, Dolly,

0:10

and more. Here's your host, Greg

0:12

Schwartz.

0:15

Welcome to a joint episode

0:17

of the

0:17

Prompt Engineering podcast and the How to

0:20

Talk AI podcast.

0:21

We've got some awesome guests, so go ahead

0:23

and introduce

0:24

yourselves, guys.

0:25

Yeah, I am Aaron, er, the co-founder

0:27

and c e o of Vital.

0:30

And I'm Felix Brand at

0:31

Vice President of Data Science.

0:34

And they have a terrific product

0:36

that they have a launch. Today we're gonna hear all

0:38

about it. I think it's something that would resonate

0:40

with everyone and anyone that's

0:43

been to the doctor and had questions about

0:45

what, what was being told

0:46

to them. Yes. I

0:47

already tested it after watching your talk. Cool.

0:50

Really? Yeah. So I have sleep apnea. Yeah.

0:52

I put in a long diagnosis

0:54

with a bunch of stuff that I'm like, okay, I think I know

0:57

what that is. Yeah. I don't know what the hell that is. Yeah.

0:59

And it was like, sleep apnea, obstructive.

1:02

Yeah.

1:02

And two other things. Yeah. Oh.

1:04

Fantastic. Okay. That's great. I,

1:06

I think like you said what person

1:08

hasn't seen a whole long

1:11

list of doctor's notes or even been

1:14

in a situation where you're maybe an

1:16

inpatient in the hospital and then the

1:19

doctor on rounds is coming by and telling you something

1:21

in a million miles a minute because he's got 20 other people to

1:23

see. But it's probably important

1:25

because it affects your own health and being and like,

1:28

you're Probably already out of it anyway, because you're

1:30

in the hospital. What a terrific way

1:32

to,

1:32

Provide something. Doctor's notes are really almost

1:35

like a foreign language. Yeah. As

1:37

I said in my talk, doctors don't say

1:40

nosebleed, they say epistasis. They

1:42

don't say, hey, your mom has had a stroke.

1:45

They say, oh, she's had a cerebral infarction.

1:48

They use all of these abbreviations. It's almost

1:50

impossible to... Understand.

1:52

And so we use the large language

1:55

model, As the core of

1:57

what we call our Doctor to Patient Translator,

2:00

and it's at vital. io slash

2:02

translate. It's free to the public, available

2:04

worldwide, literally as of today.

2:06

You're just catching me at a good time. And

2:10

we're happy to tell you, a bit about the

2:12

prompts, the classifiers, the free parts,

2:14

and all the things that we offer. To

2:17

make that possible, technically.

2:19

Yeah, that would be great. I would love to I would love to

2:21

delve into some of the the technical aspects.

2:23

Maybe this is a better question for Felix. Could you tell us a little

2:25

bit about how the model was going to be trained and

2:27

what data was used to be able to produce these

2:29

great

2:29

completions? Sure.

2:31

We've tried a number of different prompts because there are actually

2:33

a lot of different types of doctor's notes. And with

2:35

the public facing stuff, we know that we're going to get the

2:37

whole gamut from imaging all the way to discharging

2:45

stuff. We People, when they get their paper discharge instructions

2:48

upwards of 90% of them chuck them straight in

2:50

the bin as soon as they leave the hospital. And the

2:53

literature people understand their care, and

2:55

understand the follow up instructions

2:57

the doctors are giving them, their post care situation is way

3:00

lower. So we've looked

3:02

at different prompts for different situations

3:04

and then built a pre

3:06

model classifier, a pre LLM classifier,

3:08

also using a language model, a

3:12

small one deciding which of our various prompts should you have

3:15

to write some nodes and then we have

3:17

a whole bunch of post parsing, it comes out, we

3:20

take sections out of translation,

3:22

we plug those sections of the website,

3:25

maybe when you saw it, you could see that you

3:27

get like a very brief summary. And then

3:29

also a much more sort of technical breakdown.

3:32

Yes. So we're getting the LLM to pull out a lot of information

3:35

about what's in your justice note, but we want

3:37

to show you in like a digestible

3:39

summary first. Yeah. I think an important

3:41

piece of context is a lot of these doctors notes,

3:43

they're 10 or 15 pages long,

3:46

and they have 80% boilerplate.

3:49

Yeah. They have a, Hey, don't smoke. Or

3:52

I don't. Hey, here's COVID education. Okay. You're two years out

3:54

of date. And they put a lot of filler

3:56

in there. And this is actually just a fraction

3:59

of our primary business. Our primary business is

4:01

patient experience offer. It guides you through

4:04

an ER visit, or if you have to stay

4:06

overnight in the hospital, it explains your lab results,

4:08

how long you're going to wait. And then your

4:11

notes. Yeah. And because

4:13

we have experience with a million patients

4:16

a year using it, we know the structure of notes.

4:19

from all over the country. And so

4:21

we can pre parse, and instead

4:23

of a 10 or 15 page, we can

4:25

get it down to actually we only need to pass 3

4:27

or 4 pages into the LLM. That's

4:30

an important business and engineering consideration,

4:33

because cost and speed.

4:36

Also context window. If

4:38

you're doing, especially if you're using few shot

4:40

training with an LLM, which is a good

4:43

idea so that you know what output

4:45

you want to get. You'll blow through

4:48

your prompt, your future

4:50

shot, your data, and then your output, it has to fit into

4:52

a 4k window or a 16k window.

4:56

And so you need to do a few things to give

4:58

yourself as much profit as

5:00

possible. That makes

5:02

complete sense, but having the almost

5:05

sub prompts acting like little sub

5:07

agents themselves trained to

5:10

say just get rid of all the boilerplate

5:12

stuff that's not unique to

5:15

that patient's differential diagnosis.

5:17

Exactly. So deciding which part you're

5:19

going to do... Are more or less with

5:21

your own code or your own classifiers,

5:24

and then how much to send, especially if you're using a, like

5:26

a commercial l m. And we've used

5:28

both. Felix's got, Lama up and running

5:31

and too Yeah. Med

5:33

Palm lm, which is

5:35

medical specific, obviously. The

5:37

open ai, we can't actually use open

5:39

AI directly. You have to use it through Azure because

5:42

you. You need this to be hit. We're

5:45

in a regulated industry. Open AI will

5:48

not sign all of those things. You actually have

5:50

to like Work your way through Corporate

5:52

Microsoft. Yep, they'll determine

5:55

whether you're a worthwhile person or not, and

5:57

whether they're willing to take the risk, and

6:01

so if you put all of it, you can, with a sophisticated

6:03

prompt, put it all through WebMGT. You

6:06

can say, classify this. Is this a discharge

6:08

report? Is this a physical therapy

6:10

report? Or is this a hostile input?

6:13

By the way, you should always protect against hostile input.

6:15

Is this a non English input? Is this something

6:18

else entirely? So you want, and then...

6:21

In your prompt, you can say based on the classification,

6:23

then do this. But if you do

6:25

all that, your prompt starts to get very complicated

6:27

and very big. You can use that to prototype,

6:30

but when you go into production, this

6:32

is also very slow, it gets very expensive, you

6:34

run a classifier that's much simpler and

6:37

much quicker on top of it, and

6:39

then you don't have the expense, your prompt's shorter.

6:41

And then you can say, if it's this, go to this prompt.

6:43

If it's that, go to that prompt. You can

6:45

also templatize prompts. So if you

6:47

say, I want the output in Spanish,

6:50

you can put a variable in your prompt. So the

6:52

prompts, don't think of them as static strings.

6:55

Think of them as a programming language

6:57

that is frankly pseudocode, yeah?

7:00

One of the things that, this is a bit like medical

7:02

specific, but the part that's very important

7:04

to patients is the plan and assessment,

7:07

what the

7:07

doctor says you're supposed to do. Here's the

7:09

problem. In some hospitals it's called plan and assessment. In other

7:11

hospitals it's called assessment. In other hospitals it's called

7:15

plan. In other hospitals it's got like an abbreviation.

7:17

And with classic programming if I say

7:19

match panda and I give it pandas

7:22

with a plural, it's no

7:23

match. Or you got a space in your column

7:25

header. Exactly. But with an

7:27

L M, I can just be like, it's gonna be called

7:29

this, or probably this. It's got stuff

7:31

that kind of looks like this and like

7:33

it's good enough that if I explain it to you guys, you'd be

7:35

like, oh, okay. I know what you're looking for. That's

7:38

the power of LLMs is

7:40

you can give them. Vague pseudocode.

7:42

Yeah, and to me, that's mind blowing.

7:45

This guy actually knows a map of how that's

7:47

passed. So

7:48

real quick before we get into that, just for the

7:51

audience, part of what I do on my

7:53

podcast is like, What are all these

7:55

technical terms? Content window, number one.

7:57

It's literally how much stuff you're putting into the prompt,

7:59

but also how much it's filling out, and

8:02

if you do too much, it forgets the stuff outside

8:04

the prompt window. Sorry, the context window.

8:07

And so you have to be careful how long everything

8:09

is. That's what they're talking about when you're saying,

8:11

if I can pull pieces of the prompt out and only run

8:13

them separately, it's way better.

8:44

It's a key reason to innovate in your own models, because

8:46

for a long time you've been working with 4K

8:49

context window, and if you're doing

8:51

this few shot in context learning, as

8:53

Aaron says, you just run through it.

8:54

Yeah. And also, I'm

8:56

the CEO as well as, maybe

8:59

you can tell I have a bit of an engineering background, not as

9:01

good as this guy. I don't have the British

9:03

accent, which is, that's true. And

9:05

also, that adds

9:06

20 IQ points, right? Yes.

9:10

But as the CEO, I have to think through the economics,

9:12

right? If you were using GPT 4 and

9:14

you give it the 16K 32K window,

9:18

the maximum one, it's going to cost you,

9:20

if you fully fill that thing, it's going to cost

9:22

you about 48 cents per, translation

9:25

or transformation, right? Yeah. We have

9:27

a million patients on our platform. They have

9:29

about five nodes each. You do

9:31

the math on that and you're spending

9:33

5, 000 a day.

9:35

Yeah. If that's what you do. You don't need

9:37

to. You use smaller context windows,

9:40

or you use 3. 5 Turbo, or you run

9:42

Llama. Yeah. Or you use one

9:44

LLM to pre parse for a different LLM.

9:47

You can do, those are the tricks that like, practically

9:50

speaking, this is an immature

9:52

industry because you have to hand do

9:55

All of that.

9:56

And what's really interesting is, some of these problems

9:58

are really exciting and new. As Aaron says,

10:00

you're trying to pull out something that's

10:02

very undefined in free text document.

10:05

Okay. So that's you need some modern stuff to

10:07

do that. But some of these problems are pretty traditional.

10:09

Classifying a document and you've got, plenty

10:12

of examples. You don't need to go and use

10:14

your OpenAI LLM to do this

10:16

classification problem. We've been doing this for a long time. And

10:18

you can do them a lot cheaper.

10:20

Yeah, it's slow and expensive to use OpenAI,

10:22

or Google, or

10:25

NetApp for basic classifications. But it's

10:27

great for prototyping. So the key

10:29

insight, is work out the piece that you really

10:31

need the expensive tech for, and ensure

10:34

that you boil down the problem only to that, using

10:36

other pieces of technology

10:37

upstream. Yeah. So how do you handle,

10:40

Like the, if you have all these

10:42

prompts essentially acting as agents, and

10:44

you have to have this sequence occur. In

10:47

a specific order, how do you asynchronously

10:49

is there a specific layer that's doing the

10:51

handoff? Are they doing the, are they doing

10:54

a turnover at rounds? In between

10:57

synchronization, let

10:58

me get a little technical. So we use an

11:00

event. sourced architecture. So this

11:02

is outside of AI, which basically

11:04

means that we handle streaming data quite well. So

11:06

we have data that's streaming from

11:09

over 100 hospitals now, more or less

11:11

real time. It comes out of Cerner, Epic, whatever

11:13

the electronic medical record system is.

11:15

So a doctor writes a new note, finishes it, it hits

11:18

our system and

11:20

goes on to the parsed,

11:23

classified, cut up into little bits, and

11:26

then divvied out to the yeah,

11:28

you need to synchronize it so you have queues of

11:30

work. Those queues can back up. We just

11:33

launched this.

11:34

Unfortunately at this point, we had some

11:37

audio challenges. So the video will

11:39

continue. But going

11:41

forward, we're only able to use audio

11:43

from a much lower quality source.

11:46

So it's going to get kind of noisy from here.

11:48

I'm sorry about that. The rest of the interview

11:50

is definitely very interesting. But

11:53

it was a pretty noisy room.

11:55

I've been so busy with talking to people. For all

11:57

I know, the system is, got an hour wait

12:00

queue back up. But it won't

12:01

fall over. It will just queue up. It

12:03

took two tries, and it was about 45 seconds,

12:05

but it worked! That

12:06

means, eventually, that's actually, I'm

12:09

like, happy to hear that, not from your

12:11

experience, but it means that we're putting serious

12:13

load on this. It means that people are, this

12:15

is a good day in the history of Python. But

12:18

you have to have a robust architecture to handle

12:20

that and not get things out of order and handle server

12:23

restarts and all of that,

12:25

so that's a, it's a pretty

12:27

engineering response, but yeah, it

12:29

can be

12:29

handled. And to speak to Aaron's answer

12:32

earlier, this is something new that we're doing, but

12:34

we have, what, a good four products at

12:36

the moment? Yes. We have a patient experience

12:38

product, which is going to guide your experience through

12:40

the emergency room. Yeah. And we're doing a bunch of AI

12:43

there. We're predicting, how long are you going to wait for a bed?

12:45

How long are you going to wait until a doctor comes and sees

12:47

you? Yeah. What are the lab results that you're, that

12:49

are coming back, what do they really mean? for you. We've

12:52

got a product for care teams. We're

12:54

providing clinical decision support alerting.

12:57

Are you likely to get sepsis at some point in

13:00

your stay? How likely are you to be admitted? Like allowing

13:02

doctors to manage their workflows

13:05

using this kind of alerting system. We've got a

13:07

system which allows you to find follow

13:09

up care afterwards. And so basically,

13:12

we've been doing this for a long time. We've been doing

13:14

it, what are we, like six years now? Six years, yeah.

13:16

Yeah, we and we've been dealing with this huge pipe

13:18

of patient data for a long time. We're not new to this.

13:21

The event sourcing stuff, that's not for the LLM stuff. That's

13:23

running our systems. That's running our systems at a hundred

13:25

hospitals, a million patient visits. That's, that

13:28

stuff has been the

13:28

easy part for sure. That's right. So if, if this

13:30

sounds foreign or if you don't have a system like

13:33

that with the robust retry mechanism it'll

13:35

take you a couple of years of engineering to get to

13:37

that solid

13:39

system. That's some getting your hands dirty, just in

13:41

the mud. Yeah. Noting and debugging

13:43

just to get there.

13:44

Medical data, the messiest data I've

13:46

worked with

13:48

so far. That's a great, that's a great segue maybe into

13:50

can you tell us a little bit about the process that you

13:53

had to go through to have an LLM,

13:56

Handling HIPAA, HIPAA secure

13:58

patient data. Yeah. I know this is a big fear that

14:00

a lot of enterprise customers have. We

14:02

don't want our trade secrets to get out there.

14:04

We have legal,

14:07

proprietary, interactions with our clients.

14:10

Yeah. We're in a regulated industry, right?

14:12

This is, fortunately

14:15

or unfortunately, not new to me. I

14:17

was the founder of a company called Mint.

14:20

com. We took, The usernames

14:22

and passwords for 25 million

14:25

people and a hundred million bank accounts, including

14:27

me. Yeah, it was a long time ago. Including me!

14:29

That's right.

14:30

No. Yeah,

14:31

and have never had a security breach. At

14:34

least to my knowledge. I sold the company about

14:36

a decade ago. So we're used to dealing

14:38

with sensitive information. You

14:40

want outside penetration testing

14:43

outside audits. HIPAA and HITRUST

14:46

is even more thorough, is

14:48

routine outside security audits. Honestly,

14:51

it can sometimes be a pain to log into our own

14:53

systems requires multi factor fingerprints

14:56

and a drop of blood, but it

14:58

is very secure. You can

15:04

You cannot do this with

15:07

OpenAI. You have to go with,

15:10

Google will sign what's known as a

15:12

BAA, a Business Associates Agreement.

15:15

And it's part of the medical

15:18

chain of liability that says, hey,

15:20

we have the right insurance. If

15:22

we mess up, we have to legally report

15:24

it to you, and you have to report it back to the health

15:27

system. Here's our security practices,

15:29

and we have to look at those, and we have a whole

15:31

compliance office. To do all

15:33

of this.

15:34

And so you actually can't go in some

15:36

sense with the l and m startups.

15:38

Yeah. Microsoft Azure is a fantastic

15:41

choice to start out with. Google's

15:43

been aggressive once they saw what we were doing.

15:45

'cause this has been this has been out internally in our

15:47

products for two or three months.

15:50

And yeah, but they're also Google and Microsoft.

15:52

They know what they're doing when it comes to. security.

15:55

Honestly, when it comes to medical

15:57

information, it's all the people

15:59

who are still running local servers with.

16:02

Yeah, that's it. You want

16:04

to know why they have so many like

16:07

Malware attacks. They're on an old version

16:10

of Windows. They don't patch their stuff. And,

16:12

they may or may not be the,

16:14

the best IT people in the business. I absolutely

16:17

trust the security of AWS

16:20

and Microsoft and Google. Because they have too much

16:22

to lose as companies. We

16:24

have a super secure system.

16:26

And we trial it all

16:29

the time.

16:29

And obviously, our BAA includes

16:32

none of our data being used for training.

16:35

Of course. Yeah. Nice.

16:38

Speaking of the patient experience,

16:40

right? Yeah. If, is it a bespoke

16:42

interaction each time I log onto the app?

16:44

Yeah. Or does it keep my health

16:46

record, so to speak, so I can refer back

16:48

to the last time I used it? And

16:51

then, is that stored locally on

16:53

my device, or is it used, in any

16:55

sort of... process to make

16:56

the tool better. So our primary business

16:58

is a tool that guides you through your visit

17:01

at an ER or inpatient. And

17:03

that is visit based. So we know what your health

17:06

history is and we might show you a little bit of your past

17:08

visit, but it's meant to use at the time

17:10

that you're at the hospital or the emergency

17:12

room or having surgery or something like that.

17:15

And it's just walking you through that experience and

17:17

understand your lab results. These are the videos

17:19

you should watch so you can understand it. These are

17:21

the medications and what you need to know about

17:23

the side effects. We give you

17:26

access to that data for the couple

17:28

weeks following your visit, but we always

17:30

hand you off to the patient at least

17:32

for now. And I will be tight

17:34

lipped about whether you will ever have a full health

17:37

history. IE,

17:39

I've been pitched probably a dozen times on,

17:42

we're the mint for healthcare. And I was like, I

17:44

could do the mint. It's

17:46

vaguely familiar as a

17:48

business concept. I've done this before. So

17:50

nothing to announce today, it's in the back of my mind.

17:53

I'm sure people would, of

17:55

course it would resonate with

17:56

someone to be able to query. years

17:59

and years of interactions, and not to mention

18:01

the opportunities that if you

18:03

apply some machine learning over top of some

18:05

of that diagnostic opportunities

18:07

to catch

18:07

things early. Now it's like you're inside what

18:10

my long term business vision is. Theoretically,

18:12

I could calculate your health future

18:15

if I had a big enough data set. Yep. And

18:18

keep in mind that I... At

18:21

Vital, we now see 2% of all U.

18:23

S. Emergency medicines. Wow! For

18:25

a startup that's been around

18:27

for not that long, that's a pretty

18:30

good sample size. We can see

18:32

how diseases progress and,

18:34

that there's more of this type of fall in the winter than

18:36

there is in the summer, right?

18:39

I know there's entire industries, like the health

18:42

insurance

18:42

industry

18:43

that has it modeled on curves

18:45

exactly when you're gonna die based on, the

18:47

fact that you went skydiving once when

18:49

you

18:49

were 31. Sure. Yeah. I

18:52

could probably predict whether, Bird

18:54

and Lime are doing business well based

18:56

on the number of elbow injuries and wrist

18:58

fractures that we can

19:00

plot over time. That's unfortunately

19:03

not a joke. Wow.

19:04

Okay, then I have to ask, since Google

19:07

got rid of the, I forget what they called it, but the

19:09

flu predictor feature that they had for so long?

19:12

Is that something you guys might potentially product?

19:15

No, we won't use that

19:17

sort of stuff. It's really interesting and we probably could do

19:19

it internally. But And we did come

19:21

up with a COVID checker. We did come up with a COVID

19:24

checker that was used a million and a half times. Wow. Yeah.

19:26

We were the first one out before

19:28

Google, before Microsoft. We were, CDC

19:31

considered using us. I was literally on the phone

19:33

with the White House Task Force in the middle of the night developing

19:35

this thing. A million and a half uses

19:38

within the first month. We did the COVID checking for the state

19:40

of

19:40

Oregon, right? Yeah. The whole state. We

19:42

pivoted the whole company as soon as the pandemic started.

19:45

Yeah. Said, okay, we've got all this health data coming

19:47

in. We've got the data science chart. Let's try

19:49

and do something quickly with that. Yeah, nice.

19:51

But the sort of north star for

19:53

the company is what's right for the patient.

19:55

will it improve patient outcomes? I'm

19:58

really tired of most of healthcare.

20:00

I'm looking at you. Medicare Advantage. Who

20:03

is, frankly, just financial arbitrage.

20:06

They're basically like, Okay, so the government says

20:08

New York's a more expensive place. We'll pay 1, 400

20:11

a month for somebody over 65. Phoenix

20:14

is cheaper, so we'll pay you 1, 100.

20:17

And Medicare Advantage companies are just like, You

20:19

know what, we'll advertise in rich zip

20:21

codes to get healthy, wealthy people

20:23

and we'll leave the rest to the public system. They're

20:25

not improving patient outcomes. They're not

20:28

increasing utilization. They put up barriers

20:31

and blocks. Like you have to get a referral

20:33

from your primary care doctor. We

20:35

will do none of that. There are lots of ways to make

20:37

money in healthcare. Our investors sometimes

20:39

push us towards that. I

20:42

have fortunately had a successful start up.

20:44

I don't know, I'm not doing this, for the money primarily.

20:47

Just want to do what actually affects

20:50

patient health.

20:52

Yeah, we all have other things that we could be doing, there

20:55

are other ways to make money, but we, I've never

20:57

been in a more mission led company,

20:59

so thank you first.

21:00

Yeah, and it resonates with everybody,

21:03

even if they're healthy, we've got parents,

21:05

we've got grandparents, who like who wouldn't feel empowered

21:08

and able to help them out just with their

21:10

care make them feel a little more at ease during

21:12

a A time of struggle.

21:14

Completely. And actually I think one of the best use

21:16

cases for what we launched today, Vital.

21:18

io slash Translate, is

21:21

if you have an elderly parent or somebody

21:23

that you're caring for, especially

21:25

if they're elderly and they're a little confused

21:28

and they went to the doctor's office and they're like,

21:30

hey dad, what did ahhh.

21:34

Put their notes in there and see what actually

21:36

comes out. Diseases and issues

21:39

they actually have. Yeah. I

21:41

was talking I don't know the full story, but,

21:44

I had a friend whose sister died

21:46

basically because they didn't catch something

21:48

that was on page three or four. Because

21:51

humans can't scan text

21:54

that quickly. And you might have hundreds of pages

21:57

of medical history if you're a chronically

21:59

ill person. And sometimes

22:01

that history really matters. And doctors give

22:03

it like two or three minutes to maybe

22:05

scan through. AI does a way

22:07

better job of picking out. The stuff that

22:10

they might need. The fair comparison, this

22:12

is, listen, This, we've marked

22:14

it as 99. 4% safe

22:17

for animal adopters, independent people, employed

22:19

by the company. It's

22:22

not. Without risk. If 1 in 200

22:24

times, it'll miss something small. But

22:26

doctors miss something big. 1

22:29

in every 10 times. And so the stats

22:31

are actually much better for AI than they are for

22:33

humans, and that's the problem. And

22:36

when we, we have some great clinicians in

22:38

our team, when I talk to our

22:40

clinical staff, our advisory board

22:42

about the stuff that they really want to

22:45

see, all of them talk about patients paying

22:47

attention to and understanding their discharge instructions.

22:50

The value there is enormous. The value in terms

22:52

of long term care and in terms of immediate

22:54

outcomes is huge.

22:55

Nice. Were there

22:57

different specialties within

22:59

medicine that were a little more challenging?

23:03

We started out with medical imaging. Medical

23:06

imaging is nice because it's confined. CT

23:08

scans, x rays, MRIs and

23:11

then, what we released today I

23:13

don't know what people are going to put into

23:15

it. And so it has to be pretty robust

23:17

to doctor's notes, nurse's notes

23:20

lab results, all sorts of things. It's

23:30

time to wrap it up. Yeah. That's

23:33

a good time to wrap it up. We really

23:35

appreciate your time today to,

23:37

come talk to us guys. Such a product that I think

23:39

everyone can. You can benefit from learn

23:41

from other family members of these. And

23:43

just remind listeners at HTTTA The Rob's

23:46

Engineering Podcast.

23:48

Yeah, thank you. We really appreciate the

23:50

time. Yeah, Thank you for having us. Yeah, we never get

23:52

to talk about the nerdy tech stuff. Dude, we

23:55

can go even harder. Oh,

23:57

I'm gonna change the memory card for that, yeah, I

23:59

think I held off. I'm like, all right, tell us about your air handler

24:02

later. Yeah, I was like no,

24:04

we're not going that

24:04

deep. We're not going

24:05

that deep. Fantastic. Thank

24:07

you. Thank you

24:09

guys.

24:12

Thanks for coming to the prompt engineering podcasts

24:14

podcast dedicated helping

24:16

you be a better prompt engineer Episodes

24:19

are released every Wednesday I

24:21

also host weekly masterminds where

24:23

you can collaborate with me and 50 other people

24:26

live on zoom to improve

24:28

your prompts Join us

24:30

at promptengineeringmastermind. com

24:32

for the schedule of the upcoming masterminds. Finally,

24:36

please remember to like and subscribe.

24:39

If you're listening to the audio podcast, rate

24:41

us five stars. That helps us teach more

24:43

people. And if you're listening to

24:45

the podcast, you might want to join us

24:47

on YouTube so you can actually see the prompts.

24:50

You can do that by going to youtube. com

24:53

slash at prompt engineering

24:56

podcast. See

24:58

you next week.

Unlock more with Podchaser Pro

  • Audience Insights
  • Contact Information
  • Demographics
  • Charts
  • Sponsor History
  • and More!
Pro Features