Journal Review in Surgical Education: Artificial Intelligence
Released Monday, 18th March 2024

Episode Transcript
0:06

Behind the Knife, the surgery

0:08

podcast, relevant and engaging content

0:10

designed to help you dominate

0:12

the day. Welcome

0:23

back to Behind the Knife. This is Kevin and I'm

0:25

here with Jason and we have some exciting news to let you know

0:27

about. Yeah, thanks Kevin.

0:29

So we are now pleased to offer

0:31

bundle and institutional sales for our Behind

0:33

the Knife premium products. The

0:35

bundle includes our general surgery

0:37

oral board audio review, our

0:39

trauma surgery video atlas and

0:42

our oral board audio reviews

0:44

for vascular, colorectal, surgical oncology

0:46

and cardiothoracic surgery. We

0:49

offer institutional discounts to our bundle

0:51

and individual premium products. Yeah,

0:54

so all of this is available via our new

0:56

website and app where you can take

0:58

notes, pin chapters and download chapters for offline

1:01

viewing both on the website and the app.

1:03

So please email hello at behindtheknife.org and

1:05

check out the show notes to learn

1:08

more. And remember,

1:10

this isn't only for board review. You

1:12

can use our content to prepare throughout

1:14

the year and provide didactic learning across

1:16

all levels for all residents. So

1:19

level up the education at your institution and let

1:22

your program directors know that Behind

1:24

the Knife can be bought in institutional packages

1:26

and in bundle pricing. All

1:28

right, let's get back to the episode. Hello

1:30

and welcome to this Behind the Knife

1:32

episode in surgical education. We're the general

1:34

surgery education team for the Cleveland Clinic. I'm

1:37

Nicole Brooks, our current surgical education research

1:39

fellow and general surgery resident. And

1:41

I'm Judith French, I'm the PhD education

1:43

scientist for the Department of General Surgery. And

1:46

I'm Jeremy Libman, I'm the DIO and

1:48

director of graduate education at Vauclat. On

1:51

today's episode, we'll discuss the use

1:53

of artificial intelligence in surgical education.

1:56

As more and more advances in surgery

1:58

involve applications of AI, many surgeons

2:00

and trainees, including us, are

2:03

left without much understanding of the technology behind it.

2:05

It can be overwhelming to imagine where the

2:07

field might be going without really comprehending these

2:10

advances. Luckily, today we're joined

2:12

by an expert in the field who can help bring

2:14

us up to speed. Dr. Dan

2:16

Hashimoto is an assistant professor and bariatric

2:18

and endoscopic surgeon at the Hospital of

2:20

the University of Pennsylvania. He's

2:23

also an affiliated faculty in the Penn School

2:25

of Engineering and Applied Science. He

2:27

completed medical school and a master of science

2:29

in translational research at the University of

2:32

Pennsylvania, provided general surgery training at Massachusetts

2:34

General Hospital, followed by a

2:36

fellowship at University Hospitals Cleveland Medical Center.

2:39

Dr. Hashimoto is also a leading expert in

2:41

the use of AI in surgery. He's

2:44

the director of the Penn

2:46

Computer Assisted Surgery and Outcomes Laboratory,

2:48

which focuses on using technology to

2:50

improve surgical performance and decision making, with

2:52

a special interest in the translation

2:54

of AI and computer vision for

2:56

surgical video analysis. Dr.

2:58

Hashimoto is a co-founder of the Global

3:01

Surgical AI Collaborative and has held leadership

3:03

positions in many surgical organizations, including the

3:05

SAGES AI Task Force. He

3:08

has over 75 publications and is the

3:10

editor of the textbook Artificial Intelligence in

3:12

Surgery: Understanding the Role of AI in

3:15

Surgical Practice. We're thrilled to welcome

3:17

you to the show. Thank

3:19

you so much for having me. I love

3:21

Behind the Knife, so I really appreciate the

3:23

opportunity. All right. So AI

3:25

is currently a hot topic, not

3:27

just in surgery, but it seems

3:29

that way across the spectrum. Can

3:32

you briefly describe some of the

3:34

advances that fall under this broad

3:36

umbrella of AI and how they're

3:38

being applied specifically in surgical education?

3:42

Absolutely. No, thank you so much.

3:44

I knew that it was kind of hot

3:46

when my grandmother started texting me about it, and

3:48

she doesn't text me about much other than to

3:50

ask me about my kid. And so I knew

3:52

it must have captured some attention

3:54

outside of the direct research

3:57

field. But no, absolutely. I think... that

4:00

when we think about artificial intelligence, obviously

4:02

it's a broad field of study that

4:05

really thinks about how machines can, quote

4:07

unquote, reason or work

4:09

through tasks in a manner that is analogous

4:11

to how a human being might do it.

4:14

And what we've seen, particularly I think

4:16

in the last 10 or

4:19

11 months or so, is sort

4:21

of a second explosion in interest. I would

4:23

sort of think that in the last 10

4:25

years or so, the first explosion came around

4:27

deep learning in 2012, 2013,

4:31

and then most recently, obviously, with large

4:33

language models and really sort of capturing

4:35

the imagination of many fields. And

4:38

what's really interesting though is that

4:40

these types of advances have, I

4:42

think, really made it possible for

4:44

us as education researchers to really think

4:46

about what are the types

4:48

of data that we can now look

4:51

at more

4:53

quantitatively than we

4:55

could before the growth of these

4:57

types of methods. So that is

4:59

to say, taking into account things like

5:02

video, taking into account text entries, or

5:04

perhaps even audio recordings, can

5:06

we potentially analyze those in a

5:08

way that's scalable so that you

5:11

can do it across multiple trainees

5:13

instead of doing things one at

5:15

a time? And can you do it in

5:17

a bit of a quantitative fashion? And I

5:19

think that's where there's a lot of sort of

5:21

interest in the education space about what new world

5:24

has this opened up for us. So,

5:27

I mean, your grandma's talking about

5:29

it, but your grandma's not doing

5:31

AI coding and that kind

5:33

of work, and those surgeons aren't either. So

5:35

what should the surgeon's role

5:37

be as this stuff gets developed?

5:40

Yeah, absolutely. I think it's a question

5:42

I get fairly often. It's, hey,

5:45

I just signed up for this Python course.

5:48

What's my next step to becoming an AI

5:50

researcher? And while I do

5:52

think it's important for surgeons to have sort of

5:54

a base understanding of sort of what's going on,

5:57

I can tell you we're never going

5:59

to be as good as the PhD

6:01

engineers who are doing this for

6:03

the 15 hours a day that we're otherwise spending

6:05

in the operating room. So

6:07

in my opinion, surgeons really need

6:09

to leverage their expertise. And what's

6:12

their expertise? It's around clinical care.

6:14

It's around thinking about how this is going to impact

6:17

our patients. It's about

6:19

thinking on how this is going to

6:21

impact our trainees and the next generation.

6:23

And how do you use these types of technologies

6:26

in a safe way and in a

6:28

meaningful way? I

6:30

think there's a lot of times where an

6:33

idea can come up that comes

6:35

from non-clinicians that says, oh, we

6:37

built this technology and we created

6:40

this application of it. Can

6:42

you please use it and tell us how it works? And

6:45

then we sort of look at it and say,

6:47

well, actually, this doesn't fit our workflow at all

6:50

and this doesn't give us any meaningful information from

6:52

which we might be able to do assessment. And

6:55

rather than saying, go back to the drawing board and

6:57

come back to me again with a new idea, I

6:59

think surgeons would say, hey, let me

7:01

talk to you about what my experience is

7:04

and how I see this technology

7:06

potentially impacting education,

7:09

training, outcomes, et cetera.

7:12

You tell me, is that feasible? Is that

7:14

doable? It really needs to be about having

7:16

a conversation and building an interdisciplinary team that

7:18

can tackle these topics together. So

7:21

you've been involved in a lot

7:23

of projects that use AI intraoperatively,

7:25

which are very interesting, including the

7:27

GoNoGoNet project, which

7:30

uses AI to help with intraoperative

7:32

decision-making during gallbladder

7:34

dissection. Can you

7:36

discuss what your experience was like in

7:38

the creation and implementation of this project?

7:41

Yeah, happy to. And I need to give a

7:43

shout out to my friend, Amin Madani, who's an

7:46

endocrine surgeon at the University of Toronto. He

7:48

really sort of led the charge on this. And

7:51

the way that came around was it was

7:53

never initially intended to be a, can we

7:56

use AI in the OR type of project.

7:59

In his... PhD work that he

8:01

was doing during residency, Amin

8:04

was studying sort of decision-making in

8:06

surgeons and trying to really understand

8:08

the mental models that

8:10

surgeons were developing around understanding safe

8:12

and unsafe planes of dissection. And

8:16

to do that, he and his friend

8:18

Robert Messina had developed a web platform

8:20

that allowed surgeons to view

8:23

a video of a laparoscopic

8:26

cholecystectomy and then they'd be

8:28

asked to mark up, where do you

8:30

think is a safe place to do a dissection? Or

8:33

where is an area on this particular image

8:35

that you would not want to do a

8:37

dissection because you're worried about an injury to

8:39

a critical structure? And

8:42

that allowed him to gather data

8:44

from experienced surgeons as well as

8:46

trainees and compare what the

8:48

differences were in terms of where is

8:50

that safe and unsafe area. And

8:53

when we sat down and looked at the

8:55

data together, it dawned on both of us

8:57

that, these were actually just labels or annotations

9:00

that we could feed to computer

9:02

vision algorithms to see if they

9:04

could also learn where are

9:06

safe and unsafe areas of dissection.
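To make that concrete, here is a minimal sketch, in PyTorch, of how pixel-level safe/unsafe annotations could be fed to a segmentation network. This is an illustrative toy, not the published GoNoGoNet code; the tiny architecture, the loss, and the random stand-in data are all assumptions.

```python
# Illustrative sketch: training a binary "go / no-go" segmentation model
# from expert pixel annotations. Toy architecture and fake data; this is
# not the published GoNoGoNet implementation.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """A deliberately small model mapping an RGB laparoscopic frame to a
    per-pixel logit for 'safe to dissect here'."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),  # one channel of logits: go vs. no-go
        )

    def forward(self, x):
        return self.net(x)

model = TinySegNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()  # per-pixel binary classification

# Stand-in batch: 4 frames (3x256x256) with expert masks (1 = safe zone).
frames = torch.rand(4, 3, 256, 256)
expert_masks = (torch.rand(4, 1, 256, 256) > 0.5).float()

for step in range(10):  # real training would loop over many annotated videos
    loss = loss_fn(model(frames), expert_masks)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```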

9:09

And that project really became sort of the first

9:11

project that led to the founding of the

9:14

Global Surgical AI Collaborative because we

9:16

wanted to train a model that

9:18

was robust to different types of

9:20

data. So coming from different types

9:22

of institutions, different kinds of practice

9:24

patterns, and we were able

9:26

to sort of scale across several

9:29

different institutions, both academic

9:31

centers, community hospitals, rural hospitals, et cetera,

9:33

to really see if this type of

9:36

algorithm could detect these safe and unsafe

9:38

areas of dissection in all manners of

9:40

different gallbladders. And

9:42

that has since sort of grown. In

9:44

fact, he released a mobile game based

9:46

off of this that you

9:48

can download from the App Store on

9:50

your iPhone or Android that actually takes

9:53

some of these frames of cholecystectomy, looks

9:55

at the safe and unsafe zones that

9:58

were generated by the GoNoGoNet

10:00

algorithm and gamifies it.

10:02

So it creates a scenario where you can look

10:05

at the video, you can mark out where

10:07

you'd want to do your next step in

10:09

your dissection, and it gives you a

10:11

score compared to the algorithm and compared to

10:13

expert annotators who participated in the original project.

10:16

So it's really been very cool to see

10:18

how that has grown. How

10:20

good is that thing at predicting what

10:22

would be the experts' opinion? Yeah,

10:25

so it's pretty good, especially the first

10:27

iteration of it. We're hitting

10:29

somewhere around the 70 to 80 percent

10:32

mark in terms of matching up with

10:34

where an experienced surgeon might want to

10:37

do their dissection, but subsequently

10:39

we've taken advantage of newer types

10:41

of algorithm architectures and have improved

10:43

that to above 90 percent. So

10:45

it actually fits very nicely with the mental

10:47

model of some very experienced HPB surgeons and

10:50

people who, for example, sit on the SAGES

10:52

Safe Chole task force who are kind enough to

10:54

help label some data and review some of

10:57

the data for us.
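The episode doesn't say which agreement metric sits behind those percentages; one common way to score a predicted region against an expert-annotated region in segmentation work is the Dice overlap, sketched here as an assumption.

```python
# How "matching up with the expert" might be scored. The metric is an
# assumption; Dice overlap between two region masks is a common choice.
import numpy as np

def dice(pred_mask: np.ndarray, expert_mask: np.ndarray) -> float:
    """Dice coefficient between two boolean masks (1.0 = perfect overlap)."""
    intersection = np.logical_and(pred_mask, expert_mask).sum()
    total = pred_mask.sum() + expert_mask.sum()
    return 2.0 * intersection / total if total else 1.0

pred = np.zeros((256, 256), dtype=bool)
pred[50:150, 50:150] = True
expert = np.zeros((256, 256), dtype=bool)
expert[60:160, 60:160] = True
print(f"Dice agreement: {dice(pred, expert):.2f}")  # 0.81 for this toy pair
```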

10:59

So as you know, we're supposed to

11:01

be assessing our trainees using a

11:04

competency-based framework. In thinking

11:06

about this particular project, how do you

11:08

think that fits in with this idea

11:10

of competency-based assessment? I

11:13

think that's a great question, and not surprisingly

11:15

coming from you because you have such expertise

11:17

on this and in fact I might at

11:19

some point spin that back around to see

11:22

what your recommendations are for us. Here

11:25

I think it's helpful because

11:27

in some ways, it's one

11:29

thing to take a

11:31

trainee through a case and try to

11:33

better understand, okay, is this trainee competent

11:35

to perform this, let's

11:38

say, cholecystectomy independently

11:40

with or without different

11:42

levels of supervision as they're doing it? And

11:44

it's another to try to break

11:47

down what parts of that operation

11:49

are potentially prohibiting them from reaching

11:51

competency. Obviously, an operation

11:53

is sort of the merger of

11:56

different elements of education. So one

11:58

is obviously the decision-making component. So,

12:01

understanding: where

12:03

is it that I want

12:05

to place this particular Maryland grasper

12:07

tip, so I

12:09

know that I'm not going to injure

12:11

a critical structure? And that comes from understanding

12:13

what your anatomical landmarks are, where the

12:16

boundaries are, understanding principles of traction and

12:18

counter-traction, and exposure, and working with your

12:20

assistant to get the optimal views. These

12:22

are all things that come together

12:24

beyond just the "can I move my

12:26

hands in a certain way to put

12:29

my instrument tip where I wanted to." And

12:31

I think this element of can you take

12:33

the video and sort of bring it out of

12:35

the operating room so we can try to

12:37

break down: what is it

12:40

about your mental model of

12:42

perceiving the appropriate next step

12:45

or the appropriate plane? And

12:47

then we can try to give you feedback

12:49

on that, and can we say that,

12:51

from a decision-making standpoint, you are

12:53

appropriately visually perceiving the landmarks such that

12:55

you can make a safe decision around

12:58

what is a quote-unquote go zone

13:00

or a no-go zone. And

13:02

that allows, I think, an

13:05

educator to say, with greater confidence, okay,

13:07

I understand your decision-making process, or at

13:09

least I agree with your decision-making process.

13:12

How does that translate, now that we're

13:14

in the operating room and I am sort

13:16

of observing you or assisting you in

13:18

accomplishing the goals of your mental representation

13:20

of how the surgery should go. You've

13:22

got the experienced binoculars here; you

13:25

can look down further than a lot

13:27

of others. What do you see

13:29

coming down the road in

13:31

how we teach our surgical trainees,

13:34

that AI is really going to

13:36

be impactful for, either positively or

13:38

potentially negatively? Yeah. I

13:40

really, thank you. I'm going to harp

13:42

on this computer vision piece because

13:44

I am very biased. That's

13:46

the majority of what our lab

13:49

does. But I think

13:51

the growth in interest in using

13:53

video to sort of replay

13:55

performance and get feedback on performance, I

13:57

think, is going to play a very

14:00

big role in where this is gonna

14:02

go forward. I think that as

14:04

technologies are getting better and applications

14:06

are coming to market that allow

14:09

trainees and faculty members. To.

14:11

Just take clips of their

14:13

videos. right? And then use

14:15

that to guide us. feedback

14:17

session. At between a faculty member in

14:19

a trainee or even for pure coaching I had

14:21

so a senior as in a junior resident or

14:23

maybe two reasons for the same level. Added

14:26

you're going to start see greater engagement. With

14:28

visual media. I. And I think

14:30

that. Artificial. Intelligence tools can help

14:33

with. I have seen they can do the

14:35

automated segmentation the steps of the procedure. Was.

14:37

Shown with gonna go nods and and

14:39

said calling up from the Strasburg group

14:41

that you can automatically segment out different

14:44

anatomy and the key structures and things

14:46

like that. It's. So it takes out

14:48

a lot of the the manual labor of

14:50

can I just prepare this video to the

14:52

point where it's gonna be useful. says he

14:54

back in coaching. I. Do think

14:56

that we're gonna see an increase

14:58

in through these a quantitative metrics

15:00

of performance. So Andrew hung from

15:02

the or see I think is

15:04

now Cedar Sinai Urologist. His.

15:06

Group has been very advanced. I'm

15:08

thinking about this automated performance metrics

15:10

that they're gathering from the robotic

15:12

platforms and they've been able to

15:14

show. That. Quantification.

15:18

Of surgical gestures. In.

15:20

A robotic procedures such as possible to me.

15:23

Can. Correlate with outcome so they can

15:25

look at it but we called a killer

15:27

matic profile or how are these robotic arms

15:29

moving during a case. That

15:31

actually predict whether that patient is

15:33

going to have a better functional

15:35

outcome. And ten eyes and categorize

15:38

a surgeon. Based. Off of

15:40

those kinda mad x into and experienced

15:42

are super expert surgeon. And.

15:44

An inexperienced surgeon. And then

15:46

can I try to get that inexperience surgeon?

15:49

To. Match.kinda matic profile.

15:52

Of the super experts such had, they

15:54

also have better outcomes for their patients.

15:57

So. i think what you're going to see is sort of

15:59

this novel use of data in terms

16:01

of providing more specific and

16:03

quantitative feedback to trainees. It's

16:05

almost like what we do in sports now, right? In

16:08

fact, the other day I was at a kid's soccer

16:10

game and the coaches had this

16:12

camera on the side of the field that

16:14

was recording the entire field and later on

16:16

they were using it to provide feedback and

16:18

they were taking very advanced measurements for a

16:20

bunch of seven-year-olds playing soccer to

16:23

try to do coaching strategy and give feedback to

16:25

these kids. And I was kind of

16:27

amazed. I'm like, well, we can do this for seven-year-olds

16:29

in soccer. I don't understand why we're not doing something

16:31

similar here for surgery to make our trainees better at

16:33

taking care of people. How

16:35

do you envision this technology will be

16:37

used in real time in the OR

16:40

in the near term? Like, do you think it's

16:42

ever going to stop surgeons from dissecting

16:44

in no-go zones or doing other unsafe

16:47

movements like what you're talking about? I

16:49

think that would potentially ultimately be the goal. I

16:51

don't know about stopping in the sense of stopping

16:53

the hand or stopping the robot or whatever it

16:56

might be, but I think it needs

16:58

to be a collaborative decision-making process. Obviously,

17:00

we have a long way to go. There are a

17:02

lot of hurdles to get through for FDA approval and

17:04

things like that for an algorithm

17:06

that functions intraoperatively in real time

17:09

to impact performance. That's a huge

17:11

hurdle to climb from a regulatory

17:13

perspective. I know the FDA is

17:15

thinking about it, but there's no clear guidance yet on

17:17

what those types of algorithms need to look like. But

17:20

my hope, or at least my early vision of

17:22

what I think that's going to look like, is

17:25

basically additional data that's provided to the

17:27

surgeon such that the surgeon can augment

17:30

their decision-making. It may not even be

17:32

that it's running all the time. It's

17:35

probably initially going to be a system where a

17:38

surgeon can say, I think this is what's going

17:41

on. I would like some additional

17:43

data on it. Let's turn

17:45

on the data visualization platform. If

17:48

you're in the Iron Man movies, let's

17:50

turn on Jarvis. Let's ask Jarvis what

17:52

these coordinates are, what these calculations are. You

17:56

can get a better sense of that from a

17:58

data perspective. The surgeon can

18:00

take that into consideration with their

18:03

personal experience as a clinician and

18:06

then integrate those two together to make a decision on

18:08

what to do next or how to proceed in a

18:10

given operation for that patient that's in front of you.

19:12

So, you mentioned

19:15

Jarvis. I'm going to bring up

19:17

Hal from an earlier movie. Okay.

19:20

So, Hal in the

19:22

Stanley Kubrick movie, Space

19:24

Odyssey. Yeah, 2001 Space Odyssey.

19:28

Yeah. Anyway, Hal takes

19:30

over the space station and destroys

19:33

everyone, decides Hal knows

19:35

what's best. So, how do

19:37

we prevent that sort of doomsday

19:39

model as we continue to develop these things? And

19:42

maybe it's not going to be that Hal's taking over

19:44

the robot and dissecting the

19:46

surgeon, but perhaps

19:49

making unwise decisions or

19:51

providing the wrong guidance or, you know,

19:54

as we're using this for higher and higher stakes

19:56

decisions, not giving us the

19:58

best information. I

20:00

think that's where the regulatory component becomes

20:02

key because we know that models drift.

20:04

So what do I mean by that? Once

20:07

you train a model, the current regulatory framework

20:09

is that you have to kind of lock it in place.

20:11

If you're going to do a next

20:14

iteration or an update, you have to sort of

20:16

resubmit that data so that they can make sure that

20:18

it's safe to release that

20:20

next iteration. But as you

20:22

collect data, practice patterns

20:25

change. And as you

20:27

use technology, your practice pattern changes.

20:30

And that can cause you to

20:32

sort of drift out of the

20:34

original distribution of data on which

20:36

those algorithms were trained, such

20:38

that even just a couple

20:40

of months after an algorithm gets

20:42

released, it could potentially already become

20:45

outdated and give wrong recommendations.
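One simple way, among many, to watch for the drift being described is to compare the distribution of an input feature seen in current practice against the locked model's training distribution, for instance with a two-sample Kolmogorov-Smirnov test. The feature, sample sizes, and alert threshold here are illustrative assumptions.

```python
# Illustrative drift check: has a feature's live distribution moved away
# from the distribution the locked-in model was trained on?
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
training_feature = rng.normal(0.0, 1.0, size=5000)  # stand-in training data
recent_feature = rng.normal(0.4, 1.0, size=1000)    # practice patterns shifted

stat, p_value = ks_2samp(training_feature, recent_feature)
if p_value < 0.01:  # threshold chosen for illustration only
    print(f"Possible drift: KS={stat:.3f}, p={p_value:.2e} -> flag for re-validation")
```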

20:47

That is a very real question and a

20:49

big fear that a lot of us have

20:52

as we're developing these technologies is how

20:54

do we account for that? And how

20:56

do we control for that? And how do we ensure that

20:58

that's safe? And we're exploring

21:01

different types of techniques in terms

21:03

of looking at explainability, for example,

21:05

in terms of trying to better

21:07

understand why a given algorithm might

21:10

be recommending XYZ type of step

21:12

or thinks that this plane is better than

21:15

the other plane.
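No specific explainability method is named in the episode; one common gradient-based example is a saliency map, which highlights the pixels that most influence a model's output. A minimal sketch, with the model and frame as placeholders:

```python
# Illustrative gradient-based saliency: which pixels most change the
# model's output? One common explainability technique among many.
import torch

def saliency_map(model: torch.nn.Module, frame: torch.Tensor) -> torch.Tensor:
    """frame: (1, 3, H, W). Returns an (H, W) map of |d output / d pixel|."""
    frame = frame.clone().requires_grad_(True)
    model(frame).mean().backward()  # scalar summary of the prediction
    return frame.grad.abs().max(dim=1).values.squeeze(0)  # max over RGB

# Usage with any frame-level model, e.g. the toy TinySegNet sketched above:
# sal = saliency_map(model, torch.rand(1, 3, 256, 256))
# High-saliency pixels hint at what the model is "looking at" for its call.
```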

21:17

But ultimately, that's why I think a lot of

21:19

us envision this as being an augmenting

21:21

technology instead of a replacement technology, because

21:23

at the end of the day, it

21:26

does require a human being

21:28

with surgical experience to

21:30

look at that and say this

21:33

is or is not appropriate for

21:35

the given clinical scenario. And

21:37

so in reality, what we really

21:40

fully expect is that the clinician is going

21:42

to have to pay just as much, if

21:44

not more attention than they do

21:46

without an AI algorithm to ensure that

21:48

this is implemented safely. I know

21:51

there's a lot of concern about de-skilling. So

21:53

what happens if you give people an AI

21:55

algorithm that helps them do a dissection, and they turn

21:58

off their brain? When

22:00

you use GPS, sometimes you sort of forget

22:02

to pay attention to where you're making your left-right turn. The next thing

22:04

you know, you're at your destination, you don't know how you got there.

22:07

I really don't think that that's going

22:10

to be wise for a surgeon

22:12

to do that when we have these intraoperative AI

22:14

systems running because it's going to be very

22:16

important to sort of keep that algorithm honest.

22:20

So keeping in line with some of the

22:22

barriers to the use and advancement

22:24

of AI, there are several

22:27

ethical and legal implications that

22:29

are related to this in

22:31

surgery, data protection, error accountability,

22:33

limiting bias with equitable data

22:35

sources. So how do you

22:37

balance those challenges in your

22:39

work? Yeah, it's

22:41

very, very difficult, particularly around

22:43

considering the appropriate data sources.

22:46

As we sort of know, just in general from anything

22:48

that we do in medicine, most,

22:51

if not all, of our data sets are biased

22:53

in one way or another. And that's just as

22:55

true, if not more true, for

22:57

the types of data that we're collecting for

22:59

our AI studies. Obviously,

23:01

there's a minority of surgeons

23:03

and institutions that elect to

23:06

record their videos

23:08

of their cases, or to provide

23:11

their data for training AI algorithms.

23:14

And that can include text data, not just

23:16

video. So that can include, for example,

23:18

the EMR notes and things like that, or potentially

23:20

even your assessments as a resident, your milestones, your

23:22

EPAs, et cetera. So the

23:24

data that we do get is very much

23:26

biased toward institutions that are already sort

23:28

of thinking about using this data in this

23:31

way, but it may not

23:33

be reflective of actual practice. And

23:35

what we have to do is really understand the

23:37

distribution from which that data comes from. And

23:40

to the question that came up earlier, what's the surgeon's

23:42

role in this? The surgeon's role

23:44

is not just in developing and using the

23:46

AI technology, but evaluating it. So

23:49

if you are told, I

23:51

have an AI algorithm that can do this for

23:53

you as a surgeon, I do think

23:55

it's the surgeon's responsibility to look at that

23:57

data critically, whether it's a paper or a

24:00

pamphlet or some sort of product brochure

24:02

and ask those very serious questions

24:05

of, was this algorithm trained on

24:08

a population that is reflective of the population

24:10

in which I plan to use it? Is

24:13

this going to be really the best thing for my

24:15

patients? There have been a lot of studies coming out

24:17

in the radiology literature, for example, showing that

24:19

real world performance of algorithms

24:22

that were otherwise incredibly

24:24

impressive in the clinical trials that

24:26

were submitted to the FDA for

24:29

approval is absolutely abysmal when they get

24:31

into the real world and the incidence of disease is

24:33

like 2% versus 50% in the initial evaluation

24:37

data.
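The 2% versus 50% point is Bayes' rule at work: holding sensitivity and specificity fixed (illustrative values below), the positive predictive value collapses at low prevalence.

```python
# Why prevalence matters: same test characteristics, very different PPV.
def ppv(sens: float, spec: float, prev: float) -> float:
    true_pos = sens * prev
    false_pos = (1 - spec) * (1 - prev)
    return true_pos / (true_pos + false_pos)

for prev in (0.50, 0.02):
    print(f"prevalence {prev:.0%}: PPV = {ppv(0.90, 0.90, prev):.1%}")
# prevalence 50%: PPV = 90.0%
# prevalence 2%:  PPV = 15.5%
```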

24:39

And so that has led to some very strong questions

24:41

around what it's going to mean to implement

24:44

these in a safe way. But

24:47

100%, we really have to think about biases

24:49

and things that come into that. And

24:51

then also we think about, you know, who are

24:53

the types of patients who are going to be

24:56

willing to donate their data? Or if they are

24:58

donating their data, is it being collected in a

25:00

way that's ethical, that's equitable

25:02

and fair? How

25:05

do we, and our trainees, as we're

25:07

reading the literature that's coming out, and

25:10

stuff that's coming out in the surgical literature, that says,

25:13

We used an AI algorithm to do this.

25:15

We used an algorithm to

25:17

show that. How do

25:20

we determine that that's really okay and

25:22

that we really understand where that's coming from and

25:24

that it's going to be applicable to

25:26

our situations? Yeah,

25:29

thank you. That's a great question, and I'll

25:31

tell you, it's a flood of literature coming

25:33

out now. Now that a lot of these

25:35

tools are much more accessible, it's

25:38

really becoming much more widespread. And

25:40

I think it's really relying on

25:42

first principles of research. So it's

25:44

not even just thinking about, you

25:46

know, questions that

25:48

are specific to AI research. These

25:50

are just questions about research methodology

25:52

in general, right? So you

25:54

want to look at what's the size, obviously, of

25:56

that population. Where is that population

25:58

being drawn from? And then when

26:00

you get down to the modeling questions, again,

26:03

it's really thinking about the phenomenon of

26:05

interest. So focusing less so on what

26:07

was the name of the algorithm that

26:09

they used, but really drilling down

26:12

to what was the question being asked? We're

26:14

kind of talking about education right now. I've

26:16

seen some things come up to say, oh,

26:19

we built this algorithm to automatically

26:21

assess the competency of a trainee

26:23

in doing XYZ task. And

26:25

it's always interesting. Oh, I didn't know we had

26:27

sort of a validated way of assessing competency for

26:30

that task. Let me read this. And

26:32

then you read the paper, and they

26:34

had sort of defined competency in a

26:37

very narrow way that was very specific

26:39

to their use case, but hadn't otherwise

26:41

been investigated for any real

26:43

sort of applicability outside that particular

26:45

research study. So then you got

26:48

to wonder, okay, did AI actually determine

26:50

the competency of doing XYZ task in this

26:52

paper, or was it just that AI

26:54

was demonstrated to be able to do this

26:56

task that was specifically defined for this

26:58

paper itself? Right? So those are

27:00

the types of questions to look at from the lens

27:02

of just general research

27:05

methodology. Right

27:07

now, all of our research fellows take

27:09

a statistics course because it's important for

27:11

them to understand that element of how the

27:13

research is done, where it's coming from. It helps

27:16

them to better understand what they're reading.

27:19

Should we start having them take some type

27:22

of foundational AI course or coding? I

27:25

think a conceptual AI course would probably

27:27

be more helpful. But

27:29

I will say that having a

27:31

strong basis in statistics is as

27:33

important as sort of building

27:35

in some component of AI education. Although

27:38

I will say the Royal College of Physicians and

27:40

Surgeons of Canada a few years ago released a

27:42

report where they actually suggested

27:44

that digital health literacy become

27:47

a new fundamental competency in

27:49

the Canadian system because of

27:51

the anticipated growth and

27:53

the expected importance of

27:56

digital health in delivering care. So

27:59

that means trying to understand, okay, what do these

28:01

technologies mean? How do I interact with a

28:03

computer scientist or a data scientist who

28:06

may become a part of a healthcare team

28:08

who helps to interpret these types of

28:10

tools such that I can be

28:12

a competent and safe physician in the new era?

28:16

Well, thank you so much for all

28:18

of your insights. It's been very helpful

28:20

to better understand this topic that I

28:22

personally don't have much understanding of. So

28:25

can you go ahead and give our

28:27

listeners an educational timeout, some key takeaways

28:30

on AI and surgical education that they

28:32

should leave this podcast with? Yeah,

28:35

absolutely. I think the number one

28:37

thing is that as magical as AI

28:39

can seem, when you

28:42

are thinking about artificial intelligence

28:44

applications and thinking about, for

28:46

example, papers and things like

28:48

that, it's really less

28:50

about the AI and more about

28:52

first principles in research. So

28:55

any other approach that you would take

28:57

in evaluating a research study, the

28:59

same approach is going to apply to

29:02

evaluating an AI study. Just

29:04

like you may not understand or have

29:06

heard about every single

29:08

statistical analysis in a clinical

29:11

trial, you may not have

29:13

heard about every single type of algorithm that's going

29:15

to be presented in a study that uses AI. But

29:18

that doesn't mean you don't already have expertise

29:20

around understanding good fundamental

29:22

research methodology and

29:24

more importantly, understanding what

29:26

is the implication of that for

29:29

clinical care, for education and training,

29:31

for learning, for teaching, et cetera.

29:34

And so it's not being

29:36

intimidated by the subject

29:38

matter and really relying on the

29:41

excellent training and education that you

29:43

have already had around science and

29:46

understanding science to get you through that.

29:48

And I think that's probably the most

29:50

important lesson to take away from this

29:52

because it's very easy to get intimidated

29:54

by it, thinking, oh, we

29:56

weren't exposed to coding and things like

29:58

that when we were... pre-meds or in

30:00

medical school, but we were

30:03

very much trained to think scientifically

30:05

and to evaluate literature in

30:07

a rigorous fashion. I'm gonna

30:09

ask one more. I'm gonna give you an

30:11

opportunity to become very very famous here. You

30:15

know we brought up the Stanley Kubrick movie

30:17

2001 Space Odyssey that

30:19

was released in 1968. So if

30:21

you're now looking down the road another

30:24

40 years, where do

30:26

you realistically see AI

30:28

taking us in surgery and surgical education?

30:31

Yeah, I think that realistically 40

30:34

years out, I mean at that

30:36

point I would fully expect that

30:39

we have hopefully been able to put

30:41

together a database

30:44

of outcomes that are

30:46

linked to trainees such that

30:48

as you are moving through your training

30:51

process, right, evaluation becomes

30:53

much more quantitative and outcomes

30:55

based. So it becomes

30:57

less about, oh let me

31:00

check off these Likert scale things on your

31:02

evaluation to tell you that you're competent and

31:04

ready to graduate. It's, let

31:06

me look at the data about you

31:08

and your performance, what you've

31:10

done in the operating room, what's your kinematic

31:12

profile, what's your outcome profile, what is your

31:14

decision-making profile based off of the orders that

31:16

you have entered relative to

31:18

when you access certain results that

31:21

can allow me to infer what your

31:23

decision-making process is like for

31:25

x, y, z disease process and

31:28

really create a comprehensive picture of

31:31

who you are as a clinician and

31:34

and whether you're ready

31:36

to graduate from a competency-based perspective.

31:38

That I think is sort of what I see about

31:41

that 40-year time frame because I do

31:43

fully expect that we're going to have

31:45

better pipelines for the data and

31:47

things like that. I assume we'll

31:49

call the computer Dan. Although

31:53

I will say one thing that I always

31:55

bring up to people is, you know, I

31:57

always ask where was the first self-driving car

31:59

demonstrated? And I'll ask you, do you

32:02

remember the first self-driving car? No,

32:05

I have no idea. I

32:07

used to think it was like, it must have

32:09

been like mid 2000s or something like that, but

32:11

it's actually 1987, if I remember correctly, on the

32:14

Autobahn. They demonstrated a self-driving car. At that

32:16

point in time, you can actually

32:18

look up some of the newspaper clippings and everything. Everybody

32:21

was convinced that we were going to have self-driving cars

32:23

by the mid 90s. And

32:25

here we are in 2023, and I don't think we're that

32:27

much closer with how things are going. So,

32:30

and that was about like 30, almost 40 years

32:32

ago at that point. So I could be way

32:35

off. Well, Dan,

32:37

thank you so much for your time.

32:39

This has been incredible, really insightful, and

32:41

I love your optimistic view of

32:43

the future of AI. Hopefully,

32:46

we'll all be around to see it. If

32:48

it doesn't take us out first, right? Be

32:52

sure to check out our website at www.behindtheknife.org

32:54

for more great content. You can also follow

32:56

us on Twitter at Behind the Knife and

32:58

Instagram at Behind the Knife podcast. If you

33:00

like what you hear, please take a minute

33:02

to leave us a review. Content

33:04

produced by Behind the Knife is intended for

33:06

health professionals and is for educational purposes only.

33:09

We do not diagnose, treat, or offer patient-specific

33:11

advice. Thank you for listening. Until

33:14

next time, dominate today. Every

33:26

fan knows the right player in the

33:28

right position can be a game changer.

33:31

Put LifeLock between your identity and identity thieves

33:33

to monitor and alert you to threats you

33:35

could miss. Plus, with a

33:37

U.S.-based restoration specialist on your team, you

33:39

won't have to face drained accounts, fraudulent

33:42

loans, or other losses from identity theft

33:44

alone. All backed by

33:46

the LifeLock Million Dollar Protection Package. Change

33:49

the game on identity theft. Save

33:51

up to 25% your first year at

33:53

lifelock.com. On

34:00

top and they'll boosted by three percent. You can do

34:02

that and if you transfer, any were. Present.

34:05

On top of that, a no limit to the

34:07

matte, no than having to go guessing the biggest

34:09

contribution matter of any I agree on the market.

34:12

Sign up for Robin Hood Gold at running with

34:14

on com flash lose by April thirtieth subscription fees

34:16

apply. investing about refugees them as required. Go for

34:18

one year from first Match must be very for

34:20

five years match on transfer such as additional freshly

34:22

condition for a minute financial of remember as I

34:24

can. Just. A

34:26

smidge know. But if you sign up

34:28

for Robin Hood goals ira with a

34:30

three percent match, you can get up

34:33

to one hundred ninety five dollars for

34:35

the twenty twenty three tax. Or oh

34:37

yes, sign up a decimal. Don't lose

34:39

contacts day to get the biggest contribution

34:41

match on the market. Subscription fees, Opinion.

34:46

Investing involved with three percent match requires gold

34:48

for one year from first met Monkey Virus

34:50

for five years lot of his financial hello

34:52

sees member as I just. Waiting

34:56

on a tax return and hopefully it ends

34:58

up in your hands. Fraudulent tax returns due

35:00

to identity has increased by thirty percent and

35:02

twenty twenty three if you're in a bind

35:04

this tag please and Life Lock can help

35:06

or us based restoration specialists are experts dedicated

35:08

to helping solve your identity theft issues and

35:10

lowlife like plants are backed by the million

35:12

dollar protection targets so will reimburse you have

35:15

to the limits of your plan if you

35:17

lose money to the identity theft. help protect

35:19

your information that taxis and whitlow flocking set

35:21

to twenty five percent your first year with

35:23

thing or code news it later.

Unlock more with Podchaser Pro

  • Audience Insights
  • Contact Information
  • Demographics
  • Charts
  • Sponsor History
  • and More!
Pro Features