Higher Ed & AI


Released Wednesday, 3rd April 2024

Episode Transcript



0:00

Thank you, everybody, for joining us on the Boring AI Show.

0:02

Today we're talking about higher education with Dr Emily Bailey, Associate Professor of Philosophy and Religious Studies at Towson University and a member of the AI Task Force.

0:14

So thank you for joining us today, Emily. Thank you for having me. Awesome.

0:19

Well, Tali, you want to kick us off with the news, and then we'll get into our topic.

0:24

Let's do it. So the news article today is about why AI is so bad at spelling, which is really interesting and kind of delves into how AI really works.

0:36

So in this article they talk about how, essentially, AI models use patterns, it's a lot of pattern recognition, and they're getting really good at it locally.

0:48

So, for example, if you have a hand with, you know, six or seven fingers on it, it's like wow, that looks like a finger.

0:56

Those are extra digits that it's kind of adding, but, you know, a hand has fingers, and so that's the pattern that it's creating.

1:03

And a lot of times, similarly with generated text, you could say something looks like an H, that maybe looks like a P, and so they're just stitching together these different patterns with what's most likely in image generation.

1:17

They have an example image there, and I'll share this article in the comments following the show, of a burrito stand with the poorly spelled word burrito, which is kind of funny to look at.

1:36

Yeah, so it's just interesting that, you know, text is actually a lot harder to generate, because the process that ChatGPT or your chatbot is using isn't really spelling as we would think of it.
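
A rough sketch of why that is: models work on subword tokens, not letters. The toy tokenizer below uses a made-up fragment vocabulary purely for illustration, it is not any real model's tokenizer, but real subword tokenizers (such as the byte-pair-encoding tokenizers behind models like ChatGPT) split words into learned chunks on the same principle.

```python
# Toy illustration of subword tokenization. The fragment vocabulary
# below is invented for this example; real tokenizers (e.g. byte-pair
# encoding) learn tens of thousands of fragments from training data.
VOCAB = ["burr", "ito", "bur", "rit", "to", "b", "u", "r", "i", "t", "o"]

def greedy_tokenize(word: str, vocab: list[str]) -> list[str]:
    """Greedily split a word into the longest known fragments."""
    tokens = []
    while word:
        # Try the longest fragments first; single letters guarantee progress.
        frag = next(f for f in sorted(vocab, key=len, reverse=True)
                    if word.startswith(f))
        tokens.append(frag)
        word = word[len(frag):]
    return tokens

print(greedy_tokenize("burrito", VOCAB))  # ['burr', 'ito']
```

The model treats chunks like 'burr' and 'ito' as indivisible units, so it never sees the individual letters inside them, which is one hedged way to picture why painted signage in generated images so often comes out misspelled.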

1:53

And, Tim, I'm not sure if you have anything else to add there, but I thought that was just an interesting article, an interesting callout.

2:00

Yeah, at the end of the article I loved the picture of the music store, because, again, if you're just glancing at it, you're like, yeah, it's a music store, looks like a music store to me. But then they call out that somebody who knows music and knows these instruments would know this is not the right arrangement for the white and black keys of a piano, and this is not the right string structure for a guitar.

2:27

And so, again, getting into that, you have to know what you're looking at, to know that it's not what you're looking at.

2:34

And I feel like that ties into a lot of what we've talked about, Emily, with knowing your topic and knowing your domain, and not just going, oh, AI is the magic answer machine.

2:48

You need to know what's right and what's not.

2:50

And so what were your thoughts on the article, Emily?

2:56

It's definitely a good reminder of why prompt engineering matters.

2:58

So the rules that we take for granted when we think about large language models are really important. And if you use something like generative AI to create new text or new images, the conversation with the machine, any sort of context or nuance or intention, is really in the hands of the prompter.

3:21

Yeah.

3:25

Subtleties make a big difference.

3:27

Yes, yes. Well, I did not know this.

3:31

I've been very closely following Adobe's Firefly implementation, but in this article they said that Adobe has explicitly excluded text from their image model, which I felt was really interesting, to say, yeah, we know it's not good at this, so we're just not going to do it.

3:48

Um, I thought that was very, very fascinating.

3:51

Um, meanwhile, I know Midjourney, or no, Stable Diffusion 3, I think it is, has been really promoting that you can have text in your images now and it's, you know, all useful and good.

4:05

So, you know, I think it's interesting and context is the key.

4:12

You know, making sure that your prompts, like you were saying, Emily, are on target.

4:17

You know, good stuff, but also make sure you question everything and know what you're looking at.

4:24

So if there are too many fingers, it's probably AI-generated, right?

4:30

Totally, and knowing why we're trying to generate something in the first place.

4:34

Now, I know that's our primary article, but I did want to touch on, just because there's been a lot going on in the news AI-wise this week, the GTC keynote.

4:48

Specifically, NVIDIA has been releasing quite a few new products.

4:52

So I just wanted to call attention to that for those listening that that's something to definitely keep an eye on.

4:57

An example of that: NVIDIA has something called Project GR00T, which is a project to develop, you know, with a robotics company, humanoid robots.

5:08

They're also really advancing their chip technology, coming out with new solutions for healthcare.

5:16

So there's just so much going on, and I just wanted to call attention to that for folks listening: this is something that's continuing to grow, and the organizations in this space are coming out with new ideas and projects.

5:32

It feels like daily, if not weekly, and so, you know, just something to think about there.

5:38

And, Tim, I don't know if you have any additional thoughts on the GTC keynote, but I just think that's a big highlight here for the listeners.

5:46

Yeah. So if you're not watching NVIDIA and what they're doing in the world of AI, you have to be.

5:54

This is the company leading on a whole different level.

5:58

OpenAI is leading with a lot of applications.

6:03

Anthropic with Claude 3, and, you know, Mistral is there, you know, creating these models.

6:10

They're leading in software.

6:12

But a lot of these companies are running on NVIDIA systems on the back end and that isn't just the chips, it's also tools like the Omniverse, which is NVIDIA's development platform.

6:23

You know, basically. Super fascinating to see you can create all these plug-in modules to help tie your tools together.

6:31

And as well, you know, NVIDIA has things like having your large language model running locally on your machine through your RTX chips.

6:41

So there is a lot going on in the world of NVIDIA.

6:46

Two things I thought were really interesting from the GTC session.

6:51

One was Project GR00T, which, I did wonder if they got the copyright permission from Disney on that, but it's basically a code library to accelerate the development of humanoid robots, and it's using simulation systems and all kinds of very fascinating technology to do this.

7:15

But the second thing I thought was really interesting was then they paraded out how Disney is implementing these, and they're implementing them as droids in their Star Wars park, and so it's a super fascinating use case.

7:30

So we've talked about, you know, kind of, the idea of

7:33

AI as a representative of your brand in past episodes.

7:36

But this is like a great example of, like, imagine walking through the Disney parks and you're walking beside one of the droids from a movie, you know, and it is one of the droids that they use in the movie, you know, and so lots of fascinating capability there.

7:52

Um, I will say the droids were also super cute, so that adds to the value.

7:58

The humanoid robots they were demoing were like warehouse workers, with the ability to work on an assembly line, the ability to move dishes around, which is funny, because in the International Monetary Fund report that was specifically one of the jobs called out as safe from AI, dishwashers, because of the complex nature of the interactions.

8:21

But it was the demo showing otherwise, so very fascinating.

8:24

If you haven't seen the GTC keynote, we'll post it in the comments.

8:29

Highly recommend you watch it. Emily, how would you feel about running into an autonomous robot in Disney on your next trip?

8:43

In Disney.

8:45

That would be, um, less concerning maybe than in other spaces.

Yeah, like, I don't know if you guys saw the movie I, Robot, uh, which people are like,

8:54

oh, like the book? Which is nothing like the book, but, uh, where they had the, you know, humanoid robots walking around the streets and helping out with tasks and walking dogs and stuff like that.

9:07

I'm curious to see how far we are from starting to see humanoid robots on the street, just walking around, mixed in with people.

9:18

I think we've got a while, but it's going to be different, all right.

9:29

Just an FYI, in terms of the naming convention, it looks like it is an acronym for Generalist Robot 00 Technology, and so I'm not sure if

9:40

there were any issues with the naming with GR00T and Disney there, just because it is an acronym, which is interesting. It just happens to be the acronym that spells Groot, right, a very, very well-known word for a certain thing. So exactly, exactly. Anything else on the news there,

10:02

Tali? That's it. That should do it for today.

10:05

Awesome, all right.

10:07

Well, Dr. Bailey, thank you for joining us today.

10:11

We are very excited to have this discussion about higher ed and AI, and why don't we just start out with some context setting?

10:20

We'll just start with kind of the softball question of you know, how is AI being received in the world of academia?

10:32

Thank you again. I think it's been received sort of on a spectrum of acceptance.

10:38

I mean, there are certainly a lot of faculty members in different institutions sort of embracing the possibilities of AI, especially generative AI, for students in their classrooms across a variety of disciplines.

10:55

And you know, we know that AI has been part of higher ed for a long time outside of that sphere.

11:03

So most learning management systems have basic support for automated grading.

11:08

There's been chatbots and virtual assistants and those things for students.

11:13

But we've sort of stepped out of it just being a sphere where it's used to streamline administrative tasks and is starting to be something that can be integrated into curricula, and so that's a very interesting thing.

11:27

So I think there are definitely folks that embrace that.

11:29

I think there's a lot of dabblers. I would consider myself to sort of fall in that middle category of it's here.

11:36

It's a tool, let's figure out how we can use it, but I have a lot to learn still.

11:40

And then, of course, you always have a contingent, anytime there's something brand new like that, that has that kind of need to stop it.

11:48

It's apocalyptic, it's the end of education as we know it.

11:53

So it's been interesting to follow those conversations, because there's a real range in the way that it's being received and used in classrooms.

12:01

Very cool.

12:04

What do you see as, so?

12:07

We talked about, like, the teachers. What about the students?

12:10

How are the students receiving it? What's their perspective on it?

12:15

It's been interesting to kind of follow. So in my own experience I think, even though our students today are perceived as sort of being digital natives and having a really strong grasp on technology because it's been integral in lots of different ways over the course of their lives, they have very limited comfort zones sometimes with things.

12:38

So I've seen in my own classrooms with generative AI tools that they are less wary, I guess, sometimes of using something that feels informal. But as soon as it becomes a formal activity or exercise, they sometimes feel uncomfortable with what they're able to do with the tool. And so I think sometimes they don't always feel equipped to use it in the way that we maybe would want them to use it in the classroom, as opposed to just typing something in quickly while they're working on an assignment and hoping it generates content for them. And so kind of closing that gap,

13:21

I think has become very important.

13:23

It's a great call, yeah, and it's interesting to hear.

13:25

You know, I think it sounds like there are some differences, from what I'm hearing, between maybe the student and teacher perspectives on AI, and I'd love to hear, I guess, Emily, are there certain use cases where you've leveraged AI in certain assignments, or things that you've seen that have worked and haven't worked, just out of curiosity?

13:42

Definitely.

13:50

So, I have a lot of colleagues in my own institution at Towson University and outside of this context who have spoken about using especially generative AI tools in their classrooms in different ways, having that sense, again, that students are probably using it anyway. So helping them to understand what the possibilities of those tools are and what the limitations of those tools are is really important for students as they're trying to figure out what exactly to do with this new technology that they have access to.

14:19

So in my own classroom, one thing that I found to be sort of helpful is to really emphasize for them that many of these generative tools, like ChatGPT or DALL-E, are really great for brainstorming or creating outlines, kind of collecting and organizing information, but they don't often get into some of the deeper, nuanced stuff that we want students to be thinking about. And so there are pieces of problem solving and critical thinking that are so important for students in higher ed, something that, you know, the generative AI often can't do for them.

15:03

So one activity that I found to be really useful is to have students answer a set of questions about a reading that they've done intensively.

15:13

We do that together in class. They have time, either individually or with a partner or in groups, to go back to the article and answer a very specific set of questions and then share their thoughts and have a discussion.

15:26

And then we go to the AI tools.

15:30

So we've used ChatGPT especially, where they enter the same information into the AI tool to see what it generates for them as a response, and then they have that as a comparison for their own work. And they'll see, often very quickly, that there are parallel points, right, referenced across the two different sets of responses, but that what the AI is generating is often very simplified, it's very generalized.

16:04

I teach in religious studies, and so, you know, the affective pieces of the things that we talk about are often very much missing, since those tools can't do that yet. So students can see: this is what I can use this for, and this is where this is helpful for me, but this is why I still need to do this work myself, and these are the pieces, um, of thinking and being a college student that the AI can't do for me.

Yeah, and going back to the music store example from the article this morning, like, if you don't know what to look for, it looks great.

16:43

Like you know, everything's great, everything's fine, you know the answer is perfect.

16:47

So, um, and I think that's, you know, not just in student work, not just in teaching, but in business as well.

16:57

Like, you know, the ability to know, um, to question the AI is so critical, and that's where, you know, building these critical thinking skills comes in.

17:10

It's just such an important piece of the future.

17:12

Like, uh, and, you know, Emily and I both have religious studies backgrounds, which is, you know, we laugh and say, yes, religious studies, we're going to dominate the AI world, you know.

17:23

But it's a good skill set to come at AI and say, well, why, why is this?

17:32

You know, I think that's a question that I know in my own.

17:36

You know, studies in religious studies, that was the number one question we asked about everything.

17:41

Well, why is that? Why is that?

17:43

Yes, yeah, so from a, from a.

17:49

I love the example you gave, you know, with the exercise of the students doing the work and then the students working with the AI system to

18:01

see the difference. Does that lead to aha moments for the students?

18:01

Have you seen students that are like, okay, I didn't, you know, I was just using this for everything before and now I'm questioning it, or you know, kind of, what's the transformation you're seeing?

18:14

I hope so. I mean, what I've seen, at least in real time after those activities, is that they're often quite surprised how limited the response is when they know the content really well. And so that's a great exercise, and I've heard others do similar sorts of things, for students, I think, to start to realize that when they leave school and they enter the workplace, the tools are still going to be there for them, and they're probably going to be interacting with AI in ways that generations before them never even thought of. But the content knowledge has to be there in order for them to be able to leverage the tool, and the tool can't do all of that work for them.

19:01

Yeah, go ahead, Tali.

19:04

Oh, I was just going to say I think that's a great call out or parallel between the student experience and the employee experience.

19:10

I think there's a lot of employees that we've seen that have.

19:13

You know, they're just starting to dabble in leveraging AI in their day-to-day, and so recognizing it's not just something you can use and copy and paste the output; it's knowing when to use it and when it's really, you know, important to have a human expert, um, reviewing that content.

19:31

So I just like that callout, because I think it parallels really nicely to the business world.

Yeah, totally, and it reminds me, um, this morning, as I was on LinkedIn, I saw a video from the Stephen Colbert show, and he was interviewing somebody, and I will figure out who this was, we'll share the video in the comments.

19:49

But the person was talking about, like, where we are in a massive paradigm shift of how the world is going to be, and the example they were using was, like, you know, back in the 80s.

20:05

It was fairly static, you know, we knew.

20:07

You know, get a good job, work in an office.

20:11

You know, like, and "good job", you know, not to say that all jobs outside of an office are not good jobs.

20:18

Not saying that, just saying you know.

20:20

Again, going into the 80s mindset: go to college, get that white-collar job, succeed, make money, all that good stuff.

20:27

That was kind of the operating model.

20:31

And what did that look like in the 90s?

20:33

Well, it was very similar. What did it look like in the 2000s?

20:36

Very similar. But now, as we look at the world of AI, it really is massively different. Like, we really don't know what that world of work is going to look like in five years, in 10 years, in 20 years.

20:50

You know, as we were talking about GTC, with everything changing daily, you know, what is possible next year? You know, when ChatGPT came out, I think it was a real eye-opener that AI is real, it is going to transform our work, and then you just look at the steady improvements over time.

21:12

You know, I think that's one of our big challenges.

21:16

And so with that I would ask you know, emily, what do you think are some of the biggest challenges to education, higher ed, that AI is posing?

21:26

You know, both for the students and for the teachers and for the administration.

21:32

Sure, oh, there's so much to unpack there.

21:36

I mean, you know, one thing that I think has been very important in these conversations has been, you know, the use of things like predictive analytics to help identify at-risk students and those sorts of things, but those algorithms have, you know, cultural and broader biases sort of infused in them, and they miss the whole picture.

22:00

So there's a lot of ethical considerations there.

22:03

When we think about things like, you know, the digital divide and digital access and digital literacy and what that looks like in an educational context.

22:13

There's also a lot of ethical concerns with what students are doing and academic integrity.

22:20

There are no AI-proof assignments, but there's a lot of effort being put into designing and creating sort of AI-resistant assignments, or AI assignments that help students again to employ the tools of AI, kind of what you mentioned, Tim, not as a magic solution for homework, but how do we teach students to use this well and effectively?

22:49

And one of the big things I think has just been questions, like you mentioned, with the workforce.

22:53

So there's a shift when we think about what an entry level position is going to look like for a student when they graduate and recent graduates, and certainly future graduates entering a very different workforce, have some great opportunities, I think, ahead of them for upskilling, fostering creativity, learning how to problem solve in new ways.

23:14

But this leaves a lot of questions on the table about what it means to be a student right now and what it means to be an educator right now and how we navigate that.

23:24

And there's, I think, again, some really exciting possibilities for that, to sort of rethink the way that we're designing education and and the skills we want students to walk away with.

23:36

Um, but that's another instance of kind of how do we close that gap to get everybody to that point?

23:44

Very interesting. So do you think that, from a, um, do you think that the students are?

23:53

And where I go with this is, like, as we were talking in our pre-show discussion, as an employer, I want people to know how to use these tools.

24:05

I don't know. Smartly, I guess, is the term I'll use.

24:09

Don't just take whatever comes out of them as truth.

24:12

You have to question it, you have to fact check, but you have to know how to use it.

24:16

You have to know how to talk to the machine.

24:19

I always use the Galaxy Quest example, like the woman whose job is to talk to the machine.

24:25

You've got to know how to talk to the machine. Do you feel that the students are kind of growing those skills?

24:31

Do you feel like this is just too new and we're not there yet?

24:35

What are your thoughts?

24:37

Yeah, I think it's very similar to sort of the classroom example, where they are willing and they're very interested and they are already often using these things outside of the classroom anyway.

24:55

But they don't always realize that what you put in is what you get out of the tools.

25:03

So if your prompt is not tailored to generate the kind of response that you need for a question that you want to answer, it can be not just distracting, but very problematic, or it could take you down a path that's not going to help you to effectively problem-solve in the way that you need to.

25:24

So the training in prompting, I think, is really interesting, and I know you can both speak to this a lot better than I can.

25:32

You're coming from a place of a lot more knowledge about that than I have.

25:37

But I'm very curious about what we're going to need to do moving forward to help students learn these tools in the context of the disciplines that they're studying in, which involves a lot of professional development and kind of time and energy on faculty to get them there too, in order for them to be able to step out of our classrooms into the workforce and know how to use that technology and not sort of be starting from scratch.

26:08

Yeah, what do you think, Tali?

26:14

Yeah, I think prompt engineering is definitely a place to start.

26:17

What I've really liked, I guess, two things come to mind.

26:19

One, I have a few friends currently in grad school who have some really interesting assignments that they're using AI for, similar to what you called out earlier in the show, um, Emily, kind of using it as a co-creator. So helping, you know, them build out an essay and then using that AI-generated essay to write their own, and so that's a good practice of, okay, this is what they generate.

26:42

Let me analyze that and figure out how I can make this better, and really better understand the gaps in, um, AI outputs, that they, you know, could then apply, hopefully, to their day-to-day jobs moving forward, you know, and understanding how to best leverage AI.

27:00

I think, on the other side of it, it is interesting, especially in the context of philosophy and religious studies, but on a larger note, in a lot of ways it's brought up discussions about humanism and what it really means to be human. And I think,

27:15

Tim and I, on previous episodes, have talked about a skills-based economy and I think that, as we move forward, it's not so much what you know.

27:26

While it matters, you know, what type of degree you have.

27:29

It's more thinking about what are the skill sets, as an employee or as a student or as a professor, that I really bring to the table.

27:38

What's my actual value, separate from what these machines can accomplish, and what are the things that are truly human that AI will not be able to replace?

27:48

And I think that that's a really great conversation piece to allow people to understand when to use AI and, you know, why you use it. And I think, you know, there are certain use cases that are great, and then there are certain use cases that really need to be either reviewed by or done entirely by the human.

28:04

You know, things that involve empathy, creativity, problem solving, and so I think it's really interesting to think about AI from that lens, in terms of, you know, again, what's making us human, is, I think, where my mind goes with that.

28:18

Yeah, and, you know, this is the fact, I feel like, and I use the word fact, that is a subjective fact, I guess: the days of, like, you've got your job and you're in your job and that's just your job, that's over. Like, that was over quite a while ago, but it's going to really be highlighted that it's over in this age of AI, because we all need to be constantly upskilling and constantly learning and constantly driving towards what is next.

28:55

How is this world changing? So, like you had mentioned staff development, I've been in organizations where there was the expectation that, well, if I need this for my job, my employer is going to have to provide it and I'm not doing anything until they do.

29:13

Like, those people are going to be replaced by people who are actively growing and learning.

29:20

And so, from my perspective, um, for students, for professors, for everybody, having the strong desire to learn, that is, like, irreplaceable. Yeah, and it's something at Mind Over Machines we look for in our team members: somebody who's an eager learner, somebody who really wants to grow. Like, man, that's a gold star.

29:46

We want to get that person. Um, for, uh, you know, the students, really instilling that you're not done once you leave college.

29:57

You know, you've got to keep learning and keep growing, um, and, you know, one of the quotes I really like, and I don't

30:04

know, it's been reused and recycled so many times.

30:07

You know, um, AI won't replace your job, but somebody who knows how to use AI will.

30:14

That is so on point, because, you know, again, software development is a great example for us. Tools like GitHub Copilot.

30:24

You know, Microsoft's research shows that GitHub Copilot can accelerate your development by 55%.

30:32

So you're doing your work in half the time, like.

30:36

So who would you hire? A developer who has never used GitHub Copilot or a developer who is very skilled in GitHub Copilot?

30:41

And so when we look at, you know okay.

30:44

So let's translate that to students. How many students are being told it is okay to use AI tools for software development?

30:51

That's a skill we'd be looking for in developers.

30:55

Now we want you to understand what it is to do development.

31:00

We want you to understand what's Tali's role, what's the psychology of the human-computer interaction.

31:07

You need to understand that. But you also need to know where do these AI tools fit in?

31:12

And that's where you have to experiment, explore, try things out, and I think that's where a lot of people are looking for employers to do that for them, not realizing that it's really a shared responsibility between the employer, the employee and government and educational institutions.

31:30

We all need to work together to help upskill.

31:32

A little bit of a soapbox for me, but that's my take on how people can get ready for this workforce: learn, learn, learn and don't stop. And demonstrate that behavior to their students, demonstrate that behavior to their faculty and staff.

31:50

Show that.

31:52

Any thoughts on that?

31:56

That's my soapbox, so feel free to kick the soapbox out from under me.

32:00

What do you guys think?

32:03

Are we in an age where you have to just keep up at all times, or is that just like?

32:07

I don't want to be in that age?

32:13

I don't ever think that encouraging students to be lifelong learners is a bad thing.

32:18

I think there is sometimes the mentality of, like, I got my piece of paper, I walk out the door and, like you said, I step into the workplace, and if I need to learn anything else, then they'll help me get there.

32:40

So having some sort of impetus for being a self-motivated lifelong learner is not a bad thing in my mind.

32:44

Go ahead.

32:44

Tali, I was going to say, I think, with that, one of the biggest skills, too, is just understanding how we learn.

32:48

You know, how are we taking in knowledge, which I think really obviously kind of relates back to philosophy and, you know, religious studies: how do humans learn, take in knowledge, and how can we use AI tools for that learning process?

33:06

I think that's another aspect here which will be really, really interesting, from both a storytelling perspective, you know, leveraging AI to maybe better understand something, or going to AI as a tool to actually learn.

33:17

That, I think, will also be a really I don't know interesting world to explore.

33:21

Yeah, and Emily, something you said really sparked in my mind.

33:26

You know, there was a leadership conference I was at many years ago and the speaker said you can't motivate anybody.

33:34

Motivation is an internal thing.

33:37

You can only motivate yourself, but you can inspire them and inspiration can be external, and so really demonstrating these behaviors, you know, can be inspiring.

33:53

So, you know the lifelong learner, and I think this is where, you know, one of the things we talked about is how can businesses, you know, help students in this age of AI?

34:03

And to me, you know, it's a call out to professionals: you've got to grow. Look at how you're growing in your work, and make sure you share that with other people.

34:15

It's good for you, it's good for them. So, you know, I just really think that's a call out for us, for myself, for Tali, for business leaders: make sure you tell people, yeah, you've got to grow. You've got to care about your personal growth and your professional growth, and you've got to invest in it on your own time.

34:37

Now, your employer should be investing as well, but most of it's going to be you doing your own thing.

34:45

So, all right, I do have one last question here, and I think this is an interesting one, because time and time again I have heard how very differently Mind Over Machines approaches the world of artificial intelligence.

35:00

And you know, one of the things we've joked about but you know, I kind of like to explore it a little bit is what is the intersection between AI and the humanities?

35:24

And I say that because the approach to AI at Mind Over Machines has been formed by humanities people, you know, and so we have approached it as a humanities problem, not an engineering problem.

35:36

And so, you know, Emily, I'd like to kind of hear your thoughts on where do you see an intersection between humanities, or liberal arts, and AI?

35:41

And you know what, what does that mean for liberal arts students and our humanities people going forward?

35:57

That's a great question and I always enjoy our conversations from the perspective of the humanities thinking about technology.

36:00

You know, that infusion of creative and critical thinking, and being aware of the affective, and those sorts of approaches to problem-solving that students learn in humanities fields, I think, are so useful for approaching technology.

36:18

And sometimes there is this disconnect.

36:20

I mean often when I work with humanities students and I say, ok, we're going to integrate this piece of technology, there's sort of that wariness again of, well, I'm not really quite sure how to do that.

36:30

I'm not a computer science major, but kind of finding a point of intersection there I think is often very helpful.

36:38

So I know, from my experience as a religious studies instructor, one of the things that I've seen kind of reflected in that humanities content is a set of questions.

36:54

It's sort of like that question of what does it mean to be a student?

36:57

What does it mean to be a teacher in the era of AI?

36:59

What does it mean to study something like religion when people have access to all of the world's religious knowledge and these sorts of things very quickly and through technology?

37:14

AI is so often associated with secularism.

37:17

I think it's interesting when we start to see some of those intersections.

37:21

There's a work of a British anthropologist that I think is really fascinating.

37:26

Her name is Beth Singler you may have come across her before and she thinks about kind of theistic approaches to AI.

37:33

So she has a study that's fascinating called Blessed by the Algorithm.

37:37

It thinks about how people have started to apply divine-like characteristics, like benevolence or blessings, onto algorithms: you get a better seat on a flight or something like that where an algorithm has helped you.

37:54

But it's given this sort of benevolent, godlike quality in the way that people are approaching it. Kind of using the technology to approach concepts and topics in the humanities that historically have been very distanced from technology,

38:14

I think is fascinating. But there's always these questions, like you mentioned, Tim, that we want our students to sort of continuously return to. And so, for example, there are prayer chatbots now where you can have something like Buddhabot do mantra recitations for you.

38:38

When we look at global religious traditions, ritual usually has efficacy when the actions are done in a certain way, but there's also a specific intention behind them and in the case of the technology doing that work, the actions might be happening perfectly, because it can be designed to do that, but the intention behind it if a machine is doing it and not a human is very much in question.

39:05

And so getting to use technology to help students sort of probe those kinds of questions and think about, well, what is a ritual, and why does that matter, and how is it efficacious for somebody, if we're looking at this in a religious context, raises lots of questions about things like responsible use.

39:22

So it's helpful to think about technology, but it's also helpful for them to revisit some of these age-old questions about the discipline they're studying too.

39:34

So I have not heard of that researcher, but, believe me, that is my weekend plans.

39:38

Sounds fascinating. But yeah, that is interesting, because I see this, and you hear this, like, oh well, you know.

39:53

And again, going back to artists: if you scroll artists on Threads, on Instagram, on TikTok, there is always this commentary to the algorithm, and it is asking for blessing, asking for grace, in that kind of language. I'm very fascinated by that. And from my perspective, I can't emphasize enough for humanities people to go into technology professions, because it has totally been my secret weapon for my career. Because you go in and you're talking, and people expect you to be an engineer and think like an engineer and act like an engineer and be disconnected and impersonal like an engineer.

40:46

No offense, engineers, I'm not trying to call you out; it's just how things tend to be.

40:51

But when you start displaying your humanities aspects like asking why, why are we doing this?

40:59

Who is this for? What's the impact we're trying to create here?

41:03

And it lights up, it just transforms people. And so, in AI, I think we're going to see even more of that, where you have somebody who isn't this engineer mind that's just trying to solve the technical problem, but the human who's trying to solve the human problem.

41:23

We need those people, and so I think this is a great moment for humanities people to really rise up and get involved. And executives like Brad Smith, the head of legal counsel for Microsoft, agree.

41:38

He's been quoted numerous times.

41:40

We need more humanities people in AI.

41:43

I can't emphasize that enough.

41:46

I'll ask another humanities person. Tali, what do you think?

41:50

Yeah, I mean, for me, I'm biased, absolutely.

41:55

I'm coming from a psychology perspective.

41:58

So, you know, I know we've done an episode on human-centered design thinking, and, you know, ultimately the folks hopefully benefiting from and leveraging or using these AIs are humans.

42:10

At the end of the day, these tools should support and be a tool for the people, and if we can use them ethically and responsibly, the outcomes are just really exciting and amazing, and I think we can solve a lot of really incredible issues.

42:25

I think we just need to come together and make sure that we're doing so in a way that aligns with the values of ourselves, our culture, and our society. So, yes, I'm all on board with that.

42:39

Awesome. You guys want to move into wins? Is there anything else on this topic we want to hit?

42:44

Let's do it. Talking about, you know, the positivity, the potential benefit of AI, that's a perfect segue.

42:52

Yeah, tali, you want to kick us off.

42:55

Let's do it. So there's a lot of stuff out there around AI-powered companions, or chatbots; I think we've actually touched on this in the news in a previous episode.

43:12

You know there are some downsides, but there was a recent article that came out about how an AI-powered robot really provides, and can provide, companionship to lonely seniors.

43:18

In this example, there's this woman, Dottie, who lost her husband in 2022, and she really found hope, and it kind of took her out of her depression, through this chatbot, a robot from the company Intuition Robotics. So I just think it's a really interesting use case to leverage AI companions as a way to combat loneliness for certain demographics of the population, and how that could really prolong life and reduce some negative mental health impacts like depression.

43:55

So I just thought that was a really incredible call out and I wanted to bring that to the table.

43:59

You know, as chatbots become ever more present.

44:04

Yeah, and now put the layer of Project Glit on top of that.

44:08

You know, what happens when we can actually make humanoid companions for people cost-effectively? I think that's the biggest challenge, you know, and what are the ethical implications?

44:22

I saw this. There was a company, I cannot remember their name, but this was similar.

44:27

Their paradigm was to make AI companions for the elderly, for things like medication adherence and loneliness.

44:36

But their big push was: we absolutely do not want it to be thought of as a person, we don't want it to be thought of as human. It constantly reminds the elderly person that it is an AI system, and that it can do these things and not those things.

44:56

So very fascinating, yeah, interesting topic, emily.

45:07

What's your win? I think, for me pedagogically.

45:13

It might seem like a little bit of a paradox the way that some folks think about this in education, but I think it's exciting that AI has the potential to make us less complacent, as teachers and students, when it's used as an effective tool and it can help us think in such a variety of new ways.

45:30

So I think that the shift from a mindset of transmitting knowledge to a focus on learning what to do with that information is a win for me.

45:39

Awesome.

45:42

Yeah, I really like that reframing of, like, this was a major win, and here's how. That's cool. My win, I think:

45:55

This week, our team member Nick brought this to me, which I thought was really cool.

46:00

It's called WePaint. So you can go use Midjourney or DALL-E or whatever to create your image, your AI image, and then you submit it to this company, and they have a network of artists that will paint your image on canvas with paint (physical paint, just to be clear), and then they send you your painting. And so I think this is a very fascinating kind of flipping of the whole AI art discussion on its head: no, you generate the image you want us to do, and we'll do it. And I personally have done that myself, where I'll use tools like Adobe Firefly or Midjourney to conceptualize content, and then I will take it into physical media, like charcoal drawings or paintings or, you know, colored pencils, whatever I'm working in.

47:00

So I thought this was a super, super fascinating concept.

47:05

So, yeah, would you guys want painted AI art in your houses?

47:14

I love that. I actually have a friend who is in the digital art space and she's been a little bit frustrated by the takeover of AI and I think that's a great example of how, again, people will always want that human element.

47:28

There's something about that that I think is just really exciting, and I think this really highlights it. Yeah, I love that example, and I'm going to have to share it with her right after this.

47:34

I think she'll be really excited, cool.

47:39

All right, anything else for us today.

47:42

I think that's everything. I think we've covered it.

47:45

Well, thank you, Dr. Emily Bailey, for being with us today and having this discussion.

47:50

This was really fascinating to me and I can't wait to go check out some of the research you were talking about.

47:56

And you know, this perspective on students and teachers and higher ed is super valuable as we look at it as employers.

48:04

You know this is the next crop of workforce that we have coming, so what are we getting?

48:08

So very interesting, thank you.

48:13

Thank you so much for a great conversation Awesome.

48:17

Well, thank you. Tali, what are we talking about next time?

48:21

Next time we're talking about AI and HR, so I can have a conversation.

48:26

Right, awesome. With Jay from HR Geckos, yes. So awesome.

48:30

So this is going to be about HR automation, and, similar to our friends at MeBeBot, HR Geckos has a pretty cool AI capability there.

48:43

So, looking forward to that discussion, and, as always, we will post all the links to the content we talked about in the comments.

48:51

We are working on some significant improvements to the post-show delivery of the Boring AI Show, so stay tuned for that. And, as we talked about last time, we're looking for topics for our One Boring Day concept, which is a day-long Boring AI Show.

49:10

So if anybody has thoughts on what they'd like to see there, we'd love to hear from you. Post them in the comments below, or just send us a message with your thoughts.

49:17

With all that said, thank you everybody for joining us today and we will see you in two weeks.

49:24

Thanks guys.
