230 - Carlos, Explained

Released Thursday, 15th June 2023

Episode Transcript

0:00

Before we get into the show, I have

0:02

to tell you about the lineup coming to HubSpot's

0:04

Inbound Conference this year. You all

0:06

know it's one of my favorite events, and this

0:08

year you're going to hear from some amazing guests

0:10

like Reese Witherspoon, yes,

0:13

the Reese Witherspoon, Derek Jeter, and

0:15

Andrew Huberman. I love the Huberman Lab

0:18

and listen to Andrew all the time. Make

0:20

sure to mark your calendars for September

0:22

5th through the 8th and join us

0:24

right here in Boston. This year, there'll

0:26

be multiple stages, industry experts,

0:29

talk tracks on everything from AI to sales

0:31

strategy.

0:32

It's the perfect combination of inspiration

0:35

and practical tips to help you grow your business. Tickets

0:37

are selling out fast, so head over to inbound.com

0:41

to get yours today.

0:44

Today, we're going to talk about the seven principles

0:46

to make you a winner in the AI era. I am your host, Kipp Bodnar,

0:48

CMO at HubSpot, joined by my co-host, Kieran

0:51

Flanagan, the CMO at Zapier, and

0:52

this is Marketing Against the Grain. Let's

0:55

get into today's show. Kieran,

0:58

we spend a lot of time talking about the future.

1:01

I know you're a big fan of the future,

1:03

but what about the past?

1:12

I often think that you and I are probably not content

1:14

in our real lives because we spend

1:16

all of our time talking about the future. And

1:19

then when we get to the future, we're like, ah, not that cool. What about the future

1:21

now?

1:23

Kieran, you and I were talking on WhatsApp. There's

1:26

a lot of uncertainty in artificial intelligence,

1:28

and most of the companies, most of the executives

1:30

we talk to are like, hey, what do

1:32

you think is going to happen?

1:34

And so what do you think is going to happen to fill

1:36

in the blank with some specific thing? Right.

1:39

Call it search engines, call it advertising,

1:41

call it sales reps, BDRs, what

1:43

have you. Like, there's tons of questions there, right? And

1:46

I don't think you and I know the answers 100%,

1:49

but what I do think is going to be really

1:51

helpful to everybody is that we took the time to

1:53

think about what do we know to be

1:55

true about

1:57

artificial intelligence? And can we

1:59

distill that data down into principles and

2:02

use those principles as a potential

2:04

predictor of the future. You

2:06

and I love the idea of first principles.

2:08

We're first principle obsessed. And so we

2:10

wanted to do some very AI specific

2:13

first principles on today's show.

2:15

You ready to go? You ready? Yeah. All

2:17

right. I got our list. I'm gonna start with one of my favorite

2:19

principles and I want you to talk about what you think this one

2:22

means. The first one I'm gonna start with is in

2:24

an AI world, anything that depends

2:26

on just

2:27

brute force, human work and

2:29

time is going to be disrupted. So if you

2:31

use humans to do manual work,

2:34

all that manual work will be disrupted by AI. AI

2:36

will do that work

2:38

and somebody will come along and do it

2:40

faster and cheaper than you're currently doing

2:42

it. Yeah. Or it's

2:44

specifically in knowledge work. Which is like typically

2:47

where you would think about this, you would think of manual labor. And

2:50

now I have seen examples where

2:52

there are news stories of potential AI

2:54

robots gonna be able to do manual labor things

2:57

like build houses and all these things in the future.

2:59

But this is like very much specific

3:02

to knowledge workers and anywhere where you've had

3:04

to

3:05

fill the gap by just, like, grinding

3:07

something out with humans, there's just gonna be disruption

3:09

there. Again, I think of everything in terms of,

3:12

it's not the job that's being disrupted, it's

3:14

the tasks. The tasks, that is the way to

3:16

think about it. I love that. And human jobs are a series

3:18

of tasks and in the future automation

3:21

is gonna be able to automate much more of those tasks. Perfect.

3:23

And if you were thinking about this first principle, what you wanna say

3:26

is,

3:26

what are the parts of my business that I irrationally

3:29

rely on human time and effort and

3:31

manual work to do? And how do I automate

3:34

those before somebody else automates them

3:36

and puts me out of business?

3:37

Yeah, I think the other thing that's unwritten

3:40

in here that we didn't call out that I just thought of as,

3:42

like within a technology company, if

3:45

you judge your success by the size

3:47

of your team, that is gonna be heavily disrupted.

3:49

Yes, you are screwed. Because I think you're gonna be

3:51

much more judged on the quantity

3:54

of your output. Right?

3:55

Like today, there was a really great article

3:57

from the Slack CEO. Now, there was nothing in it

4:00

that was, like, jaw-dropping, but everyone

4:02

seemed to think it was jaw-dropping, which is: you hire people

4:04

and they want to hire other people. And we pay

4:06

people based upon scope of work and scope of

4:08

work today. There's nothing wrong with this, right? Like

4:10

I know everyone really wants to paint it that way.

4:13

There is actually something wrong with it. There's

4:15

nothing wrong with paying people by

4:17

scope of work, right? The amount

4:19

of surface area that they control or the importance

4:22

of that work. And historically,

4:24

the amount of people you had to do that work,

4:27

the larger the tasks you were given or the larger

4:29

the goal you were given, you just need

4:31

more people to do that task or to achieve that

4:33

goal. Yeah. Yeah. There was over hiring. The problem

4:36

with technology companies is you have empire builders

4:38

where people hire just because they actually think that

4:40

makes them look better if they have bigger teams. And even if they

4:42

don't need bigger teams, they're not incentivized

4:44

not to have bigger teams, but

4:46

I think in the future, within this first

4:48

principle, you are going to be judged in your

4:50

output and your output could be the same with

4:53

two people plus AI. As it historically has

4:55

been with like 20

4:56

people, right? And I think that's the magnitude

4:58

of difference there. Look, the reason that

5:01

the scope of work compensation model isn't

5:03

good is because especially

5:05

now with AI and the rise of

5:07

automation, growth has to be nonlinear.

5:10

And nonlinear means that you get more

5:12

out than you put in. It's not just like, Oh, I had

5:14

a person and I get, like, the same amount of

5:16

return for every person

5:18

I add. Right? No, with automation my

5:21

revenue grows exponentially, not

5:23

linearly. And when you

5:25

compensate people

5:26

on scope of work, you are incentivizing

5:28

linear growth. That's the problem. That's

5:30

the problem to kind of put it plainly. All right.
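To make the nonlinear point concrete, here is a minimal sketch in Python. The "2 people plus AI versus 20 people" comparison comes from the conversation itself; the specific per-person numbers and the 10x leverage factor are illustrative assumptions, not figures from the episode.

```python
# Minimal sketch: output judged on results rather than headcount.
# The 10x "ai_leverage" multiplier is an assumption chosen to mirror the
# "2 people + AI vs. 20 people" comparison made in the conversation.

def team_output(headcount: int, output_per_person: float = 1.0, ai_leverage: float = 1.0) -> float:
    # Total output = people x per-person output x tooling leverage.
    return headcount * output_per_person * ai_leverage

old_team = team_output(headcount=20)                   # 20 people, no AI leverage
new_team = team_output(headcount=2, ai_leverage=10.0)  # 2 people plus AI tooling

print(old_team, new_team)  # both print 20.0: same output, a tenth of the headcount
```

Under a scope-of-work compensation model only the headcount term gets rewarded; the nonlinear upside comes from growing the leverage term instead.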

5:33

I got one of my favorite ones. It's kind of expanded

5:35

into being a three-parter. We've talked a little bit about

5:37

this on the show before, but I want to underscore for everybody,

5:40

there are three generations of the internet,

5:42

Web 1.0, the birth of the internet.

5:44

It democratized access to information. The

5:47

second generation of the internet, Web 2.0,

5:49

democratized access to each other. Connections.

5:52

It was the social web. Web 3.0,

5:54

this generation, the AI generation,

5:57

democratizes understanding.

5:59

So the first principle

6:02

here is the understanding of complex topics

6:04

is now commoditized, which means

6:07

you could have an idea for a song or have an idea

6:09

for a piece of art. And

6:11

before, if you didn't have the skills to write music

6:13

or read music or do all those things, you couldn't do anything with it.

6:16

And

6:17

now, AI can help you basically

6:19

take that idea and turn it into an

6:21

end product, and it is democratizing

6:23

the understanding, at least at

6:25

a basic level, of all of those skills. I

6:28

actually would argue that that is slightly

6:30

different than how I was talking about it, which is... Ooh, I love this. Go.

6:34

Yeah, it's not the understanding of complex topics.

6:36

What you've actually described is you don't need to understand them at

6:38

all.

6:39

And it's actually expertise is commoditized

6:42

because in the past, you

6:44

would have to understand how to do something to get that

6:46

idea out into the wild. Today,

6:49

or in the future, you will not have to understand

6:51

how to do that thing because you can just tell someone to do

6:53

it for you or tell an AI assistant to do it for you and get that

6:55

idea out of it. So

6:58

if I have to understand a complex topic, I still

7:00

have to get the information and kind of understand that. Okay,

7:02

well, how do I create that thing? Or how do I do that thing?

7:04

Or how do I get from A to B? But

7:06

actually, what AI does is I don't

7:08

have to learn any of that stuff, right? I just have to tell

7:10

it to do it and it will do it for me. So I wonder if it's

7:13

like

7:13

expertise in a lot of areas is now

7:15

commoditized. Well, expertise is definitely commoditized.

7:18

I think those two things go part and parcel. I don't think they're

7:20

actually mutually exclusive expertise and understanding. I

7:22

think they're very related. But if you're thinking

7:24

about the future of your business, what you want

7:26

to say here is the uniqueness

7:28

of understanding and the barrier to

7:31

entries of skills is not a moat

7:33

that is going to protect my business from being disrupted

7:36

as much in the future as it was in the past. Right.

7:38

Right. And AI is going to

7:40

have a higher potential to disrupt on

7:42

that expertise and understanding.

7:44

Right. In the past three to five years,

7:47

if you were a coder, you could have spent your

7:49

weekends learning how to be better than the other coders.

7:51

They could have

7:53

had all the same experience as you, but you put in those extra

7:55

hours. You could outwork them. And today

7:57

that coder has closed the gap because

8:00

they've used these kinds of copilots and tools.

8:02

And that's the commoditization of expertise. Are you

8:05

saying we're moving from an era about work to

8:07

an era about think?

8:08

That would actually be, if we are, that

8:10

would be great for the world. I think it would be amazing

8:12

for the world and that's what you're tipping at. I don't know

8:15

if it's true, but you're kind of saying, hey, maybe we're

8:17

heading there, right? Yeah,

8:18

yeah, maybe we can all have like work-life balance,

8:20

but people who have just the better ideas

8:23

and are able to get those ideas out

8:25

into the wild now will actually be much more successful

8:28

versus the person who is just willing to work 24 hours a day,

8:31

seven days a week. Well, one of your principles that

8:33

you shared was right in line with that, which is: creative

8:36

ideas are priceless. Execution

8:38

of those ideas is commoditized. Explain what that

8:40

means. Yeah, when I think of AI, there's

8:43

three different categories of creativity.

8:45

There's being able to mix ideas together. There's

8:48

being able to create ideas within a

8:50

framework that you give it. And there's being able to come up

8:52

with net new ideas.

8:53

And if you look at AI across, we've

8:55

covered this across, like, text, image, video. It

8:58

for the most part is limited to those first two buckets, right?

9:00

It can actually mix ideas together. Like

9:03

in terms of text, you can give it some content

9:05

from a book or something and ask it to

9:07

like take your idea and take the content from the book

9:09

and splice them together and give you some output.

9:11

And that output is like fine. Like if you're an okay

9:14

writer, it's fine. It will actually get you

9:16

to good. If you're a great writer, it doesn't actually improve

9:19

anything for you.

9:20

Image is very similar. I can take things,

9:22

work within this existing style guide and give you something

9:24

back and video very similar,

9:26

but it cannot create brand new ideas,

9:28

right? If I'm a marketer and I'm sitting there with these text images

9:31

and video tools, I'm gonna use elements

9:33

from each of those different things in different ways. I

9:35

think image is by far the most ship

9:37

level ready. Like I'm actually gonna take things and put

9:39

them straight into my production line. Video,

9:42

if I'm really good at video, I can take things there and

9:44

actually use them within my marketing. And in

9:46

text, it actually is good as, like, a research

9:48

assistant.

9:50

But none of those things are gonna create net

9:53

new ideas that are gonna really blow up my

9:55

marketing. And so creative ideas

9:57

are still priceless. And I would say even more so.

9:59

Because people who have great ideas,

10:02

I think get held back or all their

10:05

time gets sucked away by the execution of

10:07

those ideas. Just hard to use all these tools

10:09

and do things to bring them into market. And so I think

10:11

ideas are going to go up in value because

10:14

the execution of those ideas, and we

10:16

pay people a lot of money to bring those ideas to

10:18

fruition, is going to get commoditized. I

10:20

love that one. Another one that you had that

10:23

might be my favorite, and I might be

10:25

very, very jealous. Oh, yeah. I

10:27

nailed it. Which one? I

10:30

nailed it. You just told me which one I did. I

10:32

think it's the last one. You had the cost to

10:34

take risks are near zero. Right.

10:36

And that is pretty brilliant. You're basically

10:39

saying that the old adage of like

10:41

fortune favors the bold, is actually going

10:43

to get far more true because

10:45

the

10:46

cost to actually try something

10:48

and see if it works is dropping to,

10:51

if not zero, very, very close to zero,

10:53

right? Right. We covered it on an episode

10:56

recently. In a not too distant future,

10:58

you're going to be able to build a chatbot with the equivalent

11:00

power of ChatGPT that

11:02

can run 8.5 billion queries

11:04

a day

11:06

comparable to what Google does today for $650.
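Taken at face value, those figures imply a vanishingly small unit cost. Here is a quick back-of-the-envelope check; the $650 per day and 8.5 billion queries per day are the numbers quoted above, and the per-query math is just division.

```python
# Back-of-the-envelope on the figures quoted above.
daily_cost_usd = 650
daily_queries = 8.5e9  # roughly Google-scale query volume, per the episode

cost_per_query = daily_cost_usd / daily_queries
cost_per_million_queries = cost_per_query * 1_000_000

print(f"~${cost_per_query:.2e} per query")              # ~7.65e-08 dollars per query
print(f"~${cost_per_million_queries:.2f} per million")  # ~$0.08 per million queries
```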

11:09

Can I not take that $650 and do some

11:12

incredible things and try things out like

11:14

the minimal viable version? So

11:16

when you want to test something and you want to launch

11:18

something new, you do a minimal viable version.

11:20

So what is a minimal viable version? Well, I want to use the least

11:23

amount of resources and take on the least

11:25

amount of expense to prove an idea is valid.

11:28

Now, minimal viable versions for a lot

11:30

of things still cost resources,

11:32

still cost some amount of money, and they are

11:34

still a deterrent to trying a lot of

11:36

different things and taking risks. The cost

11:38

of doing those things is going to depreciate

11:41

so, so fast that I'm going

11:43

to be able to try so many different things.

11:45

You're right that it's related to the first thing, which again,

11:48

people with great ideas can get more

11:50

of those ideas out into the world to see which ones

11:53

actually resonate. And I think they're the people

11:55

that are going to be much more successful and they are going to benefit

11:57

a lot from the abilities of AI.

13:55

And

14:00

some amazing, amazing recent

14:02

episodes from Jenna, like "You Need to Trust Yourself,

14:04

and Here's How to Start." She has an amazing

14:07

guest. The Savi Kumar shares

14:09

her insights on how self-talk confidence

14:11

and self-trust are all interconnected. If

14:14

you're struggling with self-sabotage or

14:16

negative self-talk, check out this episode.

14:18

This is just one example of all the amazing

14:21

episodes that Jenna is doing right now. I

14:23

highly recommend checking it out. It's a great

14:25

podcast. You can listen to Goal Digger

14:28

wherever you get your podcasts.

14:56

It's a great show. They cover everything from leadership to

14:58

technology, entrepreneurship, relationships,

15:01

and much, much more. If I could

15:03

recommend

15:04

one episode, I would really recommend the recent

15:06

episode on scams in the pharmaceutical industry. Or

15:09

there's really a great episode on how you play

15:11

the status game. Both are incredibly addictive, shocking,

15:14

and captivating stories. Listening

15:18

to the Smart People podcast feels like befriending the

15:20

most interesting person at a dinner

15:22

party. There's never a dull episode. I've

15:25

got a few more principles left. This is one

15:27

of

15:27

mine that I think is really important: information

15:29

asymmetry no longer exists. So what

15:32

does that mean? It means in the olden days,

15:34

if you were a buyer, you would have to go to that

15:36

company. Normally we'd have to talk

15:38

to a salesperson or somebody at that company to figure

15:40

out pricing, use cases of that product for your business specifically,

15:44

all of those things. And with AI, that

15:47

is quickly going away. And

15:50

all the interesting things that

15:51

a company has is available

15:54

to you if you want it without having to talk to anybody

15:56

else.

15:59

And that leveling of the information

16:02

playing field gives the buyers

16:05

way more power than they have ever had. And

16:07

will force companies to think about enabling

16:10

their buyers much more than they think about

16:12

enabling their sellers. Like it's inbound.

16:15

It's just an evolution of inbound. Like, inbound was created,

16:18

and, like, it originally

16:20

sprung from consumers having much more

16:22

power as brands were created, and more things

16:24

came on the internet and the internet rose in

16:26

popularity and people being able to like get

16:28

all the information they needed themselves.

16:29

But that wasn't true

16:32

of all the information. Like there's just a lot

16:34

of information that companies have that you have to

16:36

talk to the salesperson, talk to the support

16:38

person, like get connected to a human. And

16:40

I think in this world you're right. Like

16:42

you have an ever present agent

16:44

that can give you the information around the company because

16:47

a company's best interest is to give that agent

16:49

all the information they can so people can query at 24 seven.

16:52

And so I think all of that stuff does not get

16:54

locked away. And actually,

16:56

consumers have more control than they've ever had before. AI

16:59

is much more of a tailwind

17:01

for buyers than it is for sellers. AI is

17:03

way, way, way better for buyers than

17:05

sellers. Right.

17:06

OK, I think we have time for, like, two more, Kieran.

17:09

So one that we've got here is

17:12

consumer impatience and entitlement

17:14

will be at an all time high.

17:17

I love you put entitlement in there. Like

17:19

those entitled consumers. No,

17:21

but entitlement, I think, is a very

17:24

important word where it's like the best way

17:26

to describe expectations because

17:29

when you have endless

17:31

choice, you get

17:32

to be entitled. Right. Like

17:34

if we're saying that there is going to be more choice than

17:36

ever before, then a consumer gets

17:38

to be entitled. That is a privilege that that

17:40

is just born out of those market circumstances.

17:43

Right. Well, the impatience is because you never expect

17:46

to have to wait in line for an answer again. I

17:48

think that's going to look archaic. The times

17:50

where I have to sit on the phone with my broadband company

17:52

for an entire day to try to get an

17:55

answer to something.

17:56

None of that stuff exists in the future. Consumers

17:58

are going to expect to get answers whenever they need

18:00

answers in any kind of form, whether that's chat,

18:03

email, WhatsApp, it doesn't really matter.

18:05

They expect to get 24-7 answers because an

18:07

agent can give them those answers. And the entitlement

18:10

to me is like, they expect the concierge

18:12

experience because they expect agents to be able to

18:14

do most of the work. So none of this stuff is an added

18:16

cost to the company. The company has

18:18

agents set up, the agents do all that work for them.

18:21

And everyone gets the concierge experience.

18:23

I just think that the thing I don't know about

18:25

is like,

18:26

where do you find leverage? That's what I

18:28

always continue to think about is like, where's the leverage? Where's the

18:31

leverage? Where's the leverage? If everyone has a concierge experience,

18:33

what is it? Unless they drive around to your office,

17:36

bringing you tea and cakes,

17:38

telling you you're a great person

18:41

and picking you up and telling you lots of great things about

18:43

yourself. I don't know what the next part of

18:45

one-to-many

18:46

is. If the one-to-many experience becomes like

18:48

a very like one-to-one type experience. I think

18:50

that's actually a very good point. And it happens to

18:52

be exceptionally true. I

18:54

don't know what the differentiation is, but I know

18:56

in the short term, there's going to be a lot

18:58

of differentiation made by companies

19:01

who facilitate, you know, that speed

19:03

and around the clock access

19:06

to information and service versus

19:08

the old school way of waiting. Right. And that's,

19:11

if you're predicting the future around

19:13

AI, that's one of the things that I think is going to happen.

19:15

All right. Our last principle

19:18

of the day is that chat user interfaces

19:20

are just as popular and important

19:23

as graphical user interfaces. People

19:26

are going to want to use natural language

19:28

to do things online in

19:31

the same way that they've historically used graphical

19:33

point and click interfaces to do

19:35

things online.

19:36

Is that true? I do agree with that. Every

19:38

brand is going to have a natural language layer and chat

19:40

is going to be part of every single product

19:43

experience. I don't think you're going to use a product in the future

19:45

that doesn't have some way to chat with that product. Like

19:47

me being able to tell the product through natural language, please

19:49

do this for me. Please do this for me.

19:51

I think all platforms build that into

19:53

their app. Again, when we think about the Sam Altman

19:56

roadmap episode that we just did, we covered

19:58

what was OpenAI's secret

19:59

product roadmap, and Sam Altman

20:02

talked about in there: hey, the thing we think

20:04

we got wrong about the plugins, and

20:06

these plugins don't have product market fit, is people didn't

20:08

want to create their app in

20:11

ChatGPT. People wanted to bring ChatGPT

20:13

into their app. And that's an important distinction, because it's what

20:15

you're saying here is, every

20:18

product has a ChatGPT natural

20:21

language interface component to

20:23

it. Correct. And graphical interfaces,

20:25

graphical UIs, I don't think they go away.

20:27

We've just seen the biggest technology brand

20:30

in the world launch Vision Pro

20:32

glasses, which bring graphical UIs

20:34

that you can control with your hands into your

20:36

like everyday presence,

20:38

right? You actually wear it on your face.

20:41

And so I think those two things actually are just

20:43

maneuvering in like very different

20:45

ways, but actually are going to be very important to how

20:47

we like consume content, use software

20:49

in the future. Yeah. That's why I said they're going

20:51

to be equally as important, right? I think

20:53

we're going to live in a world where graphical user interfaces

20:56

and text or chat based user interfaces

20:58

coincide with each other. Right. And they're

21:00

going to kind of relate and rub

21:02

off on each other, right? Like one of the things about a

21:05

chat user interface is you kind of go back and forth

21:07

and iterate on something. That same iteration,

21:09

I think will move over to a graphical

21:11

user interface in different ways, right?

21:13

Like they'll start to blend and start to merge.

21:16

And what this means for everybody in the future

21:18

of AI, what does it mean for our business? It means

21:20

that customers are going to want again,

21:23

a very adaptable way to interact

21:25

with your business. And that interaction

21:27

is going to look like a mix of like a visual

21:29

interaction and a text based interaction, right?

21:31

Right. And balancing that and figuring out where one is

21:33

better than the other is going to present a bunch

21:36

of opportunity to you as a marketer, to you as a founder.

21:38

And so that is what we are

21:40

talking about

21:41

here today. Those are our principles

21:43

for predicting the future of AI because

21:46

AI is uncertain.

21:48

But if you have those principles, you can

21:50

understand directionally the impact AI

21:52

is going to have and then use them to

21:54

figure out what strategic choices

21:57

and investment choices you need to make in your own

21:59

business.

21:59

And be prepared for whatever change.

22:02

It might not work out in terms of, like, what specifically

22:04

might happen, but in the case of the future,

22:07

as long as you have a rough understanding of the direction,

22:09

I think, you're normally way ahead of everyone

22:11

else. You don't need to know the answer ahead of time. You

22:13

need to know kind of the direction the answer is going.

22:15

Do you agree with that, Kieran? I agree. I

22:18

think this maybe relates to my very

22:20

last first principle, which I

22:22

do think is one we should like just cite as

22:24

fast movers get unfairly rewarded,

22:27

which I think there's always been a reward

22:29

for adopting

22:31

things before the masses.

22:33

I think in terms of AI, it has

22:35

never been more prevalent. It has never been more

22:38

important. You will never be more rewarded

22:40

than with this technology to be in that

22:43

fast mover bucket. Because again, I think commoditization

22:45

happens much faster. And so when everyone catches

22:48

up, you lose some of that advantage. But that advantage

22:50

for the first movers in integrating AI

22:52

into your work, integrating AI into your go-to-market, integrating

22:55

AI into your marketing is so much bigger

22:58

than those who are not doing that. And there's

23:00

still like a big discrepancy between what

23:02

we talk about every

23:03

day and what most companies

23:05

are actually doing. Correct. And are still

23:08

like way, way behind in trying to figure out

23:10

how to integrate this into their business. If you're watching

23:12

the show, you're already ahead. Exactly. And

23:14

if you want to keep being ahead, hit that subscribe button. Because

23:16

we are obsessed with staying

23:18

ahead on this show. And we are obsessed

23:21

with giving you the tools and the insights

23:23

to actually be ahead. And

23:25

the principles today, I think are a great

23:27

example of us trying to do that. Leave a

23:29

comment in the YouTube if

23:31

you've got a favorite principle or if you

23:33

have one of your own that you think we left

23:35

off that we should have included in today's show. That

23:38

being said, that's first principles

23:40

for AI. It was awesome. Thank

23:42

you so much for watching today. And we'll be back real soon

23:44

on Marketing Against the Grain.

24:03

Howdy folks, Jacob Cohen here,

24:05

one of the hosts of the Hustle Daily Show, another

24:07

awesome podcast from the HubSpot Podcast

24:09

Network. Our show's got fresh daily takes

24:11

for anyone interested in learning about everything

24:14

from the billion dollar business of nacho cheese

24:16

to what the heck is going on with the housing market

24:18

to how AI is changing the way we live

24:21

and work. It's short, it's sweet, it's

24:23

got good intro music so I'm not really sure what more

24:25

you need. We'd love to have you come check it out

24:27

and you can tune in throughout the week wherever you

24:30

get your podcasts.
