Episode Transcript
Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.
0:00
Episode 368 of CppCast
0:02
with guest Abbas Sabra, recorded
0:05
23rd of August 2023. This
0:08
episode is sponsored by Sonar, the
home of Clean Code.
0:27
In this episode we talk about the latest releases
0:29
from the Boost and FMT libraries. And
0:33
about the downsides of C++ coroutines.
0:37
Then we are joined by Abbas Sabra. Abbas
0:41
talks to us about static code analysis and
0:43
a new approach to automatically analyzing
0:45
C++ projects.
0:55
Welcome to episode 368
0:57
of CppCast, the first podcast
0:59
for C++ developers by C++ developers. I'm
1:03
your host, Timur Doumler, joined by
1:05
my co-host, Phil Nash. Phil,
1:07
how are you doing today? I'm alright, Timur, how are
1:09
you doing?
1:10
I'm great. Yeah, I'm very happy
1:12
you're back from vacation. I, as you know, had
1:14
a few episodes with some guest co-hosts,
1:16
but it's great to have you back here.
1:19
How was your vacation, by the way?
1:21
It was a great vacation, actually.
1:24
As is often the case, I now need a break to
1:26
recover from it. We
1:29
did a lot of driving. It was a road trip from
1:32
the UK, where I live, down to Italy. We
1:34
actually went through Switzerland in both directions.
1:37
And on the way there, we actually went right past Sonar's head
1:39
offices in Geneva. So appropriate to
1:41
this episode.
1:42
And we somehow managed to get incredibly lucky with
1:44
the timing. Because for a three-week holiday
1:47
that we actually planned last year,
1:49
we managed to land exactly between two record-breaking
1:52
heat waves in Southern Europe. So for us,
1:54
it was just a manageable 30 degrees most days,
1:56
which was still a bit uncomfortable when
1:59
walking around Rome.
1:59
for example. But we definitely
2:02
had a lot of gelato and managed to survive
2:05
that. But you know, if we'd been a week either way,
2:07
we would have been into the 40s and that would have not
2:10
been as comfortable for a holiday, I think. Oh,
2:12
wow. No, it hasn't been 30 degrees
2:15
here in Finland at any point. So you
2:19
managed to avoid that heat wave quite a bit
2:21
further north. So
2:22
kind of as the rest of Europe was sizzling,
2:25
it was a very comfortable, you know, 20
degrees here most of the time.
2:30
Yeah, I have been in the UK as well.
2:32
I did need to give a full report
2:35
on my vacation because, as we've heard
before, there's a colleague
at Sonar who says that he only ever hears about
these things from CppCast. Well,
2:43
this time he's going to get to hear it earlier than usual, because
2:46
he is here with us today. So I can exclusively
2:49
reveal that my mysterious colleague is
2:51
our guest today, Abbas
2:52
Sabra. So that one's for you,
2:54
Abbas. Thank you. Thank you for having
2:56
me. It's nice to be here. Finally,
2:59
I don't have to wait for
the next episode of
CppCast, because I
know about your vacation now. Yeah.
3:09
All right. So at the top of every episode, I'd like
3:11
to read a piece of feedback. And this week,
3:14
the feedback tab has been turned back on.
3:16
Thank you very much for everybody who gave us
3:18
feedback. I've selected
3:20
this one from Grand Ranger 7305 on Reddit
3:22
about the
3:25
last episode with Jason Turner and Mark
3:27
Gillard. And Grand Ranger
3:29
says, I enjoyed the show very much.
3:31
Jason is always a great guest. It's
3:33
sad to hear that the reflection support is not being
3:36
worked on at the moment. I had a project
3:38
where I had to work around the language because I wanted
3:40
to iterate over all members of a certain type.
3:43
Thank you for your work.
3:45
PS about the episode before,
3:47
the voice of Matt Godbolt is kind of soothing.
3:50
He should record audio books. Well,
3:53
Matt, if you're listening to this, yeah,
3:55
maybe that's something you should consider.
3:57
And thank you, Grand Ranger, for
3:59
your feedback. Yeah, I think Matt needs
4:01
some sort of side hustle. Yeah,
4:04
I would totally listen to an audio book read
4:06
by Matt Godbolt. I think that would work
4:08
really well.
4:09
There's also some other feedback from another listener
4:12
of the last two episodes. And that was me. I was
4:15
playing the listener role this time.
4:17
And I had that experience where I was
4:19
listening to particularly the episode, two
4:21
episodes ago, where
4:22
I was like shouting at the podcast because I had all
4:25
the answers. And I
4:27
wasn't there to say them. So I'm not
4:29
going to give all of the points that
4:31
I would have made during the episode. But there's a couple of things I did want
4:34
to address. First of all, particularly,
4:36
there was a whole section on adding
4:38
slides to a podcast, as
4:41
the chapter art. So most podcast clients
4:43
can show images at certain points
4:45
that you can specify in the podcast. And that was discussed
4:48
in the episode. As it happens,
4:50
I've done that before.
4:51
No Diagnostic Required, the podcast
4:54
I did with Anastasia Kazakova,
4:56
we actually did that every episode. So we prepared
4:58
slides in advance, almost like a conference
5:01
talk. We talked around the slides and I
5:03
put those in as chapter art.
5:05
As you guessed, that was a lot of extra work.
5:08
So you were spot on there. And of course,
5:10
the other problem is not everyone can see
5:12
that. So you can't come
5:15
to rely on it, but it can be handy sometimes.
5:17
We also did it a couple of times in
cpp.chat.
5:21
There was one time we had Sean Baxter on
5:23
and he just suddenly
5:25
started doing a screen share. It was like walking through some code
5:27
and saying, ah, we can't show
5:29
this to people on the audio podcast. And I said, well, I'll
5:31
add it as chapter art. And yeah,
5:34
we did that. And as a one off, that was fine. But
5:36
yeah, it wouldn't be sustainable in
5:39
the long term. But yeah, maybe we'll do it from time to
5:41
time.
5:42
We also talked about mold, the
5:44
linker. We did actually
5:47
attempt to reach the author a while back to
5:49
come on the show. So I think that'd make a great episode, particularly
5:52
our series about tooling. We've not actually
been able to arrange it yet. So
if the author is listening and would like to come on, do please
5:59
reach out to us. We'd love to have you on the show.
6:02
And my final piece of feedback was
6:05
when you talked about the article
6:08
from Jonathan Müller,
6:09
suggesting that we use Lambdas instead of functions,
6:12
perhaps a little bit tongue in cheek,
6:14
as it happens, I did a lightning talk on
6:16
almost exactly the same thing a few years ago.
6:19
All the defaults are backwards. I'll
6:21
put a link in the show notes. I had a slightly
6:23
different focus. I was suggesting that
6:25
rather than just using Lambdas as they are, we actually
6:28
change the language feature so you can have named
6:30
Lambdas at
6:31
sort of global and class scope,
6:33
which would have all of the same benefits Jonathan
6:36
talked about, but be a more official
6:38
part of the language. So it's kind of functions
2.0, basically. Yeah, yeah, exactly
6:43
that. Yeah. It gives us a chance to reset on a few
6:45
other things, which I talked about in that talk, so
6:48
do go and watch that. But I thought it was like an interesting
6:51
overlap between that and what
6:53
Jonathan was saying, but that's where I'm
6:55
going to stop for now. We need to get on with
6:57
the show. All right. So we'd like to hear
6:59
your thoughts about the show and you can always reach
7:01
out to us on X, formerly known as Twitter,
7:05
Mastodon, LinkedIn, or email us at feedback
7:07
at cppcast.com.
7:09
Joining us today is Abbas Sabra.
7:11
Abbas is a principal engineer at Sonar,
7:14
where he has discovered the ideal platform
7:16
to pursue his passion for C++ development,
7:19
development processes, and tooling. His
7:21
career began in the financial industry,
7:23
where he identified inefficiencies within the
7:26
C++ tooling ecosystem that led to
7:28
significant debugging and time losses.
7:31
He firmly believes that static analyzers
7:33
can significantly improve the productivity
7:35
of C++ developers.
7:37
Fueled by a keen interest in compilers, static
7:39
analysis techniques and language design, he's
7:42
continually driven to innovate and push the boundaries
7:44
in his field. Abbas, welcome to the show.
7:47
Thanks for having me.
7:48
So you mentioned tooling inefficiencies
7:51
in finance there. My
7:53
own experience working in the finance industry
7:56
is that every organization and often
7:58
every team in an organization,
develops their own ad hoc set of
8:02
tooling practices or even their own tools, including
8:05
static analysis tools.
8:06
Did you find the same thing?
8:08
Isn't that the case for C++ in general? Especially
8:11
so in finance, I found. Yeah, so yeah,
8:14
we had special tooling, we had a special
build system, we even had a special
language. That is... Yeah,
8:22
I will not go into that.
8:25
All right. So, Abbas,
8:27
we will get more into your work in just a few minutes,
8:29
but we have a couple of news items to talk
8:31
about before we get to that. So, feel
8:33
free to comment on any of these, okay?
8:35
Okay. So, the first news item
8:37
for today is that Boost has
8:40
released version 1.83, and
8:43
it has a lot of new stuff, much
8:46
more than what I could talk about in the
8:48
show, but just to name a few things,
8:51
there's a new library called Boost.Compat.
8:54
It's a repository of C++11 implementations
8:56
of standard components added in later
8:59
C++ standards
9:00
from Peter Dimov and contributors.
9:03
And apart from that new library, there's also loads of updates
9:06
to existing libraries, so just a few highlights.
9:08
Boost.Any now has a unique_any,
which is an alternative to boost::any, or actually
to std::any, that does not require
copy or move construction from the held type. There's
9:18
loads of new additions to Boost.Filesystem.
9:21
Boost.Iterator now has an is_iterator
type trait that allows us to test whether
a type qualifies as an iterator type. That,
9:28
I think, sounds like something that's very useful
9:30
in kind of generic programming.
9:32
Boost.Math now works with C++23's
new std float types,
like std::float16_t, std::float32_t, etc. There's
9:41
a major update to Boost.MySQL
9:45
with loads of new stuff, and there's a major update
9:47
to Boost.Unordered, which is something that
9:49
I think we've discussed a few episodes ago.
9:51
They added a boost::concurrent_flat_map,
9:54
which is a fast, thread-safe hash map based
9:56
on open addressing.
9:58
Those are just a few things; there's a lot more in there.
9:59
But I think that's kind of a really, really interesting
10:02
major update to Boost.
10:04
Yeah, it does sound like a pretty major update. And
10:06
the thing that jumped out at me was
10:09
the BoostCompat library, because it's sort of almost
10:11
the inverse of our previous mention
10:13
of Boost updates, where they removed
compatibility with
pre-C++11 compilers. Yeah.
10:20
So, removed a load of backwards
10:23
compatibility, and now we've added some back in a
10:25
different way. Right. But I think compatibility
with C++98 and 03 is gone
for good. Yeah, yeah.
10:32
Good riddance. Right.
10:35
So, there was another major library update
10:38
this time around: fmt, Victor
Zverovich's formatting library, which
10:42
is now on version 10.1.
10:46
And version 10.1 gives
10:48
us lots of new interesting updates compared
10:50
to version 10.
10:52
One thing that I found particularly interesting
10:54
is that there is an optimized format string
10:56
compilation now, resulting in up
10:58
to 40% speedup
11:00
in compiled format_to and
up to four times speedup in
compiled format_to_n,
11:09
at least on a kind of concatenation
11:11
benchmark that they published,
11:13
compared to FMT 10.0.
11:17
So, I thought that was quite impressive. They
11:19
also added formatters for proxy references
11:21
to elements of things like std::vector<bool>
and std::bitset<N> and things like that.
11:26
And there's many, many other fixes and improvements
11:28
here as well. So, another
11:30
pretty significant update to a very popular
11:32
library. Yeah. And std::vector<bool>
just keeps giving, doesn't it? Well,
11:39
it's not going anywhere. We'll have to support it
11:41
until the heat death of the universe, I guess. Or
11:43
maybe nobody's proposed it yet. Maybe
11:46
somebody should just propose to get rid of it. I don't know.
11:48
Anyway, there's one more news item that I would like to
11:50
discuss this time around, which is
11:53
a blog post
11:54
called The Downsides of C++ Coroutines.
11:57
And that blog post caught my attention. It's
11:59
by... James Mitchell, aka Reductor,
12:02
which is his name on Twitter and GitHub
12:04
and a few other places. I
12:07
don't think I've encountered
12:08
that name before, but I thought
12:11
that was a really interesting blog post. James
12:15
works at Sledgehammer Games, which is
12:17
the studio that makes Call of Duty. It's interesting
12:19
to see gaming people write a
12:21
blog post like this. I think that's great.
12:23
One
of the reasons why I thought his blog post was really interesting
is that before he goes into what he actually
wants to talk about, he shows
what the compiler actually transforms your coroutine
into. If you write a coroutine, a function that
says co_return or co_yield
somewhere, this is what you actually get.
12:42
I think that's really, really good for actually
12:44
understanding what's going on with coroutines.
12:47
I saw a blog post before that did that.
12:49
It was Lewis Baker's blog post, Understanding
the Compiler Transform.
12:53
Lewis's was very, very
complete, because
obviously Lewis is one of the people who actually worked
on coroutines. But it's
quite complex, and I wasn't
really able to understand everything in that blog post
13:07
without extensive knowledge of all
13:10
the ins and outs of the guts of coroutines.
13:12
James' blog post is actually
13:15
not quite as accurate and precise and complete,
13:17
but I think a whole lot easier
13:19
to understand. I think that is really
13:22
cool that you can actually see. Here's
13:25
the coroutine and here's what the compiler
13:27
actually does with it. Once
13:29
he has that, he then talks about various
13:31
downsides of stackless coroutines and
the way they're implemented in C++.
13:35
He calls out a bunch of security issues that
13:37
don't happen in the same way with plain functions
13:40
like object lifetime, iterator and
pointer invalidation, et cetera. Also,
13:44
he talks about how you get either a dynamic
13:47
memory allocation with the coroutine because
13:49
the coroutine frame in general
13:51
will be allocated on the heap
13:53
or in those cases where the compiler can optimize
13:56
out that allocation, you get lots of
13:58
stack bloat. The
14:00
other thing that he calls out, which is something that gaming
14:02
people actually often complain about, and I think it's
14:04
quite a valid concern, is that because
of the way coroutines work under
the hood, there's a lot of kind of injected
internal function calls. And
14:15
that leads to slow, very slow debug
14:17
builds, right? So if you're, you know, compiling for release,
14:19
a lot of that stuff gets optimized out, but if you're compiling
for debug, you actually get those function
14:23
calls under the hood that you didn't write.
14:25
And that slows down your
14:28
debug build massively when you're like testing it
14:30
or whatever. So I thought that was really interesting. That's something
14:32
that I haven't really thought about, but
14:34
it's, you know, a legit kind of downside of coroutines.
14:36
And I wonder, if we had designed them differently,
maybe as stackful coroutines,
whether we could have avoided some of that stuff. And he
actually talks about that as well at the end of
the blog post.
14:47
Yeah, I thought it was a great post. Just
14:50
to be clear, the pseudocode that he shows
up front is not actually what the compiler generates.
It's close enough
14:56
to give you an idea of the sort of thing that the compiler
14:59
might generate, which of course is going to be implementation
15:01
dependent, which is why he can get away with
15:03
being not quite as accurate as Louis
15:05
Baker's version. I did something
15:07
similar in my own coroutines talks, you
15:09
know, how you might implement coroutines without
15:12
the coroutines feature. It's
15:14
quite a useful education technique.
15:17
I also want to encourage readers not
15:19
to get put off by the
somewhat pessimistic title
of the blog post. It
15:25
doesn't mean that coroutines are not worth using, or
15:27
that they've got lots of problems, just that there are things
15:30
you have to be aware of. And I
15:32
thought it was a little bit ironic that one of the
15:34
problems is that it simplifies the
code so much that you might miss problems, when
15:38
most people's experience of using coroutines
15:41
is that they're too complex, but most of the complexity
15:43
is in the infrastructure around the
15:45
coroutines. And when you're writing the coroutine
15:47
code itself, with the co_yields
and the co_awaits, that part's
15:52
actually much simpler than it would have been otherwise. That's
15:54
the whole point. It
sort of pushes that complexity into the infrastructure,
which means, yes, you
can be fooled into thinking,
16:03
well, this is just like normal synchronous code.
16:05
When actually there are things you have to be aware of.
16:07
One of my favorite things that James does bring
16:10
out is the fact that you
16:12
can actually get dangling references going
16:14
into functions now as well as coming out
16:16
of them. And that's another
16:18
thing that I bring out in my own coroutines talks.
16:21
And I always point out that, again,
because of our guest today, it's a great time to talk about
this.
SonarLint actually catches that for you. So
it will tell you when you've got a dangling reference going into
a coroutine, if
you pass something in by reference.
16:35
So a great post. I
16:37
recommend you read it if you're interested in coroutines.
16:40
All right. So over the last few months, we
16:42
had kind of a little bit of a miniseries about
16:45
tooling, which was kind of interleaved
16:47
with episodes about other stuff. So we had
16:49
an episode about
16:52
Conan, which is a package manager. We had an
16:54
episode about build systems in the
16:56
context of modules. And more recently,
16:58
we had an episode about CLion, which is an IDE. And
17:01
another very important type of tooling is
17:04
static analysis tools. So we thought
17:06
we should do an episode about that. And
17:08
as it so happens, Phil actually works at a company
17:11
that provides tools for static
17:14
analysis. And so Phil, you've invited one of
17:16
your colleagues along. Isn't that right? It is.
17:19
Yes. So that's why we have Abbas here today,
so that he knows about my vacations.
17:23
All right. So welcome
17:25
again, Abbas. Pleasure to have you here today on your
17:28
episode. Thank you. Happy
17:31
to be here. So you're in static analysis now. You
17:34
mentioned in your bio that you did other stuff before.
17:37
And we heard about
17:38
kind of that a little bit, like you used to be in
17:40
finance.
17:41
But what was your journey
17:43
that got you into static analysis? How did you get into
17:46
that field?
17:47
Yeah. It's an interesting journey that I
17:49
usually like to talk about.
17:51
So I started in the finance industry. And
17:53
as many developers do, we used to spend 80% of our
time debugging.
17:58
And one day I had a
ticket related to... I used
to work on an interest rate derivatives
functionality with a million lines of code, and
we had a ticket where the calculation
was wrong. So
18:13
I started debugging, it took me
18:16
two days, almost two days to figure out what's happening.
18:19
And it turned out after two
18:21
days of debugging that
18:23
we had an expression
with a side effect, like
++iterator, and someone
decided to add a decltype
around it. And as you know,
in C++ when you do something like that,
the expression inside the decltype is
not evaluated, so the side effect suddenly
disappeared, which led to
this miscalculation in the finance
model that we were working on. The
18:51
interesting part is that it took me two days to
18:53
discover it. Then after
18:56
discovering such a bug, I wrote
18:58
a quick script that
could detect similar issues, and it took my
19:03
machine
19:04
five seconds to go over all the code base
19:06
and
19:07
find a similar bug. And we
19:09
ended up having two bugs like
19:11
that in our code base. And
19:14
hopefully I saved another colleague two
19:16
days of debugging, but
19:19
that's where my passion came from. There are
many problems in the development
world where a machine does it better,
and we should work as much as possible
to leave those problems to machines rather than
to humans.
19:32
So how long did it take you to write that script?
19:34
It was a simple pattern, so it took
less than an hour to find that bug. So
19:40
less than an hour to save a couple of days of work. Yeah
19:42
and once you do that and you realize the
19:44
power of something as simple as that, you start
19:47
to think about patterns that can
19:49
happen in C++ that lead
19:51
to such bugs and I can
19:53
tell you that there are many. Oh yeah,
19:56
but your bio also mentioned, and we've
talked about it a lot, the challenges
of C++ tooling, which I'm sure
20:02
most of us are familiar with in some sense. But
20:05
what was your experience then that contributed
20:07
to this journey?
20:08
Yes. So C++
20:11
tooling in general is challenging,
20:13
but if you think about it and you compare it
20:15
to other languages,
20:17
tooling, for
example IDE tooling, is easily
created by someone as a side project or
by a big company.
But if you look at every successful
C++ tool, there is a parser
behind it. And then you have an entry barrier:
if you want to write a good C++ tool,
you need to know how to parse C++, which,
as I think everybody knows, is a
very complex problem.
20:40
And
20:41
the main challenge is that IDEs
are interactive. And if
you know something about C++, parsing is slow.
So you start your tooling project,
like I did,
and you
directly have a bottleneck on the parser, because
20:59
your tool will be as fast as Clang
21:02
if you are using Clang. And if it takes,
21:04
I don't know, 10 seconds to compile your
21:06
file,
21:07
your tool,
21:08
best case scenario, will be 10 seconds
21:11
to get back to the user.
21:12
So there's basically a high entry
21:15
barrier for tooling. And if you look
21:17
at the successful tools, for example,
21:19
I dug
into clang-format a bit recently,
21:23
one of
21:24
the tools that is becoming standard in our industry,
21:27
what they did is they implemented their own parser
21:30
that focuses only on formatting.
21:33
And
you cannot reasonably expect every tool
to write its own parser to optimize for its
use case.
21:43
So we actually had two episodes
21:46
ago, we actually talked to Dmitri Kozhevnikov
about CLion, the CLion IDE, and he
21:50
was talking about
21:51
the fact that they also have a parser,
21:54
because certain things like clangd
don't have the right trade-offs. So they have
their own parser in the IDE as well.
22:02
Yes. And even when you
22:04
write your own parser, like in CLion, which is a great
22:07
tool,
22:08
if you use another JetBrains IDE, like
IntelliJ for Java, and you move to CLion,
you feel that it is heavier, which
22:16
is
22:17
not the fault of JetBrains,
22:19
it's basically the current state of
22:21
C++.
22:22
Right. So, I mean, in order to parse
22:24
C++ code, I mean, it's not just hard
22:27
per se, right? Because you have all of these ambiguities
22:29
you have to resolve, you have to kind of backtrack,
22:31
but also you might have to
22:33
execute arbitrary
22:35
code, like constexpr functions
22:38
to figure out. Like for example,
22:40
you have an identifier, you need to figure
22:42
out if something's a value or a type, and that
22:45
is like a dependent name, so you might have to
22:48
execute random code somewhere else in the
22:50
constexpr function to figure that out. So you actually have
22:52
to have a full blown interpreter
22:55
of C++ for constexpr stuff, if you
22:57
want to actually parse it fully, right?
23:00
Yeah, the usual two bottlenecks are constexpr
resolution and text-based
includes, because they
23:07
are costly to resolve. And
23:09
hopefully now with C++20 modules, once
every compiler implements them, we might get
23:15
faster analysis, faster tooling
23:17
for the
23:18
new modern code bases. Yeah, modules
23:21
will fix everything. So
23:23
do you then also have your own parser at Sonar,
23:25
your own C++ parser?
23:27
So at Sonar we have
23:30
a fork of LLVM, where we
23:32
have a lot of patches to the parser. Then,
when we can upstream something,
we upstream as much
as we can to contribute back to LLVM,
23:43
but we have multiple hacks
23:45
to optimize
23:47
how fast the parser is inside.
23:50
One of the common
ones is, for example, to
compute a preamble, where you precompile.
While working in the IDE, you usually
modify
the code multiple times, and you can always
pre-compile the set of
headers at the beginning of the file so you don't have
to compile them again before the
analysis starts, and that's how you
save parsing time.
24:14
Right, so before we continue talking about
24:16
how the stuff works under the hood, maybe you
24:18
could tell us a little bit more about static analysis
24:20
in general. Like what is it, how can
24:22
I use it, what is it good for?
24:24
Sure, so static analysis is basically
24:27
the art of knowing information
24:30
about the code without executing it,
24:33
which is the opposite of dynamic analysis. If
24:36
you think about dynamic analysis, it's mostly Valgrind,
AddressSanitizer. I'm just mentioning some names that are
familiar to our listeners. And
24:44
for static analysis, the requirement
is that you should not execute the code
24:48
while doing it. And
24:50
once you learn about static analysis, you discover that
24:53
it's used in many places like
24:55
compilers do static analysis to optimize
your code and get rid of dead
code and reorder instructions
to make your
binary more optimized. So they analyze
25:08
the code, they learn information about it, then
25:10
they apply it for code optimization.
25:13
Another use case of static analysis
25:15
is tooling
25:16
to detect
25:21
bugs and
25:23
basically undefined behavior in C++. There
25:26
are some tools that try to detect null pointer
dereference, buffer overflow, and all the common
25:31
undefined behavior. And here
25:33
we reach the usual discussion about static
25:36
analysis, not being able to
25:38
get everything right.
25:40
Then there are other applications, like
guidelines, like the C++ Core Guidelines
or the MISRA guidelines, where
25:48
the goal of static analysis is not
25:50
to detect issues in your code,
25:52
but rather to detect patterns that might
25:55
lead to issues.
25:56
And here
25:57
static analysis is usually successful
26:01
at doing that.
26:03
There are other use cases, like tools
26:05
to build the dependency
26:08
graph of your code base. That's some
26:10
sort of static analysis. But the most interesting
26:13
use case for static analysis, for me,
in
my opinion, is education. Let's say
26:18
you want to learn about
26:21
the C++ algorithms. The
26:23
usual thing
26:24
to do is either to go to cppreference,
or go to a conference, or
26:28
read the C++ standard.
26:31
But you can actually use static analysis
to, let's say, find
in your code base every raw loop
and tell you that, hey, you can replace this
raw loop with std::any_of or
with std::rotate. And you can
26:45
map every std algorithm to a certain pattern
26:48
and try to educate
newcomers to C++ about different
parts of the STL algorithms. So
26:59
if done right, static analysis can be also
27:01
a very good education tool.
27:04
Yeah,
27:04
one of my favorite examples of that is
using the heterogeneous comparison
operators. The
second template argument to an ordered container
is the less operator by
default,
which compares the same type.
Whereas
if you have a std::string and you want
to compare it against a char*,
that char* has to be converted to a std::string
every time. At some
point, I think it was C++14,
we introduced the
std::less<>, the sort of empty-angle-brackets operator,
which has a templated comparison.
So that will use std::string's
own comparison operator to compare against the char*,
so you don't do that conversion every time. So
27:45
we will tell you if you haven't used that, that there's an
27:47
opportunity that you can speed up
27:49
your comparisons, except there are some
27:51
cases where it becomes a pessimization
27:53
and it will detect that as well. So you
27:56
can selectively learn more and more as
27:58
you go.
27:59
"Oh, I know this now," and then you learn
the refinements on it as well. So yeah, completely
agree. So static analysis can also
28:06
help you with performance?
28:08
Yes, performance is also
a place where static analysis
can contribute. Like, if you have a push_back
and you can detect the size of an object
statically, at compile time, you
can automatically
detect that emplace_back in this case is
better.
28:27
Right. I do have a question. So,
28:30
I mean, I've used static analysis before.
28:33
For example, CLion also has a built-in
28:35
static analysis tool. Right. So, and
28:38
what I found often is that for cases
28:40
where the tool can reason about what's
28:42
happening at compile time, you know, it is
28:45
pointing out to me, Hey, there is a bug. For example,
28:48
let's say a null pointer dereference,
28:50
like if I outright, you know, dereference
28:52
a null pointer,
28:54
like obviously the tool is going to tell me, Hey, you
28:56
know, you're doing something here which might be UB.
28:58
But
28:59
if I'm doing something where
whether it's a bug is like a runtime property,
like the pointer comes from somewhere else, the
static analysis tool
can't reason about
whether or not it's a null pointer in this case.
29:12
Or for example, it depends
29:14
on a branch and then the condition of that branch
29:16
is like a runtime value. Then
29:18
typically from what I've seen, the static
29:21
analysis tool just doesn't tell you anything at that point.
29:23
And then you
29:24
only find the bug when you run a runtime
29:26
analysis tool, such as
29:28
address sanitizer or UB sanitizer or
29:30
something like that.
29:31
Is that kind of how they all work?
29:33
Or is there a mode where you can say, you
29:36
know,
29:36
point out to me like potential problems,
29:38
you know, like, Hey, this might be a
29:40
null pointer dereference here. You know, you might
29:42
want to double check that, or you might want to insert
29:44
a null pointer check. Like, I don't know what happens at runtime,
29:47
but
29:47
it's like a potential problem. Is that something that, you
29:49
know, these tools can do? Because like,
29:52
I'm not seeing that kind of thing by default, typically,
29:54
or maybe I'm using the wrong tools. I don't know. I
29:58
don't think you are.
29:59
Static analysis has different techniques to
30:02
do it. What you are talking about is
30:04
basically detecting a null pointer dereference
30:07
in the wild without any constraint
30:09
on the code. Usually, this is
30:11
detected by a technique called symbolic execution
30:13
where you try to execute the code without
30:16
executing it by simulating it in
30:19
the static analyzer. These
30:21
techniques are usually limited because
30:23
they reach the path explosion problem
30:25
where you reach the theoretical
30:27
limit of what you can do in a reasonable time. The
30:30
path explosion problem,
30:32
interesting. Yeah.
30:33
Cool name, I like that. Yeah.
30:36
There's a website title. So
30:40
if you want to simulate a code and you have a loop
30:42
inside of it, for example, and you don't know how many
30:45
times this loop is going to run.
30:47
So you have to execute the loop multiple
30:49
times until you
30:52
finish this loop. So the static analyzer
30:54
at the end, if it wants
30:56
to detect all these kinds of issues, it will
30:58
be as slow as the runtime and
31:01
not as good also. So these
31:03
kind of
31:04
techniques,
31:05
I believe they are useful, but
31:08
they cannot be the only things that you use because
31:10
if you have null pointer dereferences in your
31:13
code base, it's probably going to detect two or
31:15
three of them, which I believe is already
31:18
good.
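To make the distinction concrete, here is the kind of code being discussed (a hypothetical example, not from the show): whether the dereference is a bug depends on a runtime value, so a purely syntactic checker stays silent, while a symbolic-execution engine has to explore both branches, and with many such branches the number of paths grows exponentially, which is the path explosion problem.

```cpp
#include <cstdlib>

// Whether this dereferences null depends on a value only known at runtime.
// A symbolic executor explores both branches: one path where p stays null
// and the dereference is reached, and one where it points at 'value'.
int risky(int runtime_flag) {
    int value = 42;
    int* p = nullptr;
    if (runtime_flag > 0) {
        p = &value;
    }
    return *p;  // null dereference on the runtime_flag <= 0 path
}

// With k independent branches there are up to 2^k paths to explore;
// this helper just computes that count.
unsigned long long path_count(unsigned k) {
    return 1ULL << k;
}
```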
31:19
I think you mentioned earlier that a better
31:21
approach is to detect patterns that
31:24
themselves are not wrong, but can lead to these
31:26
sorts of issues. So if we can avoid situations
31:29
where
31:29
an unguarded null dereference can happen in the
31:32
first place without some other checking
31:34
around it, then that's a better pattern. So
31:36
I think that often can avoid
31:38
us going down that path in the first place.
31:40
Yes. A common example
31:43
is that you cannot detect all the buffer
31:45
overflow things. And it's the most
31:47
common CVE in the
31:49
database. And if you have
31:51
a static analysis check that
31:53
tells you that you should use GSL
31:56
span, for example, everywhere,
31:59
this is the
31:59
kind of check that can be implemented
32:02
by a static analyzer and can catch all
32:04
the cases. And I'm not arguing
32:06
that everybody should use that, but many people
32:09
should be aware of it and consider it as a possibility.
32:12
So you already mentioned CVEs. This
32:14
is something that we talked
32:15
about quite a few times on the show already, this whole
32:18
safety discussion, where some people are saying,
32:21
you know, you can actually find the safe subset of
32:23
C++ that can be statically proven
32:25
to be safe.
32:26
And other people are saying,
32:28
no, that's not possible. That subset
32:30
would be too limited to be
32:33
useful. I'm actually curious,
32:36
is it possible to use static
32:38
analysis to
32:40
build some kind of subset of C++
32:43
where you can reason about that being safe and not having
32:45
any kind of undefined behavior? Or do you think that's
32:47
kind of not a viable
32:50
way forward?
32:51
In the theoretical sense, it
32:53
is possible because a common
32:56
tool that actually does that is made
32:58
by Microsoft, I'm trying to remember its name. It's
33:01
called Dafny, I think. So they
33:03
have a tool where you basically write the
33:06
precondition of each function, the post-condition
33:08
of each function, and the static analysis
33:11
can build mathematical
33:13
verification that your code does
33:16
what it's supposed to do. So you can reach
33:18
a subset of a language where you
33:20
can verify statically that it is
33:23
correct and doing what it's supposed to do.
33:25
Now, if you want to apply this approach
33:27
to C++, yes, you can do
33:29
that, but the question is not if you can
33:31
do that. The question is at what cost?
33:34
You can definitely tell people that you
33:37
should use bounds checking
33:39
everywhere and verify that they do it statically.
33:42
You can tell them, you can create
33:44
a function, for example, that checks for null
33:46
pointer before every dereference and statically
33:49
detects that only this function is used,
33:51
but I'm not sure this is the
33:53
language that everybody wants to write. That's
33:55
interesting because you're kind of contradicting, I think,
33:57
what Bjarne said a few episodes ago.
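The kind of enforced helper Abbas describes, a function that checks for null before every dereference, might be sketched like this (the name and the error policy here are hypothetical): a static analysis rule can then flag any raw `*` or `->` that bypasses it.

```cpp
#include <stdexcept>

// Hypothetical project-wide helper: every pointer dereference must go
// through deref(), which rejects null up front. A static analysis rule
// can then statically detect that only this function is used.
template <typename T>
T& deref(T* p) {
    if (p == nullptr) {
        throw std::invalid_argument("null pointer dereference");
    }
    return *p;
}
```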
35:58
we are
36:00
relying on CVEs and these CVEs
36:02
are not well organized because they don't separate
36:05
between C and C++ and they don't
36:07
separate between old C++ and modern C++.
36:11
So when we talk about memory
36:13
safety, we don't actually know if
36:16
code bases that are using smart
36:19
pointer always have these
36:21
CVEs. We don't have this information
36:23
in our database. And
36:25
to start thinking about the solution
36:27
for this before getting the data
36:30
might lead us to a
36:32
different path than the one that is needed.
36:35
And I know that, for example, smart pointer
36:37
can be misused and
36:40
you can detect some part of the
36:42
misuse statically. But whether
36:44
people that are using smart pointers are misusing
36:46
them, I don't know, because
36:49
also other languages that are compared
36:51
with C++, they can be
36:53
misused. It's just that it is harder. So
36:56
yeah, this point of do we
36:58
want to get to 100% or is it enough to
37:00
get to 95% is always interesting.
37:03
And I'm always on the 95%
37:06
is good enough because
37:08
we are engineers.
37:09
Then the second question was from Phil
37:12
about the current state of tooling.
37:14
I think, for example, Bjarne
37:17
advocates for using the C++
37:19
Core Guidelines, which our tools support
37:22
and I can also talk about other tool like
37:25
clang-tidy supports part of it, Microsoft's
37:28
static analyzer supports most of it. So
37:30
there are tools that
37:32
can enforce the use of the C++ Core
37:35
Guidelines. But the question is, do people
37:37
actually use them? And if you look at surveys
37:40
from JetBrains, not many people
37:42
use static analysis. 30% don't use it
37:45
at all. And some of them use only the one
37:47
in the IDEs. So we
37:49
have a problem of awareness of the value of
37:51
static analysis. We
37:53
also have problem of processes
37:56
and let's say a push
37:58
towards static analysis.
38:00
For example, we support
38:02
different coding standards like the C++
38:04
Core Guidelines and MISRA. If we
38:07
follow the requests that we get as a company,
38:10
most of the requests are coming to
38:12
MISRA. And
38:13
I personally believe that the C++ Core
38:16
Guidelines is a better guideline for
38:18
writing code for most people, but yet
38:20
many people ask for MISRA.
38:22
Why is it so?
38:25
Because there is enforcement
38:28
on some companies to use MISRA, so
38:31
people are getting aware of MISRA and
38:33
trying to apply it in their code base.
38:36
While there is not enough awareness
38:38
about the C++ Core Guidelines and their value,
38:40
that's my opinion.
38:42
Right. So we
38:44
talked about static analysis in general, but
38:46
then there's also a bunch of actual products
38:49
that do static analysis. And you briefly
38:51
mentioned that
38:53
not as many people use them as they
38:55
probably should. But also there
38:57
are quite a few of them around. So
38:59
I already mentioned that CLion has its own static
39:02
analysis built in. Then
39:04
we have PVS-Studio, which has a
39:06
static analysis tool. We had them as sponsors
39:08
last couple of episodes. There's Perforce
39:11
Klocwork, there's Coverity, there's a bunch of others.
39:14
So it seems like there's quite a few tools out there.
39:16
And you actually have three tools at Sonar,
39:18
don't you? You have, as we heard
39:21
in the sponsor read, there's SonarQube,
39:23
there's SonarLint, there's SonarCloud. So
39:26
what do all of these products do? When do you use
39:28
which and how do you integrate them into your workflow
39:30
if you want to try and start using static analysis?
39:33
Yeah. And just to be clear, I'm more
39:35
of an advocate of static analysis. So
39:37
I use clang-tidy even
39:39
though I use Sonar, I use other static
39:42
analysis tools. For me, my opinion is
39:44
that
39:45
combining them is better; if you can find
39:48
more issues, it's OK.
39:50
You don't have to use one tool. And C++
39:52
is a big language. So
39:54
we have 600 rules already,
39:57
and in our backlog
39:59
of checks there are thousands
40:02
of them, so we are not going to run out of
40:04
checks. There is room for all competitors.
40:07
So yes, we have three products.
40:11
All of them have the same target, which
40:13
is reaching a state of clean
40:15
code in your code base.
40:17
So all of them try to push towards the same
40:20
goal.
40:22
There are multiple definitions
40:25
of clean code, but it's reaching
40:27
the point where your code base is an asset for
40:29
you rather than a debt.
40:31
So you can think about maintainability,
40:33
you can think about security, you can think about
40:36
performance, you can say you can think about safety,
40:38
all the
40:39
non-functional aspects of good code.
40:41
So all of these tools are trying
40:44
to get you there. SonarLint
40:46
is the first line, which is in your IDE.
40:49
It's usually a personal choice
40:51
when someone installs an extension in their
40:53
IDE to comply with the C++ Core Guidelines,
40:56
and you don't have to invest a
40:59
lot. While SonarQube and SonarCloud
41:01
are the same product, one is on
41:04
the cloud, hence the name, and the other
41:06
one is on-premise, which is SonarQube.
41:08
But they both have
41:11
the same goal, which is scaling up to
41:13
a project level or an organizational
41:16
level quality. When
41:17
I say organizational level, let's
41:19
say your organization wants to comply with
41:21
the C++ Core Guidelines, it's
41:23
not enough that half
41:25
of your team applies it in
41:27
the IDE. You need some sort
41:30
of a process, of a guideline of
41:33
what are the criteria that you need
41:35
to comply with to merge to
41:38
your main branch.
41:39
And all of that governance is usually
41:41
done by SonarQube and SonarCloud,
41:44
and there are processes to do
41:46
that because you cannot just say, hey, I'm going
41:48
to refactor my entire code to comply
41:51
with the C++ Core Guidelines, because if you have a million
41:53
lines of code, you cannot just go and remove everything
41:55
new and put in place of it smart
41:58
pointers. So
41:59
SonarQube and
41:59
SonarCloud are meant to put processes in place to
42:02
reach this goal incrementally. For
42:05
example, if you open a pull request,
42:07
you have some checks that only say
42:09
that you have to comply with the C++
42:12
Core Guidelines on your new code rather
42:14
than the old code. And
42:16
once you change your new code incrementally,
42:19
at some point you will reach a
42:20
code base that is compliant with the C++
42:23
Core Guidelines. I think the statistics
42:25
we usually use are that after about five
42:27
years you would have touched half of your code base.
42:29
So just by only changing new code
42:32
you would have cleaned half the code base within
42:34
five years and
42:35
some statistics even faster than that.
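The arithmetic behind that statistic, under the simplifying assumption that a uniform fraction of the code is touched each year, is just compound decay: with roughly 13% of the code modified per year, the untouched fraction after five years is 0.87^5, which is about 0.50.

```cpp
#include <cmath>

// Fraction of the code base still untouched after 'years' years, assuming
// a uniform 'touched_per_year' fraction is modified (and cleaned) each year.
double untouched_fraction(double touched_per_year, int years) {
    return std::pow(1.0 - touched_per_year, years);
}
```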
42:38
So we talked earlier about the challenges
42:41
of C++ tooling, of which there are many,
42:44
but are the Sonar tools, particularly
42:47
SonarQube and SonarCloud that you mentioned there,
42:49
are they just as easy to pick up or
42:52
are there particular challenges to
42:53
using
42:54
those new tools?
42:57
So SonarLint is free
42:59
in your IDE, you don't have to configure anything.
43:02
It works with CLion and VS
43:04
Code and Visual Studio.
43:06
But SonarQube and SonarCloud
43:09
you have to
43:11
set up the analysis in
43:13
your pipeline on your CI
43:16
and hence we recently
43:18
worked on a new feature called
43:20
automatic analysis which
43:23
makes this process of
43:25
analyzing your code on SonarCloud
43:27
much easier.
43:28
So I do have another question. What if I'm
43:31
working on an open source library and it's like
43:33
an entirely not-for-profit
43:35
kind of thing which is out there on GitHub
43:37
and I have my
43:38
CI set up with GitHub Actions
43:40
or something like this. Is there any
43:43
non-commercial license of these
43:45
tools that I can use to have static analysis
43:47
on CI rather than just in my IDE?
43:51
Yes, so for SonarCloud you have
43:53
two options. The previous way
43:55
was there's a GitHub Action for
43:57
analysis on SonarCloud and it's
44:00
for open source projects. So you can
44:02
just integrate it in your pipeline and
44:04
you will get
44:06
our static analyzer.
44:07
Or you can use a new way
44:10
of analyzing C++ code on
44:13
SonarCloud, which is called automatic
44:16
analysis, where you just click one button
44:18
and it's going to be analyzed on
44:21
SonarSource infrastructure. So you don't need a
44:23
CI and it's going to be configured
44:26
automatically. So we are going to resolve
44:28
your dependencies and your preprocessor
44:30
and every single part of
44:33
the C++ ecosystem automatically.
44:34
Wait, so how do you do that? So
44:37
I have been working for the last year on this feature.
44:39
It's, it
44:41
takes time. So
44:43
the motivation is that not many
44:45
people use static analysis. And previously
44:48
to set up a GitHub action to
44:50
do that, you
44:52
need to make sure that your build works. You
44:54
need to make sure that the dependency exists
44:56
on this action. You need to
44:58
know how to invoke, how to build
45:00
your code because if you build your
45:03
code with different preprocessor definitions, it means different
45:05
things. So there's many things that
45:08
the static analyzer needs to know about your
45:10
project to be able to analyze it. And
45:12
now after this new feature, we don't
45:15
need that anymore. And
45:17
what we did, we did three main
45:19
things. First, for dependencies,
45:21
we built our own dependency manager
45:24
that basically instead of you telling us
45:27
which dependency you are using,
45:29
we scan your code, look at your includes,
45:31
try to match them with open source dependency,
45:34
and we automatically check them out for you.
45:37
That's the first part. This
45:39
works for most dependencies, but doesn't work
45:41
for private dependencies. And we are
45:43
going to go over that later on. The
45:46
second step is resolving
45:48
the preprocessor. And the
45:51
good thing about resolving the preprocessor is that
45:53
you only need to find a valid
45:56
combination of preprocessor definitions
45:59
that leads to your code.
45:59
to compile.
46:01
So what we basically do is
46:03
we build an equation
46:06
from all the preprocessor directives
46:08
and we use a tool called an SMT solver
46:11
to find the best combination of all
46:13
the preprocessors that lead to a
46:15
code that actually compiles.
46:18
And
46:20
An SMT solver is well known in the static analysis
46:22
world. It's basically a tool that you give
46:24
an equation and it does some mathematical
46:27
modeling to resolve this equation.
46:29
And the equation here is what is the combination
46:31
of all the preprocessors that lead
46:34
to compilable C++ and
46:36
lead to the maximum number of tokens because
46:39
if you have a preprocessor definition that removes
46:41
your entire code base we don't want to
46:43
enable it. We want to actually analyze your
46:45
code. And by using this technology
46:48
we get
46:50
a solution for the preprocessor. We
46:52
can understand your code. And
46:54
the third thing is we go back
46:56
to the parser. We had to be able
46:59
to modify the parser to
47:01
understand invalid code because
47:03
if you have an internal dependency
47:05
and you have a call to this internal dependency
47:09
Clang will fail because we cannot
47:11
resolve what this dependency is. But
47:13
on our side we worked on the parser to
47:15
be able to say keep this aside. We don't
47:17
understand it right now, and make the analyzer
47:20
try to
47:22
guess what it is and behave as if
47:24
this function exists. So
47:28
there are three elements: there is the preprocessor,
47:30
there is a dependency manager and
47:33
there is a parser. And
47:35
combining them together, we
47:37
analyzed
47:40
200 open source projects. We
47:42
were able to reach with this
47:44
method something like 95% accuracy compared
47:48
to manual analysis where it's done by
47:50
the user itself.
47:51
So yeah
47:52
this is like a lot of work that
47:55
is done behind the scenes to reach a state
47:57
where the user only has to click one button.
48:00
To get their
48:02
C++ code analyzed. And
48:05
it only works on GitHub.
48:07
For open source projects it's free,
48:10
for private projects
48:12
it is paid.
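A toy version of the preprocessor-resolution idea described above (a simplification; the real system hands the constraints to an SMT solver rather than enumerating): treat each macro as a boolean, keep only assignments that satisfy the directives' constraints, and pick the one that keeps the most tokens visible. All types and names here are illustrative.

```cpp
#include <cstdint>
#include <functional>

// Toy model: each macro is one bit in an assignment word. 'compiles' says
// whether an assignment still leads to code that compiles (e.g. no #error
// branch taken); 'tokens' says how many tokens survive preprocessing.
struct PreprocModel {
    unsigned num_macros;
    std::function<bool(std::uint32_t)> compiles;
    std::function<int(std::uint32_t)> tokens;
};

// Brute-force stand-in for the SMT solver: among all macro assignments
// that compile, return the one exposing the most tokens (-1 if none).
std::int64_t best_assignment(const PreprocModel& m) {
    std::int64_t best = -1;
    int best_tokens = -1;
    for (std::uint32_t a = 0; a < (1u << m.num_macros); ++a) {
        if (!m.compiles(a)) continue;
        int t = m.tokens(a);
        if (t > best_tokens) { best_tokens = t; best = a; }
    }
    return best;
}
```

For example, with macros A (bit 0) and B (bit 1), a directive equivalent to `#if defined(A) && defined(B) / #error` rules out the assignment with both set, and the solver picks whichever remaining assignment exposes the most code.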
48:14
So that's super impressive so basically you have
48:16
your own package manager so
48:18
to say, and your own, like, build
48:20
system-ish thing that just magically
48:23
configures itself by just
48:25
looking at what's there in the repository.
48:28
It also sounds to me
48:30
like this kind of system
48:32
would be very useful in many contexts
48:35
even beyond just doing static analysis
48:37
like this
48:38
kind of magic auto-configuring
48:40
thing that figures out
48:42
how to compile your code or how to reason about your code
48:44
that sounds really cool and useful.
48:47
Yes so it works in this
48:49
tooling area where you need to know mostly
48:51
how your code compiles. If you need 100%, like
48:54
you need to generate a binary of your code, yes,
48:57
this is not reliable because you might
48:59
make small mistakes that change the
49:01
meaning of your code but in things like tooling
49:03
or refactoring and
49:06
static analysis this seems to
49:08
be a good approach.
49:09
And I think magic is the right word there when you
49:12
when you try this for the first time, especially if you've tried
49:14
setting up.
49:15
Something like SonarQube manually, or
49:17
even SonarCloud, it is just magic,
49:19
you just literally click that button and it just
49:21
works and 95% accuracy
49:25
is pretty good when it comes to applying
49:27
static analysis. It's good
49:29
enough that most people would never need to
49:31
manually
49:32
configure their projects anymore. Well now I know
49:34
what I'm going to be playing with this weekend. All
49:39
right, so we talked a lot about static
49:41
analysis and tooling but is there anything
49:43
else going on in the world of C++ right now that you find particularly
49:45
interesting or exciting?
49:48
Let's
49:50
see what's happening in the C++ world these days.
49:53
So there is this movement of new languages
49:55
that you went over. I think I'm excited
49:58
about it in the sense of hopefully it moves
49:59
the C++ language
50:02
forward. Let's see what comes out of
50:04
it.
50:06
The new hot topic in the last
50:08
year in C++ is safety. So
50:11
we hope that the language improves due to this
50:14
push. But yeah,
50:16
that's all. So can I ask you a question
50:18
just because it's kind of a controversial question
50:20
that I've asked a lot of people and I'm just curious about
50:22
your opinion as somebody who works
50:25
on static analysis and obviously worries
50:27
about safety and security.
50:29
Do you think that C++ is doomed
50:31
if you can't make it a kind of guaranteed
50:34
memory safe language the way that Rust is?
50:36
No, I don't think it's doomed because there are
50:39
many use cases of C++
50:41
where it's not the main thing to
50:44
do. Maybe you will lose some
50:46
industry, which will move away
50:48
from C++ in the places where
50:50
you need 100% safety.
50:52
But in the industry of
50:54
gaming, for example, or in the industry
50:56
of music, I'm
50:59
not sure all of this discussion is going
51:02
to impact
51:03
the users of
51:05
C++. And
51:07
there's many people that are using COBOL these days.
51:11
C++ is not going away.
51:14
Right, well thank you very much. That was a very
51:17
good insight. I like your answer. So
51:20
anything else you want to tell us before we let
51:22
you go, Abbas? What do I
51:24
want to tell you? Maybe
51:27
how people can reach you if they want to get in touch. So
51:30
I'm usually on SonarSource
51:33
community, community.sonarsource.com.
51:36
I'm interested in C++. If you have any
51:39
idea of how to check things statically,
51:41
please post it there and I will be
51:43
one of the people that are going
51:45
to reply to you. I
51:47
have a LinkedIn profile. I have a Twitter
51:49
slash X profile that I don't
51:51
use that much, but feel free to reach
51:53
out to discuss C++ and static
51:56
analysis. All right, we're going to put links to
51:58
all of these things in the show notes.
51:59
By the way, that
52:02
community discourse forum,
52:05
I can say from seeing it on the inside, that's
52:07
almost a direct line to the developer team.
52:10
So if you've got any questions for
52:12
any of us,
52:13
that's where to post it. All right.
52:15
So it looks like we're at the end of this episode, but thank
52:17
you again, Abbas, for being our
52:20
guest today and for this fascinating discussion. It was
52:22
a
52:22
pleasure to have you here. Thank you for
52:24
having me. And I'll
52:26
see you next week. All right. See you next
52:29
time. Thanks so much
52:31
for listening in as we chat about C++. We'd
52:33
love to hear what you think of the podcast. Please
52:35
let us know if we're discussing the stuff you're interested
52:38
in, or if you have a suggestion for
52:40
a guest or topic, we'd love to hear about that
52:42
too. You can email all your thoughts to
52:44
feedback at cppcast.com. We'd
52:47
also appreciate it if you can follow CPP Cast
52:49
on Twitter or Mastodon. You can
52:51
also follow me and Phil individually. All
52:54
those links, as well as the show notes can
52:56
be found on the podcast website at
52:58
cppcast.com.
53:01
The theme music for this episode was provided
53:03
by podcastthemes.com.