Soagen

Released Friday, 18th August 2023

Episode Transcript


0:00

Episode 367 of CppCast

0:02

with guest Mark Gillett, recorded

0:04

9th of August 2023.

0:07

This episode is sponsored by the PVS-Studio

0:09

team. The team promotes

0:11

regular usage of static code analysis

0:13

and the PVS-Studio static analysis

0:15

tool.

0:31

In this episode, we talk about several new

0:33

library releases and

0:35

about how standard containers are implemented in

0:37

the Microsoft Standard Library. Then

0:41

we are joined by Mark Gillett. Mark

0:44

talks to us about his library soagen, a

0:47

structure of arrays generator for C++.

1:00

Welcome to episode 367

1:02

of CppCast, the first podcast

1:04

for C++ developers by C++ developers.

1:07

I'm your host, Timur Doumler, joined

1:09

by my co-host for today, Jason Turner.

1:12

Jason, how are you doing today? I'm

1:15

excited today, Timur, actually.

1:18

Thank you for inviting me back

1:20

on as a guest co-host. It's

1:24

been a little while. How many episodes have you all

1:26

done without me here, basically? 17, 18, depending

1:28

on whether we count the

1:30

one that we did together. Right,

1:34

right. Yeah, quite a

1:35

lot. Yeah, it's been fun so

1:37

far. So everything's going

1:39

well? Did I leave the podcast in good hands? Yes,

1:42

yes. It's been great fun. And yeah, thank you so

1:44

much. It's, as you can

1:47

tell, Phil is still on vacation. So I'm very, very excited to have

1:49

you here.

1:51

You're back on the show for the first time in over

1:53

a year, aren't you? If you don't count the episode, the special Christmas

1:55

one that we did together. Oh

1:58

yeah, definitely. Over a year, yeah.

1:59

Not counting the Christmas episode. So how

2:02

have you been since then? What are you up to these

2:04

days? Well, not a whole

2:06

lot has changed for me professionally.

2:09

Anyhow, I'm still doing training. I

2:11

do have a

2:13

C++ best practices workshop coming

2:15

up at NDC TechTown at the

2:18

end of September. I'm assuming this will air

2:20

before then, right? Oh, yeah.

2:22

This will air next week on Friday.

2:25

Okay, then yes, definitely. And I'm

2:28

planning to do also a best practices workshop

2:31

in the post-conference at CppCon.

2:33

So there's that. And

2:35

I'm also sitting here like a fool

2:38

staring at my YouTube subscriber

2:40

count because I'm currently at 99,807 subscribers. All right, just

2:42

want to see that 100K

2:43

tick

2:48

over. That's amazing. So if you haven't yet subscribed

2:50

to Jason's YouTube channel, please do so now

2:52

to help him get the sixth digit

2:54

in there.

2:55

I'm hoping to hit that by the end of the

2:57

week. That's my plan right now. So

3:01

what about you? You've had some career

3:03

things going on lately, right? Yeah, exactly.

3:06

So I handed in my notice at JetBrains.

3:08

I felt like I needed to take a little bit

3:10

of a break from being full time employed.

3:13

I

3:13

have other stuff going on that I want to spend

3:16

some time with. So I want to

3:18

focus on some family stuff and also

3:21

write a book about C++, which

3:23

I wanted to do for a long time. But

3:25

that's not really compatible with being like

3:27

full time employed at a big tech company, unless you

3:29

want to work weekends and evenings, which is

3:31

not something I'm a particular fan of. So I

3:35

kind of had that on my backlog, but always wanted

3:37

to do that. And so now I finally will have time to

3:40

pursue that. So I'm very excited about that. Do you

3:42

have a title or a plan, something

3:44

to get the listeners excited about? So

3:47

tentative title is low latency C++.

3:50

You might have seen my three hour talk

3:52

at C++ now this year, which had the same

3:55

title where I was talking about

3:56

all of the different techniques that you can use if you're

3:59

into audio

4:01

or finance or video games or any of

4:03

those kind of fields where you're

4:06

optimizing for latency rather

4:09

than just general performance like where it really matters

4:12

how many milliseconds or microseconds a particular

4:15

piece of code runs and it shouldn't be over

4:17

a

4:17

certain deadline it should be as fast as possible so you have

4:20

this hyper focus on kind of

4:22

latency and that leads to particular

4:24

ways of writing C++ that have

4:27

some overlap with kind of general performance optimization

4:29

but sometimes you also

4:31

take very different approaches and so I want to just

4:33

provide an overview over all these techniques that you

4:36

you might want to use in those industries so

4:38

it's kind of a summary of like a bunch of talks

4:40

that I've given over the last five six seven

4:42

years and also kind of other material

4:44

that I have kind of researched

4:47

in parallel so I just want to I've been asked

4:49

by people it was kind of fun when I first

4:51

did this one hour version of this low latency overview

4:53

talk

4:54

people were like yeah this is really exciting but like one

4:57

hour is nowhere near enough to like give a proper overview

4:59

so can you do a longer one so I did a three-hour

5:01

version at C++ now

5:03

and then after three hours and people were

5:05

like well but that's nowhere near enough time

5:07

to actually explain how this will work so can

5:10

you do like a one-day workshop and then I was like yeah

5:12

but okay let me just write it up so

5:15

that's what I want to do and yeah

5:18

that's kind of going to be another one of my side projects

5:20

for the next few months are you working with a publisher

5:22

or self publishing

5:24

um so I haven't decided

5:27

that yet I had actually a couple publishers

5:29

who kind

5:30

of approached me and said they

5:33

were interested to do something together but

5:35

I haven't yet decided if I if I go for that

5:37

or if I do self-publishing or how I'm gonna approach

5:39

this I want to first get like a kind

5:41

of table of contents and this is what's gonna be

5:43

in this book and and have like a really good idea

5:46

of that in my head and then think

5:48

about you know how I'm gonna make that happen and exactly

5:50

with whom yeah that sounds like a pretty

5:53

good approach to me having only self-published

5:55

but I have worked with publishers on other projects

5:57

and I prefer self-publishing so

5:59

But I think it would make sense,

6:03

yeah,

6:06

to get as much done as you can before

6:08

you talk to a publisher and then decide if you want

6:10

to work with a publisher, if you wanna go the self-publishing route.

6:12

And just as an aside to listeners

6:15

who might be thinking about the same kind of thing, there's nothing

6:17

stopping you from self-publishing and then

6:19

selling your book to a publisher.

6:22

That is a thing. I actually had someone offer to

6:24

buy my book and I said, no, I'm good, thank

6:26

you very much, bye. That is interesting.

6:28

I did not know that was a thing. Thank you, Jason.

6:31

You own the copyright to your book, so no

6:33

one can stop you from selling it. Right.

6:36

Well, so at the top of every episode, I like to read

6:38

a piece of feedback, but actually this week

6:42

I didn't receive any feedback, neither

6:44

via email, nor via Twitter, nor via Mastodon,

6:47

nor on Reddit.

6:48

So I actually don't have any new feedback

6:51

this week, but if you have any,

6:53

please let us know.

6:55

We'd like to hear thoughts about the show. You

6:58

can always reach out to us on Mastodon or on Twitter, or is it

7:00

now officially called X? I'm not sure. I

7:02

saw someone write T-X-I-T-T-E-R.

7:06

All right, or you can email us and that's

7:08

definitely gonna still work. Email

7:10

us at feedback at cppcast.com.

7:14

Joining us today is Mark Gillett.

7:16

Mark is a soft body

7:18

physics engine developer and low level

7:20

tooling guy at Osgenic, a

7:23

surgical training company based in Helsinki,

7:25

Finland.

7:26

Prior to his current role, he was the chief

7:28

architect of an internal graphics engine used

7:30

by the company in prototypes during

7:32

their startup phase.

7:34

Before coming to Finland, Mark was a teacher,

7:36

researcher, and consultant at Flinders

7:39

University in South Australia,

7:41

working with haptic controllers to find novel

7:43

ways of modeling and teaching different

7:45

surgical interactions. Mark

7:47

first learned to code as a teenager, making

7:50

mods in Unreal Script for Unreal Tournament 2004.

7:53

And these days, almost all of his work is C++.

7:56

Mark, welcome to the show.

7:58

Hi, thanks for having me. Mark,

8:00

I feel like your bio is backward

8:03

because so many people say they got into programming

8:05

because they wanted to do gaming stuff. It sounds like

8:07

you started with gaming and then became like a

8:10

not game programmer. Yes. Yeah,

8:13

that's exactly right. I've had people comment on

8:15

that before. I think for me the

8:17

genesis of this is that I discovered pretty early on

8:19

that I don't actually really want to

8:21

make games. I want to make tools.

8:24

I want to make things that go in games. So the

8:27

path I've taken satisfies

8:29

that quite nicely. I can totally see that.

8:32

I can feel that. I mean, I'm

8:34

curious though, you just like casually throw

8:36

soft body physics into your

8:38

into your bio versus

8:41

hard body physics, rigid body. Like

8:44

what's the deal here?

8:46

So it's a platform for

8:48

simulating the interactions between different tissues,

8:51

right? So it's not just rigid body

8:53

physics as you might have in, say, NVIDIA's

8:56

PhysX, where you've got

8:58

spheres and cubes and various things.

9:02

It's more about

9:04

pretty much entirely focusing on the interactions

9:06

between the soft body, the soft tissues themselves.

9:11

And how we might drive, say we have

9:13

these three dimensional haptics controllers

9:15

that we use to act

9:17

as a proxy for say a scalpel or a drill or

9:19

something and that's capable of rendering some force

9:21

feedback. And so from the

9:23

physics simulation, the interactions between

9:26

the tool and the soft tissue, we can pull

9:29

forces out of that

9:30

and have the have the tool render some force

9:33

as it would in real life if you passed

9:36

an instrument through some flesh during surgery. So

9:38

like what does this practically

9:41

look like soft body physics modeling? Is it like a

9:43

bunch of particles connected by

9:45

springs or am I like overthinking

9:47

this? That's essentially

9:51

the bare bones description of what it is.

9:53

It's not a mass spring system, but

9:55

it shares similarities with

9:57

that.

9:59

Of the people

10:02

that work on the project, I'm not the physicist. So

10:04

I don't want to get too much into the specifics because I'm

10:06

going to, I'm going to fudge

10:08

the description, but I would say it would

10:10

be fair for me to describe it as being a mass spring

10:13

system on Uber steroids.

10:16

Okay. And that's about as technical as I can

10:18

be on the physics side of it, because, uh,

10:20

the, you know, the research and the, uh,

10:23

the, the physical principles that go into making it

10:25

work aren't really my area of expertise.

10:27

I wrap it up in a software engineering

10:29

framework and make it fast. That's sort of where

10:32

I, where I live. Interesting. Right.

10:34

So Mark, we'll get more into your work in just a few minutes,

10:36

but, uh, before we do that, we have a couple of

10:38

news articles to talk about, so feel

10:41

free to comment on any of these. Okay.

10:44

So the first one I have this time

10:46

is a series actually of blog posts, the

10:48

whole series of blog posts called, um, inside

10:51

STL by Raymond Chen

10:53

that came out last week. Um, and probably

10:55

there's going to be more coming out about how the

10:57

containers and the Microsoft standard library are implemented

11:00

under the hood.

11:01

So I thought that was really interesting. There was one about,

11:03

um, the vector it's called inside STL,

11:05

the vector. Uh, that

11:07

was cool because I always thought that vectors, like

11:10

std::vector is implemented as like

11:12

three members, like pointer, size, and capacity,

11:14

but it actually turns out that the Microsoft version

11:16

is implemented with three pointers: first,

11:18

last, and end.

11:19

So, so Raymond talks about that. Um,

11:22

then he has a blog post about the string,

11:24

the Microsoft string. There's another

11:26

one about the pair, the lists, like

11:28

the maps. Um, and

11:30

it's like a blog post for each one of those. So if you

11:33

want to kind of dig into Microsoft

11:35

STL implementation and see how things are done there,

11:37

um, I think that's a really cool kind of series of

11:40

blog posts that caught my attention.

11:41

Well, in the string article also does

11:44

a comparison on how the other two

11:46

standard libraries implement their small string optimization.

11:49

So for anyone who's curious about

11:52

what the heck is small string or short string

11:54

optimization, how does it work? This is a

11:56

super succinct overview of that.

11:58

Cause this is.

11:59

Well, Raymond's article is what he publishes one literally

12:02

every weekday, right? So

12:05

they're never very, very

12:07

long. So it's all compacted

12:09

here. Because we

12:11

interviewed him back in the day,

12:14

on CppCast.

12:15

Oh, that's

12:17

interesting. I haven't listened to that one yet.

12:19

So I actually started listening to all the

12:22

CppCast episodes all the way from the

12:24

very beginning, when Rob was just

12:26

doing them on his own, and then later

12:28

and later. But I haven't caught up to this one

12:30

yet. Yeah. So I think this is,

12:32

if we look at it, I'm derailing

12:34

the conversation now. But I think if we look at this post number,

12:37

it might actually literally be

12:39

post 108,532 or something ridiculous.

12:43

Wow. OK. That's a lot of blog

12:46

posts. Maybe I'm wrong. But it's

12:48

a lot. I think it's every weekday that he publishes

12:50

one. Right.

12:51

So there's another blog post that

12:53

I also found really interesting this week from Tristan

12:55

Brindle, whom we had on the show a couple

12:58

of episodes ago.

12:59

He talked about his Flux library,

13:02

kind of an alternative way to do like iterators

13:04

and ranges, which is kind of really cool. And

13:06

he updated his library to support C++ 20 modules. And

13:10

he wrote a blog post about it.

13:12

And that's really interesting, not just because modules

13:14

are great and because Flux library

13:16

is great, but also because the blog

13:19

post actually explains how it all works. So

13:21

he shows, first of all, how to

13:23

compile his library using modules on all

13:25

the three major compilers, like Clang 16,

13:27

GCC 13, MSVC 17.6,

13:30

like

13:31

what compiler flags you need to compile with modules,

13:33

all of that stuff.

13:35

He also talks about how you can try it out

13:37

using CMake.

13:38

He mentions that CMake has this new

13:41

built-in module support, but he actually doesn't use it.

13:43

He uses Victor Zverovich's modules.cmake

13:46

thing for that. He explains how that works.

13:49

And then also he talks about how to modularize a

13:51

library. So you can actually apply that to your own

13:53

library if you want to make your library

13:56

compatible with C++ 20 modules. He kind

13:58

of goes through that as well. So I thought that was

13:59

like a really, really cool and comprehensive

14:02

blog post for people who are interested in actually

14:04

using modules in practice. So

14:07

I know we're going to get into Mark's library a little bit later on,

14:09

but I'm curious if you looked at modules at

14:11

all yet so far for the library you've been

14:13

working on, Mark?

14:14

No, I admittedly,

14:17

modules, conceptually

14:19

have been a bit of a black hole for me. I just, other

14:21

things keep coming up when I've set aside time to learn about

14:23

them. So no, I haven't. I

14:26

mean, I'm in the same boat because I'm waiting for the tooling

14:28

story to be complete. So that I can

14:30

just use them, not have

14:32

to figure out how to use them. But

14:35

anyhow.

14:36

Yeah. So speaking

14:38

of CMake, someone actually made a branch

14:40

of CMake that supports Python

14:43

scripting in addition to regular CMake

14:45

scripting. There's

14:47

a GitHub repository, it's called Py CMake.

14:49

There's

14:50

a delightful Reddit discussion about

14:52

whether that's a good idea or not.

14:56

Yeah, I thought that was another interesting project

14:58

that I wanted to mention that surfaced this

15:00

week. Let's do a show of hands.

15:03

Who thinks that Python and CMake is a

15:05

good idea? Mark,

15:08

Timor, either one of you think it's a good idea? I

15:12

won't raise my hand, but I have a bit of a non-straightforward

15:16

answer. I think replacing

15:19

CMake's DSL with

15:22

literally anything else is a goal worth

15:24

pursuing. I don't think replacing it

15:26

with a Turing complete full programming

15:28

language is the right way to

15:30

do it. One of

15:32

the former projects I was working on, we had

15:35

CMake augmented with Lua scripts. Really?

15:39

Which was really cool for

15:42

that particular use case. You could do

15:44

cool things that are a pain to do in CMake.

15:48

But whether you really

15:50

should have to do these things,

15:53

probably my answer would be no. Every

15:56

single CMake best practices talk

15:58

in the last five years

15:59

has been stick with a declarative style

16:02

in your CMake, don't have a lot of ifs and

16:04

branching and stuff. And so

16:06

I'm leaning more towards the side of making it

16:08

too easy to program

16:10

in CMake might not be a good idea.

16:13

But I've also been in situations where I could have used that.

16:16

Yeah, and there's also situations

16:18

like Mark's library that we're going to get into later,

16:20

where we actually have to generate code

16:22

at certain points. And you have to somehow

16:24

integrate that into your CMake. So

16:27

before we get to that, I want

16:30

to mention one more library that also

16:32

popped up this time around,

16:34

so lots of interesting new

16:36

libraries

16:39

recently. This library is called

16:41

CPP Trace. And

16:43

it's a lightweight stack trace library that we can use

16:45

while we're waiting for the C++23 header

16:48

<stacktrace>

16:49

to actually be universally available. So currently,

16:52

I think the Microsoft compiler actually is the only

16:54

one that has a full implementation of the C++23

16:57

std::stacktrace.

16:59

I think GCC has kind of a partial

17:01

one. Clang is lagging

17:03

behind. They don't have anything

17:06

at the moment. If you want to use

17:08

that stuff cross-platform today, it

17:10

seems like this is a new library that you can

17:12

use instead. Cool. And

17:15

yeah, the last newsworthy library

17:18

that I want to mention on this episode is a new

17:20

library called soagen, a structure

17:22

of arrays generator for C++. And that's

17:24

Mark's library. As it so happens, the author of

17:26

that library is our guest for today.

17:29

So hello again, Mark. Greetings.

17:32

So first of all, how do I pronounce soagen?

17:35

Is it "so-gen"? Is it "S-O-A-gen"? How

17:37

do I pronounce the name of your library correctly?

17:39

I don't think anybody had literally spoken it

17:42

out loud until today. So I'm

17:45

happy with "so-gen". That's how it is in my head.

17:49

OK, I vaguely remember it's like a Japanese

17:51

character from some kind of video game or something

17:54

like that. Yeah,

17:56

yeah.

17:57

Oh, I was just, I just took

17:59

the words "SoA" and "generator" and stuck

18:01

them together. That's as deep as

18:04

any thought that went into it is. Is it actually

18:06

an anime character, Timur? Or are you just making

18:08

stuff up? What's happening?

18:11

I might be making stuff up. I'm going to research this.

18:15

I have a vague memory of like, I've

18:17

heard this name before, but maybe I'm mistaken.

18:20

To me, it sounds like a

18:23

shortening of Sojourner or something. Like

18:25

I'm expecting it to be a traveler of some sort,

18:27

but... Oh, yes. Sojourner is

18:29

a character in Ghost of Tsushima,

18:31

which is a video game. Oh, okay. Cool.

18:35

But it's spelled S-O-G-E-N without the A. Okay.

18:37

Apparently, it's impossible to make up

18:39

a new word today. Right. Anyway,

18:42

so what is soagen? What problem

18:45

does it solve? What is structure of arrays?

18:47

Why do we need that in C++? What is this

18:49

about?

18:50

Okay. The problem that it's aiming

18:52

to solve is the

18:55

essentially the cache locality problem inherent

18:58

in if you have a large data

19:00

set of say you have an array of many, many objects,

19:02

those objects have quite a few fields, you need

19:04

to whip through that array and only do some processing

19:06

on one particular field in each

19:09

element in the array, you're going to essentially

19:11

take your cache hit every single

19:13

time because

19:14

your struct, one or two

19:17

instances of your struct are going to fill the cache, where

19:19

really what you just want is that one element

19:21

laid out side by side. So

19:24

structure of arrays is saying, okay,

19:26

instead of having one array

19:28

with each of our individual

19:30

objects, let's not have an explicit

19:33

object anymore. And let's have many arrays,

19:35

one for each field, and then whip through

19:37

the particular array that we want. So

19:40

that's not ideal for say most scenarios

19:42

where you would have

19:45

the example I use in the documentation for the library

19:47

is an employee database piece

19:50

of software. Now, obviously,

19:52

in reality, you would use SQL or something for this, but we'll

19:55

just roll with this. In that sort

19:57

of application, you're going to want the

19:59

array-of-structs model, you're going to want the

20:02

objects to be self-contained actual objects

20:04

because you're rarely going to access

20:07

a field of an employee and not touch any others.

20:09

That's pretty unusual. But for

20:12

the scenarios where you do need to do that,

20:14

you want to restructure your data that way for things

20:16

like low latency applications,

20:19

like collision detection in a game engine, for instance,

20:21

or rendering applications. It's often

20:24

worthwhile to structure your data that way.

20:27

So

20:28

the annoying thing about working with data like

20:30

that, though, is that now you lose

20:32

the explicit object that models your

20:34

thing and you instead have to implicitly

20:38

connect all these different arrays together

20:40

and say, okay, all of the elements at index seven

20:43

are the ones I'm interested in. And that can be quite annoying

20:45

because if you add a new element to any one of those arrays,

20:47

you need to ensure that you do it to all of them. If

20:49

you shuffle them, sort them, whatever. Otherwise

20:52

you end up with data going out of sync and all sorts

20:54

of crazy bugs.

20:55

So this

20:58

project is fundamentally two things. It's a

21:00

set of abstractions for, it's a library

21:03

for working with data like that in C++

21:05

as though it were one contiguous collection. And

21:08

it's also a generator for solving some additional

21:10

problems on top of that.

21:12

So when I was looking

21:14

at your project before the interview, I was

21:17

immediately reminded

21:19

of Mike Acton's data oriented design

21:22

talk from CVPCon like 2015 or whatever

21:25

that was, is that

21:27

a fair comparison? This is a data oriented

21:29

design principle? Yes, very

21:31

much so. And I

21:33

thought maybe I linked to that talk somewhere in the description

21:35

for the project, maybe I didn't, but yeah, certainly that

21:38

sort of design is firmly in mind. Yes.

21:41

Okay.

21:41

So it's a way of like formalizing that kind of design.

21:43

Yeah. And wrapping

21:46

it up in a, you know, so, okay,

21:48

the vocabulary type everybody uses in C++ for

21:51

containers is vector, right? Even when you

21:54

don't want to use a vector, you want to use a vector. That's

21:56

the whole joke. So that's an interface

21:58

that we're all familiar with.

21:59

I wanted to have that same interface,

22:02

but for this style of data. OK.

22:04

All right. So how do I use soagen? What's

22:08

the workflow like? Because the

22:11

title suggests that it's a generator

22:14

for C++, so it's not actually

22:17

just a C++ library, so I need to generate code.

22:19

How does that work? How do I use this?

22:21

Do I

22:22

define my struct,

22:25

and then

22:27

I run a script over it

22:29

to generate some other C++ code

22:31

that I then compile into my program, something

22:34

along those lines?

22:36

Yeah.

22:38

I should be careful to clarify that you don't have to

22:40

use the generator. The

22:42

features you get without the generator are about 90% of

22:45

what you get with it. So the generator is for fairly

22:48

specific use cases on top, which

22:50

I'll explain in a bit, I suppose. But if

22:53

you were to use the generator, yes, you would be you

22:55

describe your struct

22:56

in a configuration file that says what the members are

22:59

if you have any particular alignment requirements and if

23:01

you want to do any code injection stuff and

23:04

run

23:04

the tool, and it spits out a header file for you with

23:07

the code that then uses the library

23:10

as a dependency.

23:13

But I feel like I've sort of

23:15

buried the lede there. I should clarify what you might use the

23:17

generator for over the top of the base library. Perhaps

23:20

that's worth me clarifying. So

23:22

because my background

23:25

is not game development, but it's game dev adjacent.

23:28

And in those sorts of environments,

23:30

you have to do a lot of reflection based tasks.

23:33

So whenever you need to deal

23:35

with deserializing and

23:37

serializing assets, for instance, there's

23:39

all sorts of different assets in a game engine. Even

23:42

if you're not making games,

23:43

my company is not making games, we still use

23:45

Unreal Engine and that has its own built in reflection

23:47

system. And these reflection systems, because

23:49

C++ doesn't have reflection

23:52

proper, invariably end up being

23:54

based on some combination of source

23:56

code scanners, stringification,

23:58

magic

23:59

macros, that sort of thing. And indeed, Unreal is no

24:02

exception. And

24:04

that works very well for the way they use

24:06

it, but it does mean it's a little bit hard to bring in,

24:09

you know, if you want to bring in third-party libraries

24:12

and have them integrate into the

24:14

whatever the reflection system is natively,

24:17

if they depend on magic macros and they depend on various

24:19

injections that you need to do, you either need to maintain

24:21

forks of these libraries, or you need to create wrappers

24:24

for everything that you bring in, which

24:27

is its own maintenance burden. So the

24:30

goal of the generator is for you to be able to just say,

24:32

okay, I

24:34

need you to put in this magic macro

24:36

as part of my class definition and have these

24:38

magic macros as being part of the various

24:40

functions and expose it to whatever system and

24:42

to do that quite simply. And that way you don't have

24:44

to maintain a bunch of wrapper classes, because that's essentially

24:47

what the code generator is doing for you. Oh,

24:49

and of course, the other thing you get too, is you

24:51

get names, which was

24:54

something that you can say, one of the questions is what do you

24:56

get out of using a generator that you don't

24:58

get

25:00

from just template metaprogramming

25:02

for this particular application is you

25:05

get names for everything. So

25:08

if you have like a row abstraction, say,

25:10

sorry, I should,

25:11

my mental model for how this works

25:13

is that it's a table with rows and columns, right?

25:16

So each column being each member of your struct

25:18

and each row being the implicit data

25:20

members that all share the same index.

25:23

If you want to address a row,

25:25

so let's say you have some say std

25:27

tuple or something, a structure

25:29

of references to each member of that row, you want

25:33

to be able to do dot ID, you don't

25:35

want to have to do dot get

25:36

angle bracket zero, you know,

25:40

we like names. Now you can

25:42

do that with templates, if you're willing to use

25:45

specialization macros or do specialization

25:47

tricks by like injecting something into your

25:49

type using like CRTP. But

25:52

always that ends up being a very tedious

25:55

to maintain thing that

25:56

is easy to accidentally break.

26:00

So by having all of the

26:02

everything just in a nice little config file that has

26:04

its own set of diagnostics associated with it, it

26:07

generates all the named members for you and you don't have

26:09

to

26:10

worry about maintaining any template

26:12

specialization soup. Okay.

26:14

Of course, it's still there. It's just that the

26:16

generator's doing it for you.

26:18

I want to, if you don't mind just make

26:20

sure I understand without the generator,

26:23

I have all of the tools

26:25

that let me have a bunch of these, this

26:28

table as you're describing it rows and columns.

26:32

And if I want to sort based

26:34

on the third element in

26:36

my row, the third column

26:38

in my row, it will do that

26:41

and keep everything nice and organized. It'll

26:43

sort all of the other columns

26:45

at the same time for me.

26:47

I can access the members by index.

26:50

I can do things with index, but if I want names,

26:52

then I want to use your generator. Correct. Okay.

26:55

Names and anything that might be integrated into a

26:58

reflection system. Yeah, that's

27:00

what you're getting with a generator.

27:01

So what kind of reflection then do you provide? Do

27:03

I get like compile time tables of the

27:06

names and members and that kind of thing? Yes, you

27:08

do. That's there's a compile time.

27:10

There's

27:11

a template variable for accessing the names of the columns,

27:14

the null terminated string for the names

27:16

of each column. There's an enum in each class

27:18

that has the indices of each column. And

27:20

then, through the config file, you can inject various

27:23

annotations into your class so that if you want to say, in Unreal

27:26

Engine, if you want to expose a class to the visual

27:29

programming, the blueprint system, you need to use

27:31

the UCLASS magic macro and you can trivially

27:33

inject that into your types without having to create

27:36

a bunch of wrapper classes for third party

27:38

libraries or whatever. Okay. Once

27:40

we have the generated

27:41

reflection thingy,

27:44

we can pretend like we have

27:46

our old school structs

27:48

with all the numbers in one thing.

27:51

And I can just do like a range for loop over them

27:53

and just pretend like my world is what

27:56

I always thought it had been.

27:58

Okay, how does the generator

28:01

know the

28:03

members of the struct? Do you have

28:06

a thing that actually scans the code

28:08

and somehow parses C++ class

28:10

declarations? Or do

28:13

you have to put annotations on your members with

28:15

magic macros? Or do you have to duplicate

28:17

the declaration in some kind of script

28:19

that you feed to the generator? What

28:22

is this magic? How does this work? It's

28:24

a TOML config file. So you describe it in the config file:

28:26

you

28:29

have an entry for each SOA struct you

28:31

want to create, and you run the

28:34

tool over the config file.
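The config-driven workflow just described might look something like this. This is a hypothetical sketch of the shape of such a file; the real soagen schema may use different table and key names:

```toml
# One entry per SoA struct you want generated, with named columns.
# Keys here ("structs", "variables", "name", "type") are illustrative only.
[structs.particles]
variables = [
    { name = "x",    type = "float" },
    { name = "y",    type = "float" },
    { name = "mass", type = "float" },
]
```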

28:35

OK. Is TOML

28:39

related to JSON or anything

28:41

else like YAML that we're familiar with? And

28:44

why did you pick that?

28:46

It's related insofar as it's in the

28:48

same class of config. I

28:50

picked it because, well,

28:52

OK. I don't think JSON

28:54

is very human friendly. And

28:57

I think YAML is

28:59

far too complex. It's too easy to shoot yourself

29:01

in the foot with YAML in what is ostensibly

29:03

a config file for that. So I

29:06

like TOML because it's hard to get wrong.

29:09

It kind of looks like old school

29:11

Windows INI files from what I'm looking at here.

29:13

Yep, that's a good way to describe it. It's INI

29:15

files, but with some sort of standard

29:17

applied to them. Standard.

29:20

Who needs a standard? So

29:22

these

29:25

reflection, I'm just curious, because before

29:28

we started recording, you and I were bantering a little bit about

29:30

constexpr all the things. Yes.

29:33

So I'm assuming that these

29:36

reflection structures

29:38

that you provide are like constexpr

29:41

static tables of things that people

29:43

can use to find out anything they want at compile time

29:46

about the types that they're working with.

29:48

Yes. Yep. Nice. All

29:50

right, so speaking of reflection,

29:53

I think actually this array-of-structs to

29:55

structure-of-arrays transformation often

29:57

comes up as a use case for proper reflection,

29:59

which

29:59

we obviously don't have in the language yet.

30:02

Like I think Bryce mentioned this

30:05

in his ACU keynote this year.

30:07

I saw a talk that I was

30:10

sitting in on at a conference this year,

30:14

where the speaker was

30:16

doing similar things like, Oh, let's make

30:18

like the data layout configurable, but

30:20

still have the same class interface as you always

30:22

would. And then he also very quickly

30:24

reached a point where he said like, we can only really do

30:26

this with reflection.

30:28

It's really cool that you can do this basically the way

30:30

you do it, but like, if you actually had proper

30:32

reflection in the language, how would you

30:34

do it then? Would that make things a

30:36

lot easier? What's your take on

30:38

reflection and why we need it?

30:40

Um, okay. So I don't

30:42

know that I really want to dare to say how I might

30:45

do it because I'm nowhere near enough of an expert to

30:47

speculate, uh, what it

30:49

might look like in the language, but I can tell you, uh, well, I've

30:52

already sort of touched on one thing that's very, very difficult

30:54

without it, and that's the names

30:57

aspect of it, right. You know, you

30:59

can do some complex template injection

31:01

nonsense, but it's just that: complex

31:04

nonsense. Uh, there

31:07

is one other thing that the generated code

31:10

from soagen can do, which I would love reflection

31:12

to be able to do, but

31:13

as I understand it, there's not really any elegant

31:15

way of doing it in vanilla C++, and that is if you

31:19

have, say, some

31:21

static interface, a class

31:24

of types that have all got a push_back method with

31:26

some varying number of arguments.

31:29

It might be the case that you want in some situations,

31:32

one or two of them at the end to have a sensible

31:34

default, and

31:36

otherwise there may not be any sensible defaults

31:39

and there's no way that I know of to

31:41

be able to have as part of the function definition, some

31:43

sort of conditional assignment operator

31:46

default thing as part of the actual function. I

31:48

can think of ways to do it with like, if

31:50

you, instead of taking all of the arguments individually, you

31:52

took them as a struct and you could have say

31:55

that struct

31:57

be built up of, like, a composite of base

31:59

classes that each represent a member, and maybe they

32:02

have an in-class

32:04

member initializer or maybe they don't depending on

32:06

some template specialization stuff. But again, just

32:08

me even trying to explain that off the top of my head, I've stressed

32:10

myself out.

32:12

That's the sort of thing that it would be nice just to be able

32:14

to have a cool little syntactic thing to say, okay,

32:17

maybe there'll be an equals whatever for

32:20

this function parameter or maybe there won't. And

32:22

to have that be syntactically valid and to maybe source

32:24

that from some consteval function

32:26

or something, who knows.

32:28

That's the example that comes to mind.

32:31

That's interesting, because

32:34

being able to enumerate members of a struct

32:36

is I think one of the first things

32:38

that any reflection proposal always says,

32:40

you know, we need to do this. But

32:43

yeah, being able to basically express

32:46

that it's kind of variable whether

32:48

or not a

32:50

function parameter has a default value

32:52

and you determine that somewhere else in code,

32:54

whether that's the case.

32:56

That's really cool. I don't

32:58

even know if any of the reflection proposals

33:00

that were discussed in the last few years

33:03

can actually do something like this.

33:06

I don't know. Maybe they can. Maybe they

33:08

cannot. But that's... I should say

33:10

that it's a testament to my familiarity

33:12

with the homebrew reflection systems in

33:14

video games that the idea of

33:17

iterating through all the members of a structure didn't even

33:19

occur to me because that's just like,

33:21

duh.

33:24

So I was immediately thinking of something a lot

33:26

more niche, but yeah, also that. I

33:29

think it's really interesting because some of these

33:31

reflection

33:32

papers I think were written from this more academic

33:35

point of view. It's like, okay, let's build this up from first

33:37

principles. We need to do this and this and this

33:39

and this. And then I think actually

33:41

it's really cool to come from the other end and say,

33:44

well, these are the problems we need to solve. We

33:46

need a library that can do this and we need a

33:48

piece of code that can do that. And

33:50

what do we need to get that? And I think coming

33:52

at it from the other end, I think that's a really cool approach

33:54

to figure out how reflection

33:57

should work. But yeah,

33:59

I think there's not actually that much going on

34:01

currently in the reflection study group. I think

34:03

the work there has kind of stalled. I think they

34:07

kind of ran out of funding to

34:09

keep working on these papers and

34:12

compiler forks or something. I don't know.

34:14

I don't think there's much going on there at the moment.

34:18

Unfortunate. Yeah.

34:21

I'm curious if you can speak at all to

34:24

how you've actually used soagen up

34:27

to this point and what kind of performance

34:29

benefit perhaps you've seen by moving something

34:31

from an array-of-structs to a

34:34

structure-of-arrays layout.

34:35

Yes. Okay. So, uh, I'll

34:37

give you a little bit of context about the nature of the

34:40

data I work with at my job, for instance,

34:42

which also depends pretty heavily on, not

34:44

this, but a thing very much like it. We

34:46

originally in an earlier version of things did

34:49

work with conventional array of structs. We've

34:51

got particles in our

34:53

physics simulation system and they have things

34:55

you might expect them to have: position, mass, et cetera. We've

34:59

also got constraints which act on those particles and

35:01

they have different properties depending on what

35:03

type of constraint they are, for instance, a bend

35:06

constraint

35:08

is sat between two triangles and it's related

35:10

to the bend angle between the two of them.

35:13

And so it'll have like the indices of the triangles and

35:15

the various weights,

35:16

the coefficients for the math,

35:19

basically. Uh, and that's just

35:21

all a whole bunch of floats. So when

35:23

we transitioned that from array of structures to structure of

35:25

arrays,

35:26

immediately we saw a speed-up

35:28

of about

35:30

30-ish percent. Oh, wow.

35:32

Which is alone a pretty good result. Like,

35:35

that's how well

35:37

suited our data and our sort

35:39

of access patterns were to this, that we got

35:41

such a marked speed up immediately, but then

35:43

we got a secondary speed up

35:45

because the whole reason we were investigating structure of

35:47

arrays to begin with was not the performance that it

35:49

grants that was

35:52

at the time it was a surprise, but intuitively

35:55

it makes sense. The reason we were actually doing

35:57

it was because we wanted to SIM define everything.

36:00

where previously we'd only used it in a few places

36:02

because the data wasn't structured in a way that made it easy

36:05

to do. If I might, for just a second, can

36:07

you clarify SIMD-ify for the

36:09

listener? Sure. I can't think off

36:11

the top of my head what SIMD is short for. Single

36:14

instruction multiple data? Right, exactly.

36:17

We were transitioning

36:20

from just basic scalar math to

36:24

using SIMD registers, using

36:26

SIMD compiler intrinsics to do

36:28

it. We didn't necessarily

36:31

do it all raw. We

36:33

used a library for that, but

36:36

we still needed to change the layout of all our

36:38

data to be able to do that.

36:42

We transitioned from AOS to SOA. We

36:44

got the 30% speed up. Then because

36:47

all our data was contiguous and appropriately

36:49

aligned, we could

36:51

swap out the math for SIMD math.

36:53

Then we went from

36:56

doing scalar math to vector math on eight

36:58

lanes or something. That was

37:01

now much, much, much faster.

37:03

Triple digit percentage

37:05

speed increases.

37:07

At most eight times faster

37:09

if you're now doing eight things at the same time you were previously

37:12

doing one. That's amazing. We

37:15

would not have been able to do that if not for

37:17

changing the layout of our data to SOA

37:19

to begin with.

37:21

It has compounding impacts

37:25

if you pair it with SIMD

37:28

and things like that. It just reminded me of Godbolt's

37:31

law, which isn't very

37:33

well known, I don't think, but Matt

37:35

Godbolt

37:36

from Compiler Explorer. Godbolt's

37:38

law: if any single optimization makes a routine

37:40

run two or more times faster, then you've

37:42

broken the code. You've just broken

37:45

Godbolt's law with your assertions

37:48

here. I'm

37:50

sorry Matt. That's interesting. I

37:53

had another point about SIMD.

37:55

I actually talked

37:57

about this with Matthias Kretz,

37:59

who's the author of std::simd, which we

38:01

are hopefully going to get in C++26.

38:04

And we were talking about this SOA, AOS,

38:06

kind of like the speedups that you can kind of get there.

38:09

And he said something interesting. He

38:11

said that often,

38:12

like you get the best speed up, not

38:15

from actually transforming AOS

38:17

to SOA, but

38:19

from doing an in-between

38:21

thing. So the fastest thing often is to have

38:23

an array of structs that

38:26

then inside have like arrays,

38:28

which are like SIMD register width sized.

38:31

So you have an array of structs of SIMD

38:33

register width sized arrays. Do you have

38:35

any opinion on that? Have you tried anything like this?

38:38

Does soagen even support this stuff? Yes.

38:40

Yes. And yes, I think in that order.

38:46

So, okay. Okay. So

38:48

yes, it

38:50

does support it. We can do all of those things. Uh,

38:53

I, I am familiar with that workflow. The main reason

38:55

you do that sort of thing is because you

38:57

need essentially the alignment

39:00

of the batches to

39:02

meet whatever your SIMD requirement is, so 16 bytes

39:04

or 32 bytes or whatever, and then of course

39:07

that matches the size of the thing so that you step forward

39:10

by that amount and it all stays aligned nicely

39:13

and you can do your load-aligned

39:15

and store-aligned, et cetera, as you move through your calculations,

39:19

By nesting it in a second level of

39:21

array like that, you sort of get that

39:24

just out of the compiler, just using a

39:26

strategically placed alignas, you get

39:28

that. But that does of course mean that you now have

39:31

two layers of data. You need two little

39:33

square brackets everywhere. If you want to address

39:35

those members individually and you have to do like,

39:38

you know, if you've got a SIMD register width

39:40

of eight and you want to access element 17,

39:43

you then have to do like, you know,

39:45

blah divided by eight blah

39:48

mod eight, which is

39:49

pretty annoying. And you can have abstractions for that,

39:51

but I think humans tend to prefer data

39:54

being flat. So the way that I've addressed

39:56

that problem is to have it so that if

39:58

you specify over-alignment for a column in

40:00

one of the tables, you'll get an aligned

40:02

stride, which is a static constexpr

40:04

size_t, just as a member of the class,

40:07

the description for each column, which is just calculated

40:09

based on the alignment that you've specified. And

40:11

that says that if you step through this collection

40:14

by this amount at a time,

40:16

everything stays nice and perfectly aligned.

40:18

So you can just have one level of floats, and

40:20

you can step through it by that value, and you

40:22

get the same benefit without having two sets of square

40:24

brackets everywhere. Cool. Right. So

40:27

we'll be back in just one second, but I would like

40:29

to mention a few words from our sponsor.

40:32

This episode is supported by PVS

40:34

Studio.

40:35

PVS Studio is a static code analyzer

40:37

created to detect errors and potential

40:39

vulnerabilities in C, C++,

40:42

C#, and Java code.

40:44

Podcast listeners can get a one month

40:46

trial of the analyzer with the CPPCAST23

40:50

promo code.

40:51

Besides, the PVS Studio team regularly

40:53

writes articles that are entertaining and educational

40:56

at the same time on their website.

40:57

For example, they've recently published a mini book called 60

41:00

terrible tips for a C++ developer. You

41:03

can find links to the book and to the trial license

41:05

in the notes for this episode.

41:08

And now we're back to Mark and Jason. Hello

41:10

again, both of you. So I

41:12

actually have a couple more soagen questions.

41:14

So what C++

41:17

standards and compilers does soagen

41:19

support, and do you support CMake?

41:22

C++17, the big

41:24

three and no. All

41:27

right. Not because I don't

41:30

want to support CMake. I'm just a CMake novice.

41:32

I

41:33

tend to choose Meson build as

41:35

a preference, and I've never gotten around to

41:37

learning enough about CMake other than learning

41:40

what I need to know to fix a problem in someone else's project,

41:43

then probably forgetting it immediately afterwards. So

41:46

I would not have any objection to someone

41:48

adding CMake support, but I currently haven't done it myself.

41:51

Right. So you're open to contributions and pull

41:53

requests and things like that. Absolutely. Yep.

41:55

Right. Amazing. So where can we

41:57

find your code? Is it on GitHub?

41:59

It

41:59

is: on GitHub, marzer

42:02

slash soagen. S-O-A-G-E-N. All

42:04

right, we're going to post the link to that repository

42:06

in the show notes. And what

42:09

license do you use? Can people just

42:11

go and use that code for their own

42:13

projects? Yep,

42:14

MIT. MIT, amazing. All

42:17

right. And do you have any kind

42:19

of roadmap? What do you want to do next with this

42:21

library? Is there going to be some kind of 1.0 release at some

42:23

point?

42:25

There is a roadmap. It's actually currently

42:27

the only issue on the repository

42:29

is my own notes as a roadmap.

42:32

So yeah, I suppose eventually

42:35

there'll be a 1.0. So I mentioned

42:37

earlier that predominantly the features

42:39

you get by using the generator are

42:42

the reflection stuff and the default

42:44

arguments and names. Apart

42:46

from those, which I don't think I'll ever be able to close

42:49

that gap

42:50

in the absence of C++ having

42:52

actual reflection, there are some other things

42:54

currently that it does, just some class interface

42:56

stuff.

42:57

But I would like to bring the two

43:00

to parity. So that would be, I guess,

43:02

my 1.0. So you

43:04

essentially get all of the features you possibly could without

43:07

using the generator. And

43:09

it's not currently there. It's almost there. Silly

43:13

question. What language is the generator written

43:15

in?

43:16

Python. I had a suspicion it might be.

43:18

I didn't actually look on the project. So

43:20

from the perspective of do you support

43:22

CMake, the rest of the library I'm assuming is

43:25

header only, right? Yes, correct.

43:27

So yeah, so supporting CMake should be

43:29

relatively easy to just have a

43:32

custom build step that calls your Python scripts

43:34

that generates the thing and then have other things

43:36

rely on the output of that. So for our

43:38

listeners' standpoint, using your library from CMake shouldn't

43:40

be difficult. All

43:43

right, so you actually have a few other

43:45

libraries on your

43:46

GitHub. So you

43:48

mentioned that the language

43:50

that the user kind of specifies their struct

43:52

layout in, in soagen, is TOML.

43:55

I also noticed you have a library called toml++

43:58

on your repository. It has 1,200 stars on GitHub.

44:02

So that's not

44:05

a low number. So it seems like it's a popular

44:07

library. What's that one about? Yeah,

44:09

it's a TOML parsing and serializing

44:12

library in C++, I guess. OK,

44:16

so a bit of brief history as to

44:18

why this library exists. I needed to use

44:20

TOML for a personal project a few years

44:22

ago.

44:23

There were two options in C++. One

44:26

was abandonware.

44:28

The author hadn't touched it in three years, and it didn't support

44:30

the current version of TOML.

44:33

And the other one was

44:35

much more actively maintained, but

44:38

it didn't really suit the programming model that I wanted to use

44:40

with it. It was still being developed at

44:42

the time, too. So I was still missing a few features. And

44:44

I thought, OK, how hard could this be? Which

44:46

is, of course, famous last words.

44:48

Because writing a parser,

44:51

if you've never done that before, is like, oh, OK, actually,

44:53

this is kind of complex. And then, oh, OK,

44:55

now I've published an open source

44:58

library that's hilariously gotten popular, which

45:00

I wasn't expecting. Now I've got to maintain

45:02

it. Oh, crap. So

45:04

how's that going? It's going OK.

45:07

Admittedly, my enthusiasm for the project is

45:10

a bit lower than what it was now that it's relatively

45:12

mature. But I come back to it

45:14

occasionally and tinker.

45:17

It's on the back burner in terms of new features

45:19

and stuff. But yeah,

45:21

it's still maintained in that I fix bugs.

45:24

I've been pondering an episode of C++ Weekly

45:26

titled something like, how

45:28

to responsibly abandon your open

45:31

source project. Yeah,

45:34

I can relate to that. Right.

45:38

Are there any other projects that you're working on that you want to

45:40

share with us? No, I had intended to.

45:43

I was building a ray tracer to

45:45

make use of soagen as part of my,

45:47

hey,

45:48

here's this thing. And I'm using a ray tracer. But

45:50

I realized that I could either do one or the other and not both.

45:53

So I might release that

45:55

at some point. But nothing that's maybe

45:57

worthy of discussion on the podcast.

46:00

All right. So, you

46:02

also recently attended your first C++

46:04

committee meeting in Varna. That

46:06

was in June, if I remember correctly.

46:09

We were both there.

46:10

And so you joined the Finnish standardization

46:14

body, so you're now an official member of

46:16

the committee, as far as I know.

46:17

So what was that like,

46:20

going to your first committee meeting? Are you kind

46:22

of interested in the progress of standardization? Do

46:24

you want to do this more?

46:27

Yes. Okay. So, what

46:29

was it like? It was good. I went in with about

46:31

a million questions, and I came out

46:34

with not really any questions anymore.

46:37

It was a very good learning experience. I don't feel like I

46:39

went there with any intention of making

46:41

waves. I just wanted to learn how

46:44

everything worked. How the sausage is

46:46

made, as they say. Exactly. How

46:48

the sausage is made, precisely. So I feel like I got a good overview

46:51

about that. All of the misconceptions I had

46:53

were dispelled, and all of the questions I

46:55

had were answered, so that was good. And would I like

46:57

to participate further? Yeah. I've

47:00

got

47:00

an idea for a relatively simple change, I think, that

47:02

I'm exploring. Good luck. I know,

47:04

I know, famous last words. I've

47:06

got two ideas. One's, I think, relatively

47:08

simple, and the other is not. And it would be interesting to find

47:11

if that guess holds up, if I were to write them both up.

47:14

So it might turn out to be the other way around. Are you going

47:17

to participate in the reflection

47:20

work? Because it sounds like you should. Yeah,

47:22

but if it's in a situation

47:25

where it sort of needs people to pick it up and take

47:27

the lead on it, I don't want to put my hand up for

47:29

that. I'd

47:32

participate in discussions, but I don't necessarily want to

47:34

be the driving

47:36

force behind them, we'll say. I see. Yeah.

47:39

I think I can understand

47:42

why it's taken so long and how challenging

47:44

it actually is. Because when you really drill down into

47:47

not only what reflection is, but how do you express it programmatically,

47:50

syntactically?

47:53

This is a computer science, this

47:55

is a hardcore computer science thing. And I'm very much not

47:58

that. So I'll participate in discussions. but

48:00

I don't necessarily think I want to take

48:02

any sort of lead in designing that sort of thing.

48:05

Yeah, I think on top of designing, you

48:07

also have to implement it in a compiler. So you kind of

48:09

have to

48:10

be a compiler engineer or have a compiler

48:12

engineer working with you as well.

48:14

And so, yeah, I think it's an enormous

48:17

amount of work to make progress on this. And

48:20

yeah, I do get that, you know,

48:22

it's very time consuming, expensive

48:25

to do this work. It needs

48:27

experienced people. I'm very sorry that it's kind of

48:29

stalled. I hope to see progress there.

48:31

And I think one thing that way you could, I think,

48:33

contribute very well is to just provide

48:35

real world experience of like use cases. Like this

48:38

is

48:39

what we actually need reflection for. These are like actual

48:41

things that pop up in my day to day work. Can

48:43

this or that syntax or proposal actually

48:45

do that for us? And if it doesn't, you know, maybe

48:48

you're missing something here. Right. It

48:51

seems skeptical. I,

48:55

you know, having only recently

48:57

started being involved in the whole proceedings, I haven't really

49:00

got a good sense for,

49:01

you know, how much time it would

49:04

actually consume if I were to be actively

49:07

involved on particular proposals or whatever. So

49:09

I'm

49:11

hesitant to fully dive

49:14

deep into anything just yet. Fair

49:16

enough. Fair enough. I think I spent like

49:18

several committee meetings just being a tourist

49:20

before I kind of wrote my first little paper.

49:23

Yeah.

49:23

And then I kind of got sucked

49:25

in somehow because the little paper turned out to

49:28

be way more complicated than I thought.

49:29

But yeah, I don't know. It can go either

49:31

way. Right.

49:33

Well, as long as we're talking about it, when

49:35

is the next standards meeting in case any

49:38

of the listeners are interested in trying to

49:40

attend to themselves? So the next standards

49:42

meeting is actually in November. And

49:44

in Kona, Hawaii, in the US, taking

49:47

place from the 6th

49:49

to the 11th of November.

49:51

So, yeah, we had we had the meeting there last year,

49:53

also in November. And we're going to be in Kona

49:56

again. I'm actually this time not going

49:58

to be in Kona in

49:59

person myself, this is going to be the first committee

50:02

meeting that I'm going to miss

50:04

since I joined, I'm going to attend it virtually.

50:07

Is it so you can dial in? Uh,

50:10

yes. So since COVID,

50:12

everything's hybrid and you can dial

50:14

in. The only thing you have to deal with is

50:16

a pretty brutal 12 hour time difference

50:18

between where I live and where Kona is. But

50:21

if you find a way to deal with that, then yes, if

50:24

you're a member of the committee, you can dial in and participate

50:26

in those discussions. Okay. So I think that's

50:28

something that

50:30

is a lot better than what it used to be before COVID

50:32

because before COVID it was like,

50:34

if you can't afford to go there,

50:36

then basically you're

50:38

out, which is not a very inclusive

50:40

way of standardizing

50:42

a language. So I'm very, very happy that we improved

50:44

on that one. That's cool. All right. So then

50:46

I think, uh, we're nearing the end

50:49

of our episode here. Um, so

50:52

we should probably start wrapping up, but, um,

50:55

yeah. Um, Mark, is there anything else you want to

50:57

tell us before we do that? Is there anywhere people can reach

50:59

you if they want to contribute

51:02

to soagen or just get in touch,

51:04

ask your questions, talk to you about

51:06

reflection or whatever else? Yeah,

51:09

probably the easiest starting point is just GitHub,

51:12

github.com forward slash marzer. Um, my

51:14

repositories all have contact information for

51:16

me on them. So you can use that as a jumping

51:18

off point and go from there. I'm pretty active

51:20

on

51:21

Twitter or the artist formerly known as

51:23

Twitter. Uh,

51:26

and you know, discord and

51:28

various things. So I'm reachable.

51:31

Just use GitHub as a starting point and go from there.

51:33

All right.

51:34

Well, then thank you so much for being

51:36

our guest today, Mark. It was a great discussion. Thank

51:38

you so much. And thank you, Jason, for being

51:40

my co-host today. It was,

51:42

it was an honor and it was a lot of fun to have you back on

51:44

the show. And I hope this is not going to be the last time

51:46

and we're going to have you back at some point in the future again.

51:49

Absolutely. Anytime, let me know.

51:51

Thanks so much for listening in as we chat about C

51:53

plus plus. We'd love to hear what you think

51:55

of the podcast. Please let us know if we're

51:58

discussing the stuff you're interested in. If

52:00

you have a suggestion for a guest or topic, we'd

52:02

love to hear about that too. You can email

52:05

all your thoughts to feedback at cppcast.com.

52:08

We'd also appreciate it if you can follow

52:10

CppCast on Twitter or Mastodon. You

52:12

can also follow me and Phil individually

52:14

on Twitter or Mastodon. All

52:16

those links, as well as the show notes, can

52:18

be found on the podcast website at

52:20

cppcast.com.

52:24

The theme music for this episode was provided by

52:26

podcastthemes.com.
