A Conversation with Nvidia's Danny Shapiro

Released Wednesday, 28th April 2021

Episode Transcript


Sam Abuelsamid 0:00
I'm Sam Abuelsamid, and this is episode 193 of Wheel Bearings. On this episode, I have a conversation with Danny Shapiro, the Senior Director of Automotive for Nvidia. Did you know you can support Wheel Bearings directly? Head to patreon.com slash Wheel Bearings Media and you can become a patron today. Your contributions will help fund the platforms and tools we use to bring the podcast to you, and exclusives and improvements are already on the way thanks to your generosity. So if you want to be part of an automotive podcast like no other, head to patreon.com slash Wheel Bearings Media.

So I'm joined today by Danny Shapiro, the Senior Director of Automotive at Nvidia, a company that most people recognize, obviously, for the video cards that they produce for computers. But Nvidia is a big company that does so much more than just, you know, driving your Cyberpunk 2077. Danny, thanks for joining me today.

Danny Shapiro 0:55
Sam, it's great to be here. Appreciate it.

Sam Abuelsamid 0:58
So let's start off with a bit of an overview. As I said, Nvidia is best known to the general public as a purveyor of really high performance GPUs for computers, but you do so much more than that. Nvidia is involved in a wide range of things. Give us a little bit of an overview of what other things Nvidia is involved in, and then we'll dive into the automotive sector in particular.

Danny Shapiro 1:27
That's a really good question. We started out, as you mentioned, as a 3D graphics company creating video cards for gamers, and that was 28 years ago. So a lot has changed in that time. We're essentially a company that makes accelerated computing solutions. That's a kind of fancy phrase, but it means we just make things go faster. Anyone who's doing work with a computer, whether it's a mobile device, whether it's their laptop or a PC or a data center computer, we accelerate the process of achieving their goals, letting them do their life's work. And this spans across all industries. I mean, every time you talk to Siri, or say "OK, Google" into your Android device, that goes to the cloud, and on the back end is a data center full of GPUs that are using artificial intelligence to figure out what you're saying and come back with the right response. So wherever there's computing, you'll often find Nvidia at the core of that. In health care, we're able to have our processors in medical devices in the hospital, helping doctors to analyze X-rays or MRIs or CAT scans, but also in the cloud, where they're trained to be able to help researchers develop new drugs and things like that. So this process of using Nvidia computing for artificial intelligence applies across the spectrum of industries, from healthcare to energy to finance to manufacturing. There's really no industry that's untouched by this new wave of computing.

Sam Abuelsamid 3:09
Yeah, it's interesting. You mentioned manufacturing. This week happens to be the week of Nvidia's GTC, the GPU Technology Conference, and one of the things that was highlighted during Jensen Huang's keynote, he's the CEO of Nvidia for those who don't know, was work that Nvidia is doing with BMW on their plant in Germany, where they're building the new iX. Can you talk a little bit about that?

Danny Shapiro 3:35
This is a really interesting area that's emerging. So much of Nvidia's core technology is in graphics; graphics is a big part of what we do.
And so whether it's on the gaming side, or whether it's professional visualization, which is what, you know, automotive designers would use to create their new vehicles, it's all done in computer graphics first, with 3D models. We do photorealistic rendering, we can do virtual simulations. So a virtual wind tunnel test or a virtual crash test will be done with Nvidia technology. But on the manufacturing side, we've worked with BMW and they've created what's called a digital twin: a replica of the physical factory, the whole building, the robots, the people. It's all modeled in a virtual world before it's actually built and implemented. And so that gives BMW the ability to experiment with how they want to lay it out, how they can optimize the workflows, how they can position different parts of the line to ensure that the workers there do not suffer from repetitive stress injuries. They can do all kinds of streamlining just by dragging and dropping and moving things around and then simulating that, versus actually having to build something, test it, realize you made a mistake, and then have to tear it out and rebuild it. So this concept of a digital twin is very powerful. We introduced something called Omniverse, which is the ability to have people collaborate in a virtual space. So you could have engineers and designers in locations all over the world, and this, of course, was really key during COVID this last year, when everyone was working from home. These different people could collaborate in this virtual space, work to design a factory, and then implement it in the real world.

Sam Abuelsamid 5:22
Yeah, Omniverse is a fascinating thing, because one of the key things about Omniverse, I think, is this idea of having this multi-GPU platform, and particularly now, with the introduction a couple of years ago of your real-time ray tracing technology, and how that enhances the visuals that are created. I mean, in a video game, it makes it look so much more realistic. But for something like simulation for automated driving, that realism is really crucial for the fidelity. Talk a bit about how Omniverse is being used in your DRIVE Sim platform.

Danny Shapiro 6:06
Sure. You raise a really good point. Ray tracing, for those that don't know, is basically having a computer simulate how light behaves in the real world. So you have rays of light that would bounce off of a glass surface and reflect or refract off different kinds of materials, and then eventually it comes back to your eye, and that's how you see. What we're doing now is physically modeling how light behaves in the real world, with shadows, reflections, refractions. And when that technology was first developed, we worked with companies like ILM, Industrial Light & Magic, Pixar, Disney. All the visual effects companies were really early adopters of our technology in this professional visualization arena. And so now we've brought that into things like manufacturing, where we can create photorealistic models of vehicles. In fact, in most TV car commercials that you watch, it's not the real car driving; it's usually a graphic simulation of that vehicle. It's much more cost effective, especially if it's a new vehicle.

Sam Abuelsamid 7:09
So you're saying that advertisers are showing us stuff that doesn't exist.

Danny Shapiro 7:13
Well, it will exist, or it just doesn't exist in that location, right?
So you might have a beautiful car that's driving down this curvy mountain road, and that mountain road might not exist. Or if it does exist, they might not have had their one-of-a-kind prototype in that location. So computer graphics and ray tracing are used to create amazing effects. And think about movies, right? So many of these films that look photoreal, everything's shot on a green screen. So this is where Omniverse comes in. It allows people to collaborate and build these photorealistic scenes, incorporate 3D models from all different sources, and work with different software packages. So it's this platform that integrates. We've extended that with DRIVE Sim, and this is our autonomous vehicle simulator. It's an open platform, so we work with all kinds of modeling companies, companies that are developing sensors for autonomous vehicles, and we simulate what those sensors can perceive. We also work with companies doing traffic models and environment models, and we pull it all together. So essentially, what we're doing is creating a way to test and validate an autonomous vehicle in a simulated environment before we put it on the road. The hardware and software that's sitting in the data center in the simulator is the exact same hardware and software that will actually go in a real vehicle. But this way, we can create all kinds of dangerous and hazardous scenarios and test our AV system before putting it on the road. And so we can experiment with different traffic patterns, different lighting conditions, different weather conditions, and we can repeat this over and over and fine-tune all the algorithms, make sure they're much safer than any human possibly could be, before we deploy it.

Sam Abuelsamid 9:05
Yeah, that's critically important, because out in the real world, when you're driving, there's so much variability. Like you said, everything from the weather to lighting conditions, depending on the time of day, whether there's a little bit of mist in the air, or fog, or snow. And of course, every vehicle looks different; they've got different surface finishes on them, different levels of reflectivity. And when you're relying on sensors to see the world... I mean, the human brain is remarkable in its ability to adapt and understand the world around it, but it's still a really hard problem for software to interpret the world in the same way that we see it. And I think that idea of really having that fundamental understanding of the way light works, because a lot of the sensors we use, whether you're talking about cameras or LIDAR, are relying on light reflecting off of different surfaces and different surface finishes, and being able to simulate all that is, I think, critically important to getting a simulation that is actually going to be representative of what's happening on the road.

Danny Shapiro 10:17
You're absolutely right. And this is a continuous process. We've developed what we call an end-to-end solution, and it starts in the cloud, where deep neural networks are trained on our GPUs using data. And a DNN could just be recognizing lanes, or it could be recognizing pedestrians, or it could be reading street signs. And so there are many different DNNs that are running simultaneously.
In an autonomous vehicle, there's a massive amount of data needed to train. We're not writing code explicitly to read a stop sign; we feed the system thousands and thousands of images of stop signs, maybe millions of pictures of stop signs, in all different weather conditions and lighting conditions. Sometimes there might be a tree, you know, obstructing part of it. But that teaches the system how to recognize any stop sign. So that training is done, and sometimes we use synthetic data combined with real data, so the simulation can play a role in training. Then once we've developed our algorithms and we're fine-tuning them, we test them in the simulator to make sure that when we give it all kinds of scenes it's never seen before, it can still detect everything, like those stop signs, the lanes, the people, the cars. And then if there are issues, we're able to go back and correct, get more data, fine-tune the algorithm, and continue to iterate. Then when it gets to the point that it's safe to put on the road, we take that exact copy of the software and begin our safety tests with drivers and co-pilots out in the real world.

Sam Abuelsamid 11:50
So for those that are not familiar with some of the AI, machine learning, and neural network technology, can you give us a quick, high-level overview of what those terms mean?

Danny Shapiro 12:03
Sure. Well, there's been a method in the past of writing software, which is sort of a whole series of instructions to get something done, and I'll go back to the example of the stop sign. If we wanted to write a program in the past to detect a stop sign, you'd have the front-facing camera of a car generating 30 images every second. So let's just take one of those images: it's just a bunch of pixels, right? All different color values. And you'd see the scene with the trees and the road and maybe other stuff. And so the computer and the software would have to analyze that entire image, looking for what would be a stop sign. So there would be all kinds of different parts of code written to recognize groups of red, detect edges, determine the shape (it's an octagon), discern, okay, there's white and red, and pick out the letters: oh, it says STOP. So there's a huge amount of code that somebody would manually have to write. And then you'd have to adapt it for all these different lighting conditions and weather conditions, and what if only part of the stop sign is visible? You'd have to figure all that out. Instead, with deep learning, what we do is kind of leverage the structure of the human brain, which is the model for a lot of this artificial intelligence, and instead have many different layers of processing. At the core level, when we feed it an image, it's looking automatically for edges, then determining where those edges are to determine shapes, and understanding the color. But we don't actually write code to do this; we feed it information. We'll give the neural network a lot of images of stop signs and tell it these are stop signs, so it does the processing to understand that it's a stop sign. And then we may give it a lot of other images and say these are not stop signs, so it understands what's a stop sign and what is not. And just like a human learns over time and through experience, the deep neural network becomes more and more accurate, more knowledgeable, based on the information it's given. So the data writes the software instead of a human.
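To make the "data writes the software" idea concrete, here is a minimal sketch of how a stop-sign classifier might be trained with a deep learning framework. It is illustrative only, not Nvidia's actual pipeline; the dataset folder, model choice, and hyperparameters are assumptions.

```python
# Minimal sketch: no hand-written rules for "red" or "octagon", just labeled images.
# The folder layout (sign_images/stop, sign_images/not_stop) and settings are illustrative.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

train_set = datasets.ImageFolder("sign_images", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(num_classes=2)               # two classes: stop sign / not a stop sign
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)       # compare predictions to the labels
        loss.backward()                               # the labeled data adjusts the weights
        optimizer.step()
```

The trained network would then be validated against scenes it has never seen, in simulation and on the road, as described above.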
The end result is that we can have many of these DNNs running simultaneously, and they'll have perception that far exceeds the human level, with accuracy that's incredible. It requires a lot of processing, though. So that's why the need for high performance computing is so critical now in the car. First, we would have trained it in a data center, but that data center is not present while you're driving. You just have whatever computing resource is in the car at the time it's driving. So it's processing those video camera images in 1/30 of a second, and it needs to identify that stop sign as well as everything else: the lines, the other cars, the pedestrians. So a massive amount of computing horsepower is now what's critical in these vehicles.

Sam Abuelsamid 15:05
So what is it in particular about the GPU architecture, and the way Nvidia has done that, that makes it so well suited to doing these kinds of calculations?

Danny Shapiro 15:19
That's a really good question. And yeah, there are many different types of computing in reality, just like there are all different types of algorithms. Traditionally, people were using CPUs, the central processing unit; you have that in your laptop or your PC. And the CPU tends to be what's called a serial processor: it does things in sequence, in a single order. Now maybe you have a dual core or a quad core, which means you have two or four sort of computing lanes where you can have data flowing through, and you compute on it in sequence. The GPU is a parallel processor. So instead of doing things in a single-line sequence, we do a lot of computations simultaneously. And if you think about it, your screen has millions of pixels, and all those pixels need to be updated at the same time. That's why the GPU was initially invented: to be able to parallel process all the data to drive graphics. We've, of course, modified it now for artificial intelligence, which has the same massive computing requirements. And so instead of two or four lanes, like you would have with a dual or quad core CPU, the GPU has thousands of lanes. So essentially, now you have a freeway with thousands of lanes, processing all that data traffic. We can achieve so much more through parallel processing than you ever could with a CPU. And that's why, because artificial intelligence requires so much computing in such a fraction of a second, everyone has moved to using Nvidia technology.

Sam Abuelsamid 16:56
It's an interesting analogy, comparing it to a highway with thousands of lanes. Because just as when we add lanes to a roadway, it automatically seems to fill up with traffic until congestion gets back to the levels where it was, the same thing, I think, has been happening with the chips that you've been developing for the automotive space. You started off, I think about 2015, with the original DRIVE PX platform for this development, and every couple of years you've been coming up with new generations that are getting exponentially more powerful. Talk a little bit about the evolution of what you've been doing in the automotive sector for automated driving development.

Danny Shapiro 17:39
That's a really good observation. And I think it's a universal truth about computing: there's never enough, right? The software increases in complexity to match what the hardware can deliver. Just go back to the original iPhone, a breakthrough product; it did things that no other phone ever did.
But now, compared to the new generation of iPhone, it's just a world of difference. In each generation it gets better in terms of the processing, and then the software gets written to take advantage of that processor. We're absolutely seeing that same trend in the automotive space. We started by bringing our graphics into vehicles a long time ago, and so the infotainment screens and the instrument clusters that used to be very simple, low-resolution displays are now beautiful, rich graphics. You look at the new EQS, which was just unveiled from Mercedes; it features that pillar-to-pillar dashboard that spans the entire front of the car. Nvidia drives that: there are three huge displays integrated into a single piece of glass. So there's a massive amount of computing just in the dash there. But what we found is, as we move into autonomous vehicles, there's so much more computing required, because now there are many cameras, radar, LIDAR, all generating a massive amount of data that has to be processed in real time. So we're always trying to keep up with the computing needs and deliver what's state of the art. To go back over time, our first DRIVE PX, as you mentioned, delivered about one TOPS, which is one trillion operations per second. So it's fast, and at the time it was the fastest thing out there. In our next generation we were able to shrink that down and put it on a single chip, and we did 30 TOPS with our DRIVE Xavier. And again, that was just amazing. DRIVE Orin, which is now coming out, and which we just announced with Volvo in addition to Mercedes and many others, NIO, Li Auto, Xpeng, a lot of companies in China that are developing autonomous vehicles, that single processor has 254 TOPS, so 254 trillion operations per second. It's just a mind-boggling amount of computation. And finally, at GTC we just announced this week our next generation. We're letting people know it's on our roadmap; it's still many years out, it'll really be targeting 2025 vehicles, but that is going to have 1,000 TOPS. So in each generation we're able to get these huge improvements. You know, we did like a 25 or so times boost at one point, then an eight times boost, and now this is a four times boost. The computing resources in our lineup are incredible.

Sam Abuelsamid 20:34
And just as a point of reference there, if we go back to 2007 and the DARPA Grand Challenge, the DARPA Urban Challenge, the Chevy Tahoe that the Carnegie Mellon team built up that year and won the DARPA challenge with, that one had 10 blade servers in the back with x86 CPUs that combined to give about 1.8 billion operations per second. So you're talking many tens of thousands of times the performance of that pioneering autonomous vehicle coming to cars in production next year. Or even, you know, the Xavier-powered systems that are out there now, I think that's about 15,000 times the performance of what was in that vehicle. So it's amazing to see how that's evolved.

Danny Shapiro 21:26
What I would comment on, though, is that people may say, well, why do you need so much? And for us, the number one priority is safety. We want to ensure that what is out on the road is safe, is much, much safer than a human; there are so many accidents, injuries, and fatalities that are caused by human error. And so our goal here is to create safer roads.
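As a quick check on the figures traded above, a few lines of arithmetic reproduce the generation-over-generation jumps and the comparison with the 2007 DARPA vehicle. The numbers are the ones quoted in the conversation, rounded; actual product specifications may differ.

```python
# Throughput figures as quoted in the conversation, in operations per second.
darpa_tahoe_2007 = 1.8e9      # ~1.8 billion ops/s from ten blade servers
drive_px         = 1e12       # ~1 TOPS
drive_xavier     = 30e12      # ~30 TOPS
drive_orin       = 254e12     # ~254 TOPS
next_gen_atlan   = 1000e12    # ~1,000 TOPS, targeting 2025 vehicles

print(f"PX to Xavier:   {drive_xavier / drive_px:.0f}x")                  # ~30x ("25 or so")
print(f"Xavier to Orin: {drive_orin / drive_xavier:.1f}x")                # ~8.5x
print(f"Orin to Atlan:  {next_gen_atlan / drive_orin:.1f}x")              # ~3.9x
print(f"Xavier vs. 2007 Tahoe: {drive_xavier / darpa_tahoe_2007:,.0f}x")  # ~16,700x, roughly the 15,000x cited
print(f"Orin vs. 2007 Tahoe:   {drive_orin / darpa_tahoe_2007:,.0f}x")    # ~141,000x
```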
But what happens is, you know, part of our safety systems include diversity and redundancy. So there are backups, or there are overlaps. We have cameras that are doing processing, there's radar, there's LIDAR in some vehicles, and they're cross-referencing each other; it's like a double check. And if something were to go wrong, in many cases there are all kinds of backup systems. If we're trying to take the driver out of the loop and something happens, we need to have a backup system to continue to operate the vehicle, be able to safely operate it, or have it pull over, things like that. So the diversity of the systems and the redundancy also add to the complexity and the requirement for more computing inside the car.

Sam Abuelsamid 22:38
Right. And one of the fundamental changes that happens with vehicles as we move into this era of higher level automation is this shift in mindset. When I was working on stability control systems and anti-lock brakes, we would design failsafe systems that would detect a failure within the system and alert the driver that, okay, this system's no longer working. But now, when you're talking about a vehicle that may not have a driver, may not have a human on board at all, if it's a delivery vehicle, for example, failsafe isn't good enough. It's not just a matter of detecting the failure; it has to be able to detect it and then continue to operate, even if in a reduced mode, so you can fail operational. Which is why you need that redundancy and diversity, and doing things in different ways.

Danny Shapiro 23:27
That is so true. And I think what we see is that it might not be a full duplicate system, but it is a system that, if something were to happen, a sensor fails perhaps, or there's some other issue, the backup system doesn't have to do everything, but it needs to be able to navigate the vehicle, pulling over safely, and summoning assistance. You also see this in teleoperation, which is kind of remote control of a vehicle. So an autonomous vehicle may find that it gets into a situation where there might be a delivery truck that's blocking traffic, and it may need some kind of human intervention. And so the car would call back to the command center, and a human could then basically teleport into that vehicle and see, through the sensors on that vehicle, that this delivery truck is blocking the way: let me turn around, or let me maybe cross the double yellow to pass it, or do some other things that may not be built into the system. And so then we can navigate around that issue and send the autonomous vehicle back on its way. So I think we'll see a lot of teleoperation; there are a lot of startups doing really interesting work in that space. And the integration of the cloud to the vehicle will become more and more a part of this solution. We see the data center managing a fleet of vehicles, and each manufacturer basically having their own operations. It's not just about a single car, but rather this huge opportunity of maintaining fleets. And to that end, there are going to be software updates, of course. So the ability to monitor what's happening in their cars, to collect data from their cars to improve the software, and then go back over the air and do software updates: this is the whole notion of a software-defined car. It's not fixed-function elements in the car, but rather a full computer at the core with a car built around it. And that car gets better and better over time.
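To illustrate the failsafe-versus-fail-operational distinction Sam and Danny discuss above, here is a deliberately simplified sketch of a fallback supervisor. It is a toy example under assumed states and checks, not Nvidia's (or anyone's) production safety architecture.

```python
# Toy sketch: degrade gracefully instead of just alerting a driver who may not be there.
from enum import Enum, auto

class Mode(Enum):
    NOMINAL = auto()                # full operation, sensors cross-checking each other
    DEGRADED = auto()               # something failed; keep operating in a reduced mode
    MINIMAL_RISK_MANEUVER = auto()  # pull over safely and summon assistance

def healthy_paths(camera_ok: bool, radar_ok: bool, lidar_ok: bool) -> int:
    """Diversity and redundancy: count independent sensing paths that are still healthy."""
    return sum([camera_ok, radar_ok, lidar_ok])

def next_mode(mode: Mode, camera_ok: bool, radar_ok: bool, lidar_ok: bool) -> Mode:
    if mode is Mode.MINIMAL_RISK_MANEUVER:
        return mode                                   # once pulling over, finish the maneuver
    paths = healthy_paths(camera_ok, radar_ok, lidar_ok)
    if paths >= 2:
        return Mode.NOMINAL                           # enough redundancy left to cross-check
    if paths == 1:
        return Mode.DEGRADED                          # fail operational: reduced capability
    return Mode.MINIMAL_RISK_MANEUVER                 # backup path: stop safely, call for help

# A LIDAR dropout leaves two healthy paths, so the vehicle keeps operating normally;
# losing two sensing paths forces a reduced mode rather than just a warning light.
print(next_mode(Mode.NOMINAL, camera_ok=True, radar_ok=True,  lidar_ok=False))  # Mode.NOMINAL
print(next_mode(Mode.NOMINAL, camera_ok=True, radar_ok=False, lidar_ok=False))  # Mode.DEGRADED
```

A real system would layer in many more checks and, as Danny notes, could also hand the situation to a remote teleoperator rather than only pulling over.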
Sam Abuelsamid 25:30
Yeah, that's an area that Tesla pioneered, in part with help from Nvidia; they used a lot of Nvidia processors in those early Tesla Model S's and Model X's. And the idea in the past has always been that manufacturers would design and develop a vehicle, sell it to consumers, and then the manufacturer moves on to developing the next generation of that product. And the product that the consumer bought, unless they did aftermarket changes, stayed basically static for its lifespan. That's no longer the case. We're getting to the point now where new functionality can be added at any time. There's an example this week: Ford talked about rolling out their new BlueCruise hands-free driving system later this summer. And Tesla's obviously updated their software repeatedly over the years. And that's going to be a standard thing going forward, I think, for pretty much everyone, isn't it?

Danny Shapiro 26:32
Yeah, you're right. I mean, think about it. Whether you have an iPhone or an Android phone, you get regular updates, and that's the norm. It's hard to imagine going back and buying a flip phone or some other device that doesn't get updates. Why would you? And this is really becoming the trend with cars: having this core computer centralizing the computing. Right now there may be 100 different chips in a lot of cars, and the complexity is too great. And also they're fixed functions, as you mentioned. So there's the ability to have this central computer that's updatable, to control how the car drives, and drives itself in some cases, but also everything down to the instrumentation, how that can be customized, how the door locks work, how the windshield wipers function. All sorts of things can have different modes and preferences when it's software controlled. For me, personally, I've been driving a software-defined car for nearly five years now, and I can't imagine buying a new car that doesn't get software updates to make it better. And I think once you experience something, whether it's particular safety features in your vehicle or this ability for the whole driving experience to be updated, it's really hard to go back and give those things up.

Sam Abuelsamid 27:51
Yeah, definitely. So one area you mentioned: you talked about data centers, and that's actually something that's coming more into the car itself. You also mentioned Atlan. One of the interesting things about all of these chips, from Xavier and Orin and actually going back to the original system-on-a-chip in the DRIVE PX, and I'm sure further back than that, is that these all contain multiple kinds of processors in them. They contain GPU cores, but there are also ARM cores and tensor processing units for AI acceleration. But Atlan has a new piece in there, this BlueField-2 DPU. Tell us why that's important.

Danny Shapiro 28:37
So you're right, there are a lot of acronyms, so I'll try to clear them up. SoC stands for system on a chip, and that integrates many different types of processors. It has a CPU, which is the central processing unit; the GPU, the graphics processing unit; a DLA, which is a deep learning accelerator; and a PVA, which is a programmable vision accelerator. So there are different types of data coming in and different types of algorithms, and they each are processed on different parts of the chip that are optimized for those operations.
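As a concrete, hedged example of steering a workload to one of those dedicated engines, the sketch below uses TensorRT's Python API to build an inference engine that targets the DLA, with fallback to the GPU for unsupported layers. It assumes a TensorRT 8.x install on a DLA-equipped platform such as Xavier or Orin; the ONNX file name is a placeholder, and this is not a description of any particular production pipeline.

```python
# Build a TensorRT engine whose layers run on the deep learning accelerator (DLA).
# "perception.onnx" is a placeholder model; paths and settings are illustrative.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("perception.onnx", "rb") as f:
    if not parser.parse(f.read()):
        raise RuntimeError("failed to parse the ONNX model")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)             # DLA engines require reduced precision
config.default_device_type = trt.DeviceType.DLA   # place layers on the DLA by default
config.DLA_core = 0                               # Xavier and Orin expose more than one DLA core
config.set_flag(trt.BuilderFlag.GPU_FALLBACK)     # layers the DLA can't run fall back to GPU cores

engine = builder.build_serialized_network(network, config)
with open("perception.plan", "wb") as f:
    f.write(engine)
```

The same model could instead be left on the GPU; mapping each workload to the engine best suited for it is the kind of partitioning Danny describes.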
Cybersecurity is a huge factor now, a huge issue in the data center as well as in autonomous vehicles. And so we've been developing networking technologies to secure systems. The types of systems that are used in data centers, for maybe banking or medical records or all kinds of other data processing, have technologies for encryption, virtualization, firewalls, and authentication, to make sure that the code that's running is safe code and authorized code, and that there's no intrusion. So what we're doing now with Atlan is bringing our BlueField data processing units into the chip itself. Rather than having to have a separate element on our DRIVE platform, it's now integrated at the chip level and designed for higher bandwidth. We see more and more sensors, higher-resolution sensors, on these vehicles, and they generate so much data that the infrastructure inside the car is struggling to handle it. So we're bringing Ethernet into the car, and right into the chip, to be able to process massive amounts of data. This traffic, these thousands of lanes of data I was talking about, is traveling over Ethernet. And so there's essentially data center caliber technology inside the car. These future vehicles will truly be data centers on wheels, not just an iPhone on wheels, which was sort of the analogy of the past. It really is now becoming a data center on wheels.

Sam Abuelsamid 30:52
Yeah, and with things like your Hyperion 8 platform for development, which combines a suite of sensors and the Orin computers and your software. I think, if I recall, that one has eight 8-megapixel cameras as part of it for long range, and also some shorter range cameras and interior cameras. And that's a lot of data to be chunking through every 30 milliseconds or so.

Danny Shapiro 31:24
That's right. So yeah, basically it's a whole development kit; it's out of the box. You have the sensors, with cameras, the radar, the LIDAR, and it's all calibrated. There are the data recorders, there's cabin monitoring. And so our customers will take these systems, put them on their vehicles, and right out of the box have an operational data collection vehicle and a development tool. So they're able to start using our software as well as building their own applications on it. And it really accelerates the process of getting vehicles on the road, for that whole development cycle. And complementary to that, it's already all built up in the simulator, so they can essentially be testing everything right away before they even put it on the road.

Sam Abuelsamid 32:16
Well, Danny, I want to thank you for your time. Do you have any final thoughts you want to share?

Danny Shapiro 32:21
Well, I appreciate the invitation to be here, Sam. And I think this is a really exciting space. It's not a simple task; it's perhaps one of the most challenging computational tasks in the world, trying to replace a human in a vehicle. And it's also a safety critical aspect of what we do every day. So while AI is being used everywhere, as we talked about, in health care and finance, and, you know, companies like Netflix use artificial intelligence to help recommend a movie for you to watch based on your likes and dislikes in the past, if they recommend something that's not exactly right, if there's a bug in their software, it's not a big deal. What we're talking about here with autonomous vehicles, safety is critical. So we have to ensure we get it right.
And so that's why you see it potentially taking longer than people initially projected. It's a very complex process, and we want to make sure we get it right. We're working with amazing companies like Mercedes-Benz and Volvo Cars, and many, many others, to develop this technology that's coming in passenger vehicles. We're also working with companies like Cruise and Zoox, who are developing robotaxis, and companies like Navistar and TuSimple that are doing trucking and last-mile delivery. So there are going to be so many different types of autonomous vehicles that the public will interact with and get to experience. And just over the next several years, we'll see more and more of these going out into deployment. It might not be on your neighborhood streets at first, but on highways or in geofenced, limited areas. But the technology is coming, and we'll be the basis for all of that. So we're really excited to be at the core of this transformation of the transportation industry. And I'd say stay tuned, a lot of other great news to come this year and next.

Sam Abuelsamid 34:10
Thanks. And in my 30 years in this industry, this is definitely the most interesting time, with more things changing faster than ever. So thanks, Danny. Appreciate it.

Transcribed by https://otter.ai
