The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

A Technology, News, and Tech News podcast.

Best Episodes of The TWIML AI Podcast

In this episode of our Differential Privacy series, I'm joined by Nicolas Papernot, Google PhD Fellow in Security and graduate student in the Department of Computer Science at Penn State University. Nicolas and I continue this week's look into differential privacy with a discussion of his recent paper, Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data. In our conversation, Nicolas describes the Private Aggregation of Teacher Ensembles (PATE) model proposed in the paper, and how it ensures differential privacy in a scalable manner that can be applied to deep neural networks. We also explore one of the interesting side effects of applying differential privacy to machine learning, namely that it inherently resists overfitting, leading to more generalized models. The notes for this show can be found at twimlai.com/talk/134.
In the first episode of our Differential Privacy series, I'm joined by Aaron Roth, associate professor of computer science and information science at the University of Pennsylvania. Aaron is first and foremost a theoretician, and our conversation starts with him helping us understand the context and theory behind differential privacy, a research area he was fortunate to begin pursuing at its inception. We explore the application of differential privacy to machine learning systems, including the costs and challenges of doing so. Aaron also discusses quite a few examples of differential privacy in action, including work being done at Google, Apple, and the US Census Bureau, along with some of the major research directions currently being explored in the field. The notes for this show can be found at twimlai.com/talk/132.
Today, I'm joined by Davide Venturelli, science operations manager and quantum computing team lead for the Universities Space Research Association's Institute for Advanced Computer Science at NASA Ames. Davide joined me backstage at the NYU Future Labs AI Summit a while back to give me some insight into a topic I've been curious about for some time now: quantum computing. We kick off with a discussion of the core ideas behind quantum computing, including what it is, how it's applied, and how it relates to computing as we know it today. We discuss the practical state of quantum computers, their current capabilities, and the kinds of things you can do with them. And of course, we explore the intersection between AI and quantum computing, how quantum computing may one day accelerate machine learning, and how interested listeners can get started down the quantum rabbit hole. The notes for this show can be found at twimlai.com/talk/93.
Today we're joined by Abhishek Thakur, a machine learning engineer at Hugging Face and the world's first Quadruple Kaggle Grandmaster! In our conversation with Abhishek, we explore his Kaggle journey, including how his approach to competitions has evolved over time, what resources he used to prepare for his transition to a full-time practitioner, and the most important lessons he's learned along the way. We also spend a great deal of time discussing his new role at Hugging Face, where he's building AutoNLP. We talk through the goals of the project, the primary problem domain, and how the results of AutoNLP compare with those from hand-crafted models. Finally, we discuss Abhishek's book, Approaching (Almost) Any Machine Learning Problem. The complete show notes for this episode can be found at https://twimlai.com/go/475.
Today we're joined by Saqib Shaikh, a Software Engineer at Microsoft and the lead for the Seeing AI Project. In our conversation with Saqib, we explore the Seeing AI app, an app "that narrates the world around you." We discuss the various technologies and use cases for the app, how it has evolved since the project's inception, how the technology landscape supports projects like this one, and the technical challenges he faces when building out the app. We also discuss the relationship and trust between humans and robots, and how that translates to this app, what Saqib sees on the research horizon that will support his vision for the future of Seeing AI, and how the integration of tech like Apple's upcoming "smart" glasses could change the way their app is used. The complete show notes for this episode can be found at twimlai.com/go/474.
Today we're joined by Jelani Nelson, a professor in the Theory Group at UC Berkeley. In our conversation with Jelani, we explore his research in computational theory, where he focuses on building streaming and sketching algorithms, random projections, and dimensionality reduction. We discuss how Jelani thinks about the balance between the innovation of new algorithms and the performance of existing ones, and some use cases where we'd see his work in action. Finally, we talk through how his work ties into machine learning, what tools from the theorist's toolbox he'd suggest all ML practitioners know, and his nonprofit AddisCoder, a four-week summer program that introduces high school students to programming and algorithms. The complete show notes for this episode can be found at twimlai.com/go/473.
Today we're joined by Stevie Chancellor, an Assistant Professor in the Department of Computer Science and Engineering at the University of Minnesota. In our conversation with Stevie, we explore her work at the intersection of human-centered computing, machine learning, and high-risk mental illness behaviors. We discuss how her background in HCC helps shape her perspective, how machine learning helps with understanding severity levels of mental illness, and some recent work in which convolutional graph neural networks are applied to identify and discover new kinds of behaviors among people who struggle with opioid use disorder. We also explore the role of computational linguistics and NLP in her research, issues with using social media data as a data source, and finally, how people who are interested in an introduction to human-centered computing can get started. The complete show notes for this episode can be found at twimlai.com/go/472.
In this episode, we're joined by Dataiku's Director of Data Science, Conor Jensen. In our conversation, we explore the panel he led at TWIMLcon, "AI Operationalization: Where the AI Rubber Hits the Road for the Enterprise," discussing the ML journey of each panelist's company, and where Dataiku fits in the equation. The complete show notes for this episode can be found at https://twimlai.com/go/471.
In this episode, we’re joined by Diego Oppenheimer, Founder and CEO of Algorithmia. In our conversation, we discuss Algorithmia’s involvement with TWIMLcon, as well as an exploration of the results of their recently conducted survey on the state of the AI market. The complete show notes for this episode can be found at twimlai.com/go/470.
In this episode, we're joined by Santiago Giraldo, Director of Product Marketing for Data Engineering & Machine Learning at Cloudera. In our conversation, we discuss Cloudera's talks at TWIMLcon, as well as their various research efforts from their Fast Forward Labs arm. The complete show notes for this episode can be found at twimlai.com/sponsorseries.
In this episode, we’re joined by Paul van der Boor, Senior Director of Data Science at Prosus, to discuss his TWIMLcon experience and how they’re using ML platforms to manage machine learning at a global scale. The complete show notes for this episode can be found at twimlai.com/sponsorseries.
Today we're joined by Emily M. Bender, Professor at the University of Washington, and AI Researcher Margaret Mitchell. Emily and Meg, as well as Timnit Gebru and Angelina McMillan-Major, are co-authors on the paper On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜. As most of you undoubtedly know by now, there has been much controversy surrounding, and fallout from, this paper. In this conversation, our main priority was to focus on the message of the paper itself. We spend some time discussing the historical context for the paper, then turn to its goals, discussing the many reasons why ever-growing datasets and models are not necessarily the direction we should be going. We explore the costs of these training datasets, both literal and environmental, as well as the bias implications of these models, and of course the perpetual debate about responsibility when building and deploying ML systems. Finally, we discuss the thin line between AI hype and useful AI systems, the importance of doing pre-mortems to truly flesh out any issues you could potentially come across prior to building models, and much, much more. The complete show notes for this episode can be found at twimlai.com/go/467.
Today we're joined by Abhishek Gupta, a PhD student at UC Berkeley. Abhishek, a member of the BAIR Lab, joined us to talk about his recent robotics and reinforcement learning research and interests, which focus on applying RL to real-world robotics applications. We explore the concept of reward supervision, how to get robots to learn reward functions from videos, and the rationale behind supervised experts in these experiments. We also discuss the use of simulation for experiments, data collection, and the path to scalable robotic learning. Finally, we discuss gradient surgery vs gradient sledgehammering, and his ecological RL paper, which focuses on the "phenomena that exist in the real world" and how humans and robotic systems interface in those situations. The complete show notes for this episode can be found at https://twimlai.com/go/466.
Today we're joined by David Carmona, General Manager of Artificial Intelligence & Innovation at Microsoft. In our conversation with David, we focus on his work on AI at Scale, an initiative focused on the change in the ways people are developing AI, driven in large part by the emergence of massive models. We explore David's thoughts about the progression towards larger models, the focus on parameters and how it ties to the architecture of these models, and how we should assess how attention works in these models. We also discuss the different families of models (generation and representation), the transition from CV to NLP tasks, and an interesting point about models "becoming a platform" via transfer learning. The complete show notes for this episode can be found at twimlai.com/go/465.
Today we're joined by Melanie Mitchell, Davis Professor at the Santa Fe Institute and author of Artificial Intelligence: A Guide for Thinking Humans. While Melanie has had a long career with a myriad of research interests, we focus on a few: complex systems, the understanding of intelligence and complexity, and her recent work on getting AI systems to make analogies. We explore examples of social learning, how it applies to AI contextually, and how to define intelligence. We discuss potential frameworks that would help machines understand analogies, established benchmarks for analogy, and whether there is a social learning solution to help machines figure out analogy. Finally, we talk through the overall state of AI systems, the progress we've made amid the limited concept of social learning, whether we're able to achieve intelligence with current approaches to AI, and much more! The complete show notes for this episode can be found at twimlai.com/go/464.
Today we’re joined by Adriana Kovashka, an Assistant Professor at the University of Pittsburgh. In our conversation with Adriana, we explore her visual commonsense research, and how it intersects with her background in media studies. We discuss the idea of shortcuts, or faults in visual question answering data sets that appear in many SOTA results, as well as the concept of masking, a technique developed to assist in context prediction. Adriana then describes how these techniques fit into her broader goal of trying to understand the rhetoric of visual advertisements.  Finally, Adriana shares a bit about her work on robust visual reasoning, the parallels between this research and other work happening around explainability, and the vision for her work going forward.  The complete show notes for this episode can be found at twimlai.com/go/463.
Today we're joined by Nishan Subedi, VP of Algorithms at Overstock.com. In our conversation with Nishan, we discuss his interesting path to MLOps and how ML/AI is used at Overstock, primarily for search/recommendations and marketing/advertisement use cases. We spend a great deal of time exploring machine learning architecture and architectural patterns, how he perceives the differences between architectural patterns and algorithms, and emergent architectural patterns for which standards have not yet been set. Finally, we discuss how the idea of anti-patterns was innovative in early design pattern thinking and whether those concepts are transferable to ML, whether architectural patterns will bleed over into organizational patterns and culture, and the concept of Squads within an organizational structure, which Nishan introduces us to. The complete show notes for this episode can be found at https://twimlai.com/go/462.
Today we’re joined by Vered Shwartz, a Postdoctoral Researcher at both the Allen Institute for AI and the Paul G. Allen School of Computer Science & Engineering at the University of Washington. In our conversation with Vered, we explore her NLP research, where she focuses on teaching machines common sense reasoning in natural language. We discuss training using GPT models and the potential use of multimodal reasoning and incorporating images to augment the reasoning capabilities. Finally, we talk through some other noteworthy research in this field, how she deals with biases in the models, and Vered's future plans for incorporating some of the newer techniques into her future research. The complete show notes for this episode can be found at https://twimlai.com/go/461. 
Today we're joined by returning guest and newly appointed Dean of the College of Engineering at The Ohio State University, Ayanna Howard. Our conversation with Dr. Howard focuses on her recently released book, Sex, Race, and Robots: How to Be Human in the Age of AI, which is an extension of her research on the relationships between humans and robots. We continue to explore this relationship through the themes of socialization introduced in the book, like associating genders with AI and robotic systems and the "self-fulfilling prophecy" that search engines have become. We also discuss a recurring conversation in the community about whether AI bias stems from data alone or from both models and data, and the choices and responsibilities that come with the ethical aspects of building AI systems. Finally, we discuss Dr. Howard's new role at OSU, how it will affect her research, and what the future holds for the applied AI field. The complete show notes for this episode can be found at https://twimlai.com/go/460.
Today we're joined by Penousal Machado, Associate Professor and Head of the Computational Design and Visualization Lab in the Center for Informatics at the University of Coimbra. In our conversation with Penousal, we explore his research in Evolutionary Computation, and how that work coincides with his passion for images and graphics. We also discuss the link between creativity and humanity, and have an interesting sidebar about the philosophy of sci-fi in popular culture. Finally, we dig into Penousal's evolutionary machine learning research, primarily in the context of the evolution of the mating habits and practices of various animal species. The complete show notes for this episode can be found at twimlai.com/go/459.
Today we're joined by Arul Menezes, a Distinguished Engineer at Microsoft. Arul, a 30-year veteran of Microsoft, manages the machine translation research and products in the Azure Cognitive Services group. In our conversation, we explore the historical evolution of machine translation, including breakthroughs like seq2seq and the emergence of transformer models. We also discuss how they're using multilingual transfer learning and combining what they've learned in translation with pre-trained language models like BERT. Finally, we explore what they're doing to achieve domain-specific improvements in their models, and what excites Arul about the translation architecture going forward. The complete show notes for this episode can be found at twimlai.com/go/458.
Today we’re joined by Luna Dong, Sr. Principal Scientist at Amazon. In our conversation with Luna, we explore Amazon’s expansive product knowledge graph, and the various roles that machine learning plays throughout it. We also talk through the differences and synergies between the media and retail product knowledge graph use cases and how ML comes into play in search and recommendation use cases. Finally, we explore the similarities to relational databases and efforts to standardize the product knowledge graphs across the company and broadly in the research community. The complete show notes for this episode can be found at https://twimlai.com/go/457.
Today we’re joined by Sarah Brown, an Assistant Professor of Computer Science at the University of Rhode Island. In our conversation with Sarah, whose research focuses on Fairness in AI, we discuss why a “systems-level” approach is necessary when thinking about ethical and fairness issues in models and algorithms. We also explore Wiggum: a fairness forensics tool, which explores bias and allows for regular auditing of data, as well as her ongoing collaboration with a social psychologist to explore how people perceive ethics and fairness. Finally, we talk through the role of tools in assessing fairness and bias, and the importance of understanding the decisions the tools are making. The complete show notes can be found at twimlai.com/go/456.
Today we're joined by Andrew Trister, Deputy Director for Digital Health Innovation at the Bill & Melinda Gates Foundation. In our conversation with Andrew, we explore some of the AI use cases at the foundation, with the goal of bringing "community-based" healthcare to underserved populations in the global south. We focus on the COVID-19 response, improving the accuracy of malaria testing with a Bayesian framework, and a few other projects, as well as challenges like scaling these systems and building out infrastructure so that communities can begin to support themselves. We also touch on Andrew's previous work at Apple, where he helped develop what is now known as ResearchKit, the ML-for-health tools now seen in Apple devices like phones and watches. The complete show notes for this episode can be found at https://twimlai.com/go/455.
Today we're joined by Drago Anguelov, Distinguished Scientist and Head of Research at Waymo. In our conversation, we explore the state of the autonomous vehicles space broadly and at Waymo, including how AV technology has improved in the last few years, their focus on level 4 driving, and Drago's thoughts on the direction of the industry going forward. Drago breaks down their core ML use cases, Perception, Prediction, Planning, and Simulation, and how their work has led to a fully autonomous vehicle being deployed in Phoenix. We also discuss the socioeconomic and environmental impact of self-driving cars, a few research papers submitted to NeurIPS 2020, and whether the sophistication of AV systems will lend itself to the development of tomorrow's enterprise machine learning systems. The complete show notes for this episode can be found at twimlai.com/go/454.

Podcast Details

Created by: TWIML
Podcast Status: Active
Started: May 21st, 2016
Latest Episode: Apr 15th, 2021
Release Period: 2 per week
Episodes: 492
Avg. Episode Length: About 1 hour
Explicit: No
Order: Episodic
Language: English