Matjaz Leonardis | Interpretability and Security of AI Models

Released Friday, 15th December 2023

Matjaž Leonardis, working with David Deutsch at the University of Oxford, is advancing the understanding of a 1983 theorem by Popper and Miller, offering new insights into Bayesian updating and its standard interpretations. His research interests span learning and creativity in both humans and machines. Beyond his academic work, Matjaž engages a wider audience on social media on topics ranging from education to computation.


Key Highlights

  • The use of ML models for critical decisions and the risk that undetectable backdoors could influence their outputs.
  • Matjaž's paper on the practicality of inserting concealed backdoors during ML training and their impact on model integrity (a toy illustration follows below).
  • The challenges these backdoors pose to model interpretability and robustness, and their relevance to AI safety.
  • A call for deeper research into the vulnerabilities of complex ML models, drawing on tools from cryptography to analyze them rigorously.
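
For intuition, here is a minimal toy sketch (Python with NumPy) of what a backdoored classifier can look like: it agrees with an "honest" model on ordinary inputs, but whoever holds a secret trigger direction can force any decision. The linear model, the trigger direction, and the cosine-similarity threshold are all illustrative assumptions for this sketch; this is not the cryptographic construction discussed in the episode.

    import numpy as np

    rng = np.random.default_rng(0)

    # A hypothetical "honest" linear classifier: predict sign(w . x).
    w = rng.normal(size=16)

    # Secret backdoor key known only to whoever trained the model:
    # a fixed random unit direction in input space.
    trigger = rng.normal(size=16)
    trigger /= np.linalg.norm(trigger)

    def predict(x: np.ndarray) -> int:
        """Acts like the honest classifier unless x is strongly aligned with the trigger."""
        cos = np.dot(x, trigger) / (np.linalg.norm(x) + 1e-12)
        if cos > 0.95:
            return 1  # backdoor activated: force the +1 label
        return 1 if np.dot(w, x) > 0 else -1

    # On a typical input the backdoor stays dormant and the answer matches the honest model.
    x = rng.normal(size=16)
    assert predict(x) == (1 if np.dot(w, x) > 0 else -1)

    # Anyone holding the secret trigger can perturb x to force the +1 label.
    x_bad = x + 10 * np.linalg.norm(x) * trigger
    assert predict(x_bad) == 1

Unlike this toy, where the trigger is plainly visible in the code, the constructions the episode refers to hide the trigger cryptographically, so inspecting the trained model does not reveal it.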


Dive deeper into the session: Full Summary


About Foresight Institute

Foresight Institute is a non-profit research organization that supports the beneficial development of high-impact technologies. Founded in 1987 on a vision of guiding powerful technologies, it has evolved into a many-armed organization focused on several fields of science and technology that are too ambitious for legacy institutions to support.


Allison Duettmann

As President and CEO of Foresight Institute, Allison Duettmann directs the Intelligent Cooperation, Molecular Machines, Biotech & Health Extension, Neurotech, and Space Programs, alongside Fellowships, Prizes, and Tech Trees. She also co-initiated the Longevity Prize, pioneered initiatives such as Existentialhope.com, and contributed to works including "Superintelligence: Coordination & Strategy" and "Gaming the Future".


Get Involved with Foresight:


Follow Us: Twitter | Facebook | LinkedIn


Note: Explore every word spoken on this podcast through Fathom.fm, an innovative podcast search engine.




Hosted on Acast. See acast.com/privacy for more information.
