92. Daniel Filan - Peering into neural nets for AI safety

Released Wednesday, 14th July 2021

Many AI researchers think it's going to be hard to design AI systems that remain safe as AI capabilities increase. As we've seen on the podcast, the field of AI alignment has emerged to tackle this problem, but a related effort is also being directed at a separate dimension of the safety problem: AI interpretability.

Our ability to interpret how AI systems process information and make decisions will likely become an important factor in ensuring the reliability of AIs in the future, and my guest for this episode has focused his research on exactly that topic. Daniel Filan is an AI safety researcher at UC Berkeley, where he's supervised by AI pioneer Stuart Russell. Daniel also runs AXRP, a podcast dedicated to technical AI alignment research.
