Future of Life Institute Podcast
Future of Life Institute
A weekly Technology podcast

Episodes of Future of Life Institute Podcast
Liron Shapira joins the podcast to discuss superintelligence goals, what makes AI different from other technologies, risks from centralizing power, and whether AI can defend us from AI. Timestamps: 00:00 Intelligence as optimization-power …
Annie Jacobsen joins the podcast to lay out a second-by-second timeline for how nuclear war could happen. We also discuss time pressure, submarines, interceptor missiles, cyberattacks, and concentration of power. You can find more on Annie's work …
Katja Grace joins the podcast to discuss the largest survey of AI researchers conducted to date, AI researchers' beliefs about different AI risks, capabilities required for continued AI-related transformation, the idea of discontinuous progress …
Holly Elmore joins the podcast to discuss pausing frontier AI, hardware overhang, safety research during a pause, the social dynamics of AI risk, and what prevents AGI corporations from collaborating. You can read more about Holly's work …
Sneha Revanur joins the podcast to discuss the social effects of AI, the illusory divide between AI ethics and AI safety, the importance of humans in the loop, the different effects of AI on younger and older people, and the importance of AIs …
Roman Yampolskiy joins the podcast again to discuss whether AI is like a Shoggoth, whether scaling laws will hold for more agent-like AIs, evidence that AI is uncontrollable, and whether designing human-like AI would be safer than the current …
On this special episode of the podcast, Flo Crivello talks with Nathan Labenz about AI as a new form of life, whether attempts to regulate AI risk regulatory capture, how a GPU kill switch could work, and why Flo expects AGI in 2-8 years.
Carl Robichaud joins the podcast to discuss the new nuclear arms race, how much world leaders and ideologies matter for nuclear risk, and how to reach a stable, low-risk era. You can learn more about Carl's work here: https://www.longview.org/a …
Frank Sauer joins the podcast to discuss autonomy in weapon systems, killer drones, low-tech defenses against drones, the flaws and unpredictability of autonomous weapon systems, and the political possibilities of regulating such systems.
Darren McKee joins the podcast to discuss how AI might be difficult to control, which goals and traits AI systems will develop, and whether there's a unified solution to AI alignment. Timestamps: 00:00 Uncontrollable superintelligence …
Mark Brakel (Director of Policy at the Future of Life Institute) joins the podcast to discuss the AI Safety Summit in Bletchley Park, objections to AI policy, AI regulation in the EU and US, global institutions for safe AI, and autonomy in weapon systems …
Dan Hendrycks joins the podcast again to discuss X.ai, how AI risk thinking has evolved, malicious use of AI, AI race dynamics between companies and between militaries, making AI organizations safer, and how representation engineering could help …
Samuel Hammond joins the podcast to discuss how AGI will transform economies, governments, institutions, and other power structures. You can read Samuel's blog at https://www.secondbest.ca Timestamps: 00:00 Is AGI close? 06:56 Compute versus …
Are we doomed to a future of loneliness and unfulfilling online interactions? What if technology made us feel more connected instead? Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute.
Let’s imagine a future where AGI is developed but kept at a distance from practically impacting the world, while narrow AI remakes the world completely. Most people don’t know or care about the difference and have no idea how they could distinguish …
Steve Omohundro joins the podcast to discuss Provably Safe Systems, a paper he co-authored with FLI President Max Tegmark. You can read the paper here: https://arxiv.org/pdf/2309.01933.pdf Timestamps: 00:00 Provably safe AI systems …
What if AI allowed us to communicate with animals? Could interspecies communication lead to new levels of empathy? How might communicating with animals lead humans to reimagine our place in the natural world? Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute.
If you could extend your life, would you? How might life extension technologies create new social and political divides? How can the world unite to solve the great problems of our time, like AI risk? What if AI creators could agree on an inspection …
Johannes Ackva joins the podcast to discuss the main drivers of climate change and our best technological and governmental options for managing it. You can read more about Johannes' work at http://founderspledge.com/climate
How do low-income countries affected by climate change imagine their futures? How do they overcome these twin challenges? Will all nations eventually choose or be forced to go digital? Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute.
What if we had one advanced AI system for the entire world? Would this lead to a world 'beyond' nation states - and do we want this? Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute.
Tom Davidson joins the podcast to discuss how AI could quickly automate most cognitive tasks, including AI research, and why this would be risky. Timestamps: 00:00 The current pace of AI 03:58 Near-term risks from AI 09:34 Historical analogies …
How does who is involved in the design of AI affect the possibilities for our future? Why isn’t the design of AI inclusive already? Can technology solve all our problems? Can human nature change? Do we want either of these things to happen? Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute.
Are today's democratic systems well enough equipped to create the best possible future for everyone? If they're not, what systems might work better? And are governments around the world taking the destabilizing threats of new technologies seriously?
Coming Soon… The year is 2045. Humanity is not extinct, nor living in a dystopia. It has averted climate disaster and major wars. Instead, AI and other new technologies are helping to make the world more peaceful, happy and equal. How? …