130. Edouard Harris - New Research: Advanced AI may tend to seek power *by default*

Released Wednesday, 12th October 2022
Progress in AI has been accelerating dramatically in recent years, and even months. It seems like every other day, there's a new, previously-believed-to-be-impossible feat of AI that's achieved by a world-leading lab. And increasingly, these breakthroughs have been driven by the same simple idea: AI scaling.

For those who haven't been following the AI scaling saga, scaling means training AI systems with larger models, using increasingly absurd quantities of data and processing power. So far, empirical studies by the world's top AI labs seem to suggest that scaling is an open-ended process that can lead to ever more capable and intelligent systems, with no clear limit.

And that’s led many people to speculate that scaling might usher in a new era of broadly human-level or even superhuman AI — the holy grail AI researchers have been after for decades.

And while that might sound cool, an AI that can solve general reasoning problems as well as or better than a human might actually be an intrinsically dangerous thing to build.

At least, that's the conclusion many AI safety researchers have come to, following the publication of a new line of research that explores how modern AI systems tend to solve problems, and whether we should expect more advanced versions of them to exhibit dangerous behaviours like seeking power.

This line of research in AI safety is called "power-seeking", and although it's currently not well understood outside the frontier of AI safety and AI alignment research, it's starting to draw a lot of attention. The first major theoretical study of power-seeking, for example, was led by Alex Turner, who's appeared on the podcast before, and was published at NeurIPS (the world's top AI conference).

And today, we’ll be hearing from Edouard Harris, an AI alignment researcher and one of my co-founders in the AI safety company (Gladstone AI). Ed’s just completed a significant piece of AI safety research that extends Alex Turner’s original power-seeking work, and that shows what seems to be the first experimental evidence suggesting that we should expect highly advanced AI systems to seek power by default.

What does power seeking really mean, though? And what does all this imply for the safety of future, general-purpose reasoning systems? That's what this episode will be all about.

***

Intro music:

- Artist: Ron Gelinas

- Track Title: Daybreak Chill Blend (original mix)

- Link to Track: https://youtu.be/d8Y2sKIgFWc

*** 

Chapters:

- 0:00 Intro

- 4:00 Alex Turner's research

- 7:45 What technology wants

- 11:30 Universal goals

- 17:30 Connecting observations

- 24:00 Micro power seeking behaviour

- 28:15 Ed's research

- 38:00 The human as the environment

- 42:30 What leads to power seeking

- 48:00 Competition as a default outcome

- 52:45 General concern

- 57:30 Wrap-up
