Future Matters Reader

Matthew van der Merwe, Pablo Stafforini

A daily Society, Culture and Philosophy podcast

Episodes of Future Matters Reader

Success without dignity: a nearcasting story of avoiding catastrophe by luck, by Holden Karnofsky.
https://forum.effectivealtruism.org/posts/75CtdFj79sZrGpGiX/success-without-dignity-a-nearcasting-story-of-avoiding
Note: Footnotes in the original article have been omitted.

In this post, Larks argues that the proposal to make AI firms promise to donate a large fraction of profits if they become extremely profitable will primarily benefit the management of those firms, and thereby give managers an incentive to…

This is Otto Barten's summary of 'The effectiveness of AI existential risk communication to the American and Dutch public' by Alexia Georgiadis. In this paper, Alexia measures changes in participants' awareness of AGI risks after consuming various…

Carl Shulman & Elliott Thornley argue that the goal of longtermists should be to get governments to adopt global catastrophic risk policies based on standard cost-benefit analysis rather than arguments that stress the overwhelming importance of the far future.

"The field of biosecurity is more complicated, sensitive and nuanced, especially in the policy space, than what impressions you might get based on publicly available information. As a result, say / write / do things with caution (especially if
The Global Priorities Institute has published a new paper summary: 'Are we living at the hinge of history?' by William MacAskill.
https://globalprioritiesinstitute.org/summary-summary-longtermist-institutional-reform/
Note: Footnotes and references in the original article have been omitted.

The Global Priorities Institute has published a new paper summary: 'Longtermist institutional reform' by Tyler John & William MacAskill.
https://globalprioritiesinstitute.org/summary-summary-longtermist-institutional-reform/
Note: Footnotes and references in the original article have been omitted.

The Global Priorities Institute has released Hayden Wilkinson's presentation on global priorities research. (The talk was given in mid-September last year but remained unlisted until now.)
https://globalprioritiesinstitute.org/hayden-wilkinson-

New rules around gain-of-function research make progress in striking a balance between reward and catastrophic risk.
https://www.vox.com/future-perfect/2023/2/1/23580528/gain-of-function-virology-covid-monkeypox-catastrophic-risk-pandemic-lab

"One of two things must happen. Humanity needs to accelerate its adaptation to these technologies or a collective, enforceable decision must be made to slow the development of these technologies. Even doing both may not be enough."https://www.
Victoria Krakovna makes the point that you don't have to be a longtermist to care about AI alignment.
Anthropic shares a summary of their views about AI progress and its associated risks, as well as their approach to AI safety.
https://www.anthropic.com/index/core-views-on-ai-safety
Note: Some footnotes in the original article have been omitted.

Noah Smith argues that, although AGI might eventually kill humanity, large language models are not AGI, may not be a step toward AGI, and there's no plausible way they could cause extinction.
https://noahpinion.substack.com/p/llms-are-not-goin

Peter Eckersley did groundbreaking work to encrypt the web. After his sudden death, a new organization he founded is carrying out his vision to steer artificial intelligence toward “human flourishing.”
https://www.wired.com/story/peter-eckersl

A working paper by Shakked Noy and Whitney Zhang examines the effects of ChatGPT on production and labor markets.
https://economics.mit.edu/sites/default/files/inline-files/Noy_Zhang_1.pdf
Note: Some tables and footnotes in the original article have been omitted.

Robin Hanson restates his views on AI risk.
https://www.overcomingbias.com/p/ai-risk-again

In an Institute for Progress report, Bridget Williams and Rowan Kane make five policy recommendations to mitigate risks of catastrophic pandemics from synthetic biology.
https://progress.institute/preventing-the-misuse-of-dna-synthesis/

Kevin Collier on how ChatGPT and advanced AI might redefine our understanding of consciousness.
https://www.nbcnews.com/tech/tech-news/chatgpt-ai-consciousness-rcna71777

Eric Landgrebe, Beth Barnes and Marius Hobbhahn discuss a survey of 1,000 participants on their views about what values should be put into powerful AIs.
https://www.lesswrong.com/posts/4iAkmnhhqNZe8JzrS/reflection-mechanisms-as-an-alignment-tar

Risto Uuk published the EU AI Act Newsletter #24.
https://artificialintelligenceact.substack.com/p/the-eu-ai-act-newsletter-24

The prospect of automated systems manipulating our information environment, distorting societal values, and destabilizing political institutions is increasingly palpable. At present, it appears unlikely that market forces will address this class of…

Matthew Barnett on the importance of work on AI forecasting. (This audio version is based on a talk given at EA Global: Bay Area 2023.)
https://forum.effectivealtruism.org/posts/zrSx3NRZEaJENazHK/why-i-think-it-s-important-to-work-on-ai-forec

Scott Alexander on OpenAI's "Planning For AGI And Beyond".
https://astralcodexten.substack.com/p/openais-planning-for-agi-and-beyond

Scott Aaronson: "My purpose, in this post, is to ask a more basic question than how to make GPT safer: namely, should GPT exist at all?"
https://scottaaronson.blog/?p=7042

OpenAI released a blog post explaining their AGI roadmap.
https://openai.com/blog/planning-for-agi-and-beyond/
Note: Some footnotes in the original article have been omitted.