Into AI Safety
Jacob Haimes
A weekly Technology podcast

Episodes of Into AI Safety

The almost-Dr. Igor Krawczuk joins me for what is the equivalent of 4 of my previous episodes. We get into all the classics: eugenics, capitalism, philosophical toads... Need I say more? If you're interested in connecting with Igor, head on over...
As always, the best things come in 3s: dimensions, musketeers, pyramids, and... 3 installments of my interview with Dr. Peter Park, an AI Existential Safety Postdoctoral Fellow working with Dr. Max Tegmark at MIT. As you may have ascertained...
Join me for round 2 with Dr. Peter Park, an AI Existential Safety Postdoctoral Fellow working with Dr. Max Tegmark at MIT. Dr. Park was a cofounder of StakeOut.AI, a non-profit focused on making AI go well for humans, along with Harry Luk and one other cofounder...
UPDATE: Contrary to what I say in this episode, I won't be removing any already-published episodes from the podcast RSS feed. After getting some advice and reflecting more on my own personal goals, I have decided to shift the direction...
Dr. Peter Park is an AI Existential Safety Postdoctoral Fellow working with Dr. Max Tegmark at MIT. In conjunction with Harry Luk and one other cofounder, he founded StakeOut.AI, a non-profit focused on making AI go well for humans. 00:54 - I...
Take a trip with me through the paper Large Language Models: A Survey, published on February 9th, 2024. All figures and tables mentioned throughout the episode can be found on the Into AI Safety podcast website. 00:36 - Intro and authors; 01:50 - ...
Esben reviews an application that I would soon submit for Open Philanthropy's Career Transition Funding opportunity. Although I didn't end up receiving the funding, I do think that this episode can be a valuable resource for both others and myself...
Before I begin with the paper-distillation-based minisodes, I figured we would go over best practices for reading research papers. I go through the anatomy of typical papers and some generally applicable advice. 00:56 - Anatomy of a paper; 02:38 - ...
Join our hackathon group for the second episode in the Evals November 2023 Hackathon subseries. In this episode, we solidify our goals for the hackathon after some preliminary experimentation and ideation. Check out Stellaric's website, or follow...
I provide my thoughts and recommendations regarding personal professional portfolios. 00:35 - Intro to portfolios; 01:42 - Modern portfolios; 02:27 - What to include; 04:38 - Importance of visual; 05:50 - The "About" page; 06:25 - Tools; 08:12 - Future of...
Darryl and I discuss his background, how he became interested in machine learning, and a project we are currently working on that investigates penalizing polysemanticity during the training of neural networks. Check out a diagram of the...
A summary of and reflections on the path I have taken to get this podcast started, including some resource recommendations for others who want to do something similar. Links to all articles/papers which are mentioned throughout the episode can be found...
This episode kicks off our first subseries, which will consist of recordings taken during my team's meetings for the AlignmentJams Evals Hackathon in November of 2023. Our team won first place, so you'll be listening to the process which, at the...
In this minisode I give some tips for staying up-to-date in the ever-changing landscape of AI. I would like to point out that I am constantly iterating on these strategies, tools, and sources, so it is likely that I will make an update episode in...
Alice Rigg, a mechanistic interpretability researcher from Ottawa, Canada, joins me to discuss their path and the applications process for research/mentorship programs. Join the Mech Interp Discord server and attend reading groups at 11:00am on...
We're back after a month-long hiatus with a podcast refactor and advice on the applications process for research/mentorship programs. Check out the About page on the Into AI Safety website for a summary of the logistics updates. Links to all articles/papers...
This episode is a brief overview of the major takeaways I had from attending EAG Boston 2023, and an update on my plans for the podcast moving forward. TL;DL: Starting in early December (2023), I will be uploading episodes on a biweekly basis...
In this episode I discuss my initial research proposal for the 2024 Winter AI Safety Camp with one of the individuals who helps facilitate the program, Remmelt Ellen. The proposal is titled The Effect of Machine Learning on Bioengineered Pandem...
Welcome to the Into AI Safety podcast! In this episode I provide reasoning for why I am starting this podcast, what I am trying to accomplish with it, and a little bit of background on how I got here. Please email all inquiries and suggestions to...