TalkRL: The Reinforcement Learning Podcast

A Technology podcast

Best Episodes of TalkRL

Aravind Srinivas is a third-year PhD student at UC Berkeley, advised by Prof. Pieter Abbeel. He co-created and co-taught a graduate course on Deep Unsupervised Learning at Berkeley.
Featured References
- Data-Efficient Image Recognition with Contrastive Predictive Coding, Olivier J. Hénaff, Aravind Srinivas, Jeffrey De Fauw, Ali Razavi, Carl Doersch, S. M. Ali Eslami, Aaron van den Oord
- Contrastive Unsupervised Representations for Reinforcement Learning, Aravind Srinivas, Michael Laskin, Pieter Abbeel
- Reinforcement Learning with Augmented Data, Michael Laskin, Kimin Lee, Adam Stooke, Lerrel Pinto, Pieter Abbeel, Aravind Srinivas
- SUNRISE: A Simple Unified Framework for Ensemble Learning in Deep Reinforcement Learning, Kimin Lee, Michael Laskin, Aravind Srinivas, Pieter Abbeel
Additional References
- CS294-158-SP20 Deep Unsupervised Learning, Berkeley
- Phasic Policy Gradient, Karl Cobbe, Jacob Hilton, Oleg Klimov, John Schulman
- Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning, Grill et al. 2020
Taylor Killian is a PhD student at the University of Toronto and the Vector Institute, and an intern at Google Brain.
Featured References
- Direct Policy Transfer with Hidden Parameter Markov Decision Processes, Yao, Killian, Konidaris, Doshi-Velez
- Robust and Efficient Transfer Learning with Hidden Parameter Markov Decision Processes, Killian, Daulton, Konidaris, Doshi-Velez
- Transfer Learning Across Patient Variations with Hidden Parameter Markov Decision Processes, Killian, Konidaris, Doshi-Velez
- Counterfactually Guided Policy Transfer in Clinical Settings, Killian, Ghassemi, Joshi
Additional References
- Hidden Parameter Markov Decision Processes: A Semiparametric Regression Approach for Discovering Latent Task Parametrizations, Doshi-Velez, Konidaris
- MIMIC-III, a freely accessible critical care database, Johnson AEW, Pollard TJ, Shen L, Lehman L, Feng M, Ghassemi M, Moody B, Szolovits P, Celi LA, Mark RG
- The Artificial Intelligence Clinician learns optimal treatment strategies for sepsis in intensive care, Komorowski et al.
Danijar Hafner is a PhD student at the University of Toronto, and a student researcher at Google Research, Brain Team, and the Vector Institute. He holds a Masters of Research from University College London.
Featured References
- A deep learning framework for neuroscience, Blake A. Richards, Timothy P. Lillicrap, Philippe Beaudoin, Yoshua Bengio, Rafal Bogacz, Amelia Christensen, Claudia Clopath, Rui Ponte Costa, Archy de Berker, Surya Ganguli, Colleen J. Gillon, Danijar Hafner, Adam Kepecs, Nikolaus Kriegeskorte, Peter Latham, Grace W. Lindsay, Kenneth D. Miller, Richard Naud, Christopher C. Pack, Panayiota Poirazi, Pieter Roelfsema, João Sacramento, Andrew Saxe, Benjamin Scellier, Anna C. Schapiro, Walter Senn, Greg Wayne, Daniel Yamins, Friedemann Zenke, Joel Zylberberg, Denis Therien, Konrad P. Kording
- Learning Latent Dynamics for Planning from Pixels, Danijar Hafner, Timothy Lillicrap, Ian Fischer, Ruben Villegas, David Ha, Honglak Lee, James Davidson
- Dream to Control: Learning Behaviors by Latent Imagination, Danijar Hafner, Timothy Lillicrap, Jimmy Ba, Mohammad Norouzi
- Planning to Explore via Self-Supervised World Models, Ramanan Sekar, Oleh Rybkin, Kostas Daniilidis, Pieter Abbeel, Danijar Hafner, Deepak Pathak
Additional References
- Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model, Schrittwieser et al.
- Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm, Silver et al.
- Shaping Belief States with Generative Environment Models for RL, Gregor et al.
- Model-Based Active Exploration, Shyam et al.
Errata
[Robin] Around 1:37 I say "some ... world models get confused by random noise". I meant "some curiosity formulations", not "world models".
Csaba Szepesvari is:
- Head of the Foundations Team at DeepMind
- Professor of Computer Science at the University of Alberta
- Canada CIFAR AI Chair
- Fellow at the Alberta Machine Intelligence Institute
- Co-author of the book Bandit Algorithms (with Tor Lattimore), and author of the book Algorithms for Reinforcement Learning
References
- Bandit based Monte-Carlo planning, Levente Kocsis, Csaba Szepesvári
- Bandit Algorithms, Tor Lattimore, Csaba Szepesvári
- Algorithms for Reinforcement Learning, Csaba Szepesvári
- The Predictron: End-To-End Learning and Planning, David Silver, Hado van Hasselt, Matteo Hessel, Tom Schaul, Arthur Guez, Tim Harley, Gabriel Dulac-Arnold, David Reichert, Neil Rabinowitz, Andre Barreto, Thomas Degris
- A Bayesian framework for reinforcement learning, Strens
- Solving Rubik's Cube with a Robot Hand (paper), OpenAI: Ilge Akkaya, Marcin Andrychowicz, Maciek Chociej, Mateusz Litwin, Bob McGrew, Arthur Petron, Alex Paino, Matthias Plappert, Glenn Powell, Raphael Ribas, Jonas Schneider, Nikolas Tezak, Jerry Tworek, Peter Welinder, Lilian Weng, Qiming Yuan, Wojciech Zaremba, Lei Zhang
- The Nonstochastic Multiarmed Bandit Problem, Peter Auer, Nicolò Cesa-Bianchi, Yoav Freund, Robert E. Schapire
- Deep Learning with Bayesian Principles, Mohammad Emtiyaz Khan
- Tackling Climate Change with Machine Learning, David Rolnick, Priya L. Donti, Lynn H. Kaack, Kelly Kochanski, Alexandre Lacoste, Kris Sankaran, Andrew Slavin Ross, Nikola Milojevic-Dupont, Natasha Jaques, Anna Waldman-Brown, Alexandra Luccioni, Tegan Maharaj, Evan D. Sherwin, S. Karthik Mukkavilli, Konrad P. Kording, Carla Gomes, Andrew Y. Ng, Demis Hassabis, John C. Platt, Felix Creutzig, Jennifer Chayes, Yoshua Bengio
Ben Eysenbach is a PhD student in the Machine Learning Department at Carnegie Mellon University. He was a Resident at Google Brain, and studied math and computer science at MIT. He co-founded the ICML Exploration in Reinforcement Learning workshop.
Featured References
- Diversity is All You Need: Learning Skills without a Reward Function, Benjamin Eysenbach, Abhishek Gupta, Julian Ibarz, Sergey Levine
- Search on the Replay Buffer: Bridging Planning and Reinforcement Learning, Benjamin Eysenbach, Ruslan Salakhutdinov, Sergey Levine
Additional References
- Behaviour Suite for Reinforcement Learning, Ian Osband, Yotam Doron, Matteo Hessel, John Aslanides, Eren Sezener, Andre Saraiva, Katrina McKinney, Tor Lattimore, Csaba Szepesvari, Satinder Singh, Benjamin Van Roy, Richard Sutton, David Silver, Hado Van Hasselt
- Learning Latent Plans from Play, Corey Lynch, Mohi Khansari, Ted Xiao, Vikash Kumar, Jonathan Tompson, Sergey Levine, Pierre Sermanet
- Finale Doshi-Velez
- Emma Brunskill
- Closed-loop optimization of fast-charging protocols for batteries with machine learning, Peter Attia, Aditya Grover, Norman Jin, Kristen Severson, Todor Markov, Yang-Hung Liao, Michael Chen, Bryan Cheong, Nicholas Perkins, Zi Yang, Patrick Herring, Muratahan Aykol, Stephen Harris, Richard Braatz, Stefano Ermon, William Chueh
- CMU 10-703 Deep Reinforcement Learning, Fall 2019, Carnegie Mellon University
- ICML Exploration in Reinforcement Learning workshop
December 19, 2019
Thank you to all the presenters who participated. I covered as many as I could given the time and crowds; if you were not included and wish to be, please email talkrl@pathwayi.com. More details on the official NeurIPS Deep RL Workshop site.
0:23 Approximating two value functions instead of one: towards characterizing a new family of Deep Reinforcement Learning algorithms; Matthia Sabatelli (University of Liège); Gilles Louppe (University of Liège); Pierre Geurts (University of Liège); Marco Wiering (University of Groningen) [external pdf link]
4:16 Single Deep Counterfactual Regret Minimization; Eric Steinberger (University of Cambridge)
5:38 On the Convergence of Episodic Reinforcement Learning Algorithms at the Example of RUDDER; Markus Holzleitner (LIT AI Lab, Institute for Machine Learning, Johannes Kepler University Linz, Austria); José Arjona-Medina (LIT AI Lab, Institute for Machine Learning, Johannes Kepler University Linz, Austria); Marius-Constantin Dinu (LIT AI Lab / University Linz); Sepp Hochreiter (LIT AI Lab, Institute for Machine Learning, Johannes Kepler University Linz, Austria)
9:33 Objective Mismatch in Model-based Reinforcement Learning; Nathan Lambert (UC Berkeley); Brandon Amos (Facebook); Omry Yadan (Facebook); Roberto Calandra (Facebook)
10:51 Option Discovery using Deep Skill Chaining; Akhil Bagaria (Brown University); George Konidaris (Brown University)
13:44 Blue River Controls: A toolkit for Reinforcement Learning Control Systems on Hardware; Kirill Polzounov (University of Calgary); Ramitha Sundar (Blue River Technology); Lee Reden (Blue River Technology)
14:52 LeDeepChef: Deep Reinforcement Learning Agent for Families of Text-Based Games; Leonard Adolphs (ETH Zurich); Thomas Hofmann (ETH Zurich)
16:30 Accelerating Training in Pommerman with Imitation and Reinforcement Learning; Hardik Meisheri (TCS Research); Omkar Shelke (TCS Research); Richa Verma (TCS Research); Harshad Khadilkar (TCS Research)
17:27 Dream to Control: Learning Behaviors by Latent Imagination; Danijar Hafner (Google); Timothy Lillicrap (DeepMind); Jimmy Ba (University of Toronto); Mohammad Norouzi (Google Brain) [external pdf link]
20:48 Adaptive Temperature Tuning for Mellowmax in Deep Reinforcement Learning; Seungchan Kim (Brown University); George Konidaris (Brown University)
22:05 Meta-learning curiosity algorithms; Ferran Alet (MIT); Martin Schneider (MIT); Tomas Lozano-Perez (MIT); Leslie Kaelbling (MIT)
24:09 Predictive Coding for Boosting Deep Reinforcement Learning with Sparse Rewards; Xingyu Lu (UC Berkeley); Stas Tiomkin (BAIR, UC Berkeley); Pieter Abbeel (UC Berkeley)
25:44 Swarm-inspired Reinforcement Learning via Collaborative Inter-agent Knowledge Distillation; Zhang-Wei Hong (Preferred Networks); Prabhat Nagarajan (Preferred Networks); Guilherme Maeda (Preferred Networks)
26:35 Multiplayer AlphaZero; Nicholas Petosa (Georgia Institute of Technology); Tucker Balch (Georgia Institute of Technology) [external pdf link]
27:43 Prioritized Sequence Experience Replay; Marc Brittain (Iowa State University); Joshua Bertram (Iowa State University); Xuxi Yang (Iowa State University); Peng Wei (Iowa State University) [external pdf link]
29:14 Recurrent neural-linear posterior sampling for non-stationary bandits; Paulo Rauber (IDSIA); Aditya Ramesh (USI); Jürgen Schmidhuber (IDSIA - Lugano)
29:36 Improving Evolutionary Strategies With Past Descent Directions; Asier Mujika (ETH Zurich); Florian Meier (ETH Zurich); Marcelo Matheus Gauy (ETH Zurich); Angelika Steger (ETH Zurich) [external pdf link]
31:40 ZPD Teaching Strategies for Deep Reinforcement Learning from Demonstrations; Daniel Seita (UC Berkeley); David Chan (UC Berkeley); Roshan Rao (UC Berkeley); Chen Tang (UC Berkeley); Mandi Zhao (UC Berkeley); John Canny (UC Berkeley) [external pdf link]
33:05 Bottom-Up Meta-Policy Search; Luckeciano Melo (Aeronautics Institute of Technology); Marcos Máximo (Aeronautics Institute of Technology); Adilson Cunha (Aeronautics Institute of Technology) [external pdf link]
33:37 MERL: Multi-Head Reinforcement Learning; Yannis Flet-Berliac (University of Lille / Inria); Philippe Preux (Inria) [external pdf link]
35:30 Emergent Tool Use from Multi-Agent Autocurricula; Bowen Baker (OpenAI); Ingmar Kanitscheider (OpenAI); Todor Markov (OpenAI); Yi Wu (UC Berkeley); Glenn Powell (OpenAI); Bob McGrew (OpenAI); Igor Mordatch
37:09 Learning an off-policy predictive state representation for deep reinforcement learning for vision-based steering in autonomous driving; Daniel Graves (Huawei)
39:37 Multi-Task Reinforcement Learning without Interference; Tianhe Yu (Stanford University); Saurabh Kumar (Stanford University); Abhishek Gupta (UC Berkeley); Karol Hausman (Google Brain); Sergey Levine (UC Berkeley); Chelsea Finn (UC Berkeley)
40:52 Behavior-Regularized Offline Reinforcement Learning; Yifan Wu (Carnegie Mellon University); George Tucker (Google Brain); Ofir Nachum (Google) [external pdf link]
42:36 If MaxEnt RL is the Answer, What is the Question?; Ben Eysenbach (Carnegie Mellon University); Sergey Levine (UC Berkeley) [external pdf link]
43:30 Receiving Uncertainty-Aware Advice in Deep Reinforcement Learning; Felipe Leno da Silva (University of Sao Paulo); Pablo Hernandez-Leal (Borealis AI); Bilal Kartal (Borealis AI); Matthew Taylor (Borealis AI)
45:03 Striving for Simplicity in Off-Policy Deep Reinforcement Learning; Rishabh Agarwal (Google Research, Brain Team); Dale Schuurmans (Google / University of Alberta); Mohammad Norouzi (Google Brain) [external pdf link]
45:32 Interactive Fiction Games: A Colossal Adventure; Matthew Hausknecht (Microsoft Research); Prithviraj Ammanabrolu (Georgia Institute of Technology); Marc-Alexandre Côté (Microsoft Research); Xingdi Yuan (Microsoft Research) [external pdf link]
52:20 rlpyt: A Research Code Base for Deep Reinforcement Learning in PyTorch; Adam Stooke (UC Berkeley); Pieter Abbeel (UC Berkeley) [Repo: https://github.com/astooke/rlpyt]
53:49 Learning to Drive using Waypoints; Tanmay Agarwal, Hitesh Arora, Tanvir Parhar, Shubhankar V Deshpande, Jeff Schneider (from the NeurIPS 2019 Workshop on Machine Learning for Autonomous Driving)
November 18, 2019
Scott Fujimoto is a PhD student at McGill University and Mila. He is the author of TD3 as well as some of the recent developments in batch deep reinforcement learning.
Featured References
- Addressing Function Approximation Error in Actor-Critic Methods, Scott Fujimoto, Herke van Hoof, David Meger
- Off-Policy Deep Reinforcement Learning without Exploration, Scott Fujimoto, David Meger, Doina Precup
- Benchmarking Batch Deep Reinforcement Learning Algorithms, Scott Fujimoto, Edoardo Conti, Mohammad Ghavamzadeh, Joelle Pineau
Additional References
- Striving for Simplicity in Off-Policy Deep Reinforcement Learning, Rishabh Agarwal, Dale Schuurmans, Mohammad Norouzi
- Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor, Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, Sergey Levine
- Way Off-Policy Batch Deep Reinforcement Learning of Implicit Human Preferences in Dialog, Natasha Jaques, Asma Ghandeharioun, Judy Hanwen Shen, Craig Ferguson, Agata Lapedriza, Noah Jones, Shixiang Gu, Rosalind Picard
- Continuous control with deep reinforcement learning, Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra
- Distributed Distributional Deterministic Policy Gradients, Gabriel Barth-Maron, Matthew W. Hoffman, David Budden, Will Dabney, Dan Horgan, Dhruva TB, Alistair Muldal, Nicolas Heess, Timothy Lillicrap
November 11, 2019
Dr. Jessica Hamrick is a Research Scientist at DeepMind. She holds a PhD in Psychology from UC Berkeley.
Featured References
- Structured agents for physical construction, Victor Bapst, Alvaro Sanchez-Gonzalez, Carl Doersch, Kimberly L. Stachenfeld, Pushmeet Kohli, Peter W. Battaglia, Jessica B. Hamrick
- Analogues of mental simulation and imagination in deep learning, Jessica Hamrick
Additional References
- Metacontrol for Adaptive Imagination-Based Optimization, Jessica B. Hamrick, Andrew J. Ballard, Razvan Pascanu, Oriol Vinyals, Nicolas Heess, Peter W. Battaglia
- Surprising Negative Results for Generative Adversarial Tree Search, Kamyar Azizzadenesheli, Brandon Yang, Weitang Liu, Zachary C Lipton, Animashree Anandkumar
- Metareasoning and Mental Simulation, Jessica B. Hamrick
- Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm, David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy Lillicrap, Karen Simonyan, Demis Hassabis
- Object-oriented state editing for HRL, Victor Bapst, Alvaro Sanchez-Gonzalez, Omar Shams, Kimberly Stachenfeld, Peter W. Battaglia, Satinder Singh, Jessica B. Hamrick
- FeUdal Networks for Hierarchical Reinforcement Learning, Alexander Sasha Vezhnevets, Simon Osindero, Tom Schaul, Nicolas Heess, Max Jaderberg, David Silver, Koray Kavukcuoglu
- PILCO: A Model-Based and Data-Efficient Approach to Policy Search, Marc Peter Deisenroth, Carl Edward Rasmussen
- Blueberry Earth, Anders Sandberg
October 9, 2019
Dr. Pablo Samuel Castro is a Staff Research Software Engineer at Google Brain. He is the main author of the Dopamine RL framework.
Featured References
- A Comparative Analysis of Expected and Distributional Reinforcement Learning, Clare Lyle, Pablo Samuel Castro, Marc G. Bellemare
- A Geometric Perspective on Optimal Representations for Reinforcement Learning, Marc G. Bellemare, Will Dabney, Robert Dadashi, Adrien Ali Taiga, Pablo Samuel Castro, Nicolas Le Roux, Dale Schuurmans, Tor Lattimore, Clare Lyle
- Dopamine: A Research Framework for Deep Reinforcement Learning, Pablo Samuel Castro, Subhodeep Moitra, Carles Gelada, Saurabh Kumar, Marc G. Bellemare
- Dopamine RL framework on GitHub
- TensorFlow Agents on GitHub
Additional References
- Using Linear Programming for Bayesian Exploration in Markov Decision Processes, Pablo Samuel Castro, Doina Precup
- Using bisimulation for policy transfer in MDPs, Pablo Samuel Castro, Doina Precup
- Rainbow: Combining Improvements in Deep Reinforcement Learning, Matteo Hessel, Joseph Modayil, Hado van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Azar, David Silver
- Implicit Quantile Networks for Distributional Reinforcement Learning, Will Dabney, Georg Ostrovski, David Silver, Rémi Munos
- A Distributional Perspective on Reinforcement Learning, Marc G. Bellemare, Will Dabney, Rémi Munos
September 20, 2019
Dr. Kamyar Azizzadenesheli is a postdoctoral scholar at Caltech. His research interest is mainly in the area of machine learning, from theory to practice, with a main focus on reinforcement learning. He will be joining Purdue University as an Assistant Professor of Computer Science in Fall 2020.
Featured References
- Efficient Exploration through Bayesian Deep Q-Networks, Kamyar Azizzadenesheli, Animashree Anandkumar
- Surprising Negative Results for Generative Adversarial Tree Search, Kamyar Azizzadenesheli, Brandon Yang, Weitang Liu, Zachary C Lipton, Animashree Anandkumar
- Maybe a few considerations in Reinforcement Learning Research?, Kamyar Azizzadenesheli
Additional References
- Model-Based Reinforcement Learning for Atari, Lukasz Kaiser, Mohammad Babaeizadeh, Piotr Milos, Blazej Osinski, Roy H Campbell, Konrad Czechowski, Dumitru Erhan, Chelsea Finn, Piotr Kozakowski, Sergey Levine, Afroz Mohiuddin, Ryan Sepassi, George Tucker, Henryk Michalewski
- Near-optimal Regret Bounds for Reinforcement Learning, Thomas Jaksch, Ronald Ortner, Peter Auer
- Curious Model-Building Control Systems, Jürgen Schmidhuber
- Rainbow: Combining Improvements in Deep Reinforcement Learning, Matteo Hessel, Joseph Modayil, Hado van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Azar, David Silver
- Schema Networks: Zero-shot Transfer with a Generative Causal Model of Intuitive Physics, Ken Kansky, Tom Silver, David A. Mély, Mohamed Eldawy, Miguel Lázaro-Gredilla, Xinghua Lou, Nimrod Dorfman, Szymon Sidor, Scott Phoenix, Dileep George
- Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm, David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy Lillicrap, Karen Simonyan, Demis Hassabis
September 4, 2019
Antonin Raffin is a researcher at the German Aerospace Center (DLR) in Munich, working in the Institute of Robotics and Mechatronics. His research is on using machine learning to control real robots (because simulation is not enough), with a particular interest in reinforcement learning.
Ashley Hill is doing his thesis on improving control algorithms using machine learning for real-time gain tuning. He works mainly with neuroevolution, genetic algorithms, and of course reinforcement learning, applied to mobile robots. He holds a master's degree in machine learning and a bachelor's in computer science from the Université Paris-Saclay.
Featured References
- stable-baselines on GitHub, Ashley Hill, Antonin Raffin (primary authors)
- S-RL Toolbox, Antonin Raffin, Ashley Hill, René Traoré, Timothée Lesort, Natalia Díaz-Rodríguez, David Filliat
- Decoupling feature extraction from policy learning: assessing benefits of state representation learning in goal based robotics, Antonin Raffin, Ashley Hill, René Traoré, Timothée Lesort, Natalia Díaz-Rodríguez, David Filliat
Additional References
- Learning to Drive Smoothly in Minutes, Antonin Raffin
- Multimodal SRL (best paper at ICRA): Making Sense of Vision and Touch: Self-Supervised Learning of Multimodal Representations for Contact-Rich Tasks, Michelle A. Lee, Yuke Zhu, Krishnan Srinivasan, Parth Shah, Silvio Savarese, Li Fei-Fei, Animesh Garg, Jeannette Bohg
- Benchmarking Model-Based Reinforcement Learning, Tingwu Wang, Xuchan Bao, Ignasi Clavera, Jerrick Hoang, Yeming Wen, Eric Langlois, Shunshi Zhang, Guodong Zhang, Pieter Abbeel, Jimmy Ba
- TossingBot: Learning to Throw Arbitrary Objects with Residual Physics, Andy Zeng, Shuran Song, Johnny Lee, Alberto Rodriguez, Thomas Funkhouser
- Stable Baselines roadmap
- OpenAI baselines stable-baselines GitHub pull request
August 9, 2019
Natasha Jaques is a PhD candidate at MIT working on affective and social intelligence. She has interned with DeepMind and Google Brain, and was an OpenAI Scholars mentor. Her paper "Social Influence as Intrinsic Motivation for Multi-Agent Deep Reinforcement Learning" received an honourable mention for best paper at ICML 2019.
Featured References
- Social Influence as Intrinsic Motivation for Multi-Agent Deep Reinforcement Learning, Natasha Jaques, Angeliki Lazaridou, Edward Hughes, Caglar Gulcehre, Pedro A. Ortega, DJ Strouse, Joel Z. Leibo, Nando de Freitas
- Tackling Climate Change with Machine Learning, David Rolnick, Priya L. Donti, Lynn H. Kaack, Kelly Kochanski, Alexandre Lacoste, Kris Sankaran, Andrew Slavin Ross, Nikola Milojevic-Dupont, Natasha Jaques, Anna Waldman-Brown, Alexandra Luccioni, Tegan Maharaj, Evan D. Sherwin, S. Karthik Mukkavilli, Konrad P. Kording, Carla Gomes, Andrew Y. Ng, Demis Hassabis, John C. Platt, Felix Creutzig, Jennifer Chayes, Yoshua Bengio
Additional References
- MIT Media Lab Flight Offsets, Caroline Jaffe, Juliana Cherston, Natasha Jaques
- Modeling Others using Oneself in Multi-Agent Reinforcement Learning, Roberta Raileanu, Emily Denton, Arthur Szlam, Rob Fergus
- Inequity aversion improves cooperation in intertemporal social dilemmas, Edward Hughes, Joel Z. Leibo, Matthew G. Phillips, Karl Tuyls, Edgar A. Duéñez-Guzmán, Antonio García Castañeda, Iain Dunning, Tina Zhu, Kevin R. McKee, Raphael Koster, Heather Roff, Thore Graepel
- Sequential Social Dilemma Games on GitHub, Eugene Vinitsky, Natasha Jaques
- AI Alignment newsletter, Rohin Shah
- Paired Open-Ended Trailblazer (POET): Endlessly Generating Increasingly Complex and Diverse Learning Environments and Their Solutions, Rui Wang, Joel Lehman, Jeff Clune, Kenneth O. Stanley
- The social function of intellect, Nicholas Humphrey
- Autocurricula and the Emergence of Innovation from Social Interaction: A Manifesto for Multi-Agent Intelligence Research, Joel Z. Leibo, Edward Hughes, Marc Lanctot, Thore Graepel
- A Recipe for Training Neural Networks, Andrej Karpathy
- Emotionally Adaptive Intelligent Tutoring Systems using POMDPs, Natasha Jaques
- Sapiens, Yuval Noah Harari
August 23, 2019
Michael L. Littman is a professor of Computer Science at Brown University. He was elected ACM Fellow in 2018 "For contributions to the design and analysis of sequential decision making algorithms in artificial intelligence".
Featured References
- Convergent Actor Critic by Humans, James MacGlashan, Michael L. Littman, David L. Roberts, Robert Tyler Loftin, Bei Peng, Matthew E. Taylor
- People teach with rewards and punishments as communication, not reinforcements, Mark Ho, Fiery Cushman, Michael L. Littman, Joseph Austerweil
- Theory of Minds: Understanding Behavior in Groups Through Inverse Planning, Michael Shum, Max Kleiman-Weiner, Michael L. Littman, Joshua B. Tenenbaum
- Personalized education at scale, Saarinen, Cater, Littman
Additional References
- Michael Littman papers on Google Scholar, Semantic Scholar
- Reinforcement Learning on Udacity, Charles Isbell, Michael Littman, Chris Pryby
- Machine Learning on Udacity, Michael Littman, Charles Isbell, Pushkar Kolhe
- Temporal Difference Learning and TD-Gammon, Gerald Tesauro
- Playing Atari with Deep Reinforcement Learning, Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, Martin Riedmiller
- Ask Me Anything about MOOCs, D Fisher, C Isbell, ML Littman, M Wollowski, et al.
- Reinforcement Learning and Decision Making (RLDM) Conference
- Algorithms for Sequential Decision Making, Michael Littman's thesis
- Machine Learning A Cappella - Overfitting Thriller!, Michael Littman and Charles Isbell feat. Infinite Harmony
- TurboTax Ad 2016: Genius Anna/Michael Littman
August 2, 2019
Transcript
The idea with TalkRL Podcast is to hear from brilliant folks from across the world of Reinforcement Learning, both research and applications. As much as possible, I want to hear from them in their own language. I try to get to know as much as I can about their work beforehand. And I'm not here to convert anyone; I want to reach people who are already into RL. So we won't stop to explain what a value function is, for example, though we also won't assume everyone has read the very latest papers.
Why am I doing this? Because it's a great way to learn from the most inspiring people in the field! There's so much happening in the universe of RL, and there are tons of interesting angles and so many fascinating minds to learn from.
Now, I know there is no shortage of books, papers, and lectures, but so much goes unsaid. I mean, I guess if you work at MILA or AMII or the Vector Institute, you might be having these conversations over coffee all the time, but I live in a little village in the woods in BC, so for me these remote interviews are a great way to have these conversations, and I hope sharing them with the community makes it more worthwhile for everyone.
In terms of format, the first two episodes were longer-form interviews, around an hour long. Going forward, some may be a lot shorter; it depends on the guest.
If you want to be a guest or suggest a guest, go to talkrl.com/about, where you will find a link to a suggestion form.
Thanks for listening!
Hosted by
Transistor

Podcast Details

Created by
Robin Ranjit Singh Chauhan
Podcast Status
Active
Started
Jan 1st, 2020
Latest Episode
Sep 21st, 2020
Release Period
Monthly
Episodes
15
Avg. Episode Length
About 1 hour
Explicit
No
Order
Episodic
