[MINI] Multi-armed Bandit Problems

Released Friday, 2nd October 2015
The multi-armed bandit problem is named with reference to slot machines ("one-armed bandits"). Given the chance to play from a pool of slot machines, all with unknown payout frequencies, how can you maximize your reward? If you knew in advance which machine was best, you would play that machine exclusively. Any other strategy will, on average, earn less, and the shortfall is called the "regret".
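As a minimal sketch of the regret idea (the per-arm payout probabilities below are invented for illustration), the expected regret of a strategy is the gap between the oracle's expected payout and the strategy's, summed over plays:

```python
# Sketch: expected regret of playing a uniformly random arm, versus an
# oracle that always plays the best arm. Payout probabilities are invented.
payouts = [0.2, 0.5, 0.7]   # hypothetical per-arm expected payout
n_plays = 100

best = max(payouts)                      # oracle's expected payout per play
uniform = sum(payouts) / len(payouts)    # random strategy's expected payout

regret = n_plays * (best - uniform)      # expected regret after n_plays
print(regret)
```

With these made-up numbers, the random player gives up about 0.23 in expected payout per play relative to the oracle.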

You can try each slot machine to learn about it, which we refer to as exploration. When you've spent enough time to be convinced you've identified the best machine, you can then double down and exploit that knowledge. But how do you best balance exploration and exploitation to minimize the regret of your play?
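One common way to balance the two (not specifically discussed in the episode) is epsilon-greedy: explore a random machine with small probability, otherwise exploit the machine with the best observed average. A minimal sketch, with invented Bernoulli arms:

```python
import random

def epsilon_greedy(counts, rewards, epsilon=0.1):
    """Pick an arm: explore at random with probability epsilon,
    else exploit the arm with the best observed average reward."""
    if random.random() < epsilon or not any(counts):
        return random.randrange(len(counts))
    averages = [r / c if c else 0.0 for r, c in zip(rewards, counts)]
    return max(range(len(counts)), key=lambda i: averages[i])

# Simulate on hypothetical Bernoulli arms (unknown to the player).
random.seed(0)
true_probs = [0.2, 0.5, 0.7]
counts = [0, 0, 0]
rewards = [0.0, 0.0, 0.0]
for _ in range(5000):
    arm = epsilon_greedy(counts, rewards)
    counts[arm] += 1
    rewards[arm] += 1.0 if random.random() < true_probs[arm] else 0.0

print(counts)  # most pulls should concentrate on the best arm (index 2)
```

The fixed epsilon means this strategy never stops exploring, so its regret grows linearly; smarter schemes shrink exploration over time.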

This mini-episode explores a few examples, including restaurant selection and A/B testing, to illustrate the nature of the problem. At the end we touch briefly on Thompson sampling as a solution.
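For Bernoulli payouts, Thompson sampling keeps a Beta posterior over each machine's payout probability, samples once from each posterior, and plays the machine with the highest sample. A minimal sketch with invented arm probabilities:

```python
import random

# Thompson sampling for Bernoulli arms: each arm's payout probability
# gets a Beta(successes + 1, failures + 1) posterior. Each round, draw
# one sample per arm and play the arm with the highest sample.
true_probs = [0.2, 0.5, 0.7]   # hypothetical, unknown to the player
successes = [0, 0, 0]
failures = [0, 0, 0]

random.seed(1)
for _ in range(5000):
    samples = [random.betavariate(s + 1, f + 1)
               for s, f in zip(successes, failures)]
    arm = samples.index(max(samples))
    if random.random() < true_probs[arm]:
        successes[arm] += 1
    else:
        failures[arm] += 1

pulls = [s + f for s, f in zip(successes, failures)]
print(pulls)  # the best arm (index 2) should accumulate the most pulls
```

Because posterior samples for a clearly inferior arm rarely exceed those of the best arm, exploration naturally tapers off as evidence accumulates.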

Episode Details

Length
12m 47s
Explicit
No
