📒 Machine & Deep Learning Compendium
Experimental Design
Multi Armed Bandits
Bandits for recommender systems, by Eugene Yan
Book: Bandit Algorithms
Rolling out multi-armed bandits for fast, adaptive experimentation
Thompson sampling for multi-armed bandits
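The last link covers Thompson sampling; as a minimal sketch (not taken from any of the linked resources), a Beta-Bernoulli bandit keeps a Beta posterior per arm, samples a reward rate from each posterior, and plays the arm with the highest sample. The arm reward probabilities below are illustrative only.

```python
import numpy as np

# Minimal Beta-Bernoulli Thompson sampling sketch (assumed setup, illustrative reward rates).
rng = np.random.default_rng(0)
true_rates = [0.05, 0.03, 0.08]        # hypothetical per-arm reward probabilities
n_arms = len(true_rates)
successes = np.zeros(n_arms)           # observed rewards per arm
failures = np.zeros(n_arms)            # observed non-rewards per arm

for t in range(10_000):
    # Draw one sample from each arm's Beta(successes+1, failures+1) posterior,
    # then play the arm whose sample is largest (exploration and exploitation in one step).
    theta = rng.beta(successes + 1, failures + 1)
    arm = int(np.argmax(theta))
    reward = rng.random() < true_rates[arm]   # simulate a Bernoulli reward
    successes[arm] += reward
    failures[arm] += 1 - reward

print("posterior mean reward per arm:", (successes + 1) / (successes + failures + 2))
```

Over many rounds the posterior for the best arm concentrates and it is played most often, which is the adaptive-allocation behaviour the linked posts describe.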