Book: Bandit Algorithms (Lattimore & Szepesvári)
Bandits for recommender systems by Eugene Yan
Rolling out multi-armed bandits for fast, adaptive experimentation
Thompson sampling for multi-armed bandits
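As a quick illustration of the Thompson sampling approach mentioned above, here is a minimal sketch of Beta-Bernoulli Thompson sampling. The arm probabilities, round count, and function name are hypothetical choices for the example, not taken from the resources listed:

```python
import random

def thompson_sampling(true_probs, n_rounds=10_000, seed=42):
    """Beta-Bernoulli Thompson sampling over len(true_probs) arms.

    Each arm keeps a Beta(alpha, beta) posterior over its win rate;
    every round we sample from each posterior and pull the arm with
    the highest sampled value.  (Illustrative sketch, not from the
    linked resources.)
    """
    rng = random.Random(seed)
    k = len(true_probs)
    alpha = [1] * k  # 1 + observed successes per arm (uniform Beta(1,1) prior)
    beta = [1] * k   # 1 + observed failures per arm
    total_reward = 0
    for _ in range(n_rounds):
        # Draw one posterior sample per arm; exploit the best-looking arm.
        samples = [rng.betavariate(alpha[i], beta[i]) for i in range(k)]
        arm = max(range(k), key=lambda i: samples[i])
        # Simulate a Bernoulli reward and update that arm's posterior.
        reward = 1 if rng.random() < true_probs[arm] else 0
        total_reward += reward
        alpha[arm] += reward
        beta[arm] += 1 - reward
    return alpha, beta, total_reward
```

Over time the posterior of the best arm concentrates and it is pulled increasingly often, which is why Thompson sampling is popular for the fast, adaptive experimentation the articles above discuss.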