Book: Bandit Algorithms
Bandits for Recommender Systems by Eugene Yan
Rolling out multi-armed bandits for fast, adaptive experimentation
Thompson sampling for multi-armed bandits
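As a quick illustration of the Thompson sampling approach covered in the last link, here is a minimal Beta-Bernoulli sketch (the arm reward probabilities and round count are made-up example values, not taken from any of the linked resources):

```python
import random

def thompson_sampling(true_probs, n_rounds=5000, seed=0):
    """Beta-Bernoulli Thompson sampling over len(true_probs) arms.

    true_probs are hypothetical ground-truth click rates used only to
    simulate rewards; the algorithm itself never sees them directly.
    """
    rng = random.Random(seed)
    n_arms = len(true_probs)
    successes = [1] * n_arms  # Beta(1, 1) uniform prior per arm
    failures = [1] * n_arms
    total_reward = 0
    for _ in range(n_rounds):
        # Draw one plausible reward rate per arm from its posterior,
        # then play the arm whose draw is highest.
        samples = [rng.betavariate(successes[a], failures[a]) for a in range(n_arms)]
        arm = max(range(n_arms), key=lambda a: samples[a])
        # Simulate a Bernoulli reward and update that arm's posterior.
        reward = 1 if rng.random() < true_probs[arm] else 0
        total_reward += reward
        if reward:
            successes[arm] += 1
        else:
            failures[arm] += 1
    return successes, failures, total_reward

s, f, r = thompson_sampling([0.2, 0.5, 0.7])
# The most-pulled arm should converge to the best arm (index 2 here).
most_pulled = max(range(3), key=lambda a: s[a] + f[a])
print(most_pulled)
```

The key property this shows: exploration falls out of posterior sampling rather than an explicit epsilon, so arms that look bad are pulled less and less often as evidence accumulates.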