
Conversation

@motiwari motiwari commented May 2, 2024

No description provided.

Comment on lines +183 to +190
# TODO(@colin): I don't think this is exactly correct. It may be the case that an arm is
# removed at some point, but then np.max(estimates) moves down and the arm gets added back later.
# The current implementation would say that arm has been pulled num_pulls times, but it's been pulled
# fewer times. For this reason, I think it's actually better to make num_pulls an *array* of how many
# times each arm has been pulled, and then update the confidence interval for each arm separately according
# to its number of pulls. This is how we did it in several other projects, see BanditPAM, FastForest, or BanditMIPS
# for examples.
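The per-arm bookkeeping the TODO proposes can be sketched as follows. This is a minimal illustration, not the repository's actual code: the names `estimates` and `num_pulls`, the sub-Gaussian CI constant, and the elimination rule are all assumptions made here for clarity.

```python
import numpy as np

def update_confidence_intervals(estimates, num_pulls, delta=0.05, sigma=1.0):
    """Per-arm CIs: each arm's interval width depends on its own pull count.

    estimates: running mean reward per arm
    num_pulls: array of how many times each arm has been pulled
    """
    num_pulls = np.maximum(num_pulls, 1)  # avoid division by zero
    # Hoeffding-style sub-Gaussian bound; the exact constant is illustrative.
    ci = sigma * np.sqrt(2.0 * np.log(1.0 / delta) / num_pulls)
    return estimates - ci, estimates + ci

estimates = np.array([0.9, 0.7, 0.85])
num_pulls = np.array([100, 10, 50])  # arms pulled different numbers of times
lower, upper = update_confidence_intervals(estimates, num_pulls)
# An arm is kept while its upper bound reaches the best lower bound; with
# per-arm counts, an arm re-added later resumes from its true pull count
# instead of inheriting the shared num_pulls scalar.
active = upper >= lower.max()
```

With a scalar `num_pulls`, a re-added arm would be credited with pulls it never received; the array keeps each arm's CI honest.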

Contributor

  1. Tavor pointed me to the algorithm I used (Algorithm 3 from "Distributed Exploration in Multi-Armed Bandits" by Hillel et al., 2013), which eliminates arms entirely once they fall outside the confidence interval (CI). I checked with Tavor whether our current CIs (which differ from those used in the paper) are compatible with this assumption, and he said yes, but there may have been a miscommunication.
  2. There is an array storing how many times each arm has truly been pulled (in bandits_softmax.py). However, in this case every arm in the confidence set at a given round has been pulled the same number of times and therefore has the same confidence interval. This is because we run the bandit algorithm before log-norm estimation and other operations.
  3. There may be confusion about why the true number of arm pulls differs from the value stored in num_pulls. This is because the confidence interval is not finite-population-corrected (FPC). Instead, we keep the confidence interval the same and adjust the recorded number of pulls to reflect how many we would need to achieve the current confidence interval (which is about the same when num_pulls is less than d / 2 but becomes much lower past that point). While this is not a bug, I did change this behavior in a new branch to allow true exponential growth of the number of arm pulls (before, FPC made the true pulls approach d quite slowly).
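The nominal-vs-true pull relationship in point 3 can be made concrete. Under sampling without replacement from a population of size d, the variance of a sample mean shrinks by the FPC factor (d - n)/(d - 1); equating the uncorrected variance at n_eff nominal pulls with the corrected variance at n true pulls gives n = n_eff * d / (n_eff + d - 1). The function name and variables below are assumptions for illustration, not the repository's code.

```python
def true_pulls_for_nominal(n_eff, d):
    """True pulls n (sampling without replacement from population d) whose
    FPC-corrected variance matches the uncorrected variance at n_eff pulls.

    Derived from: sigma^2 / n_eff == (sigma^2 / n) * (d - n) / (d - 1).
    """
    return n_eff * d / (n_eff + d - 1)

d = 1000
# For n_eff well below d / 2, true pulls track n_eff almost exactly;
# past that point they saturate and approach d only very slowly.
for n_eff in [10, 100, 500, 2000, 10000]:
    print(n_eff, round(true_pulls_for_nominal(n_eff, d), 1))
```

This saturation is exactly why exponentially growing num_pulls stalls under FPC: doubling the nominal count past d / 2 buys almost no additional true pulls.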
