badge
An implementation of the BADGE batch active learning algorithm.
There is an error in your implementation of entropy sampling. The following is an example and the result obtained when using the implemented entropy sampling. ``` probs = [[0.1, 0.2, 0.7],...
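For reference, a minimal sketch of entropy-based selection over softmax probabilities (plain NumPy; the helper name and the second row of probabilities are illustrative, and this is not the repository's own implementation):

```python
import numpy as np

def entropy_scores(probs, eps=1e-12):
    # Shannon entropy of each row of class probabilities; higher = more uncertain.
    probs = np.asarray(probs, dtype=float)
    return -(probs * np.log(probs + eps)).sum(axis=1)

# Illustrative probabilities (not the full example from the issue above).
probs = [[0.1, 0.2, 0.7],
         [0.4, 0.3, 0.3]]
scores = entropy_scores(probs)
query_idx = np.argsort(-scores)[:1]  # entropy sampling queries the highest-entropy points
```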
I tried to work through the derivation, but I end up with an additional negative sign compared to the author's code. Could anyone explain this discrepancy? Thanks very much.
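For context, a sketch of the usual last-layer cross-entropy gradient that the gradient embedding is built from; whether it is written as p - e_ŷ or e_ŷ - p is only a sign convention, which may account for the extra negative sign (this is not a restatement of the authors' exact code):

```latex
\hat{y} = \arg\max_k p_k(x), \qquad p_k(x) = \operatorname{softmax}\bigl(W z(x)\bigr)_k
\\
\ell(x) = -\log p_{\hat{y}}(x), \qquad
\frac{\partial \ell}{\partial W_k} = \bigl(p_k(x) - \mathbf{1}[k = \hat{y}]\bigr)\, z(x)
```

Differentiating \log p instead of -\log p flips every sign; since selection is based on pairwise distances between these embeddings, a global sign flip would not change which points get picked.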
Hi Jordan Ash, First of all, thank you for sharing such clear and concise code. It has been a delight going through the code step by step and...
Hi authors, Thanks for publishing this fantastic tool. I have a question regarding the approach itself: When my last layer dimension is really large + I have large...
I notice that there is no mechanism to avoid repetition of indices in the indsAll list returned at the end.
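A hedged sketch of one way to guard against duplicate selections when accumulating chosen indices (the names below, including indsAll, are used illustratively; this is not the repository's code):

```python
import numpy as np

def pick_next_index(dist_sq, indsAll):
    # Sample the next center proportionally to squared distance (k-means++ style),
    # masking out anything already in indsAll so no index is selected twice.
    probs = np.asarray(dist_sq, dtype=float).copy()
    probs[list(indsAll)] = 0.0
    probs = probs / probs.sum()
    idx = int(np.random.choice(len(probs), p=probs))
    indsAll.append(idx)
    return idx
```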
Hi authors, It's great that you publish the source code, but the default hyper-parameters seem not to be correctly tuned, so could you please share the hyper-params to produce the...
Hi, Could you add a license to this project? (If you want the most permissive, you are probably looking for the MIT license.)
Thank you for sharing the source code for this great work. I am trying to replicate some of your baseline results. However, the result from the baseline (LeastConfidence) I got...
Hi, I do not see why [**get_exp_grad_embedding**](https://github.com/JordanAsh/badge/blob/master/query_strategies/strategy.py#L347), implemented as it is, allows one to compute the last-layer Fisher matrices. Thank you in advance for your answer!
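For what it's worth, one way to read the connection under the usual softmax last-layer model (a sketch; the √p_k scaling is an assumption about what the embedding stores, not a claim about the exact code):

```latex
g_k(x) = \nabla_{W}\bigl[-\log p_k(x)\bigr]\Big|_{\text{last layer}}, \qquad
F(x) = \mathbb{E}_{y \sim p(\cdot\mid x)}\bigl[\nabla_W \log p(y\mid x)\,\nabla_W \log p(y\mid x)^{\top}\bigr]
     = \sum_k p_k(x)\, g_k(x)\, g_k(x)^{\top}
```

So if the per-class gradient embeddings are scaled by \sqrt{p_k(x)}, summing their outer products recovers the per-example last-layer Fisher matrix F(x).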
Your work is excellent and valuable. Maybe I missed some code, but where is the code for using BAIT for regression? Please remind me if you see my message, thank...