Damien Lancry
Did anybody actually get this code to work without modifying anything?
Hi, great work! I'm trying to run it (training) on my laptop. It's been training for 2 hours and it is still on the first episode (17,000 steps)....
Hello, I noticed there is a big focus on uncertainty-based sampling and information-density-based sampling techniques, which is very nice. But in batch-mode active learning, when several...
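To make the batch-mode concern concrete, here is a minimal sketch of plain top-k uncertainty sampling; the `probs` array and `batch_size` are placeholders of my own, not from the library:

```python
import numpy as np

def least_confident_batch(probs: np.ndarray, batch_size: int) -> np.ndarray:
    """Least-confident uncertainty sampling: pick the batch_size pool
    points whose top predicted class probability is lowest.

    probs: (n_pool, n_classes) predicted class probabilities.
    Returns indices into the unlabeled pool.
    """
    confidence = probs.max(axis=1)               # confidence in the top class
    return np.argsort(confidence)[:batch_size]   # least confident first
```

Selecting the top k scores independently like this ignores diversity within the batch, which is precisely the problem when several near-duplicate points are all uncertain at once.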
Hi, I opened this issue to discuss the implementation of the acquisition functions that you said you would like to make a feature in #48. I am interested in contributing....
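As a concrete starting point for the discussion, here is a minimal sketch of one common acquisition function, predictive entropy; the name and interface are my own suggestion, not the repository's API:

```python
import numpy as np

def predictive_entropy(probs: np.ndarray) -> np.ndarray:
    """Entropy of the predictive distribution, H[y|x] = -sum_c p_c log p_c.

    probs: (n_samples, n_classes) class probabilities.
    Returns one score per sample; higher means more uncertain.
    """
    eps = 1e-12  # guard against log(0)
    return -(probs * np.log(probs + eps)).sum(axis=1)
```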
Hi, I am a research assistant and I have been working on deep Bayesian active learning for the past few weeks. I have been using PyTorch and custom active learning...
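For reference, this is the kind of setup I mean: a minimal MC-dropout sketch in PyTorch, where `model` and `n_mc` are placeholders of my own:

```python
import torch

@torch.no_grad()
def mc_dropout_predict(model: torch.nn.Module, x: torch.Tensor, n_mc: int = 20) -> torch.Tensor:
    """Approximate the predictive distribution by keeping dropout active
    at test time and averaging n_mc stochastic forward passes (Gal &
    Ghahramani's MC dropout). Returns (n_mc, batch, n_classes) probabilities.
    """
    model.train()  # keep dropout stochastic (note: this also affects BatchNorm)
    return torch.stack([torch.softmax(model(x), dim=-1) for _ in range(n_mc)])
```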
Hello, I downloaded the datasets from https://git.uwaterloo.ca/jimmylin/hedwig-data/-/tree/master/datasets/AAPD. Thanks for this, it is really amazing. I noticed the labels were one-hot encoded and was wondering if there was any way to...
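To make the question concrete: decoding a multi-hot row back to column indices is easy enough; the part I am unsure about is recovering the original label names from those indices (the `labels` row below is a made-up example):

```python
import numpy as np

labels = np.array([0, 1, 0, 1])    # one multi-hot row of the AAPD label matrix
active = np.flatnonzero(labels)    # array([1, 3]): which label columns are set
# Mapping these column indices back to human-readable topic names is the part
# I am missing.
```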
Hello, first of all, thanks for those models and datasets, great work, this is very useful! Second of all, I am familiar with almost all the datasets you...
When I run `python main.py --is_train=False --display=True --use_gpu=False` I get: ` [*] GPU : 1.0000 [2018-05-23 17:17:55,692] Making new env: Breakout-v0 {'_save_step': 500000, '_test_step': 50000, 'action_repeat': 4, 'backend':...
I deployed a mistral-7b-instruct-v0.1 model to an endpoint on SageMaker following this [tutorial](https://docs.djl.ai/master/docs/demos/aws/sagemaker/large-model-inference/sample-llm/vllm_deploy_mistral_7b.html). In my particular use case, I want the LLM to output only one token: "0" or "1". Therefore, I...
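For what it is worth, here is a minimal sketch of the kind of call I am making, assuming the container accepts the usual `inputs`/`parameters` JSON payload with `max_new_tokens` (the endpoint name is a placeholder):

```python
import json
import boto3

runtime = boto3.client("sagemaker-runtime")

payload = {
    "inputs": "Answer with a single character, 0 or 1. Is the sky green?",
    # Assuming the LMI/vLLM container honors max_new_tokens: this caps
    # generation at one token but does not constrain WHICH token comes out.
    "parameters": {"max_new_tokens": 1},
}

response = runtime.invoke_endpoint(
    EndpointName="mistral-7b-instruct",  # placeholder endpoint name
    ContentType="application/json",
    Body=json.dumps(payload),
)
print(json.loads(response["Body"].read()))
```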