yuanke

Results: 417 issues by yuanke

(gh_transformers-bloom-inference) amd00@MZ32-00:~/llm_dev/transformers-bloom-inference$ python bloom-inference-scripts/bloom-accelerate-inference.py --name ~/hf_model/bloom --batch_size 1 --benchmark
Using 0 gpus
Loading model /home/amd00/hf_model/bloom
Traceback (most recent call last):
  File "/home/amd00/llm_dev/transformers-bloom-inference/bloom-inference-scripts/bloom-accelerate-inference.py", line 49, in <module>
    tokenizer = AutoTokenizer.from_pretrained(model_name)
  File "/home/amd00/anaconda3/envs/gh_transformers-bloom-inference/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", ...

C:\ub16_prj\dash-network>python usage.py
Running on http://127.0.0.1:8050/
Debugger PIN: 155-546-482
 * Serving Flask app "usage" (lazy loading)
 * Environment: production
   WARNING: Do not use the development server in a production environment.
   Use...

mldl@mldlUB1604:~/ub16_prj/CKRL/src$ ./Train_TransC -size 50 -margin 1 -method 1
size = 50
learing rate = 0.001
margin = 1
method = bern
Segmentation fault (core dumped)
mldl@mldlUB1604:~/ub16_prj/CKRL/src$ ll ../data
total 4660...

If I want to debug it, for example by editing the source and running it immediately without reinstalling it via luarocks, how can I do that?
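One common approach for this kind of edit-and-run loop is to prepend the working copy to `LUA_PATH`, so `require` resolves modules from the edited source tree before the luarocks-installed copies. This is a sketch, not the project's documented workflow; the `SRC_DIR` value and module layout are assumptions for illustration:

```shell
# Sketch: make `require` load Lua modules from the working copy first,
# so edits take effect without running `luarocks make` after each change.
SRC_DIR="."   # hypothetical: directory containing the edited .lua files
# The trailing ';;' tells Lua to append its default search path here.
export LUA_PATH="$SRC_DIR/?.lua;$SRC_DIR/?/init.lua;;"
echo "$LUA_PATH"
```

Running `th main.lua` (or `lua main.lua`) from that shell then picks up the edited modules. Note this only covers pure-Lua code; any compiled (C) parts of the rock still need a rebuild, and `LUA_CPATH` would be the analogous variable for those.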

mldl@mldlUB1604:~/ub16_prj/CommNet$ th main.lua
starting 18 workers
{
  dcomm_entropy_cost : 0
  comm_mode : "avg"
  batch_size : 16
  nagents : 1
  init_std : 0.2
  model : "mlp"
  unshare_hops : false
  show :...

mldl@mldlUB1604:/media/mldl/data1t/ub16_prj/TORCH/CommNet/levers$ th levers.lua --batchsize 512 --lr 10 --clip .01 --hdim 64 --apg 5 --nlevers 5 --reward_only --maxiter 100000 --comm
{
  nlevers : 5
  nlayer : 2
  equal_bags : true
  reward_only...