hpcpony
tensorforce: 0.6.5
python: 3.10.4

```
[user 0.6.5]$ python3 ../quick.py
Traceback (most recent call last):
  File "/home/user/0.6.5/../quick.py", line 4, in <module>
    environment = Environment.create(
  File "/opt/RL/Python_3.10.4_tensorforce/lib/python3.10/site-packages/tensorforce/environments/environment.py", line 204, in create
    return Environment.create(...
```
tensorforce: 0.6.5

```
[user 0.6.5]$ python ../quick.py
WARNING:root:Infinite min_value bound for state.
Traceback (most recent call last):
  File "/home/user/0.6.5/../quick.py", line 24, in <module>
    states = environment.reset()
  File "/opt/RL/Python_3.10.4_tensorforce/lib/python3.10/site-packages/tensorforce/environments/environment.py", line 529, in...
```
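The warning suggests the environment's state spec declares a float state without finite bounds. A minimal stdlib sketch of the kind of check that produces it (the function name and spec-dict layout are illustrative, not Tensorforce's actual internals):

```python
import math
import warnings

def check_state_spec(spec: dict) -> None:
    """Warn when a float state spec has an unbounded min_value or
    max_value (illustrative re-implementation of the bound check)."""
    if spec.get("type") == "float":
        # A missing bound is treated as infinite, i.e. unbounded.
        if math.isinf(spec.get("min_value", -math.inf)):
            warnings.warn("Infinite min_value bound for state.")
        if math.isinf(spec.get("max_value", math.inf)):
            warnings.warn("Infinite max_value bound for state.")
```

Supplying explicit finite `min_value`/`max_value` in the environment's `states()` spec is the usual way to silence the warning.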
charliecloud 0.24, anaconda3 2021.05 (python 3.8.8), CentOS 7.6 (non-root user). I haven't found a comprehensive guide for working with NVIDIA GPUs and charliecloud, but I've managed to piece together enough...
Looks like convert-hf-to-gguf.py makes the assumption that model part files are named `model-*`, while BLOOM (https://huggingface.co/bigscience/bloom/tree/main) uses the convention `model_*`. Looks like there's a similar issue with `pytorch_model-*` vs. BLOOM's...
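One way around the mismatch is to glob for both separator conventions when collecting shard files. A hedged sketch (the helper name and the exact extensions are assumptions, not the script's actual code):

```python
from pathlib import Path

def find_model_parts(model_dir: str) -> list[Path]:
    """Collect model shard files, accepting both the common 'model-*' /
    'pytorch_model-*' naming and BLOOM-style 'model_*' / 'pytorch_model_*'."""
    d = Path(model_dir)
    patterns = (
        "model-*.safetensors", "model_*.safetensors",
        "pytorch_model-*.bin", "pytorch_model_*.bin",
    )
    parts: set[Path] = set()
    for pat in patterns:
        parts.update(d.glob(pat))
    return sorted(parts)
```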
I tried starting with the conda install from the installation.md...

```
conda create -n hf
conda activate hf
conda install -c intel intel_extension_for_transformers
. . .
```

...but it's incomplete....
I'd really like to build from source using one of the release tar files, but I'm having issues. Anybody know of any magic to make it possible? If you can't...
In the example for adding to `gptneox_mem_req` I see that `n_layers` comes from `num_hidden_layers` in the config.json file, but where do the 512, 512, and 1024 come from? Maybe...
### Your current environment

Isolated system, can't provide environment.

### How you are installing vllm

`pip install .`

Is installing from source using VLLM_FLASH_ATTN_SRC_DIR still supported? I don't see it...
### Your current environment

N/A

### How you are installing vllm

`pip install .`

I was building vllm off-line with clones of CUTLASS and flash-attention. flash-attention (setup.py) does a "git...