José Guilherme

Results: 7 issues by José Guilherme

How do I set the shutter speed, please?

Sorry to bother you, but I get this error:

axolotl$ accelerate launch -m axolotl.cli.train examples/openllama-3b/lora.yml
...
.../.local/lib/python3.10/site-packages/flash_attn_2_cuda.cpython-310-x86_64-linux-gnu.so: undefined symbol: _ZN3c104cuda9SetDeviceEi
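An `undefined symbol` error from `flash_attn_2_cuda` typically means flash-attn was compiled against a different PyTorch version than the one currently installed. A minimal sketch for comparing the two installed versions before reinstalling (the package names are the standard PyPI ones; the exact fix will depend on your environment):

```python
# Sketch: report installed versions of torch and flash-attn so a
# build/runtime mismatch can be spotted before reinstalling.
from importlib.metadata import version, PackageNotFoundError

def report(pkg):
    """Return the installed version of a distribution, or a placeholder."""
    try:
        return version(pkg)
    except PackageNotFoundError:
        return "not installed"

print("torch:     ", report("torch"))
print("flash-attn:", report("flash-attn"))

# If the versions were built against different CUDA/torch ABIs, rebuilding
# flash-attn against the installed torch is a common remedy, e.g.:
#   pip uninstall flash-attn
#   pip install flash-attn --no-build-isolation
```

This only diagnoses the mismatch; the reinstall commands in the comment are the usual remedy, not something specific to axolotl.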

I apologize if this is not the appropriate place for questions, concerns, or suggestions regarding the project. One of the major challenges with AI is how quickly things progress, and...

llama stack build
Enter value for name (required): ollama
Enter value for distribution (default: local) (required): local-ollama
Enter value for api_providers (optional):
Enter value for image_type (default: conda) (required):
Build...

### System Info
Linux, Ubuntu, Anaconda

### Information
- [X] The official example scripts
- [ ] My own modified scripts

### 🐛 Describe the bug
llama stack build
/tmp/a/llama/anaconda/envs/stack/lib/python3.10/site-packages/pydantic/_internal/_fields.py:172:...

### System Info
121W in standby

![image](https://github.com/user-attachments/assets/a8c817f7-c99f-4266-906e-6fd0c405ac5b)
![image](https://github.com/user-attachments/assets/0e32c137-93ff-4976-b1a3-e48722512eb7)

### Information
- [X] The official example scripts
- [ ] My own modified scripts

### 🐛 Describe the bug
The GPU...

### 🚀 The feature, motivation and pitch
ollama vision is new: https://ollama.com/x/llama3.2-vision

providers:
  inference:
    - provider_id: remote::ollama
      provider_type: remote::ollama
      config:
        host: 127.0.0.1
        port: 11434

In llama_stack/providers/adapters/inference/ollama/ollama.py:

OLLAMA_SUPPORTED_MODELS = { "Llama3.1-8B-Instruct":...
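The request above amounts to adding one entry to the adapter's model map. A minimal sketch of that change, where both the key `"Llama3.2-11B-Vision-Instruct"` and the tag `"x/llama3.2-vision"` are assumptions taken from the ollama page linked above, not the adapter's actual entries:

```python
# Sketch: extend an ollama adapter's model mapping with the new vision model.
# The existing entry shown here is illustrative of the map's shape, not a
# verbatim copy of llama_stack's real OLLAMA_SUPPORTED_MODELS.
OLLAMA_SUPPORTED_MODELS = {
    "Llama3.1-8B-Instruct": "llama3.1:8b-instruct-fp16",  # example-style entry
}

def register_model(mapping, llama_name, ollama_tag):
    """Add a model to the supported map, refusing silent overwrites."""
    if llama_name in mapping:
        raise ValueError(f"{llama_name} is already registered")
    mapping[llama_name] = ollama_tag
    return mapping

# Hypothetical new entry for the feature request (names assumed, see above).
register_model(OLLAMA_SUPPORTED_MODELS,
               "Llama3.2-11B-Vision-Instruct", "x/llama3.2-vision")
```

In the real adapter this map is what resolves a Llama model identifier to the tag passed to the ollama server, so a one-line addition there is likely the core of the change, plus whatever vision-specific request handling the inference path needs.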