drbh
Hi @josephrocca, thank you for opening this issue. I'm attempting to reproduce on main but am having some trouble. TGI started with ```bash docker run --shm-size=1gb --gpus all \ -v...
Closing, as I just retested with `meta-llama-3.1-8B-Instruct` locally and `stop` is working as expected ```python from openai import OpenAI import os client = OpenAI( base_url="http://localhost:3000/v1", api_key=os.getenv("HF_TOKEN", "YOUR_API_KEY"), ) SEED = 1337...
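For reference, a minimal sketch of the kind of `stop` check described above, assuming a TGI server listening on `localhost:3000` with its OpenAI-compatible API; the prompt, stop sequence, and `model` value are placeholders rather than the exact values from the original test:

```python
import os
from openai import OpenAI

# Point the OpenAI client at the local TGI server (OpenAI-compatible endpoint).
client = OpenAI(
    base_url="http://localhost:3000/v1",
    api_key=os.getenv("HF_TOKEN", "YOUR_API_KEY"),
)

SEED = 1337

# Hypothetical request: prompt and stop sequence are placeholders.
completion = client.chat.completions.create(
    model="tgi",  # TGI serves whichever model it was launched with
    messages=[{"role": "user", "content": "Count from 1 to 10."}],
    max_tokens=64,
    seed=SEED,
    stop=["5"],  # generation should halt once the stop sequence appears
)

print(completion.choices[0].message.content)
```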
Hi @rikardradovac, thank you for opening this issue. Currently we are not planning to support dynamic LoRA loading in TGI. This is because we load all of the weights into...
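As a rough illustration of why swapping adapters at runtime is awkward once weights are resident, here is a generic sketch of LoRA weight merging; this is not TGI's actual loading code, just the standard LoRA arithmetic, and the shapes below are made up:

```python
import torch

def merge_lora(base_weight: torch.Tensor,
               lora_A: torch.Tensor,
               lora_B: torch.Tensor,
               scaling: float) -> torch.Tensor:
    """Fold a LoRA update into a base weight matrix: W' = W + scaling * (B @ A).

    Generic illustration only: once the low-rank update is merged, the adapter
    can no longer be removed or replaced without restoring the original weights.
    """
    return base_weight + scaling * (lora_B @ lora_A)

# Example shapes: base (out x in), rank-8 factors B (out x 8) and A (8 x in).
W = torch.randn(4096, 4096)
A = torch.randn(8, 4096)
B = torch.randn(4096, 8)
W_merged = merge_lora(W, A, B, scaling=0.5)
```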
@bjoernQ I've acquired a logic analyzer and got it working with Saleae's Logic 2; however, I'm not 100% sure what I should be looking for 😅 I've written a simple program...
Thank you everyone for the help! After much trial and error I was able to capture better samples of the I2C communication. I believe the issue is related to the `master_write_read` method...
> I've also gave it a stab @drbh - #979 awesome, thank you! I just pulled it down and got similar results. However, it appears that your changes are for...
@elpiel got it, thank you for sharing. I really appreciate the information and am going to use the code to help debug the issue further 🙏
@bjoernQ unfortunately I still cannot read data from the MLX90614. You were right, I was originally compiling in debug mode. Building in release did improve the timing (shown below), but I still...
Updated the deps and ran some sanity tests with [act](https://github.com/nektos/act) locally. The workflow seems to run as expected:

```bash
act workflow_dispatch -W .github/workflows/release.yml --input version=0.9.1 --container-architecture linux/amd64
```