Gardner Bickford

89 comments by Gardner Bickford

@Yamijalaa Set the env var `OTEL_SDK_DISABLED=true`:

```python
import os
from crewai import Agent, Task, Crew, Process
from crewai_tools import SerperDevTool

os.environ["OTEL_SDK_DISABLED"] = "true"
```
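In case it helps, here is a minimal sketch of how I wire it up. The agent and task below are placeholders rather than anything from a real project, and I set the variable before importing crewai just to be safe:

```python
import os

# Disable telemetry before crewai is imported, so nothing initializes the OTel SDK.
os.environ["OTEL_SDK_DISABLED"] = "true"

from crewai import Agent, Task, Crew, Process

# Placeholder agent/task just to show the env var in context.
# Assumes an LLM API key (e.g. OPENAI_API_KEY) is already configured.
researcher = Agent(
    role="Researcher",
    goal="Summarize a topic in two sentences",
    backstory="Placeholder agent used to check that telemetry stays disabled.",
)

task = Task(
    description="Write a two-sentence summary of the topic.",
    expected_output="A short summary.",
    agent=researcher,
)

crew = Crew(agents=[researcher], tasks=[task], process=Process.sequential)
print(crew.kickoff())
```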

@johnisanerd that warning looks like it comes from the OpenTelemetry Python SDK [here](https://github.com/open-telemetry/opentelemetry-python/blob/187048a35ee93194a70a45720fa68b78d57b6a97/opentelemetry-sdk/src/opentelemetry/sdk/trace/__init__.py#L1218). If you add this to your `/etc/hosts` file:

```
127.0.0.1 telemetry.crewai.com
```

You will not see...
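If you want to confirm the override is actually being picked up, a quick check (just an illustration, nothing CrewAI-specific):

```python
import socket

# Should print 127.0.0.1 once the /etc/hosts entry is in place,
# meaning the telemetry endpoint is no longer reachable from this machine.
print(socket.gethostbyname("telemetry.crewai.com"))
```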

This looks like a dupe of https://github.com/OpenAccess-AI-Collective/axolotl/issues/1092

EDIT: It seems updating xformers to `xformers==0.0.23.post1` resolves this issue. See their [release notes](https://github.com/facebookresearch/xformers/releases/tag/v0.0.23.post1). This patch fixed the installation process for me:

```diff
diff --git a/requirements.txt b/requirements.txt
index 4583850..1b985c0 100644
...
```
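If you're not sure which xformers version ended up in your environment, a quick way to check (this assumes the `packaging` library is available, which it usually is in these setups):

```python
from importlib.metadata import version
from packaging.version import Version

installed = Version(version("xformers"))
# 0.0.23.post1 is the release that fixed the install for me;
# anything older may still hit the issue above.
status = "ok" if installed >= Version("0.0.23.post1") else "needs upgrade"
print(f"xformers {installed}: {status}")
```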

I was able to kick off a tinyllama QLoRA run with 4096 tokens on my 12GB 3060. It's hanging out at 11.4 GB usage, but it's running (pretty slowly). You...

Sounds like a reproducible regression. Maybe log it in a new issue with steps to reproduce so someone can try to narrow down what happened.

Have you two started a branch?

Still getting:

```
llama_model_load: error loading model: create_tensor: tensor 'output.weight' not found
```

```
./main --version
version: 2252 (525213d2)
```

I am still getting this on Apple Silicon:

```shell
$ make clean ; git pull origin ; make -j $(nproc)
$ conda activate llama
$ python3 -m pip install -U...
```