Lincoln Stein
> Hey there, did some digging and found this [here](https://huggingface.co/docs/diffusers/installation#notice-on-telemetry-logging) > > > Our library gathers telemetry information during from_pretrained() requests. This data includes the version of Diffusers and PyTorch/Flax,...
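For reference, the same docs page describes an opt-out: setting the `DISABLE_TELEMETRY` environment variable before anything from the library is imported. A minimal sketch (the model id below is just a placeholder):

```python
# Sketch: opt out of diffusers telemetry before the library is imported.
# DISABLE_TELEMETRY is the switch documented on the linked docs page;
# HF_HUB_OFFLINE additionally blocks all Hub traffic when everything
# needed is already in the local cache.
import os

os.environ["DISABLE_TELEMETRY"] = "YES"
# os.environ["HF_HUB_OFFLINE"] = "1"  # stricter: no Hub requests at all

from diffusers import StableDiffusionPipeline  # imported after the env vars

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5"  # placeholder model id
)
```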
@blessedcoolant [TLDR] Whenever the model is changed and the HF Concepts Library is enabled, InvokeAI reaches out to HuggingFace to download the list of concept trigger terms. It should only...
@blessedcoolant Another issue: The web interface is downloading the concept terms even when "show textual inversion terms from HF" is disabled. It looks like `get_ti_triggers()` always asks for the list...
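A hedged sketch of the fix being described: the remote fetch inside `get_ti_triggers()` should be gated on the UI preference instead of running unconditionally. Only the name `get_ti_triggers()` comes from the actual codebase; the settings flag and the two helpers below are hypothetical stand-ins:

```python
# Hypothetical sketch of the gating described above; the settings flag
# and both helper functions are illustrative stubs, not InvokeAI APIs.
from dataclasses import dataclass


@dataclass
class Settings:
    show_hf_concepts: bool  # the "show textual inversion terms from HF" toggle


def list_local_trigger_terms() -> list[str]:
    # stub: trigger terms from locally installed embeddings
    return ["<my-local-embedding>"]


def fetch_hf_concept_terms() -> list[str]:
    # stub: the only place a request to huggingface.co should happen
    return ["<sd-concepts-library/example>"]


def get_ti_triggers(settings: Settings) -> list[str]:
    terms = list_local_trigger_terms()
    if settings.show_hf_concepts:
        # Only reach out to the HF Concepts Library when the user has
        # opted in; previously this fetch ran unconditionally.
        terms += fetch_hf_concept_terms()
    return terms
```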
Nice detective work. I'll put together a small test script that illustrates the problem and file a bug report against the appropriate HuggingFace repository. Can you tell whether the issue...
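Along the lines of that test script, one rough way to surface the unexpected traffic is to turn on `urllib3` debug logging and watch for Hub connections while a fully cached model loads (the model id is a placeholder):

```python
# Rough repro sketch: with urllib3 debug logging enabled, every outgoing
# HTTPS connection is printed, so a "Starting new HTTPS connection:
# huggingface.co" line during a fully-cached load exposes the request.
import logging

logging.basicConfig(level=logging.DEBUG)
logging.getLogger("urllib3").setLevel(logging.DEBUG)

from diffusers import StableDiffusionPipeline  # noqa: E402

# Placeholder model id; use one already in the local cache so that,
# in principle, no network traffic should be needed at all.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
```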
The legacy web server has been removed. You'll need to launch the new web server with `invokeai-web` (not `invokeai --web`) from the command line:

```
source ~/invokeai/.venv/bin/activate  # linux
C:\YourUserName\invokeai\.venv\Scripts\activate...
```
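Putting the two steps together on Linux (assuming the default `~/invokeai` install location from the snippet above):

```
source ~/invokeai/.venv/bin/activate
invokeai-web
```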
> Can confirm what @psychedelicious reported. I get nearly 3x the speed on an RTX 3080 laptop GPU on Windows too. But I have to note that this speed boost keeps...
Tested on a ROCm system. ***Good news***: it renders a nearly identical "banana sushi" to 1.13. Differences are subtle and about the same as the generation-to-generation variance with `xformers` on a CUDA...
I tested on a CUDA system (NVIDIA RTX A2000, 12GB) just now, and the performance of 1.13+xformers is equal to 2.0.0 without xformers. No 3x speedup in my hands, unfortunately!
> Maybe somewhere in `CLI.py` (or another place/places), we do: `os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"`? As long as that hits before torch and friends load up, I imagine that will work. It...
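A minimal sketch of that suggestion, assuming the env var is set at the very top of `CLI.py` (or wherever the first `import torch` happens):

```python
# Sketch of the suggestion quoted above: the variable must be in the
# environment before torch (and anything that imports torch) loads,
# because cuBLAS reads it at initialization.
import os

os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"

import torch  # noqa: E402 -- deliberately imported after the env var is set

# ":4096:8" is the workspace setting cuBLAS needs for reproducible
# GEMMs, which is what lets this call succeed on CUDA >= 10.2:
torch.use_deterministic_algorithms(True)
```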
> @lstein did you comment out the call to `_adjust_memory_efficient_attention`? Doing that was half the 200% improvement.

It's already commented out in the PR.