ddpasa
### Suggestion Description

What is the state of ROCm support for iGPUs such as the Radeon 780M in a Ryzen 7 8840U? ([example](https://www.amd.com/en/products/processors/laptop/ryzen/8000-series/amd-ryzen-7-8840u.html)) llama.cpp runs great on these systems, utilizing...
I have a dual-SIM setup with both SIMs active. When I run `termux-telephony-deviceinfo`, I only get the information for the first one. Is it possible to get both, and...
moondream2 is an amazing tiny VLM. The owner (https://github.com/vikhyat) releases updates quite frequently. I'm not sure which version ollama currently has, but there was a new release last week...
This is a small but extremely helpful debug option for printing the gradient and model norms during training. I use it for debugging LR-related issues.
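The snippet above doesn't show how that debug value is computed, but a global gradient norm is conventionally the L2 norm over all parameter gradients taken together. A minimal framework-agnostic sketch (the function name and plain-list gradients are illustrative assumptions, not this project's actual API):

```python
import math

def global_grad_norm(grads):
    """Global L2 norm over all parameter gradients, the kind of
    scalar a grad-norm debug option would typically print."""
    # Sum of squares across every element of every gradient,
    # then one square root -- equivalent to the norm of the
    # concatenation of all gradient vectors.
    total = sum(g * g for grad in grads for g in grad)
    return math.sqrt(total)

# Example: two "parameter" gradients, flattened to plain lists.
grads = [[3.0, 4.0], [0.0]]
print(global_grad_norm(grads))  # 5.0
```

Watching this single scalar per step is often enough to spot an exploding or vanishing LR before the loss curve makes it obvious.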
The sd3 branch recommends PyTorch 2.4, but this is too old for new hardware (such as the 5090 or MI300). I tried installing and running with 2.7, but got lots...
### Description

Right now, you have to select a time window to export a CSV file. When you have been using the app for a long time, it gets cumbersome...
### Your question

Many of the new and cool AMD toys, like the Strix Halo (gfx1151), need ROCm 7.1 from TheRock to function properly, which means that a default installation...
Added CoMaps
Is it possible to run inference with diffusers using a single-file safetensors created for ComfyUI/SD-Forge? It looks like FluxPipeline.from_single_file() might be intended for this purpose, but I'm getting the following...