Pam

Results: 9 issues/PRs by Pam

**Describe what this pull request is trying to achieve.** Adds support for scaled dot product attention. It performs on par with xformers and doesn't require any external library to work....
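For context, a minimal sketch (not the PR's actual code) of what the built-in `torch.nn.functional.scaled_dot_product_attention` replaces; the tensor shapes here are illustrative:

```python
import torch
import torch.nn.functional as F

# Illustrative shapes: (batch, heads, seq_len, head_dim)
q = torch.randn(1, 8, 64, 40)
k = torch.randn(1, 8, 64, 40)
v = torch.randn(1, 8, 64, 40)

# Manual attention, roughly what the xformers / fallback path computes:
scale = q.shape[-1] ** -0.5
attn = (q @ k.transpose(-2, -1) * scale).softmax(dim=-1)
out_manual = attn @ v

# Built-in SDPA (PyTorch >= 2.0), which can dispatch to flash or
# memory-efficient kernels without any extra dependency:
out_sdpa = F.scaled_dot_product_attention(q, k, v)

assert torch.allclose(out_manual, out_sdpa, atol=1e-5)
```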

There are several possible inference speedups: 1. With torch 2.0 we can use `torch.compile` to speed up inference on Linux/WSL - I'm getting a noticeable performance boost on both LLaMA 7B (fp16...
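A minimal sketch of the `torch.compile` idea, assuming PyTorch >= 2.0; the `Linear` module is just a stand-in for the real model:

```python
import torch

model = torch.nn.Linear(4096, 4096)   # placeholder for the actual LLaMA module
compiled_model = torch.compile(model)  # requires PyTorch >= 2.0 (Linux/WSL for the Triton backend)

with torch.inference_mode():
    x = torch.randn(1, 4096)
    y = compiled_model(x)  # first call triggers compilation; subsequent calls run the optimized graph
```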

~40% inference speed gain in 8-bit quantized models (7.6 tokens/s vs 5.2 tokens/s on LLaMA 13B) on an RTX 4090 with the `--int8-threshold 0` startup argument. May increase VRAM usage during inference....
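A hedged sketch of what an `--int8-threshold 0` flag would map to when loading an 8-bit model through transformers/bitsandbytes; the checkpoint name is only an example and the exact wiring in the project may differ:

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=0.0,  # per the PR note: faster inference, but may increase VRAM usage
)

model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-13b",  # example checkpoint
    quantization_config=quant_config,
    device_map="auto",
)
```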

Custom scheduler nodes have sigma inputs capped at 1000, but ZSNR models operate on sigmas larger than this cap (`sigma_max` for a ZSNR model ~= 4518.7637). This PR raises the max...
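A hedged sketch of the kind of change described, using the usual ComfyUI node `INPUT_TYPES` convention; the node name, defaults, and the new cap of 20000.0 are illustrative, not the PR's actual values:

```python
class CustomSchedulerNode:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                # before: "max": 1000.0, which rejects ZSNR sigma_max (~4518.76)
                "sigma_max": ("FLOAT", {"default": 14.614642, "min": 0.0, "max": 20000.0, "step": 0.01}),
                "sigma_min": ("FLOAT", {"default": 0.0291675, "min": 0.0, "max": 20000.0, "step": 0.01}),
            }
        }
    # RETURN_TYPES, FUNCTION, etc. omitted from this sketch
```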

Should resolve #12 and partially resolves #28 (it incorporates functionality from the negpip node while remaining incompatible with it). Based on https://github.com/laksjdjf/cd-tuner_negpip-ComfyUI. Makes it possible to use negative weights in prompts. Since...

PatchModelAddDownscale accesses `model.model_sampling` directly instead of using the `get_model_object` function, causing incorrect behavior when combined with nodes like ModelSamplingDiscrete. This PR contains a fix only for the PatchModelAddDownscale node. There might also be...
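A hedged sketch of the fix described above (the exact code in the PR may differ); the point is that fetching the sampling object through the ModelPatcher API respects registered object patches:

```python
# before (buggy): reads the attribute from the underlying model directly,
# skipping object patches applied by nodes like ModelSamplingDiscrete
# model_sampling = model.model.model_sampling

# after: goes through the ModelPatcher API, so patched objects are used
model_sampling = model.get_model_object("model_sampling")
sigma_start = model_sampling.percent_to_sigma(start_percent)
sigma_end = model_sampling.percent_to_sigma(end_percent)
```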

PR with a similar fix: https://github.com/Extraltodeus/sigmas_tools_and_the_golden_scheduler/pull/1#issuecomment-2094929402

Seems like calling `model_management.load_model_gpu` inside `comfy.sd.CLIP.__init__` causes CLIP object patches to be ignored, breaking CLIPNegPip custom node from https://github.com/pamparamm/ComfyUI-ppm

Images generated by `KSampler (Advanced)` and `SamplerCustom` nodes are nondeterministic when `add_noise` is set to False. Seems like we need to call `torch.manual_seed(seed)` before sampling in order to get deterministic...
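A minimal sketch of the suggested fix (not the repository's actual code): seed the global RNG right before sampling so any random draws inside the sampler stay reproducible when no noise is added. The `seed` and `add_noise` values stand in for the node's inputs:

```python
import torch

seed = 123456      # would come from the node's seed input
add_noise = False  # the setting that currently produces nondeterministic results

if not add_noise:
    torch.manual_seed(seed)
# ... run the sampler here; subsequent torch RNG usage is now deterministic ...
```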
