Pakkapon Phongthawee

Results: 55 comments by Pakkapon Phongthawee

As far as I know, no one has gotten that kind of relighting to work yet. I'm still working on it too, and I will publish it if I...

After you get an EXR file from `exposure2hdr.py`, you can do virtual object insertion by using the EXR file as the lighting for the object in any 3D...
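For readers unfamiliar with why an EXR file can serve as lighting: EXR stores linear radiance values, which may exceed 1.0 for bright light sources. A minimal sketch (not the project's code; the array below is a synthetic stand-in for data a real workflow would load from the `exposure2hdr.py` output with an EXR reader):

```python
import numpy as np

# Synthetic stand-in for an HDR environment map loaded from an EXR file.
# Linear radiance; values above 1.0 are valid and represent bright emitters.
hdr = np.array([[0.1, 1.0, 8.0]])

# Simple gamma tone map to preview the environment map as an LDR image.
ldr = np.clip(hdr, 0.0, 1.0) ** (1.0 / 2.2)

print(ldr.round(3))
```

A renderer would instead sample the linear `hdr` values directly as an environment light; the tone map here is only for visual inspection.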

Yes, it can. But I'm unsure whether the quality improvement is worth the running time. You can produce more exposures by passing `--ev` to `inpaint.py`. This will change the EV...
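As a reminder of the convention behind the `--ev` flag: each exposure-value step is a factor-of-two change in linear exposure, so negative EV brackets darken the image to preserve highlights. A minimal sketch of that relationship (the flag is from the comment above; the helper function is hypothetical):

```python
def apply_ev(linear_pixel: float, ev: float) -> float:
    """Scale a linear-radiance pixel value by an exposure-value (EV) offset.

    Each +1 EV doubles exposure; each -1 EV halves it.
    """
    return linear_pixel * (2.0 ** ev)

# A -2.5 EV bracket darkens a pixel by a factor of 2**2.5 (about 5.66x).
print(apply_ev(1.0, -2.5))
```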

Currently, we have no plan to support positive exposure values.

If you do not set the seed by providing `--seed` to `inpaint.py` (i.e., `args.seed == "auto"`), the first seed (ball_0.png) is a hash calculated from the filename without the file extension. https://github.com/DiffusionLight/DiffusionLight/blob/7990de166a6ffa07c0888976ca4ac10422fbabc3/inpaint.py#L291-L293 For...
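The linked lines show the repository's exact derivation; as a general illustration of the idea, a seed can be obtained by hashing the filename stem to a deterministic integer. A minimal sketch (the hash choice here is illustrative and not necessarily the one used in `inpaint.py`):

```python
import hashlib
import os


def filename_seed(path: str) -> int:
    """Derive a deterministic seed from a filename without its extension."""
    stem = os.path.splitext(os.path.basename(path))[0]
    # Illustrative: any stable hash works; the repository's choice may differ.
    digest = hashlib.sha256(stem.encode("utf-8")).hexdigest()
    return int(digest, 16) % (2 ** 32)  # fit into a 32-bit seed range


# The same filename always yields the same first seed, regardless of directory.
assert filename_seed("scenes/room.png") == filename_seed("room.png")
```

Because the seed depends only on the filename, re-running on the same input reproduces the same first chrome ball.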

As mentioned in https://github.com/DiffusionLight/DiffusionLight/issues/9#issuecomment-1888495833 above, you can increase the resolution by setting `--image_height`, `--image_width`, and `--ball_size`. But I'm not sure whether increasing them will improve the quality of the...

I don't get your question. Can you explain more about "post lora to change the exposure"? If you mean posting the LoRA weights, maybe check out the [hugging face model card](https://huggingface.co/DiffusionLight/DiffusionLight)...

The only way to use DiffusionLight right now is to follow our [Getting started](https://github.com/DiffusionLight/DiffusionLight?tab=readme-ov-file#getting-started) guide. We currently have no plan to support other Stable Diffusion interfaces, at least until our publication passes the...

If your Mac uses a Radeon graphics card (e.g., Mac Pro 2019), you need [ROCm](https://pytorch.org/blog/pytorch-for-amd-rocm-platform-now-available-as-python-package/) instead of the CUDA toolkit. For other Macs, such as Apple silicon-based (M1/M2/M3) or Intel-based without...

I have had no luck making our method run with less than 16GB of VRAM. But if you prefer CPU inference, I have already made the code support CPU inference in...