ShenZheng2000
Which **text prompt** and **seed** are you using for the other diffusion models, such as InstructPix2Pix? The **text prompt** 'driving in the night' aligns with Fig. 2, while 'day to night' or 'day2night'...
**Checklist**
1. I have searched related issues but cannot get the expected help.
2. I have read the [FAQ documentation](https://mmdetection.readthedocs.io/en/latest/faq.html) but cannot get the expected help.
3. The bug has...
BDD100K images have a resolution of (1280, 720), but in [this](https://github.com/open-mmlab/mmsegmentation/blob/main/configs/_base_/datasets/bdd100k.py) config file you are using a scale of (2048, 1024), which is designed for Cityscapes, not BDD100K.
Here is the inference script I used for ControlNet image-to-image translation. Note that I already downloaded your `config.json` and `diffusion_pytorch_model.safetensors` and put them into `controlnet`. ``` from diffusers...
I followed the instructions provided [here](https://github.com/LiheYoung/Depth-Anything/tree/main/semseg) to fine-tune semantic segmentation on custom images. Despite using an RTX 4090 with 24 GB of VRAM, reducing the crop_size to 128x128, and using...
This [document](https://github.com/GaParmar/img2img-turbo/blob/main/docs/training_cyclegan_turbo.md) only shows the text prompts for **horse2zebra**, given as `fixed_prompt_a.txt` and `fixed_prompt_b.txt`. However, the **BDD100K** dataset does not have such text prompts. Could you provide the text prompts...
I would like to know how long it takes to train the model.