```
accelerate==0.18.0
aiofiles==23.1.0
aiohttp==3.8.4
aiosignal==1.3.1
albumentations==1.3.0
altair==5.0.1
anyio==3.7.0
async-timeout==4.0.2
attrs==23.1.0
braceexpand==0.1.7
certifi==2022.12.7
charset-normalizer==3.1.0
click==8.1.3
cmake==3.26.3
contourpy==1.0.7
cycler==0.11.0
dataclasses==0.6
datasets==2.11.0
-e git+https://github.com/JingyeChen/diffusers.git@90d9acf2cbb29dfdd0f2204435c4c3f9d11381f0#egg=diffusers
dill==0.3.6
docker-pycreds==0.4.0
exceptiongroup==1.1.1
ExifRead-nocycle==3.0.1
fastapi==0.96.0
ffmpy==0.3.0
filelock==3.12.0
fire==0.4.0
fonttools==4.39.4
...
```
> It seems weird. Did you use `accelerate config` to specify the number of GPUs to be used? Maybe you can try this.
> Hi,
>
> Thanks for the suggestion. I've downgraded from 0.0.17 to xformers==0.0.16, the same as your config, and the memory consumption decreased. But the output became the...
Maybe you can try this: https://github.com/huggingface/diffusers/issues/2234#issuecomment-1416931212. Currently we do not have plans to adapt the code for xformers.
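For reference, here is a minimal sketch of the kind of change the linked comment discusses: enabling xformers memory-efficient attention on a diffusers pipeline. The checkpoint name is a placeholder rather than this repo's actual setup, and outputs can differ slightly across xformers versions (e.g. 0.0.16 vs. 0.0.17):

```python
from diffusers import StableDiffusionPipeline

# Placeholder checkpoint; substitute whichever model you are actually loading.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe = pipe.to("cuda")

# With xformers installed, this swaps attention for the memory-efficient kernels.
# Note: outputs may differ slightly between xformers versions.
pipe.enable_xformers_memory_efficient_attention()
```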
Thanks for your attention to our work! Could you send us the indices (xxxxx_xxxxxxxxx) of the samples that contain incorrect annotations? I will then check those samples. The filtering rules are...
Thanks for your feedback. This is a mistake; the command should be:

```
img2dataset --url_list=url.txt --output_folder=laion_ocr --thread_count=64 --resize_mode=no
```

We will fix it in the README. Thanks!
It is hard to say and may take some time. Please stay tuned ;D
We have noticed that a few samples carry mismatched annotations, caused by a resize operation applied while releasing the dataset. We will fix it within one week. If you want...
> Hi! Thanks for your good job! If I use this command: `img2dataset --url_list=url.txt --output_folder=laion_ocr --thread_count=64 --resize_mode=no`, where should I resize the ori_image? In train.py? (I find the...
The metadata, including the detection and segmentation results, has been updated. Recognition/detection/segmentation (rec/det/seg) were run at size 512x512, so you can use np.clip(value, 0, 512) to clip out-of-range coordinates.

> > @JingyeChen, Hi,...
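To make the resize-and-clip step concrete, here is a small sketch. The file path and box layout are hypothetical illustrations, not the repo's actual annotation format:

```python
import numpy as np
from PIL import Image

# Hypothetical shard path; the real layout depends on your img2dataset output.
image = Image.open("laion_ocr/00000/000000000.jpg")

# rec/det/seg annotations were produced at 512x512, so resize the original
# image to the same canvas before pairing it with the annotations.
image = image.resize((512, 512))

# Example (x0, y0, x1, y1) box; clip coordinates into the 512x512 canvas
# with np.clip(value, 0, 512), as suggested above.
boxes = np.array([[10, 20, 530, 500]])
boxes = np.clip(boxes, 0, 512)
```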