
DALL·E Mini - Generate images from a text prompt

Results: 100 dalle-mini issues, sorted by recently updated

I'm trying to run the inference notebook, but I get this error while loading DALL·E Mini; I'm not sure how it gets the checkpoint shape to 259, I guess it should be...

Each time I restart the container I have to paste in the wandb key, and then it re-downloads 5GB of model data. This can be very slow (even with 100 Mbps...
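A minimal sketch of a possible workaround, assuming the container runs the standard inference notebook and a host volume can be mounted for persistence; the paths are illustrative, and `WANDB_API_KEY` / `WANDB_CACHE_DIR` are standard wandb environment variables:

```python
import os

# Assumed workaround: set these before wandb is imported by the notebook.
os.environ["WANDB_API_KEY"] = "<your-wandb-key>"          # skips the interactive key prompt
os.environ["WANDB_CACHE_DIR"] = "/workspace/wandb-cache"  # keep the artifact cache on a mounted volume
```

With the cache directory on a persistent mount, the ~5GB model artifact should only need to be downloaded on the first run.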

The initial base image often causes errors from mismatched Python, pip, JAX, and CUDA versions, as made evident by #260, #234 and #233. This...

For reproducibility, I want to be able to generate the same set of images each time I run the inference script locally on my machine, for the same set of...
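A minimal sketch of the usual JAX approach, assuming generation is driven by a JAX PRNG key as in the inference notebook; fixing the seed makes the sequence of subkeys, and hence the sampled images, repeatable:

```python
import jax

seed = 0                        # any fixed integer
key = jax.random.PRNGKey(seed)  # deterministic starting key

# Each generation step splits off a fresh subkey; starting from the same seed
# yields the same sequence of subkeys and therefore the same samples.
key, subkey = jax.random.split(key)
```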

Dunno if anyone else has noticed, but Jupyter has a new thing -- JupyterLab. It's a much better interface overall. Zero compatibility issues. Much nicer -- seriously, much much more...

Pins the exact version of jaxlib for when the build fails with the following error: ``` #5 12.13 ERROR: No matching distribution found for jaxlib==0.3.10+cuda11.cudnn82 (from jax[cuda]) ------ executor failed running [/bin/sh -c...
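For illustration, a hedged sketch of the kind of Dockerfile line this implies; the versions come from the error message above and the find-links URL is the standard jax-releases index, but the repo's actual Dockerfile may differ:

```dockerfile
# Assumed fix: pin jax and the matching CUDA jaxlib wheel explicitly.
RUN pip install "jax==0.3.10" "jaxlib==0.3.10+cuda11.cudnn82" \
    -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
```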

I tried to run the notebook in Colab Pro+, and got: ``` UnfilteredStackTrace Traceback (most recent call last) in () 19 temperature, ---> 20 cond_scale, 21 ) 46 frames...

generate() from Transformers can take encoder outputs as kwargs instead of running the encoder. This PR extends this to "super conditioning" sampling. It also enables providing only one "null sequence"...
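For context, a hedged sketch of the generic Transformers pattern this builds on (shown with a plain BART checkpoint, not dalle-mini's own models): the encoder runs once and its outputs are passed to generate() as a kwarg instead of re-running the encoder.

```python
from transformers import AutoTokenizer, BartForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

inputs = tokenizer("an avocado armchair", return_tensors="pt")

# Run the encoder once...
encoder_outputs = model.get_encoder()(**inputs)

# ...and reuse its outputs in generate() instead of re-encoding the prompt.
generated = model.generate(
    input_ids=inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    encoder_outputs=encoder_outputs,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```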

https://huggingface.co/spaces/dalle-mini/dalle-mini In the demo inference site, it'd be nice to be able to input your own parameters such as `seed`, `gen_top_k`, `gen_top_p`, for example...

Hi, basically when I run locally by following the [inference pipeline notebook](https://github.com/borisdayma/dalle-mini/blob/main/tools/inference/inference_pipeline.ipynb), the script gets stuck at the end while generating an image, at 0%.