KyonP
I am also looking forward to this code being updated. I have tried several modifications, such as applying deeper MLPs with patch outputs and positional embeddings, but they didn't work. outputs...
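For reference, a minimal sketch of the kind of head I mean, assuming a PyTorch module over `(batch, num_patches, dim)` patch outputs; the class name, shapes, and defaults are illustrative only, not the repo's code:

```python
import torch
import torch.nn as nn

class PatchMLPHead(nn.Module):
    """Deeper MLP over per-patch outputs, plus a learned positional embedding."""
    def __init__(self, num_patches=64, dim=512, hidden=1024, depth=3):
        super().__init__()
        # One learned embedding per patch position, broadcast over the batch.
        self.pos_emb = nn.Parameter(torch.zeros(1, num_patches, dim))
        dims = [dim] + [hidden] * (depth - 1) + [dim]
        layers = []
        for d_in, d_out in zip(dims[:-1], dims[1:]):
            layers += [nn.Linear(d_in, d_out), nn.GELU()]
        layers.pop()  # no activation after the final projection
        self.mlp = nn.Sequential(*layers)

    def forward(self, x):  # x: (batch, num_patches, dim)
        return self.mlp(x + self.pos_emb)

# e.g. PatchMLPHead()(torch.randn(2, 64, 512)).shape -> (2, 64, 512)
```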
BTW, my TF version is 1.1, with CUDA 8.0 on Ubuntu 14.04.
Still suffering. Does anyone else have this issue?
```bash
python -m bin.train \
  --config_paths=" ./example_configs/nmt_medium.yml" \
  --model_params "
      vocab_source: $VOCAB_SOURCE
      vocab_target: $VOCAB_TARGET" \
  --input_pipeline_train "
      class: ParallelTextInputPipeline
      params:
        source_files:
          - $TRAIN_SOURCES
        target_files:
          - $TRAIN_TARGETS" \
  --input_pipeline_dev "
      class: ParallelTextInputPipeline
      params:
        source_files:
          -...
```
I am trying to train the code on an A100 (80 GB VRAM) and it keeps failing due to OOM.

```bash
if [ "$1" = "pororo" ]; then
  echo "Training on Pororo"
...
```
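In case it helps, one generic OOM workaround is gradient accumulation: shrink the per-step batch and accumulate gradients so the effective batch size stays the same. A minimal sketch with a toy model and loop (not this repo's trainer):

```python
import torch
import torch.nn as nn

# Toy stand-ins; the real model, optimizer, and data loader differ.
model = nn.Linear(16, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loader = [(torch.randn(8, 16), torch.randn(8, 1)) for _ in range(8)]

accum_steps = 4  # micro-batches accumulated per optimizer step
optimizer.zero_grad()
for step, (x, y) in enumerate(loader):
    # Scale the loss so accumulated gradients average over micro-batches.
    loss = nn.functional.mse_loss(model(x), y) / accum_steps
    loss.backward()
    if (step + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```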
Bump. I am also wondering whether this is available.
Is there a way to save the output (generated and ground-truth) images from the best-performing checkpoint? I looked into your code; it is hard for me to utilize your `acc_tensors_to_images`...
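In the meantime, this is roughly what I am after, assuming the generated and ground-truth tensors come out as `(N, C, H, W)` floats in `[0, 1]` (an assumption; I have not confirmed the format `acc_tensors_to_images` expects):

```python
import os
import torch
from torchvision.utils import save_image

def save_tensor_images(generated, ground_truth, out_dir="outputs"):
    """Save paired generated/ground-truth tensors as PNGs, one pair per index."""
    os.makedirs(out_dir, exist_ok=True)
    for i, (gen, gt) in enumerate(zip(generated, ground_truth)):
        save_image(gen.clamp(0, 1), os.path.join(out_dir, f"{i:04d}_gen.png"))
        save_image(gt.clamp(0, 1), os.path.join(out_dir, f"{i:04d}_gt.png"))

# Dummy example:
save_tensor_images(torch.rand(4, 3, 64, 64), torch.rand(4, 3, 64, 64))
```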
Also, your pre-trained weights in `./1.3B/` seem incompatible for inference.

```
root@1074e9836478:/home/my/storydalle/story-dalle# bash infer_story.sh pororo test
Evaluating on Pororo
Global seed set to 42
Initializing the Conditional Dalle model
Setting...
```
For the weight shape mismatch case, I found out the reason 😓: I didn't set the checkpoint path to contain the string 'pororo'. I changed the if-condition to check dataset_name instead....
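For anyone hitting the same mismatch, a hypothetical sketch of the change (the names `ckpt_path` and `dataset_name` are assumptions, not the repo's actual identifiers):

```python
def is_pororo(ckpt_path: str, dataset_name: str) -> bool:
    # Before: the branch keyed on the checkpoint path, so a path without
    # the substring 'pororo' silently selected the wrong model config:
    #     return 'pororo' in ckpt_path
    # After: key on the dataset name, which does not depend on where the
    # checkpoint file happens to live:
    return dataset_name == 'pororo'
```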
I have solved this issue by re-downloading the model files with [this script](https://github.com/CompVis/latent-diffusion/tree/main/scripts). Maybe it was caused by a file fragmentation problem. Thank you for your suggestions!