LWM
Creating a single PNG image succeeds, but video generation fails. On an A100 80G, setting `--n_frames=2048` always fails:

```
!JAX_TRACEBACK_FILTERING=off python3 -u -m lwm.vision_generation \
    --prompt={prompt} \
    --output_file={output_filename} \
    --temperature_image=1.0 \
    --top_k_image=8192 \
    --cfg_scale_image=5.0 \
    ...
```
How should we understand data parallelism (DP) and fully sharded data parallelism (FSDP)? Should they be defined in terms of the previous understanding? A second question: do sequence parallelism (SP) and tensor parallelism (TP) together constitute model parallelism (MP)...
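As a rough mental model (not LWM's actual implementation): in DP every device holds a full replica of the parameters and averages gradients with an all-reduce, while in FSDP each device holds only a shard of the parameters, all-gathers them for compute, then reduce-scatters the gradients so each device updates just its shard. SP and TP are both usually grouped under model parallelism: TP splits within weight matrices, SP splits along the sequence axis. A toy single-process NumPy sketch of the DP vs. FSDP difference (device "hosts" simulated as list entries; `local_grad` is a hypothetical stand-in for a real backward pass):

```python
import numpy as np

rng = np.random.default_rng(0)
n_devices = 4
params = rng.normal(size=(8,))           # one flat parameter vector
data = rng.normal(size=(n_devices, 16))  # each device sees its own batch shard

def local_grad(p, batch):
    # stand-in for a real backward pass: gradient of a dummy quadratic loss
    return p * batch.mean()

# --- Data parallelism: every device holds a FULL copy of the params ---
dp_replicas = [params.copy() for _ in range(n_devices)]
grads = [local_grad(p, data[d]) for d, p in enumerate(dp_replicas)]
avg_grad = np.mean(grads, axis=0)        # all-reduce across devices
dp_replicas = [p - 0.1 * avg_grad for p in dp_replicas]

# --- FSDP: every device holds only a 1/n_devices SHARD of the params ---
shards = np.split(params.copy(), n_devices)
full = np.concatenate(shards)            # all-gather before compute
grads = [local_grad(full, data[d]) for d in range(n_devices)]
avg_grad = np.mean(grads, axis=0)
# reduce-scatter: each device keeps only its shard of the averaged gradient
grad_shards = np.split(avg_grad, n_devices)
shards = [s - 0.1 * g for s, g in zip(shards, grad_shards)]

# Both schemes produce identical updated parameters;
# FSDP just never stores the full copy per device.
assert np.allclose(dp_replicas[0], np.concatenate(shards))
```

The assertion makes the point concrete: FSDP is mathematically the same update as DP, traded for lower per-device memory at the cost of extra communication.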
I'm currently able to use run_vision_chat.sh with a limited number of video frames being passed in for a single text query. The text result is output from the model and...
Hi, I'm using a v3-8 TPU in GCP, and while loading the model I get the error below: The above exception was the direct cause of the following exception: Traceback (most recent...
Thank you for publishing such impressive work! Could you also release the **LWM-1K/8K** versions of LWM?
Hi everyone, this might be a naive question, but why do we have `model_max_length: 2048` in the tokenizer_config.json (https://huggingface.co/LargeWorldModel/LWM-Text-Chat-1M/blob/main/tokenizer_config.json)? Thank you!
Thanks for sharing this excellent work. We would like to try ring attention with PyTorch models. Are there any plans to develop a ring attention implementation under...
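Independent of framework, the core mechanic of ring attention is: each host keeps its query block fixed while key/value blocks rotate around the ring, and partial results are merged with a numerically stable online softmax. A minimal single-process NumPy sketch (hosts simulated as array blocks; this is an illustration of the idea, not the paper's JAX implementation):

```python
import numpy as np

def ring_attention(q, k, v, n_hosts=4):
    """Simulate ring attention: query blocks stay put, K/V blocks rotate."""
    qs = np.split(q, n_hosts)   # each "host" owns one query block
    ks = np.split(k, n_hosts)
    vs = np.split(v, n_hosts)
    # per-host online-softmax accumulators
    out = [np.zeros_like(b) for b in qs]
    m = [np.full(b.shape[0], -np.inf) for b in qs]  # running logit max
    l = [np.zeros(b.shape[0]) for b in qs]          # running softmax denom
    for step in range(n_hosts):
        for h in range(n_hosts):
            # at each ring step, host h sees the K/V block rotated to it
            j = (h + step) % n_hosts
            s = qs[h] @ ks[j].T / np.sqrt(q.shape[1])  # local logits
            m_new = np.maximum(m[h], s.max(axis=1))
            scale = np.exp(m[h] - m_new)               # rescale old partials
            p = np.exp(s - m_new[:, None])
            l[h] = l[h] * scale + p.sum(axis=1)
            out[h] = out[h] * scale[:, None] + p @ vs[j]
            m[h] = m_new
    return np.concatenate([o / d[:, None] for o, d in zip(out, l)])

def full_attention(q, k, v):
    s = q @ k.T / np.sqrt(q.shape[1])
    p = np.exp(s - s.max(axis=1, keepdims=True))
    return (p / p.sum(axis=1, keepdims=True)) @ v

rng = np.random.default_rng(0)
q, k, v = (rng.normal(size=(16, 8)) for _ in range(3))
# blockwise ring result matches ordinary full attention
assert np.allclose(ring_attention(q, k, v), full_attention(q, k, v))
```

Because each host only ever materializes one K/V block at a time, memory per host stays constant as sequence length grows, which is what makes the very long contexts feasible.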
What does each parameter in this file mean? Could you add some comments? If I want to generate higher-resolution or longer videos, which parameters should I modify? Also, are there any limits on video size or resolution?