ChesonHuang

Results: 19 comments by ChesonHuang

Ran into the same problem. For single-machine, single-GPU training you can use the command below; it looks like a PyTorch version or argument issue.

##### For single-machine single-GPU, use the following command and do not pass the distributed arguments:

```shell
python cn_clip/training/main.py \
    --train-data=${train_data} \
    --val-data=${val_data} \
    --resume=${resume} \
    --logs=${output_base_dir} \
    --name=${name} \
    --save-step-frequency=${save_step_frequency} \
    --save-epoch-frequency=${save_epoch_frequency} \
    --log-interval=${log_interval} \
    ${report_training_batch_acc} \
    --context-length=${context_length} \
    --warmup=${warmup} \
    --batch-size=${batch_size}...
```

> @learning233 @JianxinMa @yangapku @jxst539246 @manymuch

When predicting locally, don't compute gradients; wrap the inference in `with torch.no_grad()`:

```python
model.eval()  # this line keeps the results consistent across runs
with torch.no_grad():
    # inference code
    output = model(input)
```
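A runnable sketch of the pattern above; the tiny `torch.nn.Linear` model is just a stand-in for illustration:

```python
import torch

# Stand-in model; any nn.Module works the same way.
model = torch.nn.Linear(4, 2)

model.eval()  # fix dropout/batch-norm behavior so results are repeatable
with torch.no_grad():  # disable autograd bookkeeping during inference
    output = model(torch.randn(1, 4))

print(output.requires_grad)  # False: no gradient graph was built
```

`model.eval()` makes stochastic layers deterministic, while `torch.no_grad()` skips building the autograd graph, which saves memory and time during inference.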

Most likely these data records of yours are not separated by newline characters (`\n`).
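A quick way to spot the problem is to count newline-terminated records; if two records were written without a separating `\n`, they parse as one line. The file contents below are made up for illustration:

```python
def count_records(raw: bytes) -> int:
    """Count newline-separated records in raw file bytes."""
    text = raw.decode("utf-8")
    # splitlines() drops the empty trailing piece a final "\n" would create
    return len(text.splitlines())

# Two records joined without "\n" collapse into a single line:
good = b"img1\tcaption one\nimg2\tcaption two\n"
bad = b"img1\tcaption oneimg2\tcaption two\n"
print(count_records(good))  # 2
print(count_records(bad))   # 1
```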

The arguments are probably incorrect:

```shell
GPUS_PER_NODE=1  # number of GPUs on each machine
WORKER_CNT=1     # number of GPU workers (machines); for single-worker training, please set to 1
export RANK=0    # The rank of this worker, should be in {0, ...,...
```
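For completeness, a sketch of a full single-machine, single-GPU environment setup; the `MASTER_ADDR`/`MASTER_PORT` names follow the usual `torch.distributed` convention and are assumptions here, not copied from the repo's script:

```shell
# Single-machine, single-GPU setup (sketch).
GPUS_PER_NODE=1        # GPUs on this machine
WORKER_CNT=1           # number of machines; single-worker training uses 1
export RANK=0          # rank of this worker, in {0, ..., WORKER_CNT-1}
export MASTER_ADDR=localhost
export MASTER_PORT=8514

# world size = total number of processes across all machines
echo "world size: $((GPUS_PER_NODE * WORKER_CNT))"
```

With `WORKER_CNT=1` and `RANK=0` the launcher sees a world size of 1, so no inter-machine communication is attempted.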

Can you try the command below? First:

```shell
cd sdb1/lxl2/Chinese-CLIP-master/
python cn_clip/training/main.py \
    --train-data=${train_data} \
    --val-data=${val_data} \
    --resume=${resume} \
    ${reset_data_offset} \
    ${reset_optimizer} \
    --logs=${output_base_dir} \
    --name=${name} \
    --save-step-frequency=${save_step_frequency} \
    --save-epoch-frequency=${save_epoch_frequency} \
    --log-interval=${log_interval} \
    ${report_training_batch_acc} \
    --context-length=${context_length}...
```

> > Can you try the command below?
> > First `cd sdb1/lxl2/Chinese-CLIP-master/`
> > python cn_clip/training/main.py --train-data=${train_data} --val-data=${val_data} --resume=${resume} ${reset_data_offset} ${reset_optimizer} --logs=${output_base_dir} --name=${name} --save-step-frequency=${save_step_frequency} --save-epoch-frequency=${save_epoch_frequency} --log-interval=${log_interval} ${report_training_batch_acc} --context-length=${context_length} --warmup=${warmup} --batch-size=${batch_size} --valid-batch-size=${valid_batch_size} --valid-step-interval=${valid_step_interval} --valid-epoch-interval=${valid_epoch_interval} --lr=${lr}...

> /flash_attn_cuda.cpython-39-x86_64-linux-gnu.so: undefined symbol: _ZN2at4_ops5zeros4callEN3c108ArrayRefINS2_6SymIntEEENS2_8optionalI

The dependencies of your linux-gnu.so are broken; see the similar fix in https://github.com/open-mmlab/mmdetection3d/issues/1152

> > > /flash_attn_cuda.cpython-39-x86_64-linux-gnu.so: undefined symbol: _ZN2at4_ops5zeros4callEN3c108ArrayRefINS2_6SymIntEEENS2_8optionalI
> >
> > The dependencies of your linux-gnu.so are broken; see the similar fix in https://github.com/open-mmlab/mmdetection3d/issues/1152
>
> I tried the fix from issue 1152, but it still doesn't work. That issue seems to be about mmcv, while mine is about flash-attn. I also tried the fixes from flash-attn's own issues, with no luck. It seems flash-attn requires torch 1.12 or above, while mine is 1.10, and I don't actually need flash-attn. How can I disable or ignore everything flash-attn related in the code?

pip uninstall flash_attn

![image](https://github.com/OFA-Sys/Chinese-CLIP/assets/34369493/4c5d5ec5-bd9a-4d46-af63-1af5985b48f2)
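Uninstalling works when the code guards the import and falls back to a plain attention implementation. A minimal sketch of that optional-import pattern; the names below are illustrative, not Chinese-CLIP's actual module layout:

```python
# Optional flash-attn import with a vanilla fallback (illustrative names).
try:
    import flash_attn  # noqa: F401  # fused attention kernels, optional
    FLASH_ATTN_AVAILABLE = True
except ImportError:
    FLASH_ATTN_AVAILABLE = False

def attention_backend() -> str:
    """Pick the attention implementation based on what is installed."""
    return "flash" if FLASH_ATTN_AVAILABLE else "vanilla"

print(attention_backend())  # "vanilla" when flash-attn is not installed
```

After `pip uninstall flash_attn`, the `import` raises `ImportError`, the flag stays `False`, and the fallback path is used without touching the broken `.so`.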

> accum_freq

In the shell script, change `--accum_freq=xxx` to `--accum-freq=xxx`

![image](https://github.com/OFA-Sys/Chinese-CLIP/assets/34369493/6a878ab1-60cc-4623-807a-a6a1efb8c58d)
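The dash matters because `argparse` registers the flag with a dash on the command line but exposes it as an underscore attribute in Python. A minimal sketch; the parser below is a toy, not Chinese-CLIP's actual argument list:

```python
import argparse

# Toy parser illustrating argparse's dash-to-underscore mapping.
parser = argparse.ArgumentParser()
parser.add_argument("--accum-freq", type=int, default=1,
                    help="gradient accumulation frequency")

# The flag is spelled with a dash on the command line...
args = parser.parse_args(["--accum-freq=4"])
# ...but accessed with an underscore in Python code.
print(args.accum_freq)  # 4
```

Passing `--accum_freq=4` to this parser instead would fail with an unrecognized-argument error, since `argparse` does not translate underscores back to dashes in flag names.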

> > > accum_freq
> >
> > In the shell script, change `--accum_freq=xxx` to `--accum-freq=xxx`
>
> ![image](https://github.com/OFA-Sys/Chinese-CLIP/assets/34369493/6a878ab1-60cc-4623-807a-a6a1efb8c58d)
>
> ```
> Traceback (most recent call last):
>   File "/home/amax/sdb1/lxl2/Chinese-CLIP-master/cn_clip/training/main.py", line 346, in
> ...
> ```