Echo
I have finished training a PaLM-based model on GPU, and prediction works. Can I make prediction run on both CPU and GPU? If I comment out the CUDA-related Paddle code that PaLM depends on, it runs on CPU. But to support both CPU and GPU (with a single use_gpu=True/False option), do I need to modify 'get_cuda_device_count' in 'paddle.fluid.core_avx'?
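Not sure about your exact setup, but normally you should not need to patch paddle.fluid.core_avx. A minimal sketch of choosing the execution place at runtime with the old fluid API (the `use_gpu` flag and executor setup here are placeholders, not your actual inference script):
```python
import paddle.fluid as fluid

def make_executor(use_gpu: bool) -> fluid.Executor:
    # Fall back to CPU when the installed Paddle build has no CUDA support.
    if use_gpu and fluid.is_compiled_with_cuda():
        place = fluid.CUDAPlace(0)  # first GPU device
    else:
        place = fluid.CPUPlace()
    return fluid.Executor(place)

exe = make_executor(use_gpu=False)  # set use_gpu=True to predict on GPU
```
With this pattern the same prediction code can run on either device, since only the place passed to the Executor changes.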
Hi, I'm trying to convert the labert model to PyTorch. I used this code I found online: ``` path="./chinese_labert-base-std-512/" tf_checkpoint_path = path + "model.ckpt/"  # the ckpt files in your BERT model folder (a set of 3) bert_config_file = path + "labert_config.json"  # the config file in your BERT model folder pytorch_dump_path...
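For reference, the standard TF-to-PyTorch BERT conversion in Hugging Face transformers looks roughly like the sketch below. This assumes labert uses the vanilla BERT architecture so that `load_tf_weights_in_bert` can map the variables; the `pytorch_dump_path` value is a placeholder, since that part of the snippet above is cut off:
```python
import torch
from transformers import BertConfig, BertForPreTraining, load_tf_weights_in_bert

path = "./chinese_labert-base-std-512/"
tf_checkpoint_path = path + "model.ckpt"        # usually the checkpoint prefix, not a directory
bert_config_file = path + "labert_config.json"  # model config
pytorch_dump_path = path + "pytorch_model.bin"  # placeholder output path for the converted weights

# Build an empty PyTorch BERT model from the config, load the TF variables into it, then save.
config = BertConfig.from_json_file(bert_config_file)
model = BertForPreTraining(config)
load_tf_weights_in_bert(model, config, tf_checkpoint_path)
torch.save(model.state_dict(), pytorch_dump_path)
```
If labert adds layers or renames variables relative to standard BERT, the weight mapping will fail and you would need a custom conversion for those parts.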
Hi, we use longhorn in our production environment, but the storage usage is increasing every day. For example, the size of the storaged-0 volume was 6.8 G yesterday, but it has...