orderer0001

12 comments by orderer0001

Hi, why do I get the error `unsupported operand type(s) for //: 'NoneType' and 'int'` when I run it? Have you run into this problem?
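
For what it's worth, a minimal reproduction of that message: some value is still `None` when it reaches a floor division (the variable name `imgsz` below is a hypothetical stand-in, not from the repo's code):

```python
imgsz = None  # e.g., a config parameter that was never set and defaulted to None
imgsz // 32   # TypeError: unsupported operand type(s) for //: 'NoneType' and 'int'
```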

I added the parameter `device=0` as the error message suggested, and that error disappeared, but a new one appeared: `export failure 0.0s: No module named 'tensorrt'`.

Same here. I converted it with PIL, and it still didn't work.

Can the other training methods be configured to use multiple GPUs? Do I need to set the parameters manually?

When will multi-GPU support for LISA training be added?

> LLaMA-3-7b

Do you support llama 2-7b? Or llama 3-8B? llama3-7b does not exist.

> Yes we support both models as long as the access is granted from huggingface repo for your huggingface account.
>
> * https://huggingface.co/meta-llama/Llama-2-7b
> * https://huggingface.co/meta-llama/Meta-Llama-3-8B

Which parameter is...

> The automatic allocation scheme of device_map='auto' of transformers may not be reasonable, in which case you can try manually allocating GPU memory to achieve maximum utilization, for example:
> ...
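
The example in the quoted reply is truncated above. As a stand-in (my own sketch, not the quoted author's code), one way to manually cap per-GPU memory in transformers is the `max_memory` argument alongside `device_map='auto'`; the model name and memory limits here are illustrative assumptions, and `accelerate` must be installed:

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b",  # one of the repos linked in the quote above
    device_map="auto",        # let accelerate place layers across devices...
    max_memory={0: "20GiB", 1: "20GiB", "cpu": "48GiB"},  # ...within these per-device caps
)
```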

> I think it may also be related to the dataset, or to the network structure. For this piece of code I only built the network structure and ran it on a small dataset, so you may want to experiment with it yourself.

It does seem to be a dataset problem. I also noticed that you use only two graph-network layers, with the second layer doing the classification directly. Doesn't the paper say that when a classification layer follows, the last hidden layer should aggregate the per-head Wh outputs by summing/averaging rather than concatenating them? Your code uses concatenation; I changed it to averaging myself, tried both, and the results seem about the same. What is your understanding of this?
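
For reference, a minimal sketch of the two aggregation modes being compared, in the style of GAT-like multi-head layers (this is an assumed PyTorch illustration, not the repo's actual code; the attention coefficients are omitted so only the aggregation step differs):

```python
import torch
import torch.nn as nn

class MultiHeadAgg(nn.Module):
    """Per-head linear transforms Wh, aggregated by concat or average.
    Attention is omitted for brevity; only the head aggregation differs."""
    def __init__(self, in_dim: int, out_dim: int, num_heads: int, concat: bool = True):
        super().__init__()
        self.heads = nn.ModuleList(nn.Linear(in_dim, out_dim, bias=False)
                                   for _ in range(num_heads))
        self.concat = concat

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        outs = [head(h) for head in self.heads]  # per-head Wh
        if self.concat:
            return torch.cat(outs, dim=-1)       # hidden layers: concat -> num_heads * out_dim
        return torch.stack(outs).mean(dim=0)     # final layer: average -> out_dim

# Hidden layer keeps heads separate; the pre-classification layer averages them.
hidden = MultiHeadAgg(16, 8, num_heads=4, concat=True)
final = MultiHeadAgg(32, 3, num_heads=4, concat=False)  # 3 = number of classes
x = torch.randn(10, 16)                                 # 10 nodes, 16 features each
logits = final(hidden(x))                               # shape: (10, 3)
```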