orderer0001

Results: 21 issues from orderer0001

Error during training: model.train(batch_size=1024, epochs=3, verbose=2)

    Traceback (most recent call last):
      File "/opt/homebrew/Caskroom/miniforge/base/envs/tensorflow24/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3457, in run_code
        exec(code_obj, self.user_global_ns, self.user_ns)
      File "", line 1, in
        model.train(batch_size=1024, epochs=3, verbose=2)
      File "/opt/homebrew/Caskroom/miniforge/base/envs/tensorflow24/lib/python3.8/site-packages/ge/models/sdne.py", line 129, in train...
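
For context, a minimal SDNE run with the GraphEmbedding ("ge") package typically looks like the sketch below; the edge-list file name is a placeholder and the constructor arguments may differ from the reporter's setup.

    import networkx as nx
    from ge import SDNE

    # Sketch of a typical SDNE training run with the "ge" package;
    # the edge-list path is a placeholder.
    G = nx.read_edgelist("wiki_edgelist.txt", create_using=nx.DiGraph(),
                         nodetype=None, data=[("weight", int)])
    model = SDNE(G, hidden_size=[256, 128])
    model.train(batch_size=1024, epochs=3, verbose=2)
    embeddings = model.get_embeddings()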

RuntimeError: einsum(): operands do not broadcast with remapped shapes [original->remapped]: [277, 8, 1, 234]->[277, 8, 234, 1, 1] [1, 64, 64]->[1, 1, 64, 64, 1]
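
A minimal sketch that reproduces this kind of einsum failure (the equation string is hypothetical; the essential point is that the 234-sized axis of one operand is matched against the 64-sized axis of the other):

    import torch

    a = torch.randn(277, 8, 1, 234)
    b = torch.randn(1, 64, 64)
    # The label shared between the two operands has size 234 in `a` but 64 in `b`,
    # so the shapes cannot broadcast and einsum raises a RuntimeError.
    torch.einsum("bhij,bjk->bhik", a, b)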

Hello, when exporting to a TensorRT engine, an error is reported: "export running on CPU but must be on GPU". How do I fix it?
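
Assuming the Ultralytics YOLOv8 exporter, the usual fix is to request a GPU explicitly, since TensorRT engines can only be built on a CUDA device. A minimal sketch:

    from ultralytics import YOLO

    # TensorRT ("engine") export must run on a GPU, so pass a CUDA device index.
    model = YOLO("yolov8n.pt")
    model.export(format="engine", device=0)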

May I ask which image should be installed to deploy YOLOv8 tracking?
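
Whichever image is chosen, the tracking call that has to run inside it is small; a sketch with the Ultralytics API (the weights, video path, and tracker config below are placeholders):

    from ultralytics import YOLO

    # Sketch of YOLOv8 tracking; source and tracker config are placeholders.
    model = YOLO("yolov8n.pt")
    results = model.track(source="video.mp4", tracker="bytetrack.yaml")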

Qwen1.5-7B-Chat loads normally before fine-tuning. After QLoRA fine-tuning, loading the model consumes too much video memory on a 4090 (24 GB). What is the reason?
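
A common way to keep the memory footprint down when reloading a QLoRA-fine-tuned model is to load the base model in 4-bit and attach the adapter, rather than loading everything in half precision. A sketch, assuming the Hugging Face transformers/peft stack; the adapter path is a placeholder:

    import torch
    from transformers import AutoModelForCausalLM, BitsAndBytesConfig
    from peft import PeftModel

    # Reload the base model quantized to 4-bit, then attach the QLoRA adapter.
    bnb_config = BitsAndBytesConfig(load_in_4bit=True,
                                    bnb_4bit_compute_dtype=torch.bfloat16)
    base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-7B-Chat",
                                                quantization_config=bnb_config,
                                                device_map="auto")
    model = PeftModel.from_pretrained(base, "path/to/qlora-adapter")  # placeholder path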

May I ask why I can also train with the yolov7.pt weight file, and it even seems to give a pre-training effect? Shouldn't that weight file correspond to the network structure after the branches are merged? Why can it still be used for training? And if it actually corresponds to the pre-merge structure, then what is the point of yolov7_training.pt?

Does it support multi-GPU training of Llama-3 with LISA?

If there are multiple GPUs, do I still run the LISA method directly with the script ./scripts/run_finetune_with_lisa.sh? Do I need to set any multi-GPU parameters?

Why do 3× 4090 GPUs still run out of memory (24 × 3 > 52 GB)?

    |   0  NVIDIA GeForce RTX 4090        Off | 00000000:31:00.0 Off |                  Off |
    | 66%   24C    P8              22W / 450W |     42MiB...
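
One common cause, sketched below under the assumption of a Hugging Face causal LM (the model id is a placeholder): with plain data parallelism each GPU must hold a complete copy of the model, so three 24 GB cards do not behave like a single 72 GB pool; device_map="auto" shards the weights across the visible GPUs instead.

    import torch
    from transformers import AutoModelForCausalLM

    # Shard the weights across all visible GPUs instead of replicating the
    # full model on each 24 GB card (requires the accelerate package).
    model = AutoModelForCausalLM.from_pretrained(
        "meta-llama/Meta-Llama-3-8B",  # placeholder model id
        torch_dtype=torch.float16,
        device_map="auto",
    )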

Hello, when my GAT model reaches about 66% accuracy during training, the loss starts rising instead of falling. I tried lowering the learning rate without much improvement. The dataset is the built-in one. How can I solve this? Does the output probability need to be sharpened?
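
For reference, "sharpening" the output probabilities usually means applying a temperature below 1 in the softmax; a generic sketch, not specific to this repository:

    import torch
    import torch.nn.functional as F

    def sharpen(logits: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
        # temperature < 1 makes the predicted distribution peakier ("sharper");
        # temperature = 1 recovers the ordinary softmax.
        return F.softmax(logits / temperature, dim=-1)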