Wenwei Zhang
It seems that you need to upgrade your MMCV to the most recent version.
Would you like to create a PR to fix it?
Hi @vakker, it might be caused by limited memory. You may try `workers_per_gpu=0` or `1` (see the config sketch below).
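A minimal sketch of where this setting usually lives, assuming an MMDetection 2.x-style `data = dict(...)` config layout; the `samples_per_gpu` value is only illustrative:

```python
# Minimal config sketch (assumes the MMDetection 2.x data config layout).
# workers_per_gpu=0 loads data in the main process with no worker subprocesses,
# which lowers memory usage at the cost of slower data loading.
data = dict(
    samples_per_gpu=2,   # illustrative batch size per GPU
    workers_per_gpu=0,   # try 0 (or 1) when memory is limited
)
```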
Should we further consider using tensors in BaseInstanceMasks?
Hi @LYMDLUT, thanks for your kind contribution. Simply changing the context to 8k will cause the InternLM-Chat model to fail to process a long context window. Thus, I suggest we enhance...
This PR looks good to me now. It can be merged after resolving the conflicts.
Hi @twmht, it would be great if you could also provide benchmark results so that we can make sure the performance is comparable.
Closed as it has been solved in #9405
Closed as it has been completed in #9252
Sure, you can do that!