
Results: 16 comments of Junglesl

> Hi, can you report the output of this command:
>
> ```shell
> pip list | grep sacred
> ```
>
> maybe running:
>
> ```shell
> ...
> ```

> I think you need to uninstall sacred first:
>
> ```
> pip uninstall -y sacred && pip install -e 'git+https://github.com/kkoutini/[email protected]#egg=sacred'
> ```
>
> you can check whether...

> > I think you need to uninstall sacred first:
> >
> > ```
> > pip uninstall -y sacred && pip install -e 'git+https://github.com/kkoutini/[email protected]#egg=sacred'
> > ```
>
> ...

> 1. The model is trained on 10-second audio clips and has time positional encodings for 10 seconds only. If you want to do inference on longer clips, one possibility...
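The quoted comment is truncated, so the suggested approach is not visible here. One common workaround for a model with fixed 10-second positional encodings (an assumption on my part, not necessarily the option the comment goes on to describe) is to slide a 10-second window over the longer clip and average the per-window predictions. A minimal sketch with a hypothetical `model_fn` standing in for the real model:

```python
import numpy as np

SR = 32000          # assumed sample rate of the model's input
WIN = 10 * SR       # 10-second window, matching the positional encodings

def model_fn(clip: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for the real model; returns per-class scores."""
    # Here: the clip's mean value repeated over 3 dummy classes.
    return np.full(3, clip.astype(np.float64).mean())

def predict_long(audio: np.ndarray, hop: int = WIN) -> np.ndarray:
    """Slide a 10 s window over a longer clip and average the predictions."""
    preds = []
    for start in range(0, len(audio), hop):
        window = audio[start:start + WIN]
        if len(window) < WIN:  # zero-pad the final partial window
            window = np.pad(window, (0, WIN - len(window)))
        preds.append(model_fn(window))
    return np.mean(preds, axis=0)

# 25 seconds of audio -> three 10 s windows (the last one zero-padded)
audio = np.ones(25 * SR, dtype=np.float32)
print(predict_long(audio).shape)  # (3,)
```

Overlapping windows (a `hop` smaller than `WIN`) usually smooth the clip-level prediction at the cost of more forward passes.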

> Hi, yes, the model can be loaded on multiple GPUs and used with `DistributedDataParallel`, similar to other torch models.
>
> > And I want to know if I...
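Since the comment only names `DistributedDataParallel` without showing usage, here is a minimal sketch of the standard one-process-per-GPU pattern. The `Linear` layer is a placeholder for the actual model, and the NCCL backend assumes CUDA GPUs are present:

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def wrap_model_ddp(model: torch.nn.Module, rank: int) -> torch.nn.Module:
    """Move an already-constructed model to one GPU and wrap it in DDP."""
    model = model.to(rank)
    return DDP(model, device_ids=[rank])

def worker(rank: int, world_size: int):
    # One process per GPU; rendezvous settings for single-node training.
    os.environ.setdefault("MASTER_ADDR", "localhost")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    model = torch.nn.Linear(128, 10)   # placeholder for the real model
    ddp_model = wrap_model_ddp(model, rank)
    x = torch.randn(4, 128, device=rank)
    y = ddp_model(x)                   # gradients are synced across ranks
    dist.destroy_process_group()

if __name__ == "__main__" and torch.cuda.device_count() > 0:
    world_size = torch.cuda.device_count()
    torch.multiprocessing.spawn(worker, args=(world_size,), nprocs=world_size)
```

For pure inference (no gradient syncing), simply loading one model replica per GPU and sharding the input batches is often simpler than DDP.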

> Unfortunately I've never got a Segmentation fault. It can be caused by any of the dependencies or packages with native code in the environment.

I want to know if...

> Unfortunately I've never got a Segmentation fault. It can be caused by any of the dependencies or packages with native code in the environment.

And when I download the...

> Unfortunately I've never got a Segmentation fault. It can be caused by any of the dependencies or packages with native code in the environment.

I find in environment.yml that...

> Hi, the environment.yml is a snapshot of the environment I'm using. It was exported using:
>
> ```
> conda env export --no-builds | grep -v "prefix" > environment.yml
> ```
>
> ...

Thank you very much for the reply! This is the inference code; I only changed `load_in_8bit` to False, because without that change it is even slower, taking 95 s. Inference was run on an A100 80G. I also tried Llama2-Chinese-7b-Chat, and it still takes 24 s. The "loading checkpoint shards" step takes a lot of the time during inference. Is some parameter set incorrectly? I heard the original llama2-13b can run inference in under 5 s. Is Llama2-Chinese-13b-Chat slower because the Chinese instruction fine-tuning added more parameters? I hope you can help me find the cause. Thank you very much!
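One thing worth checking here is whether the reported seconds include the "loading checkpoint shards" phase: that is a one-time model-loading cost, not per-request inference, so it should be timed separately from generation. (Also note that 8-bit loading trades memory for speed and can make generation slower on some setups, which may explain part of the difference.) A stdlib-only sketch of the measurement, with hypothetical stand-ins for the real `from_pretrained(...)` and `model.generate(...)` calls:

```python
import time

def load_model():
    """Hypothetical stand-in for AutoModelForCausalLM.from_pretrained(...)."""
    time.sleep(0.2)  # simulate the slow 'loading checkpoint shards' phase
    return object()

def generate(model, prompt: str) -> str:
    """Hypothetical stand-in for model.generate(...)."""
    time.sleep(0.05)
    return prompt + " ..."

t0 = time.perf_counter()
model = load_model()              # paid once per process
t_load = time.perf_counter() - t0

t0 = time.perf_counter()
out = generate(model, "hello")    # paid on every request
t_gen = time.perf_counter() - t0

print(f"load: {t_load:.2f}s  generate: {t_gen:.2f}s")
```

If most of the 24 s turns out to be `t_load`, keeping the model resident in memory between requests (rather than reloading per run) would remove it entirely.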