KangkangStu
> Hi, I've been stuck on the same problem when building PyTorch, which uses NNPACK and PeachPy as submodules. Do you have any idea how to fix this? Thanks!

Hello, I also...
> does the following work?
>
> ```python
> perplexity.compute(predictions=texts, model_id='PATH_TO_MY_MODEL')
> ```

It may not work. I tested your idea, but I got an error: "huggingface_hub.utils.validators.HFValidationError: Repo id must use..."
> Hi!
>
> I want to compute the perplexity of a GPT2 model that was trained/updated locally. I saved the model as pytorch_model.bin.
>
> How do I provide the path to the local model in perplexity.compute? `perplexity = load("perplexity", module_type="metric")` `perplexity.compute(predictions=texts, model_id='gpt2')['perplexities']`
>
> I looked into [this](https://huggingface.co/spaces/evaluate-metric/perplexity/commit/95d16d913d8245780a2f7e4e1ec0ecd5b5358d00#d2h-881643) commit. Installing from that commit does not seem to work via `pip install git+https://github.com/huggingface/evaluate@95d16d913d8245780a2f7e4e1ec0ecd5b5358d00`

Hello, please have...
> > does the following work?
> >
> > ```python
> > perplexity.compute(predictions=texts, model_id='PATH_TO_MY_MODEL')
> > ```
>
> This works, thanks! I had previously tried providing the path to a model.pt saved with torch. When I instead provide the path to a directory containing a model saved with trl's native save functions, e.g. trainer._save_pretrained(logging_dir) or trainer.save_model(logging_dir), it works.

Please post a...
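For anyone landing here later, a minimal sketch of the pattern that worked above. The directory name `my_gpt2_dir` and the `texts` list are placeholders; `model_id` is forwarded to `AutoModelForCausalLM.from_pretrained`, so a local `save_pretrained` directory (config.json plus weights) works, while a bare `pytorch_model.bin` file does not:

```python
from evaluate import load

# Hypothetical local directory produced by save_pretrained()/save_model(),
# containing config.json, tokenizer files, and the model weights.
local_model_dir = "my_gpt2_dir"

texts = ["The quick brown fox jumps over the lazy dog."]

perplexity = load("perplexity", module_type="metric")
# model_id accepts anything AutoModelForCausalLM.from_pretrained accepts,
# including a local directory path.
results = perplexity.compute(predictions=texts, model_id=local_model_dir)
print(results["perplexities"], results["mean_perplexity"])
```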
I would love to know the differences between these model formats, such as .bin, .pth, .pt, .onnx, HF, etc., and how to convert between and inspect them. Because in the process of using...
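Briefly: .pt, .pth, and .bin are all plain `torch.save` pickles (the extension is only a convention), .onnx is an exported computation graph, and an "HF" model usually means a `save_pretrained` directory with config.json plus weights. As a hedged sketch (the paths are placeholders, and it assumes the file holds a GPT-2 `state_dict`), this is one way to wrap a raw checkpoint back into a directory that `model_id` can load:

```python
import torch
from transformers import GPT2Config, GPT2LMHeadModel

# Hypothetical input: a state_dict saved with
# torch.save(model.state_dict(), "model.pt").
state_dict = torch.load("model.pt", map_location="cpu")

# The config must match the architecture the checkpoint was trained with.
model = GPT2LMHeadModel(GPT2Config())
model.load_state_dict(state_dict)

# Writes config.json + model weights, i.e. a directory usable as model_id.
model.save_pretrained("my_gpt2_dir")
```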
> Hello,
> I am new to the project, so I hope my request is a sensible one.

Hello, I am also new to the project. I'd like to ask if...
> The "official" implementation of the `perplexity` metric does not cache the language model [[1](https://github.com/huggingface/evaluate/blob/6abb0d53b82b1e5efea5d683b91d7990a653c78d/metrics/perplexity/perplexity.py#L103-L115)]. It seems the metric instance should fetch the model and have it ready in `_download_and_prepare`. I think there should be an explicit API for caching and cache resetting. Also, it is completely unclear how loading a metric can be configured (there is only `config_name`, and `kwargs` are ignored).

+1. Do you know the answer now?
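For reference, a minimal sketch of the caching idea proposed above: a metric subclass that loads the language model once in `_download_and_prepare`, so repeated `compute()` calls reuse it instead of reloading. This is not the actual evaluate implementation; `CachedPerplexity` and its fields are hypothetical, and the per-text perplexity loop is simplified:

```python
import datasets
import evaluate
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


class CachedPerplexity(evaluate.Metric):
    """Hypothetical perplexity variant that loads the model once, up front."""

    def __init__(self, model_id="gpt2", **kwargs):
        super().__init__(**kwargs)
        self.model_id = model_id
        self.model = None
        self.tokenizer = None

    def _info(self):
        return evaluate.MetricInfo(
            description="Perplexity with a cached language model (sketch).",
            citation="",
            features=datasets.Features({"predictions": datasets.Value("string")}),
        )

    def _download_and_prepare(self, dl_manager):
        # The proposal from the comment above: fetch the model here,
        # so repeated compute() calls reuse it instead of reloading.
        self.model = AutoModelForCausalLM.from_pretrained(self.model_id)
        self.tokenizer = AutoTokenizer.from_pretrained(self.model_id)
        self.model.eval()

    def _compute(self, predictions):
        perplexities = []
        for text in predictions:
            enc = self.tokenizer(text, return_tensors="pt")
            with torch.no_grad():
                out = self.model(**enc, labels=enc["input_ids"])
            # exp(mean negative log-likelihood) of the text
            perplexities.append(torch.exp(out.loss).item())
        return {"perplexities": perplexities}


# Usage: instantiate once, prepare once, then call compute() repeatedly.
metric = CachedPerplexity(model_id="gpt2")
metric.download_and_prepare()
print(metric.compute(predictions=["Hello world."]))
```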
> > I see. Currently it is a completely separate file, runq.c. I don't love this, but I also don't really see any real way around it. Let me reload my RAM again on where the PR stands... iirc it was nearly ready to merge, but surprisingly it wasn't as fast as I initially expected.
>
> Thanks for merging.
>
> Regarding speed, maybe it depends on the system; in my case it is much faster, like more than 2x faster.

Hello, may I ask what kind of model runq.c needs after compilation? I have tried...
> @KangkangStu Did you follow the instructions here? https://github.com/karpathy/llama2.c#int8-quantization

Of course. I tried llama2_7b and it succeeded, but when I wanted to quantize stories15M.pt there was nothing I could do.
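For anyone with the same question: runq.c expects a version-2 (int8-quantized) export, not the raw float .pt checkpoint. A sketch of the steps, following the linked README section (the file names are placeholders, and the flags assume the current export.py interface):

```bash
# Export the float llama2.c checkpoint as an int8 (version 2) .bin file.
python export.py stories15M_q80.bin --version 2 --checkpoint stories15M.pt

# Build the quantized runner and point it at the quantized export.
make runq
./runq stories15M_q80.bin
```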