zlh

Results: 10 issues by zlh

Hello, I processed the Wiki-KBP dataset following the steps in your README, but could not get it to work. Could you share a preprocessed copy of the Wiki-KBP dataset directly? Many thanks!

Hello, after obtaining the inter-word impact matrix M via Perturbed Masking, how is it decoded into a dependency tree? Section 3.1.2 of the paper says: "The tree decoding algorithm, such as Eisner and Chu–Liu/Edmonds' algorithm, is then used to extract the dependency tree from the matrix M." From what I can find, Chu–Liu/Edmonds is an algorithm for finding a maximum spanning arborescence, so is the maximum spanning tree of the impact matrix M (viewed as a weighted directed graph) the dependency tree we need? I'm not very familiar with dependency trees, so any guidance would be greatly appreciated!
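For reference, here is a minimal pure-Python sketch of the Chu–Liu/Edmonds procedure: given a score matrix `scores[h][d]` (score of an arc from head `h` to dependent `d`, analogous to the impact matrix M), it returns the head of each word in the maximum spanning arborescence rooted at word 0. The matrix values and root choice below are illustrative, not taken from the paper's setup; production systems typically use an optimized library implementation instead.

```python
def _find_cycle(head, root, n):
    """Return a list of nodes forming a cycle in the head pointers, or None."""
    color = [0] * n  # 0 = unvisited, 1 = on current walk, 2 = finished
    for s in range(n):
        path = []
        v = s
        while color[v] == 0 and v != root:
            color[v] = 1
            path.append(v)
            v = head[v]
        if v != root and color[v] == 1:
            return path[path.index(v):]  # the cycle portion of the walk
        for u in path:
            color[u] = 2
    return None


def chu_liu_edmonds(scores, root=0):
    """Decode the maximum spanning arborescence from a score matrix.

    scores[h][d] is the score of the arc h -> d. Returns a list `head`
    where head[d] is the chosen head of d, and head[root] is None.
    """
    n = len(scores)
    # 1. Greedily pick the highest-scoring head for every non-root node.
    head = [None] * n
    for d in range(n):
        if d == root:
            continue
        head[d] = max((h for h in range(n) if h != d),
                      key=lambda h: scores[h][d])
    cycle = _find_cycle(head, root, n)
    if cycle is None:
        return head  # already a tree
    # 2. Contract the cycle into a single supernode and reweight arcs.
    cyc = set(cycle)
    keep = [v for v in range(n) if v not in cyc]
    old2new = {v: i for i, v in enumerate(keep)}
    c_id = len(keep)  # id of the contracted supernode
    m = c_id + 1
    NEG = float("-inf")
    new_scores = [[NEG] * m for _ in range(m)]
    enter = {}  # new head id -> (orig head, orig cycle node) entering the cycle
    exit_ = {}  # new dependent id -> orig cycle node that best heads it
    for u in range(n):
        for v in range(n):
            if u == v:
                continue
            if u not in cyc and v not in cyc:
                nu, nv = old2new[u], old2new[v]
                new_scores[nu][nv] = max(new_scores[nu][nv], scores[u][v])
            elif u not in cyc and v in cyc:
                nu = old2new[u]
                # Entering the cycle at v means losing v's in-cycle arc.
                adj = scores[u][v] - scores[head[v]][v]
                if adj > new_scores[nu][c_id]:
                    new_scores[nu][c_id] = adj
                    enter[nu] = (u, v)
            elif u in cyc and v not in cyc:
                nv = old2new[v]
                if scores[u][v] > new_scores[c_id][nv]:
                    new_scores[c_id][nv] = scores[u][v]
                    exit_[nv] = u
    # 3. Solve the contracted problem recursively, then expand the cycle.
    sub = chu_liu_edmonds(new_scores, root=old2new[root])
    result = [None] * n
    for v in cyc:
        result[v] = head[v]  # cycle nodes keep their in-cycle heads...
    for nd, nh in enumerate(sub):
        if nh is None:
            continue
        if nd == c_id:
            u, v = enter[nh]
            result[v] = u  # ...except where the entering arc breaks the cycle
        elif nh == c_id:
            result[keep[nd]] = exit_[nd]
        else:
            result[keep[nd]] = keep[nh]
    return result


# Toy 4-word example; the greedy step creates a 1<->2 cycle that gets contracted.
scores = [
    [0, 5, 1, 1],
    [0, 0, 10, 8],
    [0, 10, 0, 2],
    [0, 1, 1, 0],
]
print(chu_liu_edmonds(scores))  # [None, 0, 1, 1]: 0->1, 1->2, 1->3
```

Note that Eisner's algorithm would instead restrict the search to projective trees via dynamic programming; Chu–Liu/Edmonds allows non-projective ones.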

Hello, thank you for your open-source code, but the file _Pubmed-shuffle-win-30.bin_ can no longer be downloaded. Could you please provide a copy?

When computing the scores, is the order between the spans of a discontinuous entity taken into account, or is it enough to predict the spans themselves?

Hello, I used your [teacher_scores script](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/llm_embedder/run_lm_score.py) with llama-7b-chat to compute teacher_scores for MS MARCO. I made no changes to the code, but my scores differ greatly from the ones you released. Do you know what might cause this? In the image below, the left side is the output of running your script and the right side is from your released MS MARCO teacher_scores file; the qid is 1185869. The differences between the two files are:

1. The ranking differs greatly. My positive doc's teacher_score is -0.4101, still nearly top-1, but your released score for it is -3.1269, ranked far lower.
2. The score range differs greatly. My teacher_scores fall roughly between 0 and -2, while yours fall roughly between -1 and -8.

Looking forward to your reply! ![245A772A-D7FF-462e-B209-E81EA63CBEA0](https://github.com/FlagOpen/FlagEmbedding/assets/50194803/fb06b2cb-9464-4786-be4a-f91534e746e7)

Hello, I'm trying to run `example.py` of grobid_client_python on Linux. It fails with: "GROBID server does not appear up and running, the connection to the server failed."

question

Here are my execution steps. Step 1: Run ngrok:
```
ngrok http http://localhost:8010
```
which gives me:
```
https://8cc3-36-133-141-100.ngrok-free.app -> http://localhost:8010
```
Step 2: Run your docker, like...

![1714219618635](https://github.com/parthsarthi03/raptor/assets/50194803/c0ba25af-e57a-4bb3-ab55-79dce77f572d) Should [this line](https://github.com/parthsarthi03/raptor/blob/master/demo.ipynb) of code be modified as follows? `summary = outputs[0]["generated_text"][len(prompt):].strip()`
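For context, the Hugging Face text-generation pipeline by default returns the prompt concatenated with the completion in `generated_text` (unless `return_full_text=False` is passed), which is why slicing off `len(prompt)` characters recovers only the newly generated text. A tiny sketch with made-up strings (not raptor's actual output):

```python
# Mimic a text-generation pipeline result that echoes the prompt (illustrative
# strings only; the real output comes from the model).
prompt = "Summarize: the cat sat."
generated_text = prompt + " The cat sat down."

# Slicing by len(prompt) drops the echoed prompt; strip() trims the leading space.
summary = generated_text[len(prompt):].strip()
print(summary)  # "The cat sat down."
```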

### Reminder
- [X] I have read the README and searched the existing issues.

### System Info
Training command:
```
llamafactory-cli train \
  --stage dpo \
  --do_train \
  --finetuning_type full \
  --deepspeed...
```

pending