Results: 11 issues of WuChannn

Hello, I get the wrong position for the table of contents, and it covers the content I want to show. How can I fix it? ![Screen Shot 2019-03-27 at 4 51 00...

Hello, how can I move the GitHub icon so it appears before the Twitter one? I changed the code in _config.yml, like `github_username: yingjieYJH` `twitter_username: yingjieYJH`, but it doesn't work.

Hello, I want to define the batch_size directly in the input_data layer and set batch_size in model.fit to None. However, I get an error: "Incoming Tensor shape must be 4-D,...

@Duankaiwen Hello, Kaiwen. I would like to know what the tag means. I noticed there was 'dists' in kp_utils.py, and it represented the distance between 'tl_tag' and 'br_tag'. Could you please...

@Duankaiwen Hello, Kaiwen. When I test my test dataset using my own trained model, I run into this problem; the following is my log: cfg_file: config/CenterNet-52.json loading all datasets......

Hi, I followed the installation instructions, and everything was fine until the "Train" part. After cd-ing into playground/detection/coco/yolof/yolof.res50.C5.1x, I ran the shell script `pods_train --num_gpus 1` and got `pods_train:...

When I use multiple GPUs to train GeoLayoutLM, I get the following error: `Can't pickle local object 'linear_scheduler.<locals>.lr_lambda'`. @wdp-007 @congyao @alibaba-oss Looking forward to your reply.
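For context, this error is the generic Python symptom of trying to pickle a function defined inside another function (its qualified name contains `<locals>`), which multi-GPU training triggers when worker processes are spawned. A minimal sketch, not the GeoLayoutLM code — `linear_scheduler_local` and `linear_lr` are illustrative names — showing the usual fix of replacing the local closure with a module-level function bound via `functools.partial`:

```python
import pickle
from functools import partial

def linear_scheduler_local(warmup, total):
    # Returns a *local* closure; its qualname contains '<locals>',
    # so pickle (and hence process spawning) cannot serialize it.
    def lr_lambda(step):
        return min(1.0, step / warmup) * max(0.0, (total - step) / total)
    return lr_lambda

def linear_lr(step, warmup, total):
    # Module-level function: picklable by qualified name.
    return min(1.0, step / warmup) * max(0.0, (total - step) / total)

def linear_scheduler_picklable(warmup, total):
    # partial of a module-level function pickles cleanly.
    return partial(linear_lr, warmup=warmup, total=total)

local_fn = linear_scheduler_local(100, 1000)
good_fn = linear_scheduler_picklable(100, 1000)

try:
    pickle.dumps(local_fn)
    local_is_picklable = True
except (pickle.PicklingError, AttributeError):
    local_is_picklable = False
```

The same idea applies to an `lr_lambda` passed to `torch.optim.lr_scheduler.LambdaLR`: as long as the callable is a module-level function (or a `partial` of one), the scheduler state survives pickling across worker processes.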

@hotchpotch @yzhliu @zh217 @neofung First of all, thank you very much for your excellent work. While studying it I ran into a point I don't understand and would like to ask about it. As the title says, I'm not quite clear what this means; could you please help clarify? My understanding is as follows; please check whether it is correct: **use_inbatch_neg: whether the neg data corresponding to the other queries in the same batch participates in the loss computation.** Looking more closely at the code (the forward function in FlagEmbedding/baai_general_embedding/finetune/modeling.py), there are still some parts I don't understand:

1. if self.use_inbatch_neg: a query index is created for each query and multiplied by group_size, so that each query points to the first document of its corresponding document group, which can be regarded as the positive sample. But the subsequent loss computation is self.compute_loss(scores, target); my understanding is that this only computes the loss between each query and its positive sample, so I don't see where **use_inbatch_neg** comes into play.
2. else (i.e., not self.use_inbatch_neg): a query index of 0 is created for each query, meaning each query only considers the first document. The first document in the batch corresponds only to the first query and is that query's positive sample. Is computing the loss between the first query's positive sample and all the queries meant to distinguish the different queries? And if so, why not use the second query's first document against all the queries instead?

Looking forward to your reply.
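On point 1 above, a common source of confusion with in-batch negatives is that cross-entropy with an integer target still involves *every* column of the score matrix through the softmax denominator, so the negatives do participate in the loss even though `target` only marks the positive. A minimal sketch with assumed shapes (not the actual FlagEmbedding code; `batch`, `group_size`, and `dim` are illustrative):

```python
import torch
import torch.nn.functional as F

batch, group_size, dim = 4, 3, 8
q = torch.randn(batch, dim)               # query embeddings
p = torch.randn(batch * group_size, dim)  # passages: each query's group = 1 pos + negs

# With in-batch negatives, each query is scored against *all* passages in the batch.
scores = q @ p.T                          # shape (batch, batch * group_size)

# target[i] = i * group_size: the first document of query i's group is its positive.
target = torch.arange(batch) * group_size

# cross_entropy = -log softmax(scores)[i, target[i]]; the softmax denominator
# sums over every column, so all non-target documents (the query's own hard
# negatives AND the other queries' groups) act as negatives in the gradient.
loss = F.cross_entropy(scores, target)
```

So `use_inbatch_neg` shows up not in `compute_loss` itself but in the shape of `scores`: with it enabled, each row contains scores against the whole batch, and every non-target column pushes the loss.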

The fine-tuning interface released so far uses LoRA. Is it possible to do full-parameter fine-tuning when training resources are sufficient? Is it enough to just comment out `model = get_peft_model(model, peft_config)` in `peft_lora.py`?