weiaicunzai
Thanks for your great work. I was just wondering how you evaluate on the Cityscapes dataset. After reading your code, it seems like you trained the model on input size 512x512, and...
Here is the code in the loss layer:

```python
boxes = tf.reshape(
    labels[..., 1:5],
    [self.batch_size, self.cell_size, self.cell_size, 1, 4])
boxes = tf.tile(
    boxes, [1, 1, 1, self.boxes_per_cell, 1]) / self.image_size
```

Can...
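To make the shape manipulation concrete, here is a NumPy sketch of the same reshape-and-tile pattern (the shapes `batch_size=2`, `cell_size=7`, `boxes_per_cell=2`, `image_size=448` are illustrative assumptions, not values from the repo):

```python
import numpy as np

batch_size, cell_size, boxes_per_cell, image_size = 2, 7, 2, 448

# Stand-in for the label tensor: channels 1:5 hold one ground-truth
# box (x, y, w, h) per grid cell, in pixel units
labels = np.random.rand(batch_size, cell_size, cell_size, 25) * image_size

# Reshape adds a per-box axis; tile then copies the same ground-truth
# box for every predicted box in the cell, and the division rescales
# pixel coordinates into [0, 1]
boxes = labels[..., 1:5].reshape(batch_size, cell_size, cell_size, 1, 4)
boxes = np.tile(boxes, (1, 1, 1, boxes_per_cell, 1)) / image_size

print(boxes.shape)  # (2, 7, 7, 2, 4)
```

Each of the `boxes_per_cell` predictors in a cell ends up compared against the same normalized ground-truth box.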
Hi, thanks for your great work. I'm currently using your pretrained DINO backbone (ViT-S/16) to extract patch (256x256) features. What mean and std should I use? The ImageNet mean and...
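For reference, the standard ImageNet statistics are the common choice when reusing an ImageNet-trained backbone; a minimal NumPy sketch of that normalization (an assumption; check the repo's own transform code to confirm which statistics the DINO checkpoint expects):

```python
import numpy as np

# Standard ImageNet channel statistics (RGB order)
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406])
IMAGENET_STD = np.array([0.229, 0.224, 0.225])

def normalize(img):
    """img: float array in [0, 1] with shape (H, W, 3); returns the
    per-channel standardized image."""
    return (img - IMAGENET_MEAN) / IMAGENET_STD

patch = np.random.rand(256, 256, 3)  # a dummy 256x256 RGB patch
out = normalize(patch)
print(out.shape)  # (256, 256, 3)
```

A pixel equal to the channel mean maps to exactly zero after this transform.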
Why can I only get an accuracy of about 82%, 10% less than reported in the paper?
First, thanks for your great work. We've been writing a paper about indoor scene object pose estimation, and your dataset would be a great help to us. But your annotate_pose.m seems...
Hello! Sorry to bother you. I noticed your paper on arXiv, but I couldn't find a source code download link on your website or in the paper. Is that because you haven't release...
Hello, and thank you for sharing such a great template. I'd like to ask about two concepts you mention in your [paper-writing template](https://pengsida.notion.site/c1a22465a0fa4b15a12985223916048e): module design and Module forward process. What is the core difference between them? In your Neural Body [example](https://pengsida.notion.site/image/https%3A%2F%2Fprod-files-secure.s3.us-west-2.amazonaws.com%2F952f5f87-b692-4249-a557-7f7ad0a77d56%2Ff79b3a78-0c06-4c20-95fe-2526655d0ade%2FUntitled.png?table=block&id=5684358c-52e3-45a6-960a-0cdadad927ef&spaceId=952f5f87-b692-4249-a557-7f7ad0a77d56&width=2000&userId=&cache=v2), I couldn't find a concrete writing pattern for the module design part. The pattern for Module forward process, on the other hand, is quite clear: from input to output.
Thanks for your great work. I have a question regarding the padding in the VAE encoder: why not use symmetric padding, as is commonly adopted in computer vision?
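For concreteness, the difference between the two padding schemes on a 1-D signal (a NumPy sketch of the general concept, not the encoder's actual code):

```python
import numpy as np

row = np.array([1, 2, 3, 4])

# Zero (constant) padding, the default in most convolution layers:
# introduces an artificial hard border of zeros
zero_pad = np.pad(row, 2, mode="constant")

# Symmetric padding mirrors the signal at its edges, so the border
# values continue smoothly instead of dropping to zero
sym_pad = np.pad(row, 2, mode="symmetric")

print(zero_pad)  # [0 0 1 2 3 4 0 0]
print(sym_pad)   # [2 1 1 2 3 4 4 3]
```

Symmetric padding is often preferred for image encoders because it avoids dark border artifacts near the edges of the reconstruction.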
A question about the train_llava code: why is [final_inputs_ids](https://github.com/yuanzhoulvpi2017/zero_nlp/blob/main/train_llava/train_llava/data.py#L128) padded with pad_token_id, while [final_label_ids](https://github.com/yuanzhoulvpi2017/zero_nlp/blob/main/train_llava/train_llava/data.py#L143) is padded with ignore_idx? Why not pad both with pad_token_id? Also, why does ignore_idx equal -100, and how does the model know that -100 means "ignore"? Does every LLM (e.g. LLaMA, Qwen) know that -100 is the value to ignore? Thanks!
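The -100 convention comes from PyTorch's `nn.CrossEntropyLoss`, whose `ignore_index` parameter defaults to -100: positions labeled -100 contribute nothing to the loss, so no model has to "know" the value, the loss function simply skips those tokens. A toy NumPy re-implementation of that masking (a sketch to illustrate the mechanism, not the actual Hugging Face code):

```python
import numpy as np

IGNORE_IDX = -100  # PyTorch CrossEntropyLoss default ignore_index

def masked_cross_entropy(logits, labels):
    """logits: (N, C) class scores; labels: (N,) gold class ids,
    with IGNORE_IDX marking positions to skip entirely."""
    keep = labels != IGNORE_IDX
    logits, labels = logits[keep], labels[keep]
    # log-softmax, then pick the log-prob of each kept gold label
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(labels)), labels].mean()

logits = np.zeros((4, 3))                 # uniform scores over 3 classes
labels = np.array([0, 2, IGNORE_IDX, 1])  # third position is masked out
print(masked_cross_entropy(logits, labels))  # log(3) ≈ 1.0986
```

Padding labels with pad_token_id instead would make the model waste capacity learning to predict padding; -100 removes those positions from the loss altogether.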
### Before Reporting

- [X] I have pulled the latest code of the main branch and run it again, and the bug still exists.
- [X] I have read the...