兰铃

10 issues by 兰铃

- [x] I have read the checklist. # Bug description. Background: the server I am using is not connected to the Internet and therefore cannot use `pip`, so I packed...

I want to apply SSL (self-supervised learning) to my project; [LFW](http://vis-www.cs.umass.edu/lfw/) is only a demo. The model structure, loss function, and optimizer are kept the same as in this repo, with only one GPU, loading ResNet50...
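The snippet above only outlines the setup; for concreteness, a minimal single-GPU skeleton with a torchvision ResNet-50 backbone might look as follows (the optimizer and every hyperparameter below are placeholders, not the repo's actual values):

```python
import torch
from torchvision.models import resnet50

device = torch.device("cuda:0")          # single GPU, as described in the issue
model = resnet50().to(device)            # ResNet-50 backbone
# The issue says loss and optimizer match the repo; SGD here is only a stand-in.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
```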

I found that `m_queue.size()` is not equal to zero when `pool.shutdown()` is run, so some tasks are left unfinished when the thread pool shuts down. I fixed it as follows: ...

```cpp
#include <mutex>
#include <queue>
#include <vector>
#include "../include/ThreadPool.h"

class TEST {
private:
    std::mutex mtx;
    std::vector<int> v1[2];       // two int sequences used as test input
    std::priority_queue<int> q;
public:
    void init() {
        v1[0].push_back(1);
        v1[0].push_back(2);
        v1[0].push_back(3);
        v1[1].push_back(1);
        v1[1].push_back(3);
        v1[1].push_back(2);
    }
    void push(const int& index) { ...
```

Hello, thanks for your code. I am also preparing to use SSL to improve robustness. A question: https://github.com/Kim-Minseon/RoCL/blob/b6d5185e294e8bca5670e9146df81b54d99d0635/src/loss.py#L59 Is this loss function from the SimCLR paper? I also decided to use...
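For reference, the NT-Xent loss from the SimCLR paper (which the question is asking about) can be sketched as below. This is a generic reimplementation, not the RoCL repo's `loss.py`, and it assumes a batch of 2N L2-normalized projections where rows `i` and `i+N` are two views of the same image:

```python
import torch
import torch.nn.functional as F

def nt_xent(z, temperature=0.5):
    """NT-Xent (SimCLR) loss for a batch of 2N normalized projections.

    z: (2N, d) tensor; z[i] and z[i + N] are two augmented views of sample i.
    """
    n = z.shape[0] // 2
    sim = z @ z.t() / temperature        # pairwise cosine similarities as logits
    sim.fill_diagonal_(float("-inf"))    # a view is never its own positive
    # The positive for row i is row i + N (and vice versa).
    targets = torch.arange(2 * n, device=z.device).roll(n)
    return F.cross_entropy(sim, targets)
```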

The dataset is in 0-1 format: 0 means dissimilar, 1 means similar, with 100k pairs of each. The corpus contains 2 million sentences in total, so some samples never participate in training. One difference: the first sentence is very short, around 5-6 characters, while the second is long, around 50 characters. For example, searching Taobao for "XXX snacks" returns recommendations like "XXX shop XXX flavor XXX bread". I train with CoSENT, attaching a dimensionality-reduction layer after BERT to produce 128-dimensional text feature vectors, expecting similar samples to be close and dissimilar samples to be far apart. After about 3 epochs of fine-tuning, the Spearman score is around 0.86. I then generate feature vectors for all texts, build a vector index (using a mature third-party framework, so index construction is not where the error is), and query for the nearest vectors, but positive samples are rarely retrieved and the MRR is also very poor. I suspect the distances are still not pulled apart. Did you run into a similar problem when using this method? ~~this may be off topic~~
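One quick way to test the "distances not pulled apart" suspicion is to compare the cosine-similarity distributions of positive and negative pairs directly; a minimal NumPy sketch, where `emb_a`, `emb_b`, and `labels` are illustrative names for the paired embeddings and the 0/1 labels:

```python
import numpy as np

def similarity_gap(emb_a, emb_b, labels):
    """Compare cosine similarities of positive vs. negative pairs.

    emb_a, emb_b: (N, 128) arrays, one row per sentence of each pair.
    labels:       (N,) array of 0/1 pair labels.
    """
    # Normalize so the row-wise dot product equals cosine similarity.
    emb_a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    emb_b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    sims = np.sum(emb_a * emb_b, axis=1)
    pos, neg = sims[labels == 1], sims[labels == 0]
    print(f"positive pairs: mean={pos.mean():.3f} std={pos.std():.3f}")
    print(f"negative pairs: mean={neg.mean():.3f} std={neg.std():.3f}")
    # Heavily overlapping distributions would explain poor nearest-neighbor
    # recall and MRR even when the Spearman score looks fine.
    return pos, neg
```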

1. `app.exec_()` is PyQt4's method; `app.exec()` is PyQt5's method (a version-agnostic call is sketched after this list). 2. I found that the math equation displays well on Windows but does not display on Linux, as described in #109, ...
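Where one codebase must run under both bindings, a small shim avoids the `exec_`/`exec` split; a minimal sketch, assuming PyQt5 is installed (under PyQt4 the import would come from `PyQt4.QtGui` instead):

```python
import sys
from PyQt5.QtWidgets import QApplication  # PyQt4: from PyQt4.QtGui import QApplication

app = QApplication(sys.argv)
# PyQt5 offers exec(); PyQt4 predates it (exec was a keyword in Python 2),
# so fall back to the exec_() alias, which both bindings provide.
run = getattr(app, "exec", app.exec_)
sys.exit(run())
```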

# Problem description: Although every test case gets AC, one number seems to be computed wrongly. # Sample: **Sample In** `169 5 2` **Sample Out** `169 = 6^2 + 6^2 + 6^2 + 6^2 + 5^2` Let's check: 4 × 6² + 5² = 144 + 25 = 169. ![verification screenshot](https://user-images.githubusercontent.com/43681138/63212905-600ad800-c13d-11e9-9abb-87928a66d750.png) The answer is correct. # Actual code: copied from GitHub, compiled with g++. ...

Just asking: with the CoSENT loss, what value does it finally converge to, or how does convergence usually behave? I find that after half an epoch of training it barely converges any further. My dataset is 200k 0-1 sample pairs: 100k positive and 100k negative. I fine-tune chinese-roberta-wwm-ext; the downstream task is to make the 128-dimensional vectors, obtained by passing the last hidden state through max pooling, a fully connected layer, and normalization, as close as possible for similar pairs. As for other parameters: batch size 16, learning rate 2e-5 adjusted with torch OneCycleLR (similar to warmup), training for 5...
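The head described above could look like the following minimal sketch, assuming Hugging Face `transformers` and PyTorch (the class name and masking details are illustrative, not the asker's exact code):

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel

class SentenceEncoder(torch.nn.Module):
    """BERT last hidden state -> max pool -> fully connected -> L2 normalize."""

    def __init__(self, name="hfl/chinese-roberta-wwm-ext", dim=128):
        super().__init__()
        self.bert = AutoModel.from_pretrained(name)
        self.fc = torch.nn.Linear(self.bert.config.hidden_size, dim)

    def forward(self, input_ids, attention_mask):
        hidden = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        # Mask out padding positions before max pooling over the sequence.
        hidden = hidden.masked_fill(attention_mask.unsqueeze(-1) == 0, -1e9)
        pooled = hidden.max(dim=1).values
        return F.normalize(self.fc(pooled), dim=-1)  # unit 128-dim vectors
```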

```
/usr/lib/gcc/x86_64-linux-gnu/7/include/tmmintrin.h:185:1: error: inlining failed in call to always_inline '__m128i _mm_alignr_epi8(__m128i, __m128i, int)': target specific option mismatch
 _mm_alignr_epi8(__m128i __X, __m128i __Y, const int __N)
```
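For what it's worth, this error usually means the translation unit was built without SSSE3 enabled: `_mm_alignr_epi8` in `tmmintrin.h` is an SSSE3 intrinsic declared `always_inline`, and GCC refuses to inline it when the matching target option is missing. Compiling with `-mssse3` (or `-march=native` on a machine that supports it) is the usual fix.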