Han Xiao
You may use [`bert-as-service`](https://github.com/hanxiao/bert-as-service) for a quick evaluation yourself. Sentence-level and ELMo-like token-level embeddings are fairly easy to obtain with this service.
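For reference, a minimal sketch of the client side (assuming a `bert-serving-start` server is already running; the model path in the comment is a placeholder):

```python
from bert_serving.client import BertClient

# Assumes a server was started separately, e.g. (shell):
#   bert-serving-start -model_dir /path/to/uncased_L-12_H-768_A-12 -num_worker=1
bc = BertClient()

# Sentence-level embeddings: one fixed-size vector per input string
vecs = bc.encode(['First do it', 'then do it right'])
print(vecs.shape)  # e.g. (2, 768) for a BERT-base model
```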
@abhinandansrivastava then perhaps [try different pooling strategies using bert-as-service](https://github.com/hanxiao/bert-as-service/#q-what-are-the-available-pooling-strategies)
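The pooling strategy is fixed at server start; a rough sketch of the options (strategy names per the linked FAQ, paths are placeholders):

```python
# Pooling is a server-side choice, set at start time (shell), e.g.:
#   bert-serving-start -model_dir /path/to/bert -pooling_strategy REDUCE_MEAN
#   bert-serving-start -model_dir /path/to/bert -pooling_strategy REDUCE_MAX
#   bert-serving-start -model_dir /path/to/bert -pooling_strategy NONE  # token-level
from bert_serving.client import BertClient

bc = BertClient()
# With -pooling_strategy NONE the service returns ELMo-like per-token embeddings,
# i.e. one vector per token instead of one vector per sentence
tokens = bc.encode(['hey you'])
print(tokens.shape)  # (1, max_seq_len, 768) when pooling is NONE
```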
You may use `bert-as-service` to extract features with a fine-tuned model: https://github.com/hanxiao/bert-as-service/#serving-a-fine-tuned-bert-model It will also generate a single file for the frozen model.
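A sketch of what that looks like, per the linked docs (all paths and the checkpoint name are placeholders):

```python
# Start the server with both the pretrained BERT and your fine-tuned checkpoint
# (shell); -tuned_model_dir / -ckpt_name are the flags from the linked docs:
#   bert-serving-start -model_dir /path/to/uncased_L-12_H-768_A-12 \
#                      -tuned_model_dir /path/to/fine_tuned_model \
#                      -ckpt_name model.ckpt-12345
from bert_serving.client import BertClient

bc = BertClient()
# Features now come from the fine-tuned weights rather than the vanilla model
vec = bc.encode(['a sentence encoded by the fine-tuned model'])
```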
@googlebot I signed it!
One common mistake is that people think they are working with Fashion-MNIST when they are actually working with the original MNIST. This mistake can be found repeatedly in #110 #47 #36 #119...
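A quick sanity check is to load the data with this repo's own loader and eyeball a few samples (a sketch assuming `utils/mnist_reader.py` is on the path and the files live in `data/fashion`):

```python
import matplotlib.pyplot as plt
from utils import mnist_reader

# Load the training split with the loader shipped in this repo
X_train, y_train = mnist_reader.load_mnist('data/fashion', kind='train')
print(X_train.shape)  # (60000, 784)

# The samples should be clothing items, not handwritten digits
classes = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
           'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
fig, axes = plt.subplots(1, 5)
for ax, img, label in zip(axes, X_train[:5], y_train[:5]):
    ax.imshow(img.reshape(28, 28), cmap='gray')
    ax.set_title(classes[label])
    ax.axis('off')
plt.show()
```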
I'm reopening this issue as @moonblue333 and @yuenuting argue that they are using the right dataset. I'm also marking this issue as "help wanted" and welcome the community to validate...
Again, @moonblue333, please do not take questioning your result as a deliberate attempt to hurt you. There is no drama or discrimination here, as you pictured. I see you open...
Hi @moonblue333 @yuenuting, it looks like you are using ROC and AUC as the evaluation metrics. https://github.com/yuenuting/incremental-learning-world-record-mnist-fashion/blob/4a9a1b7992b9209f5e4a2f91cf246c8e7c58746a/tpj.py#L627-L634 Note that our benchmark table is based on the **mean accuracy**. To...
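For clarity, the table reports plain multi-class accuracy averaged over the test set; a minimal sketch of the distinction (toy labels, hypothetical values):

```python
import numpy as np
from sklearn.metrics import accuracy_score

# Toy ground-truth and predicted labels (hypothetical values)
y_true = np.array([0, 1, 2, 2, 9])
y_pred = np.array([0, 1, 2, 3, 9])

# Mean accuracy = fraction of exactly correct predictions; this is what the
# benchmark table reports. ROC/AUC needs continuous scores and is usually
# computed per class for a 10-class problem, so the numbers are not comparable.
print(accuracy_score(y_true, y_pred))  # 0.8
```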
I can confirm this is a bug, and it is highly likely the cause of https://github.com/jina-ai/docarray/issues/426
@willmcgugan one solution is to use an environment variable in setup.py to control the "subtracted" dependency, for example: https://github.com/jina-ai/jina/blob/c056fad074ae6f46a61b193fd89af6b7f0751be8/setup.py#L135
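The pattern looks roughly like this (a sketch; the env var `MYPKG_NO_EXTRAS` and the package names are hypothetical, not from the linked setup.py):

```python
import os
from setuptools import setup

# Full dependency list (hypothetical package names)
install_requires = ['numpy', 'requests', 'heavy-extra']

# "Subtract" an optional heavy dependency when an env variable is set at
# build/install time, e.g.:  MYPKG_NO_EXTRAS=1 pip install .
if os.environ.get('MYPKG_NO_EXTRAS'):
    install_requires = [d for d in install_requires if d != 'heavy-extra']

setup(
    name='mypkg',  # hypothetical
    version='0.0.1',
    install_requires=install_requires,
)
```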