Koichi Yasuoka

Results: 5 issues by Koichi Yasuoka

… for some "brand-new" circumstances.

I've almost finished building the [UD_Classical_Chinese-Kyoto](https://github.com/UniversalDependencies/UD_Classical_Chinese-Kyoto/tree/dev) Treebank, and now I'm trying to make a Classical Chinese model for NLP-Cube (please check my [diary](https://srad.jp/~yasuoka/journal/629704/)). But in my model sentence_accuracy...
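
For orientation, a minimal sketch of how such a model would be used once trained; the "lzh" model name is an assumption (NLP-Cube ships no official Classical Chinese model), and the entry attributes follow NLP-Cube's documented 1.x API, which may differ by version:

```python
# Minimal sketch, assuming a locally trained model registered as "lzh";
# attribute names follow the NLP-Cube 1.x API and may differ by version.
from cube.api import Cube

cube = Cube(verbose=True)
cube.load("lzh")          # hypothetical Classical Chinese model

sentences = cube("不入虎穴不得虎子")   # tokenize, tag, parse
for sentence in sentences:
    for entry in sentence:
        print(entry.word, entry.upos, entry.head, entry.label)
```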

enhancement
help wanted

Thank you for releasing JGLUE, but I could not evaluate my [deberta-base-japanese-aozora](https://huggingface.co/KoichiYasuoka/deberta-base-japanese-aozora). There seem to be two problems:

* `DeBERTaV2ForMultipleChoice` requires `transformers` v4.19.0 or later, but JGLUE requires v4.9.2
* Fast...
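
For reference, a minimal sketch of the multiple-choice evaluation path once `transformers` is new enough; the prompt and choices are placeholders rather than JGLUE data, and the classification head is untrained until fine-tuned:

```python
# Minimal sketch, assuming transformers >= 4.19.0 (where DeBERTa-V2
# gained a multiple-choice head); prompt and choices are placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_name = "KoichiYasuoka/deberta-base-japanese-aozora"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMultipleChoice.from_pretrained(model_name)

prompt = "これは質問の例です。"            # placeholder question
choices = ["選択肢その一", "選択肢その二"]  # placeholder candidates

# encode each (prompt, choice) pair, then add a batch dimension:
# multiple-choice models expect (batch_size, num_choices, seq_len)
enc = tokenizer([prompt] * len(choices), choices,
                return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}

with torch.no_grad():
    logits = model(**inputs).logits   # shape: (1, num_choices)
print("predicted choice:", logits.argmax(-1).item())
```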

enhancement

Hi, my colleagues and I have released [UD-Kanbun](https://github.com/KoichiYasuoka/UD-Kanbun), a Python-based tokenizer, POS-tagger, and dependency parser for classical Chinese texts. And now we are investigating sentence segmentation. I compared [UDPipe](http://ufal.mff.cuni.cz/udpipe/models#universal_dependencies_24_models) (with our [UD_Classical_Chinese-Kyoto](https://github.com/UniversalDependencies/UD_Classical_Chinese-Kyoto/)...
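
For context, a minimal sketch of the UD-Kanbun pipeline as documented in its README (`pip install udkanbun`); the example sentence is a placeholder:

```python
# Minimal sketch of the documented UD-Kanbun usage.
import udkanbun

lzh = udkanbun.load()        # tokenizer + POS-tagger + dependency parser
s = lzh("不入虎穴不得虎子")   # parse a placeholder sentence
print(s)                      # prints CoNLL-U style output
```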

Thank you for releasing [bert-small-japanese-fin](https://huggingface.co/izumi-lab/bert-small-japanese-fin) and other ELECTRA models for FinTech. But I've found they tokenize "四半期連結会計期間末日満期手形" in a bad way:

```
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("izumi-lab/bert-small-japanese-fin")
>>> tokenizer.tokenize("四半期連結会計期間末日満期手形")
...
```
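
One way to see where the tokenization goes wrong is to probe the vocabulary for the sub-words one would expect 四半期連結会計期間末日満期手形 to split into; the piece list below is illustrative, not taken from the model:

```python
# Diagnostic sketch: check which expected sub-words (and their "##"
# continuation forms) are actually in the vocabulary.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("izumi-lab/bert-small-japanese-fin")
vocab = tokenizer.get_vocab()
for piece in ["四半期", "連結", "会計", "期間", "末日", "満期", "手形"]:
    print(piece, piece in vocab, ("##" + piece) in vocab)
```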

bug
enhancement