Liam Smith

2 comments by Liam Smith

Currently, the encoding method used by `globals_helper.tokenizer` produces a much higher token count when counting tokens in Chinese text. For example:

```
text = '唐高宗仪凤二年春天,六祖大师从广州法性寺来到曹溪南华山宝林寺,\...
```
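A minimal sketch of the discrepancy, assuming the tokenizer is backed by tiktoken's GPT-2 style BPE (the sample string and the comparison encoding `cl100k_base` are illustrative, not taken from the original comment):

```python
import tiktoken

# Short Chinese sample standing in for the truncated text above.
text = "唐高宗仪凤二年春天"

gpt2_enc = tiktoken.get_encoding("gpt2")          # older BPE, splits CJK into many byte-level tokens
cl100k_enc = tiktoken.get_encoding("cl100k_base") # newer BPE with better CJK coverage

print("characters:        ", len(text))
print("gpt2 tokens:       ", len(gpt2_enc.encode(text)))
print("cl100k_base tokens:", len(cl100k_enc.encode(text)))
```

With a GPT-2 style encoding, each Chinese character often expands to two or three byte-level tokens, which is why the reported token count for Chinese text can be several times the character count.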

Upgrade langchain; it worked for me: `pip install --upgrade langchain`