Hi
My attempt to add the 16k models to model_token_mapping did not change the token_max value used when calling the map_reduce chain
You were right. For now I am working around the problem by changing the value of token_max directly, but token_max really needs to vary depending on the model
I tried to add a new model to openai.py in the langchain framework
I temporarily solved the problem by modifying the value directly; I'm hoping for a better solution
> @SinaArdehali I have contacted the langchain team; they are working on some fixes around this issue.

Good news! Thank you.
> @Ooho1997 @SinaArdehali My discussion with them is more about the map_reduce implementation itself. If you want to be able to set `token_max`, here is how you can do that...
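(The commenter's original snippet is not reproduced above. Below is a minimal sketch of one way to raise `token_max`, assuming a langchain version where `load_summarize_chain(..., chain_type="map_reduce")` returns a `MapReduceDocumentsChain` whose reduce step is a `ReduceDocumentsChain` exposing a `token_max` field; the model name and the 12000 value are just illustrative.)

```python
from langchain.chat_models import ChatOpenAI
from langchain.chains.summarize import load_summarize_chain

# Use a 16k-context model for both the map and reduce steps.
llm = ChatOpenAI(model_name="gpt-3.5-turbo-16k-0613", temperature=0)

chain = load_summarize_chain(llm, chain_type="map_reduce")

# The reduce step defaults to token_max=3000 regardless of the model's
# context window, so override it explicitly. In older releases the
# attribute may be named differently (e.g. combine_document_chain).
chain.reduce_documents_chain.token_max = 12000

# summary = chain.run(docs)  # docs is a List[Document]
```

Depending on the release, `load_summarize_chain` may also accept a `token_max=` keyword directly, which avoids reaching into the chain's attributes.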
You can try adding the 0613 models in langchain's openai.py
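For reference, a sketch of that kind of local patch, assuming your installed version keeps the context sizes in the `model_token_mapping` dict inside `langchain/llms/openai.py` (the dict that `modelname_to_contextsize` looks up); the surrounding entries shown here are illustrative and may differ from your copy.

```python
# langchain/llms/openai.py -- local patch (surrounding entries are illustrative)
model_token_mapping = {
    "gpt-4": 8192,
    "gpt-3.5-turbo": 4096,
    "gpt-3.5-turbo-0613": 4096,
    # June 2023 (0613) snapshots with the larger context window:
    "gpt-3.5-turbo-16k": 16385,
    "gpt-3.5-turbo-16k-0613": 16385,
    # ... other entries unchanged ...
    "text-davinci-003": 4097,
}
```

Note that this only affects what `modelname_to_contextsize` reports; as discussed above, the map_reduce chain's `token_max` still has to be set separately.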
Thank you, bro. It's very detailed.
In my testing, raising the model precision from float16 to float32 greatly reduces the mixed Chinese/English output problem, but it still occurs occasionally.