Hai

9 comments by Hai

![image](https://github.com/hwchase17/langchain/assets/38323944/b4d650e6-cdcf-4fb8-be82-d6e86c82a663) My attempt to add the 16k models via model_token_mapping did not affect the token_max value when calling the map_reduce chain.
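
For reference, this is roughly the kind of edit described above, as a sketch assuming a langchain version whose `langchain/llms/openai.py` defines `model_token_mapping` at module level (the exact entries and window sizes vary between releases):

```python
# Excerpt of model_token_mapping in langchain/llms/openai.py, with the 16k
# variants added by hand (window sizes per OpenAI's docs; other entries omitted).
model_token_mapping = {
    "gpt-4": 8192,
    "gpt-3.5-turbo": 4096,
    "gpt-3.5-turbo-16k": 16385,
    "gpt-3.5-turbo-16k-0613": 16385,
}
```

Note that this mapping is only read by `modelname_to_contextsize()` / `max_tokens_for_prompt()`; the map_reduce chain takes its `token_max` from a separate default in the chain code, which is why the edit above does not change it.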

You were right. I am temporarily working around the problem by changing the value of token_max, but token_max needs to vary depending on the model.
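
One possible workaround, as a minimal sketch: pick `token_max` from a per-model table and pass it into the chain call. This assumes a langchain version where `MapReduceDocumentsChain.combine_docs` still accepts a `token_max` keyword (the signature has moved between releases), and `TOKEN_MAX_BY_MODEL` is a hypothetical helper, not part of langchain:

```python
from langchain.chat_models import ChatOpenAI
from langchain.chains.summarize import load_summarize_chain
from langchain.docstore.document import Document

# Hypothetical per-model table; leave headroom below each model's context window.
TOKEN_MAX_BY_MODEL = {
    "gpt-3.5-turbo": 3000,
    "gpt-3.5-turbo-16k": 14000,
}

model_name = "gpt-3.5-turbo-16k"
llm = ChatOpenAI(model_name=model_name, temperature=0)
chain = load_summarize_chain(llm, chain_type="map_reduce")

docs = [Document(page_content="...long text chunk...")]  # output of a text splitter

# combine_docs() forwards token_max to the reduce step instead of the chain default.
summary, _ = chain.combine_docs(docs, token_max=TOKEN_MAX_BY_MODEL[model_name])
```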

![image](https://github.com/hwchase17/langchain/assets/38323944/f455033e-2c30-4cf5-972d-0193515f7b40) I tried adding a new model to the OpenAI integration in the langchain framework.

![image](https://github.com/hwchase17/langchain/assets/38323944/c5293bfa-fb4b-4582-a7b6-d8a37895aa28) I temporarily solved the problem by directly modifying the value; I'm hoping for a better solution.

> @SinaArdehali I have contacted the langchain team, they are working on some of the fixes around this issue.

Good news! Thank you.

> @Ooho1997 @SinaArdehali My discussion with them is more about the mapreduce implementation itself. If you want to be able to set `token_max` here is how you can do that...

![image](https://github.com/hwchase17/langchain/assets/38323944/82243375-a2da-4c49-9169-d44493bfba1e) You can try adding the 0613 models in langchain's openai.py.
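
The same kind of hand edit can also be applied at runtime; a sketch, assuming the import path and the 0613 context sizes below are right for your langchain/OpenAI versions:

```python
from langchain.llms.openai import model_token_mapping

# Register the 0613 snapshots so modelname_to_contextsize() recognizes them.
model_token_mapping.update({
    "gpt-3.5-turbo-0613": 4096,
    "gpt-4-0613": 8192,
})
```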

In my testing, raising the model precision from float16 to float32 greatly reduces the problem of mixed Chinese and English output, although it still happens occasionally.
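
The comment does not name the model or the loading code, but assuming a Hugging Face `transformers` checkpoint, the precision change looks roughly like this (the model id is a placeholder):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "your-model-id"  # placeholder; the comment does not say which model was tested

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Loading in float32 instead of float16 is the change that reduced the
# mixed Chinese/English output in the test described above.
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float32,  # instead of torch.float16
)
```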