`model gemini-pro-vision generateContentStream` with a Chinese prompt returns garbled chunks
Expected Behavior
With a Chinese prompt, `gemini-pro-vision` `generateContentStream` should return well-formed chunks with no garbled characters.
Actual Behavior
With a Chinese prompt, `gemini-pro-vision` `generateContentStream` returns garbled chunks that contain � (the Unicode replacement character).
Steps to Reproduce the Problem
1. Run the following code (the `GoogleGenerativeAI` setup shown here is the standard SDK initialization; `history` and `data` come from the surrounding application):

```js
import { GoogleGenerativeAI } from '@google/generative-ai';

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY);
const model = genAI.getGenerativeModel({ model: 'gemini-pro-vision' });

// `history` holds earlier text parts; `data` holds the base64 image payload.
const result = await model.generateContentStream([
  '用中文简体回答', // "Answer in Simplified Chinese"
  ...history,
  {
    inlineData: {
      data: data.base64,
      mimeType: data.mimeType,
    },
  },
]);
```
2. The streamed result looks like this (a description of Kung Pao chicken; note the garbled spans in the ingredient list): 宫保鸡丁是一道著名的川菜,起源于清朝山东巡抚丁宝桢的家厨,以花生、黄瓜、胡萝卜 、������、���丝等配料,再佐以干辣椒、花椒等调味料炒制而成。
3. The returned text contains ��� where multi-byte characters should be; a sketch of how the chunks were presumably collected follows these steps.
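For completeness, the chunks were presumably collected with something like the loop below. This is a sketch, not part of the original report; accumulating `chunk.text()` into a single string is an assumption about how the garbled output was captured.

```js
// Sketch of consuming the stream returned above; accumulating chunk.text()
// into one string is an assumption about how the garbled output was captured.
let text = '';
for await (const chunk of result.stream) {
  text += chunk.text();
}
console.log(text); // contains ��� where multi-byte UTF-8 characters were split
```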
Specifications
- Version: ^0.1.2
- Platform: Windows
Experiencing this issue as well: https://github.com/danny-avila/LibreChat/discussions/1398
Seems similar to https://github.com/google/generative-ai-js/issues/9
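If the root cause here matches #9 (an assumption), the ��� sequences are UTF-8 replacement characters produced when a multi-byte character is split across two network chunks and each chunk is decoded on its own instead of with a single stateful decoder. A minimal sketch (not SDK code) of the difference:

```js
// Not SDK code: a minimal sketch of why � appears when each network chunk is
// decoded independently instead of with one stateful decoder.
const bytes = new TextEncoder().encode('宫保鸡丁'); // 4 characters, 3 bytes each in UTF-8
const chunkA = bytes.slice(0, 4); // ends in the middle of the second character
const chunkB = bytes.slice(4);

// Decoding each chunk on its own yields replacement characters:
console.log(new TextDecoder().decode(chunkA) + new TextDecoder().decode(chunkB));
// -> "宫���鸡丁"

// A stateful decoder carries the partial byte sequence across chunks:
const decoder = new TextDecoder();
console.log(decoder.decode(chunkA, { stream: true }) + decoder.decode(chunkB));
// -> "宫保鸡丁"
```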
Are you using 0.1.3? The original report says ^0.1.2; is the version actually installed in node_modules 0.1.2 or 0.1.3?
Just wanted to comment in this repo, since the issue also exists in @langchain/google-genai, which I believe was developed by someone at Google, so hopefully we can get eyes on it from here.
The langchain "community" package is using @google/generative-ai at version ^0.1.0.
More details here: https://github.com/langchain-ai/langchainjs/issues/4113
If I'm reading that issue right, you don't need to change anything to get the latest version, since langchain already declares the dependency with a caret range (^), which allows any version up to the next major. Delete node_modules and your lockfile (package-lock.json or yarn.lock), reinstall with npm or yarn, and you should get the latest release, which is 0.2.1 at the moment.
To verify which version is actually installed (or whether you have duplicates), run `npm ls @google/generative-ai`; it will list every installed copy.
Is anyone still having this problem with the latest version?