Mark Daoust

Results: 254 comments of Mark Daoust

Thanks for all your work, everyone (especially @sgkouzias)! I just tweaked the order so that this new GPU debugging step comes after the step where you test the GPU. I...

Really, it has everything it needs; we're just waiting for the internal merge. It should be through soon.

> That's in contrast to the 300 requests per minute limit mentioned in the documentation

Where? Have you tried `gemini-1.5-flash`?

Hmmm... It goes through to_content**S** first; that catches `None`: https://github.com/google-gemini/generative-ai-python/blob/efead6bea6768f6f4a3d90d348647b0a54fe2435/google/generativeai/types/content_types.py#L300-L303 But should it let `""` through? Updated title.
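For context, the behavior in question can be sketched locally. This is a hypothetical simplification, not the SDK's actual implementation: `None` is rejected early, but an empty string is still a `str`, so it passes the check untouched.

```python
def to_contents(contents):
    # Hypothetical simplification of the SDK's to_contents check:
    # None is caught explicitly and rejected.
    if contents is None:
        raise TypeError("contents must not be None")
    # An empty string is still a str, so it slips through unchanged --
    # which is exactly the question raised above.
    if isinstance(contents, str):
        return [contents]
    return list(contents)

print(to_contents(""))  # the empty string is accepted: [""]
```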

Hey @aidoskanapyanov, thanks again for the help! I think this one is a simple fix.

> The one in the permission_types? I should submit it in a separate PR.

No, I meant the one at the end of the PR description. I think this is...

This is a known issue, the eng team is working on improving this.

> `response = chat.send_message(`

The SDK supports this, but that was never wired into chat:
https://github.com/google-gemini/generative-ai-python/pull/204
https://github.com/google-gemini/generative-ai-python/issues/290

That should be an easy fix: https://github.com/google-gemini/generative-ai-python/pull/341

@sykp241095 thanks for reporting. This is a limitation of Colab (long story). Try anywhere except Colab and you should get the chunks back as they are generated. Let me know...
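For reference, the usual streaming consumption pattern looks like the sketch below. The stub generator here stands in for `model.generate_content(prompt, stream=True)` so it runs without an API key; with the real SDK, each yielded chunk has a `.text` attribute, and outside Colab each piece is printed as soon as it arrives.

```python
def generate_content_stream():
    # Stub standing in for model.generate_content(prompt, stream=True);
    # it yields text pieces one at a time, the way the SDK yields chunks.
    for text in ["Streaming ", "works ", "outside Colab."]:
        yield text

parts = []
for chunk in generate_content_stream():
    # With the real SDK this would be: print(chunk.text, end="", flush=True)
    print(chunk, end="", flush=True)
    parts.append(chunk)
print()  # final newline after the stream completes
```

The `flush=True` matters in buffered environments: without it, output can appear all at once even though the chunks arrived incrementally.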

I'm not sure it needs anything other than: https://github.com/google-gemini/generative-ai-python/pull/342