Gemini 1.5 Flash Supervised Fine-Tuning Updates
Description of the feature request:
Increase the character limit in tuning jobs to take advantage of Gemini 1.5 Flash's 1,000,000-token context window.
What problem are you trying to solve with this feature?
Gemini 1.5 Flash has a very large context window, which potentially makes it ideal for extracting needles from haystacks of text. We would like to fine-tune Gemini 1.5 Flash to perform this task for us. Tuning is appropriate because the content we typically examine runs to hundreds of thousands of tokens, leaving no room for multi-shot prompting techniques.
Any other information you'd like to share?
No response
What is the current limit?
It appears to be 40,000 characters, per the following error:
CreateTunedModelRequest.tuned_model.tuning_task.training_data.examples.examples[9].text_input: text_input is too long. The maximum character count accepted is 40000.
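Given that the request fails only at submission time, a pre-flight check can surface over-limit examples before creating the tuning job. The sketch below is a minimal illustration, assuming the 40,000-character cap reported in the error above; the dict-based example structure is a simplified stand-in, not the SDK's actual types.

```python
# Pre-flight validation of tuning examples against the observed
# 40,000-character cap on text_input (value taken from the error above).
MAX_TEXT_INPUT_CHARS = 40_000  # assumption: cap observed in the API error


def oversized_examples(examples):
    """Return the indices of examples whose text_input exceeds the cap."""
    return [
        i
        for i, ex in enumerate(examples)
        if len(ex["text_input"]) > MAX_TEXT_INPUT_CHARS
    ]


# Hypothetical training data in the simplified format used here.
examples = [
    {"text_input": "short prompt", "output": "label"},
    {"text_input": "x" * 50_000, "output": "label"},  # exceeds the cap
]

print(oversized_examples(examples))  # → [1]
```

Running such a check locally lets you trim or drop over-limit examples instead of discovering the 40,000-character rejection one example at a time.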
Thanks for raising this issue. Have there been updates since the last comment, and is this request still active?
We are no longer trying to train with Gemini 1.5 Flash, but thank you for following up.
Marking this issue as stale since it has been open for 14 days with no activity. This issue will be closed if no further activity occurs.