Does VoiceInk leverage context caching?
I have a use case where I extract text from PDFs and textbooks, copy it to the clipboard, and then use dictation for note-taking. This drastically improves accuracy, to nearly 100%, except for single-word or very short dictations.
My text extractions from PDFs and textbooks usually run to around 50,000 words or more. I was wondering whether VoiceInk's prompts are structured to leverage context caching, as this could significantly reduce the cost and time of each dictation.
Yes, VoiceInk can normally make use of your copied text, but only in the post-processing step, not in the voice-transcription phase.
I think you misunderstood my question. What I'm asking is: does it work with prompt caching? I already know my copied text gets used in post-processing.
What I want to know is whether VoiceInk's prompt is structured in a way that lets prompt caching kick in. Does caching actually get triggered on the Gemini API's end?
https://ai.google.dev/gemini-api/docs/caching?lang=node
Sorry, it's not prompt caching; it's context caching.
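For what it's worth, the key requirement for Gemini's implicit context caching (per the docs linked above) is that requests share a long, identical prefix. So whether VoiceInk benefits depends on how it orders the prompt: the large, unchanging PDF extract needs to come first and the short per-dictation text last. Here's an illustrative sketch of that ordering — `build_prompt` and the prompt wording are hypothetical, not VoiceInk's actual code:

```python
# Sketch of a cache-friendly prompt layout (assumed, not VoiceInk's code).
# Prefix-based caches can only reuse the part of the prompt that is
# byte-identical across requests, so the big static context goes first
# and the varying dictation goes last.

def build_prompt(static_context: str, dictation: str) -> str:
    """Place the large static context before the varying dictation."""
    return (
        "Reference material (use it to correct the transcript):\n"
        f"{static_context}\n\n"
        "Transcript to clean up:\n"
        f"{dictation}"
    )

def shared_prefix_len(a: str, b: str) -> int:
    """Length of the common prefix between two prompts."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

# Imagine ~50,000 words extracted from a PDF here.
context = "..."
p1 = build_prompt(context, "first short dictation")
p2 = build_prompt(context, "second short dictation")
# Both prompts share everything up to the dictation text, so a
# prefix-based cache can reuse the expensive static part.
```

If instead the dictation were placed before the context, the prompts would diverge within the first few tokens and the cache would never match. Note also that implicit caching only applies above a model-specific minimum prefix length, and for guaranteed reuse you would create an explicit cache via the `caches` endpoint described in the linked docs.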