CallOptions for both langchaingo-side and provider-side cache control
Langchaingo's side:
Add options such as `CacherSkipGet bool` and `CacherSkipPut bool` that are understood only by the Cacher (as an `llms.Model` implementation), with clear semantics.
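A minimal sketch of the intended semantics, using simplified stand-in types rather than langchaingo's actual `llms.Model`, `llms.CallOptions`, and cache backend (the `WithCacherSkipGet`/`WithCacherSkipPut` constructors and the in-memory store are illustrative assumptions, not the real API):

```go
package cache

import "context"

// Model is a simplified stand-in for langchaingo's llms.Model; the real
// Cacher would wrap llms.Model and work with llms.MessageContent and
// llms.ContentResponse instead of plain strings.
type Model interface {
	GenerateContent(ctx context.Context, prompt string, opts ...CallOption) (string, error)
}

// CallOptions carries the proposed langchaingo-side cache controls.
type CallOptions struct {
	CacherSkipGet bool // do not read from the cache for this call
	CacherSkipPut bool // do not store this call's result in the cache
}

type CallOption func(*CallOptions)

// Hypothetical option constructors, mirroring the llms.WithXXX convention.
func WithCacherSkipGet() CallOption { return func(o *CallOptions) { o.CacherSkipGet = true } }
func WithCacherSkipPut() CallOption { return func(o *CallOptions) { o.CacherSkipPut = true } }

// Cacher wraps another Model; the Cacher* options are consumed here and are
// meaningless to providers, which simply ignore them.
type Cacher struct {
	wrapped Model
	store   map[string]string // stand-in for a real cache backend
}

func New(wrapped Model) *Cacher {
	return &Cacher{wrapped: wrapped, store: map[string]string{}}
}

func (c *Cacher) GenerateContent(ctx context.Context, prompt string, opts ...CallOption) (string, error) {
	var co CallOptions
	for _, opt := range opts {
		opt(&co)
	}
	if !co.CacherSkipGet {
		if resp, ok := c.store[prompt]; ok {
			return resp, nil // cache hit
		}
	}
	resp, err := c.wrapped.GenerateContent(ctx, prompt, opts...)
	if err != nil {
		return "", err
	}
	if !co.CacherSkipPut {
		c.store[prompt] = resp
	}
	return resp, nil
}
```

The key point is that the wrapper consumes these flags itself, so they stay model-agnostic.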
Provider's side:
Needed to support Anthropic prompt caching.
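For reference, Anthropic's prompt caching is enabled by attaching a `cache_control` object to individual content blocks in the request. A rough sketch of the resulting payload, with hypothetical Go structs (only the `cache_control` JSON shape follows Anthropic's documented API; the types and wiring here are illustrative, not langchaingo's anthropic client):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// CacheControl mirrors the cache_control object Anthropic expects on a
// content block to mark the prompt prefix ending at that block as cacheable.
type CacheControl struct {
	Type string `json:"type"` // currently "ephemeral"
}

// TextBlock is a minimal stand-in for a Messages API content block.
type TextBlock struct {
	Type         string        `json:"type"`
	Text         string        `json:"text"`
	CacheControl *CacheControl `json:"cache_control,omitempty"`
}

func main() {
	// A long, reusable system prompt marked as cacheable; later calls that
	// share this prefix can hit Anthropic's prompt cache.
	system := []TextBlock{
		{
			Type:         "text",
			Text:         "You are a helpful assistant. <long shared instructions>",
			CacheControl: &CacheControl{Type: "ephemeral"},
		},
	}
	body, _ := json.MarshalIndent(map[string]any{
		"model":      "claude-3-5-sonnet-20241022",
		"max_tokens": 1024,
		"system":     system,
		"messages": []map[string]any{
			{"role": "user", "content": "Summarize the instructions."},
		},
	}, "", "  ")
	fmt.Println(string(body))
}
```

A provider-side CallOption would need to decide which block(s) get the `cache_control` marker when building this request.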
Is it something all models have?
@lifejwang11 I don't understand your question
@leventov Is this feature unique to this model, or does it exist in all other models?
I don't know.
The langchaingo-side cache control (which I'm actually using; I've already implemented it in my fork of langchaingo but haven't published a PR yet) would be agnostic of the model, of course.
You can push the PR to main; I think everyone needs this feature.