Fix choosing the right transforms
- Shortens prompts and removes redundancy, since longer contexts take longer to process and hurt the smaller models
- Renames `provides` -> `fulfills` (not sure if it matters; I was just desperate to try things to get it to work)
- Removes the decision step and uses `llm_validator` instead (see the sketch after this list)
- Bumps the temperature (not sure if it matters)
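For context, `llm_validator` from instructor is normally attached to a field through an `Annotated` type. A minimal sketch of that pattern, assuming an older instructor version where `llm_validator` defaults to an OpenAI client; the `QueryCheck` model and the statement text are illustrative, not taken from this PR:

```python
from typing import Annotated

from pydantic import BaseModel, BeforeValidator
from instructor import llm_validator


class QueryCheck(BaseModel):
    # llm_validator asks an LLM whether the value satisfies the statement and
    # raises a ValueError (with the reason) if it does not, so the caller's
    # retry loop can re-prompt the model.
    summary: Annotated[
        str,
        BeforeValidator(llm_validator("is a faithful summary of the user query")),
    ]
```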
Looks good. I think I would stick with `provides` over `fulfills`, though. Other than that and the linting issues, I'm happy to merge.
Dropped the `llm_validator` because it uses OpenAI underneath and doesn't work with LlamaCpp out of the box without manual patching: https://github.com/jxnl/instructor/blob/cea534fd2280371d2778e0f043d3fe557cc7bc7e/instructor/dsl/validators.py#L74
Its typing was also somewhat complex, so I replaced it with something similar that captures the underlying logic and provided the field descriptions through `FieldInfo` when creating the transform model; it still works successfully.
from pydantic import create_model
from pydantic.fields import FieldInfo

# Dynamically built response model; per-field descriptions are passed via FieldInfo.
transform_model = create_model(
    "Transform",
    summary_of_query=(
        str,
        FieldInfo(
            description="A summary of the query and whether you think a transform is needed."
        ),
    ),
    transform_required=(
        bool,
        FieldInfo(
            description="Whether a transform is required for the query; sometimes the user may not need a transform.",
            default=True,
        ),
    ),
)
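For reference, a sketch of how the generated `transform_model` might be consumed downstream; the raw JSON response and the branch bodies are hypothetical, just to show how the two fields are read back:

```python
import json

# Hypothetical raw completion from the local model, already constrained to the
# Transform schema (e.g. via instructor or a JSON-schema grammar).
raw = '{"summary_of_query": "User asks for top-k results.", "transform_required": false}'

result = transform_model.model_validate(json.loads(raw))  # `.parse_obj(...)` on pydantic v1
if result.transform_required:
    ...  # apply the query transform
else:
    ...  # pass the query through unchanged
```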
Looks good, just make pre-commit happy and I'll merge.
Okay, let's fix tests on main. I'll merge this PR though.