griptape
Unstructured Output
- [x] I have read and agree to the contributing guidelines.
Is your feature request related to a problem? Please describe.
Users want a way to get structured-output-like behavior for non-deterministic output. For instance, if I have a Rule that says "Talk like a Pirate", I want something that automatically enforces (and automatically corrects) the LLM so that its output follows that Rule.
Describe the solution you'd like
- Use `EvalEngine` to automatically evaluate output on whether it follows the provided Rulesets.
- Provide a custom function that can perform whatever kind of check they'd like.

Describe alternatives you've considered
Build the loop myself:
```python
from griptape.engines import EvalEngine
from griptape.structures import Pipeline
from griptape.tasks import PromptTask

engine = EvalEngine(
    criteria="Determine whether the answer is spoken like a pirate.",
    evaluation_params=[EvalEngine.Param.INPUT, EvalEngine.Param.ACTUAL_OUTPUT],
)

pipeline = Pipeline(
    tasks=[
        PromptTask("What is 2 + 2? {% if args and args[0] %}{{ args[0] }}{% endif %}"),
    ]
)

score = -1
reason = None
# Re-run the pipeline, feeding the evaluator's critique back in,
# until the output scores well enough against the criteria.
while score < 0.5:
    pipeline.run(reason)
    score, reason = engine.evaluate(input=pipeline.input, actual_output=pipeline.output)
```
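For the "custom function" variant, the same evaluate-and-retry loop can be sketched in a library-agnostic way. Everything below is hypothetical illustration, not griptape's API: `run_until_valid`, `fake_llm`, and `pirate_check` are made-up names, with toy stand-ins for the LLM call and the check.

```python
from typing import Callable, Optional, Tuple

def run_until_valid(
    generate: Callable[[Optional[str]], str],
    check: Callable[[str], Tuple[float, str]],
    threshold: float = 0.5,
    max_attempts: int = 5,
) -> str:
    """Hypothetical retry loop: re-run `generate`, feeding the critique
    from `check` back in, until the score reaches `threshold`."""
    feedback: Optional[str] = None
    for _ in range(max_attempts):
        output = generate(feedback)
        score, feedback = check(output)
        if score >= threshold:
            return output
    raise RuntimeError("Output never satisfied the check")

# Toy stand-ins for an LLM call and a "talk like a pirate" check.
def fake_llm(feedback: Optional[str]) -> str:
    return "Arr, 2 + 2 be 4, matey!" if feedback else "2 + 2 = 4"

def pirate_check(output: str) -> Tuple[float, str]:
    ok = "Arr" in output
    return (1.0 if ok else 0.0), ("" if ok else "Say it like a pirate.")

print(run_until_valid(fake_llm, pirate_check))  # -> Arr, 2 + 2 be 4, matey!
```

A built-in subtask could encapsulate exactly this loop, with the `check` slot filled either by an `EvalEngine` or by a user-supplied callable.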
Opportunity for new subtask: https://github.com/griptape-ai/griptape/pull/1865