
Support `select()` with OpenAI models

Open · nchammas opened this issue 6 months ago · 7 comments

Trying a simple test of Guidance's select() with an OpenAI model:

from guidance import system, user, assistant
from guidance import select
from guidance.models import OpenAI


if __name__ == "__main__":
    model = OpenAI("gpt-4.1-mini")
    with system():
        model += "Pick a, b, or c."
    with assistant():
        model += select(['a', 'b', 'c'])
    print(model)

Yields an UnsupportedNodeError:

Traceback (most recent call last):
  File ".../openai-test.py", line 11, in <module>
    model += select(['a', 'b', 'c'])
  File ".../.venv/lib/python3.12/site-packages/guidance/models/_base/_model.py", line 110, in __add__
    self = self._apply_node(other)
           ^^^^^^^^^^^^^^^^^^^^^^^
  File ".../.venv/lib/python3.12/site-packages/guidance/models/_base/_model.py", line 139, in _apply_node
    for i, output_attr in enumerate(self._interpreter.run(node, sampling_params=self.sampling_params.copy())):
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File ".../.venv/lib/python3.12/site-packages/guidance/models/_base/_interpreter.py", line 36, in run
    yield from node.simplify()._run(self, **kwargs)
  File ".../.venv/lib/python3.12/site-packages/guidance/models/_openai_base.py", line 437, in rule
    yield from chunks
  File ".../.venv/lib/python3.12/site-packages/guidance/models/_base/_interpreter.py", line 36, in run
    yield from node.simplify()._run(self, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File ".../.venv/lib/python3.12/site-packages/guidance/_ast.py", line 387, in _run
    return interpreter.select(self, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File ".../.venv/lib/python3.12/site-packages/guidance/models/_base/_interpreter.py", line 73, in select
    return self.grammar(node, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File ".../.venv/lib/python3.12/site-packages/guidance/models/_base/_interpreter.py", line 67, in grammar
    raise UnsupportedNodeError(interpreter=self, node=node)
guidance.models._base._interpreter.UnsupportedNodeError: <guidance.models._openai.OpenAIInterpreter object at 0x11084ac90> does not support SelectNode(alternatives=(LiteralNode(value='a'), LiteralNode(value='b'), LiteralNode(value='c'))) of type <class 'guidance._ast.SelectNode'>

Is this something Guidance cannot support due to some limitation of OpenAI's API, or is an implementation possible?

nchammas commented Jul 05 '25

It looks like support for OpenAI is quite limited in general. Trying to call `gen(stop="\n")`, for example, yields:

ValueError: Stop condition not yet supported for OpenAI

I know Guidance is evolving rapidly, and I see a lot of churn in the tests and docs. Are there any remote model endpoints that Guidance currently supports well?

nchammas commented Jul 05 '25

> Is this something Guidance cannot support due to some limitation of OpenAI's API, or is an implementation possible?

This is honestly a limitation of OpenAI's API. If they ever expose an interface for selecting between a fixed set of strings, we can enable `select()` support. Likewise, if they ever expose an interface for regex-constrained generation, we can add `gen(regex=...)` support.

At the moment, OpenAI only supports JSON constraints, which we leverage to support `json`:

[image: screenshot demonstrating `json` support with OpenAI]
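For concreteness, here is a minimal sketch of that JSON path, using an enum-valued field to approximate `select()`. The Pydantic model, field name, and capture name below are illustrative only, and the exact `json` signature may vary across guidance versions:

from typing import Literal

from pydantic import BaseModel

from guidance import assistant, json as gen_json, system
from guidance.models import OpenAI


class Pick(BaseModel):
    # Constraining the field to a fixed set of values approximates select()
    choice: Literal["a", "b", "c"]


model = OpenAI("gpt-4.1-mini")
with system():
    model += "Pick a, b, or c."
with assistant():
    model += gen_json(name="pick", schema=Pick)
print(model["pick"])  # e.g. {"choice": "a"}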

With local models, we have full control over the entire inference stack and can thus run any constraint. We also expose experimental support for models served with vLLM (SGLang support should follow at some point), since we have tight integrations with those inference engines.
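For comparison, the same `select()` call from the original report works against a local model, where guidance drives decoding directly. The model name below is just an example of a small chat model; any Transformers-compatible chat model should do:

from guidance import assistant, select, system
from guidance.models import Transformers

# Loads the model locally via Hugging Face Transformers
model = Transformers("microsoft/Phi-3-mini-4k-instruct")
with system():
    model += "Pick a, b, or c."
with assistant():
    model += select(["a", "b", "c"], name="choice")
print(model["choice"])  # constrained to be exactly 'a', 'b', or 'c'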

That being said, `gen(stop=...)` is actually a good candidate for us to support for OpenAI sooner rather than later, as I believe OpenAI does indeed support that on their side.
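For reference, the raw chat completions API does accept a `stop` parameter, which is presumably the hook a guidance-side `gen(stop=...)` would map onto. A sketch against the plain openai client:

from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4.1-mini",
    messages=[{"role": "user", "content": "Name three fruits, one per line."}],
    stop=["\n"],  # the API cuts generation before the first stop sequence
)
print(response.choices[0].message.content)  # at most the first line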

hudson-ai commented Jul 07 '25

> With local models, we have full control over the entire inference stack and can thus run any constraint.

To be clear, are you saying that today it's not possible to get the full power of Guidance with any remote model inference service (not just OpenAI)?

nchammas commented Jul 07 '25

Yes, as guidance needs low-level integration with the inference engine itself.

That being said, you could, for example, deploy vLLM behind a hosted Azure endpoint and get full guidance support (as vLLM has the required integrations).
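As a sketch of what that looks like in practice: a vLLM server (started with `vllm serve <model>`) exposes an OpenAI-compatible endpoint with extensions such as `guided_choice`, so even the plain openai client can get `select()`-style constraints. Host, port, and model name below are placeholders:

from openai import OpenAI

client = OpenAI(base_url="http://my-vllm-host:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": "Pick a, b, or c."}],
    extra_body={"guided_choice": ["a", "b", "c"]},  # vLLM-specific extension
)
print(response.choices[0].message.content)  # exactly 'a', 'b', or 'c'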

hudson-ai commented Jul 07 '25

> That being said, you could, for example, deploy vLLM behind a hosted Azure endpoint and get full guidance support (as vLLM has the required integrations).

It would be great if we could point users (such as myself!) to a guide on how to deploy a model to a service that still supports the full power of Guidance, and then show Guidance in action with that remote service.

I do recall that the main README used to have examples of using Guidance with OpenAI, but for this hypothetical guide I am talking about showing off full Guidance support with some kind of remote service. Only being able to run Guidance at its full power with local models is a big limitation of the project.

If you agree that demonstrating full Guidance support for some remote service is (or should be) a goal of the project, I can create a separate issue to track this. Just let me know.

nchammas commented Jul 08 '25

Thank you for your feedback @nchammas. We are working on deploying a website with updated documentation over the next 1-2 weeks. One of the planned tutorials includes deploying a model to Azure OpenAI with guidance support.

nking-1 commented Jul 08 '25

Is this supported now?

adityaprakash-continue commented Sep 19 '25