
[BUG] [VSCode Extension] Open LLM Tool doesn't allow specifying the 'stop' parameter when api is 'completion'

cedricvidal opened this issue 1 year ago • 4 comments

Describe the bug
The Open LLM Tool doesn't allow specifying the 'stop' parameter when the api is 'completion'. This parameter is important when using the completion api to control when the model stops generating tokens.

How To Reproduce the bug
Steps to reproduce the behavior, and how frequently the bug occurs: reproduced every time.

The screenshot below shows that the 'stop' parameter cannot be set when using the 'completion' api.

Screenshot 2024-03-21 at 9 07 51 PM

Screenshots

  1. On the VS Code primary side bar > the Prompt flow pane > quick access section, find the "install dependencies" action. Click it and attach the screenshots there.
Screenshot 2024-03-21 at 9 09 09 PM

Environment Information

  • Promptflow Package Version using pf -v: 1.6.0
  • Operating System: macOS 13.6.4 (22G513)
  • Python Version using python --version: 3.11.8
  • VS Code version: 1.87.2
  • Prompt flow extension version: v1.14.0
  • On the VS Code bottom pane > Output pivot > "prompt flow" channel, find any error messages that could be relevant to the issue and paste them here. That would be helpful for our troubleshooting.
  • If the code to reproduce the issue is public and you want to share it with us, please share the link.

Additional context
Add any other context about the problem here.

cedricvidal · Mar 22 '24 04:03

This is a feature request rather than an extension bug: there is no stop parameter described in open_model_llm.yaml.

@dans-msft @Adarsh-Ramanathan Please help to take a look, thanks.

Joouis · Mar 28 '24 09:03

Hello @Joouis, thank you for your reply. To clarify, I classified this as a bug because, as it stands, the tool offers a 'completion' option in the api drop-down menu, but that option cannot be used in practice to consume the Llama 2 7b completion model (as opposed to the chat model). I believe this would affect any completion model, though I haven't tried others.

As a workaround, I’m using a Python node.
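
For reference, here is a minimal sketch of such a workaround node: a promptflow Python tool that calls the completion endpoint directly so the 'stop' sequences can be passed. The endpoint URL, authentication, request payload, and response schema below are assumptions and will differ depending on how the Llama 2 completion model is deployed.

```python
import json
import urllib.request

from promptflow import tool


@tool
def complete_with_stop(prompt: str, endpoint_url: str, api_key: str,
                       stop: list = None, max_tokens: int = 256) -> str:
    # 'stop' is the parameter the Open LLM Tool does not currently expose
    # when the api is 'completion'.
    payload = {
        "prompt": prompt,
        "max_tokens": max_tokens,
        "stop": stop or [],
    }
    request = urllib.request.Request(
        endpoint_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(request) as response:
        body = json.loads(response.read())
    # Response parsing is deployment-specific; adjust to your endpoint's schema.
    return body.get("choices", [{}])[0].get("text", "")
```

In the flow, this node stands in for the Open LLM Tool node in the completion case until the tool itself exposes 'stop'.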

cedricvidal · Mar 28 '24 14:03

@Joouis, I think this has been incorrectly assigned - based on other openLLM bugs in this repo, the correct owner is probably @youngpark.

Adarsh-Ramanathan · Apr 09 '24 04:04

@youngpark, could you please take a look at this open model llm issue?

chjinche · Apr 11 '24 03:04