
A framework for prompt tuning using Intent-based Prompt Calibration

Results: 16 AutoPrompt issues

Hi, with the latest changes I got a new error when running `run_generation_pipeline.py`:
```
Traceback (most recent call last):
  File "path_to_repo/AutoPrompt/run_generation_pipeline_alena.py", line 64, in
    best_prompt = ranker_pipeline.run_pipeline(opt.num_ranker_steps)
  File "path_to_repo/AutoPrompt/optimization_pipeline.py", line...
```

[pipenv](https://pipenv.pypa.io/) is a modern, easy-to-use tool for managing Python versions and dependencies, including virtual environments. This PR adds support for pipenv.

![PixPin_2024-02-29_12-04-59](https://github.com/Eladlev/AutoPrompt/assets/127315512/a5642eb6-db15-4049-8e07-ffe2698f25bc) I am getting this error when using the tool. My educational background is not in computers and my programming foundation is weak, so I hope the reply is easy to...

The output of `sample_batches` **doesn't have the sample keys**:
```python
samples_list = [
    element for sublist in samples_batches for element in sublist["samples"]
]
```
Reference: https://github.com/Eladlev/AutoPrompt/blob/c640cc0108e78601b474b380462a1a6274318fcc/optimization_pipeline.py#L192C25-L192C32
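A defensive variant of that flattening could tolerate batches that lack the `samples` key. This is a minimal sketch, not AutoPrompt's actual code — the `dict.get` fallback is my assumption about one way to avoid the `KeyError`:

```python
# Minimal sketch of the flattening from optimization_pipeline.py, with a
# defensive fallback (the .get() guard is an assumption, not AutoPrompt code).
samples_batches = [
    {"samples": ["row 1", "row 2"]},
    {"metadata": "batch without a 'samples' key"},  # hypothetical malformed batch
    {"samples": ["row 3"]},
]

samples_list = [
    element
    for sublist in samples_batches
    for element in sublist.get("samples", [])  # skip batches lacking the key
]

print(samples_list)  # -> ['row 1', 'row 2', 'row 3']
```

Whether silently skipping a malformed batch is the right behavior (rather than raising) depends on where the missing key comes from upstream.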

Hi! Great tool. When I run the code with an initial dataset, the synthetic data rows get added to the dumped dataset, but I think it would be great to also...

enhancement

Is there any plan to support local offline models?

I have a prompt that is used to generate a SQL query from input text given by a user. I am trying to optimize the prompt using run_generation_pipeline.py, but I am...

Currently, not all output_schemas support customized output parsers. For example, in the [classification output schemes](https://github.com/Eladlev/AutoPrompt/blob/main/prompts/meta_prompts_classification/output_schemes.py), only JSON schemas are available, meaning this prompt only works well for the models which...
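As an illustration of what such a customized parser could look like (a hypothetical sketch — `parse_yes_no` is not part of AutoPrompt), a regex-based parser can recover a Yes/No label from free-form output when a model does not reliably emit JSON:

```python
import re


def parse_yes_no(text: str) -> str:
    """Hypothetical output parser: extract a Yes/No label from free-form
    model output instead of requiring a strict JSON schema."""
    match = re.search(r"\b(yes|no)\b", text, re.IGNORECASE)
    if match is None:
        raise ValueError(f"no Yes/No answer found in {text!r}")
    return match.group(1).capitalize()


print(parse_yes_no("The review does contain a spoiler. Answer: Yes."))  # -> Yes
```

A parser like this is lossier than JSON (it takes the first Yes/No it finds), so it trades robustness to non-JSON models against precision on verbose outputs.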

enhancement

**config/config_default.yml**
```yaml
llm:
    type: 'HuggingFacePipeline'
    name: 'Qwen-14B-Chat'
    max_new_tokens: 4096
```
Command:
```
python run_pipeline.py \
    --prompt "Does this movie review contain a spoiler? answer Yes or No" \
    --task_description "Assistant is an...
```

![image](https://github.com/user-attachments/assets/d97b62a3-20d1-4ba6-8977-a6f37013df71) Can I load data in this place rather than through Argilla?