PromptWizard
Task-Aware Agent-driven Prompt Optimization Framework
I followed the instructions at https://github.com/microsoft/PromptWizard?tab=readme-ov-file#steps-to-be-followed-for-custom-datasets, then ran demos\scenarios\dataset_scenarios_demo.ipynb and got the warning `"No module named 'azure'"`. It seems a dependency is missing? Output: `mutated_prompt_generation=Sorry, I am...`
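A quick way to confirm which optional packages the notebook is missing is to probe the imports directly before installing anything. The sketch below assumes the failing import is `azure.identity` (shipped as the `azure-identity` distribution) alongside `ipywidgets`; adjust the list to match your actual traceback.

```python
import importlib
import subprocess
import sys

# Candidate (module, pip package) pairs -- these names are assumptions;
# edit them to match whichever imports your traceback reports as missing.
candidates = [("azure.identity", "azure-identity"), ("ipywidgets", "ipywidgets")]

missing = []
for module, package in candidates:
    try:
        importlib.import_module(module)
    except ImportError:
        missing.append(package)

if missing:
    # Install into the same interpreter the notebook kernel is running on.
    subprocess.check_call([sys.executable, "-m", "pip", "install", *missing])
```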
Fixes for issue #10: fixed missing dependencies (azure and ipywidgets) and adopted a .gitignore with useful ignores.
# Description
Can we optimize prompts that are dynamic, i.e. prompts that contain variables? An example prompt can be: """ You are an AI chatbot; you must never answer/respond to...
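One common workaround (not a feature PromptWizard documents) is to keep the variable slots as named placeholders, optimize the surrounding instruction as static text, and substitute concrete values only at call time. A minimal sketch with `string.Template` and hypothetical placeholder names:

```python
from string import Template

# Hypothetical dynamic prompt: the optimizer only ever sees the static text,
# while $forbidden_topic and $user_request stay as untouched placeholders.
dynamic_prompt = Template(
    "You are an AI chatbot. You must never answer/respond to questions about $forbidden_topic. "
    "Help the user with: $user_request"
)

# At inference time, fill the placeholders with concrete values.
rendered = dynamic_prompt.substitute(
    forbidden_topic="internal credentials",
    user_request="Summarise this support ticket.",
)
print(rendered)
```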
How to configure a custom OpenAI API endpoint and key
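For a plain (non-Azure) OpenAI-compatible endpoint, the usual pattern is to point the client at a custom `base_url` and read the key from the environment. The sketch below uses the `openai` v1 Python client directly; the environment variable names and the gateway URL are assumptions, and the keys PromptWizard's own config reads may differ, so check the README's `.env` template.

```python
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["OPENAI_API_KEY"],
    # Hypothetical gateway URL; replace with your custom endpoint.
    base_url=os.environ.get("OPENAI_API_BASE", "https://my-gateway.example.com/v1"),
)

response = client.chat.completions.create(
    model=os.environ.get("OPENAI_MODEL_NAME", "gpt-4o"),
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)
```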
Bumps [llama-index-core](https://github.com/run-llama/llama_index) from 0.10.21 to 0.10.38. Release notes (sourced from llama-index-core's releases):
- v0.10.38: no release notes provided.
- v0.10.37: no release notes provided.
- v0.10.36: no release notes provided.
- 2024-05-07 (v0.10.35): llama-index-agent-introspective...
Bumps [tqdm](https://github.com/tqdm/tqdm) from 4.66.1 to 4.66.3. Release notes (sourced from tqdm's releases):
- tqdm v4.66.3 stable: cli: eval safety (fixes CVE-2024-34062, GHSA-g7vv-2v7x-gj9p)
- tqdm v4.66.2 stable: pandas: add DataFrame.progress_map (#1549); notebook: fix...
This is my code:

```
gp = GluePromptOpt(promptopt_config_path,
                   setup_config_path,
                   dataset_jsonl=None,
                   data_processor=None)
best_prompt, expert_profile = gp.get_best_prompt(use_examples=False,
                                                 run_without_train_examples=True,
                                                 generate_synthetic_examples=False)
print(f"best_prompt:{best_prompt}")
print("---------------------------")
print(expert_profile)
```

Output result: best_prompt is None. Only a few were...
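A small defensive check around the same call makes the symptom easier to inspect; this is only a sketch that assumes `get_best_prompt` keeps the signature shown above and does not change what the library returns.

```python
best_prompt, expert_profile = gp.get_best_prompt(
    use_examples=False,
    run_without_train_examples=True,
    generate_synthetic_examples=False,
)

if best_prompt is None:
    # The run finished without selecting a final prompt; print what is available
    # so the intermediate candidates in the console output can still be compared.
    print("No best_prompt returned; inspect expert_profile and the run logs:")
print(expert_profile)
```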
# `gen_different_styles` can't split the mutated prompt normally

```python
## promptwizard/glue/promptopt/techniques/critique_n_refine/core_logic.py
def gen_different_styles(self, base_instruction: str, task_description: str,
                         mutation_rounds: int = 2, thinking_styles_count: int = 10) -> List:
    ......
    generated_mutated_prompt = self.chat_completion(mutated_sample_prompt)...
```
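A defensive way to handle this is to split on the expected marker but fall back to treating the whole completion as a single candidate when the marker never appears. This is only a sketch: `<END>` is a hypothetical delimiter, since the exact tag the mutation prompt asks the model to emit is not shown here.

```python
import re
from typing import List

def split_mutated_prompts(completion: str, delimiter: str = "<END>") -> List[str]:
    """Split an LLM completion into candidate prompts, tolerating a missing delimiter."""
    parts = [part.strip() for part in re.split(re.escape(delimiter), completion) if part.strip()]
    # If the model ignored the requested format, keep the raw completion as one
    # candidate instead of failing, so the mutation round can still proceed.
    return parts if parts else [completion.strip()]
```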
I updated the .env file with my own AOAI key and endpoint, and I changed the config to `mode: online`. Once I ran the demo, I got the following error code....
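Before digging into the error itself, it can help to confirm the `.env` values are actually being loaded into the process. The sketch below uses `python-dotenv`; the variable names are assumptions and should be matched to whatever keys the demo's `.env` template and the `mode: online` config actually reference.

```python
import os
from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # reads the .env file from the current working directory

# Assumed variable names -- align these with the repo's .env template.
required = ["AZURE_OPENAI_ENDPOINT", "OPENAI_API_KEY", "OPENAI_API_VERSION"]
missing = [name for name in required if not os.environ.get(name)]
if missing:
    raise SystemExit(f"Missing environment variables: {missing}")
print("All required AOAI settings are present.")
```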