prompt2model
prompt2model - Generate Deployable Models from Natural Language Instructions
https://github.com/neulab/prompt2model/pull/335#discussion_r1319296255 https://github.com/neulab/prompt2model/pull/335#discussion_r1319799726 We need to add a more dedicated cache system.
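As a rough illustration of the direction, entries could be keyed by a hash of the request and stored on disk; a minimal sketch, where `DiskCache` and its on-disk layout are hypothetical and not part of prompt2model:

```python
import hashlib
import json
from pathlib import Path


class DiskCache:
    """Minimal disk cache keyed by a hash of the request (hypothetical sketch)."""

    def __init__(self, cache_dir: str = "~/.cache/prompt2model"):
        self.cache_dir = Path(cache_dir).expanduser()
        self.cache_dir.mkdir(parents=True, exist_ok=True)

    def _path(self, key: dict) -> Path:
        # Hash the canonical JSON form of the request so equal requests
        # share one cache entry regardless of dict ordering.
        digest = hashlib.sha256(json.dumps(key, sort_keys=True).encode()).hexdigest()
        return self.cache_dir / f"{digest}.json"

    def get(self, key: dict):
        path = self._path(key)
        return json.loads(path.read_text()) if path.exists() else None

    def set(self, key: dict, value) -> None:
        self._path(key).write_text(json.dumps(value))
```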
Right now we have an encoded dataset index file, `huggingface_data/huggingface_datasets/huggingface_datasets_datafinder_index`, checked into the repository. Instead of keeping a binary in our repo, it would be better to download this...
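For instance, the index could be fetched on first use instead of being shipped in the repo; a sketch using `huggingface_hub` (the `repo_id` below is a hypothetical placeholder, assuming the index were uploaded to the Hugging Face Hub):

```python
from pathlib import Path

from huggingface_hub import hf_hub_download


def get_datafinder_index() -> Path:
    """Download the datafinder index on first use and reuse the local copy after."""
    # hf_hub_download caches under ~/.cache/huggingface by default, so
    # repeated calls hit the local copy. repo_id/filename are placeholders.
    path = hf_hub_download(
        repo_id="neulab/prompt2model-datafinder",
        filename="huggingface_datasets_datafinder_index",
        repo_type="dataset",
    )
    return Path(path)
```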
Getting this while importing OpenAIInstructionParser, TaskType:

```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[1], line 1
----> 1 from prompt2model.prompt_parser import OpenAIInstructionParser, TaskType
      3 prompt_spec = OpenAIInstructionParser(task_type=TaskType.TEXT_GENERATION)
      4 prompt_spec.parse_from_prompt(prompt)
...
```
In the prompt2model paper, we examined performance on several tasks, but results were noticeably weaker on multilingual tasks. We're looking to improve performance on these tasks, so this is...
In some huggingface datasets, the data we want is in a nested structure. For example, in wikisql:

```json
{
  ...,
  "sql": {
    "human_readable": "SELECT Notes FROM table WHERE Current slogan...
```
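One way to deal with such nesting is to flatten it before column selection; a minimal sketch with the `datasets` library's `flatten()`, which promotes a nested field like `sql.human_readable` to a top-level column (using `flatten()` here is a suggestion, not prompt2model's current behavior):

```python
from datasets import load_dataset

# Load a dataset with nested columns, e.g. wikisql's "sql" struct.
dataset = load_dataset("wikisql", split="train")

# flatten() promotes nested fields to top-level columns such as
# "sql.human_readable", so downstream column selection can treat
# them like any flat feature.
flat = dataset.flatten()
print(flat.column_names)               # includes "sql.human_readable"
print(flat[0]["sql.human_readable"])
```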
Our current trainer does not support [MPS](https://huggingface.co/docs/accelerate/usage_guides/mps) training.
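Supporting it would roughly amount to selecting the `mps` device when available; a sketch of the standard PyTorch check, independent of the trainer's actual internals:

```python
import torch
import torch.nn as nn


def pick_device() -> torch.device:
    """Prefer Apple's MPS backend when present, falling back to CUDA or CPU."""
    if torch.backends.mps.is_available():
        return torch.device("mps")
    if torch.cuda.is_available():
        return torch.device("cuda")
    return torch.device("cpu")


# Example: move a model to the selected device before training.
model = nn.Linear(8, 2).to(pick_device())
```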
Currently in the CLI, the dataset retriever retrieves datasets, but it's not clear how big they are. I'd like to avoid downloading a huge dataset with millions of examples,...
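The Hub exposes split sizes as metadata, so the retriever could check example counts before downloading anything; a sketch using `datasets.load_dataset_builder` (the one-million threshold is an arbitrary example):

```python
from datasets import load_dataset_builder


def dataset_num_examples(name: str) -> int:
    """Read example counts from dataset metadata without downloading the data."""
    info = load_dataset_builder(name).info
    return sum(split.num_examples for split in info.splits.values())


# Example: skip anything over an (arbitrary) one-million-example threshold.
if dataset_num_examples("wikisql") > 1_000_000:
    print("Dataset too large, skipping download.")
```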
The prompt2model CLI demo is largely automated: you can put in a prompt, and it walks you through the steps to get a model. However, there are still some choices...
When I run `python cli_demo.py`, it reports errors:

```
Generating examples: 100%|██████████| 100/100 [00:00
```
Our current Prompt2Model pipeline uses a fixed set of hyperparameters for all tasks ([shown here](https://github.com/neulab/prompt2model/blob/0c1f10b52ca093b19a1d4296143b3a03e39f825c/prompt2model/model_trainer/generate.py#L273-L284)). To robustly handle different tasks, we want to implement automated hyperparameter selection by computing metrics...
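One common way to do this is a search loop over a validation metric; a sketch using Optuna (a stand-in choice, not a method the issue commits to, and `train_and_evaluate` is a hypothetical stub for the trainer):

```python
import optuna


def train_and_evaluate(hparams: dict) -> float:
    """Hypothetical hook: train with `hparams` and return validation loss.

    Stubbed here with a placeholder objective; prompt2model's trainer
    would implement the real training-and-evaluation step.
    """
    return (hparams["learning_rate"] - 1e-4) ** 2


def objective(trial: optuna.Trial) -> float:
    # Hyperparameter names mirror standard transformers TrainingArguments;
    # the search ranges are illustrative assumptions.
    hparams = {
        "learning_rate": trial.suggest_float("learning_rate", 1e-5, 1e-3, log=True),
        "num_train_epochs": trial.suggest_int("num_train_epochs", 1, 10),
        "per_device_train_batch_size": trial.suggest_categorical(
            "per_device_train_batch_size", [4, 8, 16]
        ),
    }
    return train_and_evaluate(hparams)


study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=20)
print(study.best_params)
```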