
[Feature Request][API]: I would like to easily change the checkpoint from the API

Open siriux opened this issue 2 years ago • 4 comments

Is there an existing issue for this?

  • [X] I have searched the existing issues and checked the recent builds/commits

What would your feature do?

I would like to be able to get a list of the available checkpoints from the API, and then change the current checkpoint, also from the API, in a simple and clear way, more in line with the new /sdapi/v1/txt2img and /sdapi/v1/img2img APIs.

Currently this is possible by copying the same request the interface makes (using /api/predict/), but the parameters to provide are neither simple nor clear.

This is an example:

{"data":[{"value":"sd-v1-5.ckpt [81761151]","__type__":"update"},"{\"samples_save\": false, \"samples_format\": \"png\", \"samples_filename_pattern\": \"\", \"save_images_add_number\": true, \"grid_save\": false, \"grid_format\": \"png\", \"grid_extended_filename\": false, \"grid_only_if_multiple\": false, \"grid_prevent_empty_spots\": false, \"n_rows\": -1, \"enable_pnginfo\": true, \"save_txt\": true, \"save_images_before_face_restoration\": true, \"jpeg_quality\": 80, \"export_for_4chan\": false, \"use_original_name_batch\": false, \"save_selected_only\": true, \"do_not_add_watermark\": true, \"outdir_samples\": \"outputs/\", \"outdir_txt2img_samples\": \"outputs/txt2img-images\", \"outdir_img2img_samples\": \"outputs/img2img-images\", \"outdir_extras_samples\": \"outputs/extras-images\", \"outdir_grids\": \"\", \"outdir_txt2img_grids\": \"outputs/txt2img-grids\", \"outdir_img2img_grids\": \"outputs/img2img-grids\", \"outdir_save\": \"outputs/\", \"save_to_dirs\": false, \"grid_save_to_dirs\": false, \"use_save_to_dirs_for_ui\": false, \"directories_filename_pattern\": \"\", \"directories_max_prompt_words\": 8, \"ESRGAN_tile\": 192, \"ESRGAN_tile_overlap\": 8, \"realesrgan_enabled_models\": [\"R-ESRGAN x4+\", \"R-ESRGAN x4+ Anime6B\"], \"SWIN_tile\": 192, \"SWIN_tile_overlap\": 8, \"ldsr_steps\": 100, \"upscaler_for_img2img\": null, \"use_scale_latent_for_hires_fix\": false, \"face_restoration_model\": \"CodeFormer\", \"code_former_weight\": 0.5, \"face_restoration_unload\": false, \"memmon_poll_rate\": 8, \"samples_log_stdout\": false, \"multiple_tqdm\": true, \"unload_models_when_training\": false, \"dataset_filename_word_regex\": \"\", \"dataset_filename_join_string\": \" \", \"training_image_repeats_per_epoch\": 1, \"training_write_csv_every\": 500.0, \"sd_model_checkpoint\": \"sd-v1-5.ckpt [81761151]\", \"sd_checkpoint_cache\": 0, \"sd_hypernetwork\": \"None\", \"sd_hypernetwork_strength\": 1, \"img2img_color_correction\": false, \"save_images_before_color_correction\": false, \"img2img_fix_steps\": false, \"enable_quantization\": false, \"enable_emphasis\": true, \"use_old_emphasis_implementation\": false, \"enable_batch_seeds\": true, \"comma_padding_backtrack\": 20, \"filter_nsfw\": false, \"CLIP_stop_at_last_layers\": 1, \"random_artist_categories\": [], \"interrogate_keep_models_in_memory\": false, \"interrogate_use_builtin_artists\": true, \"interrogate_return_ranks\": false, \"interrogate_clip_num_beams\": 1, \"interrogate_clip_min_length\": 24, \"interrogate_clip_max_length\": 48, \"interrogate_clip_dict_limit\": 1500.0, \"interrogate_deepbooru_score_threshold\": 0.5, \"deepbooru_sort_alpha\": true, \"deepbooru_use_spaces\": false, \"deepbooru_escape\": true, \"show_progressbar\": true, \"show_progress_every_n_steps\": 10, \"show_progress_grid\": true, \"return_grid\": true, \"do_not_show_images\": false, \"add_model_hash_to_info\": true, \"add_model_name_to_info\": false, \"disable_weights_auto_swap\": false, \"font\": \"\", \"js_modal_lightbox\": true, \"js_modal_lightbox_initially_zoomed\": true, \"show_progress_in_title\": true, \"quicksettings\": \"sd_model_checkpoint\", \"localization\": \"None\", \"hide_samplers\": [], \"eta_ddim\": 0, \"eta_ancestral\": 1, \"ddim_discretize\": \"uniform\", \"s_churn\": 0, \"s_tmin\": 0, \"s_noise\": 1, \"eta_noise_seed_delta\": 0}"],"is_generating":false,"duration":0.003006458282470703,"average_duration":0.003006458282470703}

Proposed workflow

  1. Use the GET method /sdapi/v1/availablecheckpoints to get the list of available checkpoints
  2. Use the POST method /sdapi/v1/checkpoint to set the current checkpoint
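
A rough sketch of what calling these proposed endpoints could look like (note: neither endpoint exists in the current API; the request and response shapes here are only illustrative):

import requests

base_url = "http://127.0.0.1:7860"

# Proposed: list the checkpoints the server knows about.
checkpoints = requests.get(f"{base_url}/sdapi/v1/availablecheckpoints").json()

# Proposed: make one of them the active checkpoint.
requests.post(f"{base_url}/sdapi/v1/checkpoint", json={"checkpoint": checkpoints[0]})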

Additional information

No response

siriux avatar Oct 26 '22 07:10 siriux

I'd like this, as well. Currently, the way to get available checkpoints and swap them is convoluted at best. The new API is fresh out of the oven, so there's a ways to go in developing it. Hoping checkpoint swapping could be prioritized. It's important enough that it's the very first element on the web UI, even before the prompt 🙂

Kilvoctu avatar Oct 28 '22 03:10 Kilvoctu

+1

aliencaocao avatar Oct 28 '22 11:10 aliencaocao

+1

arttukataja avatar Nov 26 '22 15:11 arttukataja

BTW, you can already do it via the options API; what I am still wishing for is the override settings part.

aliencaocao avatar Nov 26 '22 15:11 aliencaocao

@aliencaocao This options method? http://localhost:7860/docs#/default/set_config_sdapi_v1_options_post Do you have an example? The docs don't show the schema

Edit: Never mind, the payload should be something like this:

export type PostOptions = {
    samples_save:                          boolean;
    samples_format:                        string;
    samples_filename_pattern:              string;
    save_images_add_number:                boolean;
    grid_save:                             boolean;
    grid_format:                           string;
    grid_extended_filename:                boolean;
    grid_only_if_multiple:                 boolean;
    grid_prevent_empty_spots:              boolean;
    n_rows:                                number;
    enable_pnginfo:                        boolean;
    save_txt:                              boolean;
    save_images_before_face_restoration:   boolean;
    save_images_before_highres_fix:        boolean;
    save_images_before_color_correction:   boolean;
    jpeg_quality:                          number;
    export_for_4chan:                      boolean;
    use_original_name_batch:               boolean;
    use_upscaler_name_as_suffix:           boolean;
    save_selected_only:                    boolean;
    do_not_add_watermark:                  boolean;
    temp_dir:                              string;
    clean_temp_dir_at_start:               boolean;
    outdir_samples:                        string;
    outdir_txt2img_samples:                string;
    outdir_img2img_samples:                string;
    outdir_extras_samples:                 string;
    outdir_grids:                          string;
    outdir_txt2img_grids:                  string;
    outdir_img2img_grids:                  string;
    outdir_save:                           string;
    save_to_dirs:                          boolean;
    grid_save_to_dirs:                     boolean;
    use_save_to_dirs_for_ui:               boolean;
    directories_filename_pattern:          string;
    directories_max_prompt_words:          number;
    ESRGAN_tile:                           number;
    ESRGAN_tile_overlap:                   number;
    realesrgan_enabled_models:             string[];
    upscaler_for_img2img:                  null;
    use_scale_latent_for_hires_fix:        boolean;
    ldsr_steps:                            number;
    ldsr_cached:                           boolean;
    SWIN_tile:                             number;
    SWIN_tile_overlap:                     number;
    face_restoration_model:                string;
    code_former_weight:                    number;
    face_restoration_unload:               boolean;
    memmon_poll_rate:                      number;
    samples_log_stdout:                    boolean;
    multiple_tqdm:                         boolean;
    unload_models_when_training:           boolean;
    pin_memory:                            boolean;
    save_optimizer_state:                  boolean;
    dataset_filename_word_regex:           string;
    dataset_filename_join_string:          string;
    training_image_repeats_per_epoch:      number;
    training_write_csv_every:              number;
    training_xattention_optimizations:     boolean;
    sd_model_checkpoint:                   string;
    sd_checkpoint_cache:                   number;
    sd_vae:                                string;
    sd_vae_as_default:                     boolean;
    sd_hypernetwork:                       string;
    sd_hypernetwork_strength:              number;
    inpainting_mask_weight:                number;
    initial_noise_multiplier:              number;
    img2img_color_correction:              boolean;
    img2img_fix_steps:                     boolean;
    img2img_background_color:              string;
    enable_quantization:                   boolean;
    enable_emphasis:                       boolean;
    use_old_emphasis_implementation:       boolean;
    enable_batch_seeds:                    boolean;
    comma_padding_backtrack:               number;
    CLIP_stop_at_last_layers:              number;
    random_artist_categories:              any[];
    interrogate_keep_models_in_memory:     boolean;
    interrogate_use_builtin_artists:       boolean;
    interrogate_return_ranks:              boolean;
    interrogate_clip_num_beams:            number;
    interrogate_clip_min_length:           number;
    interrogate_clip_max_length:           number;
    interrogate_clip_dict_limit:           number;
    interrogate_deepbooru_score_threshold: number;
    deepbooru_sort_alpha:                  boolean;
    deepbooru_use_spaces:                  boolean;
    deepbooru_escape:                      boolean;
    deepbooru_filter_tags:                 string;
    show_progressbar:                      boolean;
    show_progress_every_n_steps:           number;
    show_progress_type:                    string;
    show_progress_grid:                    boolean;
    return_grid:                           boolean;
    do_not_show_images:                    boolean;
    add_model_hash_to_info:                boolean;
    add_model_name_to_info:                boolean;
    disable_weights_auto_swap:             boolean;
    send_seed:                             boolean;
    send_size:                             boolean;
    font:                                  string;
    js_modal_lightbox:                     boolean;
    js_modal_lightbox_initially_zoomed:    boolean;
    show_progress_in_title:                boolean;
    quicksettings:                         string;
    localization:                          string;
    hide_samplers:                         any[];
    eta_ddim:                              number;
    eta_ancestral:                         number;
    ddim_discretize:                       string;
    s_churn:                               number;
    s_tmin:                                number;
    s_noise:                               number;
    eta_noise_seed_delta:                  number;
    disabled_extensions:                   string[];
    images_history_preload:                boolean;
    images_history_page_columns:           number;
    images_history_page_rows:              number;
    images_history_pages_perload:          number;
    wildcards_same_seed:                   boolean;
}

(reverse-engineered from the GET response)

The sd_model_checkpoint needs to be set to the title field from /sdapi/v1/sd-models.
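
Putting that together, a minimal Python sketch of listing the checkpoints and switching to one of them via the options endpoint (assuming the default local address):

import requests

base_url = "http://127.0.0.1:7860"

# Each entry from /sdapi/v1/sd-models includes "title", "model_name", "hash", etc.
models = requests.get(f"{base_url}/sdapi/v1/sd-models").json()
print([m["title"] for m in models])

# The options endpoint takes a partial object; only the keys you send are changed.
requests.post(f"{base_url}/sdapi/v1/options",
              json={"sd_model_checkpoint": models[0]["title"]})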

mnpenner avatar Dec 30 '22 20:12 mnpenner

No, you can use the options endpoint with key = option name and value = option value. The option name can be found by inspecting the element on the settings page. Or, if you use my PR, it's override settings, not options; the keys and values are the same.
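
With the override-settings route, the same key/value pair goes into the override_settings field of the generation payload instead of the global options. A minimal sketch (the checkpoint title is only an example and must match one installed on the server):

import requests

payload = {
    "prompt": "a photo of a cat",
    "steps": 20,
    # Applied only for this request; the previous checkpoint is restored afterwards
    # unless "override_settings_restore_afterwards" is set to false.
    "override_settings": {"sd_model_checkpoint": "sd-v1-5.ckpt [81761151]"},
}
requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)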

aliencaocao avatar Dec 31 '22 01:12 aliencaocao

Could someone explain better how to call the option from Python to change the checkpoint via the API?

I got the key=sd_model_checkpoint and I believe the value would be the name of the checkpoint, but how do you actually call the API to execute the command?

I tried calling this way but the sd_model_checkpoint is not recognized...

api = webuiapi.WebUIApi(host='127.0.0.1', port=7860, sampler='Euler a', steps=20)

wprompt = sys.argv[1]
nprompt = sys.argv[2]
selected_model = sys.argv[3]

result1 = api.txt2img(prompt=wprompt, negative_prompt=nprompt, sd_model_checkpoint=selected_model, seed=-1, cfg_scale=8,
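
For reference, sd_model_checkpoint is not a parameter of the wrapper's txt2img call, which is presumably why it is not recognized. A minimal sketch of a workaround is to switch the checkpoint through the /sdapi/v1/options endpoint first (plain requests, reusing selected_model and the other variables from the snippet above; the value must match a title from /sdapi/v1/sd-models), then generate without that argument:

import requests

# Switch the active checkpoint before generating.
requests.post("http://127.0.0.1:7860/sdapi/v1/options",
              json={"sd_model_checkpoint": selected_model})

# Then generate without the sd_model_checkpoint argument.
result1 = api.txt2img(prompt=wprompt, negative_prompt=nprompt, seed=-1, cfg_scale=8)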

jdc4429 avatar Jul 20 '23 08:07 jdc4429

Here you go guys -

import requests
import io
import base64
from PIL import Image, PngImagePlugin

url = "http://127.0.0.1:7860"

payload = {
    "prompt": "A young girl with long hair and pink background",
    "steps": 30,
    # Per-request checkpoint override; the value must be a "title" from /sdapi/v1/sd-models.
    "override_settings": {
        "sd_model_checkpoint": "realisticVisionV51_v51VAE.safetensors [15012c538f]"
    },
}

response = requests.post(url=f'{url}/sdapi/v1/txt2img', json=payload)
r = response.json()

for i in r['images']:
    # Images come back as plain base64; strip a data URI prefix if one is present.
    image = Image.open(io.BytesIO(base64.b64decode(i.split(",", 1)[-1])))

    # Ask the server for the generation parameters so they can be embedded in the PNG.
    png_payload = {
        "image": "data:image/png;base64," + i
    }
    response2 = requests.post(url=f'{url}/sdapi/v1/png-info', json=png_payload)

    pnginfo = PngImagePlugin.PngInfo()
    pnginfo.add_text("parameters", response2.json().get("info"))
    image.save('output.png', pnginfo=pnginfo)

itsidleboy avatar Dec 08 '23 07:12 itsidleboy