Tushar Patil
Hi @honey-zhao, These segmentation masks are obtained through tedious manual annotation. You can use one of the many open-source annotation tools available online, such as LabelImg or the VGG Image Annotator (VIA). You can find...
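As a minimal sketch of what such tools produce downstream: LabelImg exports bounding boxes, and a box can be rasterized into a binary mask with plain NumPy. `boxes_to_mask` is a hypothetical helper for illustration, not part of any of these tools:

```python
import numpy as np

def boxes_to_mask(boxes, height, width):
    """Rasterize (xmin, ymin, xmax, ymax) box annotations -- e.g. as
    exported from a LabelImg XML file -- into one binary mask."""
    mask = np.zeros((height, width), dtype=np.uint8)
    for xmin, ymin, xmax, ymax in boxes:
        # rows are y, columns are x; foreground pixels are set to 255
        mask[ymin:ymax, xmin:xmax] = 255
    return mask

# One box annotated on a 100x100 image
mask = boxes_to_mask([(10, 20, 60, 70)], 100, 100)
```

Polygon annotations (the VIA default) work the same way, just with a polygon-fill routine instead of the slice assignment.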
Hi @MustafaAlahmid, Check this [notebook](https://github.com/tshr-d-dragon/CODE_TEMPLATES/blob/main/keras2tf2tflite.ipynb) for converting a Keras model to a TFLite model. Close this issue if this solves it!!! :)
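For reference, the core of that conversion uses TensorFlow's standard `tf.lite.TFLiteConverter` API; the tiny model below is only a stand-in for your trained one:

```python
import tensorflow as tf

# Stand-in model; replace with tf.keras.models.load_model(<your model>)
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(1),
])

# Convert the in-memory Keras model to a TFLite flatbuffer (bytes)
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Write the flatbuffer to disk for deployment
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

The notebook linked above covers the full pipeline; this is just the conversion step.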
@vuhungtvt142 you can try this code snippet:
```
from glob import glob

import cv2
import numpy as np

images_paths = glob("/*")
images_array = []
for image_path in images_paths:
    image = cv2.imread(image_path)  # read each image from disk (BGR)
    images_array.append(image)
images_array = np.array(images_array)
```
Hi @ranjan2601, Please share your training notebook/scripts. Your input image and its corresponding ground-truth mask do not match. There might also be some other issue in the training. Then,...
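A quick way to catch image/mask mismatches like this before training is a shape sanity check over each pair; `check_pair` is a hypothetical helper sketched for illustration:

```python
import numpy as np

def check_pair(image, mask):
    """Verify that an image and its ground-truth mask share the same
    spatial dimensions (height, width) before feeding them to training."""
    if image.shape[:2] != mask.shape[:2]:
        raise ValueError(
            f"image {image.shape[:2]} and mask {mask.shape[:2]} do not match"
        )
    return True

# Matching pair: a 256x256 RGB image and a 256x256 single-channel mask
check_pair(np.zeros((256, 256, 3)), np.zeros((256, 256)))
```

Running this over the whole dataset (and eyeballing a few overlays) usually surfaces misaligned or mis-paired files quickly.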
Hi @lanyao-wang, I think this [blog](https://mrsalehi.medium.com/a-review-of-different-interpretation-methods-in-deep-learning-part-1-saliency-map-cam-grad-cam-3a34476bc24d) will help you!!! :)
Hi @abderraouf2che, The scg_gcn.py file at https://github.com/samleoqh/MSCG-Net/blob/master/lib/net/scg_gcn.py contains the code you need.
Just copy this function into modules/ui.py:
```
def create_sampler_and_steps_selection(choices, tabname):
    return scripts.scripts_txt2img.script('Sampler').steps, scripts.scripts_txt2img.script('Sampler').sampler_name
```
@riccorohl Try installing version 1.7.0 of the Stable Diffusion web UI; it can be found [here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases/tag/v1.7.0)
@kernsaunders2257 Did you try the above solution???

> Just copy this function into modules/ui.py:
>
> ```
> def create_sampler_and_steps_selection(choices, tabname):
>     return scripts.scripts_txt2img.script('Sampler').steps, scripts.scripts_txt2img.script('Sampler').sampler_name
> ```