How to disable WANDB, for DUMMIES
Can anyone tell me how I can disable WANDB in KOHYA SS 23.0.15?
I'm not a programmer/coder/... I only know how to edit .py or .bat files in Notepad++.
Please, what is the name of the file I need to edit, and which line do I need to edit (add or remove)?
Thanks!!!!!
You uncheck the use of WANDB in the advanced configuration.
This will then default to Tensorboard.
You can then disable logging altogether by leaving the Logging directory path empty in the Folders tab.
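For what it's worth, the GUI ultimately builds an sd-scripts command, and the logging backend is picked by the `--log_with` flag there (tensorboard / wandb / all). A rough sketch of a command with wandb left out entirely (script name and paths below are just placeholders):

```shell
# Sketch of the sd-scripts flags behind these GUI options (paths are placeholders).
# Leaving out --log_with, --wandb_api_key and --logging_dir keeps wandb
# (and logging in general) out of the run.
accelerate launch train_network.py \
  --pretrained_model_name_or_path "model.safetensors" \
  --train_data_dir "dataset" \
  --output_dir "output"
```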
5KilosOfCheese many thanks for your answer, but I'm confused: I'm training an SDXL LoRA, WANDB in the advanced configuration is unchecked, and the Logging directory path is empty, but WANDB continues to ask me to log in. Do you know what I am doing wrong? Thanks
Hummm... maybe that is a bug. What is it asking for? Is there a wandb parameter in the list that is displayed?
Could you post the "Print training command" output here? Because it shouldn't be calling wandb if it is disabled.
I tried a few things to make WANDB come up even when disabled, and I couldn't.
If you load a saved config, there are cases where the UI shows something as enabled/disabled even when it isn't. So you have to double-check by selecting and unselecting that thing. For example, the SDPA/Xformers dropdown menu has this glitch.
Make sure that all those fields are empty and the option is unchecked. See if adding a log directory fixes this. If not, try reinstalling Kohya, as something might have gone wrong if you have updated across many versions. Generally, just deleting the venv and rerunning the install script fixes it.
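On Windows, the "delete venv and rerun the install script" part is roughly this (assuming the standard kohya_ss folder with its setup.bat; adjust if your layout differs):

```shell
:: Run from the kohya_ss folder (Windows batch; setup.bat re-creates the venv)
rmdir /s /q venv
setup.bat
```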
5KilosOfCheese and bmaltais, thanks for the answers!!!!
I tried it 2 or 3 times and the message asking me to connect to wandb appeared every time, so I did it the simple way: I deleted everything, every single file, and started with a brand-new Kohya install (generally I just update from version to version), and now it works exactly as you said. Sorry for the inconvenience!
One more question while I have the attention of the masters: about the presets in KOHYA SS (prodigy, LOHA, LOKR, kudou reira...), is there a place where I can see the results and the best use case/style for each of these presets? If so, can you send me the link?
Thanks! Thanks! and Thanks again!!!
Regarding the presets, there is no such thing… it would be nice if a civitai for training presets and sample datasets existed… but that is not the case. Consider them as starting points from which you can customise things to train something adequate.
All of the different LoRA forms do a different thing and work in a different way. Obviously it would be possible to create a standard training dataset and parameters (including seed) and then train in that manner to produce examples. But the testing range for each LoRA would have to be quite broad, something like 1000 images in specific sets and with specific types of prompting. Even then you would only get a broad, general approximation of what kind of results you might get from them; there is no guarantee that your dataset would yield those kinds of results. This is even more so once you add captions to your images. (And all of this still assumes that everything is implemented and working correctly as described in the papers.)
I personally just use the standard LoRA and it has thus far served me perfectly. As in, I have yet to try something that I can't make happen because of technical limitations. All my limitations are frankly my own fault, generally down to me being (in my opinion) quite bad at prompting and generating images compared to training things.
But my experience is just that... experience. I have made hundreds of LoRAs and tens of versions of each on my humble 4060 Ti - none that I have really published officially. And just like in my real life when I, as an engineer, go to adjust some cutting system, I quite literally just test values and write down observations. Then I draw conclusions on what works, what doesn't, and what the desired and undesired properties are.
For example, when it comes to how many ranks I use: I figured that out just by starting from rank 1 and seeing at what point the LoRA "starts to work". Then I worked my way up until I concluded there was no meaningful benefit, or the functionality became inferior.
The presets in the GUI are just one person's opinion about what the good settings are. Other than some specific optimiser-related things (like extra arguments needed to make it work), it largely boils down to opinion. I'm quite sure that if I published my favourite DAdaptAdam preset there, it would be just as good as anyone else's - even though a lot of the things in it go against the grain of public opinion. This is just because I have made so many trial-and-error runs and read the papers and technical docs. I consider my method "purer" in that sense, but not the best or even good.
Honestly, just choose one and start experimenting... and write down the things you did and the observations relating to them.
@5KilosOfCheese I totally agree. Each dataset requires its own custom tuning to extract the most out of it. The presets presented in the GUI are starting points based on what worked in the context of one or a few similar datasets.
They sort of present settings that worked well but will still require adjustments for other datasets. Think of them as a generic cake recipe for a set of selected ingredients. It is up to you to add some cocoa, change the ratio of flour to wet ingredients, etc., to make the type of cake you expect.