Dennis Bappert
This seems to work, though it's not very elegant:
```python
# square pad
shapes = fn.shapes(y, dtype=types.INT32)
h = fn.slice(shapes, 0, 1, axes=[0])
w = fn.slice(shapes, 1, 1, axes=[0])
...
```
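For reference, the same square-pad idea in plain PyTorch (a minimal sketch of the concept, not the DALI pipeline above; the function name and the zero fill value are my own choices):
```python
import torch.nn.functional as F

def square_pad(img):
    # img: (C, H, W) tensor; pad the shorter side so the output is square
    _, h, w = img.shape
    side = max(h, w)
    # F.pad takes (left, right, top, bottom) for the last two dimensions
    return F.pad(img, (0, side - w, 0, side - h), value=0)
```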
Hey @bluesky314, using the channels_last memory format nearly doubled the training performance for me.

**_train.py_**
```python
if cfg.trainer.channels_last is True:
    model = model.to(memory_format=torch.channels_last)
```

**_collate_function.py_**
```python
class CollateFunction:
    def __init__(self,
...
```
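For context, the collate side can look roughly like this (a minimal sketch under my own assumptions about the batch structure, not the actual collate_function.py, which is truncated above):
```python
import torch

class CollateFunction:
    def __init__(self, channels_last: bool = True):
        self.channels_last = channels_last

    def __call__(self, batch):
        # assumes each sample is an (image, matte) pair of CHW float tensors
        images, mattes = zip(*batch)
        images, mattes = torch.stack(images), torch.stack(mattes)
        if self.channels_last:
            # logical shape stays NCHW, only the memory layout changes
            images = images.to(memory_format=torch.channels_last)
        return images, mattes
```
With both the model and the input batches in channels_last, cuDNN can pick its NHWC kernels, which is where the speedup comes from.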
Yes, but very selectively, e.g. perspective warping only for portraits and not for the Supervisely dataset. I currently dropped color jittering as I synthesized a couple more images and...
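As an illustration of that kind of per-dataset selectivity, here is a hedged sketch with torchvision (the pipeline names are hypothetical, and in a real segmentation setup the same warp would have to be applied to the image and its alpha matte jointly):
```python
from torchvision import transforms

# portraits get perspective warping, the person-dataset samples do not,
# and color jittering is left out entirely
portrait_tf = transforms.Compose([
    transforms.RandomPerspective(distortion_scale=0.3, p=0.5),
    transforms.RandomHorizontalFlip(),
])
person_dataset_tf = transforms.Compose([
    transforms.RandomHorizontalFlip(),
])
```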
Perspective warping produces samples like this (taken from training; the 2nd and 3rd rows are the predictions): Warping shows that the model is not very stable under spatial transformations....
I published my training code [here](https://github.com/dennisbappert/u-2-net-portrait) along with one of the preliminary pretrained models. @xuebinqin I kept the model untouched, so the weights I'm providing are compatible with existing applications.
Thanks for adding my repo to the README; it looks fine to me.
I will be able to test with the K series in 1-2 weeks. Happy to get in touch.
Mainly StyleGAN-synthesized portraits, the Supervisely Person dataset, and the AISegment dataset. However, it is important to highlight that the model is not fully trained, just for a couple of...
Hey @FraPochetti, thanks for your interest. The augmentation pipeline is part of the configuration files (dataset.yaml). The published model was trained on roughly 30k quite noisy samples. The model is...
I did some preliminary testing and explored the effect of different loss functions. It is important to highlight that I'm using alpha mattes and not regular segmentation masks. L1 performed...
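For clarity, the two obvious candidates look like this on alpha mattes (a minimal sketch; the actual training code presumably sums the loss over U-2-Net's side outputs, which is not shown here):
```python
import torch
import torch.nn.functional as F

# pred and target are alpha mattes in [0, 1], shape (N, 1, H, W)
pred = torch.rand(2, 1, 320, 320)
target = torch.rand(2, 1, 320, 320)

l1_loss = F.l1_loss(pred, target)                # regression on the matte values
bce_loss = F.binary_cross_entropy(pred, target)  # each pixel as a soft label
```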