Oliver Guhr

22 comments by Oliver Guhr

I am glad you like it. You totally can do this - just modify this line to get the value of the logits: https://github.com/oliverguhr/german-sentiment-lib/blob/master/germansentiment/sentimentmodel.py#L32 If you decide to change the...
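As a hedged illustration of that kind of change, here is a minimal sketch of getting the raw logits (and softmax probabilities) directly from the underlying Hugging Face model instead of only the predicted label. It assumes the `oliverguhr/german-sentiment-bert` checkpoint and is not the library's exact code:

```python
# Minimal sketch (not the library's exact code): return logits / probabilities
# instead of only the argmax label. Assumes the oliverguhr/german-sentiment-bert
# checkpoint on the Hugging Face Hub.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "oliverguhr/german-sentiment-bert"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

texts = ["Das Essen war sehr gut."]
inputs = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits      # raw scores, shape: (batch, num_labels)
probs = torch.softmax(logits, dim=-1)    # per-class confidence

labels = [model.config.id2label[i] for i in range(logits.shape[-1])]
print(list(zip(labels, probs[0].tolist())))
```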

Thank you very much! I downloaded the full-resolution images and started the training with your updates. I will post the results as soon as they are ready (+33h).

The results are looking a bit strange. Here is the model output after 195 epochs. Left: the model trained on version 0.4.15 and the small images; right: the model trained...

First of all, thank you for all the work you put into this! I played a bit with the parameters and started a new run with a batch size of...

I trained the model a while longer (200k iterations), with the best results at about 160k iterations ![161-ema](https://user-images.githubusercontent.com/3495355/78266707-6c924400-7506-11ea-8739-10e32aa13c03.jpg) However, after that it only got worse, and there are still some...

For me, there was no noticeable difference in the results between a batch size of 3 with a network capacity of 32 and a batch size of 7 with a network capacity of...

Hi @yuanlunxi, [here you can read more about FP16](https://docs.nvidia.com/deeplearning/performance/mixed-precision-training/index.html). I did not share my model because the results are not perfect yet. I don't know what I can expect, but...
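For anyone unfamiliar with FP16 training, here is a generic PyTorch automatic mixed precision (AMP) pattern, not the repository's actual training loop; the model, data, and optimizer are placeholders:

```python
# Generic PyTorch mixed-precision (FP16) training pattern - not this repo's
# actual training loop. Model, data and optimizer are placeholders.
import torch

model = torch.nn.Linear(512, 3).cuda()          # placeholder model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler()            # scales the loss to avoid FP16 underflow

for step in range(100):
    x = torch.randn(8, 512, device="cuda")      # placeholder batch
    target = torch.randint(0, 3, (8,), device="cuda")

    optimizer.zero_grad()
    with torch.cuda.amp.autocast():             # run the forward pass in FP16 where safe
        loss = torch.nn.functional.cross_entropy(model(x), target)

    scaler.scale(loss).backward()               # backward on the scaled loss
    scaler.step(optimizer)                      # unscales gradients, then steps
    scaler.update()                             # adjusts the loss scale for the next step
```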

@Johnson-yue I started a new training run with the latest version of the code and it looks promising. I am using two attention layers and a resolution of 128x128. This...
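For context, a hedged sketch of how a run like that could be launched, assuming this refers to lucidrains' `stylegan2_pytorch` command-line tool; the flag names are taken from that project's README and may differ between versions, and the experiment name and data path are made up:

```python
# Hedged sketch: launching a run like the one described above via the
# lucidrains/stylegan2-pytorch CLI. Flag names are assumed from that project's
# README and may differ between versions; paths and values are illustrative.
import subprocess

subprocess.run(
    [
        "stylegan2_pytorch",
        "--data", "./faces",            # folder with the training images (hypothetical path)
        "--name", "faces-128",          # hypothetical experiment name
        "--image-size", "128",          # resolution mentioned in the comment
        "--attn-layers", "[1,2]",       # two attention layers
        "--batch-size", "3",            # illustrative values, not the exact run
        "--network-capacity", "32",
    ],
    check=True,
)
```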

I don't know what happened, but until iteration 682k the results got worse: ![682-ema](https://user-images.githubusercontent.com/3495355/93206965-85ac4b80-f75a-11ea-864f-36d3ee708985.jpg) One(!) iteration later the image looked like this: ![683-ema](https://user-images.githubusercontent.com/3495355/93207077-b12f3600-f75a-11ea-8512-0fc53dde3fe3.jpg) And after some more iterations, the images...

Sorry for the late response. Here is a list of trained models (and some sample results) that you can download:

- [.config.json](https://www2.htw-dresden.de/~guhr/dist/styleganfaces/.config.json)
- [model_203.pt](https://www2.htw-dresden.de/~guhr/dist/styleganfaces/model_203.pt) ![model_203.jpg](https://www2.htw-dresden.de/~guhr/dist/styleganfaces/203.jpg)
- [model_300.pt](https://www2.htw-dresden.de/~guhr/dist/styleganfaces/model_300.pt) ![model_300.jpg](https://www2.htw-dresden.de/~guhr/dist/styleganfaces/300.jpg)
- [model_400.pt](https://www2.htw-dresden.de/~guhr/dist/styleganfaces/model_400.pt) ![model_400.jpg](https://www2.htw-dresden.de/~guhr/dist/styleganfaces/400.jpg)
- [model_500.pt](https://www2.htw-dresden.de/~guhr/dist/styleganfaces/model_500.pt) ![model_500.jpg](https://www2.htw-dresden.de/~guhr/dist/styleganfaces/500.jpg)
- [model_550.pt](https://www2.htw-dresden.de/~guhr/dist/styleganfaces/model_550.pt) ...
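If it helps, here is a small sketch of how one might inspect a downloaded checkpoint before loading it into a matching version of the training code. The exact layout of the saved dict depends on the package version that produced it, so this only reports what is inside rather than assuming a fixed structure:

```python
# Hedged sketch: inspect one of the downloaded checkpoints. The contents of the
# saved dict depend on the version of the training code that produced it, so
# this only prints what is stored instead of assuming a fixed layout.
import torch

checkpoint = torch.load("model_500.pt", map_location="cpu")

if isinstance(checkpoint, dict):
    for key, value in checkpoint.items():
        if isinstance(value, dict):
            print(f"{key}: dict with {len(value)} entries")
        elif torch.is_tensor(value):
            print(f"{key}: tensor {tuple(value.shape)}")
        else:
            print(f"{key}: {type(value).__name__}")
else:
    print(type(checkpoint))
```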