Sachin Abeywardana
Just wondering if the last layer will still have a swish activation? When I print out the model, that seems to be the case. If so, how do you remove...
I've expanded on the question above on my [SO](https://stackoverflow.com/questions/62954999/flattening-efficientnet-model) question.
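For context, a minimal sketch of the general idea being asked about: swapping the trailing activation for `nn.Identity` so the last layer's raw outputs come through unchanged. This uses a toy `Sequential` stand-in rather than the real EfficientNet, whose attribute names may differ depending on the library (e.g. `efficientnet_pytorch` vs `timm`).

```python
import torch
import torch.nn as nn

# Toy stand-in for the backbone; a real EfficientNet would come from
# efficientnet_pytorch or timm, where the final swish lives under a
# library-specific attribute name.
backbone = nn.Sequential(
    nn.Linear(8, 16),
    nn.SiLU(),          # swish == SiLU
    nn.Linear(16, 4),
    nn.SiLU(),          # trailing swish we want to drop
)

# Replace the final activation with an identity so the last linear
# layer's outputs are returned as-is.
backbone[-1] = nn.Identity()

x = torch.randn(2, 8)
print(backbone(x).shape)  # torch.Size([2, 4])
```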
Hi, just my 2 cents here, but as far as I'm aware pix2pix was trained using TF 0.12.1, whereas in this repo TF 1.2.1 was used. So it's something to do...
^Correct @lofar788. I want all of the most important data closer to the final output vector, so this ought to make it easier for the network. Technically speaking it ought...
I apologise for taking so long to respond. I haven't read the paper, but in terms of the statement: > if we used the 90% quantile as the loss to train...
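For reference, a minimal sketch of what training against the 90% quantile usually looks like, i.e. the pinball (quantile) loss; the function name and signature here are illustrative, not taken from the paper being discussed.

```python
import torch

def quantile_loss(pred, target, tau=0.9):
    """Pinball loss for a single quantile level tau.

    Under-predictions are penalised by tau, over-predictions by (1 - tau),
    so minimising this pushes pred towards the tau-quantile of target.
    """
    diff = target - pred
    return torch.mean(torch.maximum(tau * diff, (tau - 1) * diff))

pred = torch.tensor([1.0, 2.0, 3.0])
target = torch.tensor([1.5, 1.0, 4.0])
print(quantile_loss(pred, target, tau=0.9))
```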
Did you mean to put more nodes (256) in the upper layer, or was that a typo? And thanks, I will try it out.