ecilay
@14211019 @sunxiaochuanpr could you guys kindly take a look please? I followed the link above, but still got the same error of not finding "boost_numpy". For convenience, I copied the...
Thanks for the prompt reply! I actually trained by tweaking a VAE/GAN model, combining the encoder and discriminator into one model as described in your paper, with two loss optimizers,...
Yeah, the loss for the `discriminator_encoder = bce_real + bce_reconstruction + bce_sampled_noise` (bce = binary cross entropy). I have one model for 1) discriminator_encoder, and one model for 2) decoder, which works as a normal decoder...
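To make the setup concrete, here is a rough PyTorch sketch of that combined loss. The names (`disc_enc`, `decoder`, the `encode` method, `z_sampled`) are just placeholders for my own modules, not code from the paper:

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def discriminator_encoder_loss(disc_enc, decoder, x_real, z_sampled):
    # disc_enc: combined discriminator/encoder module (assumed to expose
    # a discriminator forward pass and an .encode() method).
    # decoder: separate decoder module with its own optimizer.

    # 1) real images should be classified as real
    logits_real = disc_enc(x_real)
    bce_real = bce(logits_real, torch.ones_like(logits_real))

    # 2) reconstructions (decode the encoded latents) should be classified as fake
    x_recon = decoder(disc_enc.encode(x_real))
    logits_recon = disc_enc(x_recon.detach())
    bce_reconstruction = bce(logits_recon, torch.zeros_like(logits_recon))

    # 3) samples decoded from random noise should be classified as fake
    x_fake = decoder(z_sampled)
    logits_fake = disc_enc(x_fake.detach())
    bce_sampled_noise = bce(logits_fake, torch.zeros_like(logits_fake))

    return bce_real + bce_reconstruction + bce_sampled_noise
```

The `.detach()` calls are there so this loss only updates the discriminator/encoder side and does not push gradients into the decoder, which gets its own optimizer step.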
Hi, you mentioned for the encoder:

> doesn't propagate gradients back to it.

Then how do you train the encoder? Not together with the discriminator? Thanks!!
Thanks, so you mean remove `--precisionConstraints=prefer --layerPrecisions=*:fp16,*:fp32 --layerOutputTypes=*:fp16,*:fp32`? How do I inspect the cast layer? What should I look for?
> > How do I inspect the cast layer? What should I look for?
>
> Use netron.

I used Netron, but I think the information from it is very...
Thanks @zerollzeng, I found that as long as I get rid of `--layerOutputTypes` it works. Can I ask why I would need both `--fp16 --int8`? Would it be redundant...
Another thing: if I use the TRT model converted dynamically with `--minShapes=text:1x48 --optShapes=text:2x48 --maxShapes=text:4x48`, the inference doesn't work, since `shape = engine.get_binding_shape(0)` will be (-1, 48), so when I...
> When working with dynamic shapes, you need to specify runtime dimensions. See docs here: https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#runtime_dimensions

Thanks, this works.
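For anyone else hitting this, roughly what I ended up doing (a sketch against the TensorRT 8.x Python API; the engine filename and batch size here are just examples from my setup):

```python
import tensorrt as trt

# With a dynamic profile like --minShapes=text:1x48 --optShapes=text:2x48
# --maxShapes=text:4x48, the engine reports (-1, 48) for the input binding,
# so the concrete batch size has to be set on the execution context before
# allocating buffers and running inference.
logger = trt.Logger(trt.Logger.WARNING)
with open("model.trt", "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

context = engine.create_execution_context()
batch_size = 2  # any value within the profile's min/max range (1..4 here)
context.set_binding_shape(0, (batch_size, 48))  # resolve the -1 at runtime

# Now context.get_binding_shape(0) returns the concrete shape, which can be
# used to size the input/output buffers.
assert context.all_binding_shapes_specified
```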
Another thing I observe that is very weird: when I do `trtexec --onnx=model.onnx --fp16`, the output TRT model's inference results are very wrong; if I get rid of `--fp16`, the...