Lukas Geiger

Results: 129 comments by Lukas Geiger

@abattery Thanks for taking a look! I am not sure why this issue was transferred to TF-MOT though, since it is not directly related to TF-MOT. The issue lies not...

> Minor comment: @tf.function(experimental_implements=...) doesn't make it convert, it only adds a tag that the converter can read, you'd still need to add the pattern to compute engine, so you...

> How do you think we should manage the scenario where somebody uses a new version of Larq but an old version of the converter? I guess on init we...

I added a LCE version check to `__init__` in 3f1647407259263c923b3e246ca6fe38946a778e now that LCE 0.6.1 has support for this. I'd be happy to move this check to `setup.py` though in case...

> I guess we should run a quick sanity check model conversion with LCE 0.6.1 before merging, but this looks great.

I will run a sanity check later today or...

@simonmaurer Did you do any performance profiling of your proposed solution? One problem with using `tf.where(x >= 0., 1., -1.)` is that this might change the datatype (e.g. it will...
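The dtype concern can be illustrated with NumPy (a minimal sketch of the analogous behaviour; `tf.where` with Python float literals is affected in a similar way):

```python
import numpy as np

x = np.array([0.5, -0.3, 0.0], dtype=np.float16)

# Python float literals default to 64-bit, so the result is silently
# promoted and no longer matches the input dtype.
naive = np.where(x >= 0, 1.0, -1.0)

# Building the branch values from the input keeps the dtype intact.
safe = np.where(x >= 0, np.ones_like(x), -np.ones_like(x))
```

Here `naive.dtype` is `float64` while `safe.dtype` stays `float16`, which is the kind of silent dtype change the comment warns about.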

> by making Integer and kernel_quantizer= None

Could you elaborate a bit on what you are doing? If possible, it would be good to post a minimal code sample that...

> I have made Integer and kernel_quantizer= None instead of ste_sign

What do you mean by "Integer" in this context?

> I have used time.clock() and time.time() to measure total...

Sorry for the late response; have you checked out [this discussion](https://spectrum.chat/larq/general/prediction-with-latent-weights~808d7c00-6bb4-497b-a9e0-9ee563b69bc3) about a very similar request? It also includes an example notebook and a workaround for your use case. In...

Outside the `quantize_context` this is expected when training models with latent weights, as explained in the docs you linked above, since the weights are only binarized in the forward pass...
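The latent-weight behaviour can be sketched with a simple NumPy example (an illustration of the concept only, not larq's actual `ste_sign` implementation):

```python
import numpy as np


def binarize(w):
    """Forward pass: map latent weights to +1/-1 (with sign(0) = +1)."""
    return np.where(w >= 0, 1.0, -1.0)


latent = np.array([0.3, -0.7, 0.1])
binary = binarize(latent)  # what the forward pass actually uses

# The stored (latent) weights keep full precision, so inspecting them
# outside a quantization context shows real values rather than +/-1.
```

This is why reading the layer's weights directly shows full-precision values: binarization happens on the fly in the forward pass, not in the stored variables.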