tholmb
I found the answer to question 1 in the provided code. So, convolutional kernels are initialized with Xavier initialization and biases with truncated normal initialization (mean 0.0 and std...
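For illustration, a minimal PyTorch sketch of that initialization scheme; the `init_weights` helper and the `std=0.02` value are assumptions, since the comment above is truncated:

```
import torch.nn as nn

def init_weights(module):
    # Hypothetical helper: Xavier init for conv kernels,
    # truncated normal (mean 0.0) for biases; std=0.02 is an assumed value.
    if isinstance(module, nn.Conv2d):
        nn.init.xavier_uniform_(module.weight)
        if module.bias is not None:
            nn.init.trunc_normal_(module.bias, mean=0.0, std=0.02)

# usage: model.apply(init_weights)
```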
I may have found a solution in your code for the flow1d project. For 1D correlation it should be straightforward with [this](https://github.com/haofeixu/flow1d/blob/main/flow1d/correlation.py), and for cross-attention with [this](https://github.com/haofeixu/flow1d/blob/main/flow1d/attention.py). I just don't...
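For context, a minimal sketch of 1D correlation along the width axis; this is a simplified reading of flow1d's correlation.py, not its exact API:

```
import torch

def correlation_1d_x(feature0, feature1):
    # Correlate each pixel in feature0 with every position on the same
    # row of feature1 (horizontal 1D correlation), scaled by 1/sqrt(C).
    # feature0, feature1: [B, C, H, W]
    b, c, h, w = feature0.shape
    f0 = feature0.permute(0, 2, 3, 1)        # [B, H, W, C]
    f1 = feature1.permute(0, 2, 1, 3)        # [B, H, C, W]
    corr = torch.matmul(f0, f1) / c ** 0.5   # [B, H, W, W]
    return corr
```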
Maybe I should just modify FeatureFlowAttention instead of single_head_split_window_attention. After the modifications it will look like the following:

```
class FeatureFlowAttention1D(nn.Module):
    """
    flow propagation with self-attention on feature
    query: feature0, key:...
```
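Since the comment is cut off, here is a hedged sketch of what such a 1D variant might look like; the projection layer names and the forward signature are hypothetical, not GMFlow's exact implementation:

```
import torch
import torch.nn as nn

class FeatureFlowAttention1D(nn.Module):
    # Sketch: attention weights are computed from feature0 and applied
    # to the flow, restricted to each image row (1D propagation).
    def __init__(self, in_channels):
        super().__init__()
        self.q_proj = nn.Linear(in_channels, in_channels)  # hypothetical names
        self.k_proj = nn.Linear(in_channels, in_channels)

    def forward(self, feature0, flow):
        # feature0: [B, C, H, W], flow: [B, 1, H, W] (e.g. disparity)
        b, c, h, w = feature0.shape
        f = feature0.permute(0, 2, 3, 1).reshape(b * h, w, c)
        q, k = self.q_proj(f), self.k_proj(f)               # [B*H, W, C]
        attn = torch.softmax(q @ k.transpose(1, 2) / c ** 0.5, dim=-1)
        v = flow.permute(0, 2, 3, 1).reshape(b * h, w, -1)  # [B*H, W, 1]
        out = attn @ v                                      # [B*H, W, 1]
        return out.reshape(b, h, w, -1).permute(0, 3, 1, 2)
```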
Thanks for the reply! I'm still a little confused about which functions I should modify. It's clear that at least global_correlation_softmax() needs changes, but I'm not sure whether that is enough...
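As an illustration of the kind of change involved, a sketch of a 1D variant that takes a correlation volume like the one above and regresses horizontal displacement as a softmax-weighted expectation; the function name and shapes are assumptions, not GMFlow's actual code:

```
import torch

def global_correlation_softmax_1d(corr):
    # corr: [B, H, W, W] -- 1D correlation volume (see correlation_1d_x above).
    # Softmax over match candidates, then expected horizontal displacement.
    b, h, w, _ = corr.shape
    prob = torch.softmax(corr, dim=-1)                    # [B, H, W, W]
    coords = torch.arange(w, device=corr.device, dtype=corr.dtype)
    matched_x = (prob * coords.view(1, 1, 1, w)).sum(-1)  # [B, H, W]
    flow_x = matched_x - coords.view(1, 1, w)             # displacement
    return flow_x.unsqueeze(1)                            # [B, 1, H, W]
```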
So if I have understood correctly, in TransformerBlock we first run self-attention, then cross-attention, and lastly the FFN. Both self-attention and cross-attention call the TransformerLayer class, but the difference is that self-attention...
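A minimal sketch of that structure, using nn.MultiheadAttention as a stand-in for GMFlow's TransformerLayer; the layer names and norm placement are assumptions:

```
import torch.nn as nn

class TransformerBlockSketch(nn.Module):
    # Self-attention (q, k, v all from source), then cross-attention
    # (q from source, k/v from target), then a feed-forward network.
    def __init__(self, d_model, num_heads=1):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.norm3 = nn.LayerNorm(d_model)

    def forward(self, source, target):
        # source, target: [B, L, C] token sequences from the two views
        x = self.norm1(source + self.self_attn(source, source, source)[0])
        x = self.norm2(x + self.cross_attn(x, target, target)[0])
        return self.norm3(x + self.ffn(x))
```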
Small update: I trained the network on the stereo matching task with SceneFlow, and it is producing a weird-looking checkerboard artifact. Do you have any idea where this can...
Thanks for the answer! I will validate with the SceneFlow dataset and let you know if the unwanted behavior remains. In my opinion the issue can be closed.
And for my purposes, aimet_tensorflow's Keras implementation would be what I need, but of course someone else might need it for the original aimet_tensorflow.
Any plans to add this feature to new releases of AIMET?