
`sign` tensor operator for copying sign of one tensor onto another

Open wbrickner opened this issue 2 years ago • 5 comments

I need to take tensor a and apply the sign of each of its elements to the corresponding elements of tensor b.

There seems to be no autodiff-compatible way to do this, although all backends support it AFAIK.

Thank you

wbrickner avatar Jul 24 '23 02:07 wbrickner

We are crucially missing two tensor operations for this: sign() and abs(). If we had them, then you could do this:

// Get the sign of tensor_a
let sign_a = tensor_a.sign();

// Take the absolute value of tensor_b and multiply by the sign of tensor_a
let output_tensor = tensor_b.abs() * sign_a;

For the missing ops there are workarounds using the existing ops, which I can provide tomorrow after verifying they work on my machine. Stay tuned.

I'll leave this issue open and convert it to a feature request for abs and sign.

antimora avatar Jul 24 '23 03:07 antimora

For now I have done the (very silly):

let sign = t.clone() / (t.clone() * t.clone()).sqrt();

although this is not guaranteed to yield elements in {-1, +1}, and it produces 0/0 = NaN at zero (see the tweak below). What is the blocker for adding these operations to Tensor?
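The tweak: padding the denominator with a small epsilon avoids the NaN at zero, though the result is then only approximately ±1. Untested sketch:

// Avoid 0/0 = NaN at zero; the trade-off is |sign| < 1 near zero.
let sign = t.clone() / (t.clone() * t.clone()).add_scalar(1e-12).sqrt();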

Also, I only care about the tch backend, so is there some way to specialize my code to use tensor ops from tch directly, while maintaining compatibility with burn's autodiff / ML system?

wbrickner avatar Jul 24 '23 16:07 wbrickner

There is no example of how to add code specialized for a backend, but it is possible. The best way is to create another trait MyBackend: Backend + MyAdditionalFunctions, where you implement the trait MyAdditionalFunctions for the backend you want to use. To support autodiff, you would need to implement the function for two backends: TchBackend and ADBackendDecorator<TchBackend>. For now, I would not recommend doing that, since it would probably be faster to add the ops to burn, and the lack of documentation/examples could slow you down.
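Sketched out, the pattern would look roughly like this (untested; everything except the Backend trait, TchBackend, and ADBackendDecorator is a made-up name):

use burn::tensor::{backend::Backend, Tensor};

// Extension trait carrying the ops missing from Backend.
pub trait MyAdditionalFunctions: Backend {
    fn sign<const D: usize>(tensor: Tensor<Self, D>) -> Tensor<Self, D>;
}

// Single bound combining the standard backend with the extra ops.
pub trait MyBackend: Backend + MyAdditionalFunctions {}
impl<B: Backend + MyAdditionalFunctions> MyBackend for B {}

// To keep autodiff working, implement MyAdditionalFunctions both for
// TchBackend (calling into tch directly) and for
// ADBackendDecorator<TchBackend> (providing the backward pass).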

The sign function can simply be the following:

let sign = t.ones_like().mask_fill(t.lower_elem(0.0), -1.0);
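(Note that this maps 0.0 to +1, since mask_fill starts from all ones and only overwrites the strictly negative entries; see the follow-up at the end of the thread about matching PyTorch, where sign(0.0) = 0.0.)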

nathanielsimard avatar Jul 24 '23 16:07 nathanielsimard

I guess, in place of the missing ops, you could do this:

let sign = tensor.ones_like().mask_fill(tensor.clone().lower_elem(0.0), -1.0);

let output_tensor = tensor.powf(2.0).sqrt().mul(sign);

Note: I haven't tried it yet myself.
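Wrapped into a reusable helper, the whole thing might look like the sketch below (also untested; the name copy_sign and its signature are just for illustration):

use burn::tensor::{backend::Backend, Tensor};

// Copy the sign of `sign_source` onto the magnitudes of `magnitude`.
fn copy_sign<B: Backend, const D: usize>(
    magnitude: Tensor<B, D>,
    sign_source: Tensor<B, D>,
) -> Tensor<B, D> {
    // +1 everywhere, overwritten with -1 where sign_source < 0.
    let sign = sign_source
        .ones_like()
        .mask_fill(sign_source.lower_elem(0.0), -1.0);
    // |magnitude| via x^2 then sqrt, multiplied by the sign.
    magnitude.powf(2.0).sqrt().mul(sign)
}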

antimora avatar Jul 24 '23 16:07 antimora

Linking a related issue here: https://github.com/burn-rs/burn/issues/506

antimora avatar Jul 24 '23 16:07 antimora

> The sign function can simply be the following:
>
> let sign = t.ones_like().mask_fill(t.lower_elem(0.0), -1.0);

This does not account for the 0.0 case, which should map to 0.0 according to PyTorch's sign implementation.
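An untested sketch of a zero-aware variant using the same primitives (negatives map to -1, positives to +1, zeros stay 0):

// Start from zeros, then fill positives with +1 and negatives with -1.
let sign = t
    .zeros_like()
    .mask_fill(t.clone().greater_elem(0.0), 1.0)
    .mask_fill(t.lower_elem(0.0), -1.0);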

antimora avatar Mar 10 '24 00:03 antimora