Adrian Seyboldt

123 comments by Adrian Seyboldt

I'll see if I can figure out how to put the improved Cython support into a PR to numba itself; maybe that is actually the correct place for this. If...

Oof, this looks bad. This makes me wish we had dimension objects even more; I think that would solve this problem? But I don't see how we could introduce that...

No, it is a pretty different idea from `specify_shape`. `specify_shape` just fixes the shape to known integer values; it provides no information if two different arrays that happen to have...
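(A minimal sketch of what `specify_shape` does today, assuming `aesara.tensor` is imported as `at` as in the snippets below: it pins the shape of a single variable to concrete integers, nothing more.)

```python
import aesara.tensor as at

x = at.matrix("x")
# specify_shape only attaches fixed integer sizes to this one variable;
# it says nothing about how x's shape relates to any other variable.
x_fixed = at.specify_shape(x, (3, 4))
```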

Oh, that's neat, I didn't realize you could put in variables! I'll have to play with this and see how far I can get with it. The different broadcasting behavior still...
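(If "putting in variables" means passing symbolic sizes to `specify_shape`, a sketch of how the same length variable could tie two inputs together; this is my reading of the idea, not tested here.)

```python
import aesara.tensor as at

n = at.iscalar("n")
x = at.vector("x")
y = at.vector("y")

# Reusing the same symbolic length for both inputs asserts at run time
# that they have matching shapes, which fixed integer shapes cannot express.
x_n = at.specify_shape(x, (n,))
y_n = at.specify_shape(y, (n,))
out = x_n + y_n
```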

Maybe it would help if we added an op that precisely computes what actually happens when we broadcast several arrays? So a wrapper around something like this (I really hope...
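(Presumably "something like this" refers to the plain NumPy broadcast-shape computation; a minimal sketch of that, under that assumption:)

```python
import numpy as np

def broadcast_result_shape(*shapes):
    # What actually happens when several arrays are broadcast together:
    # the resulting shape follows NumPy's broadcasting rules.
    return np.broadcast_shapes(*shapes)

broadcast_result_shape((1, 1, 1), (), (2, 1))  # -> (1, 2, 1)
```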

About the shape 0: NumPy actually allows that (although I think that might be a design flaw, and it complicates things a bit, but I haven't thought it...
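(For illustration, NumPy does broadcast length-1 dimensions against length-0 dimensions:)

```python
import numpy as np

a = np.zeros((3, 0))
b = np.ones((1, 1))
# The length-1 dimensions of b broadcast against the 0-length dimension of a.
(a + b).shape                         # (3, 0)
np.broadcast_shapes((3, 0), (1, 1))   # (3, 0)
```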

> Did not expect that! I don't think that's differentiable anyway :P

Yes it is; the derivative is just an empty array. :-) I wrote a couple of tests, maybe...
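(A small check of the "empty gradient" claim, assuming Aesara, which the `at` alias elsewhere in these comments suggests:)

```python
import numpy as np
import aesara
import aesara.tensor as at

x = at.vector("x")
g = aesara.grad(x.sum(), x)
f = aesara.function([x], g)
# For a zero-length input the gradient is simply a zero-length array.
f(np.zeros(0, dtype=aesara.config.floatX))
```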

Found another error message:

```python
(
    [(1, 1, 1), (), (2, 1)],
    [np.float32, np.float32],
    (1, 2, 1),
    np.float32,
),
```

```
[(1, 1, 1), (), (2, 1)] Cannot drop a...
```

Most do. However, these also happen with `know_shapes=True`:

```
[(2, 3), (), (1, 1, 3)] Elemwise{add,no_inplace}.grad returned a term with 3 dimensions, but 2 are required.
[(1, 1, 1), (),...
```

@ricardoV94 You mean something like this with `broadcast_to`?

```python
class SumToShape(at.Op):
    __props__ = ()

    def make_node(self, tensor, target_shape):
        dtype = tensor.type.dtype
        target_shape = at.as_tensor_variable(target_shape)
        #output = type(tensor.type)(dtype=dtype, shape=target_shape)()  # Why...
```
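(As a plain-NumPy illustration of the underlying idea, my own sketch rather than the Op above: the reverse of `broadcast_to` sums a gradient back down to the pre-broadcast shape.)

```python
import numpy as np

def sum_to_shape(value, target_shape):
    # Undo broadcasting: sum away leading axes that broadcasting added...
    while value.ndim > len(target_shape):
        value = value.sum(axis=0)
    # ...and sum over axes that were expanded from length 1.
    for axis, size in enumerate(target_shape):
        if size == 1 and value.shape[axis] != 1:
            value = value.sum(axis=axis, keepdims=True)
    return value

grad = np.ones((1, 2, 1))
sum_to_shape(grad, (2, 1)).shape  # (2, 1)
```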