Michael Abbott

1315 comments by Michael Abbott

It's pretty odd that this fails while the example of https://github.com/mcabbott/TensorCast.jl/pull/10#issue-359041016 does not. Both use `orient` to reshape a transposed matrix. But doing this twice seems to cause problems: ```...
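For context, a minimal CPU sketch of the kind of wrapper involved (`orient` itself is TensorCast-internal, so this just uses `reshape` directly):

```julia
using LinearAlgebra

A = rand(Float32, 3, 4)
B = reshape(transpose(A), 4, 3, 1)  # a lazy wrapper around a wrapper, no copy
typeof(B)  # Base.ReshapedArray{Float32, 3, LinearAlgebra.Transpose{Float32, Matrix{Float32}}, …}
# with a CuArray parent, presumably this double wrapping is what falls off the GPU fast path
```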

Here is one way to work around this, forcing the broadcast to be a CUDA one:

```julia
trick = cu(fill(false))
@reduce D[m,a] := sum(p) C[p,a] + L[p,m] + trick
```
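If I read the mechanism right (my explanation, not spelled out above): `cu(fill(false))` is a 0-dimensional `CuArray`, so adding it leaves every element unchanged (`x + false == x`), while its presence promotes the whole broadcast to CUDA's broadcast style:

```julia
using CUDA

trick = cu(fill(false))
Base.Broadcast.BroadcastStyle(typeof(trick))  # CUDA's array style, which outranks
                                              # the default CPU style during promotion
```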

I think this can be closed as fixed by #31, current behaviour is:

```julia
julia> @reduce D[m,a] := sum(p) C[p,a] + L[p,m]
3×2 CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}:
 20.0  20.0
 20.0  20.0
 ...
```

BTW, the sliced object here is an ordinary array of CuArrays, which I think means that the iteration over slices happens on the CPU. I'm not sure precisely how that...
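Roughly what that looks like, assuming plain slicing (the actual construction above is elided):

```julia
using CUDA

A = cu(rand(Float32, 3, 4))
slices = collect(eachcol(A))  # an ordinary CPU Vector whose elements are CuArray views
# iterating over `slices` is a plain CPU loop, although each slice's data stays on the GPU
```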

Thanks, that doesn't sound too difficult.

This package still lacks its own GPU tests, sadly. It could at least easily get fake GPU tests via [JLArrays](https://github.com/JuliaGPU/GPUArrays.jl/tree/master/lib/JLArrays). Besides slices above, it would be worth testing examples from...
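For what it's worth, a sketch of how such fake GPU tests might look (the test itself is my invention; only `jl` and `JLArray` come from the package):

```julia
using JLArrays  # CPU-backed implementation of the GPUArrays interface

A = jl(rand(Float32, 3, 4))  # acts like a GPU array: scalar indexing is restricted
B = A .+ 1f0                 # exercises the GPU broadcast code path, on the CPU
@assert B isa JLArray
```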

Agree this is connected to the subgradient story, in that optimisations don't commute with differentiation. But while we could decide to shrug and declare all subgradients acceptable, we can't do...
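A toy illustration of that non-commuting (my example, not one from the thread): rewriting `x / x` to `1` is a sound optimisation away from zero, but it changes the derivative there:

```julia
using ForwardDiff

f(x) = x / x     # AD propagates 0/0 at zero
g(x) = one(x)    # what an optimiser might rewrite f to

ForwardDiff.derivative(f, 0.0)  # NaN
ForwardDiff.derivative(g, 0.0)  # 0.0
```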

Oh that's weird. I've been confused by this before; for a while I thought I understood some of it as tangent vs cotangent, and the rest as bugs. However,...

Turns out there's a better function for this, which is defined & runs without error, but returns a `tuple` where it's documented to return a number, I think: ```julia julia>...

I was surprised by `exponent(0.0)` because I expected it to read out the bits -- the mathematical answer is ambiguous, as you say, so there any solution would do, and...
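For reference, the two candidate behaviours side by side (the error shown is from a recent Julia; the bit-reading result is my own computation):

```julia
julia> exponent(2.0)
1

julia> exponent(0.0)  # throws, rather than reading out the bits
ERROR: DomainError with 0.0:
Cannot be ±0.0.

julia> (reinterpret(UInt64, 0.0) >> 52) % Int - 1023  # raw biased exponent minus the bias
-1023
```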