Michael Abbott

Results: 1311 comments by Michael Abbott

One vote against. If the page listing layers is really too hard to navigate, it can be split up, or given a table of contents. But I'm not convinced it...

This seems like a good idea. Maybe it should literally be the same switch as `CUDA.allowscalar`, to avoid introducing ever more functions you have to know about? It's not exactly...

> seems wrong to me since scalar indexing and floating point precision are totally unrelated

This was my suggestion! Float precision is a big deal on GPU and not otherwise...

Note that CUDA.jl has switched the default to disallow scalar access. Maybe that means using the same switch is a worse idea. So if we own a switch, what's...
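
For concreteness, a minimal sketch of the existing CUDA.jl switch being discussed (the Flux-side switch itself is only a proposal here):

```julia
using CUDA

CUDA.allowscalar(false)    # now the CUDA.jl default: scalar indexing throws

x = CUDA.rand(3)
# x[1]                     # would error: scalar getindex is disallowed

CUDA.@allowscalar x[1]     # opt back in for a single expression
```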

If you want to do this yourself, and only have a struct of real numbers, then it will be fairly simple:

```julia
julia> using ForwardDiff: Dual, partials

julia> make_dual(z::Foo) = ...
```
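
The code above is cut off; a minimal sketch of how it might continue, assuming a hypothetical two-field struct `Foo` standing in for the real one:

```julia
using ForwardDiff: Dual, partials, value

struct Foo{T<:Real}    # hypothetical stand-in for the actual struct
    a::T
    b::T
end

# Seed one partial slot per field, so both derivatives are tracked at once:
make_dual(z::Foo) = Foo(Dual(z.a, 1.0, 0.0), Dual(z.b, 0.0, 1.0))

f(z::Foo) = z.a^2 * z.b    # any scalar function of the fields

y = f(make_dual(Foo(3.0, 4.0)))
value(y)      # 36.0
partials(y)   # contains (24.0, 9.0) == (∂f/∂a, ∂f/∂b)
```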

Maybe the first question is whether `*` is intended to be the multiplication for objects with a non-associative binary operation. Has this been discussed anywhere? (I don't mean the docs...
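
(Purely as an illustration on my part, to make the question concrete: the cross product is the textbook non-associative binary operation.)

```julia
using LinearAlgebra

a, b, c = [1.0, 0, 0], [1.0, 1, 0], [0, 0, 1.0]

cross(cross(a, b), c)    # [0.0, 0.0, 0.0]
cross(a, cross(b, c))    # [0.0, 0.0, -1.0] -- differs, so the operation is non-associative
```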

A possible cause is Functors v0.5 recursing into arbitrary custom structs? For built-in layers, Flux.state does typically save integers like padding, and does not load them (since they are immutable), but...
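
If that's the cause, a minimal sketch of the behaviour I mean (the struct name is made up; `Functors.@leaf` is the opt-out, if I remember the v0.5 API right):

```julia
using Functors

struct Config    # hypothetical custom struct, not a Flux layer
    pad::Int
end

# Functors v0.5 recurses into any struct by default, so the Int field is visited:
fmap(x -> (@show x; x), Config(1))   # shows x = 1

# Opting out restores the old behaviour of treating Config as a single leaf:
Functors.@leaf Config
fmap(x -> (@show x; x), Config(1))   # shows x = Config(1)
```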

Same error with https://github.com/JuliaDiff/ChainRules.jl/pull/569, FWIW. Not certain this is relevant, but notice the similarity to this:

```julia
julia> accumulate(=>, (1, 2, 3))
(1, 1 => 2, (1 => 2) => 3)

julia> ...
```
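
My guess at where the truncated comparison was going: the tuple method unrolls, so each element keeps its own nested Pair type, while the array method allocates its output with a single promoted eltype up front, which is where an error like the one above can come from:

```julia
# Tuple version: each element keeps its own type
accumulate(=>, (1, 2, 3))    # (1, 1 => 2, (1 => 2) => 3)

# Array version: the output is allocated as Vector{Pair{Int, Int}} first,
# and converting the results into that eltype fails:
accumulate(=>, [1, 2, 3])    # throws a conversion error, at least on the versions I've tried
```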

> As I understand the ArrayPartition declares eltype as a type that all underlying elements can be promoted to.

I don't think this is quite true, although I'm not sure...

I think the reason ForwardDiff is confused is that this ArrayPartition declares itself to have Float64 elements, but in fact sometimes returns an Int:

```julia
julia> ap = ArrayPartition([ 0.0 ], ...
```
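
A hedged reconstruction of the cut-off demonstration (the second component is my guess at the original arguments):

```julia
using RecursiveArrayTools

ap = ArrayPartition([0.0], [1])   # one Float64 part, one Int part

eltype(ap)   # Float64 -- the promoted type it advertises
ap[1]        # 0.0, a Float64 as promised
ap[2]        # 1, an Int -- not a Float64, which is what trips up ForwardDiff
```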