Michael Abbott
Originally this PR just wanted to add `@layer`, since this is the simplest model with a `struct`... Edit: Oh no, I see #407 now, working on the same model. Edit': closes...
At present `Parallel` allows multiple layers and one input, but not the reverse. This PR extends it to allow both ways... much like broadcasting in `connection((inputs .|> layers)...)`. ```julia julia>...
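A minimal sketch of the broadcast-like behaviour described above, assuming the extension is merged; the model and inputs are illustrative, not taken from the PR:

```julia
using Flux

# With this PR, `Parallel` holding one layer but given several inputs
# applies that single layer to each input, then combines the results
# with `connection` -- much like `connection((inputs .|> layer)...)`.
m = Parallel(+, Dense(2 => 3))
x1, x2 = rand(Float32, 2), rand(Float32, 2)

# Intended behaviour: m(x1, x2) matches applying the one layer to each
# input and summing, i.e. m.layers[1](x1) + m.layers[1](x2).
y = m(x1, x2)
```

The existing case (several layers, one input) is unchanged; this just fills in the symmetric half of the table.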
This is a minimal update to run on Julia 1.0 & up.
The following error showed up in #27: ```julia julia> y1 = onehotbatch([1, 3, 0, 2], 0:9); julia> y1 == y1 true julia> using JLArrays julia> y2 = onehotbatch([1, 3, 0,...
I was hit by the following performance bug when using this package with MLUtils: ```julia julia> let x, _ = Flux.splitobs(Flux.onehotbatch(rand(1:99, 100), 1:100); at=1.0, shuffle=false) @show summary(x) emb = Flux.Embedding(100...
Surely this should just work: ```julia julia> ytest 10×10000 OneHotMatrix(::CuArray{UInt32, 1, CUDA.Mem.DeviceBuffer}) with eltype Bool: ⋅ ⋅ ⋅ 1 ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ 1 ⋅ ⋅ 1 ⋅...
The lack of https://github.com/FluxML/Flux.jl/pull/1959 causes the following error, currently blocking https://github.com/FluxML/Flux.jl/pull/2025 : ```julia julia> using CUDA, OneHotArrays, NNlibCUDA julia> CUDA.allowscalar(false) julia> x = [1, 3, 2]; julia> y = onehotbatch(x,...
Unlike https://github.com/FluxML/Flux.jl/pull/1959, this uses `map` over arrays. Some duplication, unfortunately. Possibly the new method should be restricted to AbstractGPUArrays? Closes #16. Also tries to organise the tests just a little...
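A hedged sketch of the `map`-based approach: `map` avoids scalar indexing, so the same code should run on GPU array types as a single kernel. The helper name is hypothetical, not the method added by the PR:

```julia
using OneHotArrays

# Hypothetical helper: build the UInt32 index array with `map` (GPU-friendly,
# no scalar indexing), then wrap it in a lazy OneHotArray.
function onehotbatch_via_map(data, labels)
    indices = map(x -> UInt32(findfirst(isequal(x), labels)), data)
    return OneHotArray(indices, length(labels))
end

onehotbatch_via_map([1, 3, 2], 1:3)  # a 3×3 OneHotMatrix, as onehotbatch would give
```

Restricting such a method to `AbstractGPUArrays` would keep the existing CPU path, at the cost of the duplication mentioned above.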
This restores #39467, which was reverted in #40504 after it revealed that #39301 worked around #40613 by wrapping arrays in an Adjoint, and this simplification defeated the workaround. I can't...
Some operations on `Slices` from https://github.com/JuliaLang/julia/pull/32310 can be done more efficiently on the parent array. Should we add shortcuts, and how widely? The obvious candidate is reductions, but the problem...
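The kind of shortcut in question, sketched for a reduction over `eachcol`; how widely to apply such rewrites is exactly the open question:

```julia
A = rand(100, 100)

# Generic path: reduce each column slice separately, one view at a time.
s1 = sum.(eachcol(A))

# Possible shortcut: dispatch to a single reduction on the parent array.
s2 = vec(sum(A; dims=1))

# s1 ≈ s2, but the second form can use one blocked/SIMD-friendly pass
# over the parent instead of a loop of per-slice reductions.
```

The results agree, so a `sum(f, ::Slices)`-style method could forward to `sum(A; dims=...)` on the parent; the question is which operations deserve such special cases.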