Carlo Lucibello

Results: 275 issues by Carlo Lucibello

```julia
julia> x = [:a=>1, :b=>2]
2-element Vector{Pair{Symbol, Int64}}:
 :a => 1
 :b => 2

# This is working as expected
julia> gradient(x -> x[1].second, x)
(Union{Nothing, NamedTuple{(:first, :second), Tuple{Nothing,...
```

namedtuple

While working on https://github.com/FluxML/NNlib.jl/pull/260 I hit a bug on Zygote master that I managed to reduce to the following:

```julia
julia> f(x) = reshape(x, fill(2, 2)...)
f (generic function with...
```

The following exported methods don't have a docstring:

- [ ] `pullback`
- [ ] `pushforward`
- [ ] `@code_adjoint`

help wanted
good first issue
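
For context, a minimal sketch of how `pullback` (the first item in the checklist above) is typically used; this is ordinary Zygote usage, not text from the issue:

```julia
using Zygote

# pullback returns the primal value and a closure mapping an output
# cotangent back to cotangents of the inputs
y, back = Zygote.pullback(sin, 0.5)
back(1.0)   # (0.8775825618903728,) == (cos(0.5),)
```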

I would expect the gradient of a dictionary to behave like the gradient of a named tuple and contain all of the keys of the original object. For the dict...

discussion
dictionary
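
A rough sketch of the comparison being drawn, assuming Zygote's usual behavior for named tuples; the `Dict` call is the behavior the issue asks for, not necessarily what currently happens:

```julia
using Zygote

# NamedTuple gradient: every field of the original object appears,
# with `nothing` for fields the function does not touch
gradient(nt -> nt.a^2, (a = 1.0, b = 2.0))   # ((a = 2.0, b = nothing),)

# The expectation in the issue is that a Dict would behave the same way,
# returning a gradient containing all of the original keys:
# gradient(d -> d[:a]^2, Dict(:a => 1.0, :b => 2.0))
```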

Trying to fix #1293. With respect to master, the case

```julia
gradient(2) do c
    d = Dict([i => i*c for i=1:3])
    return d[1]
end
```

is now fixed, but I...

dictionary

As proposed in https://github.com/FluxML/Functors.jl/issues/49 and implemented in https://github.com/FluxML/Functors.jl/pull/51, we can switch the `functor` semantics from opt-in to opt-out, and eliminate an obscure piece of magic from users' code. Should we...

discussion
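
For illustration, a minimal sketch of the current opt-in semantics (the `@functor` line is the piece of magic in question); `MyLayer` is a made-up type, and the opt-out behavior is described only in a comment since it is the proposal rather than the released API:

```julia
using Functors

struct MyLayer
    w
    b
end

# Opt-in: the struct must be registered before Functors will recurse into it
@functor MyLayer

fmap(x -> 2x, MyLayer([1.0, 2.0], [3.0]))   # MyLayer([2.0, 4.0], [6.0])

# Under opt-out semantics, plain structs like MyLayer would be traversed by
# default, and only leaf-like types would need to be marked explicitly.
```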

https://fluxml.ai/Flux.jl/stable/models/advanced/#Freezing-Layer-Parameters should be reworded along the lines of https://fluxml.ai/Optimisers.jl/dev/#Frozen-Parameters

documentation
optimisers-dot-jl
good first issue
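
For reference, the Optimisers.jl page linked above revolves around `Optimisers.freeze!`; a minimal sketch of that style, with a placeholder model and optimiser:

```julia
using Flux, Optimisers

model = Chain(Dense(2 => 3, relu), Dense(3 => 1))
opt_state = Flux.setup(Optimisers.Adam(), model)

# Mark the first layer's optimiser state as frozen; update! will then skip it
Optimisers.freeze!(opt_state.layers[1])
```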

On latest Flux with CUDA.jl v4.0 we have the following regression where gradients are wrong for a model on GPU containing BatchNorm layers:

```julia
using Flux, FiniteDifferences, Test

d, n =...
```
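
The reproducer above is truncated; what follows is only a sketch of the general shape of such a check on CPU (the model, sizes, and tolerance are assumptions, not the original script, and the actual report concerns the GPU path):

```julia
using Flux, FiniteDifferences, Test

# Float64 throughout so the finite-difference estimate is accurate
m = Flux.f64(Chain(Dense(3 => 4), BatchNorm(4, tanh), Dense(4 => 1)))
Flux.trainmode!(m)          # use batch statistics, as in a training step
x = randn(3, 5)

# Compare the AD gradient with a finite-difference estimate
g_ad = Flux.gradient(x -> sum(m(x)), x)[1]
g_fd = FiniteDifferences.grad(central_fdm(5, 1), x -> sum(m(x)), x)[1]

@test g_ad ≈ g_fd rtol = 1e-4
```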

A convenience method that avoids the one-hot transformation of the labels. PyTorch supports this [as well](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#torch.nn.CrossEntropyLoss). Performance seems to be approximately the same on both CPU and GPU:

```julia
using...
```
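
A sketch of the one-hot step the proposal would remove, using Flux's existing API; the final commented call is the proposed convenience form (mirroring PyTorch) and is hypothetical:

```julia
using Flux

ŷ = randn(Float32, 10, 32)      # logits: 10 classes, batch of 32
labels = rand(1:10, 32)          # integer class labels

# Current approach: one-hot encode the labels before computing the loss
y = Flux.onehotbatch(labels, 1:10)
loss = Flux.logitcrossentropy(ŷ, y)

# Proposed convenience method (hypothetical): accept integer labels directly,
# as torch.nn.CrossEntropyLoss does
# loss = Flux.logitcrossentropy(ŷ, labels)
```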