`ProjectTo{SparseMatrixCSC}` can incorrectly assume `eltype` of projected value
The example below attempts to project a `Unitful.Quantity` differential onto a sparse array of `Float64`:

```julia
using SparseArrays
using ChainRulesCore
using Unitful
using UnitfulChainRules # defines projection of quantities onto Floats/Complex{<:Float}

A = sprandn(5, 5, 0.5)
pA = ProjectTo(A)
dx = randn((5, 5)) * u"m"
pA(dx)
# Unitful error:
# ERROR: DimensionError: and m are not dimensionally compatible.
```
The error occurs because the projection pre-allocates the `nzval` vector using `project_type(project.element)`. However, `project_type` doesn't take into account the value we are projecting, which, in the case of a `Unitful.Quantity` differential, means the projected elements are still `Quantity`s and cannot be stored in a vector of eltype `project_type(project.element)` (here `Float64`).
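For reference, here is roughly the current projection path (paraphrased, not the exact ChainRulesCore source; the field names `element`, `axes`, `rowval`, `nzranges`, and `colptr` are approximated from it), showing where the eltype is fixed before any element is projected:

```julia
# Simplified sketch of the existing method, not the exact library code.
function (project::ProjectTo{SparseMatrixCSC})(dx::AbstractArray)
    T = project_type(project.element)   # eltype taken from the primal, e.g. Float64
    nzval = Vector{T}(undef, length(project.rowval))
    k = 0
    for col in project.axes[2], i in project.nzranges[col]
        # project.element(val) can return a Quantity{Float64,...}, which cannot
        # be converted to Float64, so the setindex! throws the DimensionError
        nzval[k += 1] = project.element(dx[project.rowval[i], col])
    end
    m, n = map(length, project.axes)
    return SparseMatrixCSC(m, n, project.colptr, project.rowval, nzval)
end
```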
In the case of `SparseMatrixCSC`, we can rewrite the projection without preallocating `nzval`, using `mapreduce` to get a type-stable pullback that doesn't rely on `project_type`. In general, would it be better to get by without `project_type` and write projections that infer the output type from the input types?
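Something along these lines (illustrative only: `project_sparse` is just a stand-in for the call overload, I'm using `map` over the stored positions rather than `mapreduce` for brevity, and the projector fields are the same ones assumed above):

```julia
# Hypothetical rewrite: let the eltype of nzval be inferred from what
# project.element actually returns (Float64, Quantity, ...) instead of
# fixing it up front with project_type(project.element).
function project_sparse(project::ProjectTo{SparseMatrixCSC}, dx::AbstractArray)
    # stored (row, col) positions, in the same order as project.rowval
    positions = [(project.rowval[i], col) for col in project.axes[2] for i in project.nzranges[col]]
    nzval = map(((row, col),) -> project.element(dx[row, col]), positions)
    m, n = map(length, project.axes)
    return SparseMatrixCSC(m, n, project.colptr, project.rowval, nzval)
end
```

For the example above, this would return a sparse matrix whose stored values keep their units, rather than throwing.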
Edit: On closer inspection, the `mapreduce` solution runs into issues if the differential isn't of a homogeneous type; `project_type` may be a necessary solution here.
Everything about sparse array projection is very crude, and could use attention from anyone who's using it. It was roughly the minimal code to establish the idea (before 1.0) that projection does preserve sparsity.
> In the case of `SparseMatrixCSC`, we can rewrite the projection without preallocating `nzval`, using `mapreduce` to get a type-stable pullback that doesn't rely on `project_type`. In general, would it be better to get by without `project_type` and write projections that infer the output type from the input types?
>
> Edit: On closer inspection, the `mapreduce` solution runs into issues if the differential isn't of a homogeneous type; `project_type` may be a necessary solution here.
I think this is still the correct solution. If your differential isn't a homogeneous type, you are in for a bad time regardless. Sparse matrices don't work well with heterogeneous types, and all kinds of mathematical operations will become slow and type-unstable, no?

So I think that solution might be best, and we can always generalize it with the progressive widening strategy when and if someone complains.
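To be concrete, here is a minimal sketch of the progressive widening idea (a hypothetical helper, not existing ChainRulesCore API, mirroring what `Base.collect` does internally): start the buffer's eltype from the first projected value and widen only when a later value doesn't fit, so homogeneous differentials stay type-stable.

```julia
# Minimal sketch of progressive widening (hypothetical helper, not existing API).
# Homogeneous inputs keep a concrete eltype; mixed inputs widen the buffer.
# (Empty input ignored for brevity.)
function widening_map(f, xs)
    ys = [f(first(xs))]                 # eltype fixed by the first value
    for x in Iterators.drop(xs, 1)
        y = f(x)
        if !(y isa eltype(ys))
            # widen the buffer so it can hold both the old and the new element type
            ys = convert(Vector{Base.promote_typejoin(eltype(ys), typeof(y))}, ys)
        end
        push!(ys, y)
    end
    return ys
end

widening_map(sqrt, [1.0, 4.0, 9.0])        # stays a Vector{Float64}
widening_map(identity, Any[1.0, 2.0u"m"])  # widens to hold the Quantity (needs `using Unitful`)
```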
As @mcabbott says, the code for SparseArrays right now is just enough to demonstrate the behaviour that we preserve the sparsity structure; it is not well tested by any users of SparseArrays. It would be great if you wanted to have a play around and then open a PR with a solution and some tests.