David Widmann
The product of a `ScalMat` and a scalar currently returns a `ScalMat`. That's wrong, though, unless the scalar is positive. Returning eg a `Diagonal` for negative scalars and a `ScalMat`...
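The underlying point is plain linear algebra, so here is a small pure-Python sketch (hypothetical helper names, not the PDMats.jl API): a `ScalMat` represents λI, and scaling it by a negative scalar yields a matrix that is no longer positive definite.

```python
def is_pd_2x2(m):
    # Sylvester's criterion for a symmetric 2x2 matrix:
    # positive leading principal minors <=> positive definite.
    (a, b), (c, d) = m
    return a > 0 and a * d - b * c > 0

scalmat = [[2.0, 0.0], [0.0, 2.0]]                 # 2 * I, i.e. a "ScalMat" of dim 2
scaled = [[-3.0 * x for x in row] for row in scalmat]  # -6 * I

assert is_pd_2x2(scalmat)      # positive scalar preserved pd-ness
assert not is_pd_2x2(scaled)   # negative scalar: all eigenvalues negative
```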
I think TransformVariables shouldn't put such constraints on the arguments of `inverse`. Clearly, non-finite values are not useful for frequentist or Bayesian estimation, but for pure evaluation of a model...
I don't think you have to wait for that PR, you could remove the hard dependency on Julia < 1.9 right away.
The documentation of AbstractMCMC in the README should be correct: https://github.com/TuringLang/AbstractMCMC.jl If not, please open an issue or a PR :slightly_smiling_face: Generally, the Turing docs are quite outdated (see https://github.com/TuringLang/TuringTutorials/issues/86...
> I do not think any Cholesky decomposition is computed for ScalMat or PDiagMat,

This issue is about `PDMat`, not `ScalMat` or `PDiagMat`. That being said, I think multiplication with...
`AbstractPDMat` subtypes are `AbstractMatrix` subtypes, so IMO multiplication with a scalar should never error. If one wants to preserve pd-ness, IMO one should use one of the existing congruence transforms or eg...
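To sketch why a congruence transform is the right tool here (pure Python, 2x2, hypothetical helpers rather than the PDMats.jl API): X·A·Xᵀ preserves positive definiteness, and for X = √c·I with c > 0 it reduces to the scalar multiplication c·A.

```python
import math

def matmul(a, b):
    # naive 2x2 matrix product
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def is_pd_2x2(m):
    # Sylvester's criterion for a symmetric 2x2 matrix
    (a, b), (c, d) = m
    return a > 0 and a * d - b * c > 0

A = [[4.0, 1.0], [1.0, 3.0]]                  # symmetric positive definite
c = 2.5
X = [[math.sqrt(c), 0.0], [0.0, math.sqrt(c)]]  # X = sqrt(c) * I
B = matmul(matmul(X, A), X)                   # X is symmetric, so X^T == X

assert is_pd_2x2(B)                           # congruence kept pd-ness
assert abs(B[0][0] - c * A[0][0]) < 1e-9      # and B == c * A (up to fp error)
```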
My view is completely different. There are some operations in which you want to exploit that the matrix is pd and is already factorized. But not every matrix operation benefits...
IMO `sqrt` is not an appropriate analogy here as `Float64` is not a matrix type; the correct analogy would be other structured matrix types such as `Diagonal` etc.
> If one really wants to compute C - a * M (s)he could just type `C - a * Matrix(M)`

This would be unnecessarily inefficient, in particular for `M::ScalMat`...
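A quick pure-Python illustration of the efficiency point (hypothetical names, not the actual API): for M = λI, computing C - a·M only touches the diagonal of C, so materializing a dense n×n copy of M is wasted work and allocation.

```python
def sub_scaled_scalmat(C, a, lam):
    """Return C - a * (lam * I) without building the dense matrix a * lam * I."""
    out = [row[:] for row in C]      # copy C
    for i in range(len(out)):
        out[i][i] -= a * lam         # only the n diagonal entries change
    return out

C = [[5.0, 2.0], [2.0, 7.0]]
print(sub_scaled_scalmat(C, 2.0, 1.5))   # [[2.0, 2.0], [2.0, 4.0]]
```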
If you know that you're only dealing with positive scalars in the multiplication, IMO it would be better to exploit and forward that information by using the existing functionality for...
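One concrete payoff of forwarding the positivity information, sketched in pure Python (illustrative only): if M = L·Lᵀ is already factorized and c > 0, then c·M = (√c·L)·(√c·L)ᵀ, so the Cholesky factor can simply be rescaled instead of being recomputed.

```python
import math

def reconstruct(L):
    # L * L^T for a 2x2 lower-triangular L
    return [[sum(L[i][k] * L[j][k] for k in range(2)) for j in range(2)]
            for i in range(2)]

L = [[2.0, 0.0], [1.0, 1.5]]                          # Cholesky factor of M
c = 4.0
Ls = [[math.sqrt(c) * x for x in row] for row in L]   # factor of c * M

M = reconstruct(L)
cM = reconstruct(Ls)
assert all(abs(cM[i][j] - c * M[i][j]) < 1e-9
           for i in range(2) for j in range(2))       # c * M recovered
```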