
Julia package for automatically generating Bayesian inference algorithms through message passing on Forney-style factor graphs.

Results: 19 ForneyLab.jl issues

Following our [discussion](https://github.com/biaslab/ReactiveMP.jl/pull/132#discussion_r896459954), we need to change the update rule for `ruleVBPoissonOut(marg_out::Any, marg_l::Distribution{Univariate, Gamma})` to match the one in `ReactiveMP.jl` (see [out.jl](https://github.com/biaslab/ReactiveMP.jl/blob/4de9106eeac3b8d613bb58d01a7698d7d4223b34/src/rules/poisson/out.jl#L5)).
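For context: under variational message passing, the message from a Poisson node toward `out`, given a Gamma marginal `q(λ) = Gamma(a, b)` (shape–rate), is proportional to `exp(E_q[log p(out | λ)])`, which normalizes to a Poisson with rate `exp(E_q[log λ]) = exp(ψ(a) − log b)`. A minimal sketch of that computation follows; the series-based `digamma_approx` is only a dependency-free stand-in, in practice one would use `SpecialFunctions.digamma`:

```julia
# Stand-in for SpecialFunctions.digamma, valid for x > 0:
# recurrence ψ(x) = ψ(x + 1) - 1/x, then an asymptotic series.
function digamma_approx(x::Float64)
    r = 0.0
    while x < 6.0
        r -= 1.0 / x
        x += 1.0
    end
    f = 1.0 / (x * x)
    return r + log(x) - 0.5 / x - f * (1/12 - f * (1/120 - f / 252))
end

# VMP message rate toward `out`: exp(E[log λ]) for q(λ) = Gamma(a, b) (shape-rate).
poisson_out_rate(a, b) = exp(digamma_approx(a) - log(b))
```

For `Gamma(2, 1)` this gives `exp(ψ(2)) = exp(1 − γ) ≈ 1.526`, which differs from the mean-based rate `a / b = 2`, which is exactly why the choice of rule matters here.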

I wasn't sure if this meant to use `Iterators.repeated` or `repeat`, but it seemed to just mean `fill`.
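For reference, the three candidates behave differently: `fill(x, n)` builds a vector of `n` copies of a single value, `repeat(v, n)` tiles an existing array, and `Iterators.repeated(x, n)` is a lazy iterator. A quick illustration:

```julia
# fill: n copies of a single value, materialized as a vector
@assert fill(0.5, 3) == [0.5, 0.5, 0.5]

# repeat: tiles an existing array end-to-end
@assert repeat([1, 2], 3) == [1, 2, 1, 2, 1, 2]

# Iterators.repeated: lazy; collect to materialize
@assert collect(Iterators.repeated(0.5, 3)) == [0.5, 0.5, 0.5]
```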

Currently, the cutoff values (e.g. applied through `clamp`) and epsilon values (e.g. `tiny`) are set as constants. Ultimately, it should be easy for a user to override these values.
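One way to make such constants user-overridable is to store them in a `Ref` behind a `const` binding, so users can mutate the value without redefining the binding. A hypothetical sketch, not ForneyLab's actual mechanism; the names `TINY` and `clamp_precision` are made up for illustration:

```julia
# A const Ref: the binding is constant, but its contents are mutable.
const TINY = Ref(1e-12)  # hypothetical default cutoff

# Hypothetical clamp that reads the current cutoff at call time.
clamp_precision(w) = clamp(w, TINY[], Inf)

@assert clamp_precision(0.0) == 1e-12

TINY[] = 1e-8            # user override takes effect immediately
@assert clamp_precision(0.0) == 1e-8
```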

Hi, can you clarify the ForneyLab license? At https://github.com/biaslab/ForneyLab.jl, it is stated: _License (c) 2019 GN Store Nord A/S. Permission to use this software for any non-commercial purpose is granted. See..._

I am implementing a discrete Bayesian network using a few `Nonlinear` functions and cannot get the `messagePassingAlgorithm()` method to run without error. The resulting graph has all its edges terminated...

This PR adds a rule for the multiplication node:

```julia
ruleSPMultiplicationAGPN(msg_out::Message{F, Multivariate}, msg_in1::Message{PointMass, Multivariate}, msg_a::Nothing)
```

```
    ^
    |  a ~ Univariate
       out ~ Multivariate
-->[x]
```

It has been reported that the convergence test in the Laplace approximation does not work properly for more than 2 dimensions.


# `@ffg` macro

This PR implements an `@ffg` macro for defining Forney-style factor graphs. Example:

```julia
using ForneyLab

@ffg function gaussian_add(prior)
    x ~ GaussianMeanVariance(prior.m, prior.v)
    y = x + 1.0 ∥...
```

## Example signal model

Consider the probabilistic model:

```
X ~ GMM(z_x, mu_x1, lambda_x1, ..., mu_xN, lambda_xN)
Y ~ GMM(z_y, mu_y1, lambda_y1, ..., mu_yM, lambda_yM)
Z = X + Y...
```

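As a concrete reading of the model above, each of X and Y is drawn by first picking a mixture component z and then sampling from the corresponding Gaussian, parameterized by a mean and a precision (lambda). A hypothetical ancestral-sampling sketch; the weights, means, and precisions are made-up illustration values:

```julia
# Draw one sample from a Gaussian mixture given component weights,
# means, and precisions (lambda = 1 / variance).
function sample_gmm(weights, mus, lambdas)
    k = findfirst(cumsum(weights) .>= rand())   # pick component z
    return mus[k] + randn() / sqrt(lambdas[k])  # sample N(mu_k, 1/lambda_k)
end

x = sample_gmm([0.5, 0.5], [-2.0, 2.0], [1.0, 1.0])
y = sample_gmm([0.3, 0.7], [0.0, 5.0], [2.0, 0.5])
z = x + y  # the observed sum Z = X + Y
```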