
An overview of "sharp bits" of RxInfer

svilupp opened this issue • 10 comments

This is inspired by JAX Sharp Bits

It would be awesome to have a documentation page describing the most unexpected differences from what a typical Julia user (or a Turing.jl user) would expect, e.g.:

  • what operations are allowed within a model (you cannot call arbitrary functions // you cannot use sum on a random-variable array // you cannot use + with more than 2 arguments)
  • what the initialization requirements are, and what the minimum required setup is
  • troubleshooting frequently occurring problems
    • what should go in your model's return statement (since you cannot just execute the model anyway; my guess: it's for the quantities you want to manually subscribe to)
    • if your fit is bad at the very beginning of the time series, try increasing the number of iterations
    • if you get NaNs in the Bethe free energy (BFE), check that no input is multiplied by an MvNormal with only zeros; if there is one, add a tiny constant to the first column
    • if you get a MethodError with q_/m_ arguments, try XYZ first
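
To make the first bullet concrete, here is a rough sketch of what is and is not allowed inside a model (illustrative only, not taken from the RxInfer docs; the model and variable names are made up, and the `randomvar`/`datavar` syntax follows the model-specification language of that era):

```julia
using RxInfer

@model function random_walk(n)
    x = randomvar(n)                         # latent states
    y = datavar(Float64, n)                  # observations
    x_prev ~ NormalMeanVariance(0.0, 100.0)  # vague prior on the initial state
    for i in 1:n
        x[i] ~ NormalMeanVariance(x_prev, 1.0)  # pairwise relations between variables are fine
        y[i] ~ NormalMeanVariance(x[i], 0.1)
        x_prev = x[i]
    end
    # Things that were NOT allowed at the time of this issue:
    # s ~ sum(x)               # `sum` over an array of random variables
    # z ~ my_function(x[1])    # arbitrary user-defined functions
    # t ~ x[1] + x[2] + x[3]   # `+` with more than two arguments
end
```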

svilupp avatar Jun 18 '22 05:06 svilupp

Hi! I am looking into ReactiveMP for some state-space modelling. It looks like a very cool and useful package; however, not being able to call functions in models is unfortunately kind of a deal breaker for me. My use case is that I would like to call a function that returns transition/observation matrices and use those in the inference.

Is this something you are looking to make possible in the future, or is it a hard limitation of ReactiveMP?

SebastianCallh avatar Aug 17 '22 18:08 SebastianCallh

Hey @SebastianCallh ,

Depends on what exactly you are trying to achieve. If you just want to generate a state-transition matrix that depends on some "out-of-model" process, simply make it a datavar(Matrix) and pass it together with your observations. There is a good example of online filtering in a hierarchical Gaussian filter model: https://biaslab.github.io/ReactiveMP.jl/stable/examples/hierarchical_gaussian_filter/. In that demo we set our priors to be datavar's so that we can continuously change/update them during the inference procedure. You can set up something similar for the state-transition matrix as well.
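
As a sketch of that suggestion (a hypothetical model; the dimensions, distribution choices, and helper name `make_transition` are illustrative, using the `datavar`-era syntax):

```julia
using RxInfer

@model function lgssm(n, d)
    A = datavar(Matrix{Float64})    # transition matrix supplied from outside the model
    x = randomvar(n)
    y = datavar(Vector{Float64}, n)
    x_prev ~ MvNormalMeanCovariance(zeros(d), 100.0 * diageye(d))
    for i in 1:n
        x[i] ~ MvNormalMeanCovariance(A * x_prev, diageye(d))
        y[i] ~ MvNormalMeanCovariance(x[i], 0.1 * diageye(d))
        x_prev = x[i]
    end
end

# The matrix is then passed alongside the observations, and can be recomputed
# by any ordinary Julia function between inference calls:
# result = inference(model = lgssm(n, d),
#                    data  = (y = observations, A = make_transition(θ)))
```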

If you want to generate those matrices depending on some random variable in your model, then that is not supported yet, but we are working hard to make it possible. The main difficulty is that ReactiveMP.jl aims to support fast real-time inference in state-space models, and making inference fast for arbitrary functions is quite a difficult challenge.

bvdmitri avatar Aug 18 '22 10:08 bvdmitri

Thank you for that insight @bvdmitri . It makes sense to keep the scope limited.

I'm actually working on a library of structural time-series primitives, and it looks like ReactiveMP is a wonderful inference engine that I could pair it with. I have used Turing with a Kalman filter to fit models before, but it is quite slow.

There are some examples of how I currently construct and use models here: https://github.com/SebastianCallh/STSlib.jl#basics. Do you think this would pair well with ReactiveMP? I was thinking I would call the STS model at each time step for states and observations and simply pass them to ReactiveMP for inference, but I couldn't figure out how to call my functions (which operate on mean vectors and covariance matrices) with RandomVariable objects.

SebastianCallh avatar Aug 18 '22 21:08 SebastianCallh

Hi @SebastianCallh!

Our sincere apologies for the (extremely) late reply to your question; it simply escaped our attention. Although your question seems a bit unrelated to this issue, I am happy to answer it here. From your description I get the feeling that you are looking for the following example in our docs: https://biaslab.github.io/RxInfer.jl/stable/examples/Kalman%20filter%20with%20LSTM%20network%20driven%20dynamic/#Generate-data. It describes a Kalman filter whose transition matrices are modeled by a neural network (here powered by Flux.jl). The code there is a bit rough, because the neural network is trained simultaneously. If your network has already been trained, you could make use of the more convenient rxinference function, which processes data sequentially in an online manner. In particular, our @autoupdates macro might describe what you are looking for.
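
A rough sketch of that pattern (a hypothetical one-dimensional model; the exact `rxinference`/`@autoupdates` signatures follow the docs of that period and may have changed since):

```julia
using RxInfer

@model function kalman_step()
    # prior parameters fed in from outside, updated after every time step
    x_prev_mean = datavar(Float64)
    x_prev_var  = datavar(Float64)
    y = datavar(Float64)
    x_prev ~ NormalMeanVariance(x_prev_mean, x_prev_var)
    x ~ NormalMeanVariance(x_prev, 1.0)   # state transition
    y ~ NormalMeanVariance(x, 0.1)        # observation
end

autoupdates = @autoupdates begin
    # feed the current posterior of `x` back in as the next step's prior
    x_prev_mean, x_prev_var = mean_var(q(x))
end

# engine = rxinference(model = kalman_step(), datastream = observations_stream,
#                      autoupdates = autoupdates,
#                      initmarginals = (x = NormalMeanVariance(0.0, 100.0),))
```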

I hope this answers your question. If you would like to go into a bit more detail regarding your implementation, feel free to open a separate issue where we can discuss it; we think this line of research is very interesting!

bartvanerp avatar Feb 27 '23 12:02 bartvanerp

Further update on the sharp bits section:

  • Variable relations described by arbitrary functions can now be used inside the model specification language. As inference in these cases is not tractable, we need to resort to an approximation method. For this purpose we use CVI, which needs to be specified using the @meta macro. See this notebook for an example.
  • Multi-argument +, - and * operations are now available. They can also be combined, e.g. y ~ a + b * c.
  • The sum operation is not yet available, because its treatment depends on the dependency assumptions between the variables. However, an issue has been filed here. @wouterwln is working hard on improving GraphPPL.jl and on catching issues like this.
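
A brief sketch of the first two points (hypothetical model; the CVI constructor arguments shown here are illustrative, so see the linked notebook for the exact signature):

```julia
using RxInfer, StableRNGs, Optimisers

# Multi-argument and chained arithmetic inside a model now works:
#   y ~ a + b + c
#   y ~ a + b * c

# Arbitrary deterministic relations go through the CVI approximation,
# declared via the @meta macro:
f(x) = exp(x)   # any user-defined function

@model function nonlinear_example()
    y = datavar(Float64)
    x ~ NormalMeanVariance(0.0, 1.0)
    z ~ f(x)                        # nonlinear relation between random variables
    y ~ NormalMeanVariance(z, 0.1)
end

meta = @meta begin
    # rng, number of samples, number of iterations, optimiser (arguments illustrative)
    f() -> CVI(StableRNG(42), 100, 200, Optimisers.Descent(0.01))
end
```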

Tasks:

  • [x] Add debugging section (https://github.com/biaslab/ReactiveMP.jl/issues/162)
  • [x] GraphPPL restructuring (@wouterwln)
  • [ ] Handling non-existing rules (@albertpod, @bartvanerp)

If you encounter limitations of our tool, we highly encourage you to open an issue, so that we know what pitfalls people are experiencing and how we can improve our package :)

bartvanerp avatar Feb 27 '23 14:02 bartvanerp

@bartvanerp Thank you so much for your polite response. I was a bit too eager when posting my question here, and completely agree it belongs in a separate issue. I will study the example you linked!

SebastianCallh avatar Feb 28 '23 10:02 SebastianCallh

@mhidalgoaraya

bvdmitri avatar Oct 05 '23 13:10 bvdmitri

Hi @mhidalgoaraya! I see this one has "in progress" status. Are we working on it?

albertpod avatar Mar 13 '24 10:03 albertpod

ping @mhidalgoaraya

albertpod avatar Mar 20 '24 14:03 albertpod

@albertpod, this is the first time I am seeing this; it seems it was assigned to me. I can take care of it. Can we discuss it next week so you can get me up to speed? Thanks

mhidalgoaraya avatar Mar 20 '24 14:03 mhidalgoaraya