Tor Erlend Fjelde
I believe @SamuelBrand1 means that you can drop the usage of `~` in favour of the faster `@addlogprob!` path, at the cost of losing the niceties that `~` brings, e.g. sampling from the prior...
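To make the trade-off concrete, here's a minimal sketch of the two styles (model names and distributions are illustrative, not from the discussion):

```julia
using Turing

# With `~`: y is tracked as an observation, so we keep the niceties,
# e.g. the ability to sample y from the prior when it isn't supplied.
@model function with_tilde(y)
    μ ~ Normal(0, 1)
    y ~ Normal(μ, 1)
end

# With `@addlogprob!`: the likelihood term is accumulated directly.
# This can be faster, but y is no longer a tracked variable, so
# e.g. prior sampling of y is lost.
@model function with_addlogprob(y)
    μ ~ Normal(0, 1)
    Turing.@addlogprob! logpdf(Normal(μ, 1), y)
end
```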
Tried to summarize some of my thoughts on the topic. There are two issues here: 1. We don't have well-defined "modes" of operation, and hence can't _properly_ answer the question...
> However, I am a little bit confused why it is needed at all for things that in theory already return iid components (i.e. `filldist`, `.~`).

`filldist` isn't just used...
> don't think we need `@observed_value(x)` since this is available in the model scope

Maybe I misunderstand, but this is not available in the model scope in the case where...
> (internal) macro `DynamicPPL.@get_parameter_type(x::VarName)` to replace the current hard-coded logic for determining whether a parameter is missing, an observation, fixed, or a random variable.

So, currently, the generation of these branches is...
The issue with ComponentArrays.jl is that it effectively takes a `NamedTuple` approach. This has the benefit that it's very fast to do something like `x_opt.x` (+ you can infer the...
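As a rough sketch of what that NamedTuple-style access looks like (values here are purely illustrative):

```julia
using ComponentArrays

# A ComponentArray is a flat vector with labelled views; the labels
# live in the *type*, which is why `x_opt.x` is fast and inferable.
x_opt = ComponentArray(x = [1.0, 2.0], y = 3.0)

# Fast, type-stable access to the labelled component.
x_opt.x
```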
> where there are many, many individually named variables (2,600 in this case) the performance of a NamedTuple with entries for all of them will suffer?

Yep. But a more...
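The reason is that a NamedTuple carries every key in its type, so the compiler has to specialize on all of them. A tiny sketch:

```julia
nt = (a = 1.0, b = 2.0)

# The type itself spells out every name and field type:
# NamedTuple{(:a, :b), Tuple{Float64, Float64}}
typeof(nt)

# With 2,600 entries, the type contains 2,600 names, and compilation
# and type inference over it become correspondingly expensive.
```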
Another question worth raising: should this example even be in the tutorial? It's a bit unclear what it's actually inferring, no? It feels like it's almost...
What do you say, @devmotion, @yebai? Should we just drop it from the tutorial?
I'm not suggesting removing the tutorial; I'm suggesting we remove only the SDE example. Can someone explain what we're actually doing in the SDE example? We're somehow defining a likelihood...