AEInfer happens regardless of the inference program, except when it doesn't
As far as I can tell, AEinfer is called here: https://github.com/probcomp/Venturecxx/blob/master/backend/lite/trace.py#L543
In particular:
- It happens once per transition of any primitive inference operator, including weird ones like `draw_scaffold` and `print_scaffold_stats`.
- It ignores the scope and block, except when there are no random choices in the block (latents don't count as random choices), in which case it doesn't run AEInfer at all.
Yeah, it's weird. AE Kernels in Venture are weird, partly because they happened before we understood that we wanted inference programming.
I expect the scope and block to affect AEInfer through the registerAEKernels mechanism, which is, if I recall correctly, supposed to select only the relevant AE kernels to run. This may also do something reasonable with e.g. `draw_scaffold` (or may not).
I don't think so. From reading the code, it looks like registerAEKernels just maintains a global set of nodes that have AEKernels, and any primitive infer hits all of them regardless of whatever else it does.
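To make the claimed behavior concrete, here is a minimal Python sketch of the pattern as described above (hypothetical names, not the real Venturecxx API): registerAEKernels maintains one global set, and every primitive inference transition fires all of the registered kernels regardless of which scope/block the transition targeted.

```python
# Sketch only: models the described behavior, not the actual trace.py code.

class AEKernel:
    """Stands in for an AE kernel attached to an SP with latents."""
    def __init__(self, name):
        self.name = name
        self.runs = 0

    def AEInfer(self):
        self.runs += 1


class Trace:
    def __init__(self):
        self.aes = set()  # the single global registry

    def registerAEKernel(self, kernel):
        self.aes.add(kernel)

    def unregisterAEKernel(self, kernel):
        self.aes.discard(kernel)

    def primitive_infer(self, scope, block):
        # ... perform the actual transition for (scope, block) here ...
        # Then, unconditionally, run every registered AE kernel:
        for kernel in self.aes:
            kernel.AEInfer()


trace = Trace()
hmm = AEKernel("lazy_hmm")
trace.registerAEKernel(hmm)

# Even a transition on an unrelated scope/block hits the HMM's kernel:
trace.primitive_infer("scope", "block")
trace.primitive_infer("other_scope", "other_block")
assert hmm.runs == 2
```

Under this reading, the only way a transition avoids the HMM's kernel is the degenerate case noted earlier: the block containing no random choices, so no primitive transition runs at all.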
Some testing seems to confirm this:
```
venture[script] > assume x = tag("scope", "block", flip())
False
venture[script] > assume h = make_lazy_hmm(simplex(0.5, 0.5), matrix(array(array(0.5, 0.5), array(0.5, 0.5))), matrix(array(array(0.5, 0.5), array(0.5, 0.5))))
([], {})
venture[script] > observe h(3) = atom<0>
venture[script] > sample h
([matrix([[1, 0]]), matrix([[0, 1]]), matrix([[1, 0]]), matrix([[0, 1]])], {3: [0]})
venture[script] > print_scaffold_stats("scope", "block", 1)
---Scaffold---
# pnodes: 1
# absorbing nodes: 1
# aaa nodes: 0
# brush nodes: 0
border lengths: [2]
# lkernels: 0
[]
venture[script] > sample h
([matrix([[0, 1]]), matrix([[1, 0]]), matrix([[0, 1]]), matrix([[1, 0]])], {3: [0]})
```
What you described sounds like a reasonable thing that it should do. Maybe there should also be a dedicated inference action for running AEKernels, so that they can be controlled by inference programs.
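A dedicated action could look something like the following Python sketch. Everything here is hypothetical (the function name `run_ae_kernels`, the per-kernel `scope` tag, and the classes are all invented for illustration); the point is just that an explicit, scope-filtered action would let inference programs decide when latent kernels fire, instead of having them fire on every primitive transition.

```python
# Hypothetical sketch of a dedicated inference action for AE kernels.
# None of these names exist in Venturecxx.

class AEKernel:
    def __init__(self, scope):
        self.scope = scope
        self.runs = 0

    def AEInfer(self):
        self.runs += 1


class Trace:
    def __init__(self):
        self.aes = []

    def registerAEKernel(self, kernel):
        self.aes.append(kernel)


def run_ae_kernels(trace, scope=None):
    """Hypothetical action: run only the AE kernels tagged with `scope`
    (or all of them when no scope is given)."""
    for kernel in trace.aes:
        if scope is None or kernel.scope == scope:
            kernel.AEInfer()


trace = Trace()
a = AEKernel("hmm_scope")
b = AEKernel("other")
trace.registerAEKernel(a)
trace.registerAEKernel(b)

run_ae_kernels(trace, scope="hmm_scope")
assert (a.runs, b.runs) == (1, 0)

run_ae_kernels(trace)  # no scope given: hit everything
assert (a.runs, b.runs) == (2, 1)
```

With something like this, the unconditional per-transition call in `primitive_infer` could be dropped, and programs that want the old behavior could interleave `run_ae_kernels` with their other transitions.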
Could argue that this is blocked on the Mite backend being finished, on the grounds that it might light the way for what to do in Lite.