Ways to build measures
- [ ] From a log-density wrt some base measure
- [ ] As a superposition of other measures
- [ ] As a product measure
- [ ] By reweighting an existing measure
- [ ] By normalizing an existing measure
- [ ] From a CDF or copula function
- [ ] "Importance sampling" representation - changing the base measure
- [ ] From a set or weighted set
- [ ] From a loss function (treat as a negative log-density over Lebesgue measure)
- [ ] From expectations of basis functions, e.g. characteristic functions (https://github.com/cscherrer/MeasureTheory.jl/issues/59#issuecomment-772777110)
- [ ] From a random sampler (https://github.com/cscherrer/MeasureTheory.jl/issues/59#issuecomment-773177788)
- [ ] As a pushforward of an existing measure
- [ ] By sorting a power measure, for a linearly-ordered space (order statistics)
- [ ] As a Dirichlet process with an existing measure as its base measure
- [ ] As an empirical measure by sampling an existing measure (with replacement)
- [ ] By sampling from an existing measure without replacement
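To make a few of these concrete, here are rough Julia sketches. Every type and function name below is made up for illustration, not taken from MeasureTheory's API, and Distributions.jl objects just stand in for component measures where convenient.

From a log-density over a base measure, which also covers the loss-function item (density ∝ exp(-loss) over Lebesgue measure, i.e. a Gibbs measure):

```julia
# Hypothetical sketch: represent a measure as a log-density function paired
# with the base measure it's taken against.
struct Lebesgue end

struct DensityMeasure{F,B}
    logdensity::F   # x -> log-density with respect to `base`
    base::B
end

# A standard normal, from its log-density over Lebesgue measure:
stdnormal = DensityMeasure(x -> -x^2 / 2 - log(2π) / 2, Lebesgue())

# From a loss function: treat the loss as a negative log-density, giving an
# (unnormalized) Gibbs measure with density ∝ exp(-loss).
from_loss(loss) = DensityMeasure(x -> -loss(x), Lebesgue())

laplaceish = from_loss(abs)   # density ∝ exp(-|x|)
```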
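A superposition μ + ν over a common base measure just adds densities pointwise; a mixture is the normalized, weighted special case. (A careful version would use logsumexp for stability.)

```julia
using Distributions

# Hypothetical sketch: the superposition of measures adds their densities.
# Total mass is the sum of the component masses (2 here, for two probability
# measures), so the result is not itself a probability measure.
struct Superposition{T}
    components::Vector{T}
end

logdensity(s::Superposition, x) = log(sum(exp(logpdf(c, x)) for c in s.components))

s = Superposition([Normal(0, 1), Normal(3, 1)])
logdensity(s, 0.5)
```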
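A product measure over independent factors: log-densities add across coordinates, and sampling draws each coordinate independently. (Distributions.jl's `product_distribution` covers the probability-measure case.)

```julia
using Distributions

# Hypothetical sketch of a product measure
struct Product{T}
    factors::Vector{T}
end

logdensity(p::Product, x) = sum(logpdf(f, xi) for (f, xi) in zip(p.factors, x))
Base.rand(p::Product) = [rand(f) for f in p.factors]

p = Product([Normal(0, 1), Exponential(2.0)])
x = rand(p)
logdensity(p, x)
```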
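Reweighting, normalizing, and the "importance sampling" representation are variations on one move: multiply an existing measure by a weight function, i.e. supply a density against a different base measure. Normalizing then divides by the total mass, which in general requires integration; this sketch estimates it by Monte Carlo against the parent:

```julia
using Distributions

# Hypothetical sketch: tilt a parent measure by a weight function,
# so density ∝ exp(logweight(x)) * (parent density at x).
struct Weighted{W,M}
    logweight::W   # x -> log w(x)
    parent::M
end

logdensity(m::Weighted, x) = m.logweight(x) + logpdf(m.parent, x)

# log of the total mass Z = ∫ w dμ, by simple Monte Carlo against the parent
function logmass(m::Weighted; n = 10_000)
    log(sum(exp(m.logweight(rand(m.parent))) for _ in 1:n) / n)
end

m = Weighted(x -> -abs(x), Normal(0, 1))   # tilt a normal by exp(-|x|)
lZ = logmass(m)
normalized_logdensity(x) = logdensity(m, x) - lZ   # normalizing = subtracting log Z
```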
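From a CDF: the density is the derivative of F and sampling is the inverse-CDF method. The sketch differentiates numerically and inverts by bisection; the copula route composes marginal quantile functions with a copula draw in the same spirit.

```julia
# Hypothetical sketch: a measure on the real line given only its CDF.
struct CDFMeasure{F}
    cdf::F
end

# density as a numerical derivative of the CDF
density(m::CDFMeasure, x; h = 1e-6) = (m.cdf(x + h) - m.cdf(x - h)) / 2h

# inverse-transform sampling, with the quantile found by bisection
function Base.rand(m::CDFMeasure; lo = -1e6, hi = 1e6, tol = 1e-10)
    u = rand()
    while hi - lo > tol
        mid = (lo + hi) / 2
        m.cdf(mid) < u ? (lo = mid) : (hi = mid)
    end
    (lo + hi) / 2
end

logistic = CDFMeasure(x -> 1 / (1 + exp(-x)))
rand(logistic)
```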
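From a weighted set, which also gives the empirical-measure item: sampling an existing measure with replacement produces a weighted set with uniform weights.

```julia
using Distributions

# Hypothetical sketch: a discrete measure as points plus weights
struct WeightedSet{T}
    points::Vector{T}
    weights::Vector{Float64}   # nonnegative, need not sum to 1
end

# mass of the set {x : A(x)} under the (possibly unnormalized) measure
mass(m::WeightedSet, A) = sum((w for (x, w) in zip(m.points, m.weights) if A(x)); init = 0.0)

# draw from the normalized measure by inverting the CDF of the weights
function Base.rand(m::WeightedSet)
    u = rand() * sum(m.weights)
    acc = 0.0
    for (x, w) in zip(m.points, m.weights)
        acc += w
        acc >= u && return x
    end
    return last(m.points)
end

# empirical measure of μ: n draws with replacement, each carrying weight 1/n
empirical(μ, n) = WeightedSet([rand(μ) for _ in 1:n], fill(1 / n, n))

e = empirical(Normal(0, 1), 1000)
mass(e, x -> x > 0)   # ≈ 0.5
```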
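Expectations of basis functions, with characteristic functions as the motivating case: represent μ by φ(t) = E[exp(itX)]. Convolution of measures then becomes pointwise multiplication, which is what makes this representation attractive.

```julia
using Distributions

# Hypothetical sketch: a measure represented by its characteristic function
struct CFMeasure{F}
    φ::F   # t -> E[exp(i t X)]
end

# one way in: the empirical characteristic function of a batch of samples
cf_from_samples(xs) = CFMeasure(t -> sum(cis(t * x) for x in xs) / length(xs))

# convolution = pointwise product of characteristic functions
convolve(a::CFMeasure, b::CFMeasure) = CFMeasure(t -> a.φ(t) * b.φ(t))

# Convolving two standard normals gives a normal with variance 2, CF exp(-t^2):
a = cf_from_samples(rand(Normal(0, 1), 10_000))
c = convolve(a, a)
c.φ(1.0)   # ≈ exp(-1), up to Monte Carlo error
```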
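From a random sampler: there may be no tractable density at all, but expectations (and hence probabilities of sets) are still estimable by Monte Carlo, which is often all downstream code needs.

```julia
# Hypothetical sketch: a measure known only through a sampling procedure
struct SamplerMeasure{F}
    sampler::F   # () -> one random draw
end

Base.rand(m::SamplerMeasure) = m.sampler()

# E_μ[g] ≈ (1/n) Σ g(xᵢ) with xᵢ drawn from the sampler
expect(g, m::SamplerMeasure; n = 10_000) = sum(g(rand(m)) for _ in 1:n) / n

# the sampler can compose primitives whose pushforward density is awkward
m = SamplerMeasure(() -> abs(randn()) + rand())
expect(x -> x^2, m)
```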
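A pushforward f♯μ: sampling applies f to a parent draw; for invertible, smooth f the log-density comes from the change-of-variables formula. The Jacobian term is done numerically here just to keep the sketch short; a real implementation would use AD or a hand-written inverse with its log-Jacobian.

```julia
using Distributions

# Hypothetical sketch of a pushforward measure
struct Pushforward{F,Finv,M}
    f::F
    finv::Finv   # inverse of f, needed only for density evaluation
    parent::M
end

Base.rand(m::Pushforward) = m.f(rand(m.parent))

# log-density at y: parent log-density at f⁻¹(y) plus log |d f⁻¹ / dy|
function logdensity(m::Pushforward, y; h = 1e-6)
    logjac = log(abs((m.finv(y + h) - m.finv(y - h)) / 2h))
    logpdf(m.parent, m.finv(y)) + logjac
end

# a lognormal as the pushforward of a normal through exp
ln = Pushforward(exp, log, Normal(0, 1))
logdensity(ln, 2.0)   # ≈ logpdf(LogNormal(0, 1), 2.0)
```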
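Order statistics, i.e. the pushforward of the n-fold power measure under `sort`; for iid draws the joint density of the sorted vector picks up an n! factor on the sorted region.

```julia
using Distributions

# Hypothetical sketch: sort an n-fold power of a measure on an ordered space
struct Sorted{M}
    parent::M
    n::Int
end

Base.rand(m::Sorted) = sort!([rand(m.parent) for _ in 1:m.n])

logdensity(m::Sorted, x) =
    issorted(x) ? sum(logpdf(m.parent, xi) for xi in x) + sum(log, 1:m.n) : -Inf

rand(Sorted(Uniform(0, 1), 5))
```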
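A draw from a Dirichlet process over an existing base measure, via the truncated stick-breaking construction. Note the draw is itself a measure, a weighted set of atoms, so it composes with the weighted-set representation above.

```julia
using Distributions

# Hypothetical sketch: DP(α, base) by stick-breaking, truncated at k atoms
function dp_sample(α, base; k = 100)
    betas = rand(Beta(1, α), k)        # stick-breaking proportions
    weights = similar(betas)
    remaining = 1.0
    for i in 1:k
        weights[i] = betas[i] * remaining
        remaining *= 1 - betas[i]
    end
    atoms = rand(base, k)              # atoms drawn iid from the base measure
    (atoms = atoms, weights = weights) # truncation leaves ≈`remaining` mass out
end

dp_sample(2.0, Normal(0, 1))
```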
There's also using expectations of basis functions, e.g. characteristic functions. That's useful for defining a measure as a convolution of measures.
Thanks @sethaxen, I've added it
Also one might want to define a measure from some function that computes random samples.
Nice @sethaxen. This one will probably be tricky; maybe @zenna's work could help with it?
> This one will probably be tricky; maybe @zenna's work could help with it?
Yeah, I'm just listing things a user might want to do, not necessarily things that will be easy to support. I'm not familiar with @zenna's work.
Sam Power tweeted out this chapter a few days ago, and it had some additional ways of generating random variables that I hadn't seen before: http://luc.devroye.org/handbooksimulation1.pdf
Nice, reminds me of this: https://blog.shakirm.com/2015/10/machine-learning-trick-of-the-day-4-reparameterisation-tricks/