Add brute-force samplers (for uniform priors)
For example, explore each parameter individually (parameter on x, score on y), or plot any two parameters against each other (parameter 1 on x, parameter 2 on z, score on y).
Use the Evaluator interface to parallelise.
Candidate sampling schemes (a rough sketch of all three follows the checklist):
- [ ] Uniform
- [ ] Latin hypercube
- [ ] Sobol
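One possible shape for these, as standalone helpers over a rectangular box: the function names below are hypothetical (not existing Pints API), and the quasi-random samplers assume scipy >= 1.7 for `scipy.stats.qmc`.

```python
import numpy as np
from scipy.stats import qmc


def sample_uniform(lower, upper, n, seed=None):
    # Draw n points uniformly at random inside the box [lower, upper].
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=(n, len(lower)))
    return lower + u * (upper - lower)


def sample_latin_hypercube(lower, upper, n, seed=None):
    # Draw n Latin hypercube points inside the box [lower, upper].
    sampler = qmc.LatinHypercube(d=len(lower), seed=seed)
    return qmc.scale(sampler.random(n), lower, upper)


def sample_sobol(lower, upper, n, seed=None):
    # Draw n Sobol points inside the box (n is ideally a power of two).
    sampler = qmc.Sobol(d=len(lower), scramble=True, seed=seed)
    return qmc.scale(sampler.random(n), lower, upper)
```

Each returns an `(n, d)` array of points, which could then be passed to a (possibly parallel) evaluator and plotted.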
Michael wrote:
"_Thinking about doing some brute-force parameter space exploration around points returned by optimisers. Need to do it in 2D/3D so I can visualise though. Would it be enough to plot parameter 1 & parameter 2, parameter 3 & parameter 4 etc. Or should I do plots for all possible combinations (like the triangular MCMC plots?) _"
Jonathan wrote:
"Depends what you want to use it for I guess. Doing all combinations will give you a better feel for what's going on, and just involves more plots, not more computation. You could perhaps figure out which pairs give the most interesting surface (biggest changes in likelihood) and plot those first?"
@martinjrobins wrote:
"@michael r u implementing this in pints? I'd like to do this as well, so it would be nice to have this as a general plot_likelihood_around_point() type function"
I'm thinking it could be good to have this in a box (i.e. boundaries / a uniform prior) and then uniformly spaced. If we make it more complicated it stops being a check and becomes a bad version of MCMC or other samplers.
Turns out I have this in Myokit (but have deprecated it, because fitting isn't a core goal there):
```python
import numpy as np

# Note: ``evaluate`` (used at the bottom) is a helper from the same Myokit
# module, which applies ``f`` to every point in ``x``, optionally in parallel.


def map_grid(f, bounds, n, parallel=False, args=None):
    """
    Maps a parameter space by evaluating every point in a rectangular grid.

    Arguments:

    ``f``
        A function to map. The function ``f(x)`` must be callable with ``x``
        a sequence of ``m`` coordinates and should return a single scalar
        value.
    ``bounds``
        A list of ``m`` tuples ``(min_i, max_i)`` specifying the minimum and
        maximum values in the search space for each dimension ``i``. The
        mapped space will be within these bounds.
    ``n``
        The number of points to sample in each direction. If ``n`` is a
        scalar the function will map a grid of ``n`` points in each
        direction, so that the total number of points is ``n**m``, where
        ``m`` is the dimensionality of the search space. Alternatively, the
        number of points in each dimension can be specified by passing in a
        length ``m`` sequence of sizes, so that the total number of points
        mapped is ``n[0] * n[1] * ... * n[m-1]``.
    ``parallel``
        Set to ``True`` to run evaluations on all available cores.
    ``args``
        An optional tuple containing extra arguments to ``f``. If ``args``
        is specified, ``f`` will be called as ``f(x, *args)``.

    Returns a tuple ``(x, fx)`` where ``x`` is a numpy array containing all
    the tested points and ``fx`` contains the calculated ``f(x)`` for each
    ``x``.
    """
    # Check bounds, get number of dimensions
    ndims = len(bounds)
    if ndims < 1:
        raise ValueError('Problem must be at least 1-dimensional.')
    for b in bounds:
        if len(b) != 2:
            raise ValueError(
                'A minimum and maximum must be specified for each dimension.')

    # Check number of points per dimension
    try:
        len(n)
    except TypeError:
        n = (n,) * ndims
    if len(n) != ndims:
        if len(n) == 1:
            n = (n[0],) * ndims
        else:
            raise ValueError(
                'The positional argument n must be a scalar or provide a'
                ' value for each dimension.')
    npoints = np.array(n)
    ntotal = np.prod(npoints)

    # Create evenly spaced points along each dimension
    x = []
    n = iter(npoints)
    for xmin, xmax in bounds:
        x.append(np.linspace(xmin, xmax, next(n)))

    # Create a grid from these points
    x = np.array(np.meshgrid(*x, indexing='ij'))

    # Re-organise the grid into a list of nd-dimensional points
    x = x.reshape((ndims, ntotal)).transpose()

    # Evaluate and return
    return x, evaluate(f, x, parallel=parallel, args=args)
```
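For instance, a hypothetical usage sketch: the serial `evaluate` stand-in below replaces Myokit's parallel helper, and the quadratic `score` is just a toy example.

```python
import numpy as np


def evaluate(f, x, parallel=False, args=None):
    # Serial stand-in for the evaluator helper that map_grid expects.
    args = args or ()
    return np.array([f(xi, *args) for xi in x])


def score(p):
    # Toy two-parameter score with its minimum at (1, -2).
    return (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2


bounds = [(-5, 5), (-5, 5)]
x, fx = map_grid(score, bounds, n=50)  # 50 x 50 = 2500 grid points
print(x[np.argmin(fx)])                # grid point with the lowest score
```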
Should we include something like this in Pints?
Maybe Latin hypercube sampling too, and Sobol sampling.
Or perhaps we should implement these as methods inside the UniformPrior. Touches on what we discussed today a bit. Any thoughts @ben18785 @martinjrobins?
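For illustration, one possible shape for that idea (a hypothetical sketch, not the existing Pints prior API, reusing the box samplers sketched near the top of this issue):

```python
class UniformBoxPrior:
    # Hypothetical: a uniform prior over a box that also exposes
    # space-filling sampling schemes over its own support.
    def __init__(self, lower, upper):
        self.lower = lower
        self.upper = upper

    def sample(self, n):
        return sample_uniform(self.lower, self.upper, n)

    def sample_latin_hypercube(self, n):
        return sample_latin_hypercube(self.lower, self.upper, n)

    def sample_sobol(self, n):
        return sample_sobol(self.lower, self.upper, n)
```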
Yeah, I like the idea!
Could we also use methods from SMC to help with this? As in, we reweight particles according to their functional value, then propagate them?
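Roughly a single reweight-resample-propagate step, perhaps (a toy sketch only, not an SMC method from Pints; the exponential weighting of scores and the Gaussian jitter used to propagate particles are assumptions):

```python
import numpy as np


def reweight_and_propagate(particles, scores, step=0.1, seed=None):
    # Convert (lower-is-better) scores into normalised weights, resample
    # the particles according to those weights, then add Gaussian jitter
    # to propagate the population towards better-scoring regions.
    rng = np.random.default_rng(seed)
    w = np.exp(-(scores - scores.min()))
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx] + rng.normal(scale=step, size=particles.shape)
```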
I think so! But that sounds suspiciously like you're inventing a sampler :D
@MichaelClerx Do we still want this? I'm not sure about it, since uniformly sampling a space is easy enough to do and doesn't work well in anything above a handful of dimensions.
I'm still thinking it could be very useful, e.g. to explore and plot score functions. Perhaps as several sampling methods on the bounded uniform prior, or something like that. I also like the idea of having a sort of zero-cleverness, brute-force approach as a baseline to compare stuff with (even if that means the method doesn't really work for most cases). And this is something I've done and found useful in the past, so I'd like to keep this ticket open.
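On the plotting side, the `plot_likelihood_around_point()` idea mentioned above could start as simple as this (a hypothetical sketch; the function name, signature, and fixed half-width scheme are all assumptions, and it handles one pair of parameters at a time):

```python
import numpy as np
import matplotlib.pyplot as plt


def plot_score_around_point(f, point, i=0, j=1, width=0.1, n=40):
    # Evaluate f on an n-by-n grid spanning +/- width around ``point`` in
    # dimensions i and j, keeping all the other parameters fixed.
    point = np.asarray(point, dtype=float)
    xi = np.linspace(point[i] - width, point[i] + width, n)
    xj = np.linspace(point[j] - width, point[j] + width, n)
    z = np.empty((n, n))
    for a, vi in enumerate(xi):
        for b, vj in enumerate(xj):
            p = point.copy()
            p[i], p[j] = vi, vj
            z[b, a] = f(p)
    plt.contourf(xi, xj, z)
    plt.xlabel('Parameter %d' % i)
    plt.ylabel('Parameter %d' % j)
    plt.colorbar(label='Score')
    plt.show()
```

Calling it for every pair (i, j) would give the "all combinations" view discussed above.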