
Scalar function maximizer

jstac opened this issue 8 years ago • 18 comments

Adding to wish list: I would like to have a 1D maximization routine in QuantEcon.py, something akin to this SciPy function:

https://github.com/scipy/scipy/blob/v0.16.1/scipy/optimize/optimize.py#L1554

The signature would be

import quantecon as qe
qe.maximize(f, a, b)  # maximize function f on [a, b]

The algorithm in the link above is Brent's method, and I think that would be fine, although perhaps it would be nice to have an optional argument concave that allows users to exploit concavity.
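In the meantime, a minimal sketch of what such a wrapper could look like, built on SciPy's existing bounded minimizer (the maximize signature and return convention here are hypothetical, not an existing QuantEcon API):

from scipy.optimize import minimize_scalar

def maximize(f, a, b):
    # Maximize f on [a, b] by minimizing -f with SciPy's bounded
    # Brent-style method (the same algorithm as the linked fminbound).
    res = minimize_scalar(lambda x: -f(x), bounds=(a, b), method='bounded')
    return res.x, -res.fun

x_star, f_star = maximize(lambda x: -(x - 1.0) ** 2, 0.0, 2.0)  # x_star ≈ 1.0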

Rationale:

  • Could be written in Cython. The SciPy version is pure Python. Cython seems like a better option than Numba because Numba-jitted functions cannot accept arbitrary functions as arguments, while Cython functions can.
  • SciPy and similar libraries only supply minimizers. With a maximizer, the code is a bit cleaner.

Notes:

I wonder how this would work if it were written in Cython and you passed in numbafied primitives (for example, if f in the example call above were numbafied). I suspect it would work well.

jstac avatar Dec 26 '15 16:12 jstac

@jstac

I think it might be possible to use Numba to do this. See the very bottom of issue #159 for an example of how to pass jitted functions as arguments.

davidrpugh avatar Dec 26 '15 17:12 davidrpugh

@davidrpugh Many thanks.

Actually, I see now that my Numba knowledge is out of date: We can just pass jitted functions to jitted functions:

http://numba.pydata.org/numba-doc/dev/user/jit.html#calling-and-inlining-other-functions

This is probably the easiest way to go.
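Note that the linked section is about calling a jitted function from another jitted function (rather than passing one as an argument). A minimal sketch of that pattern, with illustrative names:

from numba import njit

@njit
def square(x):
    return x ** 2

@njit
def apply_twice(x):
    # A jitted function can call another jitted function directly in
    # nopython mode; Numba may even inline the call.
    return square(square(x))

apply_twice(2.0)  # 16.0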

jstac avatar Dec 26 '15 17:12 jstac

That's a tricky one. Currently the state of optimizing functions in Python is not ideal, and it is hard to write efficient algorithms that rely on repeated optimization. In general, you need to define a function f to be optimized at a list of points, and at the same time to loop over that list of points. Ideally, the objective f and the outer loop would be written in pure Python or in very simple Numba. By and large, this is not feasible right now. The basic reason is that the optimization routine takes a function as an argument. From my understanding, this is not going to be in Numba in the very near future, but it will eventually happen. There are possible workarounds:

  • Copy the code of the optimizer manually. Concretely you can have in your file:
from numba import jit

@jit
def f(x):
    # defines the model
    return res

@jit
def optimize(a, b, x0):
    v = f(x0)
    ...
    return res

@jit
def outer_loop():
    # ...
    for i in range(grid.shape[0]):
        res[i, :] = optimize(a, b, res[i, :])

In that case, everything is just-in-time compiled. It should be possible to write it in a way that does not involve a full copy of the optimization code, but in any case, optimize does not take f as an argument.

  • Write a function factory, which takes the optimizer and the function to optimize, and returns a new jitted function that takes a supported Numba type as an argument and produces a supported type as output (see the sketch after this list). This requires quite a bit of opaque magic and puts some constraints on the output. On the bright side, the syntax could be rather transparent: optimizer(f)(x0), where optimizer(f) would be a jitted function.
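A minimal sketch of the factory idea, assuming a Numba version where a jitted closure can capture a jitted function (all names here are illustrative):

from numba import njit

@njit
def f(x):
    return -(x - 1.0) ** 2  # concave, maximum at x = 1

def make_maximizer(func):
    # func is baked into the closure at compile time, so the inner
    # routine never takes a function as a runtime argument.
    @njit
    def maximize(a, b):
        # Golden-section search for the maximum of func on [a, b].
        r = 0.5 * (3.0 - 5.0 ** 0.5)
        while b - a > 1e-8:
            c = a + r * (b - a)
            d = b - r * (b - a)
            if func(c) > func(d):
                b = d
            else:
                a = c
        return 0.5 * (a + b)
    return maximize

maximize_f = make_maximizer(f)
maximize_f(0.0, 2.0)  # ≈ 1.0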

At any rate, in both cases, and in the long run, we need some optimization routines that we can compile in a nopython context. For d = 1, the algorithms are simple enough that we can provide simple Python implementations to be copied or jitted. CompEcon has some, SciPy has more or less the same ones, and they are easy enough to code anyway, so we don't lack options. For d >= 2, this is more involved, and I don't know that we have options other than wrapping low-level functions (which currently is not feasible).

Bottom line: I don't think we are ready right now to supply an optimized maximization routine. However, maximization routines are essential, and we can explain how to reuse them. I'll try to have another look at the function factory approach: I don't think we can provide a robust library function, but we can certainly do a notebook explaining how to do it.

albop avatar Dec 26 '15 19:12 albop

Actually, the reason I mentioned the outer loop as part of the problem is that sometimes you want to write the whole algorithm as a function of f, so there is a double-dip aspect to the problem: you want to jit-compile the algorithm itself as a function of f. Also, if you are willing to write f and the outer loop in Cython, then you don't have as many problems; all you need is a Cythonized version of an optimizer, which probably already exists. Otherwise, the problem is essentially the same.

albop avatar Dec 26 '15 20:12 albop

For what it is worth, I re-implemented the functions from scipy.optimize.zeros (link is to the C source code) in Julia, and they are quite fast/robust. You can see them here. If we do decide to add something like this to QE.py, they are ready to go on the Julia side.

sglyon avatar Dec 28 '15 16:12 sglyon

Just one or two days ago, the new jitclass feature was merged into Numba master, and it is now possible to pass a jitted class instance to a jitted function.

I played a bit with it; see this notebook.

(I don't know why something that is possible with a class is not possible with a function, though.)

oyamad avatar Jan 01 '16 07:01 oyamad

@spencerlyon2: Julia clearly has a point in making this kind of code design much easier to implement. Hopefully the Python side will catch up. While Numba continues to improve, I wonder how PyPy/Pyston would perform on these kinds of loops.

@oyamad: very interesting example! I have been waiting impatiently for these JIT classes to happen. Apparently they provide another workaround to the "function as argument" problem. I haven't run your notebook, but I suspect it works because the golden_search function is recompiled each time it sees a new function (because it is also a new class). In the example you provide, if you were to define another function g, then you would have to define it as a new class, not as another instance of MyObjectiveFunction, since the code in value is attached to the class, not to the instance.

This means that one could imagine a special decorator fjit such that:

@fjit
def f(args):
    # ...
    return ...

would be equivalent to:

class RandomName12:

    def value(self, x):
        # same calculations as f
        return ...

f = RandomName12()

In this case, f could be used as an argument to jitted functions. Not ideal, but certainly fun.
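For concreteness, a minimal hand-written version of this pattern (a sketch assuming Numba's jitclass API; in current releases it is imported from numba.experimental):

from numba import njit
from numba.experimental import jitclass

@jitclass([])  # empty spec: the class exists only to carry the method
class Square:
    def __init__(self):
        pass

    def value(self, x):
        return x ** 2

f = Square()

@njit
def call_twice(obj, x):
    # obj is a jitted class instance, passed as an argument in nopython mode.
    return obj.value(x) + obj.value(x)

call_twice(f, 3.0)  # 18.0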

albop avatar Jan 01 '16 19:01 albop

@oyamad Great! I ran your code with the latest from Numba and got similar output.

It's a shame the __call__ method doesn't work. I hope they fix that. Other than that, it looks like we are close. I'm very keen to have a jitted scalar maximizer, a minimizer and a root finder in QuantEcon.py. I know that for me it's going to speed up a lot of my DP routines.

@albop That's a neat idea.

Actually, my main use case (and perhaps other people's too) is maximizing the same function x -> f(x, alpha) with respect to x at many different values of alpha. So the code @oyamad provided is already very close to what I need.
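A stylized, fully jitted version of that use case, with f hard-wired as in the first workaround above (the objective and grids here are stand-ins):

import numpy as np
from numba import njit

@njit
def f(x, alpha):
    return np.sin(alpha * x) - x ** 2

@njit
def argmax_over_alphas(alphas, xgrid):
    # For each alpha, maximize x -> f(x, alpha) by brute force on xgrid.
    out = np.empty(alphas.shape[0])
    for i in range(alphas.shape[0]):
        best = -np.inf
        for x in xgrid:
            val = f(x, alphas[i])
            if val > best:
                best = val
                out[i] = x
    return out

argmax_over_alphas(np.linspace(0.5, 2.0, 10), np.linspace(-2.0, 2.0, 1001))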

@spencerlyon2 Good to know that these are ready to go on the Julia side.

jstac avatar Jan 01 '16 21:01 jstac

I have implemented an experimental version of a class wrapper here: https://github.com/albop/numba_experiments/blob/master/Using%20a%20class%20to%20define%20functions%20with%20numba.ipynb. It is rather simple and basically creates a new class, as you would do manually. In principle one can write:

@fjit
def f(x): return x**2

then

minimize(f, 0.5)

if minimize has a jit decorator and uses f.value() instead of f(). It would be very easy to extend to several arguments for f. There still seems to be a performance penalty with the jitclass approach, though (3000 times slower than a normal jit, but 10 times faster than pure Python). I'll ask on the Numba mailing list about that. It seems to me that the singleton-class approach (creating a new class for each new function) could be extended to add some kind of general function support to Numba without much refactoring (i.e. without developing type inference further).

albop avatar Jan 06 '16 13:01 albop

@albop Very nice, thanks. I ran your notebook and got similar outcomes.

With a larger N, the differences got larger: the jitted versions do not slow down much, while the jitclass and pure Python versions slow down almost proportionally.

(Just to let you know, I had asked a question about support for __call__ in Numba Issues: https://github.com/numba/numba/pull/1433#issuecomment-169324254.)

oyamad avatar Jan 06 '16 14:01 oyamad

@oyamad: thank you for the heads-up. I had missed the recent updates.

Actually, I remember reading another issue about jitclasses where methods starting with underscores were mentioned, but I can't find it anymore.


albop avatar Jan 06 '16 15:01 albop

With version 0.38.0, "Numba now allows the passing of jitted functions (and containers of jitted functions) as arguments to other jitted functions" (release notes).

See an updated version of @albop's notebook from https://github.com/QuantEcon/QuantEcon.py/issues/216#issuecomment-169320321, with an added case where a jitted function is passed to another jitted function as an argument (cells [10], [16]).

It is about 6x slower than the direct jitted version, but roughly 250x faster than the jitclass version.
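The pattern being timed is essentially the golden-section routine from the factory sketch above, but with the objective now passed as a runtime argument (a sketch assuming Numba >= 0.38; names illustrative):

from numba import njit

@njit
def f(x):
    return -(x - 1.0) ** 2

@njit
def golden_max(func, a, b):
    # func is itself a jitted function, passed as a first-class argument.
    r = 0.5 * (3.0 - 5.0 ** 0.5)
    while b - a > 1e-8:
        c = a + r * (b - a)
        d = b - r * (b - a)
        if func(c) > func(d):
            b = d
        else:
            a = c
    return 0.5 * (a + b)

golden_max(f, 0.0, 2.0)  # ≈ 1.0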

oyamad avatar Apr 28 '18 05:04 oyamad

And here is the update of my previous notebook from https://github.com/QuantEcon/QuantEcon.py/issues/216#issuecomment-168287601.

oyamad avatar Apr 28 '18 05:04 oyamad

Awesome!

Until someone else provides these, it would be nice to have some basic jitted scalar optimization and root finding routines in QE.py...

jstac avatar Apr 28 '18 10:04 jstac

Wow, that's huge news!

I agree about having these routines here. Having them in QE.jl has been useful for me.

sglyon avatar Apr 28 '18 13:04 sglyon

Very cool indeed! And just when I had neglected to look at the Numba release notes. It's a very good idea to provide simple Numba-compatible optimizers. In the short term there doesn't seem to be another home for them, but there are discussions on the SciPy mailing list about using Numba in the future. I actually wonder how far we can go with these jitted function arguments. It would be wonderful if we could make C function pointers and pass them to a C library; I'm thinking here of the multivariate case, which is so far a big bottleneck. Starting with the 1d case and using it is likely to give us some insights.
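(Numba's cfunc decorator already goes part of the way for libraries that accept C callbacks; a minimal sketch, assuming SciPy >= 0.19 for LowLevelCallable:)

from numba import cfunc, types
from scipy import integrate, LowLevelCallable

@cfunc(types.float64(types.float64))
def integrand(x):
    return x ** 2

# integrand.ctypes is a C function pointer, so quad evaluates the
# integrand without Python-level call overhead.
result, error = integrate.quad(LowLevelCallable(integrand.ctypes), 0.0, 1.0)
# result ≈ 1/3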

albop avatar Apr 28 '18 20:04 albop

@sglyon are these the ones you've implemented in QE.jl? https://github.com/QuantEcon/QuantEcon.jl/blob/273781c2a0a4166a87c6f29cee8041c83b9fb30d/src/zeros.jl I might reference these implementations in separate issues for development in QE.py.

mmcky avatar Apr 30 '18 03:04 mmcky

Those are the 1d root finders.

I also have the golden method optimization routine here https://github.com/QuantEcon/QuantEcon.jl/blob/master/src/optimization.jl, but would love to have more.

sglyon avatar Apr 30 '18 12:04 sglyon