Mohamed Tarek
Realistically, we may only need to define a handful of distributions that people use for this to be immediately useful. Then the rest is documentation.
> Any ideas?

Use `eval`. First walk the expression from DiffRules, replacing every function call `f(...)` with `CUDA.cufunc(f)(...)`, then create an expression that uses `DiffRules.@define_diffrule` and `eval` it.
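The expression-walking step can be sketched as follows. This is only a minimal illustration, not the actual implementation: `cufunc` here is a placeholder stub standing in for `CUDA.cufunc`, which maps a CPU function to its GPU-compatible counterpart.

```julia
# Hypothetical stand-in for CUDA.cufunc; the real one maps e.g. sin -> CUDA.cusin.
cufunc(f) = f

# Recursively walk an Expr, wrapping every called function in cufunc(...).
replace_calls(x) = x  # leaves (symbols, literals) pass through unchanged
function replace_calls(ex::Expr)
    if ex.head === :call
        f = ex.args[1]
        rest = map(replace_calls, ex.args[2:end])
        return Expr(:call, :(cufunc($f)), rest...)
    end
    return Expr(ex.head, map(replace_calls, ex.args)...)
end

replace_calls(:(sin(x) * cos(x)))  # => :(cufunc(*)(cufunc(sin)(x), cufunc(cos)(x)))
```

The rewritten expression can then be spliced into a `DiffRules.@define_diffrule` call and `eval`'d, as described above.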
> Do you think it's reasonable to extract the key distribution implementation codes from DistributionsAD.jl into something like DistributionKernels.jl, in which we make sure the low-level primitives are also CUDA friendly....
@torfjelde is this what you want?

```julia
nargs = 1
args = ntuple(i -> gensym(Symbol(:x, i)), nargs)
diffrule = DiffRules.diffrule(:Base, :sin, args...)
diffrule_cu = CUDAExtensions.replace_device_all(diffrule)
@eval begin
    DiffRules.@define_diffrule CUDAExtensions.cusin($args...,) = ...
```
Another challenge we didn't highlight here is generating random numbers in the GPU kernel. This is not trivial because we need to maintain a different RNG for each thread and...
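The per-thread RNG idea can be illustrated on the CPU with a hedged sketch: derive each thread's stream deterministically from a base seed plus the thread index, so streams are reproducible and independent. (This is only an analogy; an actual GPU kernel would need a device-compatible, typically counter-based, RNG rather than `MersenneTwister`.)

```julia
using Random

# Hypothetical helper: give each of `nthreads` threads its own RNG,
# seeded from a shared base seed combined with the thread index.
per_thread_rngs(nthreads, base_seed) =
    [MersenneTwister(hash((base_seed, tid))) for tid in 1:nthreads]

rngs = per_thread_rngs(4, 1234)
samples = [rand(rng) for rng in rngs]  # each thread draws from its own stream
```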
Hi @thowell, thanks for the comment. It looks like CALIPSO follows a similar style to Ipopt in that there is a problem type holding all the functions. https://github.com/JuliaNonconvex/NonconvexIpopt.jl would be a...
Yes, currently I only support linear expressions, since this is the main utility of using JuMP over Nonconvex directly to define objectives or constraints. Extending this is possible, but it...
Cool, thanks for the suggestion. Really excited about the constraint-handling work. Is there a branch I can follow to learn more, or is it still private?
I see. Thanks for the link. If this effort goes further, I would be happy to give it a spin then :)
Thanks for sharing this.