proper support for interpreting opaque closures
fixes #537
Codecov Report
Attention: 4 lines in your changes are missing coverage. Please review.
Comparison is base (`7beca92`) 86.54% compared to head (`ae687a0`) 84.15%. Report is 1 commit behind head on master.
:exclamation: Current head ae687a0 differs from pull request most recent head ca7a028. Consider uploading reports for the commit ca7a028 to get more accurate results
| Files | Patch % | Lines |
|---|---|---|
| src/builtins.jl | 33.33% | 2 Missing :warning: |
| src/construct.jl | 66.66% | 2 Missing :warning: |
Additional details and impacted files
```diff
@@ Coverage Diff @@
##           master     #593      +/-   ##
==========================================
- Coverage   86.54%   84.15%   -2.40%
==========================================
  Files          12       12
  Lines        2542     2404     -138
==========================================
- Hits         2200     2023     -177
- Misses        342      381      +39
```
@oxinabox Any chance you could give this a try some time for your Diffractor use case? It's a bit hard to rely on CI currently due to #580
Sorry, yeah. I was going to, but then I fixed the bug I needed to debug. It's in my TODOs; I get a ping from Slack each day reminding me.
Finally tried this, and I get an error:
```
1|debug> nc
ERROR: optimization of inferred code not implemented
Stacktrace:
[1] (::DAECompiler.JITOpaqueClosure{Tuple{Vector{Float64}, Nothing, Float64}, DAECompiler.var"#163#168"{TransformedIRODESystem, Vector{Int64}, Vector{Int64}, DAECompiler.DebugConfig}, Core.OpaqueClosure{Tuple{Vector{Float64}, Nothing, Float64}, Tuple{Vector{Union{Nothing, Float64}}, Vector{Union{Nothing, Float64}}}}})(args::Tuple{Vector{Float64}, Vector{Float64}, Float64})
...
```
and the debugger exits.
Ok, let's track this down! Do you have a reproducer or is this dependent on internal code?
It's dependent on internal code. Let me see if I can make a reproducer that isn't.
This reproduces it. It could probably be minimized further, though:
```julia
using Debugger

function get_toplevel_mi_from_ir(ir, _module::Module)
    mi = ccall(:jl_new_method_instance_uninit, Ref{Core.MethodInstance}, ())
    mi.specTypes = Tuple{ir.argtypes...}
    mi.def = _module
    return mi
end

function infer_ir!(ir, interp::Core.Compiler.AbstractInterpreter, mi::Core.Compiler.MethodInstance)
    method_info = Core.Compiler.MethodInfo(#=propagate_inbounds=#true, nothing)
    min_world = world = Core.Compiler.get_world_counter(interp)
    max_world = Base.get_world_counter()
    irsv = Core.Compiler.IRInterpretationState(interp, method_info, ir, mi, ir.argtypes, world, min_world, max_world)
    rt = Core.Compiler._ir_abstract_constant_propagation(interp, irsv)
    return ir
end

# add overloads from Core.Compiler into Base
Base.iterate(compact::Core.Compiler.IncrementalCompact, state) = Core.Compiler.iterate(compact, state)
Base.iterate(compact::Core.Compiler.IncrementalCompact) = Core.Compiler.iterate(compact)
Base.getindex(c::Core.Compiler.IncrementalCompact, args...) = Core.Compiler.getindex(c, args...)
Base.setindex!(c::Core.Compiler.IncrementalCompact, args...) = Core.Compiler.setindex!(c, args...)
Base.setindex!(i::Core.Compiler.Instruction, args...) = Core.Compiler.setindex!(i, args...)

###################################
# Demo

function foo(x)
    a = sin(x + pi/2)
    b = cos(x)
    return a - b
end

input_ir = first(only(Base.code_ircode(foo, Tuple{Float64})))
ir = Core.Compiler.copy(input_ir)
empty!(ir.argtypes)
push!(ir.argtypes, Tuple{})  # the function object itself
push!(ir.argtypes, Float64)  # x

compact = Core.Compiler.IncrementalCompact(ir)
for ((_, idx), inst) in compact
    ssa = Core.SSAValue(idx)
    if Meta.isexpr(inst, :invoke)
        # we can insert nodes; let's print the function objects
        Core.Compiler.insert_node_here!(
            compact,
            Core.Compiler.NewInstruction(
                Expr(:call, println, inst.args[2]),
                Any,                            # type
                Core.Compiler.NoCallInfo(),     # call info
                Int32(1),                       # line
                Core.Compiler.IR_FLAG_REFINED   # flag
            )
        )
        compact[ssa][:inst] = Expr(:call, inst.args[2:end]...)
        compact[ssa][:type] = Any
        compact[ssa][:flag] |= Core.Compiler.IR_FLAG_REFINED
    end
end
ir = Core.Compiler.finish(compact)
ir = Core.Compiler.compact!(ir)

interp = Core.Compiler.NativeInterpreter()
mi = get_toplevel_mi_from_ir(ir, @__MODULE__)
ir = infer_ir!(ir, interp, mi)

inline_state = Core.Compiler.InliningState(interp)
ir = Core.Compiler.ssa_inlining_pass!(ir, inline_state, #=propagate_inbounds=#true)
ir = Core.Compiler.compact!(ir)
ir = Core.Compiler.sroa_pass!(ir, inline_state)
ir = Core.Compiler.adce_pass!(ir, inline_state)
ir = Core.Compiler.compact!(ir)

f1 = Core.OpaqueClosure(ir; do_compile=true)
f1(1.2)
@enter f1(1.2)
```
I'm assuming this is the latest Julia nightly?
An 11-day-old nightly, but I assume it reproduces on the latest as well.
Ok, I think I have an idea of what's happening here now. You're putting optimized IR into an opaque closure, but unfortunately JuliaInterpreter can't handle that right now (at least in the general case), because it would have to deal with PhiNodes and such.
So what we'd need is to either implement handling for interpreting post-opt IR (basically continuing https://github.com/JuliaDebug/JuliaInterpreter.jl/pull/491) or an inverse of slot2reg, I think (maybe someone has already implemented that?). For now, we should probably call the compiled OC if we discover the IR has been optimized, so we don't error, but that probably doesn't help you much then.
Not confident on all this stuff though, maybe @aviatesk or @keno have some ideas?
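For anyone following along, here's a rough sketch (my own illustration, not JuliaInterpreter code) of the extra bookkeeping interpreting post-opt IR would need: a `PhiNode` selects its value based on which basic block control flow arrived from, so the interpreter would have to track the previous block alongside the SSA value table:

```julia
# Hypothetical helper, not part of JuliaInterpreter: evaluate a PhiNode given
# the basic block we just came from and the values of already-executed statements.
function eval_phi(phi::Core.PhiNode, prev_block::Int, ssavalues::Vector{Any})
    for (i, edge) in enumerate(phi.edges)
        edge == prev_block || continue
        v = phi.values[i]
        # A phi value may itself be an SSA reference into earlier statements
        return v isa Core.SSAValue ? ssavalues[v.id] : v
    end
    error("PhiNode has no edge for predecessor block $prev_block")
end
```

Slot-based lowered code has no such nodes, which is why the current interpreter loop never needed this.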
We have `ir_to_codeinf!`, which converts `IRCode` back to `CodeInfo`, although it does preserve post-opt IR elements like `PhiNode`. AFAIR there are no utilities like an inverse of slot2reg. We would need to implement a new JuliaInterpreter pass for interpreting them.
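To make the distinction concrete (my own illustration, not from the PR): JuliaInterpreter steps through lowered, slot-based `CodeInfo`, while the reproducer feeds the opaque closure register-based, optimized `IRCode`, where control-flow joins have already become `PhiNode`s:

```julia
f(x) = x > 0 ? x : -x

# Lowered, slot-based CodeInfo: the form JuliaInterpreter knows how to step through.
ci = only(code_lowered(f, (Int,)))

# Optimized, register-based IRCode (plus inferred return type): the form the
# reproducer hands to Core.OpaqueClosure.
ir, rettype = only(Base.code_ircode(f, Tuple{Int}))
```

Converting the latter back with `ir_to_codeinf!` yields a `CodeInfo` that still contains those SSA-only nodes, hence the need for a new interpreting pass.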
Yeah, I think the code is saved as a CodeInfo already, but that's what I was afraid of. Thanks for confirming!
@aviatesk Does this look reasonable to you?
Bump
This looks very reasonable. Thanks so much for tackling this, and sorry for being so late to give a review.