JuliaInterpreter.jl
precompile fails to cache finish_and_return!
While working on #60 I noticed that, despite the appropriate precompile calls, finish_and_return!(::Vector{JuliaStackFrame}, ::JuliaStackFrame) is still being inferred the first time it is called. For @interpret 1+1 it is the only significant source of inference overhead, around 0.25s on the machine I tested. This is not a major priority, but it does merit investigation (possible precompile bug).
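For context, the precompile directives in question look roughly like the following. This is a sketch of the kind of directive involved, not necessarily the exact line in the package's precompile file:

precompile(JuliaInterpreter.finish_and_return!,
           (Vector{JuliaStackFrame}, JuliaStackFrame))

In principle that should cache the inferred code for this signature at package build time, which is what makes the observed first-call inference cost surprising.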
For reference, here's the patch I use for timing inference:
diff --git a/base/compiler/typeinfer.jl b/base/compiler/typeinfer.jl
index bbeea301fe..822875ccad 100644
--- a/base/compiler/typeinfer.jl
+++ b/base/compiler/typeinfer.jl
@@ -535,8 +535,17 @@ function typeinf_code(method::Method, @nospecialize(atypes), sparams::SimpleVect
return (frame.src, widenconst(result.result))
end
-# compute (and cache) an inferred AST and return type
+const inf_timing = []
function typeinf_ext(linfo::MethodInstance, params::Params)
+ tstart = ccall(:jl_clock_now, Float64, ())
+ ret = _typeinf_ext(linfo, params)
+ tstop = ccall(:jl_clock_now, Float64, ())
+ push!(inf_timing, (tstart, linfo, tstop))
+ return ret
+end
+
+# compute (and cache) an inferred AST and return type
+function _typeinf_ext(linfo::MethodInstance, params::Params)
method = linfo.def::Method
for i = 1:2 # test-and-lock-and-test
i == 2 && ccall(:jl_typeinf_begin, Cvoid, ())
I've found this really helps. SnoopCompile essentially times the wrong thing: it times, e.g., LLVM codegen, but since we don't cache native code yet that's largely irrelevant.
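To make the collected data useful, the (tstart, linfo, tstop) tuples can be post-processed to rank the most expensive inference calls. The helper below is a sketch (not part of the patch); in a patched build you would pass Core.Compiler.inf_timing, but here it is demonstrated with dummy data so it runs standalone:

```julia
# Rank entries of an inf_timing-style vector by elapsed inference time.
# Each entry is a (tstart, linfo, tstop) tuple, as pushed by the patch above.
function worst_inference(timing; n = 10)
    elapsed = [(t[3] - t[1], t[2]) for t in timing]   # (duration, linfo)
    sort!(elapsed; by = first, rev = true)            # slowest first
    return elapsed[1:min(n, length(elapsed))]
end

# Dummy data standing in for real MethodInstances:
demo = [(0.0, :fast_method, 0.01), (0.0, :slow_method, 0.25)]
worst_inference(demo)   # :slow_method ranks first, at 0.25s
```

On a patched build, `worst_inference(Core.Compiler.inf_timing)` makes it easy to spot calls like the finish_and_return! one above that should have been precompiled but weren't.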