
Pkg.add("ParallelAccelerator") throws error on Windows 10

Open shivaramkrs opened this issue 8 years ago • 62 comments

I get this build error while installing ParallelAccelerator:

[screenshot]

shivaramkrs avatar Mar 16 '16 19:03 shivaramkrs

We don't support Windows right now. You could try it through cygwin: run build.sh manually there and see what happens, which might give you a clue. If it is something small at that point then maybe we can fix it, but if it turns into one issue after another then it becomes less likely all the problems would be fixed in the short term.

DrTodd13 avatar Mar 16 '16 20:03 DrTodd13

I investigated supporting Windows a while ago. The issue I couldn't solve was that ParallelAccelerator couldn't load the shared library file.

ehsantn avatar Mar 16 '16 20:03 ehsantn

I am using cygwin and getting the following error:

[screenshot]

shivaramkrs avatar Mar 16 '16 20:03 shivaramkrs

Maybe you have an old version of g++ that doesn't support the -std=c++11 option. Can you report what "g++ --version" says? I think you need GCC 4.7 or later. You might later get stuck at the same issue that ehsantn did.

DrTodd13 avatar Mar 16 '16 21:03 DrTodd13

You are right, I had version 4.6.x; I updated it to 5.x. Now I get a new error:

[screenshot]

This may be the error ehsantn was stuck at...

shivaramkrs avatar Mar 16 '16 21:03 shivaramkrs

No, that's not the error that ehsantn was mentioning. Do you have a file libj2carray.so.1.0? If so, then build.sh worked successfully and you can go on to trying to use ParallelAccelerator. If you get some error about ccall not being able to load a library, or something to that effect, then that is probably ehsantn's error. If you see this problem you'll likely see it at driver.jl:321.

DrTodd13 avatar Mar 16 '16 22:03 DrTodd13

Using ParallelAccelerator works. I am getting the -std=c++11 error again when I try to run black-scholes.jl.

[screenshot]

Maybe I am not setting some path to the right g++ compiler.

shivaramkrs avatar Mar 16 '16 22:03 shivaramkrs

If you start a Julia REPL and do `run(`g++ --version`)`, you should be able to see the default g++ version Julia finds. If that isn't the latest one you installed then I'd really suggest tinkering with your paths to make sure the new g++ comes first. If you really get stuck then as a last attempt you can add a full path to g++ on lines 2703 and 2800 of cgen.jl.
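For reference, the check above can be done directly from the REPL (a minimal sketch; which g++ appears depends entirely on the PATH Julia inherited):

```julia
# Ask Julia which g++ it would invoke; prints the version banner of the
# compiler found first on the PATH.
run(`g++ --version`)

# Inspect the PATH Julia sees, to verify the newer g++'s directory comes first.
println(ENV["PATH"])
```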

DrTodd13 avatar Mar 16 '16 22:03 DrTodd13

I executed `run(`g++ --version`)` in the Julia REPL. I changed the environment path to make the cygwin g++ (version 5) compiler come first.

[screenshot]

When I run the code through the REPL, it starts running and after some time Julia exits automatically! How do I get a dump of messages before it exits?

shivaramkrs avatar Mar 17 '16 06:03 shivaramkrs

I don't think I've ever seen or heard of this behavior before. The way that we debug the system is to set PROSPECT_DEV_MODE=1 in the environment. You can do that in the REPL with ENV["PROSPECT_DEV_MODE"]=1. Then do something like this:

using ParallelAccelerator
ParallelAccelerator.set_debug_level(3)

Then try the rest of your program and you should get some messages from ParallelAccelerator. If you do this in the REPL it will be harder to capture the output; you should be able to put this in a Julia file, run Julia on that file, and capture the output to a file. Then you can attach the log here.
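Put together as a script, the debug setup might look like this (a sketch; `myprogram.jl` is a placeholder for the file being debugged):

```julia
# debug_run.jl -- enable ParallelAccelerator debug output, then run the program.
ENV["PROSPECT_DEV_MODE"] = 1

using ParallelAccelerator
ParallelAccelerator.set_debug_level(3)

include("myprogram.jl")   # placeholder for the program being debugged
```

Running `julia debug_run.jl > log.txt 2>&1` would then capture the messages to a file that can be attached here.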

DrTodd13 avatar Mar 17 '16 16:03 DrTodd13

Here is the dump:

points= 10000 domain code = $(Expr(:lambda, Any[:n], Any[Any[Any[symbol("######rest#3905#8266#8304"),Tuple{},0],Any[symbol("GenSym(7)##2"),Int64,18],Any[symbol("GenSym(4)##1"),Int64,18],Any[symbol("##args#8305"),Tuple{Float64,Int64},0],Any[symbol("##dims#8303"),Tuple{Int64},0],Any[symbol("######rest#3905#8266#8302"),Tuple{},0],Any[:y,Array{Float64,1},18],Any[symbol("##dims#8301"),Tuple{Int64},0],Any[:x,Array{Float64,1},18],Any[:n,Int64,0],Any[symbol("##args#8306"),Tuple{Float64,Int64},0]],Any[],Any[Array{Float64,1},Array{Float64,1},BitArray{1},Int64,Int64,Array{Float64,1},Array{Float64,1},Int64,Array{Float64,1},Array{Float64,1},Array{Float64,1},Array{Float64,1},Array{Float64,1},Array{Int64,1},Float64,Float64,Float64,Float64,Float64,Float64,Float64,Float64],Any[]], :(begin # C:\Users\shiva.julia\v0.4\ParallelAccelerator\examples\pi\pi.jl, line 49: GenSym(0) = $(Expr(:alloc, Float64, Any[:(n::Int64)])) GenSym(4) = (Base.arraylen)(GenSym(0))::Int64 GenSym(4)##1 = GenSym(4) GenSym(5) = $(Expr(:mmap!, Any[GenSym(0)], ([:(x1::Float64)];) -> (Any[:(((top(rand!))(Base.Random.GLOBAL_RNG,x1,GenSym(4)##1::Int64,Base.Random.CloseOpen)::Float64,))])::Type[Float64])) GenSym(6) = $(Expr(:mmap, Any[GenSym(5)], ([:(x1::Float64)];) -> (Any[:(((top(mul_float))(x1, 2.0)::Float64,))])::Type[Float64])) x = $(Expr(:mmap, Any[GenSym(6)], ([:(x1::Float64)];) -> (Any[:(((top(sub_float))(x1,1.0)::Float64,))])::Type[Float64])) # C:\Users\shiva.julia\v0.4\ParallelAccelerator\examples\pi\pi.jl, line 50: GenSym(1) = $(Expr(:alloc, Float64, Any[:(n::Int64)])) GenSym(7) = (Base.arraylen)(GenSym(1))::Int64 GenSym(7)##2 = GenSym(7) GenSym(8) = $(Expr(:mmap!, Any[GenSym(1)], ([:(x1::Float64)];) -> (Any[:(((top(rand!))(Base.Random.GLOBAL_RNG,x1,GenSym(7)##2::Int64,Base.Random.CloseOpen)::Float64,))])::Type[Float64])) GenSym(9) = $(Expr(:mmap, Any[GenSym(8)], ([:(x1::Float64)];) -> (Any[:(((top(mul_float))(x1,2.0)::Float64,))])::Type[Float64])) y = $(Expr(:mmap, Any[GenSym(9)], ([:(x1::Float64)];) -> 
(Any[:(((top(sub_float))(x1,1.0)::Float64,))])::Type[Float64])) # C:\Users\shiva.julia\v0.4\ParallelAccelerator\examples\pi\pi.jl, line 51: GenSym(10) = $(Expr(:mmap, Any[:(x::Array{Float64,1})], ([:(x1::Float64)];) -> (Any[:(((top(^))(x1,2)::Float64,))])::Type[Float64])) GenSym(11) = $(Expr(:mmap, Any[:(y::Array{Float64,1})], ([:(x1::Float64)];) -> (Any[:(((top(^))(x1,2)::Float64,))])::Type[Float64])) GenSym(12) = $(Expr(:mmap, Any[GenSym(10),GenSym(11)], ([:(x1::Float64),:(x2::Float64)];) -> (Any[:(((top(add_float))(x1,x2)::Float64,))])::Type[Float64])) GenSym(2) = $(Expr(:mmap, Any[GenSym(12)], ([:(x1::Float64)];) -> (Any[:(((top(lt_float))(x1,1.0)::Bool,))])::Type[Bool])) GenSym(13) = $(Expr(:mmap, Any[GenSym(2)], ([:(x1::Bool)];) -> (Any[:(((top(mul_int))(1,x1)::Int64,))])::Type[Int64])) GenSym(3) = $(Expr(:reduce, 0, GenSym(13), ([:(x1::Int64),:(x2::Int64)];) -> ([:(((top(add_int))(x1,x2)::Int64,))])::Type[Int64])) GenSym(14) = (Core.Intrinsics.sitofp)(Float64,GenSym(3))::Float64 GenSym(15) = (Core.Intrinsics.box)(Float64,GenSym(14))::Float64 GenSym(16) = (Core.Intrinsics.mul_float)(4.0,GenSym(15))::Float64 GenSym(17) = (Core.Intrinsics.sitofp)(Float64,n::Int64)::Float64 GenSym(18) = (Core.Intrinsics.box)(Float64,GenSym(16))::Float64 GenSym(19) = (Core.Intrinsics.box)(Float64,GenSym(17))::Float64 GenSym(20) = (Core.Intrinsics.div_float)(GenSym(18),GenSym(19))::Float64 GenSym(21) = (Core.Intrinsics.box)(Float64,GenSym(20))::Float64 return GenSym(21) end::Float64))) accelerate: DomainIR conversion time = 8.400554217 parallel code = $(Expr(:lambda, Any[:n], 
Any[Any[Any[:parallel_ir_save_array_len_1_1,Int64,18],Any[symbol("GenSym(7)##2"),Int64,18],Any[:parallel_ir_reduction_output_6,Int64,2],Any[symbol("GenSym(4)##1"),Int64,18],Any[symbol("parallel_ir_temp_GenSym(5)_1"),Float64,18],Any[:parallel_ir_temp_parallel_ir_new_array_name_5_1_1,Int64,50],Any[symbol("parallel_ir_temp_GenSym(13)_1"),Int64,18],Any[symbol("parallel_ir_temp_GenSym(1)_2"),Float64,50],Any[symbol("parallel_ir_temp_GenSym(8)_1"),Float64,18],Any[:parallel_ir_temp_y_1,Float64,18],Any[symbol("parallel_ir_temp_GenSym(1)_1"),Float64,18],Any[:parallel_ir_temp_x_1,Float64,18],Any[symbol("parallel_ir_temp_GenSym(8)_2"),Float64,50],Any[symbol("parallel_ir_temp_GenSym(0)_1"),Float64,18],Any[symbol("parallel_ir_temp_GenSym(5)_2"),Float64,50],Any[:n,Int64,0],Any[:parfor_index_1_1,Int64,18],Any[symbol("parallel_ir_temp_GenSym(0)_2"),Float64,50]],Any[],Any[Int64,Int64,Int64,Float64,Float64,Float64,Float64,Float64,Float64,Float64,Float64,Float64,Float64,Float64,Float64,Float64,Bool],Any[]], :(begin GenSym(6) = (Core.Intrinsics.sitofp)(Float64,n::Int64)::Float64 GenSym(8) = (Core.Intrinsics.box)(Float64,GenSym(6))::Float64 GenSym(0) = 0 GenSym(4)##1::Int64 = n GenSym(1) = 0 GenSym(7)##2::Int64 = n parallel_ir_save_array_len_1_1::Int64 = n $(Expr(:parfor,

PIR Body: parallel_ir_temp_GenSym(0)_1::Float64 = 0 parallel_ir_temp_GenSym(0)_2::Float64 = (top(rand!))(Base.Random.GLOBAL_RNG,parallel_ir_temp_GenSym(0)_1::Float64,GenSym(4)##1::Int64,Base.Random.CloseOpen)::Float64 parallel_ir_temp_GenSym(5)_1::Float64 = parallel_ir_temp_GenSym(0)_2::Float64 GenSym(11) = (top(mul_float))(parallel_ir_temp_GenSym(5)_1::Float64,2.0)::Float64 parallel_ir_temp_GenSym(5)_2::Float64 = (top(sub_float))(GenSym(11),1.0)::Float64 parallel_ir_temp_GenSym(1)_1::Float64 = 0 parallel_ir_temp_GenSym(1)_2::Float64 = (top(rand!))(Base.Random.GLOBAL_RNG,parallel_ir_temp_GenSym(1)_1::Float64,GenSym(7)##2::Int64,Base.Random.CloseOpen)::Float64 parallel_ir_temp_GenSym(8)_1::Float64 = parallel_ir_temp_GenSym(1)_2::Float64 GenSym(12) = (top(mul_float))(parallel_ir_temp_GenSym(8)_1::Float64,2.0)::Float64 parallel_ir_temp_GenSym(8)_2::Float64 = (top(sub_float))(GenSym(12),1.0)::Float64 parallel_ir_temp_x_1::Float64 = parallel_ir_temp_GenSym(5)_2::Float64 parallel_ir_temp_y_1::Float64 = parallel_ir_temp_GenSym(8)_2::Float64 GenSym(13) = (top(^))(parallel_ir_temp_y_1::Float64,2)::Float64 GenSym(14) = (top(^))(parallel_ir_temp_x_1::Float64,2)::Float64 GenSym(15) = (top(add_float))(GenSym(14),GenSym(13))::Float64 GenSym(16) = (top(lt_float))(GenSym(15),1.0)::Bool parallel_ir_temp_parallel_ir_new_array_name_5_1_1::Int64 = (top(mul_int))(1,GenSym(16))::Int64 parallel_ir_temp_GenSym(13)_1::Int64 = parallel_ir_temp_parallel_ir_new_array_name_5_1_1::Int64 parallel_ir_reduction_output_6::Int64 = (top(add_int))(parallel_ir_reduction_output_6::Int64,parallel_ir_temp_GenSym(13)_1::Int64)::Int64 Loop Nests: ParallelAccelerator.ParallelIR.PIRLoopNest(:(parfor_index_1_1::Int64),1,:(parallel_ir_save_array_len_1_1::Int64),1) Reductions: ParallelAccelerator.ParallelIR.PIRReduction(:(parallel_ir_reduction_output_6::Int64),0,ParallelAccelerator.ParallelIR.DelayedFunc((anonymous function),Any[Any[:(parallel_ir_reduction_output_6::Int64 = 
(top(add_int))(parallel_ir_reduction_output_6::Int64,parallel_ir_temp_GenSym(13)_1::Int64)::Int64)],:(parallel_ir_reduction_output_6::Int64),:(parallel_ir_temp_GenSym(13)_1::Int64)])) Poststatements: 0 )) GenSym(2) = parallel_ir_reduction_output_6::Int64 GenSym(3) = (Core.Intrinsics.sitofp)(Float64,GenSym(2))::Float64 GenSym(4) = (Core.Intrinsics.box)(Float64,GenSym(3))::Float64 GenSym(5) = (Core.Intrinsics.mul_float)(4.0,GenSym(4))::Float64 GenSym(7) = (Core.Intrinsics.box)(Float64,GenSym(5))::Float64 GenSym(9) = (Core.Intrinsics.div_float)(GenSym(7),GenSym(8))::Float64 GenSym(10) = (Core.Intrinsics.box)(Float64,GenSym(9))::Float64 return GenSym(10) end::Float64))) accelerate: ParallelIR conversion time = 13.011465882 flattened code = $(Expr(:lambda, Any[:n], Any[Any[Any[:parallel_ir_save_array_len_1_1,Int64,18],Any[symbol("GenSym(7)##2"),Int64,18],Any[:parallel_ir_reduction_output_6,Int64,2],Any[symbol("GenSym(4)##1"),Int64,18],Any[symbol("parallel_ir_temp_GenSym(5)_1"),Float64,18],Any[symbol("parallel_ir_temp_GenSym(13)_1"),Int64,18],Any[:parallel_ir_temp_parallel_ir_new_array_name_5_1_1,Int64,50],Any[symbol("parallel_ir_temp_GenSym(1)_2"),Float64,50],Any[symbol("parallel_ir_temp_GenSym(8)_1"),Float64,18],Any[:parallel_ir_temp_y_1,Float64,18],Any[symbol("parallel_ir_temp_GenSym(1)_1"),Float64,18],Any[:parallel_ir_temp_x_1,Float64,18],Any[symbol("parallel_ir_temp_GenSym(8)_2"),Float64,50],Any[symbol("parallel_ir_temp_GenSym(0)_1"),Float64,18],Any[symbol("parallel_ir_temp_GenSym(5)_2"),Float64,50],Any[:n,Int64,0],Any[:parfor_index_1_1,Int64,18],Any[symbol("parallel_ir_temp_GenSym(0)_2"),Float64,50]],Any[],Any[Int64,Int64,Int64,Float64,Float64,Float64,Float64,Float64,Float64,Float64,Float64,Float64,Float64,Float64,Float64,Float64,Bool],Any[]], :(begin GenSym(6) = (Core.Intrinsics.sitofp)(Float64,n::Int64)::Float64 GenSym(8) = (Core.Intrinsics.box)(Float64,GenSym(6))::Float64 GenSym(0) = 0 GenSym(4)##1::Int64 = n GenSym(1) = 0 GenSym(7)##2::Int64 = n 
parallel_ir_save_array_len_1_1::Int64 = n $(Expr(:parfor_start, ParallelAccelerator.ParallelIR.PIRParForStartEnd([ParallelAccelerator.ParallelIR.PIRLoopNest(:(parfor_index_1_1::Int64),1,:(parallel_ir_save_array_len_1_1::Int64),1)],[ParallelAccelerator.ParallelIR.PIRReduction(:(parallel_ir_reduction_output_6::Int64),0,ParallelAccelerator.ParallelIR.DelayedFunc((anonymous function),Any[Any[:(parallel_ir_reduction_output_6::Int64 = (top(add_int))(parallel_ir_reduction_output_6::Int64,parallel_ir_temp_GenSym(13)_1::Int64)::Int64)],:(parallel_ir_reduction_output_6::Int64),:(parallel_ir_temp_GenSym(13)_1::Int64)]))],nothing,Union{GenSym,Symbol,SymbolNode}[GenSym(11),GenSym(14),symbol("parallel_ir_temp_GenSym(5)_1"),symbol("parallel_ir_temp_GenSym(13)_1"),:parallel_ir_temp_parallel_ir_new_array_name_5_1_1,symbol("parallel_ir_temp_GenSym(1)_2"),symbol("parallel_ir_temp_GenSym(8)_1"),:parallel_ir_temp_y_1,GenSym(16),symbol("parallel_ir_temp_GenSym(1)_1"),:parallel_ir_temp_x_1,GenSym(13),symbol("parallel_ir_temp_GenSym(8)_2"),symbol("parallel_ir_temp_GenSym(0)_1"),symbol("parallel_ir_temp_GenSym(5)_2"),GenSym(15),GenSym(12),symbol("parallel_ir_temp_GenSym(0)_2")]))) parallel_ir_temp_GenSym(0)_1::Float64 = 0 parallel_ir_temp_GenSym(0)_2::Float64 = (top(rand!))(Base.Random.GLOBAL_RNG,parallel_ir_temp_GenSym(0)_1::Float64,GenSym(4)##1::Int64,Base.Random.CloseOpen)::Float64 parallel_ir_temp_GenSym(5)_1::Float64 = parallel_ir_temp_GenSym(0)_2::Float64 GenSym(11) = (top(mul_float))(parallel_ir_temp_GenSym(5)_1::Float64,2.0)::Float64 parallel_ir_temp_GenSym(5)_2::Float64 = (top(sub_float))(GenSym(11),1.0)::Float64 parallel_ir_temp_GenSym(1)_1::Float64 = 0 parallel_ir_temp_GenSym(1)_2::Float64 = (top(rand!))(Base.Random.GLOBAL_RNG,parallel_ir_temp_GenSym(1)_1::Float64,GenSym(7)##2::Int64,Base.Random.CloseOpen)::Float64 parallel_ir_temp_GenSym(8)_1::Float64 = parallel_ir_temp_GenSym(1)_2::Float64 GenSym(12) = (top(mul_float))(parallel_ir_temp_GenSym(8)_1::Float64,2.0)::Float64 
parallel_ir_temp_GenSym(8)_2::Float64 = (top(sub_float))(GenSym(12),1.0)::Float64 parallel_ir_temp_x_1::Float64 = parallel_ir_temp_GenSym(5)_2::Float64 parallel_ir_temp_y_1::Float64 = parallel_ir_temp_GenSym(8)_2::Float64 GenSym(13) = (top(^))(parallel_ir_temp_y_1::Float64,2)::Float64 GenSym(14) = (top(^))(parallel_ir_temp_x_1::Float64,2)::Float64 GenSym(15) = (top(add_float))(GenSym(14),GenSym(13))::Float64 GenSym(16) = (top(lt_float))(GenSym(15),1.0)::Bool parallel_ir_temp_parallel_ir_new_array_name_5_1_1::Int64 = (top(mul_int))(1,GenSym(16))::Int64 parallel_ir_temp_GenSym(13)_1::Int64 = parallel_ir_temp_parallel_ir_new_array_name_5_1_1::Int64 parallel_ir_reduction_output_6::Int64 = (top(add_int))(parallel_ir_reduction_output_6::Int64,parallel_ir_temp_GenSym(13)_1::Int64)::Int64 $(Expr(:parfor_end, ParallelAccelerator.ParallelIR.PIRParForStartEnd([ParallelAccelerator.ParallelIR.PIRLoopNest(:(parfor_index_1_1::Int64),1,:(parallel_ir_save_array_len_1_1::Int64),1)],[ParallelAccelerator.ParallelIR.PIRReduction(:(parallel_ir_reduction_output_6::Int64),0,ParallelAccelerator.ParallelIR.DelayedFunc((anonymous function),Any[Any[:(parallel_ir_reduction_output_6::Int64 = (top(add_int))(parallel_ir_reduction_output_6::Int64,parallel_ir_temp_GenSym(13)_1::Int64)::Int64)],:(parallel_ir_reduction_output_6::Int64),:(parallel_ir_temp_GenSym(13)_1::Int64)]))],nothing,Union{GenSym,Symbol,SymbolNode}[GenSym(11),GenSym(14),symbol("parallel_ir_temp_GenSym(5)_1"),symbol("parallel_ir_temp_GenSym(13)_1"),:parallel_ir_temp_parallel_ir_new_array_name_5_1_1,symbol("parallel_ir_temp_GenSym(1)_2"),symbol("parallel_ir_temp_GenSym(8)_1"),:parallel_ir_temp_y_1,GenSym(16),symbol("parallel_ir_temp_GenSym(1)_1"),:parallel_ir_temp_x_1,GenSym(13),symbol("parallel_ir_temp_GenSym(8)_2"),symbol("parallel_ir_temp_GenSym(0)_1"),symbol("parallel_ir_temp_GenSym(5)_2"),GenSym(15),GenSym(12),symbol("parallel_ir_temp_GenSym(0)_2")]))) GenSym(2) = parallel_ir_reduction_output_6::Int64 GenSym(3) = 
(Core.Intrinsics.sitofp)(Float64,GenSym(2))::Float64 GenSym(4) = (Core.Intrinsics.box)(Float64,GenSym(3))::Float64 GenSym(5) = (Core.Intrinsics.mul_float)(4.0,GenSym(4))::Float64 GenSym(7) = (Core.Intrinsics.box)(Float64,GenSym(5))::Float64 GenSym(9) = (Core.Intrinsics.div_float)(GenSym(7),GenSym(8))::Float64 GenSym(10) = (Core.Intrinsics.box)(Float64,GenSym(9))::Float64 return GenSym(10) end::Float64))) array_types_in_sig from signature = Dict{DataType,Int64}() array_types_in_sig including returns = Dict{DataType,Int64}() ParallelAccelerator.accelerate for _ppcalcPip7907_j2c_proxy C File = C:\Users\shiva.julia\v0.4\ParallelAccelerator\src../deps/generated/cgen_output0.cpp dyn_lib = C:\Users\shiva.julia\v0.4\ParallelAccelerator\src../deps/generated/libcgen_output0.so.1.0 convert_to_ccall_typ typ = Int64 typeof(typ) = DataType convert_to_ccall_typ typ = Int64 typeof(typ) = DataType new_tuple.args = Any[Int64] sig_ndims = Any[0] modified_sig = (Int64,) sig_dims = Any[0] len? 11 signature = (Int64,) -> [(Float64,false)] modified_args = Array{Any,1} Any[symbol("##n#12624")] extra_sig = Type[Ptr{Float64}] ret_arg_exps = Any[:((top(pointer))((top(arrayref))(ret_args::Array{Any,1},1)))] tuple_sig_expr = (Int32,Int64,Ptr{Float64}) accelerate: accelerate conversion time = 3.614398157

Please submit a bug report with steps to reproduce this fault, and any error messages that follow (in their entirety). Thanks. Exception: EXCEPTION_ACCESS_VIOLATION at 0x64f2539e -- jl_egal at C:\Users\shiva\AppData\Local\Julia-0.4.3\bin\libjulia.dll (unknown line) jl_egal at C:\Users\shiva\AppData\Local\Julia-0.4.3\bin\libjulia.dll (unknown line) unknown function (ip: 0000000064F047D8) jl_apply_type at C:\Users\shiva\AppData\Local\Julia-0.4.3\bin\libjulia.dll (unknown line) jl_inst_concrete_tupletype_v at C:\Users\shiva\AppData\Local\Julia-0.4.3\bin\libjulia.dll (unknown line) jl_init_types at C:\Users\shiva\AppData\Local\Julia-0.4.3\bin\libjulia.dll (unknown line) jl_apply_generic at C:\Users\shiva\AppData\Local\Julia-0.4.3\bin\libjulia.dll (unknown line) typeinf_uncached at inference.jl:1662 jlcall_typeinf_uncached_151 at (unknown line) jl_apply_generic at C:\Users\shiva\AppData\Local\Julia-0.4.3\bin\libjulia.dll (unknown line) typeinf at inference.jl:1339 jlcall_typeinf_147 at (unknown line) jl_apply_generic at C:\Users\shiva\AppData\Local\Julia-0.4.3\bin\libjulia.dll (unknown line) typeinf_ext at inference.jl:1283 jl_apply_generic at C:\Users\shiva\AppData\Local\Julia-0.4.3\bin\libjulia.dll (unknown line) jl_method_cache_insert at C:\Users\shiva\AppData\Local\Julia-0.4.3\bin\libjulia.dll (unknown line) jl_method_cache_insert at C:\Users\shiva\AppData\Local\Julia-0.4.3\bin\libjulia.dll (unknown line) jl_apply_generic at C:\Users\shiva\AppData\Local\Julia-0.4.3\bin\libjulia.dll (unknown line) toq at util.jl:80 main at C:\Users\shiva.julia\v0.4\ParallelAccelerator\examples\pi\pi.jl:79 jlcall_main_2448 at (unknown line) jl_apply_generic at C:\Users\shiva\AppData\Local\Julia-0.4.3\bin\libjulia.dll (unknown line) jl_interpret_toplevel_expr at C:\Users\shiva\AppData\Local\Julia-0.4.3\bin\libjulia.dll (unknown line) jl_interpret_toplevel_thunk_with at C:\Users\shiva\AppData\Local\Julia-0.4.3\bin\libjulia.dll (unknown line) jl_eval_with_compiler_p at 
C:\Users\shiva\AppData\Local\Julia-0.4.3\bin\libjulia.dll (unknown line) jl_parse_eval_all at C:\Users\shiva\AppData\Local\Julia-0.4.3\bin\libjulia.dll (unknown line) jl_load_file_string at C:\Users\shiva\AppData\Local\Julia-0.4.3\bin\libjulia.dll (unknown line) include_string at loading.jl:266 jl_apply_generic at C:\Users\shiva\AppData\Local\Julia-0.4.3\bin\libjulia.dll (unknown line) jl_interpret_toplevel_expr at C:\Users\shiva\AppData\Local\Julia-0.4.3\bin\libjulia.dll (unknown line) jl_interpret_toplevel_thunk_with at C:\Users\shiva\AppData\Local\Julia-0.4.3\bin\libjulia.dll (unknown line) jl_eval_with_compiler_p at C:\Users\shiva\AppData\Local\Julia-0.4.3\bin\libjulia.dll (unknown line) jl_f_tuple at C:\Users\shiva\AppData\Local\Julia-0.4.3\bin\libjulia.dll (unknown line) include_string at C:\Users\shiva.julia\v0.4\CodeTools\src\eval.jl:32 jl_apply_generic at C:\Users\shiva\AppData\Local\Julia-0.4.3\bin\libjulia.dll (unknown line) anonymous at C:\Users\shiva.julia\v0.4\Atom\src\eval.jl:84 withpath at C:\Users\shiva.julia\v0.4\Requires\src\require.jl:37 jl_apply_generic at C:\Users\shiva\AppData\Local\Julia-0.4.3\bin\libjulia.dll (unknown line) withpath at C:\Users\shiva.julia\v0.4\Atom\src\eval.jl:53 jl_apply_generic at C:\Users\shiva\AppData\Local\Julia-0.4.3\bin\libjulia.dll (unknown line) anonymous at C:\Users\shiva.julia\v0.4\Atom\src\eval.jl:83 jl_unprotect_stack at C:\Users\shiva\AppData\Local\Julia-0.4.3\bin\libjulia.dll (unknown line)

Please submit a bug report with steps to reproduce this fault, and any error messages that follow (in their entirety). Thanks. Exception: EXCEPTION_ACCESS_VIOLATION at 0x7ffa853c729b -- RtlVirtualUnwind at C:\WINDOWS\SYSTEM32\ntdll.dll (unknown line)

Please submit a bug report with steps to reproduce this fault, and any error messages that follow (in their entirety). Thanks. Exception: EXCEPTION_ACCESS_VIOLATION at 0x7ffa853c729b -- RtlVirtualUnwind at C:\WINDOWS\SYSTEM32\ntdll.dll (unknown line)

Please submit a bug report with steps to reproduce this fault, and any error messages that follow (in their entirety). Thanks. Exception: EXCEPTION_ACCESS_VIOLATION at 0x7ffa853c729b -- RtlVirtualUnwind at C:\WINDOWS\SYSTEM32\ntdll.dll (unknown line) Julia has stopped: 3221225477, null

shivaramkrs avatar Mar 17 '16 18:03 shivaramkrs

I think you may be at the same point ehsantn mentioned.

I tried a cpp file with:

#include <cstdio>

extern "C" int f1(int a, int b) { printf("a = %d, b = %d\n", a, b); return 7; }

Then:

g++ -O3 -fopenmp -std=c++11 -g -fpic -c -o f1.o f1.cpp
g++ -g -shared -fopenmp -std=c++11 -o f1.dll f1.o -lm

Then in the Julia REPL:

ret = ccall(("f1","f1.dll"), Cint, (Cint, Cint), 3, 4)

This doesn't work:

ERROR: error compiling anonymous: could not load library "f1.dll"
The specified module could not be found.

Then I found issue #6260. So, I ran Dependency Walker on f1.dll and there were 4 modules that weren't found:

CYGGCC_S-1.DLL
CYGGOMP-1.DLL
CYGSTDC++-6.DLL
CYGWIN1.DLL

These files exist in the cygwin/bin directory. I added c:\cygwin\bin to my user-specific PATH variable, re-ran Dependency Walker, and all the modules were then found. However, it still fails in Julia.

I'll ask around about this issue. If you tinker and find a solution to this simple example let me know and we can apply it to the real system.

DrTodd13 avatar Mar 17 '16 20:03 DrTodd13

The problem with my previous example is that you can't compile the DLL with the regular cygwin g++ toolchain, because doing so produces a dependency on the cygwin runtime in the DLL. You have to compile the DLL with the mingw version of g++; then the previous example works.

I installed 64-bit mingw compiler through cygwin setup and here's the command.

/bin/x86_64-w64-mingw32-g++ -g -shared -std=c++11 -o f1.dll -lm f1.cpp

I don't have time right now to check if this works with the whole system. You could install mingw, put an alias from g++ to the above executable name in some new directory, put that directory at the head of your path, and then try the system again. An alternative, again, is to change the name of g++ in cgen.jl to point to this new location.

To potentially fix this problem correctly in the real system, maybe we add a @windows_only section that specifies x86_64-w64-mingw32-g++ as the compiler name; that way the build would fail explicitly if the correct mingw wasn't installed.
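Such a fix might look roughly like this (a hypothetical sketch in Julia 0.4 syntax; the variable name is an assumption, not the actual identifier in cgen.jl):

```julia
# Hypothetical: pick the MinGW cross-compiler on Windows so the resulting
# DLL does not depend on the cygwin runtime.
compilerName = "g++"                                   # default on Linux/OS X
@windows_only compilerName = "x86_64-w64-mingw32-g++"  # MinGW cross-compiler

# Later compile commands would interpolate the variable instead of
# hard-coding "g++", so a missing MinGW install fails explicitly, e.g.:
# run(`$compilerName -O3 -std=c++11 -g -fpic -c -o out.o in.cpp`)
```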

A pull request with such a fix would certainly be appreciated.

DrTodd13 avatar Mar 18 '16 16:03 DrTodd13

You should use i686-w64-mingw32 for 32-bit.

The WinRPM package would allow automating the toolchain installation to avoid manual setup steps or having to install cygwin. If you tried that, what issues did it have?

tkelman avatar Mar 18 '16 17:03 tkelman

The WinRPM package would allow automating the toolchain installation to avoid manual setup steps or having to install cygwin. If you tried that, what issues did it have?

I am not sure if I am right, but I think WinRPM installs g++ 4.6.3 (that's the one present by default). ParallelAccelerator needs 4.7+.

shivaramkrs avatar Mar 18 '16 18:03 shivaramkrs

I get a lot of deprecation warnings on "using WinRPM". Then, I do WinRPM.update() and get:

INFO: Downloading https://cache.e.ip.saba.us/http://download.opensuse.org/repositories/windows:/mingw:/win32/openSUSE_13.1/repodata/repomd.xml WARNING: Base.Uint16 is deprecated, use UInt16 instead. WARNING: Base.Uint16 is deprecated, use UInt16 instead. WARNING: Base.Uint8 is deprecated, use UInt8 instead. WARNING: Base.Uint8 is deprecated, use UInt8 instead. WARNING: Unknown download failure, error code: 2148270105 WARNING: Retry 1/5 downloading: https://cache.e.ip.saba.us/http://download.opensuse.org/repositories/windows:/mingw:/win32/openSUSE_13.1/repodata/repomd.xml WARNING: Unknown download failure, error code: 2148270105 WARNING: Retry 2/5 downloading: https://cache.e.ip.saba.us/http://download.opensuse.org/repositories/windows:/mingw:/win32/openSUSE_13.1/repodata/repomd.xml WARNING: Unknown download failure, error code: 2148270105 WARNING: Retry 3/5 downloading: https://cache.e.ip.saba.us/http://download.opensuse.org/repositories/windows:/mingw:/win32/openSUSE_13.1/repodata/repomd.xml WARNING: Unknown download failure, error code: 2148270105 WARNING: Retry 4/5 downloading: https://cache.e.ip.saba.us/http://download.opensuse.org/repositories/windows:/mingw:/win32/openSUSE_13.1/repodata/repomd.xml WARNING: Unknown download failure, error code: 2148270105 WARNING: Retry 5/5 downloading: https://cache.e.ip.saba.us/http://download.opensuse.org/repositories/windows:/mingw:/win32/openSUSE_13.1/repodata/repomd.xml WARNING: received error 0 while downloading https://cache.e.ip.saba.us/http://download.opensuse.org/repositories/windows:/mingw:/win32/openSUSE_13.1/repodata/repomd.xml INFO: Downloading https://cache.e.ip.saba.us/http://download.opensuse.org/repositories/windows:/mingw:/win64/openSUSE_13.1/repodata/repomd.xml WARNING: Unknown download failure, error code: 2148270105 WARNING: Retry 1/5 downloading: https://cache.e.ip.saba.us/http://download.opensuse.org/repositories/windows:/mingw:/win64/openSUSE_13.1/repodata/repomd.xml WARNING: Unknown download failure, 
error code: 2148270105 WARNING: Retry 2/5 downloading: https://cache.e.ip.saba.us/http://download.opensuse.org/repositories/windows:/mingw:/win64/openSUSE_13.1/repodata/repomd.xml WARNING: Unknown download failure, error code: 2148270105 WARNING: Retry 3/5 downloading: https://cache.e.ip.saba.us/http://download.opensuse.org/repositories/windows:/mingw:/win64/openSUSE_13.1/repodata/repomd.xml WARNING: Unknown download failure, error code: 2148270105 WARNING: Retry 4/5 downloading: https://cache.e.ip.saba.us/http://download.opensuse.org/repositories/windows:/mingw:/win64/openSUSE_13.1/repodata/repomd.xml WARNING: Unknown download failure, error code: 2148270105 WARNING: Retry 5/5 downloading: https://cache.e.ip.saba.us/http://download.opensuse.org/repositories/windows:/mingw:/win64/openSUSE_13.1/repodata/repomd.xml WARNING: received error 0 while downloading https://cache.e.ip.saba.us/http://download.opensuse.org/repositories/windows:/mingw:/win64/openSUSE_13.1/repodata/repomd.xml

Then trying a search for mingw in WinRPM I get the following:

julia> WinRPM.search("mingw")
WinRPM Package Set:

If indeed WinRPM installs gcc 4.6.3 then that is another complication.

DrTodd13 avatar Mar 18 '16 19:03 DrTodd13

WinRPM fails to download xml file behind the firewall. I get this problem in my office. I build it in another machine and then copy the folder.

shivaramkrs avatar Mar 18 '16 19:03 shivaramkrs

You're using an outdated version of WinRPM; are your packages up to date? The gcc version on WinRPM is 5.3.0. You should be able to add it via WinRPM.install("gcc"), though firewalls can cause some issues.

tkelman avatar Mar 18 '16 23:03 tkelman

I just ran into this issue on Windows 10. Then I used WinRPM to install the gcc-c++ package.

julia> import WinRPM

julia> WinRPM.install("gcc-c++";yes=true)
INFO: Packages to install: gcc-c++
INFO: Downloading: gcc-c++
INFO: Extracting: gcc-c++
INFO: Complete

julia> WinRPM.select(WinRPM.lookup("gcc-c++"),"gcc-c++")
WinRPM Package:
  Name: gcc-c++
  Summary: MinGW Windows compiler for C++
  Version: 6.1.0 (rel 4.1)
  Arch: mingw64
  URL: http://www.mingw.org/
  License: GPL-2.0+
  Description: MinGW Windows compiler for C++

julia> gpp=Pkg.dir("WinRPM","deps","usr","x86_64-w64-mingw32","sys-root","mingw","bin","g++");

julia> run(`$gpp --version`)
g++ (GCC) 6.1.0
Copyright (C) 2016 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

So, it seems WinRPM is providing g++ version 6.1.0. Is there a way to automate the compilation process on Windows by doing everything inside build.jl instead of build.sh? Or otherwise provide instructions on how to compile manually on Windows? Thanks!

ibadr avatar Jun 09 '16 19:06 ibadr

For anyone stumbling upon this: I have managed to install ParallelAccelerator on Windows 10. The basic steps are these:

  1. In Julia REPL, do Pkg.add("ParallelAccelerator"). This will throw an error because Windows doesn't know how to handle .sh files.
  2. Download and install MSYS2 from here: https://msys2.github.io/
  3. Install gcc. I use TDM gcc (http://tdm-gcc.tdragon.net/download), but presumably you could also install gcc using MSYS2's package manager, pacman. If you use the former method, make sure that g++ is in your path.
  4. Open an MSYS2 console and do:

     cd /c/path/to/julia/ParallelAccelerator/deps/
     ./build.sh

Pkg.test("ParallelAccelerator") will issue a warning that matrix operations will be slow because it cannot find a BLAS installation. I haven't yet managed to fix this.

If desired I could prepare a PR to add this to the readme.

s-broda avatar Jun 23 '16 14:06 s-broda

Rewriting build.sh into a build.jl and using the automatic WinRPM installation of gcc would be a better and more automatic short-term solution, though maybe before too much longer the need for a compiler here will be eliminated. Not sure how far away that is.
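A build.jl along those lines might look something like this (a sketch only, untested; the C++ source file name and flags are assumptions based on the build.sh discussion above):

```julia
# deps/build.jl -- hypothetical cross-platform build script (Julia 0.4 syntax).
@windows_only begin
    import WinRPM
    WinRPM.install("gcc-c++"; yes=true)   # MinGW g++ via WinRPM
    gpp = Pkg.dir("WinRPM", "deps", "usr", "x86_64-w64-mingw32",
                  "sys-root", "mingw", "bin", "g++")
    # Source file name is an assumption; compile as build.sh does.
    run(`$gpp -O3 -std=c++11 -g -fpic -c -o j2carray.o j2c-array.cpp`)
    run(`$gpp -g -shared -std=c++11 -o libj2carray.so.1.0 j2carray.o`)
end
@unix_only run(`./build.sh`)
```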

tkelman avatar Jun 23 '16 17:06 tkelman

I used @s-broda's method and was able to build under Windows 10. (I installed gcc with pacman). However, when I go to test the package, in addition to the BLAS not found errors I see at every test something like the following:

[screenshot]

Does anyone know what might be going on here?

amellnik avatar Jul 04 '16 06:07 amellnik

@s-broda I'd welcome an addition to our documentation on how to get ParallelAccelerator running on Windows, although I suggest adding it to our docs rather than the README.

Regarding BLAS, these are warnings and not errors. Some things will be slower without a BLAS library. Our docs could probably stand to have a better explanation of what a BLAS library is and why it is that ParallelAccelerator wants one. It might also be good to provide a way to suppress the BLAS-related warnings.

lkuper avatar Jul 06 '16 00:07 lkuper

@amellnik Do you mean you're seeing these errors when you run Pkg.test("ParallelAccelerator"), or something else?

lkuper avatar Jul 06 '16 00:07 lkuper

@lkuper This is when running Pkg.test("ParallelAccelerator") on 0.4.5. Looking more closely at the error log I think I missed the more important segment:

[screenshot of error log]

It looks like the paths to the generated files are being correctly generated here but are being shelled out without the escapes being removed properly somewhere around here. All the back-tick command constructions seem correct there, but I'll keep looking.
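For what it's worth, Julia's backtick commands quote interpolated arguments automatically, which is the behavior being relied on here (a minimal illustration with a made-up path):

```julia
# Interpolating a string into a backtick command keeps it a single argument;
# spaces in the path need no manual escaping.
path = "C:\\path with spaces\\cgen_output0.cpp"   # hypothetical path
cmd = `g++ -c $path`
println(cmd)   # shows the command with the path as one (quoted) argument
```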

On 0.5.0 master the tests die immediately with

julia> Pkg.test("ParallelAccelerator")
INFO: Testing ParallelAccelerator
WARNING: Base.LambdaStaticData is deprecated, use LambdaInfo instead.
  likely near C:\Users\Alex\.julia\v0.5\CompilerTools\src\ast_walk.jl:480
WARNING: Base.LambdaStaticData is deprecated, use LambdaInfo instead.
  likely near C:\Users\Alex\.julia\v0.5\CompilerTools\src\ast_walk.jl:481
ERROR: LoadError: LoadError: LoadError: LoadError: UndefVarError: GenSym not defined
 in include_from_node1(::String) at .\loading.jl:426 (repeats 2 times)
 in eval(::Module, ::Any) at .\boot.jl:234
 in require(::Symbol) at .\loading.jl:357
 in include_from_node1(::String) at .\loading.jl:426
 in eval(::Module, ::Any) at .\boot.jl:234
 in require(::Symbol) at .\loading.jl:357
 in include_from_node1(::String) at .\loading.jl:426
 in process_options(::Base.JLOptions) at .\client.jl:266
 in _start() at .\client.jl:322
while loading C:\Users\Alex\.julia\v0.5\CompilerTools\src\ast_walk.jl, in expression starting on line 423
while loading C:\Users\Alex\.julia\v0.5\CompilerTools\src\CompilerTools.jl, in expression starting on line 32
while loading C:\Users\Alex\.julia\v0.5\ParallelAccelerator\src\ParallelAccelerator.jl, in expression starting on line 30
while loading C:\Users\Alex\.julia\v0.5\ParallelAccelerator\test\runtests.jl, in expression starting on line 26
=========================[ ERROR: ParallelAccelerator ]=========================

failed process: Process(`'C:\Users\Alex\AppData\Local\Julia-0.5.0-dev\bin\julia' -Cx86-64 '-JC:\Users\Alex\AppData\Local\Julia-0.5.0-dev\lib\julia\sys.dll' --compile=yes --depwarn=yes --check-bounds=yes --code-coverage=none --color=yes 'C:\Users\Alex\.julia\v0.5\ParallelAccelerator\test\runtests.jl'`, ProcessExited(1)) [1]

================================================================================
ERROR: ParallelAccelerator had test errors
 in #test#51(::Bool, ::Function, ::Array{AbstractString,1}) at .\pkg\entry.jl:720
 in (::Base.Pkg.Entry.#kw##test)(::Array{Any,1}, ::Base.Pkg.Entry.#test, ::Array{AbstractString,1}) at .\<missing>:0
 in (::Base.Pkg.Dir.##2#3{Array{Any,1},Base.Pkg.Entry.#test,Tuple{Array{AbstractString,1}}})() at .\pkg\dir.jl:31
 in cd(::Base.Pkg.Dir.##2#3{Array{Any,1},Base.Pkg.Entry.#test,Tuple{Array{AbstractString,1}}}, ::String) at .\file.jl:48
 in #cd#1(::Array{Any,1}, ::Function, ::Function, ::Array{AbstractString,1}, ::Vararg{Array{AbstractString,1},N}) at .\pkg\dir.jl:31
 in (::Base.Pkg.Dir.#kw##cd)(::Array{Any,1}, ::Base.Pkg.Dir.#cd, ::Function, ::Array{AbstractString,1}, ::Vararg{Array{AbstractString,1},N}) at .\<missing>:0
 in #test#3(::Bool, ::Function, ::String, ::Vararg{String,N}) at .\pkg\pkg.jl:255
 in test(::String, ::Vararg{String,N}) at .\pkg\pkg.jl:255
 in eval(::Module, ::Any) at .\boot.jl:234
 in macro expansion at .\REPL.jl:92 [inlined]
 in (::Base.REPL.##1#2{Base.REPL.REPLBackend})() at .\event.jl:46

Edit: I realized that I was looking at the last tagged release of ParallelAccelerator. On master, the tests die immediately with:

image

amellnik avatar Jul 06 '16 01:07 amellnik

@amellnik Yeah, for the moment ParallelAccelerator will only work on 0.4.x versions of Julia, although keep an eye out for 0.5 compatibility once a Julia 0.5 release candidate is available. In the error message you posted, I think the multiple backslashes are a red herring. I'm suspicious of "libgomp.spec: No such file or directory", though. I would suggest searching for that error message together with mingw.

lkuper avatar Jul 06 '16 01:07 lkuper

@lkuper OK, I've submitted a PR.

@tkelman I agree that this is not a very elegant solution, but I'm afraid I'm not very good with bash scripts. Perhaps someone else could tackle this.

This issue could probably be closed.

s-broda avatar Jul 06 '16 14:07 s-broda

I would recommend against using any version of gcc other than the exact version used to compile Julia itself (which is a Cygwin cross-compile) or the WinRPM packages (which are cross-compiled from openSUSE).

tkelman avatar Jul 06 '16 16:07 tkelman

@s-broda Could your approach still work if using the Julia WinRPM approach to get gcc, as recommended upthread?

@tkelman Why is it important to use the exact same version of gcc that was used to compile Julia?

lkuper avatar Jul 06 '16 21:07 lkuper

There are many different variations on how gcc can be configured in terms of exception handling, threading, and many other details. When you pick some random gcc build, there's no expectation of ABI compatibility when loading libraries built by different compilers into the same process, especially if C++ or threading is involved. It's much like building a library on one Linux distribution and using it on a different distro: it can work if you're very careful and follow some strict constraints, but otherwise it's going to be error-prone.

tkelman avatar Jul 06 '16 22:07 tkelman