mlc-llm
[Bug] free(): invalid pointer, Aborted (core dumped)
All finished, 163 total shards committed, record saved to dist/open-llama-plus-7b_0515-q4f32_0/params/ndarray-cache.json
Save a cached module to dist/open-llama-plus-7b_0515-q4f32_0/mod_cache_before_build_cuda.pkl.
Dump static shape TIR to dist/open-llama-plus-7b_0515-q4f32_0/debug/mod_tir_static.py
Dump dynamic shape TIR to dist/open-llama-plus-7b_0515-q4f32_0/debug/mod_tir_dynamic.py
- Dispatch to pre-scheduled op: fused_NT_matmul4_divide2_maximum1_minimum1
- Dispatch to pre-scheduled op: decode7
- Dispatch to pre-scheduled op: fused_decode2_fused_matmul9_silu1
- Dispatch to pre-scheduled op: fused_NT_matmul2_multiply
- Dispatch to pre-scheduled op: fused_decode2_fused_matmul9_multiply1
- Dispatch to pre-scheduled op: fused_NT_matmul3_add
- Dispatch to pre-scheduled op: matmul4
- Dispatch to pre-scheduled op: decode6
- Dispatch to pre-scheduled op: NT_matmul
- Dispatch to pre-scheduled op: fused_decode1_matmul7
- Dispatch to pre-scheduled op: rms_norm
- Dispatch to pre-scheduled op: fused_decode1_fused_matmul7_add1
- Dispatch to pre-scheduled op: fused_NT_matmul2_silu
- Dispatch to pre-scheduled op: softmax1
- Dispatch to pre-scheduled op: decode5
- Dispatch to pre-scheduled op: fused_NT_matmul1_divide1_maximum_minimum
- Dispatch to pre-scheduled op: fused_NT_matmul_add
- Dispatch to pre-scheduled op: softmax2
- Dispatch to pre-scheduled op: matmul8
- Dispatch to pre-scheduled op: fused_decode3_fused_matmul10_add1
Finish exporting to dist/open-llama-plus-7b_0515-q4f32_0/open-llama-plus-7b_0515-q4f32_0-cuda.so
Finish exporting chat config to dist/open-llama-plus-7b_0515-q4f32_0/params/mlc-chat-config.json
free(): invalid pointer
Aborted (core dumped)
Why do I get this error every time I compile a model?
See my reply in the other issue: https://github.com/mlc-ai/mlc-llm/issues/272#issuecomment-1569002433
This is an LLVM symbol conflict between PyTorch and TVM: both are linked against LLVM, but against different versions, so loading them into the same process causes clashing symbols and this crash.
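One quick way to check whether two different LLVM builds end up loaded into the same Python process is to scan the process's memory maps for LLVM shared libraries. This is a minimal sketch assuming Linux (it reads `/proc/self/maps`); the `loaded_libs` helper and the commented-out imports are illustrative, not part of MLC or TVM:

```python
# Sketch (Linux-only): list loaded shared objects whose path contains a
# given substring. Run after importing both torch and tvm; seeing two
# distinct libLLVM paths suggests the symbol clash described above.
# Note: statically bundled LLVM will not show up here as a separate file.
def loaded_libs(substring="LLVM"):
    """Return sorted paths of mapped files whose path contains `substring`."""
    paths = set()
    with open("/proc/self/maps") as maps:
        for line in maps:
            fields = line.split()
            # The mapped file path, when present, is the last field.
            if fields and fields[-1].startswith("/") and substring in fields[-1]:
                paths.add(fields[-1])
    return sorted(paths)

if __name__ == "__main__":
    # import torch   # uncomment in your environment
    # import tvm     # uncomment in your environment
    print(loaded_libs())
```

If both libraries bundle their own LLVM statically, nothing shows up in the maps, but the conflict can still occur; in that case the usual workarounds are keeping the import order consistent or avoiding importing PyTorch and TVM in the same process.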