
[Bug] free(): invalid pointer, Aborted (core dumped)

Open wangxu569 opened this issue 2 years ago • 3 comments

All finished, 163 total shards committed, record saved to dist/open-llama-plus-7b_0515-q4f32_0/params/ndarray-cache.json
Save a cached module to dist/open-llama-plus-7b_0515-q4f32_0/mod_cache_before_build_cuda.pkl.
Dump static shape TIR to dist/open-llama-plus-7b_0515-q4f32_0/debug/mod_tir_static.py
Dump dynamic shape TIR to dist/open-llama-plus-7b_0515-q4f32_0/debug/mod_tir_dynamic.py

  • Dispatch to pre-scheduled op: fused_NT_matmul4_divide2_maximum1_minimum1
  • Dispatch to pre-scheduled op: decode7
  • Dispatch to pre-scheduled op: fused_decode2_fused_matmul9_silu1
  • Dispatch to pre-scheduled op: fused_NT_matmul2_multiply
  • Dispatch to pre-scheduled op: fused_decode2_fused_matmul9_multiply1
  • Dispatch to pre-scheduled op: fused_NT_matmul3_add
  • Dispatch to pre-scheduled op: matmul4
  • Dispatch to pre-scheduled op: decode6
  • Dispatch to pre-scheduled op: NT_matmul
  • Dispatch to pre-scheduled op: fused_decode1_matmul7
  • Dispatch to pre-scheduled op: rms_norm
  • Dispatch to pre-scheduled op: fused_decode1_fused_matmul7_add1
  • Dispatch to pre-scheduled op: fused_NT_matmul2_silu
  • Dispatch to pre-scheduled op: softmax1
  • Dispatch to pre-scheduled op: decode5
  • Dispatch to pre-scheduled op: fused_NT_matmul1_divide1_maximum_minimum
  • Dispatch to pre-scheduled op: fused_NT_matmul_add
  • Dispatch to pre-scheduled op: softmax2
  • Dispatch to pre-scheduled op: matmul8
  • Dispatch to pre-scheduled op: fused_decode3_fused_matmul10_add1

Finish exporting to dist/open-llama-plus-7b_0515-q4f32_0/open-llama-plus-7b_0515-q4f32_0-cuda.so
Finish exporting chat config to dist/open-llama-plus-7b_0515-q4f32_0/params/mlc-chat-config.json
free(): invalid pointer
Aborted (core dumped)

— wangxu569, Jun 01 '23

Why do I run into this problem with every model I compile?

— wangxu569, Jun 01 '23

See my reply in the other issue: https://github.com/mlc-ai/mlc-llm/issues/272#issuecomment-1569002433

— yzh119, Jun 01 '23

This is a conflict between the LLVM symbols in PyTorch and TVM: the two libraries are linked against different versions of LLVM, and the clash triggers the crash.

— junrushao, Jun 01 '23
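Editor's note: for anyone debugging a similar crash, below is a minimal, hypothetical sketch (not taken from this thread) of how one might check whether PyTorch and TVM map conflicting LLVM shared libraries into the same process on Linux. The import order and library-name matching are assumptions; many PyTorch builds link LLVM statically, in which case nothing LLVM-related will appear in /proc/self/maps even though the conflicting symbols are still present.

```python
# Hedged diagnostic sketch, assuming Linux with /proc/self/maps available.
# It lists LLVM-related shared objects mapped into the current process
# before and after importing tvm, to hint at whether two different LLVM
# builds are being pulled in. Illustrative only; not a fix from this issue.

def llvm_libs_loaded():
    """Return the set of LLVM-related shared objects currently mapped into this process."""
    libs = set()
    with open("/proc/self/maps") as maps:
        for line in maps:
            parts = line.split()
            # The pathname column is only present for file-backed mappings.
            if len(parts) >= 6 and "LLVM" in parts[-1]:
                libs.add(parts[-1])
    return libs

import torch  # torch may load, or statically embed, its own LLVM
after_torch = llvm_libs_loaded()

import tvm    # TVM is typically built against a specific LLVM version
after_tvm = llvm_libs_loaded()

print("LLVM libraries mapped after importing torch:",
      sorted(after_torch) or "none (possibly statically linked)")
print("LLVM libraries newly mapped by tvm:",
      sorted(after_tvm - after_torch) or "none")
```

If two distinct libLLVM versions show up (or one side embeds LLVM statically while the other loads a different shared one), that is consistent with the symbol-conflict explanation above; the usual remedies discussed in the linked issue are along the lines of aligning the LLVM versions the two packages are built against.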