runtime error
Traceback (most recent call last):
File "tools/k2/icefall/egs/aishell/ASR/local/compile_hlg.py", line 166, in
gdb --args python /path/to/your/code.py
(You can use `gdb` to debug the code. Please consider compiling
a debug version of k2.).
If you are unable to fix it, please open an issue at:
https://github.com/k2-fsa/k2/issues/new
Compile decoding graph HLG.pt succeeded
@wwfcnu Please post all the logs.
/usr/local/python3.7.10/lib/python3.7/site-packages/torchaudio/_internal/module_utils.py:99: UserWarning: Failed to import soundfile. 'soundfile' backend
is not available.
warnings.warn("Failed to import soundfile. 'soundfile' backend is not available.")
Compile lexicon L.pt L_disambig.pt succeeded
/tmp/pip-install-w9q6xu_b/kaldilm_80542c8aeb934a60b2460688478f9031/kaldilm/csrc/arpa_file_parser.cc:void kaldilm::ArpaFileParser::Read(std::istream&):79
[I] Reading \data\ section.
/tmp/pip-install-w9q6xu_b/kaldilm_80542c8aeb934a60b2460688478f9031/kaldilm/csrc/arpa_file_parser.cc:void kaldilm::ArpaFileParser::Read(std::istream&):140
[I] Reading \1-grams: section.
/tmp/pip-install-w9q6xu_b/kaldilm_80542c8aeb934a60b2460688478f9031/kaldilm/csrc/arpa_file_parser.cc:void kaldilm::ArpaFileParser::Read(std::istream&):213
[W] line 7 [-8.6712055
/tmp/pip-install-w9q6xu_b/kaldilm_80542c8aeb934a60b2460688478f9031/kaldilm/csrc/arpa_file_parser.cc:void kaldilm::ArpaFileParser::Read(std::istream&):140
[I] Reading \2-grams: section.
/tmp/pip-install-w9q6xu_b/kaldilm_80542c8aeb934a60b2460688478f9031/kaldilm/csrc/arpa_file_parser.cc:void kaldilm::ArpaFileParser::Read(std::istream&):140
[I] Reading \3-grams: section.
/usr/local/python3.7.10/lib/python3.7/site-packages/torchaudio/_internal/module_utils.py:99: UserWarning: Failed to import soundfile. 'soundfile' backend
is not available.
warnings.warn("Failed to import soundfile. 'soundfile' backend is not available.")
2022-12-08 10:04:06,974 INFO [compile_hlg.py:154] Processing /home/wangweifei/data/local/hlg
2022-12-08 10:04:07,840 INFO [lexicon.py:168] Loading pre-compiled /home/wangweifei/data/local/hlg/Linv.pt
2022-12-08 10:04:07,910 INFO [compile_hlg.py:73] Building ctc_topo. max_token_id: 4232
2022-12-08 10:04:08,680 INFO [compile_hlg.py:78] Loading pre-compiled G_3_gram
2022-12-08 10:07:15,573 INFO [compile_hlg.py:93] Intersecting L and G
[F] /home/runner/work/k2/k2/k2/csrc/array.h:501:void k2::Array1<T>::Init(k2::ContextPtr, int32_t, k2::Dtype) [with T = k2::Arc; k2::ContextPtr = std::shared_ptr<k2::Context>; int32_t = int] Check failed: size >= 0 (-2043020552 vs. 0) Array size MUST be greater than or equal to 0, given: -2043020552
[ Stack-Trace: ]
/usr/local/python3.7.10/lib/python3.7/site-packages/k2/lib/libk2_log.so(k2::internal::GetStackTrace()+0x47) [0x7f7d83be9f37]
/usr/local/python3.7.10/lib/python3.7/site-packages/k2/lib/libk2context.so(k2::internal::Logger::~Logger()+0x5a) [0x7f7d83f1eaaa]
/usr/local/python3.7.10/lib/python3.7/site-packages/k2/lib/libk2context.so(k2::FsaVecCreator::Init(std::vector<k2host::Array2Size
/usr/local/python3.7.10/lib/python3.7/site-packages/k2/lib/libk2context.so(k2::Intersect(k2::Ragged<k2::Arc>&, int, k2::Ragged<k2::Arc>&, int, bool, k2::Ragged<k2::Arc>, k2::Array1
gdb --args python /path/to/your/code.py
(You can use `gdb` to debug the code. Please consider compiling
a debug version of k2.).
If you are unable to fix it, please open an issue at:
https://github.com/k2-fsa/k2/issues/new
Compile decoding graph HLG.pt succeeded
[F] /home/runner/work/k2/k2/k2/csrc/array.h:501:void k2::Array1<T>::Init(k2::ContextPtr, int32_t, k2::Dtype) [with T = k2::Arc; k2::ContextPtr = std::shared_ptr<k2::Context>; int32_t = int] Check failed: size >= 0 (-2043020552 vs. 0) Array size MUST be greater than or equal to 0, given: -2043020552
How large is your G? (e.g., its file size).
Also, are you using the latest k2?
`python3 -m k2.version`
will give you the version information.
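As a side note (my own illustration, not from the thread): a negative size like -2043020552 in that check failure is the signature of signed 32-bit overflow. If the intersection of L and G produces more than 2^31 - 1 arcs, the arc count wraps to a negative value when stored in an `int32_t`, which is exactly what the failed check reports. A minimal sketch; the pre-wrap arc count below is an assumed value chosen to reproduce the number in the log:

```python
import ctypes

# Hypothetical true arc count: any value of the form
# -2043020552 + k * 2**32 wraps to the same negative number.
true_size = -2043020552 + 2**32   # 2251946744, just above 2**31 - 1

# Reinterpret as a signed 32-bit integer, as C++'s int32_t would.
wrapped = ctypes.c_int32(true_size).value
print(wrapped)  # -2043020552
```

This is why shrinking G (pruning the LM) or using a build path that avoids materializing the huge intermediate FSA fixes the crash.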
Collecting environment information...
k2 version: 1.21
Build type: Release
Git SHA1: f1ae355b619db05cda7bd4305628df4ab4c900e1
Git date: Sun Oct 30 16:37:07 2022
Cuda used to build k2: 11.3
cuDNN used to build k2: 8.2.0
Python version used to build k2: 3.7
OS used to build k2: Ubuntu 18.04.6 LTS
CMake version: 3.24.2
GCC version: 7.5.0
CMAKE_CUDA_FLAGS: -Wno-deprecated-gpu-targets -lineinfo --expt-extended-lambda -use_fast_math -Xptxas=-w --expt-extended-lambda -gencode arch=compute_35,code=sm_35 -lineinfo --expt-extended-lambda -use_fast_math -Xptxas=-w --expt-extended-lambda -gencode arch=compute_50,code=sm_50 -lineinfo --expt-extended-lambda -use_fast_math -Xptxas=-w --expt-extended-lambda -gencode arch=compute_60,code=sm_60 -lineinfo --expt-extended-lambda -use_fast_math -Xptxas=-w --expt-extended-lambda -gencode arch=compute_61,code=sm_61 -lineinfo --expt-extended-lambda -use_fast_math -Xptxas=-w --expt-extended-lambda -gencode arch=compute_70,code=sm_70 -lineinfo --expt-extended-lambda -use_fast_math -Xptxas=-w --expt-extended-lambda -gencode arch=compute_75,code=sm_75 -lineinfo --expt-extended-lambda -use_fast_math -Xptxas=-w --expt-extended-lambda -gencode arch=compute_80,code=sm_80 -lineinfo --expt-extended-lambda -use_fast_math -Xptxas=-w --expt-extended-lambda -gencode arch=compute_86,code=sm_86 -DONNX_NAMESPACE=onnx_c2 -gencode arch=compute_35,code=sm_35 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_60,code=sm_60 -gencode arch=compute_61,code=sm_61 -gencode arch=compute_70,code=sm_70 -gencode arch=compute_75,code=sm_75 -gencode arch=compute_80,code=sm_80 -gencode arch=compute_86,code=sm_86 -gencode arch=compute_86,code=compute_86 -Xcudafe --diag_suppress=cc_clobber_ignored,--diag_suppress=integer_sign_change,--diag_suppress=useless_using_declaration,--diag_suppress=set_but_not_used,--diag_suppress=field_without_dll_interface,--diag_suppress=base_class_has_different_dll_interface,--diag_suppress=dll_interface_conflict_none_assumed,--diag_suppress=dll_interface_conflict_dllexport_assumed,--diag_suppress=implicit_return_from_non_void_function,--diag_suppress=unsigned_compare_with_zero,--diag_suppress=declared_but_not_referenced,--diag_suppress=bad_friend_decl --expt-relaxed-constexpr --expt-extended-lambda -D_GLIBCXX_USE_CXX11_ABI=0 --compiler-options -Wall --compiler-options -Wno-strict-overflow --compiler-options -Wno-unknown-pragmas
CMAKE_CXX_FLAGS: -D_GLIBCXX_USE_CXX11_ABI=0 -Wno-unused-variable -Wno-strict-overflow
PyTorch version used to build k2: 1.12.1+cu113
PyTorch is using Cuda: 11.3
NVTX enabled: True
With CUDA: True
Disable debug: True
Sync kernels : False
Disable checks: False
Max cpu memory allocate: 214748364800 bytes (or 200.0 GB)
k2 abort: False
file: /usr/local/python3.7.10/lib/python3.7/site-packages/k2/version/version.py
_k2.file: /usr/local/python3.7.10/lib/python3.7/site-packages/_k2.cpython-37m-x86_64-linux-gnu.so
Could you try the following pull request? https://github.com/k2-fsa/icefall/pull/606
It uses OpenFst to build HLG and is able to support a much larger G.
Git date: Sun Oct 30 16:37:07 2022
Also, your k2 is somewhat outdated. Please update it to the latest one.
Okay, I'll give it a try.
Also, your G is so large that I am afraid the resulting HLG won't fit into the GPU memory.
My lm.arpa is 50 GB, so do I need to prune the ARPA file?
Yes, I think so.
You may use https://github.com/kaldi-asr/kaldi/pull/4594 to prune it.
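To decide how aggressively to prune, it can help to check how many n-grams of each order the ARPA file declares in its `\data\` header — the same section kaldilm logs as "Reading \data\ section" above. A small sketch; the helper name `arpa_ngram_counts` is mine, not from any tool mentioned in this thread:

```python
def arpa_ngram_counts(lines):
    """Parse the \\data\\ header of an ARPA LM and return {order: count}."""
    counts = {}
    in_data = False
    for raw in lines:
        line = raw.strip()
        if line == "\\data\\":
            in_data = True                 # header starts here
        elif in_data and line.startswith("ngram "):
            order, _, n = line[len("ngram "):].partition("=")
            counts[int(order)] = int(n)
        elif in_data and line:
            break                          # first n-gram section ends the header
    return counts

# Example on an in-memory header (counts are made up for illustration);
# for a real file, pass open("lm.arpa", errors="replace") instead.
header = ["\\data\\", "ngram 1=4232", "ngram 2=2000000", "ngram 3=50000000",
          "", "\\1-grams:"]
print(arpa_ngram_counts(header))  # {1: 4232, 2: 2000000, 3: 50000000}
```

Reading only the header avoids scanning the whole 50 GB file, and comparing the counts before and after pruning shows how much the model actually shrank.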
I can't find the file at the path egs/librispeech/ASR/local/compile_hlg_using_openfst.py.
k2-fsa/icefall#606
I just merged https://github.com/k2-fsa/icefall/pull/606
Please check again. You should find it.
I just merged k2-fsa/icefall#606
Please check again. You should find it.
Found it.