Node unexpectedly terminates
Hello, we're facing an issue where our node unexpectedly stops working: it keeps restarting and crashing with the following log.
2024-01-15 01:37:05 [🌗] discovered: 12D3KooW9r9vDpoupJMG3Jwoaf7x7LuR14Yzotp8ver6ndkcC6PL /ip4/IP/tcp/30334/ws
2024-01-15 01:37:05 [Relaychain] discovered: 12D3KooWNjDZzpYksrJxAQDzpfKyXVwWkR2a7nUCbfVM4jm7nhfK /ip4/IP/tcp/30333/ws
2024-01-15 01:37:05 Accepting new connection 1/100000
2024-01-15 01:37:05 [Relaychain] Sending fatal alert BadCertificate
2024-01-15 01:37:06 [Relaychain] 🔍 Discovered new external address for our node: /ip4/IP/tcp/30334/ws/p2p/12D3KooW9r9vDpoupJMG3Jwoaf7x7LuR14Yzotp8ver6ndkcC6PL
2024-01-15 01:37:06 [🌗] 🔍 Discovered new external address for our node: /ip4/IP/tcp/30333/ws/p2p/12D3KooWNjDZzpYksrJxAQDzpfKyXVwWkR2a7nUCbfVM4jm7nhfK
====================
Version: 0.35.0-7131ef902c0
0: sp_panic_handler::set::{{closure}}
1: <alloc::boxed::Box<F,A> as core::ops::function::Fn<Args>>::call
at rustc/8ede3aae28fe6e4d52b38157d7bfe0d3bceef225/library/alloc/src/boxed.rs:1999:9
std::panicking::rust_panic_with_hook
at rustc/8ede3aae28fe6e4d52b38157d7bfe0d3bceef225/library/std/src/panicking.rs:709:13
2: std::panicking::begin_panic_handler::{{closure}}
at rustc/8ede3aae28fe6e4d52b38157d7bfe0d3bceef225/library/std/src/panicking.rs:597:13
3: std::sys_common::backtrace::__rust_end_short_backtrace
at rustc/8ede3aae28fe6e4d52b38157d7bfe0d3bceef225/library/std/src/sys_common/backtrace.rs:151:18
4: rust_begin_unwind
at rustc/8ede3aae28fe6e4d52b38157d7bfe0d3bceef225/library/std/src/panicking.rs:593:5
5: core::panicking::panic_fmt
at rustc/8ede3aae28fe6e4d52b38157d7bfe0d3bceef225/library/core/src/panicking.rs:67:14
6: core::result::unwrap_failed
at rustc/8ede3aae28fe6e4d52b38157d7bfe0d3bceef225/library/core/src/result.rs:1651:5
7: <sp_state_machine::ext::Ext<H,B> as sp_externalities::Externalities>::storage
8: sp_io::storage::get_version_1
9: sp_io::storage::ExtStorageGetVersion1::call
10: <F as wasmtime::func::IntoFunc<T,(wasmtime::func::Caller<T>,A1),R>>::into_func::wasm_to_host_shim
11: <unknown>
12: <unknown>
13: <unknown>
14: <unknown>
15: <unknown>
16: <unknown>
17: <unknown>
18: <unknown>
19: <unknown>
20: <unknown>
21: <unknown>
22: <unknown>
23: <unknown>
24: <unknown>
25: <unknown>
26: <unknown>
27: <unknown>
28: <unknown>
29: <unknown>
30: <unknown>
31: wasmtime_runtime::traphandlers::catch_traps::call_closure
32: wasmtime_setjmp
33: sc_executor_wasmtime::runtime::WasmtimeInstance::call_impl
34: sc_executor_common::wasm_runtime::WasmInstance::call_export
35: sc_executor::executor::WasmExecutor<H>::with_instance::{{closure}}
36: <sc_executor::executor::NativeElseWasmExecutor<D> as sp_core::traits::CodeExecutor>::call
37: sp_state_machine::execution::StateMachine<B,H,Exec>::execute
38: <sc_service::client::client::Client<B,E,Block,RA> as sp_api::CallApiAt<Block>>::call_api_at
39: <moonbeam_runtime::RuntimeApiImpl<__SrApiBlock__,RuntimeApiImplCall> as sp_api::Core<__SrApiBlock__>>::__runtime_api_internal_call_api_at
40: <&sc_service::client::client::Client<B,E,Block,RA> as sc_consensus::block_import::BlockImport<Block>>::import_block::{{closure}}
41: <alloc::sync::Arc<T> as sc_consensus::block_import::BlockImport<B>>::import_block::{{closure}}
42: <fc_consensus::FrontierBlockImport<B,I,C> as sc_consensus::block_import::BlockImport<B>>::import_block::{{closure}}
43: <cumulus_client_consensus_common::ParachainBlockImport<Block,BI,BE> as sc_consensus::block_import::BlockImport<Block>>::import_block::{{closure}}
44: <nimbus_consensus::import_queue::NimbusBlockImport<I> as sc_consensus::block_import::BlockImport<Block>>::import_block::{{closure}}
45: <alloc::boxed::Box<dyn sc_consensus::block_import::BlockImport<B>+Error = sp_consensus::error::Error+core::marker::Send+core::marker::Sync> as sc_consensus::block_import::BlockImport<B>>::import_block::{{closure}}
46: sc_consensus::import_queue::basic_queue::BlockImportWorker<B>::new::{{closure}}
47: <futures_util::future::future::Map<Fut,F> as core::future::future::Future>::poll
48: <tracing_futures::Instrumented<T> as core::future::future::Future>::poll
49: tokio::runtime::task::raw::poll
50: std::sys_common::backtrace::__rust_begin_short_backtrace
51: core::ops::function::FnOnce::call_once{{vtable.shim}}
52: <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once
at rustc/8ede3aae28fe6e4d52b38157d7bfe0d3bceef225/library/alloc/src/boxed.rs:1985:9
<alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once
at rustc/8ede3aae28fe6e4d52b38157d7bfe0d3bceef225/library/alloc/src/boxed.rs:1985:9
std::sys::unix::thread::Thread::new::thread_start
at rustc/8ede3aae28fe6e4d52b38157d7bfe0d3bceef225/library/std/src/sys/unix/thread.rs:108:17
53: <unknown>
54: clone
Thread 'tokio-runtime-worker' panicked at 'Externalities not allowed to fail within runtime: "Trie lookup error: Database missing expected key: 0x7c1d596a9b31c0986d4b1a303a13c606fd05779e044292d524dc35358c569bed"', /root/.cargo/git/checkouts/polkadot-sdk-38b703c7469a7d1e/542721d/substrate/primitives/state-machine/src/ext.rs:176
This is a bug. Please report it at:
https://github.com/PureStake/moonbeam/issues/new
2024-01-15 01:37:06 [Relaychain] Report 12D3KooWJ5zCF4Tkx9k6XsF9cUofaPnZ1rJyxFRTE1Xe582TQcDR: -2147483648 to -2147483648. Reason: Genesis mismatch. Banned, disconnecting.
image: moonbeam-tracing:v0.35.0-2700-50aa
config:
"--chain=moonbeam",
"--base-path=/data",
"--name=gummybear",
"--unsafe-rpc-external",
"--rpc-cors=*",
"--rpc-port=8545",
"--rpc-max-response-size=100",
"--rpc-max-connections=100000",
"--ethapi-max-permits=1000",
"--execution=wasm",
"--wasm-execution=compiled",
"--state-pruning=archive",
"--max-runtime-instances=32",
"--wasm-runtime-overrides=/moonbeam/moonbeam-substitutes-tracing",
"--trie-cache-size=0",
"--ethapi=debug,trace,txpool",
"--ethapi-trace-max-count=10000",
"--runtime-cache-size=64",
"--db-cache=30000"
Hello @blakelukas, this looks like a database corruption issue. Are you syncing from scratch? At what block number did you get the error?
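For context, the panic in your log originates in the state machine's externalities layer: a runtime storage read goes to the backing trie database, and when an expected trie node is missing (here, key 0x7c1d596a...569bed), the read error is treated as unrecoverable and the block-import thread aborts. The sketch below is only a simplified illustration of that failure shape, not the actual Substrate code; all type and function names in it are hypothetical stand-ins.

```rust
// Minimal illustrative sketch (hypothetical types, not the real sp_state_machine code)
// of why a missing trie node in the database surfaces as a panic rather than an
// error the node can recover from.

#[derive(Debug)]
enum BackendError {
    // Mirrors "Trie lookup error: Database missing expected key: 0x..."
    MissingTrieNode(String),
}

trait Backend {
    fn storage(&self, key: &[u8]) -> Result<Option<Vec<u8>>, BackendError>;
}

// A backend whose on-disk trie is incomplete, e.g. after a disk fault.
struct CorruptDb;

impl Backend for CorruptDb {
    fn storage(&self, _key: &[u8]) -> Result<Option<Vec<u8>>, BackendError> {
        Err(BackendError::MissingTrieNode(
            "Database missing expected key: \
             0x7c1d596a9b31c0986d4b1a303a13c606fd05779e044292d524dc35358c569bed"
                .to_string(),
        ))
    }
}

// Mirrors the shape of the externalities storage accessor: the backend Result is
// expect()-ed, because the runtime cannot meaningfully handle a corrupted or
// incomplete database mid-execution, so the importing thread panics instead.
fn ext_storage(backend: &dyn Backend, key: &[u8]) -> Option<Vec<u8>> {
    backend
        .storage(key)
        .expect("Externalities not allowed to fail within runtime")
}

fn main() {
    let backend = CorruptDb;
    // Panics with a message of the same shape as the one in the log above.
    let _value = ext_storage(&backend, b"some-storage-key");
}
```

Since the runtime cannot recover from an incomplete trie, the usual way out is exactly what a corrupted-disk situation calls for: replace the faulty storage and resync from scratch (or restore the database from a known-good copy).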
I think it was a problem with the disk. It has been replaced, and we are now syncing from scratch; everything is going fine so far, so no assistance is needed at the moment. Thank you!