panic when trying to sync with Acala / Karura
$ RUST_BACKTRACE=1 cargo run -- run --chain=./bin/acala-dist.json
Finished dev [optimized + debuginfo] target(s) in 0.24s
Running `target/debug/full-node run --chain=./bin/acala-dist.json`
path : "./bin/polkadot.json"
2022-06-09T03:40:58.216213+00:00 INFO full_node::run: successful-initialization local_peer_id=12D3KooWDrz9ZxfAfEb6owhmtkk4ak11TXAHa3LAnmrQ3yq6w1wu database_is_new=true finalized_block_hash=0xfc41…a64c finalized_block_number=0
2022-06-09T03:41:07.211219+00:00 WARN full_node::run::consensus_service: failed-block-verification hash=0x2543…1ebb height=3 error=Error while executing Wasm VM: Trap: MemoryAccessOutOfBounds ] #1190311 (🔗 24) (🌐 24)
""
2022-06-09T03:41:07.880869+00:00 WARN full_node::run::consensus_service: failed-block-verification hash=0x417d…efa9 height=1 error=Error while executing Wasm VM: Trap: ElemUninitialized ] #1190312 (🔗 24) (🌐 24)
""
2022-06-09T03:41:08.172604+00:00 WARN full_node::run::consensus_service: failed-block-verification hash=0x417d…efa9 height=1 error=Error while executing Wasm VM: Trap: ElemUninitialized ] #1190312 (🔗 24) (🌐 24)
""
thread 'tasks-pool-3' panicked at 'called `Option::unwrap()` on a `None` value', /Users/xiliangchen/projects/smoldot/src/sync/optimistic.rs:598:18 ] #1190312 (🔗 26) (🌐 26)
stack backtrace:
0: rust_begin_unwind
at /rustc/b17226fcc11587fed612631be372a5b4cb89988a/library/std/src/panicking.rs:584:5
1: core::panicking::panic_fmt
at /rustc/b17226fcc11587fed612631be372a5b4cb89988a/library/core/src/panicking.rs:143:14
2: core::panicking::panic
at /rustc/b17226fcc11587fed612631be372a5b4cb89988a/library/core/src/panicking.rs:48:5
3: core::option::Option<T>::unwrap
at /rustc/b17226fcc11587fed612631be372a5b4cb89988a/library/core/src/option.rs:752:21
4: smoldot::sync::optimistic::OptimisticSync<TRq,TSrc,TBl>::finish_request_failed
at ./src/sync/optimistic.rs:595:13
5: smoldot::sync::all::AllSync<TRq,TSrc,TBl>::blocks_request_response
at ./src/sync/all.rs:1254:25
6: full_node::run::consensus_service::SyncBackground::run::{{closure}}::{{closure}}
at ./bin/full-node/src/run/consensus_service.rs:513:53
7: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
at /rustc/b17226fcc11587fed612631be372a5b4cb89988a/library/core/src/future/mod.rs:84:19
8: full_node::run::consensus_service::SyncBackground::run::{{closure}}
at ./bin/full-node/src/run/consensus_service.rs:342:5
9: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
at /rustc/b17226fcc11587fed612631be372a5b4cb89988a/library/core/src/future/mod.rs:84:19
10: <tracing::instrument::Instrumented<T> as core::future::future::Future>::poll
at /Users/xiliangchen/.cargo/registry/src/github.com-1ecc6299db9ec823/tracing-0.1.34/src/instrument.rs:272:9
11: <futures_task::future_obj::LocalFutureObj<T> as core::future::future::Future>::poll
at /Users/xiliangchen/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-task-0.3.21/src/future_obj.rs:84:18
12: <futures_task::future_obj::FutureObj<T> as core::future::future::Future>::poll
at /Users/xiliangchen/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-task-0.3.21/src/future_obj.rs:127:9
13: futures_util::future::future::FutureExt::poll_unpin
at /Users/xiliangchen/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-util-0.3.21/src/future/future/mod.rs:562:9
14: futures_executor::thread_pool::Task::run
at /Users/xiliangchen/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-executor-0.3.21/src/thread_pool.rs:322:27
15: futures_executor::thread_pool::PoolState::work
at /Users/xiliangchen/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-executor-0.3.21/src/thread_pool.rs:154:39
16: futures_executor::thread_pool::ThreadPoolBuilder::create::{{closure}}
at /Users/xiliangchen/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-executor-0.3.21/src/thread_pool.rs:284:42
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
[1] 78122 abort RUST_BACKTRACE=1 cargo run -- run --chain=./bin/acala-dist.json
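For context, the `Option::unwrap()` panic above is the generic pattern of completing a request whose id is no longer tracked. A minimal Rust sketch of that failure mode and a defensive alternative (the names here are hypothetical and are not smoldot's actual API):

```rust
use std::collections::HashMap;

// Hypothetical request tracker -- illustrative only, NOT smoldot's code.
struct Tracker {
    in_flight: HashMap<u64, &'static str>,
}

impl Tracker {
    // Mirrors the panicking pattern: `.unwrap()` aborts the thread with
    // "called `Option::unwrap()` on a `None` value" if `id` is unknown.
    #[allow(dead_code)]
    fn finish_unchecked(&mut self, id: u64) -> &'static str {
        self.in_flight.remove(&id).unwrap()
    }

    // Defensive variant: surfaces the unknown-id case to the caller.
    fn finish(&mut self, id: u64) -> Option<&'static str> {
        self.in_flight.remove(&id)
    }
}

fn main() {
    let mut t = Tracker {
        in_flight: HashMap::from([(1, "blocks-request")]),
    };
    assert_eq!(t.finish(1), Some("blocks-request"));
    // A second completion of the same request id returns None instead of
    // panicking; `t.finish_unchecked(1)` would abort the thread here.
    assert_eq!(t.finish(1), None);
}
```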
A different error with Karura:
$ RUST_BACKTRACE=1 cargo run -- run --chain=./bin/karura-dist.json
Finished dev [optimized + debuginfo] target(s) in 0.20s
Running `target/debug/full-node run --chain=./bin/karura-dist.json`
path : "./bin/kusama.json"
2022-06-09T03:45:02.948036+00:00 INFO full_node::run: successful-initialization local_peer_id=12D3KooWQvM4NhQL4ezFC89SnBaLMoH2x12jvgdjj57m8NaUC6vE database_is_new=false finalized_block_hash=0xbaf5…126b finalized_block_number=0
thread 'tasks-pool-4' panicked at 'assertion failed: _was_inserted', /Users/xiliangchen/projects/smoldot/src/network/service.rs:1617:25 ] #2054349 (🔗 24) (🌐 24)
stack backtrace:
0: rust_begin_unwind
at /rustc/b17226fcc11587fed612631be372a5b4cb89988a/library/std/src/panicking.rs:584:5
1: core::panicking::panic_fmt
at /rustc/b17226fcc11587fed612631be372a5b4cb89988a/library/core/src/panicking.rs:143:14
2: core::panicking::panic
at /rustc/b17226fcc11587fed612631be372a5b4cb89988a/library/core/src/panicking.rs:48:5
3: smoldot::network::service::ChainNetwork<TNow>::next_event
at ./src/network/service.rs:1617:25
4: full_node::run::network_service::update_round::{{closure}}
at ./bin/full-node/src/run/network_service.rs:720:37
5: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
at /rustc/b17226fcc11587fed612631be372a5b4cb89988a/library/core/src/future/mod.rs:84:19
6: full_node::run::network_service::background_task::{{closure}}
at ./bin/full-node/src/run/network_service.rs:696:49
7: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
at /rustc/b17226fcc11587fed612631be372a5b4cb89988a/library/core/src/future/mod.rs:84:19
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
Another error when running in release mode (which ignores debug assertions):
Finished release [optimized] target(s) in 2m 08s
Running `target/release/full-node run --chain=./bin/karura-dist.json`
path : "./bin/kusama.json"
2022-06-09T03:51:02.601950+00:00 INFO full_node::run: successful-initialization local_peer_id=12D3KooWDNhSgGDKAHjrFW5e4fXqigTQEjMKBwR7RY1uJcHU2dt5 database_is_new=false finalized_block_hash=0xbaf5…126b finalized_block_number=0
thread 'tasks-pool-6' panicked at 'called `Option::unwrap()` on a `None` value', /Users/xiliangchen/projects/smoldot/src/libp2p/peers.rs:1178:72 ] #2054381 (🔗 22) (🌐 22)
stack backtrace:
0: _rust_begin_unwind
1: core::panicking::panic_fmt
2: core::panicking::panic
3: smoldot::libp2p::peers::Peers<TConn,TNow>::start_request
4: <futures_util::future::future::Map<Fut,F> as core::future::future::Future>::poll
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
[1] 98308 abort RUST_BACKTRACE=1 cargo run --release -- run --chain=./bin/karura-dist.json
Chain specs can be found here: https://github.com/AcalaNetwork/Acala/blob/master/resources/ Note that you need to manually edit the relayChain and paraId fields to make them compatible.
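For illustration, the edited fields mentioned above might look like this in the parachain chain spec (the field names come from the comment above; surrounding fields are omitted and the values are examples, not taken from the actual Acala file):

```json
{
  "name": "Acala",
  "id": "acala",
  "relayChain": "polkadot",
  "paraId": 2000
}
```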
Thanks for the bug reports. Please note that the full node is just "for fun", and is known to be full of bugs, especially when it comes to parachains.
OK. I have been told that substrate-connect is ready for parachains to integrate with, but it doesn't work with Acala. So I want to check whether the issue is with our bootnode setup or on the smoldot side.
You can run npm start in bin/wasm-node/javascript, and it will start a small demo of the light client in NodeJS.
After https://github.com/paritytech/smoldot/pull/2363, Acala and Karura are even included in the demo, so you will just have to click on the right chain. And it works.
The difference between this demo and substrate-connect is that substrate-connect can only connect through secure WebSocket (because browsers only permit that), while this demo can connect both through plain TCP and through WebSocket (both secure and non-secure).
So you need bootnodes that allow WebSocket connections (that's already the case), an HTTP reverse proxy with a valid TLS certificate in front of them, and the addresses of these reverse proxies added to the chain specs (with /wss).
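As a sketch, a bootnode reachable through a TLS-terminating reverse proxy would be listed in the chain spec like this (the hostname, port, and peer ID below are placeholders, not real Acala bootnodes):

```json
{
  "bootNodes": [
    "/dns/bootnode.example.com/tcp/443/wss/p2p/12D3KooWExamplePeerIdPlaceholder"
  ]
}
```

The /wss component in the multiaddr is what tells the light client to dial a secure WebSocket, which is the only transport a browser-based substrate-connect can use.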
For what it's worth, we're trying to ship support for WebRTC as soon as possible to avoid all these complications.
Tried again today. I'm getting different errors.
Acala
$ RUST_BACKTRACE=1 cargo run -- run --chain=./bin/acala-dist.json
Finished dev [optimized + debuginfo] target(s) in 0.24s
Running `target/debug/full-node run --chain=./bin/acala-dist.json`
2022-08-07T22:32:42.107261+00:00 WARN full_node::run: Please note that this full node is experimental. It is not feature complete and is known to panic often. Please report any panic you might encounter to <https://github.com/paritytech/smoldot/issues>.
2022-08-07T22:32:42.473760+00:00 INFO full_node::run: successful-initialization local_peer_id=12D3KooWBwPMk98SAANXUB5C1cx9YAc6E45FDB157TnBVDJnZfra database_is_new=false finalized_block_hash=0xfc41…a64c finalized_block_number=0
2022-08-07T22:32:43.870142+00:00 WARN full_node::run::consensus_service: failed-block-verification hash=0x2543…1ebb height=3 error=Error while executing Wasm VM: Trap: MemoryAccessOutOfBounds
""
2022-08-07T22:32:44.419666+00:00 WARN full_node::run::consensus_service: failed-block-verification hash=0x417d…efa9 height=1 error=Error while executing Wasm VM: Trap: ElemUninitialized
""
2022-08-07T22:32:44.852224+00:00 WARN full_node::run::consensus_service: failed-block-verification hash=0x417d…efa9 height=1 error=Error while executing Wasm VM: Trap: ElemUninitialized
Karura. The first few minutes work fine, syncing Kusama and Karura blocks, until something triggers this:
2022-08-07T22:33:19.821603+00:00 INFO full_node::run: successful-initialization local_peer_id=12D3KooWH4kBWKiPGxgqMMjnSRkZPNJtrgTG5LXbyEQhAbLEw7Yh database_is_new=false finalized_block_hash=0xbaf5…126b finalized_block_number=0
thread 'tasks-pool-9' panicked at 'explicit panic', /Users/xiliangchen/projects/smoldot/src/libp2p/peers.rs:1169:18 ] #2411648 (🔗 9) (🌐 9)
stack backtrace:
0: rust_begin_unwind
at /rustc/b17226fcc11587fed612631be372a5b4cb89988a/library/std/src/panicking.rs:584:5
1: core::panicking::panic_fmt
at /rustc/b17226fcc11587fed612631be372a5b4cb89988a/library/core/src/panicking.rs:143:14
2: core::panicking::panic
at /rustc/b17226fcc11587fed612631be372a5b4cb89988a/library/core/src/panicking.rs:48:5
3: smoldot::libp2p::peers::Peers<TConn,TNow>::queue_notification
at ./src/libp2p/peers.rs:1169:18
4: smoldot::network::service::ChainNetwork<TNow>::send_block_announce
at ./src/network/service.rs:780:9
5: full_node::run::network_service::NetworkService::send_block_announce::{{closure}}
at ./bin/full-node/src/run/network_service.rs:545:22
6: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
at /rustc/b17226fcc11587fed612631be372a5b4cb89988a/library/core/src/future/mod.rs:84:19
7: full_node::run::consensus_service::SyncBackground::process_blocks::{{closure}}
at ./bin/full-node/src/run/consensus_service.rs:1045:42
8: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
at /rustc/b17226fcc11587fed612631be372a5b4cb89988a/library/core/src/future/mod.rs:84:19
9: full_node::run::consensus_service::SyncBackground::run::{{closure}}::{{closure}}
at ./bin/full-node/src/run/consensus_service.rs:356:41
10: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
at /rustc/b17226fcc11587fed612631be372a5b4cb89988a/library/core/src/future/mod.rs:84:19
11: full_node::run::consensus_service::SyncBackground::run::{{closure}}
at ./bin/full-node/src/run/consensus_service.rs:352:5
12: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
at /rustc/b17226fcc11587fed612631be372a5b4cb89988a/library/core/src/future/mod.rs:84:19
13: <tracing::instrument::Instrumented<T> as core::future::future::Future>::poll
at /Users/xiliangchen/.cargo/registry/src/github.com-1ecc6299db9ec823/tracing-0.1.36/src/instrument.rs:272:9
14: <futures_task::future_obj::LocalFutureObj<T> as core::future::future::Future>::poll
at /Users/xiliangchen/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-task-0.3.21/src/future_obj.rs:84:18
15: <futures_task::future_obj::FutureObj<T> as core::future::future::Future>::poll
at /Users/xiliangchen/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-task-0.3.21/src/future_obj.rs:127:9
16: futures_util::future::future::FutureExt::poll_unpin
at /Users/xiliangchen/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-util-0.3.21/src/future/future/mod.rs:562:9
17: futures_executor::thread_pool::Task::run
at /Users/xiliangchen/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-executor-0.3.21/src/thread_pool.rs:322:27
18: futures_executor::thread_pool::PoolState::work
at /Users/xiliangchen/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-executor-0.3.21/src/thread_pool.rs:154:39
19: futures_executor::thread_pool::ThreadPoolBuilder::create::{{closure}}
at /Users/xiliangchen/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-executor-0.3.21/src/thread_pool.rs:284:42
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
[1] 37990 abort RUST_BACKTRACE=1 cargo run -- run --chain=./bin/karura-dist.json
It's hard to say for sure what caused these panics, but after some refactorings they seem to be gone. Closing this issue, as there's nothing more to do.