Using restatectl against restate-server running in OrbStack causes panic
This is an extremely bizarre crash that I can consistently reproduce on 1.2.0 (but not on 1.1.6) when running the Docker container in OrbStack on my Mac. It appears that OrbStack does some kind of magical port 80 forwarding that possibly injects something into the request, which in turn makes restate crash.
Minimal reproducer:
- macOS 15.3 (24D60)
- OrbStack Version 1.10.0 (19021)
Start the container with:
docker run --rm -e RUST_BACKTRACE=full -p 80:8080 -p 8080:8080 --name restate12 docker.io/restatedev/restate:1.2
Call with:
❯ curl -v http://restate12.orb.local:80
* Host restate12.orb.local:80 was resolved.
* IPv6: fd07:b51a:cc66:0:a617:db5e:c613:f81b
* IPv4: 198.19.248.27
* Trying 198.19.248.27:80...
* connect to 198.19.248.27 port 80 from 198.19.249.3 port 63544 failed: Connection refused
* Trying [fd07:b51a:cc66:0:a617:db5e:c613:f81b]:80...
* connect to fd07:b51a:cc66:0:a617:db5e:c613:f81b port 80 from fd07:b51a:cc66:0:a617:db5e:ab7:e9f1 port 63545 failed: Connection refused
* Failed to connect to restate12.orb.local port 80 after 35 ms: Couldn't connect to server
* Closing connection
curl: (7) Failed to connect to restate12.orb.local port 80 after 35 ms: Couldn't connect to server
Server output:
2025-02-17T14:46:56.692198Z TRACE server: mio::poll: registering event source with poller: token=Token(281473108684672), interests=READABLE | WRITABLE server_name=admin-api-server net.host.addr="0.0.0.0" net.host.port=9070
2025-02-17T14:46:56.692237Z TRACE mio::poll: registering event source with poller: token=Token(281473058444160), interests=READABLE | WRITABLE
2025-02-17T14:46:56.692272Z TRACE server: mio::poll: registering event source with poller: token=Token(281473109869440), interests=READABLE | WRITABLE server_name=admin-api-server net.host.addr="0.0.0.0" net.host.port=9070
2025-02-17T14:46:56.692316Z DEBUG server:SocketHandler: restate_core::network::net_util: New connection accepted server_name=admin-api-server net.host.addr="0.0.0.0" net.host.port=9070 peer_addr=192.168.215.1:41374
2025-02-17T14:46:56.692324Z DEBUG server:SocketHandler: restate_core::network::net_util: New connection accepted server_name=admin-api-server net.host.addr="0.0.0.0" net.host.port=9070 peer_addr=192.168.215.1:41362
2025-02-17T14:46:56.692335Z TRACE server: mio::poll: registering event source with poller: token=Token(281473109869696), interests=READABLE | WRITABLE server_name=node-rpc-server net.host.addr="0.0.0.0" net.host.port=5122
2025-02-17T14:46:56.692355Z TRACE server: mio::poll: registering event source with poller: token=Token(281473109869952), interests=READABLE | WRITABLE server_name=node-rpc-server net.host.addr="0.0.0.0" net.host.port=5122
2025-02-17T14:46:56.692381Z DEBUG server:SocketHandler: restate_core::network::net_util: New connection accepted server_name=node-rpc-server net.host.addr="0.0.0.0" net.host.port=5122 peer_addr=192.168.215.1:48656
2025-02-17T14:46:56.692388Z DEBUG server:SocketHandler: restate_core::network::net_util: New connection accepted server_name=node-rpc-server net.host.addr="0.0.0.0" net.host.port=5122 peer_addr=192.168.215.1:48646
2025-02-17T14:46:56.692489Z TRACE server:SocketHandler: mio::poll: deregistering event source from poller server_name=node-rpc-server net.host.addr="0.0.0.0" net.host.port=5122 peer_addr=192.168.215.1:48656
2025-02-17T14:46:56.692541Z TRACE mio::poll: registering event source with poller: token=Token(281473145987968), interests=READABLE | WRITABLE
2025-02-17T14:46:56.692544Z TRACE server:SocketHandler: mio::poll: deregistering event source from poller server_name=admin-api-server net.host.addr="0.0.0.0" net.host.port=9070 peer_addr=192.168.215.1:41362
2025-02-17T14:46:56.693048Z TRACE mio::poll: registering event source with poller: token=Token(281473145988224), interests=READABLE | WRITABLE
2025-02-17T14:46:56.693195Z TRACE mio::poll: deregistering event source from poller
thread 'rs:worker-6' panicked at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/pgwire-0.28.0/src/tokio/server.rs:427:18:
called `Option::unwrap()` on a `None` value
stack backtrace:
2025-02-17T14:46:56.693707Z ERROR tracing_panic: A panic occurred panic.payload="called `Option::unwrap()` on a `None` value" panic.location="/usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/pgwire-0.28.0/src/tokio/server.rs:427:18" panic.backtrace= 0: std::backtrace::Backtrace::create
1: tracing_panic::panic_hook
2: restate_server::main::{{closure}}::{{closure}}
3: std::panicking::rust_panic_with_hook
4: std::panicking::begin_panic_handler::{{closure}}
5: std::sys::backtrace::__rust_end_short_backtrace
6: rust_begin_unwind
7: core::panicking::panic_fmt
8: core::panicking::panic
9: core::option::unwrap_failed
10: pgwire::tokio::server::process_socket::{{closure}}
11: <tracing::instrument::Instrumented<T> as core::future::future::Future>::poll
12: tokio::runtime::task::raw::poll
13: tokio::runtime::scheduler::multi_thread::worker::Context::run_task
14: tokio::runtime::scheduler::multi_thread::worker::run
15: tokio::runtime::task::raw::poll
16: std::sys::backtrace::__rust_begin_short_backtrace
17: core::ops::function::FnOnce::call_once{{vtable.shim}}
18: std::sys::pal::unix::thread::Thread::new::thread_start
19: <unknown>
20: <unknown>
0: 0xaaaab33dbce0 - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::h07341f681e37d744
1: 0xaaaab04c391c - core::fmt::write::h28916acde7c5de6e
2: 0xaaaab33d7f64 - std::io::Write::write_fmt::h66565b91cafb7c3e
3: 0xaaaab33dbb80 - std::sys::backtrace::BacktraceLock::print::h1265ec2d4fa70fa4
4: 0xaaaab33dcca0 - std::panicking::default_hook::{{closure}}::h85ac6a8b3c0065ab
5: 0xaaaab33dcaf4 - std::panicking::default_hook::h98c77640ae5d389d
6: 0xaaaab33dd338 - std::panicking::rust_panic_with_hook::h19ce483362d6f225
7: 0xaaaab33dd0d8 - std::panicking::begin_panic_handler::{{closure}}::h93ac5c24e8a039bf
8: 0xaaaab33dc1ec - std::sys::backtrace::__rust_end_short_backtrace::he9bb083c3e0a20c8
9: 0xaaaab33dcdc0 - rust_begin_unwind
10: 0xaaaaafa0efdc - core::panicking::panic_fmt::hfea5d25dc25675a2
11: 0xaaaaafa0f04c - core::panicking::panic::he6ed612076c49b38
12: 0xaaaaafa0ef74 - core::option::unwrap_failed::hda1c5f7d40af2ba2
13: 0xaaaab2cd9c0c - pgwire::tokio::server::process_socket::{{closure}}::hc3e1b86b4b11cbca
14: 0xaaaab2d165fc - <tracing::instrument::Instrumented<T> as core::future::future::Future>::poll::h059577b32af82b2e
15: 0xaaaab2c8c744 - tokio::runtime::task::raw::poll::ha9521b040e0fdbdc
16: 0xaaaab340c3f8 - tokio::runtime::scheduler::multi_thread::worker::Context::run_task::hf4d24e6fe77b8978
17: 0xaaaab340b49c - tokio::runtime::scheduler::multi_thread::worker::run::h5765b31eb7c42746
18: 0xaaaab34183f4 - tokio::runtime::task::raw::poll::hddcc857a85d41db7
19: 0xaaaab33f8e00 - std::sys::backtrace::__rust_begin_short_backtrace::hdf4bce124706f7db
20: 0xaaaab33fba80 - core::ops::function::FnOnce::call_once{{vtable.shim}}::h8681543941f8391c
21: 0xaaaab33e1a00 - std::sys::pal::unix::thread::Thread::new::thread_start::h574120427d5430d3
22: 0xffff9651ee90 - <unknown>
23: 0xffff96587b1c - <unknown>
24: 0x0 - <unknown>
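The panic is pgwire's process_socket unwrapping an Option that turned out to be None. As a purely illustrative sketch (this is not the actual pgwire code; the codec, names, and port are made up), the same panic message is produced by any handler that unwraps the first message read from a freshly accepted socket when the peer closes the connection before sending anything:

use futures::StreamExt;
use tokio::net::TcpListener;
use tokio_util::codec::{FramedRead, LinesCodec};

#[tokio::main]
async fn main() -> std::io::Result<()> {
    // Arbitrary port, only for the sketch.
    let listener = TcpListener::bind("127.0.0.1:5433").await?;
    loop {
        let (socket, _peer) = listener.accept().await?;
        tokio::spawn(async move {
            let mut frames = FramedRead::new(socket, LinesCodec::new());
            // If the peer connects and closes the socket without sending a
            // single frame, next() resolves to None and this unwrap panics
            // with: called `Option::unwrap()` on a `None` value
            let first = frames.next().await.unwrap();
            println!("first message: {first:?}");
        });
    }
}

Note that the server log above shows the admin-api and node-rpc connections being accepted and then deregistered within about a millisecond, which at least looks consistent with OrbStack's forwarder opening connections and closing them again without sending any payload.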
Interestingly, all of the following work just fine:
curl -v http://localhost:80
curl -v http://localhost:8080
curl -v http://restate12.orb.local:8080
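One way to test whether injected request bytes are even needed (a hypothetical check, not something from the report above): open a raw TCP connection to the suspect listener and close it again without sending anything, and see whether that alone reproduces the panic without OrbStack in the path. The address below is a placeholder; point it at whichever listener shows up in the panic.

use std::net::TcpStream;

fn main() -> std::io::Result<()> {
    // Placeholder address: substitute the listener that appears in the panic.
    let stream = TcpStream::connect("restate12.orb.local:5433")?;
    // Close the connection immediately, before any protocol bytes are sent.
    drop(stream);
    Ok(())
}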
It seems like we don't even get to complete the TCP handshake. This is from the Mac side of the network bridge: