
Feature: websocket force-close

vinniefalco opened this issue 6 years ago • 16 comments

In some cases it may be useful to have an implementation of close and async_close which simply sends the close frame at the earliest opportunity and then performs the teardown without waiting for any pending incoming messages to complete.
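For illustration, the requested behavior might look something like this. Note this is a hypothetical sketch of the proposed semantics — no such overload or `close_mode` option exists in Beast:

```cpp
// Hypothetical sketch only -- not an existing Beast API.
// The idea: write the close frame, then tear down the transport
// immediately instead of draining pending incoming messages.
ws.async_close(
    websocket::close_code::normal,
    websocket::close_mode::fast,     // hypothetical "force" option
    [](beast::error_code ec)
    {
        // Would complete as soon as the close frame is written and
        // teardown is done; no wait for the peer's close frame.
    });
```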

vinniefalco avatar May 09 '18 16:05 vinniefalco

It would be useful to have a method that just sends the close frame and nothing else. Then we would get websocket::error::closed on the next read of incoming messages.

onto avatar Oct 19 '18 05:10 onto

Is this what you want? https://www.boost.org/doc/libs/1_68_0/libs/beast/doc/html/beast/ref/boost__beast__websocket__stream/async_close.html

vinniefalco avatar Oct 19 '18 13:10 vinniefalco

If I understand correctly, async_close sends the close frame and then reads from the stream until the close frame from the other side is received, and only after that does it call the callback. My suggestion is that async_close could optionally only send the close frame, so that when we then immediately call async_read we get websocket::error::closed. It's not a force-close, more like a deferred-close.

onto avatar Oct 29 '18 06:10 onto

Hi @vinniefalco, I was just curious how this is going, as I noticed that this issue is a year old.

This force-close call would send the close frame and close the socket without waiting for the close frame from the client, right?

Right now, when we detect that the client has disconnected ungracefully (lost its Internet connection) using a timer and a ping/pong scheme, we have to call ws.next_layer().cancel();, like you said here. If we called async_close, the connection would never close, because the client can't respond with the close frame.

But if we had this async_force_close we wouldn't need to call ws.next_layer().cancel();. We would call async_force_close and it would close everything without waiting for a response from the client, right?

chreniuc avatar Mar 13 '19 14:03 chreniuc

using a timer

"idle ping" and "idle disconnect" timers are now built into the websocket stream, so you can get rid of all that code and replace it with something like this:

ws.set_option(
    websocket::stream_base::timeout::suggested(beast::role_type::server));

See: https://www.boost.org/doc/libs/master/libs/beast/doc/html/beast/using_websocket/timeouts.html
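If the suggested values don't fit, the individual timers can also be set directly. A sketch using the `websocket::stream_base::timeout` fields; the durations here are illustrative, not recommendations:

```cpp
// All three fields exist as of Boost 1.70; values are examples only.
websocket::stream_base::timeout opt{
    std::chrono::seconds(30),  // handshake_timeout: handshake, accept, close
    std::chrono::seconds(300), // idle_timeout: no traffic for this long
    true                       // keep_alive_pings: ping when idle instead of closing
};
ws.set_option(opt);
```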

vinniefalco avatar Mar 13 '19 14:03 vinniefalco

Thanks @vinniefalco, I managed to make this change in our code and it works like a charm.

There were some issues updating from 1.69.0 to 1.70.0 to use tcp_stream; I had to replace a few things:

Older (1.69.0):

std::shared_ptr<boost::beast::websocket::stream<
  boost::asio::ssl::stream<boost::asio::ip::tcp::socket>>> ws{nullptr};

New (1.70.0):

std::shared_ptr<boost::beast::websocket::stream<
  boost::beast::ssl_stream<boost::beast::tcp_stream>>> ws{nullptr};

Older:

const boost::asio::ip::tcp::endpoint& endpoint =
  connection->ws->lowest_layer().remote_endpoint();

New:

const boost::asio::ip::tcp::endpoint& endpoint =
  connection->ws->next_layer().next_layer().socket().remote_endpoint();

I also noticed that you cannot create a strand the way we did in version 1.69.0 (you get a compile-time error), e.g.:

std::shared_ptr<boost::beast::websocket::stream<
  boost::asio::ssl::stream<boost::asio::ip::tcp::socket>>> ws{nullptr};

std::shared_ptr<boost::asio::strand<boost::asio::io_context::executor_type>> strand{nullptr};
strand = std::make_shared<
  boost::asio::strand<boost::asio::io_context::executor_type>>(
  ws->get_executor());

We fixed that the way you did in the advanced example: we removed the strand variables and used make_strand. I tried to create a skeleton of our code and show the differences between 1.69.0 and 1.70.0. Maybe this will help others as well: https://github.com/chreniuc/beast_1.69.0-to-1.70.0/commit/8dcdfb22d65277ebadb913fd8019886dfd556570

In the advanced example you are using the same io_context for the acceptor and for the completion handlers for reading and writing. We are using two different io_contexts for these operations, as you can see in that example, and I was wondering if I created the strands correctly there.

We want the acceptor to use an io_context without a strand, but we want to use strands when reading and writing. In 1.69.0 you would use bind_executor if you wanted strands, but in 1.70.0 you don't use bind_executor even when you want strands; you simply create a tcp::socket(make_strand(io_context)), right? So if I do not pass make_strand to the constructor of the acceptor, it won't use a strand, right?

Thanks. :D

chreniuc avatar Mar 18 '19 15:03 chreniuc

You can use beast::get_lowest_layer to simplify

const boost::asio::ip::tcp::endpoint& endpoint =
  connection->ws->next_layer().next_layer().socket().remote_endpoint();

into

auto const& ep = get_lowest_layer(connection->ws).socket().remote_endpoint();

Alternatively, you can skip using beast::tcp_stream and just use net::ip::tcp::socket directly. Note that both tcp_stream and tcp::socket use net::executor which is a type-erasing wrapper that comes at the cost of a memory allocation.

If you want your acceptor to not use a strand, construct it with the executor returned by io_context::get_executor.
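A minimal sketch of that split, assuming the usual `net`/`tcp`/`beast` aliases and an illustrative port number: the acceptor is built from the io_context's executor, while each accepted socket gets its own strand.

```cpp
net::io_context ioc;

// Acceptor runs on the plain io_context executor -- no strand.
tcp::acceptor acceptor(ioc.get_executor(), {tcp::v4(), 8080});

acceptor.async_accept(
    net::make_strand(ioc),  // the new socket's executor is a strand
    [](beast::error_code ec, tcp::socket socket)
    {
        if(!ec)
        {
            // socket.get_executor() is a strand, so completion
            // handlers for this connection never run concurrently.
        }
    });
```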

I had a look at your code, you will get better results if you pass executors around instead of passing io_context.

vinniefalco avatar Mar 18 '19 16:03 vinniefalco

I had a look at your code, you will get better results if you pass executors around instead of passing io_context.

Are you referring to this? :

socket(::boost::asio::make_strand(ioc))
// or
socket = boost::asio::ip::tcp::socket(boost::asio::make_strand(rw_ioc));

to be replaced with:

socket(::boost::asio::make_strand(ioc.get_executor()))
// or
socket = boost::asio::ip::tcp::socket(boost::asio::make_strand(rw_ioc.get_executor()));

?

chreniuc avatar Mar 19 '19 11:03 chreniuc

No. I mean that you should pass executors around internally. For example:

// rw_ioc is used for write and read operations, save it as a reference to pass it to make_strand

Instead of storing rw_ioc, store rw_executor. Add two typedefs:

using acceptor_executor_type = net::io_context::executor_type;
using executor_type = net::strand<net::io_context::executor_type>;

And pass instances of these types in interfaces, rather than passing an I/O context. This won't affect performance, it is just a matter of style. However, it does let you easily experiment with changes. Such as using the system_context instead of an io_context.
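A sketch of what that might look like; the class and member names here are made up for illustration:

```cpp
namespace net = boost::asio;

using acceptor_executor_type = net::io_context::executor_type;
using executor_type          = net::strand<net::io_context::executor_type>;

class session
{
    executor_type ex_;  // store the executor, not a reference to the io_context

public:
    explicit session(executor_type ex)
        : ex_(std::move(ex))
    {
    }
};

// At the call site:
//   auto s = std::make_shared<session>(
//       net::make_strand(rw_ioc.get_executor()));
```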

vinniefalco avatar Mar 19 '19 13:03 vinniefalco

Idle ping/disconnect is a nice feature for lost connections. But what happens if a malicious client does not respond to a server-sent close frame, but still responds to pings? Is there a timeout for the client to respond to the close frame?

lowsfer avatar Dec 12 '20 16:12 lowsfer

From memory, yes

madmongo1 avatar Dec 12 '20 17:12 madmongo1

https://www.boost.org/doc/libs/1_75_0/libs/beast/doc/html/beast/ref/boost__beast__websocket__stream_base__timeout/handshake_timeout.html
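Concretely, `handshake_timeout` bounds close operations as well as handshakes, so an unanswered close frame eventually fails with a timeout. A sketch; the 10-second value is illustrative:

```cpp
auto opt = websocket::stream_base::timeout::suggested(
    beast::role_type::server);
opt.handshake_timeout = std::chrono::seconds(10); // also bounds close
ws.set_option(opt);
// If the peer never answers the close frame, async_close completes
// with an error (beast::error::timeout) instead of hanging forever.
```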

madmongo1 avatar Dec 12 '20 17:12 madmongo1

This feature could be useful during object destruction. In our situation, we have an external io_context and we can't just stop it. The only possibility is to call async_close and wait for the result. Is there any way to close rapidly in the destructor without waiting for async feedback? Is there a better solution than setting a timeout (suggested, client role) and then just calling async_close in the destructor and waiting for the on_close handler?

quazeeee avatar Jun 15 '21 18:06 quazeeee

Is there any way to close rapidly in the destructor without waiting for async feedback?

Yes, there is a way to close it rapidly, but not from the destructor. You need to cancel the underlying I/O (e.g. using the socket's cancel()) and wait for the completion handlers to finish. They will complete immediately with asio::error::operation_aborted. After all the completion handlers have finished executing, all calls to io_context::run will return (don't forget to also close your listening sockets, if any). Typically shared_ptr is used to manage the connection objects.
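A sketch of that shutdown sequence, assuming a `websocket::stream` over `beast::tcp_stream` and a connection object managed by shared_ptr as described:

```cpp
// Post onto the connection's strand so the cancellation doesn't race
// with a handler currently running on it.
net::post(ws.get_executor(),
    [self = shared_from_this()]
    {
        // Cancels all outstanding reads/writes on this connection;
        // their handlers fire immediately with
        // net::error::operation_aborted.
        beast::get_lowest_layer(self->ws).cancel();
    });
```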

vinniefalco avatar Jun 15 '21 19:06 vinniefalco

So it looks like there is no solution without waiting for callbacks: in both cases we need to call some function (async_close or the lowest layer's cancel()) and wait for a handler (on_close, or on_read with operation_aborted). Is that correct?

quazeeee avatar Jun 15 '21 20:06 quazeeee

Another way to do it is to just call io_context::stop(). This is demonstrated in the example: https://github.com/boostorg/beast/blob/710cc53331f197f6f17e8c38454c09df68e43c03/example/websocket/server/chat-multi/main.cpp#L63

"Waiting for callbacks" implies a long duration, but this is not the case. When you cancel I/O (either by calling cancel or by closing the socket), all pending operations complete immediately. This is the preferred way to shut down a server, as it gives all the callbacks the opportunity to clean up their resources, if any. For example, you may want to flush data in memory to a database. Using io_context::stop makes this more difficult or impossible.

vinniefalco avatar Jun 15 '21 22:06 vinniefalco