rsocket-java
1.1.4 Regression - RejectedSetupException on auth failure results in ClosedChannelException
An RSocket server can do connection-level authentication checks via socketAcceptor interceptors. An interceptor can check for valid authentication in the setup payload and decide whether to accept the connection, or to reply with a Mono#error of io.rsocket.exceptions.RejectedSetupException carrying the rejection details in the exception message.
This authentication approach has worked flawlessly on 1.1.3, but on 1.1.4 the clients no longer receive the RejectedSetupException. Instead, the connection is abruptly closed with a ClosedChannelException.
Expected Behavior
Clients should receive the error with RejectedSetupException throwable containing the details when authentication fails.
Actual Behavior
The TCP connection is eagerly shut down and clients receive java.nio.channels.ClosedChannelException.
I tried bisecting the commits since the 1.1.3 git tag: the behavior is still correct up to at least 00d8311a7436fd2421dbcefb13f744bc4973184e, and it breaks somewhere after that, but I'm not sure whether those subsequent commits are self-contained changes...
Steps to Reproduce
The following example runs fine with 1.1.3, but fails with 1.1.4:
import io.rsocket.Payload;
import io.rsocket.RSocket;
import io.rsocket.core.RSocketConnector;
import io.rsocket.core.RSocketServer;
import io.rsocket.exceptions.RejectedSetupException;
import io.rsocket.transport.netty.client.TcpClientTransport;
import io.rsocket.transport.netty.server.CloseableChannel;
import io.rsocket.transport.netty.server.TcpServerTransport;
import io.rsocket.util.DefaultPayload;
import org.junit.jupiter.api.Test;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import reactor.core.publisher.Mono;
import reactor.netty.tcp.TcpClient;
import reactor.netty.tcp.TcpServer;
import reactor.test.StepVerifier;

public class AuthenticationTest {

    private static final Logger LOG = LoggerFactory.getLogger(AuthenticationTest.class);
    private static final int PORT = 23200;

    @Test
    void authTest() {
        createServer().block();
        RSocket rsocketClient = createClient().block();

        StepVerifier.create(rsocketClient.requestResponse(DefaultPayload.create("Client: Hello")))
                .expectError(RejectedSetupException.class)
                .verify();
    }

    private static Mono<CloseableChannel> createServer() {
        LOG.info("Starting server at port {}", PORT);
        RSocketServer rSocketServer = RSocketServer.create(
                (connectionSetupPayload, rSocket) -> Mono.just(new MyServerRsocket()));
        TcpServer tcpServer = TcpServer.create()
                .host("localhost")
                .port(PORT);
        return rSocketServer
                .interceptors(interceptorRegistry ->
                        interceptorRegistry.forSocketAcceptor(socketAcceptor -> (setup, sendingSocket) -> {
                            if (true) { // TODO: here would be an authentication check based on the setup payload
                                return Mono.error(new RejectedSetupException("ACCESS_DENIED"));
                            } else {
                                return socketAcceptor.accept(setup, sendingSocket);
                            }
                        }))
                .bind(TcpServerTransport.create(tcpServer))
                .doOnNext(closeableChannel -> LOG.info("RSocket server started."));
    }

    private static Mono<RSocket> createClient() {
        LOG.info("Connecting....");
        return RSocketConnector.create()
                .connect(TcpClientTransport.create(TcpClient.create()
                        .host("localhost")
                        .port(PORT)))
                .doOnNext(rSocket -> LOG.info("Successfully connected to server"))
                .doOnError(throwable -> LOG.error("Failed to connect to server"));
    }

    public static class MyServerRsocket implements RSocket {

        private static final Logger LOG = LoggerFactory.getLogger(MyServerRsocket.class);

        @Override
        public Mono<Payload> requestResponse(Payload payload) {
            LOG.info("Got a request with payload: {}", payload.getDataUtf8());
            return Mono.just("Response data blah blah blah")
                    .map(DefaultPayload::create);
        }
    }
}
Your Environment
- RSocket 1.1.4 (via rsocket-bom)
- JDK 17
Just FYI, I filed what seems like a very similar regression between 1.1.2 and 1.1.3: #1087.
I just want to add here that this is blocking us (and probably others) from upgrading to 1.1.4. What's worse, because 1.1.3 depends on an old Netty, we are also stuck on Reactor 2020.0.24, which is now 14 months old.
@OlegDokuka Do you know of a suitable workaround for this bug?
@mdindoffer I'm back to work. Let me fix it quickly!
Thank you for being patient all that time!
@OlegDokuka I understand this is not a priority for you, and that's fine. But we'd really love to upgrade the rest of the reactive stack at least (i.e. bumping the whole reactor-bom).
So please forgive me for asking again - is there a way to decouple RSocket from the rest of the Reactor stack (core, reactor-netty, netty) in a way that enables a safe upgrade path with RSocket 1.1.3? See https://github.com/rsocket/rsocket-java/issues/1082#issuecomment-1561129900 . I imagine locking individual netty artifacts is a bad idea.
Alternatively, can you think of a workaround for the premature closing of the channel?
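For illustration only, here is a rough sketch of what the BOM-level pinning discussed above might look like in Maven. This is an assumption about the approach, not a recommendation from the maintainers: the reactor-bom version shown is a placeholder, and in Maven the first-declared BOM wins when imported BOMs manage conflicting artifacts, so ordering matters. Whether RSocket 1.1.3 actually works against a newer Reactor/Netty pinned this way is exactly the open question in this thread.

```xml
<dependencyManagement>
  <dependencies>
    <!-- Import the newer Reactor BOM first so its entries win for
         reactor-core / reactor-netty / netty artifacts. -->
    <dependency>
      <groupId>io.projectreactor</groupId>
      <artifactId>reactor-bom</artifactId>
      <version><!-- placeholder: the release train you target --></version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
    <!-- Keep RSocket itself on 1.1.3 until the regression is fixed. -->
    <dependency>
      <groupId>io.rsocket</groupId>
      <artifactId>rsocket-bom</artifactId>
      <version>1.1.3</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
```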