hanabi-live
Allow remaking table from restart rather than instant-restart
Normally, if the game is not a speedrun (and we are not a test user and there are not 1000+ games in the lobby), pressing "Restart" shows a confirmation prompt which then recreates the game and immediately starts it.
This PR replaces the prompt with a modal that has a "Remake table" option, which works similarly to "Restart game" but leaves the users in the pregame. This means that people can join (or leave), or the variant can be changed, before starting.
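Roughly, the idea is that "Remake table" sends the same tableRestart command but without hiding the pregame (this is just a sketch of the idea, not the exact diff; the button name is a placeholder and the field names mirror the existing client code):

// Sketch of the "Remake table" path (names mirror the existing client
// globals; the real implementation may differ).
remakeTableButton.on("click tap", () => {
  globals.lobby.conn!.send("tableRestart", {
    tableID: globals.lobby.tableID,
    // Leave everyone in the pregame so players can join/leave or change the variant.
    hidePregame: false,
  });
});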
Does this appear regardless of whether the leader has 1000+ games?
Not sure what you mean. The relevant lines are
restartButton.on("click tap", () => {
  // Skip the confirmation prompt for speedruns, test users, or 1000+ total games.
  if (
    globals.options.speedrun ||
    debug.amTestUser(globals.metadata.ourUsername) ||
    globals.lobby.totalGames >= 1000
  ) {
    // Instantly restart the game, skipping the pregame screen.
    globals.lobby.conn!.send("tableRestart", {
      tableID: globals.lobby.tableID,
      hidePregame: true,
    });
  } else {
    // Otherwise, show the restart confirmation area.
    globals.elements.restartArea?.visible(true);
    globals.layers.UI2.batchDraw();
  }
});
The globals.lobby.totalGames >= 1000 check was there before me, so I didn't change it. I assume it never happens in practice.
I would like to use the remake feature and I have more than 1000 total games.
> I assume it never happens in practice.
I don't understand. In practice, because I have 1000 games, it doesn't give me a confirmation box when I hit the restart button.
So maybe just remove the check for number of games?
Oh, I thought that referred to the number of games in the lobby rather than the number of games played by the player! Sorry, that's my mistake.
Yeah, let's remove the check.
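That would leave the condition as just (sketch):

if (
  globals.options.speedrun ||
  debug.amTestUser(globals.metadata.ourUsername)
) {
  // Instant restart, skipping the pregame.
  globals.lobby.conn!.send("tableRestart", {
    tableID: globals.lobby.tableID,
    hidePregame: true,
  });
} else {
  // Show the restart/remake confirmation instead.
  globals.elements.restartArea?.visible(true);
  globals.layers.UI2.batchDraw();
}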
very nice feature, thanks
i am afraid that any changes to command_restart.go will deadlock the server. is it okay if we wait a week or so for me to finish rewriting the new TableManager infrastructure?
> i am afraid that any changes to command_restart.go will deadlock the server. is it okay if we wait a week or so for me to finish rewriting the new TableManager infrastructure?
No problem. Not in any rush.
Would it be better if it just remade the table and teleported everybody without confirmation, and then they can hit start game if they're ready, or remake the shared replay if they hit it by accident? It seems like yes, since all it takes to remake the shared replay is hitting the back arrow in their web browser.
Does it still work if someone left, remaking it with whoever remains?
> Does it still work if someone left, remaking it with whoever remains?
It didn't before, but that's a good idea; I'll remove that check so the "remake table" option works as you described.
still working on rewriting the entire server; progress can be seen on the channels branch
@Zamiell is there any chance this could get merged while waiting on channels, or is there still a concern about server deadlocks? Fine if the latter is the case; just thought to check since I don't actually know what caused the deadlock issue (thought maybe there was a chance that was resolved by now).
yes deadlocking is an issue, the code is spaghetti and anything could fuck it up
just to follow up on this, deadlocking is an issue and it happens to the server every once in a while. i feel that merging this would make it worse. the channels branch was very difficult to implement and required a lot of boilerplate code. i talked it over with rob and stephen and i decided to rewrite the server in typescript using redis to store the state. this moves the complexity of the implementation inside of a redis transaction, which im not 100% sure how it will work, but i think i will be able to figure it out. the advantage of rewriting the server in typescript is that both the client and the server will be able to share data structures, which is something i've done recently in a separate project and it worked out well.
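roughly what i have in mind for the redis part (just a sketch using ioredis; the names and the shape of the state are placeholders, nothing is final):

import Redis from "ioredis";

// One advantage of a TypeScript server: interfaces like this could be
// shared between the client and the server. (Placeholder shape.)
interface TableState {
  id: number;
  players: string[];
  variant: string;
  running: boolean;
}

const redis = new Redis();

// Restart a table inside an optimistic Redis transaction: WATCH the key,
// read the current state, then write the new state with MULTI/EXEC.
// EXEC returns null if someone else modified the key in the meantime,
// in which case the caller can retry.
async function restartTable(tableID: number): Promise<boolean> {
  const key = `table:${tableID}`;
  await redis.watch(key);

  const raw = await redis.get(key);
  if (raw === null) {
    await redis.unwatch();
    return false; // table does not exist
  }

  const oldTable = JSON.parse(raw) as TableState;
  const newTable: TableState = { ...oldTable, running: false };

  const result = await redis
    .multi()
    .set(key, JSON.stringify(newTable))
    .exec();

  return result !== null; // null means the transaction was aborted
}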
Okie, understood. Nothing needed on my end for now, I assume?
not yet
Brought up to date since this was mentioned in #2673
for speedrunners, will the box still appear?
> for speedrunners, will the box still appear?
No.
ok thanks, let's try it out
if we get any deadlocks i will have to revert it
got 1 deadlock so far immediately after restarting, which isn't a good sign
POTENTIAL DEADLOCK:
Previous place where the lock was grabbed
goroutine 3434 lock 0xc0000373f8
~/go/pkg/mod/github.com/sasha-s/[email protected]/deadlock.go:85 go-deadlock.(*Mutex).Lock { lock(m.mu.Lock, m) } <<<<<
server/src/websocket_connect.go:67 main.websocketConnect { sessions.ConnectMutex.Lock() }
~/go/pkg/mod/github.com/gabstv/[email protected]/melody.go:189 melody.(*Melody).HandleRequestWithKeys { }
server/src/http_ws.go:113 main.httpWS { // (but that is not a problem because this function is called in a dedicated goroutine) }
~/go/pkg/mod/github.com/gin-gonic/[email protected]/context.go:168 gin.(*Context).Next { c.handlers[c.index](c) }
~/go/pkg/mod/github.com/gin-contrib/[email protected]/sessions.go:54 sessions.Sessions.func1 { c.Next() }
~/go/pkg/mod/github.com/gin-gonic/[email protected]/context.go:168 gin.(*Context).Next { c.handlers[c.index](c) }
~/go/pkg/mod/github.com/gin-gonic/[email protected]/recovery.go:99 gin.CustomRecoveryWithWriter.func1 { c.Next() }
~/go/pkg/mod/github.com/gin-gonic/[email protected]/context.go:168 gin.(*Context).Next { c.handlers[c.index](c) }
~/go/pkg/mod/github.com/gin-gonic/[email protected]/logger.go:243 gin.LoggerWithConfig.func1 { // Log only when path is not being skipped }
~/go/pkg/mod/github.com/gin-gonic/[email protected]/context.go:168 gin.(*Context).Next { c.handlers[c.index](c) }
~/go/pkg/mod/github.com/gin-gonic/[email protected]/gin.go:555 gin.(*Engine).handleHTTPRequest { c.Next() }
~/go/pkg/mod/github.com/gin-gonic/[email protected]/gin.go:512 gin.(*Engine).ServeHTTP { }
/usr/lib/go-1.13/src/net/http/server.go:2802 http.serverHandler.ServeHTTP { handler.ServeHTTP(rw, req) }
/usr/lib/go-1.13/src/net/http/server.go:1890 http.(*conn).serve { serverHandler{c.server}.ServeHTTP(w, w.req) }
Have been trying to lock it again for more than 30s
goroutine 4307 lock 0xc0000373f8
~/go/pkg/mod/github.com/sasha-s/[email protected]/deadlock.go:85 go-deadlock.(*Mutex).Lock { lock(m.mu.Lock, m) } <<<<<
server/src/websocket_connect.go:67 main.websocketConnect { sessions.ConnectMutex.Lock() }
~/go/pkg/mod/github.com/gabstv/[email protected]/melody.go:189 melody.(*Melody).HandleRequestWithKeys { }
server/src/http_ws.go:113 main.httpWS { // (but that is not a problem because this function is called in a dedicated goroutine) }
~/go/pkg/mod/github.com/gin-gonic/[email protected]/context.go:168 gin.(*Context).Next { c.handlers[c.index](c) }
~/go/pkg/mod/github.com/gin-contrib/[email protected]/sessions.go:54 sessions.Sessions.func1 { c.Next() }
~/go/pkg/mod/github.com/gin-gonic/[email protected]/context.go:168 gin.(*Context).Next { c.handlers[c.index](c) }
~/go/pkg/mod/github.com/gin-gonic/[email protected]/recovery.go:99 gin.CustomRecoveryWithWriter.func1 { c.Next() }
~/go/pkg/mod/github.com/gin-gonic/[email protected]/context.go:168 gin.(*Context).Next { c.handlers[c.index](c) }
~/go/pkg/mod/github.com/gin-gonic/[email protected]/logger.go:243 gin.LoggerWithConfig.func1 { // Log only when path is not being skipped }
~/go/pkg/mod/github.com/gin-gonic/[email protected]/context.go:168 gin.(*Context).Next { c.handlers[c.index](c) }
~/go/pkg/mod/github.com/gin-gonic/[email protected]/gin.go:555 gin.(*Engine).handleHTTPRequest { c.Next() }
~/go/pkg/mod/github.com/gin-gonic/[email protected]/gin.go:512 gin.(*Engine).ServeHTTP { }
/usr/lib/go-1.13/src/net/http/server.go:2802 http.serverHandler.ServeHTTP { handler.ServeHTTP(rw, req) }
/usr/lib/go-1.13/src/net/http/server.go:1890 http.(*conn).serve { serverHandler{c.server}.ServeHTTP(w, w.req) }
Here is what goroutine 3434 doing now
goroutine 3434 [IO wait]:
internal/poll.runtime_pollWait(0x7f454cc6bd08, 0x72, 0xffffffffffffffff)
/usr/lib/go-1.13/src/runtime/netpoll.go:184 +0x55
internal/poll.(*pollDesc).wait(0xc000386498, 0x72, 0x1800, 0x18cf, 0xffffffffffffffff)
/usr/lib/go-1.13/src/internal/poll/fd_poll_runtime.go:87 +0x45
internal/poll.(*pollDesc).waitRead(...)
/usr/lib/go-1.13/src/internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Read(0xc000386480, 0xc000c93300, 0x18cf, 0x18cf, 0x0, 0x0, 0x0)
/usr/lib/go-1.13/src/internal/poll/fd_unix.go:169 +0x1cf
net.(*netFD).Read(0xc000386480, 0xc000c93300, 0x18cf, 0x18cf, 0x203000, 0x5, 0x18cf)
/usr/lib/go-1.13/src/net/fd_unix.go:202 +0x4f
net.(*conn).Read(0xc000010220, 0xc000c93300, 0x18cf, 0x18cf, 0x0, 0x0, 0x0)
/usr/lib/go-1.13/src/net/net.go:184 +0x68
crypto/tls.(*atLeastReader).Read(0xc0005485e0, 0xc000c93300, 0x18cf, 0x18cf, 0x6f50cdd19b786aae, 0xa9d93814e13c9de1, 0xc000f82a28)
/usr/lib/go-1.13/src/crypto/tls/conn.go:780 +0x60
bytes.(*Buffer).ReadFrom(0xc000057e58, 0x10795a0, 0xc0005485e0, 0x40bdf5, 0xde9900, 0xec2be0)
/usr/lib/go-1.13/src/bytes/buffer.go:204 +0xb4
crypto/tls.(*Conn).readFromUntil(0xc000057c00, 0x107a480, 0xc000010220, 0x5, 0xc000010220, 0x161)
/usr/lib/go-1.13/src/crypto/tls/conn.go:802 +0xec
crypto/tls.(*Conn).readRecordOrCCS(0xc000057c00, 0x0, 0x0, 0x0)
/usr/lib/go-1.13/src/crypto/tls/conn.go:609 +0x124
crypto/tls.(*Conn).readRecord(...)
/usr/lib/go-1.13/src/crypto/tls/conn.go:577
crypto/tls.(*Conn).Read(0xc000057c00, 0xc00192a5b9, 0x1a47, 0x1a47, 0x0, 0x0, 0x0)
/usr/lib/go-1.13/src/crypto/tls/conn.go:1255 +0x161
io.ReadAtLeast(0x7f454cc70098, 0xc000057c00, 0xc00192a5b9, 0x1a47, 0x1a47, 0x5, 0xc000a5f800, 0x176, 0x800)
/usr/lib/go-1.13/src/io/io.go:310 +0x87
github.com/jackc/chunkreader/v2.(*ChunkReader).appendAtLeast(...)
/root/go/pkg/mod/github.com/jackc/chunkreader/[email protected]/chunkreader.go:88
github.com/jackc/chunkreader/v2.(*ChunkReader).Next(0xc00007b7c0, 0x5, 0xc000a5f800, 0x176, 0x800, 0x6, 0xc000042580)
/root/go/pkg/mod/github.com/jackc/chunkreader/[email protected]/chunkreader.go:78 +0x132
github.com/jackc/pgproto3/v2.(*Frontend).Receive(0xc000424c00, 0xc000057f20, 0xfffffffe, 0xc000000000, 0x160)
/root/go/pkg/mod/github.com/jackc/pgproto3/[email protected]/frontend.go:72 +0x509
github.com/jackc/pgconn.(*PgConn).peekMessage(0xc0000d7600, 0xc0003aeb90, 0xc000057dc8, 0x0, 0xc000057dd8)
/root/go/pkg/mod/github.com/jackc/[email protected]/pgconn.go:466 +0x29e
github.com/jackc/pgconn.(*ResultReader).readUntilRowDescription(0xc0000d7698)
/root/go/pkg/mod/github.com/jackc/[email protected]/pgconn.go:1496 +0x66
github.com/jackc/pgconn.(*PgConn).execExtendedSuffix(0xc0000d7600, 0xc000464800, 0x14a, 0x400, 0xc0000d7698)
/root/go/pkg/mod/github.com/jackc/[email protected]/pgconn.go:1098 +0x2b6
github.com/jackc/pgconn.(*PgConn).ExecPrepared(0xc0000d7600, 0x10972a0, 0xc0000a2000, 0xc000bc67e0, 0xb, 0xc000a80fc0, 0x4, 0x4, 0xc000d41690, 0x4, ...)
/root/go/pkg/mod/github.com/jackc/[email protected]/pgconn.go:1043 +0x207
github.com/jackc/pgx/v4.(*Conn).Query(0xc000177100, 0x10972a0, 0xc0000a2000, 0xf4d956, 0x21c, 0xc0005c6c00, 0x4, 0x4, 0x0, 0x0, ...)
/root/go/pkg/mod/github.com/jackc/pgx/[email protected]/conn.go:680 +0xae7
github.com/jackc/pgx/v4/pgxpool.(*Conn).Query(0xc0006996c0, 0x10972a0, 0xc0000a2000, 0xf4d956, 0x21c, 0xc0005c6c00, 0x4, 0x4, 0x7f45518816d0, 0x0, ...)
/root/go/pkg/mod/github.com/jackc/pgx/[email protected]/pgxpool/conn.go:54 +0xb1
github.com/jackc/pgx/v4/pgxpool.(*Pool).Query(0xc00038af50, 0x10972a0, 0xc0000a2000, 0xf4d956, 0x21c, 0xc0005c6c00, 0x4, 0x4, 0x1, 0xc0019181a0, ...)
/root/go/pkg/mod/github.com/jackc/pgx/[email protected]/pgxpool/pool.go:496 +0xde
main.(*Games).GetGameIDsFriends(0x17f1e28, 0x7d25, 0xc0017dde30, 0x0, 0xa, 0x6, 0xc000aa7b00, 0xa, 0x10, 0x0)
/root/hanabi-live/server/src/models_games.go:420 +0x2fb
main.websocketConnectHistoryFriends(0xc0005c61c0)
/root/hanabi-live/server/src/websocket_connect.go:378 +0x71
main.websocketConnect(0xc0005c6140)
/root/hanabi-live/server/src/websocket_connect.go:101 +0x48d
github.com/gabstv/melody.(*Melody).HandleRequestWithKeys(0xc00007c300, 0x7f454cc821a8, 0xc000ce6e00, 0xc0017d4d00, 0xc0018580c0, 0xc001805738, 0x2)
/root/go/pkg/mod/github.com/gabstv/[email protected]/melody.go:188 +0x1a1
main.httpWS(0xc000ce6e00)
/root/hanabi-live/server/src/http_ws.go:114 +0x4e5
github.com/gin-gonic/gin.(*Context).Next(0xc000ce6e00)
/root/go/pkg/mod/github.com/gin-gonic/[email protected]/context.go:168 +0x3b
github.com/gin-contrib/sessions.Sessions.func1(0xc000ce6e00)
/root/go/pkg/mod/github.com/gin-contrib/[email protected]/sessions.go:54 +0x19e
github.com/gin-gonic/gin.(*Context).Next(0xc000ce6e00)
/root/go/pkg/mod/github.com/gin-gonic/[email protected]/context.go:168 +0x3b
github.com/gin-gonic/gin.CustomRecoveryWithWriter.func1(0xc000ce6e00)
/root/go/pkg/mod/github.com/gin-gonic/[email protected]/recovery.go:99 +0x6d
github.com/gin-gonic/gin.(*Context).Next(0xc000ce6e00)
/root/go/pkg/mod/github.com/gin-gonic/[email protected]/context.go:168 +0x3b
github.com/gin-gonic/gin.LoggerWithConfig.func1(0xc000ce6e00)
/root/go/pkg/mod/github.com/gin-gonic/[email protected]/logger.go:241 +0xe1
github.com/gin-gonic/gin.(*Context).Next(0xc000ce6e00)
/root/go/pkg/mod/github.com/gin-gonic/[email protected]/context.go:168 +0x3b
github.com/gin-gonic/gin.(*Engine).handleHTTPRequest(0xc0000a5040, 0xc000ce6e00)
/root/go/pkg/mod/github.com/gin-gonic/[email protected]/gin.go:555 +0x637
github.com/gin-gonic/gin.(*Engine).ServeHTTP(0xc0000a5040, 0x108e020, 0xc000302e00, 0xc0017d4d00)
/root/go/pkg/mod/github.com/gin-gonic/[email protected]/gin.go:511 +0x16c
net/http.serverHandler.ServeHTTP(0xc000b54b60, 0x108e020, 0xc000302e00, 0xc0017d4d00)
/usr/lib/go-1.13/src/net/http/server.go:2802 +0xa4
net/http.(*conn).serve(0xc000e901e0, 0x1097260, 0xc00091b9c0)
/usr/lib/go-1.13/src/net/http/server.go:1890 +0x875
created by net/http.(*Server).Serve
/usr/lib/go-1.13/src/net/http/server.go:2928 +0x384
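reading the trace, it looks like goroutine 3434 grabbed sessions.ConnectMutex in websocketConnect and is still holding it while blocked on a postgres query in GetGameIDsFriends, and goroutine 4307 has been waiting on that same mutex for 30+ seconds, so it's a lock being held across a slow database call rather than a classic lock cycle.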