Are there any plans to implement prepared writes?
I'm currently working on a websocket-based broadcasting server and would be more than happy to use this great library, but it has no prepared-write functionality like some other websocket libraries (e.g. gorilla/websocket and lxzan/gws). This would help a lot to optimize broadcasting of identical messages. Are there any plans to implement this feature in some way (or to accept PRs that implement it)?
I'm not sure what the API should look like, but it would be cool to have something similar to the following:
var connections []*websocket.Conn

// Create a prepared message without generating frames yet.
pm := websocket.PrepareMessage(websocket.MessageText, []byte("hello"))
defer pm.Close()

for _, c := range connections {
	// Lazily initialize the frame buffer for "pm" and reuse it for other connections.
	// For each set of compression settings (mode and threshold) there is a separate
	// pre-initialized frame buffer, so if some connections have compression enabled
	// and others don't, two kinds of frame buffers are initialized, one per variant.
	if err := c.WritePrepared(ctx, pm); err != nil {
		log.Printf("failed to write prepared message: %v\n", err)
	}
}
@kvizyx hey, and thanks for the proposal. This hasn't been on our radar but it looks useful enough that we could add it. 👍🏻
I see you included pm.Close() in your proposal; can you think of a use case where it'd be beneficial vs. just letting the GC take care of the prepared message?
And yes, in general PRs are very welcome, but I don't mind taking this on. 👍🏻
I see you included pm.Close() in your proposal; can you think of a use case where it'd be beneficial vs. just letting the GC take care of the prepared message?
My initial idea was to use a memory pool for the frame buffers in the prepared message, so in that case pm.Close() would be required to return the resources, but maybe I'm just overthinking it. Can you share your vision of how the API for this feature should look in general?
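A rough sketch of what I mean by the pool idea, purely hypothetical (none of these names exist in the library, and it's written as if it lived inside the websocket package next to MessageType):

// Sketch only: frame buffers come from a pool and are returned on Close.
type compressionKey struct {
	enabled   bool
	threshold int
}

var frameBufPool = sync.Pool{
	New: func() any { return new(bytes.Buffer) },
}

type PreparedMessage struct {
	typ  MessageType
	data []byte

	mu sync.Mutex
	// One lazily built frame buffer per compression variant, created on the
	// first WritePrepared against a connection using that variant.
	frames map[compressionKey]*bytes.Buffer
}

// Close returns the cached frame buffers to the pool.
func (pm *PreparedMessage) Close() error {
	pm.mu.Lock()
	defer pm.mu.Unlock()
	for k, buf := range pm.frames {
		buf.Reset()
		frameBufPool.Put(buf)
		delete(pm.frames, k)
	}
	return nil
}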
Also, I'd be happy to work on it, if you don't mind.
@kvizyx Honestly, the proposed API is quite alright; just a small naming adjustment and dropping the Close would be a good fit IMO:
// PreparedMessage ...
type PreparedMessage struct { ... }
// NewPreparedMessage ...
func NewPreparedMessage(typ MessageType, data []byte) *PreparedMessage
// WritePrepared ...
func (c *Conn) WritePrepared(ctx context.Context, pm *PreparedMessage) error
My initial idea was to use memory pool for frame buffers in prepared message, [...]
Can you give one or two example scenarios where this would be beneficial? Personally I think the gain will be marginal, but I'm also not the target audience for this feature. Off the top of my head, the following risks come to mind:
- Increased API surface and needing to remember to call Close
- Pooled buffers too small -> grow -> increased memory consumption
- Guessing an optimal buffer size (1KB, 4KB, ...?) that doesn't match implementation requirements
- Introducing library-level state (user-controlled pools are preferable)
Also, I'd be happy to work on it, if you don't mind.
Alright. 👍🏻
Off the top of my head, a few things we may have to consider:
- We only use one compression level, so that may simplify things.
- Context/no context takeover?
- For client writes, cache or generate new mask key every time?
- I'd probably lean towards new mask key to avoid weirdness, we're probably not optimizing for clients anyway.
- Wasm? (probably not much to do since we hand over to browser)
@mafredri Sorry for the late response.
- Increased API surface and needing to remember to call Close
- Pooled buffers too small -> grow -> increased memory consumption
- Guessing an optimal buffer size (1KB, 4KB, ...?) that doesn't match implementation requirements
- Introducing library-level state (user-controlled pools are preferable)
Fair, but the problem is that I don't see how we could make a minimalistic API that fits into the current one and lets users supply buffers under their own control rather than doing it internally. I think the most rational solution for now is to forget about pooling and implement this simply, without memory pools. After all, it won't be a big deal to add that later if necessary.
Btw, the same reasoning applies to the wsjson package, where a memory pool is used for read operations, but I guess that's fine since it's just a utility package.
- For client writes, cache or generate new mask key every time?
- I'd probably lean towards new mask key to avoid weirdness, we're probably not optimizing for clients anyway.
I can hardly imagine a situation where someone would need to broadcast identical messages to a large number of different servers from clients in the same process, so caching the mask key would only add unnecessary complexity and weirdness to the code, as you said.
- Wasm? (probably not much to do since we hand over to browser)
Honestly, I don't think this feature will be relevant for wasm users.
In addition, it would probably also be useful to add a prepared write method to the wsjson package, and maybe a prepared writer as well (though I'm not sure about that, since the API for it would look weird).
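For the wsjson part, a minimal sketch of what I have in mind, again with hypothetical names and assuming the NewPreparedMessage constructor from above:

// Hypothetical helper in the wsjson package: encode the value once, then
// broadcast the resulting prepared message to many connections.
func PrepareMessage(v interface{}) (*websocket.PreparedMessage, error) {
	data, err := json.Marshal(v)
	if err != nil {
		return nil, err
	}
	// wsjson sends JSON payloads as text messages.
	return websocket.NewPreparedMessage(websocket.MessageText, data), nil
}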