yjs-scalable-ws-backend
Has there been some load testing on the ws-backend?
@kapv89 I did some load testing for a single instance of the ws-backend, where I tested two cases:
- an average number of connections (5) to different documents (5) - the first peak in the graphs
- a large number of connections (35) to one document - the second peak in the graphs
I used a document of 24,618 characters, so a large number of messages needs to be exchanged between these sockets.
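For reference, here is a minimal sketch of the second scenario as a Node script. It assumes the backend speaks the standard y-websocket protocol and uses a placeholder URL and room name; it is only an illustration of the connection pattern, not the exact client I used.

```ts
// Minimal sketch: 35 clients editing one shared document over y-websocket.
// The endpoint and room name below are placeholders.
import * as Y from 'yjs';
import { WebsocketProvider } from 'y-websocket';
import WebSocket from 'ws';

const SERVER_URL = 'ws://localhost:1234'; // placeholder endpoint
const ROOM = 'load-test';                 // all clients join the same document
const CLIENTS = 35;

const providers: WebsocketProvider[] = [];

for (let i = 0; i < CLIENTS; i++) {
  const doc = new Y.Doc();
  // WebSocketPolyfill is needed because this runs under Node, not in a browser.
  const provider = new WebsocketProvider(SERVER_URL, ROOM, doc, {
    WebSocketPolyfill: WebSocket as unknown as typeof globalThis.WebSocket,
  });
  provider.on('status', (event: { status: string }) => {
    console.log(`client ${i}: ${event.status}`);
  });
  // Each client writes a chunk of text, so every update fans out to the other 34 sockets.
  doc.getText('content').insert(0, `client-${i} `.repeat(100));
  providers.push(provider);
}

// Tear everything down after a minute so the server can release per-connection state.
setTimeout(() => providers.forEach((p) => p.destroy()), 60_000);
```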
I am a little concerned:
- Why does the load go up so heavily if we all use the very same document? In my opinion, the load should be very low, because there is just one state … for everyone.
- Also, vCPU usage is above 0.6 and memory increases by 300 MB for just 35 users on just one document … which is quite a lot.
A small calculation makes this hard for me to understand: ~25k characters is around 25 KB, so a really small document. Hence, 35 users * 25 KB ≈ 875 KB, which is a tiny load, yet we see such a dramatic jump in memory.
Hence, is there some way to do it more efficiently? 🙂
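To make the gap concrete, here is the same back-of-the-envelope calculation as a tiny Node snippet (the numbers are the ones from this thread; the one-byte-per-character size is an assumption for plain text):

```ts
// Naive expectation vs. observation, using the numbers from this thread.
const docChars = 24_618;   // characters in the test document
const bytesPerChar = 1;    // assumption: roughly 1 byte per character of plain text
const clients = 35;

const naiveTotalBytes = docChars * bytesPerChar * clients;
console.log(`naive total: ~${(naiveTotalBytes / 1024).toFixed(0)} KB`); // ~841 KB, vs. the ~300 MB increase observed
```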
Hey .. very heavy load at work .. will try to look into this coming week.
And no .. no load testing done by me ..
@kapv89 Many thanks for letting me know 🙂
Please let me know once you have some updates. 🙂
@kapv89 Have you already had a chance to look into this issue?
Please let me know if I can be of any help. 🙂
@junoriosity ... @sbalikondwar on Discord was doing some load testing on code from this repo for their company ... I don't know how far they got, but if they are using code from this solution, they might have some helpful insights for you.
@kapv89 Sounds very interesting, could you tell me more?
They were doing some load testing and were observing increasing memory usage .. the suggestion was that it might be because of the undo-log and might clear up after the connections close .. we didn't connect after that .. you should probably reach out to him.
@sbalikondwar Could you give us some input on that matter?
@junoriosity Hey, I am curious about how you conducted the load testing and which tool you used.
How did you emulate concurrent users sending WebSocket messages to the server?
I tried using k6 for this purpose, but I couldn't succeed in sending the correct message, or at least it didn't work as expected.
Could you suggest an alternative way to perform the testing?
@zyhzsh What I did is the following:
- I spun up a K8s cluster
- I deployed the Pod in the K8s cluster
- I opened many connections to one document
- I inserted a large number of images into that document (see the sketch below)
- I repeated the last two steps.
That was what caused the issues I mentioned.
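Roughly, the image step follows the pattern below. The sketch assumes the images end up in the shared document as base64-like strings in a Y.Array, which is an assumption about the editor integration; how your editor actually embeds images may differ.

```ts
// Rough sketch of the "insert many images" step.
// Assumption: images are stored as base64-like strings in a shared Y.Array;
// real editor bindings may embed them differently (e.g. as Y.XmlElement nodes).
import * as Y from 'yjs';
import { WebsocketProvider } from 'y-websocket';
import WebSocket from 'ws';

const doc = new Y.Doc();
const provider = new WebsocketProvider('ws://localhost:1234', 'load-test', doc, {
  WebSocketPolyfill: WebSocket as unknown as typeof globalThis.WebSocket,
});

// ~100 KB of dummy "image" data per insert; every insert is broadcast to all
// other clients connected to the same document, so traffic grows with both
// the number of images and the number of connections.
const fakeImage = 'A'.repeat(100 * 1024);
const images = doc.getArray<string>('images');

let inserted = 0;
const timer = setInterval(() => {
  images.push([fakeImage]);
  if (++inserted >= 50) {
    clearInterval(timer);
    provider.destroy();
  }
}, 500);
```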