
Reduce network traffic by delaying incremental framebuffer update requests

Open redneck-f25 opened this issue 3 months ago • 7 comments

Is your feature request related to a problem? Please describe. I need to observe the screen of a remote device. The screen is recorded with a USB camera and ffplay on another headless device and shared with TightVNC and noVNC. Relevant changes on the screen happen only at long intervals.

As mentioned in 6.4.3 FramebufferUpdateRequest (rfbproto-3.8.pdf), the client may regulate the rate at which it sends incremental FramebufferUpdateRequests to avoid hogging the network. I want to use this to reduce network traffic over a metered connection.
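For reference, the message being delayed here is tiny; a sketch of its wire format per 6.4.3 (illustrative code, not from noVNC; DataView writes big-endian by default, matching the protocol):

// FramebufferUpdateRequest (client to server), 10 bytes:
//   U8 message-type (3), U8 incremental, U16 x, U16 y, U16 width, U16 height
function buildFbUpdateRequest(incremental, x, y, w, h) {
    const view = new DataView(new ArrayBuffer(10));
    view.setUint8(0, 3);                   // message-type
    view.setUint8(1, incremental ? 1 : 0); // incremental flag
    view.setUint16(2, x);
    view.setUint16(4, y);
    view.setUint16(6, w);
    view.setUint16(8, h);
    return new Uint8Array(view.buffer);
}

So the savings come not from the 10-byte requests themselves but from the FramebufferUpdates the server no longer sends in response to them.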

Describe the solution you'd like A new property delayIncrementalUpdateRequests on RFB instances that delays incremental FramebufferUpdateRequest (client to server) messages after FramebufferUpdate (server to client) messages.

If you like the idea, I would integrate my current implementation (see below) into rfb.js and vnc.html and create a PR.

Describe alternatives you've considered None.

Additional context

Below is a redacted screenshot from a page with two <iframe>s to the same (no)VNC server.

  • https://example.org/?properties={"compressionLevel":9,"qualityLevel":4}
  • https://example.org/?properties={"compressionLevel":9,"qualityLevel":4,"delayIncrementalUpdateRequests":500}

Over about 11 minutes, with a delay of 500 ms, I could reduce the transferred bytes to 16% (from 93.56 MiB to 14.72 MiB) and the number of messages (TCP frames/IP packets) to 14% (from 90523 to 12052).

Image

Here is my current implementation for this use case:

const $$screen = document.getElementById('screen');
// Derive the WebSocket URL from the page URL (.../vnc.html -> .../websockify).
const rfbUrl = location.origin.replace(/^http/, 'ws') + location.pathname.replace(/[^/]*$/, 'websockify');
// FIXME: DO NOT DO THIS IN REAL LIFE!!!
const rfbOptions = JSON.parse(decodeURIComponent(location.search.match(/[?&]options=([^&]*)/)[1]));
const rfbProperties = JSON.parse(decodeURIComponent(location.search.match(/[?&]properties=([^&]*)/)[1]));
const rfb = Object.assign(new RFB($$screen, rfbUrl, rfbOptions), rfbProperties);
// 6.4.3 FramebufferUpdateRequest (rfbproto-3.8.pdf)
//
// In the case of a fast client, the client may want to regulate the rate at which it sends
// incremental FramebufferUpdateRequests to avoid hogging the network.
//
if (rfb.delayIncrementalUpdateRequests) {
  RFB.messages.fbUpdateRequest = ((fbUpdateRequest)=>{
    let timer = null;
    const delay = +rfb.delayIncrementalUpdateRequests;
    return (sock, incremental, x, y, w, h) => {
      // Non-incremental (full) requests go out immediately.
      if (!incremental) {
        fbUpdateRequest(sock, incremental, x, y, w, h);
        return;
      }
      // An incremental request is already scheduled; coalesce this one.
      if (timer !== null) {
        return;
      }
      timer = setTimeout(() => {
        timer = null;
        fbUpdateRequest(sock, incremental, x, y, w, h);
      }, delay);
    };
  })(RFB.messages.fbUpdateRequest);
}
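Note that this patches the static RFB.messages.fbUpdateRequest, so the single timer in the closure is shared by all RFB instances created in the same window; the two <iframe>s above each get their own copy only because each frame runs in its own JavaScript realm.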

BTW: The measurements were carried out with my WebSocketGuardian.

<script src="./core/WebSocketGuardian.js"></script>
<script type="module">
const encoder = new TextEncoder();
const startTime = Date.now();
let bytes = 0;
let bytesSent = 0;
let bytesRcvd = 0;
let messages = 0;
let messagesSent = 0;
let messagesRcvd = 0;

WebSocket.onSendMessage = (event) => {
  const data = event.data;
  let length = typeof data === 'string' ? encoder.encode(data).length : data.length;
  // Add the estimated WebSocket frame overhead: 2/4/10-byte header
  // (depending on payload length) plus the 4-byte client masking key.
  length += length <= 125 ? 6 : length <= 0xffff ? 8 : 14;
  bytesSent += length;
  bytes += length;
  ++messagesSent;
  ++messages;
  updateStatus();
};

WebSocket.onRecvMessage = (event) => {
  const data = event.data;
  let length = typeof data === 'string' ? encoder.encode(data).length : data.byteLength;
  // Server-to-client frames are not masked, so this overestimates the
  // overhead by 4 bytes per frame.
  length += length <= 125 ? 6 : length <= 0xffff ? 8 : 14;
  bytesRcvd += length;
  bytes += length;
  ++messagesRcvd;
  ++messages;
  updateStatus();
};
</script>
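updateStatus() is not shown above; a minimal sketch that would live in the same module (the status element id is an assumption):

// Hypothetical helper: render the counters into a <pre id="status"> element.
function updateStatus() {
    const seconds = (Date.now() - startTime) / 1000;
    document.getElementById('status').textContent =
        `${bytes} B total (${bytesSent} sent / ${bytesRcvd} rcvd), ` +
        `${messages} msg (${messagesSent} sent / ${messagesRcvd} rcvd), ` +
        `${(bytes / seconds).toFixed(1)} B/s`;
}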

redneck-f25 avatar Aug 30 '25 13:08 redneck-f25

That's a nice reduction of network traffic!

Shouldn't it be possible to limit the frame rate in ffplay to get similar results? Then the FramebufferUpdates would contain less pixel data over time.

CendioZeijlon avatar Sep 10 '25 07:09 CendioZeijlon

Maybe we could try to fiddle with the remote device, but it's easier to change the behavior of the local client.

Another use case: in a vehicle for public transportation we have 16 displays, each with its own CPU, GPU, framebuffer, and VNC server. There are fewer changes over time. Over 10 minutes I could reduce the traffic from 240 MiB to 9 MiB with a delay of 10 seconds.

| delay (ms) | duration   | total (data + headers)  | average (data + headers)  | messages     |
|------------|------------|-------------------------|---------------------------|--------------|
|          0 | 0d00:10:22 | 239.03 MiB +   7.60 MiB |  1.35 GiB/h + 43.93 MiB/h | 249127 msg/h |
|      10000 | 0d00:10:24 |   8.99 MiB + 322.70 KiB | 51.82 MiB/h +  1.82 MiB/h |  17796 msg/h |

Total and average show the WebSocket data length plus the assumed WS, TCP, and IP header sizes.

Image

redneck-f25 avatar Sep 11 '25 13:09 redneck-f25

In the meantime I forked the repo and created some branches. Do you think it's worth doing PRs?

  • This issue: feature/delay-fb-update-requests
  • Crop the region to transfer: feature/crop-fb
  • Show local cursors for pointerless devices (or TightVNC) and while moving the visible area: feature/local-cursors
  • Monitor traffic: [feature/traffic-stats](https://github.com/bitctrl/noVNC/tree/feature/traffic-stats)

redneck-f25 avatar Sep 11 '25 14:09 redneck-f25

The FramebufferUpdateRequest was meant for flow control, not for this use case, so I'm cautious that there might be unforeseen consequences.

One immediate problem is that advanced VNC servers do not use it, and will not be throttled.

CendioOssman avatar Sep 11 '25 14:09 CendioOssman

@CendioOssman what is flow control in this case?

One immediate problem is that advanced VNC servers do not use it, and will not be throttled.

Since they are using continuous frame buffer updates instead?

CendioZeijlon avatar Sep 11 '25 18:09 CendioZeijlon

@CendioOssman what is flow control in this case?

Making sure the client doesn't get updates faster than it has time to process them. Failure to do so could result in excessive buffering, or even lost data if the transport doesn't have its own flow control.
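In other words, the request/reply cycle is itself the throttle: the client asks for the next update only after it has finished with the previous one, so a slow client naturally slows the server down. A rough sketch of that loop (illustrative pseudocode with made-up helper names, not noVNC internals):

// Client-driven update loop: the next incremental request goes out only
// after the previous update has been decoded and rendered.
async function updateLoop(conn) {
    sendFbUpdateRequest(conn, false);    // initial full update
    for (;;) {
        const update = await receiveFbUpdate(conn);
        decodeAndRender(update);         // a slow client lingers here...
        sendFbUpdateRequest(conn, true); // ...and the server waits for this
    }
}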

One immediate problem is that advanced VNC servers do not use it, and will not be throttled.

Since they are using continuous frame buffer updates instead?

Yup.
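For context, the mechanism meant here is the EnableContinuousUpdates client message (type 150 in rfbproto.rst). A sketch of its layout:

// EnableContinuousUpdates (client to server), per rfbproto.rst:
//   U8  message-type (150)
//   U8  enable-flag
//   U16 x-position, y-position, width, height
// While enabled, the server sends FramebufferUpdates for the region on its
// own; FramebufferUpdateRequests, and hence any delay on them, are bypassed.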

CendioOssman avatar Sep 12 '25 07:09 CendioOssman

@CendioOssman @CendioZeijlon thanks for the discussion and noVNC in general.

The FramebufferUpdateRequest was meant for flow control, not for this use case.

As noted in the docs (rfbproto-3.8.pdf, 6.4.3 FramebufferUpdateRequest), the client may want to regulate the rate at which it sends incremental FramebufferUpdateRequests to avoid hogging the network.

EDIT: Found another reference :-) rfbproto.rst#743framebufferupdaterequest

A noVNC RFB instance calls RFB.messages.fbUpdateRequest() from this._negotiateServerInit() with incremental = false, and from this._normalMsg() immediately after handling each and every FramebufferUpdate if continuous updates are not enabled.
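So the second call site is where a delay would hook in; roughly (simplified, not a verbatim rfb.js excerpt):

// Simplified from rfb.js: after each FramebufferUpdate has been handled,
// the client immediately asks for the next incremental update, unless
// continuous updates are active.
case 0: // FramebufferUpdate
    ret = this._framebufferUpdate();
    if (ret && !this._enabledContinuousUpdates) {
        RFB.messages.fbUpdateRequest(this._sock, true, 0, 0,
                                     this._fbWidth, this._fbHeight);
    }
    return ret;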

What I did is exactly what The RFB Protocol documentation says: I added a parameter to optionally slow down the client, but only if the user wants it. There is no additional buffering.

Image

If you want to have a look at it, the branch feature/delay-fb-update-requests is available at https://download.bitctrl.de/novnc-u6Nm/noVNC-feature-delay-fb-update-requests/vnc.html?incfbureq_delay=2000.

All of my earlier-mentioned features, plus your latest commits, are available on our main branch at https://download.bitctrl.de/novnc-u6Nm/noVNC-main/vnc.html.

RFB.messages = {
    fbUpdateRequest(sock, incremental, x, y, w, h, { rfb, delay } = {}) {
        // Only incremental requests are delayed; delay === 0 and
        // rfb._incfbureqTimer === false both mean "send immediately".
        if (incremental && delay !== 0 && rfb?._incfbureqTimer !== false) {
            if (delay !== undefined) {
                if (rfb._incfbureqTimer === null) {
                    // save bound function for calling in incfbureqDelay setter
                    // if value is changed while request is delayed
                    rfb._incfbureq = RFB.messages.fbUpdateRequest.bind(
                        null,
                        sock, incremental, x, y, w, h,
                        { rfb, delay: undefined }
                    );
                    rfb._incfbureqTimer = setTimeout(rfb._incfbureq, delay);
                }
                // a request is already pending: coalesce this one
                return;
            } else {
                // called back from the timer: clear the pending state and
                // fall through to actually send the request
                rfb._incfbureq = null;
                rfb._incfbureqTimer = null;
            }
        }
        /* do it ... */
    },
}
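The branch exposes the delay as an incfbureqDelay property (per the setter mentioned in the comment above and the incfbureq_delay URL parameter), so usage looks roughly like this (sketch, host name redacted):

// Sketch: enable a 2 s delay on an RFB instance from the branch.
const rfb = new RFB(document.getElementById('screen'), 'ws://example.org/websockify');
rfb.incfbureqDelay = 2000; // delay incremental FramebufferUpdateRequests by 2 s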

Measurements

I did some more measurements. There are two devices, one in the lab and one on the road over a metered mobile broadband network connection. The devices have 1920x1080 displays which are cropped to 1920x360 (but they don't know that). I don't want to transfer the "secret offscreen area" 1920x720+360+0. Each screen is observed first in full size and second without the video area 640x360-0+0. Each observation is done once without a delay (the default behavior) and once with a delay of 2000 ms.

I realized that I accidentally cropped the region to a height of 320 instead of 360 pixels. I guess that doesn't matter much for the results.

I did another comparison with different delays for the slow-changing non-video area (1280x360+0+0). The differences are not that big for a single device here, but we have to monitor multiple devices in one vehicle at the same time over the same mobile broadband connection.

All in all, this feature is not useful for sessions where you want to work interactively on a remote device, or where the server is smarter. It is useful for monitoring remote devices (at least if they run Windows and TightVNC). I guess we can have both together :-).

| dev (crop)         | delay (ms) | duration   | total (data + headers)  | average (data + headers)   | messages     | IP packets  | avg send   | send pkts | avg recv     | recv pkts |
|--------------------|------------|------------|-------------------------|----------------------------|--------------|-------------|------------|-----------|--------------|-----------|
| lab (1920x320+0+0) |          0 | 0d00:10:02 |   0.57 GiB +  17.82 MiB |  3.41 GiB/h + 106.53 MiB/h | 912827 msg/h | 2602730 pph | 0.74 b/msg | 0.1 p/msg | 3.91 KiB/msg | 2.8 p/msg |
| lab (1920x320+0+0) |       2000 | 0d00:10:02 |  12.49 MiB + 390.29 KiB | 74.66 MiB/h +   2.28 MiB/h |  17996 msg/h |   55944 pph | 0.97 b/msg | 0.1 p/msg | 4.25 KiB/msg | 3.0 p/msg |
| lab (1280x320+0+0) |          0 | 0d00:10:00 | 407.65 KiB +  13.81 KiB |  2.39 MiB/h +  82.85 KiB/h |    600 msg/h |    1985 pph | 3.68 b/msg | 0.3 p/msg | 4.07 KiB/msg | 3.0 p/msg |
| lab (1280x320+0+0) |       2000 | 0d00:10:00 | 407.66 KiB +  14.09 KiB |  2.39 MiB/h +  84.54 KiB/h |    756 msg/h |    1998 pph | 2.97 b/msg | 0.2 p/msg | 3.23 KiB/msg | 2.4 p/msg |
| mbn (1920x320+0+0) |          0 | 0d00:12:04 | 171.85 MiB +   5.70 MiB |  0.83 GiB/h +  28.35 MiB/h | 208479 msg/h |  700588 pph | 0.86 b/msg | 0.1 p/msg | 4.20 KiB/msg | 3.3 p/msg |
| mbn (1920x320+0+0) |       2000 | 0d00:12:06 |  15.17 MiB + 508.50 KiB | 75.20 MiB/h +   2.46 MiB/h |  16094 msg/h |   61152 pph | 1.23 b/msg | 0.1 p/msg | 4.78 KiB/msg | 3.7 p/msg |
| mbn (1280x320+0+0) |          0 | 0d00:12:02 |   1.20 MiB +  44.52 KiB |  5.99 MiB/h + 221.96 KiB/h |   1850 msg/h |    5280 pph | 2.37 b/msg | 0.2 p/msg | 3.32 KiB/msg | 2.6 p/msg |
| mbn (1280x320+0+0) |       2000 | 0d00:12:04 |   1.01 MiB +  36.88 KiB |  5.01 MiB/h + 183.37 KiB/h |   1491 msg/h |    4365 pph | 2.55 b/msg | 0.2 p/msg | 3.43 KiB/msg | 2.7 p/msg |
| mbn (1280x360+0+0) |          0 | 0d00:10:00 |   2.74 MiB + 100.15 KiB | 16.43 MiB/h +   0.59 MiB/h |   4918 msg/h |   14299 pph | 1.96 b/msg | 0.2 p/msg | 3.42 KiB/msg | 2.7 p/msg |
| mbn (1280x360+0+0) |       2000 | 0d00:10:00 |   2.90 MiB +  99.25 KiB | 17.39 MiB/h +   0.58 MiB/h |   3994 msg/h |   14382 pph | 1.78 b/msg | 0.2 p/msg | 4.46 KiB/msg | 3.4 p/msg |
| mbn (1280x360+0+0) |       5000 | 0d00:10:02 |   2.43 MiB +  86.94 KiB | 14.55 MiB/h +   0.51 MiB/h |   3634 msg/h |   12472 pph | 2.44 b/msg | 0.3 p/msg | 4.10 KiB/msg | 3.1 p/msg |
| mbn (1280x360+0+0) |      10000 | 0d00:10:04 |   2.12 MiB +  73.66 KiB | 12.63 MiB/h + 438.90 KiB/h |   2806 msg/h |   10600 pph | 2.32 b/msg | 0.3 p/msg | 4.61 KiB/msg | 3.5 p/msg |
  • 20250912001-lab.png
  • 20250912001-lab-delay.png
  • 20250912001-lab-no-video.png
  • 20250912001-lab-no-video-delay.png
  • 20250912002-mbn.png
  • 20250912002-mbn-delay.png
  • 20250912002-mbn-no-video.png
  • 20250912002-mbn-no-video-delay.png

redneck-f25 avatar Sep 12 '25 16:09 redneck-f25