NinjaTerm
1000Hz ASCII data streams (~50KBps)
Expected Behavior
The terminal renders arbitrarily "fast" data streams at a ~60Hz refresh rate without blocking the event loop.
Current Behavior
I LOVE this project and see a bright future ahead! Generally, we write custom parsers per project, you know the deal 🙄.
Presently we have a rather demanding one that streams up to 5 channels at 1000Hz - around a 50KBps overall data rate (400,000 bps). I know that normal baud rates can't support that; it's USB CDC ACM. This sort of stream seems to cause the PWA to flop: it updates somewhat in the terminal scroll, but heavily batched, and then it just stops. The app becomes unresponsive and the tab has to be closed.
Steps to Reproduce
Run a 1000Hz ASCII stream, with or without graphing.
Possible Solution (Not obligatory)
I'd need to look into how the streams are being buffered by this app.
Your Environment
- NinjaTerm version (displayed in upper right-hand side of window): 4.13.0
- Windows 11
- Chrome
@JPHutchins thanks for the kind words r.e. NinjaTerm!
50kBps, a.k.a. 50k chars/second, is quite a lot :-D I've never tested it at such a rate, so it's not surprising that it's not working so well. I'm interested to know how much faster we could make NinjaTerm, and if we could make it support such rates! What is your use case at this data rate? I presume that you're not trying to read the data in the terminal at these speeds, so are you just using NinjaTerm for logging purposes? Or are there bursts of data at this rate, and then you want to inspect it in the Terminal Pane during the quiet periods?
My thoughts r.e. speeding NinjaTerm up:
- Remove any debug log statements (console.log) - these would really slow things down at those speeds
- Do you need ANSI escape code support? Potentially there could be an option to disable that, along with many other features, to give a bare-bones display of text; this would make it a little quicker
- See what batching could be done to reduce the number of re-renders. I think this already happens, so I'm not confident this would offer much speed up.
- We could add in a "Disable rendering of data" mode which does not render the data to the terminal panes and only logs the incoming data to disk. This would give the biggest speed up. Some visual indicators could be added to the bottom bar showing the current TX/RX data rates in kbps (averaged over the last few seconds).
- Perhaps there is a hybrid mode where it logs data to disk, and only renders it if there are spare CPU cycles, i.e. NinjaTerm prioritizes logging and updates the display on a best-effort basis. Potentially threads (I think they are called web workers?) could help distribute the load over multiple cores and free up the main event loop. A rough sketch of this hybrid idea is just below this list.
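To make that hybrid idea a bit more concrete, here's a rough sketch of what I'm imagining - note that `appendToLogFile()` and `renderBatchToTerminal()` are just placeholders, not existing NinjaTerm functions:

```typescript
// Sketch only: prioritise logging, render on a best-effort ~60 Hz cadence.
// appendToLogFile() and renderBatchToTerminal() are placeholders, not NinjaTerm APIs.
declare function appendToLogFile(chunk: Uint8Array): void;
declare function renderBatchToTerminal(chunks: Uint8Array[]): void;

const pendingChunks: Uint8Array[] = [];
let renderScheduled = false;

function onSerialData(chunk: Uint8Array): void {
  appendToLogFile(chunk);    // logging always happens, so no data is dropped
  pendingChunks.push(chunk); // display is queued rather than rendered immediately
  if (!renderScheduled) {
    renderScheduled = true;
    // requestAnimationFrame caps display work at the monitor refresh rate (~60 Hz),
    // so the terminal never tries to re-render per character.
    requestAnimationFrame(() => {
      renderScheduled = false;
      renderBatchToTerminal(pendingChunks.splice(0, pendingChunks.length));
    });
  }
}
```

One caveat: requestAnimationFrame is throttled when the tab is in the background, so a logging-only mode would probably want a plain timer (or no display scheduling at all) instead.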
The first thing to do is probably to use Chrome dev tools (or similar) and analyze the performance under a heavy load. This will show where the bottlenecks are and what is taking too long (time spent in each function, etc.).
Are you willing to help with this feature? I develop NinjaTerm in my spare time and it would be awesome to have other people contribute to it.
I'm hoping to find some time later this week to create a fake virtual port, dump data in at 50kB/s and see what happens :-)
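If it helps with testing, something like the following Node/TypeScript script could pump roughly 50kB/s into one end of a virtual pair. This is a sketch only: it assumes the `serialport` npm package and a com0com-style virtual pair, and the `COM10` path is just an example.

```typescript
// Sketch: write ~50 kB/s of ASCII into one end of a virtual serial pair,
// with NinjaTerm connected to the other end.
// Assumes the `serialport` npm package; the port path is an example.
import { SerialPort } from 'serialport';

const port = new SerialPort({ path: 'COM10', baudRate: 115200 });

// Emit 10 lines every 10 ms -> ~1000 lines/s. Each line (timestamp + 5 channels)
// is roughly 50 bytes, giving on the order of 50 kB/s overall.
setInterval(() => {
  let batch = '';
  for (let i = 0; i < 10; i++) {
    const samples = Array.from({ length: 5 }, () => (Math.random() * 1000).toFixed(2));
    batch += `${Date.now()},${samples.join(',')}\r\n`;
  }
  port.write(batch);
}, 10);
```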
Yes, consider it my responsibility until some other crazy people want it!
Good idea re: Chrome dev tools and console.log.
Use case is data acquisition + live graphing during product development. For the particular device I tested, the live graphing is presently handled by a Python application + matplotlib.
I've had good experiences with using com0com to test various serial applications in the past.
Worth noting that TeraTerm can in fact render the 50KBps text... nothing can stop TeraTerm... I briefly worked on a Rust-based serial terminal and ran into performance issues with how quickly I was updating Windows Terminal. And issues with ANSI escapes 😄. More generally, I've had issues with "real time" data in web frameworks, but I think that's largely due to my poor understanding of React + JS.
A few questions. Is this PWA using web workers? If not, are you open to using web workers? I've never used them, but there are new "shared memory" features that might be compelling.
The idea would be to keep the window's OS thread mostly idle so it can handle the event loop, and put the parseRxData work on its own thread. Typically with web workers this sort of approach isn't great because of the de/serialization overhead, but if shared memory is supported then the signaling between the threads could be simple events (e.g. "next buffer is ready to be printed to the terminal window and/or graphed").
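Roughly what I'm picturing is below. It's just a sketch of the signaling pattern - the file names, ring buffer layout, and worker-creation syntax are assumptions on my part, and SharedArrayBuffer requires the page to be cross-origin isolated (COOP/COEP headers).

```typescript
// main.ts - sketch of the shared-memory idea, not NinjaTerm's current architecture.
const dataBuf = new SharedArrayBuffer(1 << 20); // 1 MiB ring buffer of raw RX bytes
const ctrlBuf = new SharedArrayBuffer(4);       // single Int32 write cursor
const data = new Uint8Array(dataBuf);
const writeCursor = new Int32Array(ctrlBuf);

const worker = new Worker(new URL('./rxParser.worker.ts', import.meta.url), { type: 'module' });
worker.postMessage({ dataBuf, ctrlBuf }); // SharedArrayBuffers are shared, not copied

// Called from the serial read loop on the main thread.
function onSerialData(chunk: Uint8Array): void {
  const start = Atomics.load(writeCursor, 0);
  for (let i = 0; i < chunk.length; i++) {
    data[(start + i) % data.length] = chunk[i];
  }
  Atomics.store(writeCursor, 0, (start + chunk.length) % data.length);
  worker.postMessage('rx'); // tiny signal, no payload de/serialization
}

// rxParser.worker.ts - parsing happens off the main thread.
// (A real version would also guard against the writer lapping the reader.)
let rxData: Uint8Array;
let rxWriteCursor: Int32Array;
let readIndex = 0;

self.onmessage = (e: MessageEvent) => {
  if (typeof e.data === 'object') {
    rxData = new Uint8Array(e.data.dataBuf);
    rxWriteCursor = new Int32Array(e.data.ctrlBuf);
    return;
  }
  const end = Atomics.load(rxWriteCursor, 0);
  while (readIndex !== end) {
    // parse bytes here (ANSI escape codes, graph samples, ...)
    readIndex = (readIndex + 1) % rxData.length;
  }
  // then post back something small, e.g. "N new rows ready", not the raw bytes
};
```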
Good news - with a 1000Hz stream all data is received and rendered. Just not live.
Here's a recording, you can see where the stream starts.
I think the goal here should be to throttle changes to React state, because a lot of the stack looks like React virtual DOM / GUI state updates that probably don't need to be happening so fast (at a ~1ms interval 🤣).
I'll probably need to understand whether anything in this method could cause React to think that the DOM needs an update. Perhaps this.terminalRows is being listened to and causes a GUI update? If so, it could be throttled so that it sees batches of characters.
https://github.com/gbmhunter/NinjaTerm/blob/76f7f2cc32093db9abc3ad2a3a6e8b9737c3e506/src/model/Terminals/SingleTerminal/SingleTerminal.ts#L706-L806
I think that the cause of frequent component re-renders may be the makeAutoObservable(this) on the TerminalRow.
If I can confirm that TerminalRow is re-rendering on each character, then I'll look into triggering TerminalRow renders on a clock rather than on receipt of characters.
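If that's the case, the shape of the fix I'd try is something like this. It's a sketch only - rows are reduced to plain strings and the class name is made up, so it's not the real SingleTerminal code.

```typescript
// Sketch of "render on a clock": incoming characters go into a plain staging
// buffer; the observable state is only touched on a fixed interval.
import { makeAutoObservable } from 'mobx';

class ThrottledTerminal {
  // Observable: the React view subscribes to this.
  terminalRows: string[] = [];

  // Native private field: invisible to makeAutoObservable, so appending to it
  // at 50 kB/s never notifies any observer.
  #pendingChars = '';

  constructor(flushIntervalMs = 16) { // ~60 Hz
    makeAutoObservable(this);
    setInterval(() => this.flush(), flushIntervalMs);
  }

  // Called for every incoming chunk of RX data.
  addChars(chars: string): void {
    this.#pendingChars += chars;
  }

  // One observable mutation per tick instead of one per character.
  private flush(): void {
    if (this.#pendingChars.length === 0) return;
    const lines = this.#pendingChars.split('\n'); // simplified: ignores partial lines, ANSI codes
    this.#pendingChars = '';
    this.terminalRows.push(...lines);
  }
}
```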
Hi @JPHutchins, wow, great that you've already made a good start on this.
Yes I would be open to using web workers if they would help (although I think the discussion has moved on a bit since then).
Yes, throttling the changes to the UI would be a great first step; we might be able to get huge gains from this (it might be all we need) for relatively low effort.
Yes, this.terminalRows is being listened to; as you found out, each row is an "observable" and can make React re-draw the UI. As a basic rundown, MobX is used to create the "model" (the non-UI bit), and the React UI re-renders when bits of the model change. In particular, if you look in SingleTerminalView.tsx there is a FixedSizeList component, and terminal.filteredTerminalRows is passed in. This fixed-size list only renders the rows currently in view, so performance doesn't take a hit when there are 1000's of rows but only ~20 of them visible on the screen (only the visible ones are rendered into the DOM).
The fixed-size list is a 3rd party component, so I don't think we want to change how that works. One idea might be to create a new array in the model called "displayedRows". The fixed-size list could look at that instead of the filtered terminal rows. On a timer (say, every 0.5s), you could make displayedRows equal to filteredTerminalRows.
Actually, even better: since the filtered rows are only a UI concern, you could just update them on a timer based on the raw terminal rows. Something like the sketch below.
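In sketch form (row objects simplified to strings and the class name made up, so this isn't the actual model code), that could look like:

```typescript
// Sketch of the "displayedRows" idea: filteredTerminalRows can keep changing per
// character, but the view only watches displayedRows, which is refreshed on a timer.
import { makeAutoObservable } from 'mobx';

class TerminalDisplayModel {
  filteredTerminalRows: string[] = []; // updated as data arrives
  displayedRows: string[] = [];        // what FixedSizeList would be given

  constructor(refreshMs = 500) {
    makeAutoObservable(this);
    setInterval(() => this.refreshDisplayedRows(), refreshMs);
  }

  private refreshDisplayedRows(): void {
    // One observable assignment per tick; the list re-renders at most every
    // refreshMs, no matter how fast filteredTerminalRows is changing.
    this.displayedRows = this.filteredTerminalRows.slice();
  }
}
```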
Whoops, I've just noticed that TerminalRow.tsx and SingleTerminalChar in the views directory should actually be in the model directory. I will update that at some point; sorry if that confused you.