Terminal Image display has low resolution
From https://github.com/microsoft/vscode/issues/210672
Does this issue occur when all extensions are disabled?: Yes/No
- VS Code Version: 1.88.1
- OS Version: Mac: Universal Intel silicon
Steps to Reproduce:
- imgcat "anyimage.png"
Original image:
In VS Code terminal
In iTerm2 terminal
@Tyriar What's the right way to scale a canvas to operate on real device pixels instead of CSS pixels? Should that be done at the width/height attribute level of the canvas object?
For example - let's say we have a DPR of 2 at hand - where should those "doubled" pixel values go? Would this be sufficient (and the browser deals with the low-level stuff on its own):
const canvasObj = ...
// apply real device pixel values to the canvas backing store (doubled here)
canvasObj.width = cssWidth * DPR;
canvasObj.height = cssHeight * DPR;
// output is still laid out within the logical CSS pixels (style values need a unit)
canvasObj.style.width = `${cssWidth}px`;
canvasObj.style.height = `${cssHeight}px`;
// later on during painting (drawImage lives on the 2D context, not the canvas element):
canvasObj.getContext('2d').drawImage(...<metrics here now in real device pixels>...);
Would that already fix things? (Sorry if that question looks dumb to you, but I've never done this DPR correction myself, also I cannot really test it...)
@jerch I did the same thing, and there are also scenarios where the DPR changes. https://github.com/xtermjs/xterm.js/blob/c73793fa115924a05235c687e1186bf370ec8ed1/src/browser/services/CoreBrowserService.ts#L18-L19
However, my use case is relatively simple and involves no scaling, so CSS * DPR is sufficient. Reference: https://github.com/xtermjs/xterm.js/blob/c73793fa115924a05235c687e1186bf370ec8ed1/src/browser/renderer/shared/TextureAtlas.ts#L667-L676
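For reference, a minimal sketch of reacting to DPR changes in the browser (roughly the pattern behind the linked CoreBrowserService code; refreshCanvas is a hypothetical callback, not xterm.js API):

function observeDpr(refreshCanvas: (dpr: number) => void): void {
  const listen = () => {
    const dpr = window.devicePixelRatio;
    // this media query stops matching once the DPR moves away from `dpr`,
    // e.g. when the window is dragged to another monitor
    const mql = window.matchMedia(`(resolution: ${dpr}dppx)`);
    mql.addEventListener('change', () => {
      refreshCanvas(window.devicePixelRatio);
      listen(); // re-arm with the new DPR value
    }, { once: true });
  };
  refreshCanvas(window.devicePixelRatio);
  listen();
}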
I just looked into this and well - that's quite involved to solve correctly, both conceptually and technically.
The basic conceptual issue here is that the atomic information of an image is pixels of unknown output size. Before the invention of hi-res displays things were easy - just map image pixels 1:1 to the device, so an output size of 1/96 inch was implied. That 1/96 inch was a long-standing standardization for display metrics, and a CSS pixel also got defined as such. So for classical DPR 1 images, mapping image pixels to CSS pixels 1:1 is the correct assumption. That's what we do atm and should keep doing for DPR 1 images, regardless of the actual DPR.
But how do we know if an image is meant for DPR 1? We can't tell from the image resource itself. If we only get the image without further knowledge of the assumed DPR, we have to apply DPR 1, again a 1:1 CSS pixel mapping. Browsers have the same issue - they would stretch/shrink a single image resource to match the real device pixels. That's the reason why webdevs are encouraged to provide multiple DPR versions of the same image, e.g. by delivering a hi-res version via media queries. They provide that additional knowledge by pointing to a different image resource.
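To make that concrete, a minimal sketch of how such out-of-band DPR knowledge looks on the web side (the image URLs here are made up):

// hypothetical image variants prepared for different DPRs
const variants: { [dpr: number]: string } = {
  1: 'image.png',    // 100x100 image pixels, meant for DPR 1
  2: 'image@2x.png'  // 200x200 image pixels, same CSS size at DPR 2
};

function pickImage(): string {
  // fall back to the DPR 1 resource if no better match exists
  return variants[Math.round(window.devicePixelRatio)] ?? variants[1];
}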
Now to the technical issues for the terminal sequences here:
- Sixels are currently 1:1 mappings. The sixel standard knows a custom pixel mapping, but that's not implemented in any newer TE, so it has to stay that way for now.
- IIP transfers image data unchanged, but allows width/height settings to be annotated as parameters. With custom width/height xterm.js resizes the image first, currently on CSS pixels. This can be changed to work on (width/height * DPR) pixels instead, which should fix the original issue (see the sketch after this list).
- Later on the addon shall allow serializing images in the terminal buffer. For this to work correctly, we have to store the original unmodified image resource somehow. Currently the addon only stores the output-adjusted version to save memory. Since images can get quite big, holding the original as well will need some clever storage tricks.
- CSI t currently announces xterm.js' size in CSS pixels, which is used by some terminal image libs. So on a hi-res display these libs will generate subpar images. Needs some thinking on how to overcome this - do we need a DPR sequence?
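For the IIP case, a minimal sketch of what resizing on (width/height * DPR) pixels could look like (names are illustrative, not the addon's actual API):

function resizeForOutput(img: HTMLImageElement, cssWidth: number, cssHeight: number): HTMLCanvasElement {
  const dpr = window.devicePixelRatio;
  const canvas = document.createElement('canvas');
  canvas.width = Math.round(cssWidth * dpr);   // backing store in real device pixels
  canvas.height = Math.round(cssHeight * dpr);
  canvas.style.width = `${cssWidth}px`;        // layout still in logical CSS pixels
  canvas.style.height = `${cssHeight}px`;
  const ctx = canvas.getContext('2d')!;
  // draw the full source image into the device pixel sized store,
  // so no resolution gets thrown away on hi-res displays
  ctx.drawImage(img, 0, 0, canvas.width, canvas.height);
  return canvas;
}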
So all in all - this issue can be solved, but will take some time to get the technical side sorted out.
What's the right way to scale a canvas to operate on real device pixels instead of CSS pixels? Should that be done at the width/height attribute level of the canvas object?
I'm only 6 months late 😅
You're basically doing it right; however, I've found for such cases you also need to amend the canvas using a resize observer, since the rounding of CSS -> device pixels differs across browsers. That's what this helper is for - to confirm that the device pixel size is correct after an element is resized:
https://github.com/xtermjs/xterm.js/blob/a5fc111d623dccf009508489726959afb87e1011/src/browser/renderer/shared/DevicePixelObserver.ts#L8
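Boiled down, the helper follows this ResizeObserver pattern (a sketch, not the exact xterm.js code):

function observeDevicePixels(canvas: HTMLCanvasElement, onResize: (w: number, h: number) => void): ResizeObserver {
  const observer = new ResizeObserver(entries => {
    const entry = entries.find(e => e.target === canvas);
    // devicePixelContentBoxSize reports the size the browser actually
    // rasterizes to, sidestepping cross-browser CSS -> device pixel rounding
    const size = entry?.devicePixelContentBoxSize?.[0];
    if (size) {
      onResize(size.inlineSize, size.blockSize);
    }
  });
  // note: device-pixel-content-box is not supported everywhere (e.g. Safari),
  // so a real implementation needs a fallback to CSS size * DPR
  observer.observe(canvas, { box: 'device-pixel-content-box' });
  return observer;
}

The onResize callback would then assign the reported values to canvas.width/canvas.height before repainting.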
Generally you should be showing a single pixel as window.devicePixelRatio pixels; this is typically what images do in browsers. You want to be careful about stretching the image to a non-integer scale though, as that leads to blurring. I think in this case, if you're just trying to get the image to show up nicely, it would be best to scale to Math.max(1, Math.floor(dpr)) or Math.ceil(dpr); with floor the image will come out a little smaller than expected, but it will play nice with very high DPI monitors (e.g. macOS is typically 2).
You may hit some cases where the image was designed to show on a high density display, but that's better than showing an image in a potentially tiny rectangle.
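Applied to the drawing code from above, the integer scaling could look like this (a sketch; ctx and img stand in for a device pixel sized 2D context and a DPR 1 image):

declare const ctx: CanvasRenderingContext2D; // context already sized in device pixels (assumed)
declare const img: HTMLImageElement;         // image authored for DPR 1 (assumed)

// integer scale avoids blurring from non-integer stretches;
// floor errs toward "a little smaller than expected" on e.g. DPR 1.5
const scale = Math.max(1, Math.floor(window.devicePixelRatio));
ctx.drawImage(img, 0, 0, img.width * scale, img.height * scale);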