Text-to-speech option for blind users
To make this security enclave accessible to blind users, it should have a text-to-speech output option. Current-generation natural-sounding speech synthesizers are certainly too demanding for the processor and RAM in this device, but something more robotic-sounding yet still intelligible, like espeak-ng, should fit.
That's a great idea! How about for input -- what features of a physical keyboard design might help vision-impaired people type?
Regarding input, IMHO there should be a built-in microphone, unlike in the current Precursor device, or at least one shipped as a fixed part of the package.
For the desired pocketable form factor, I think the best solution would be a braille keyboard and no screen, with the user holding the device in landscape orientation. A braille keyboard has 9 large keys, and you press them in chords to input braille cells, which can then be translated into text. I suggest you search for pictures of braille keyboards to get an idea of the layout. In "computer braille", each braille cell corresponds to an ASCII character, so the translation is simple. But the most proficient braille users prefer what's generally called contracted braille (a specific example is "grade 2" braille in English), which has lots of abbreviations, so it's more like an input method for non-western languages.
BTW, braille is also an option for output. There are devices called refreshable braille displays, that move little pins up and down. But they're expensive; the cheapest one I'm aware of is about $500 for a single line of 20 braille cells. I'm guessing 12 braille cells would fit in your form factor (in landscape orientation). But I'm not aware of any open implementation of this technology, and you probably wouldn't be able to get any of the existing vendors to sell you their braille cells. So I think we're better off sticking with text-to-speech for output.
Re: a microphone, I assume you're talking about speech recognition. First, I wouldn't compromise the security of this device for a built-in mic. But the real deal-killer is that you probably couldn't run good speech recognition on a 100 MHz processor. Anyway, speech recognition is by no means a necessity for blind users.
BTW, to clarify my previous comment, blind people are perfectly capable of typing on a QWERTY keyboard. But I, for one, never mastered the type of small QWERTY keyboard that this device would have. So even braille input isn't a necessity, but I do think it would be the most efficient and comfortable option.
This is really helpful. Maybe this is hard to ask, but do you have any links to specifications or examples of a braille keyboard that would help me develop one? I think creating a keyboard with 9 large keys and bumps on them might be within my ability, but iterating on it is hard as every prototype costs quite a bit of money.
I'd like to get it mostly right the first time through, but to be honest, I have no idea if I'm doing it right or not, and I could see myself completely going off in the weeds.
Some specific questions that can help me get it right on the first try:
- what is the pattern of dots?
- what is the recommended minimum space between raised dots?
- what is the minimum height of a raised dot?
- would it be acceptable if I just made the dot pattern, trading away the raised keycap? This is a little bit subtle, but the problem is I have an absolute height ceiling I have to work with, so to add dots what I am planning to do is to reduce the height of the keycap. I could, for example, leave the rectangular outline of the keycap to identify the location of the braille dots, but I imagine the rectangular locator is actually more useful for sighted people. Perhaps the vision-impaired would even prefer just the pattern of dots, without any additional locating features?
So if it was just clusters of dots in the correct pattern on an otherwise flat surface, is that ... better?
Braille output sounds really cool, and maybe something to think about another day -- but yes, I agree, for now a simple text-to-speech option is probably within the capability of the device. I am more concerned about the input -- text-to-speech is a computationally "easy" task (especially if you're OK with some mispronunciations and we just rely on a phoneme table?), but speech-to-text is intractable given the computational capabilities of the device. So facilitating input with a braille keyboard is probably a good place to start.
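To sketch what I mean by "relying on a phoneme table" (purely illustrative, not tied to any particular synthesizer -- the phoneme symbols and tables below are made up for the example): the cheap core of the task is mostly table lookups, an exceptions dictionary for whole words plus a crude letter-by-letter fallback, and the fallback is exactly where the tolerable mispronunciations come from.

```rust
use std::collections::HashMap;

/// Minimal sketch of a phoneme-table lookup: whole words are checked against a
/// small exceptions dictionary first, then a crude letter-by-letter fallback is
/// used. Real synthesizers need context-dependent rules; this only illustrates
/// why the task is computationally cheap (it is mostly table lookups).
fn to_phonemes(word: &str) -> Vec<String> {
    // Hypothetical exception entries for words the simple rules would mangle.
    let exceptions: HashMap<&str, Vec<&str>> = HashMap::from([
        ("one", vec!["W", "AH", "N"]),
        ("two", vec!["T", "UW"]),
    ]);
    if let Some(ph) = exceptions.get(word) {
        return ph.iter().map(|s| s.to_string()).collect();
    }
    // Fallback: one phoneme per letter, which is where the mispronunciations
    // come from (e.g. a silent final "e" still gets spoken).
    let letter_map: HashMap<char, &str> = HashMap::from([
        ('a', "AE"), ('b', "B"), ('d', "D"), ('e', "EH"), ('n', "N"),
        ('o', "OW"), ('t', "T"), ('w', "W"),
    ]);
    word.chars()
        .filter_map(|c| letter_map.get(&c).map(|s| s.to_string()))
        .collect()
}

fn main() {
    println!("{:?}", to_phonemes("one"));  // exception table: ["W", "AH", "N"]
    println!("{:?}", to_phonemes("note")); // crude fallback: ["N", "OW", "T", "EH"]
}
```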
FYI I've continued the discussion about built-in microphone in https://github.com/betrusted-io/betrusted-wiki/issues/10 . And of course I wouldn't want to compromise any bit of information. See over there.
I'm sorry, I was unclear. A braille keyboard doesn't actually have bumps on it. We call it a braille keyboard because the chords that the user inputs map to the dot patterns in a braille cell, not because the keyboard itself has braille dots on it. The assumption is that the user has memorized the mapping from keys to braille dots.
For a picture, I'm hesitant to link to a product that's currently being sold. So instead, I'll refer you to the Braille 'n Speak, one of the earliest devices in a category that we call braille note-takers, since their original killer application was taking notes. This one only has seven keys. I was just talking to one of my blind friends about this, and he thinks that limiting it to seven keys would be better, given the small form factor and constrained use cases. With the proper translation table and software, you can input any text with those keys. The key by itself at the bottom is the space bar, and I would make it longer than it was on the BNS. The other 6 keys map to braille dots; from left to right, we refer to them as dots 3, 2, 1, 4, 5, and 6. To input a braille cell, you press the keys for all the dots in that cell at once. You can also use the space bar as a modifier to issue commands; for example, to start a new line or press Enter, you'd press dots 1, 5, and the space bar at once, a.k.a. the e-chord, because dots 1 and 5 map to e in English braille.
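If it helps to make the chording concrete, here is a minimal sketch of how firmware might decode a released chord on a layout like this. The dot-pattern table below covers only a handful of letters plus the e-chord and is purely illustrative, not a real braille translation table.

```rust
/// Bit i-1 of `dots` represents braille dot i (dots 1..=6).
#[derive(Clone, Copy, Debug, PartialEq)]
enum ChordResult {
    Char(char),
    Enter,
    Space,
    Unknown,
}

/// Decode a released chord: `dots` is the set of dot keys that were held,
/// `space` is whether the space bar was part of the chord.
/// Only a handful of cells are listed here for illustration; a real device
/// would use a full (possibly contracted-braille) translation table.
fn decode_chord(dots: u8, space: bool) -> ChordResult {
    const DOT1: u8 = 1 << 0;
    const DOT2: u8 = 1 << 1;
    const DOT4: u8 = 1 << 3;
    const DOT5: u8 = 1 << 4;

    match (dots, space) {
        (0, true) => ChordResult::Space,
        // e-chord (dots 1 and 5 plus space) = Enter, as described above.
        (d, true) if d == DOT1 | DOT5 => ChordResult::Enter,
        (d, false) if d == DOT1 => ChordResult::Char('a'),
        (d, false) if d == DOT1 | DOT2 => ChordResult::Char('b'),
        (d, false) if d == DOT1 | DOT4 => ChordResult::Char('c'),
        (d, false) if d == DOT1 | DOT5 => ChordResult::Char('e'),
        _ => ChordResult::Unknown,
    }
}

fn main() {
    assert_eq!(decode_chord(0b00001, false), ChordResult::Char('a'));
    assert_eq!(decode_chord(0b10001, true), ChordResult::Enter); // dots 1+5+space
    println!("chord decoding sketch OK");
}
```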
I hope this clarifies things. Let me know if you have more questions.
As for the height of the keycaps, as long as it's possible to find the keys by touch, it's fine. The good thing about having just a few keys is that they can be large and widely spaced, even on a small device like this one. The keys are supposed to be laid out such that the user can put one or both thumbs on the space bar, and a finger on each of the other keys (left index finger on dot 1, right index on dot 4, left middle finger on dot 2, etc.). The user can type without ever moving their fingers from these positions.
As for the text-to-speech output, how primitive do you want to get? I understand that simplicity of implementation is good for auditability. But if the speech synthesis is too primitive, it will be unattractive to anyone who doesn't absolutely need the most stringent security. To give you an idea of what I'm talking about, at the most extreme, we could implement something as primitive as the Echo synthesizer from 1982. Granted, the distinctive sound of that synthesizer triggers some nostalgia for my childhood (I used it at school from first through seventh grade). But I wouldn't actually want to use a synthesizer like that today.
I think I get it now. If I came up with a layout, is there any easy way to facilitate your feedback on it? For example, I imagine a picture isn't very useful to you, but if I sent you a 3D STL file could you 3D print a dummy block of plastic with the same shape/size/textures so you can feel it?
I'm a little shy to admit this but only just now it occurred to me that the display is actually not useful to you, so I am just realizing you want a version with no display and the entire bezel that is just the input switches, which lends itself to six fingers and two thumbs on the device at the same time.
I'm guessing this is what you were talking about all along but I am so biased to think about product design from the perspective of the sighted, it's taking me a while to wrap my head around things. Thanks for your patience.
The display-less, all-keyswitch format is also doable, but the construction is quite different from the current form. Now that I've had this insight, I need to do more research, particularly on a suitable switch for the purpose. The current keyboard construction I use is essentially limited to only small switches, because the membrane is so thin it can't transmit force over long distances.
One question -- would it be OK if the switches actually stood quite a bit taller than the device, like, up to 3mm or so? The device itself is normally just 7.2mm thick, so this is quite a bit of extra height, but only due to the protrusion of the switch caps. This would make it overall thicker in the pocket, but it greatly expands the library of switch forms that I can consider using. Right now my height budget is about 1mm, and that greatly constrains the allowable construction techniques to some fairly bespoke and expensive formats.
What I'm thinking is maybe I can make a version with a custom bezel that holds custom key caps that are machined out of aluminum. I don't think the volume is going to be high enough to open injection mold tools, but I think aluminum can be cheap enough that making 7 custom key switch caps out of machined aluminum could be reasonable. I think you can get a good tactile feel out of this while keeping it somewhat thin, but the aluminum will feel cold.
Another option is I could look into using silicone tooling to make the switches. That's also affordable, but the switches will likely be quite a bit thicker (silicone key caps are usually 4mm+ tall). Silicone key switches have a less tactile feel to them, almost mushy, but they are warmer feeling than aluminum.
I'm not an expert in speech synthesis, but I think with a 100MHz CPU we can probably do better than speech from 1982. But due to memory limitations it may not be able to store, for example, whole sampled words for the entire English dictionary. Honestly, to set expectations, this is going to be a work-in-progress. Initially it will be pretty "minimum viable product", as I'm guessing we'll just pull in the smallest open-source text-to-speech library that we can get to compile and run on the device for testing, but the good news is that as people have time to work on it, it should get better.
Hm, for such a complex device as the Betrusted platform, I'm not entirely sure the display question is this black and white. I worked briefly with blind people 10 years ago, and anything more complex required interaction with someone who had at least some sight to handle the more complicated scenarios which inevitably appear in reality due to software complexity.
It would be very cool to have the Betrusted platform 100% controllable by the visually impaired, but I doubt it's achievable. Even the root of all trust in Betrusted, i.e. visual inspection to find traces of tampering, is by definition impossible. Even the workflow towards the "minimum viable product" you mentioned above leads to the opposite outcome (namely, that support for the visually impaired will not be a first-class citizen, but just an add-on with a quite limited mapping to Betrusted's capabilities).
So I still think a display will be needed anyway, unless you wanted to make a second, alternative Betrusted platform with fewer capabilities, a much lower focus on tamper resistance, and different hardware (not just the keyboard).
Lots to respond to here. Thanks for taking the time to consider these things.
First, about text-to-speech output, I'm relieved that you're not going to push for the absolute simplest thing that would barely work. espeak-ng fits in ~600KB of compiled code (x86) and ~10MB of data, for all supported languages (and it supports many languages). My understanding is that it uses pre-recorded consonants but synthesizes the vowels. The sound is definitely an acquired taste, but many blind people do use it daily. At least in UK English, the pronunciation is quite accurate. (I'm American, but I prefer the UK accent because the US accent is noticeably off.) Here's a brief sample. On my PC laptop (with a Skylake Core i7 processor), it runs at ~500x real time. So it should be OK on a 100 MHz processor.
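As a rough back-of-envelope check (my own assumption about clock speeds, not a measurement): a Skylake Core i7 runs at roughly 3 GHz, so scaling by clock frequency alone, 500x real time becomes about 500 × (0.1 / 3) ≈ 17x real time at 100 MHz. A small in-order RISC-V core also does far less work per clock than a Skylake core, so that margin shrinks further, but it would have to shrink by more than an order of magnitude before synthesis dropped below real time. Only an actual port will settle it.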
Bunnie, you're correct that what I had in mind was a bezel with no screen and just a keyboard. I don't have a 3D printer myself, but I'm sure I can find someone local who does, so sending a 3D STL file is a good solution. Extra thickness is absolutely fine. To give you an idea of what we're used to, before I got my first iPhone, I often carried a Victor Reader Stream (portable audio player and talking book reader) in my pocket, and it's 18 mm thick. I'm not saying that you should go for something that thick, just that a little extra thickness is fine. As for the material, I'd guess that silicone is the better option.
Regarding dumblob's comment about the need for a screen, you're right of course that we'd need sighted assistance to inspect the hardware for traces of tampering. But is that something that you expect users to do routinely? If not, it's reasonable that we should expect to have sighted help with that task once, and then use the device independently from then on. Remember, we're not talking about retrofitting a screen reader onto a complex GUI like Windows. That setup does sometimes break down, requiring sighted help to recover, though not as often as you might think. But the beauty of an open and relatively simple platform like Betrusted is that we can adapt the software to be entirely usable, and even pleasant to use, for a blind person with no sighted help. I'm a programmer, and I look forward to helping out with this (on my own time, not my employer's). Still, if there's a reason why you think a blind user of Betrusted would need sighted help when using the software itself, even with a fully adapted software stack, please explain.
But is that something that you expect users to do routinely?
My understanding is that every time you pick up the device after you've left it somewhere (on your bedside table, on a table in your house, anywhere else...) for more than a few minutes, you'll need to do a quick check for traces of tampering. Otherwise the platform doesn't make any sense to me.
Still, if there's a reason why you think a blind user of Betrusted would need sighted help when using the software itself, even with a fully adapted software stack, please explain.
It's actually not (just) about sighted help. It's more about stuff like "show me your QR code to check it's you", etc. Yes, you can do most of the stuff using text-to-speech or in other more or less blind-friendly ways, but it'll be very cumbersome and error-prone. Still, I'm convinced it'll make the platform very different from what I've seen so far.
It's the same with the web - most web sites are either blind-friendly (but still not perfect for blind people) and ugly, or not ugly but with only a separate "section" for the visually impaired, and that section suffers from not being in sync with the rest.
In the same vein, it seems to me that Betrusted will either be less appealing (not just visually, but functionally etc.) for ordinary users as well as for the blind (as it'll be a trade-off mixture of accessible and ordinary), or there'll need to be a second, alternative Betrusted platform tuned for the visually impaired, with the risk of being out of sync with the ordinary one.
IMHO the second option is better, because being out of sync in this case could be very beneficial, as it puts the emphasis on what visually impaired people really prefer, unlike the compromise-driven solution of the first case.
Bunnie, do you share dumblob's assumption that the user will need to frequently inspect their device for tampering? My guess is that it depends on the user's situation. If there are situations (e.g. jobs or adversaries) where someone really would need to frequently check their device for tampering, then blind people really would be at a significant disadvantage in those situations. There's no denying that in some situations, blindness really is a disadvantage. Still, I'm guessing that there are plenty of situations where blind people would benefit from the strong security of Betrusted just as much as our sighted peers, even though we can't independently check for tampering.
Dumblob, can you please explain the "show me your QR code" scenario you mentioned? I don't understand how this would work even for two sighted people.
As for the point about a single compromise-laden platform versus separate platforms, this is a point of moderate contention in the online blind community that I'm part of. As I've mentioned in previous comments, there have been many devices designed specifically for blind people, including PDAs and talking ebook readers. There's a vocal contingent of blind people who believe that many of these devices are now thoroughly obsolete in light of modern smartphones, tablets, and laptops. Some of these people refer to such blindness-specific devices as "blind ghetto" products (check out a contrary opinion on the "blind ghetto" epithet). It's true that these devices typically lag behind their mainstream counterparts technologically, and they're more expensive as well. But as dumblob says, adding accessibility to a mainstream platform leads to compromises. And believe me, the compromises made by mainstream developers are always at the expense of the blind minority, never the sighted majority. I currently work at Microsoft on the Windows accessibility team (disclaimer: I'm posting my own opinions on my own initiative here), and I previously developed a third-party Windows screen reader from scratch, so I'm well aware of how this works. I use a smartphone with a screen reader daily, and there are things I wouldn't be able to do without it. But I would also appreciate being able to do some tasks with a UI that's truly built for speech output and keyboard input.
We've been discussing an alternate bezel with a braille keyboard and (probably) no screen. I realize that what I really want may also require an alternate software package, rather than an option built into the primary one. I think that's OK, as long as the alternate software shares a large proportion of code (including all of the difficult security-related code) with the main Betrusted software. I believe that's feasible.
@mwcampbell the frequency at which you need to inspect your device is up to your situation. Everyone has a risk and threat profile, and it is not the same for everyone. It also depends heavily on the value of the secrets you wish to assign to the device. If you're just using it as, say, an upgrade to the password manager that may already be on your smartphone or computer, it's still an improvement. But if you are in a high-stakes situation, like a journalist behind enemy lines, or you otherwise have a bullseye painted on your back, then @dumblob is correct: it's the sort of thing you wear around your neck on a chain and sleep with.
The point of Precursor is that you can inspect it; not that you have to inspect it. The option to inspect moves the goalposts beyond the current situation of most gadgets where inspection simply isn't possible because (a) there isn't a published, well-known reference to compare against (b) the designers aren't available to ask if deviations are just regular manufacturing irregularities or security problems and (c) most gadgets are glued together so inspection almost certainly damages the device. However, the system does (hopefully) have additional benefits beyond inspectability, so if that's not a high priority for you there may be other features of value in Precursor.
As @mwcampbell is the immediate end user, if his preference is no screen I will lean into that. As I work through the design, there might be an opportunity to hide a small (128x64 px, 1.3" diagonal) OLED screen in a corner without compromising the keyboard layout, which could be useful for printing diagnostic messages in case you are really in trouble, but ideally, if it's not necessary, we should leave it out as it just runs up cost and complexity. The hardware is more complex, and so is the software layer, because someone has to write the UI for the small OLED display. That may be no small task depending upon what it's expected to do; the fewer pixels you have, the harder it is to design a UX. And dev resources are scarce; I'd rather invest effort into getting the text-to-speech spot-on than creating a diagnostic UI for sighted people.
All that being said, part of the reason I am excited to get this keyboard form factor into the mix right now is that its availability will force me to factor braille key chords in at the "ground level" when designing the IME system. We already have QWERTZ, AZERTY, Dvorak, and QWERTY layouts to make sure we have fewer reasons to build up technical debt around thorny UX issues. Adding a version of the hardware with no screen plus a braille keyboard means I'm able to have a test target for our nascent IME system that should really exercise the assumptions we are building in, avoiding a difficult refactor later on once we've poured the foundations. And the earlier I bake this in, the more this input method will hopefully be carried along with the rest of the software, so yes, the idea is to modularize things so that improvements to the rest of the code base are compatible with this input method. From where I'm sitting, I think if I do it right the keyboard input has a chance to be pretty seamless and maybe picked up into the mainline package, but the output side could be tricky.
For example, we are planning to build a graphical toolkit that forces any bitmaps coming from insecure sources to be rendered with a special border, to reduce phishing attacks (that way you know if you're looking at an actual password dialog box, or merely at a screenshot of one). Presumably, for the text-to-speech engine, we'd want that engine in the secure domain, and if someone sends you an audio sample to play, it should be considered untrusted and needs to have a special tone added in the background, so you can identify that it's from an insecure source. This will mitigate an attack where someone crafts audio samples to fool you into thinking you're getting system prompts to type a password, when in fact you're actually in a chat and you should not type your password in. Unfortunately, this type of system-level design may require a side-loaded package that isn't part of the mainline, in part because of the complexity and in part because I'm not sure we have the resources to develop it right now. But it's the sort of thing I'd at least like to be aware of and put hooks in place for, so that when we get around to it we aren't left with nothing but bad choices and compromises.
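To make the "special tone" idea concrete, here's a rough sketch of what tagging untrusted audio could look like: a quiet, continuous marker tone mixed into any sample buffer that originates outside the secure domain. The sample rate, tone frequency, and mixing level below are placeholder values, not anything from the actual codebase.

```rust
use std::f32::consts::PI;

/// Mix a quiet marker tone into a buffer of audio samples that came from an
/// untrusted source, so the listener can tell it apart from system speech.
/// `start_frame` lets the tone phase continue across successive buffers.
fn tag_untrusted_audio(samples: &mut [i16], start_frame: u64) {
    const SAMPLE_RATE: f32 = 16_000.0; // placeholder sample rate
    const TONE_HZ: f32 = 440.0;        // placeholder marker frequency
    const TONE_LEVEL: f32 = 0.05;      // keep the tone well below speech level

    for (i, s) in samples.iter_mut().enumerate() {
        let t = (start_frame + i as u64) as f32 / SAMPLE_RATE;
        let tone = (2.0 * PI * TONE_HZ * t).sin() * TONE_LEVEL * i16::MAX as f32;
        // Mix and saturate so the result stays within i16 range.
        *s = (*s as f32 + tone).clamp(i16::MIN as f32, i16::MAX as f32) as i16;
    }
}

fn main() {
    let mut buf = vec![0i16; 160]; // 10 ms of silence at 16 kHz
    tag_untrusted_audio(&mut buf, 0);
    println!("first samples after tagging: {:?}", &buf[..4]);
}
```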
The hard part is I have no concept of how these keyboards are used, so @mwcampbell I hope you don't mind if I lean on you a bit to help provide feedback as the concept evolves.
I'll have a look at different materials and methods for prototyping. The good news is I designed Precursor to be hackable, and this kind of application is where that hackability shines through -- I am optimistic that I can piggyback the prototyping cost for this on top of our existing runs for building the mainline stuff, so we can amortize setup costs for the variant. Some compromises are inevitable as I have zero budget to fund the prototypes, but it's within my discretionary margin of error to expand the scope at this moment to include some resources for a braille keyboard variant.
espeak-ng looks like a great starting point. I had a brief look at it and it's mostly in C. This probably means we'll have to sandbox it and maybe add an input validator, as this code may have attack surfaces of its own; but binary compatibility with C is a thing on our roadmap for Betrusted (fwiw we are trying to build everything "from scratch" with Rust, but we realize that's ... going to be hard ... and thus the sandboxing).
600 KB of code and 10 MB of data will fit handsomely in the system. It'll be lighter weight than Mandarin support!
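To sketch what the input-validation layer might look like (purely illustrative -- the length limit and filtering rules below are arbitrary placeholders, this is not the espeak-ng API, and the real sandboxing would put the C code behind its own process boundary):

```rust
use std::ffi::CString;

const MAX_TTS_LEN: usize = 4096; // arbitrary placeholder limit

/// Validate and sanitize text on the Rust side before it is handed to the
/// sandboxed C synthesizer: bound the length, drop control characters, and
/// reject interior NUL bytes (which C string APIs can't represent).
fn prepare_tts_input(text: &str) -> Result<CString, &'static str> {
    if text.len() > MAX_TTS_LEN {
        return Err("text too long for a single synthesis request");
    }
    let filtered: String = text
        .chars()
        .filter(|c| !c.is_control() || *c == '\n')
        .collect();
    CString::new(filtered).map_err(|_| "interior NUL byte")
    // The resulting CString would then be passed across the sandbox boundary
    // (e.g. in an IPC message) to the process hosting the C code.
}

fn main() {
    match prepare_tts_input("Hello from Betrusted.\u{7}") {
        Ok(c) => println!("sanitized input: {:?}", c),
        Err(e) => println!("rejected: {e}"),
    }
}
```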
Thanks @bunnie for the thorough reply.
Neither I nor any of my blind friends that might use this device are in high-stakes situations like you described (though I do know a blind person who works for the FBI in an office environment). If this affects your level of interest in accommodating us, I'm sorry I didn't clarify that sooner.
To be honest, what excites me about this project isn't the security, but the opportunity to implement a hardware/software combination that's tailored for us in a device that's both hackable and truly mobile. That's something I've wanted for years. I'm excited that it looks like we finally have the opportunity to make this dream come true. I'm not an electrical engineer, or even an amateur hardware hacker. But I'm an experienced software developer, and I look forward to contributing as time permits.
My preference is indeed no screen. Let's try to make the text-to-speech support complete enough that we don't need a screen (and sighted help) even for troubleshooting. With a platform that we can control from top to bottom, I believe this isn't too much of a stretch.
@bunnie, you're right about the need to sandbox the speech synthesizer, and that it should run in the trusted domain (for true equality with screen output). I'm not aware of any vulnerabilities in espeak-ng in particular, but I know of another speech synthesizer that crashes if you feed it certain words (mostly misspelled words IIRC). Implementing text-to-speech in pure Rust could be an interesting project, and an opportunity to explore alternatives to the way espeak-ng does speech synthesis, but I don't think that project would be the best use of my limited time. As a user, I would be happy with espeak-ng. BTW, the largest single component of espeak-ng is the Russian dictionary (which surprises me, since Russian writing is based on an alphabet); the second largest is the Chinese dictionary.
Feel free to lean on me as much as necessary when designing the braille keyboard and the IME system to go with it. As for funding, I'll definitely back the crowdfunding campaign when it launches. I'd also consider providing some funding for a prototype braille keyboard from my personal savings; perhaps we should discuss that privately.
Your security scenario does not affect my enthusiasm for exploring this. Betrusted grew out of the concept of a "secure enclave for humans" and we are trying hard not to deliberately design-in prejudice toward one type of human user (this is hard, as I am finding out, because I am essentially prejudiced by my own life experiences).
I think the hardest part about text-to-speech is our audio drivers are pretty minimal right now and bare iron. We're shoving raw samples into FIFOs at the moment for testing purposes. Nothing insurmountable, but as you think about it, imagine you are compiling for a raw-iron embedded target, with no POSIX layer. A good approximate mental model would be trying to get this stuff to run on a 1990's-era Palm Pilot or Nintendo DS.
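To give a flavor of what "shoving raw samples into FIFOs" means on a bare-iron target, here's a rough sketch. The register layout, status bit, and frame packing below are invented for illustration and don't match the real hardware; the demo in main just points the function at ordinary memory so it runs on a host.

```rust
use core::ptr::{read_volatile, write_volatile};

/// Push signed 16-bit samples into a memory-mapped audio FIFO, polling a
/// "full" flag in a status register. The register layout here is invented
/// purely for illustration; the real Betrusted/Precursor codec block differs.
///
/// # Safety
/// `fifo_reg` and `status_reg` must point to valid, mapped registers (or, as
/// in the demo below, to ordinary memory standing in for them).
unsafe fn push_samples(fifo_reg: *mut u32, status_reg: *const u32, samples: &[i16]) {
    const STATUS_FULL: u32 = 1 << 0; // hypothetical "FIFO full" bit

    for &s in samples {
        // Busy-wait until the FIFO has room (a real driver would sleep on an IRQ).
        while (read_volatile(status_reg) & STATUS_FULL) != 0 {}
        // Pack one stereo frame per 32-bit word; mono is duplicated to both channels.
        let frame = (s as u16 as u32) | ((s as u16 as u32) << 16);
        write_volatile(fifo_reg, frame);
    }
}

fn main() {
    // Stand-ins for the MMIO registers so the sketch runs on a host machine.
    let mut fake_fifo: u32 = 0;
    let fake_status: u32 = 0; // never reports "full"
    let samples = [0i16, 1000, 0, -1000];
    unsafe { push_samples(&mut fake_fifo, &fake_status, &samples) };
    println!("last frame written: {:#010x}", fake_fifo);
}
```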
I'll be in touch with you later with some STL files that will give you an idea of the shape I think I can achieve without running up a huge bill on tooling costs. For costs, let's see how the campaign goes; if the response isn't strong I may need some help, but for now let's be optimistic. I'm also exploring a potential grant that could finance some of this.
I think if anything, this small side project could make for a great story as part of the campaign and hopefully pull in a broader audience of backers. That being said, I'd at least like to have a prototype before I make any promises about how much anything will cost or how long it'll take to deliver, so realistically speaking this won't be part of our story on launch day (which is coming up soon).
Thanks again for the ideas and feedback!
I've done audio-related programming before, including mixing with sample rate conversion (I didn't write the sample rate converter myself, but found a C library that met my technical and licensing requirements). So I should be able to help out with that.
Thanks again for your enthusiasm and willingness to explore alternative input and output. I look forward to hearing more from you soon.
@bunnie , one more thing for you to think about as you consider publicity and possible grants: I was talking to a blind friend about this project, and he said that if you can get the final design with the custom SoC to be cheap enough, then this version with a braille keyboard and text-to-speech output could be good as a note-taker for blind people (especially students) in developing countries.
Remember that blind people don't have the option of pencil and paper for taking notes. The closest counterpart to pencil and paper is the slate and stylus; I've never used this setup myself, but it sounds like a slow, tedious way to write. A mechanical braille writer like the Perkins Brailler is much faster, but also bulky and noisy. And of course, neither option allows blind people to write in a format that most sighted people can immediately read.
Of course, I know that note-taking (really, word processing) is totally outside the scope of what you want to do with Betrusted, but that's where the hackability of Precursor comes in; I or someone else can develop this application while taking advantage of the work you're doing on the hardware.
A version with a custom SoC would probably get to a pretty decent price point, especially if you drop the LCD - the LCD is quite expensive all things considered. That being said, the SoC is a long way off; years off. There may be an opportunity for someone in the meantime to leverage the mechanical design with a less secure off the shelf SoC for the application. It is all open source, after all, and there's no reason people have to stick with all the choices I've made!
Good point about using a less secure off-the-shelf SoC for alternative applications. I've often wondered if the i.MX233 would be a good choice, especially with its integrated audio. (Yes, I know about your previous experience with that chip.) I don't know how much longer it will be manufactured though. In any case, board design is something I don't know anything about, so I'll work with the Precursor for now.
Thanks for the fruitful discussion.
I'll comment on just one thing - namely the sentence "I know that note-taking (really, word processing) is totally outside the scope of what you want to do with Betrusted" - because I'm confident that (secure) note-taking (with something like basic CommonMark support, i.e. 100% useful also for the blind) is actually part of one of the major goals of the Betrusted platform.
Thanks for the interesting discussion. In my previous work on accessibility technology with the Open Steno Project, I ran into several people requesting open source refreshable braille displays and I've always wanted to explore the idea further. If anyone here moves in that direction or is interested in doing so, please keep me posted.
Regarding the physical "tactile" keyboard (mainly for blind users but also others), the problem of which switch to integrate could potentially be solved by using the new Cherry scissor switches, which are SMD-like, top-mounted, and only 3.5 mm tall (i.e. not the "full-sized Cherry-MX key-switches" mentioned in the recent Precursor campaign update blog post).
They should be small enough, and the price much lower than custom-made scissor switches (and for prototyping it might be cheaper and faster to order an existing keyboard that uses them and unmount them :wink:).
I am so impressed with the braille hardware by bunnie. I wonder, however, how much benefit it contains beyond a hardware keyboard itself (I am ignorant, please forgive me). In the same horizontal configuration as the braille version, I would greatly appreciate a full QWERTY keyboard with better key spacing, perhaps with only a small narrow screen above the keyboard, or even on the backside of the device. This would make it an even more compelling data input device for all users, especially the type like journalists who would prioritize text input, later reviewing and editing their work. A larger keyboard would be a great improvement in use and feel over the cramped current one (though the current keyboard is infinitely better than no keyboard!), and a narrow one- or two-line screen would suffice for all but the most involved visual work. Messages, one-time pads, reviewing hashes, and inputting and editing text would only require a one- or two-line screen, while a more comfortable keyboard would allow much faster and more proficient data input, especially without looking at the fingers. I hope this can be a possibility in future development. Secure hardware for keeping private notes and data does not exist in the world, and it is of such importance.
One thing I thought of reading this is that a small e-ink display that could be activated to show boot status/version info via QR code (or on-screen text, if the data doesn't require a dense QR code) could come in handy, as could "sharing" non-secret information via QR codes -- but maybe not secrets, unless they can be encrypted with a simple one-time password that the user could share with the intended recipient via another channel.
This could also be pretty useful for debugging, as @bunnie mentioned, and could be helpful if you need to input a rotating TOTP code on another device: the Precursor/Betrusted could speak the code as well as display the QR code (again, only if the screen is activated by the user), and the user could use a QR scanner app on their mobile (or laptop/desktop) to quickly grab the correct value to input to the MFA challenge if the site doesn't support FIDO2/WebAuthn.
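As a side note on the TOTP idea: the value the device would speak or encode as a QR code is just the standard RFC 4226/6238 truncation of an HMAC. Here's a sketch of that final step, taking the HMAC-SHA1 digest as input (computing the HMAC over the shared secret and time-step counter is omitted here):

```rust
/// RFC 4226 dynamic truncation: turn a 20-byte HMAC-SHA1 digest into the
/// 6-digit code that the device could speak aloud or show as a QR code.
/// Computing the HMAC over the shared secret and time-step counter is
/// omitted; this only shows the final truncation step.
fn hotp_truncate(hmac: &[u8; 20], digits: u32) -> u32 {
    let offset = (hmac[19] & 0x0f) as usize;
    let bin_code = ((hmac[offset] as u32 & 0x7f) << 24)
        | ((hmac[offset + 1] as u32) << 16)
        | ((hmac[offset + 2] as u32) << 8)
        | (hmac[offset + 3] as u32);
    bin_code % 10u32.pow(digits)
}

fn main() {
    // Digest from the RFC 4226 test vectors (counter = 0, secret "12345678901234567890").
    let digest: [u8; 20] = [
        0xcc, 0x93, 0xcf, 0x18, 0x50, 0x8d, 0x94, 0x93, 0x4c, 0x64,
        0xb6, 0x5d, 0x8b, 0xa7, 0x66, 0x7f, 0xb7, 0xcd, 0xe4, 0xb0,
    ];
    println!("{:06}", hotp_truncate(&digest, 6)); // 755224, per the RFC test vector
}
```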
Perhaps in the future there will be a browser extension for pulling the "right" secret from the vault based on the domain or active application, like the Mooltipass or 1Password do.