Using the guide via AI
There's a conversation on Discord about evaluating bitcoin application usability via AI. Two examples:
- Mat took ~30 screenshots of Phoenix and asked ChatGPT for a design review based on the best practices outlined in the guide.
- Christoph asked Claude to browse boardwalkcash.com and do a design review. Using MCP, Claude had access to a browser, so it clicked through the site, set up a wallet, and explored the settings.
The verdict so far seems to be that these evaluations provide a good foundational review, which then allows for deeper looks into specific user flows and other design decisions.
Being able to automate these reviews makes them a no-brainer to integrate into regular design & development workflows.
So how can we push on this further? One idea is to create a page in the guide (maybe under resources?) that provides workflows that people can adopt easily. They could be based on use-cases and include step-by-step instructions, prompts, videos, and tips on evaluating the output. We're still feeling out all these capabilities, so I'd start it simple and iterate as we learn.
What do you think?
Love this idea. Automate as much as possible. Spread awareness to change culture and expectations. Make wallet builders feel good about this process and that it is helping them do better work.
I made a screen recording of Claude navigating nosta.me, setting up a profile, and writing a usability review. The setup, prompt, and report are in the video description.
Would be cool to make this a one-click thing - type in a URL and get a report with annotated screenshots 15 minutes later. Maybe even a Discord bot in our design review channel. But not sure how realistic that is.
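For what it's worth, the bot shell itself seems tractable; the hard part would be the review pipeline behind it. A minimal sketch, assuming discord.js v14 and a bot token in `DISCORD_TOKEN`, where `runReview()` is a hypothetical stand-in for that pipeline:

```typescript
// A minimal sketch of the bot shell, assuming discord.js v14 and a bot
// token in DISCORD_TOKEN. runReview() is a hypothetical stand-in for the
// actual agent pipeline, which is the genuinely hard part.
import { Client, GatewayIntentBits, Message } from "discord.js";

const client = new Client({
  intents: [
    GatewayIntentBits.Guilds,
    GatewayIntentBits.GuildMessages,
    GatewayIntentBits.MessageContent,
  ],
});

client.on("messageCreate", async (message: Message) => {
  // Respond to "!review <url>" posted in the design review channel
  if (message.author.bot || !message.content.startsWith("!review ")) return;
  const url = message.content.slice("!review ".length).trim();
  await message.reply(`Starting a usability review of ${url}...`);

  const reportPath = await runReview(url); // hypothetical agent pipeline
  await message.reply({
    content: `Review of ${url} is ready:`,
    files: [reportPath], // attach the report with annotated screenshots
  });
});

// Hypothetical: drives the browser agent and returns a report file path
async function runReview(url: string): Promise<string> {
  throw new Error("not implemented yet");
}

client.login(process.env.DISCORD_TOKEN);
```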
Big +1 on this.
Targeting specific wallets and giving developers reports specific to their own products, with direct insights and clear suggestions on what to improve, is more impactful.
It's less work for devs too. Right now they have to read the guide, think about their app's design, and figure out what to fix. This would streamline that process. A one-click "get report" flow would be much more useful.
The Claude Puppeteer thing is interesting. Personally, I don't find it super useful for a usability review (for a typical person navigating by eyesight). The reason is that Puppeteer seems to navigate the DOM for elements it thinks it can interact with, which is not how a human does it. I have tried ChatGPT Operator, which navigates by actually controlling the mouse, but it's so sluggish and gets caught in stupid traps that would never fool a human.
Of course, I'm sure the agents will be excellent at usability review one day, but I don't find them super useful at the moment.
Having said that, I'm pretty bullish on the idea of using agents for end-to-end testing. In other words, say we have a flow in our product that's like: go to landing page, get redirected to login page, login to product, check inbox for messages. Agents can be given a goal like "login to the site and check the most recent message in the inbox if it exists". With this kind of test, we can ensure the product is always working and a software change doesn't introduce a bug. I think you can also pair that with visual regression testing. In other words, when the frontend team made an update, did they accidentally change the color of a button or the placement of an image? The agent can compare screenshots taken on this E2E test with screenshots of the previous E2E test and spot differences. That's very useful for a designer who wants to keep things looking a certain way.
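To make that pairing concrete, here's a minimal sketch of an E2E flow plus a screenshot diff, assuming Puppeteer, pngjs, and pixelmatch are installed. The URL, selectors, credentials, and file names are all hypothetical placeholders:

```typescript
// A minimal sketch of E2E testing paired with visual regression,
// assuming Puppeteer, pngjs, and pixelmatch. The URL, selectors,
// credentials, and file names are hypothetical.
import puppeteer from "puppeteer";
import pixelmatch from "pixelmatch";
import { PNG } from "pngjs";
import fs from "node:fs";

async function run() {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.setViewport({ width: 1280, height: 800 });

  // E2E flow: landing page -> login page -> inbox
  await page.goto("https://example.com");
  await page.click("a[href='/login']"); // redirected to the login page
  await page.waitForSelector("#email");
  await page.type("#email", "test@example.com");
  await page.type("#password", "correct horse battery staple");
  await page.click("button[type='submit']");
  await page.waitForSelector(".inbox");

  // Capture today's screenshot for the visual-regression comparison
  await page.screenshot({ path: "inbox-current.png" });
  await browser.close();

  // Compare against a baseline screenshot from the previous test run
  const baseline = PNG.sync.read(fs.readFileSync("inbox-baseline.png"));
  const current = PNG.sync.read(fs.readFileSync("inbox-current.png"));
  const { width, height } = baseline;
  const diff = new PNG({ width, height });
  const changedPixels = pixelmatch(
    baseline.data, current.data, diff.data, width, height,
    { threshold: 0.1 } // tolerate minor anti-aliasing noise
  );
  fs.writeFileSync("inbox-diff.png", PNG.sync.write(diff));
  console.log(`${changedPixels} pixels changed since the baseline`);
}

run();
```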
tl;dr - Not sure if agents are great at usability testing today, but they will be. Today, I think they will be great at e2e testing and visual regression testing, which should also be of interest to designers.
@sbddesign in my example, Claude did use Puppeteer, but took screenshots to understand each page, and then interacted with elements via JS and the DOM. For example, it saw a "View my profile" button in the screenshot, and then tried to select it by that label via JS and triggered a click. Should be pretty close to user interaction (probably with limitations like hover menus, etc).
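Roughly, that interaction pattern looks like the sketch below, assuming Puppeteer. The URL and button label are from my example; the rest is illustrative:

```typescript
// A simplified sketch of the "screenshot, then select by visible label
// and click via JS" pattern, assuming Puppeteer.
import puppeteer from "puppeteer";

async function clickByLabel(url: string, label: string) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url);

  // Screenshot first, as the agent does, to "see" the current page
  await page.screenshot({ path: "before-click.png" });

  // Then find the element whose visible text matches the label and
  // trigger a click on it via JS in the page context
  const clicked = await page.evaluate((text) => {
    const candidates = [...document.querySelectorAll("button, a")];
    const match = candidates.find((el) => el.textContent?.trim() === text);
    if (match instanceof HTMLElement) {
      match.click();
      return true;
    }
    return false;
  }, label);

  console.log(clicked ? `Clicked "${label}"` : `No element labeled "${label}"`);
  await browser.close();
}

clickByLabel("https://nosta.me", "View my profile");
```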
Overall I agree that there's a lot of potential; we just have to see if it can give us high-quality results with the right amount of effort, what tooling needs to be built, etc.