Is there a road map, a release schedule, or a prioritised backlog?
While I think this is a fantastic project and wish you all the success in the world, it does look like chaos.
I arrived here because I thought this was going to be a component of a future release of Windows.
At the moment I don't see that happening at all.
I suggest you need:
- a road map that sets out the direction the project is going in
- to know the feature set that Microsoft are expecting for the first release
- a prioritised backlog of features/user stories etc.
- an automated test mechanism/framework to prevent regressions
- a release plan for how this will be included in the localised releases of Windows
For example, other Microsoft projects on GitHub have working groups and prioritise the features to be developed; many feature requests are rejected.
For now, I'll focus on fixing bugs and merging some smaller features that I personally think are cool. My plan is to get other things (like the roadmap) sorted out after the initial period where we're receiving a lot of issues, comments, and PRs. I expect that to settle down over the next few weeks. Right now, just managing the repo is practically a full-time job on its own. 😅
I have thoughts on how to do automated testing. Do you have a plan?
I already have a project that can run a Win32 console program with a pseudo-console and capture all the emitted text and control codes.
I had the idea that something similar could run `edit` and then have a Visual Studio mstest project send user input and process the results. It would hold a virtual copy of the console screen; test cases could then inject user input and check that the correct areas of the screen are updated and that the cursor is in the correct place.
The advantage of mstest is that it is a tool Microsoft already use and that it can be automated on build servers.
It could also verify the escape codes and check whether redundant values are being sent repeatedly.
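To make that concrete, here is a rough sketch of the virtual-screen idea. It is in Rust purely for illustration (the real harness would be the mstest project described above), and names like `VirtualScreen` and `apply_output` are made up for this example:

```rust
// Hypothetical virtual copy of the console screen maintained by the test harness.
struct VirtualScreen {
    cols: usize,
    cells: Vec<char>,       // rows * cols characters, row-major
    cursor: (usize, usize), // (row, col)
}

impl VirtualScreen {
    fn new(cols: usize, rows: usize) -> Self {
        Self { cols, cells: vec![' '; cols * rows], cursor: (0, 0) }
    }

    /// Feed output captured from the pseudo-console and update the screen.
    /// A real implementation would parse the VT sequences; only printable
    /// characters are handled here to keep the sketch short.
    fn apply_output(&mut self, bytes: &[u8]) {
        for &b in bytes {
            if b.is_ascii_graphic() || b == b' ' {
                let (row, col) = self.cursor;
                self.cells[row * self.cols + col] = b as char;
                self.cursor = if col + 1 < self.cols { (row, col + 1) } else { (row + 1, 0) };
            }
            // ...handle CR, LF, cursor moves, erase sequences, etc.
        }
    }

    fn text_at(&self, row: usize, col: usize, len: usize) -> String {
        self.cells[row * self.cols + col..][..len].iter().collect()
    }
}

#[test]
fn typed_text_appears_on_the_virtual_screen() {
    let mut screen = VirtualScreen::new(80, 25);
    // In the real harness this output would come from the pseudo-console
    // after the test injected the keystrokes "hello" into the editor.
    screen.apply_output(b"hello");
    assert_eq!(screen.text_at(0, 0, 5), "hello");
    assert_eq!(screen.cursor, (0, 5));
}
```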
Do you already have a tool in mind?
I don't, but I'd like to make sure that this tool remains portable, including any end-to-end test environments we add.
While we did consider writing just a renewed Edit for Windows in the beginning, it became obvious quite quickly that any editor that only works on one OS is practically doomed to fail. No one wants to learn the shortcuts for an editor, only to SSH into a Linux server and suddenly have to remember how to use nano efficiently. I believe this extends to any test software we use.
FWIW, if we ever write end-to-end tests, I think it may be better to create a fake `Framebuffer` struct. This would give us some flexibility to improve the VT output without breaking all the tests.
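A minimal sketch of what I mean, assuming the renderer draws into a trait: the names here (`Framebuffer`, `FakeFramebuffer`, `put`, `set_cursor`) are illustrative and not the actual types in the codebase.

```rust
// Surface the editor renders into. The real implementation would encode VT;
// the fake just records what was drawn, so tests never parse escape codes.
trait Framebuffer {
    fn put(&mut self, row: u16, col: u16, ch: char);
    fn set_cursor(&mut self, row: u16, col: u16);
}

struct FakeFramebuffer {
    cols: u16,
    cells: Vec<char>,
    cursor: (u16, u16),
}

impl FakeFramebuffer {
    fn new(cols: u16, rows: u16) -> Self {
        Self { cols, cells: vec![' '; (cols as usize) * (rows as usize)], cursor: (0, 0) }
    }
    fn row_text(&self, row: u16) -> String {
        let start = (row as usize) * (self.cols as usize);
        self.cells[start..start + self.cols as usize].iter().collect()
    }
}

impl Framebuffer for FakeFramebuffer {
    fn put(&mut self, row: u16, col: u16, ch: char) {
        self.cells[(row as usize) * (self.cols as usize) + col as usize] = ch;
    }
    fn set_cursor(&mut self, row: u16, col: u16) {
        self.cursor = (row, col);
    }
}

#[test]
fn render_goes_to_the_fake_framebuffer() {
    let mut fb = FakeFramebuffer::new(80, 25);
    // A real test would call something like editor.render(&mut fb) here;
    // these calls just stand in for the editor drawing "hi".
    fb.put(0, 0, 'h');
    fb.put(0, 1, 'i');
    fb.set_cursor(0, 2);
    assert!(fb.row_text(0).starts_with("hi"));
    assert_eq!(fb.cursor, (0, 2));
}
```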
> I have thoughts on how to do automated testing. Do you have a plan?
I think that for testing we can use an approach similar to ratatui's, for both UI output and keyboard event handling. I have used it a lot and it works nicely.
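For example, this is roughly how a ratatui-style test looks with its `TestBackend` (the widget is just a placeholder, and the exact assertion helpers can differ between ratatui versions):

```rust
use ratatui::{backend::TestBackend, buffer::Buffer, layout::Rect, widgets::Paragraph, Terminal};

#[test]
fn paragraph_renders_into_the_test_backend() {
    // TestBackend keeps an in-memory buffer instead of writing to a real terminal.
    let backend = TestBackend::new(10, 1);
    let mut terminal = Terminal::new(backend).unwrap();

    terminal
        .draw(|frame| {
            frame.render_widget(Paragraph::new("hello"), Rect::new(0, 0, 10, 1));
        })
        .unwrap();

    // Compare the rendered buffer against the expected screen contents.
    terminal
        .backend()
        .assert_buffer(&Buffer::with_lines(vec!["hello     "]));
}
```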
> I think it may be better to create a fake `Framebuffer` struct.
I would recommend against it. While you could do that for simple unit tests, I suggest a test tool that is 100% independent of the program's code and treats the program as a black box. Ideally, the test program would not be written in Rust and would share absolutely no code with it.
Then you can run exactly the same tests against any compiled binary and confirm when a regression occurred.
Because the tool would only be using the editor's stdin and stdout, it could run the editor over ssh, so the same tool could still test Linux builds. Success or failure is about what appears in the pseudo-console, not where it came from.
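To illustrate the black-box interaction (in Rust only because it is convenient here; as said above, the real tool could be in any language), a rough sketch using the `portable-pty` crate. The binary name `edit` and the injected keystrokes are placeholders, and the exact crate API may vary by version:

```rust
use std::io::{Read, Write};
use portable_pty::{native_pty_system, CommandBuilder, MasterPty, PtySize, PtySystem, SlavePty};

fn main() {
    let pty_system = native_pty_system();
    let pair = pty_system
        .openpty(PtySize { rows: 25, cols: 80, pixel_width: 0, pixel_height: 0 })
        .expect("failed to open a pseudo-console");

    // This could just as well be `ssh somehost edit`: the harness never cares
    // where the bytes come from, only what arrives in the pseudo-console.
    let _child = pair
        .slave
        .spawn_command(CommandBuilder::new("edit"))
        .expect("failed to spawn the editor");

    let mut reader = pair.master.try_clone_reader().expect("no pty reader");
    let mut writer = pair.master.take_writer().expect("no pty writer");

    // Inject input exactly as a user would type it...
    writer.write_all(b"hello").expect("write failed");
    writer.flush().expect("flush failed");

    // ...and judge success purely by the captured output (text + VT codes).
    let mut buf = [0u8; 4096];
    let n = reader.read(&mut buf).expect("read failed");
    assert!(n > 0, "editor produced no output");
    // (Waiting for / terminating the child process is omitted for brevity.)
}
```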