Compensating for prism misalignment
In order to compensate for the prism's alignment error, we first need to identify the angle of offset for each facet.
I think that, to do this, the image needs to be rotated in the opposing direction by its angle of offset. My suggestion is to generate your test pattern as a PNG, rotate it, and then use the rotated version for a single-facet exposure. I believe this will fix the alignment issue in the stitched image.
If it does, then it's merely a matter of automating the process for each facet and combining the results into a single chunk of data (within piltoarray).
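The per-facet rotation could be sketched as follows. This is a minimal sketch using Pillow, assuming the offsets have already been measured; the stripe pattern and the offset angles below are hypothetical placeholders, not values from the project.

```python
from PIL import Image, ImageDraw

# Hypothetical per-facet offset angles in degrees, measured beforehand.
facet_offsets_deg = [0.0, 0.12, -0.08, 0.05]

# Generate a simple test pattern: vertical stripes on a square canvas.
pattern = Image.new("L", (256, 256), 0)
draw = ImageDraw.Draw(pattern)
for x in range(0, 256, 16):
    draw.line([(x, 0), (x, 255)], fill=255)

# Rotate the pattern opposite to each facet's offset so the stitched
# exposure lines up again; one corrected image per facet.
corrected = [
    pattern.rotate(-offset, resample=Image.BICUBIC)
    for offset in facet_offsets_deg
]
```

Each entry of `corrected` would then be exposed with its matching facet only, before stitching.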
@GravisZro, thanks for your comment. To address this, I see the following steps:
- Try to label the facets using their facet times. It was previously observed that each facet has a slightly different number of ticks (see https://hackaday.io/project/21933-prism-laser-scanner/log/169266-facet-times); use this to identify individual facets.
- With the optical alignment setup, where the laser is projected directly onto an image sensor, verify that you can select facets and measure their aberrations.
- Use the measured aberrations in the slicing algorithm, and adapt the data path so data arrives with the facet number attached.
- Create a pattern and verify that the errors are removed.
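The first step, labeling facets by their tick counts, could look roughly like this. A minimal sketch: the reference tick counts, the `label_facet` helper, and the tolerance are all hypothetical illustrations of the idea from the facet-times log, not the project's actual values or API.

```python
def label_facet(ticks, reference_ticks, tolerance=2):
    """Return the index of the facet whose reference tick count is
    closest to the measured count, or None if none is within tolerance."""
    best = min(range(len(reference_ticks)),
               key=lambda i: abs(reference_ticks[i] - ticks))
    if abs(reference_ticks[best] - ticks) <= tolerance:
        return best
    return None

# Hypothetical reference tick counts for the four prism facets.
reference = [1023, 1019, 1027, 1021]
print(label_facet(1020, reference))  # → 1 (closest to 1019)
print(label_facet(1000, reference))  # → None (no facet within tolerance)
```

In practice the reference counts would be calibrated once per prism and the label attached to each scan line as it enters the data path.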
@GravisZro — I’ve finished refactoring the Hexastorm codebase to use the latest Amaranth HDL syntax, and printing now works correctly. The earlier SPI problem turned out to be a bandwidth cap: the web server tops out at 2.9 MHz, whereas running without it lets us reach the full 3 MHz. My next focus is the timing and facet-detection logic, which I consider critical. I’ll probably start investigating that at the end of July, as I’ll be on holiday first.
@GravisZro Facet detection works, and the jitter issues have been resolved (see https://hackaday.io/project/21933-prism-laser-scanner/log/243757-jitter-test). Next up is streaming lines to the FPGA via SPI RAM and correcting for each facet.