Pico-DMX
DmxInput improvement: Trigger the IRQ not based on number of received bytes but on the state of the line
As described in #20, some DMX senders send fewer than 512 channels. Our current implementation waits for a specific number of bytes (= DMA transfers) before the IRQ is triggered and the data is assumed to be ready. Since DMX doesn't really define an "end of frame" condition, we could assert one of the PIO IRQs in the PIO program whenever a BREAK has been detected. This way, the user of the library gets a chance to use the data that is already in the buffer up to that point, even if it's not yet the expected number of bytes. A sketch of how the CPU side could catch such a PIO IRQ is below.
Open for discussion; I'm not sure yet whether it makes sense.
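A minimal sketch of the CPU-side handling, assuming the PIO program would be extended to assert PIO interrupt 0 on BREAK (that PIO change is not shown, and all names here are illustrative, not the library's actual API):

```cpp
#include "hardware/pio.h"
#include "hardware/irq.h"

// Hypothetical handler, called when the PIO program asserts its
// interrupt 0 after detecting a BREAK on the line.
static void dmx_break_handler() {
    if (pio_interrupt_get(pio0, 0)) {
        pio_interrupt_clear(pio0, 0);
        // The bytes received so far form a complete (possibly short)
        // frame; hand the buffer to the library user here.
    }
}

static void dmx_break_irq_init() {
    // Route PIO interrupt 0 to the PIO0_IRQ_0 system interrupt.
    pio_set_irq0_source_enabled(pio0, pis_interrupt0, true);
    irq_set_exclusive_handler(PIO0_IRQ_0, dmx_break_handler);
    irq_set_enabled(PIO0_IRQ_0, true);
}
```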
It makes sense to at least get an interrupt at BREAK so you can fail gracefully when you receive a truncated packet while listening for a higher channel. Was this resolved?
Yes, indeed, an IRQ on BREAK or MAB would be great. But no, it's not yet implemented and not resolved.
I can confirm that many controllers (especially cheaper Chinese consoles) send fewer than 512 channels. 192, for instance.
Triggering on BREAK is also good for synchronizing independent receivers, for instance pixels. Let's say you have RGB pixels that consume 3 channels each: one patched to start address 1 and one patched to start address 508 would see no less than 22ms of difference in the received data (507 slots at 44µs each). A 22ms difference can be noticeable, especially on camera.
I had an idea yesterday; I haven't yet investigated whether it's feasible, but sharing it anyway: The GPIO input pin is linked to the PIO state machine but also triggers interrupts on falling and rising edges. The PIO state machine is configured but not yet started. When an interrupt occurs, its timestamp (count of µs since boot) is recorded and compared to the previous one. This way, we should be able to detect BREAK and MAB quite reliably. The main question here is CPU time consumption, especially when a lot of IRQs are being fired, i.e. while a valid DMX signal is transmitting its bytes. When the MAB is detected (or better: the falling edge ending it), the state machine is started and captures the bytes right away. (We could also start it earlier and have the PIO program detect the beginning of the packet itself?) As soon as no IRQ has happened for some time (= we would need a timer-based IRQ as well), we know that we're in the BREAK phase and mark the frame as completed, passing it to the library user. (A rough sketch of such an edge handler is below.) I'm curious when I will find time to give it all a try ;)
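A minimal sketch of that edge-timestamp idea, assuming the pico-sdk GPIO IRQ API (the pin number is a placeholder, and the "no IRQ for some time" frame-completion timer is omitted):

```cpp
#include "pico/stdlib.h"
#include "hardware/gpio.h"
#include "hardware/pio.h"

#define DMX_PIN 0   // placeholder; the real pin comes from the DmxInput instance

static uint32_t last_fall_us = 0;
static bool in_mab = false;

static void dmx_edge_isr(uint gpio, uint32_t events) {
    uint32_t now = time_us_32();
    if (events & GPIO_IRQ_EDGE_RISE) {
        // The line was LOW since the last falling edge. A LOW phase of
        // at least 88 µs is a BREAK, so this rising edge starts the MAB.
        in_mab = (now - last_fall_us >= 88);
    }
    if (events & GPIO_IRQ_EDGE_FALL) {
        last_fall_us = now;
        if (in_mab) {
            // The falling edge ending the MAB is the start bit of the start
            // code: start the (pre-configured) PIO state machine right away.
            in_mab = false;
            pio_sm_set_enabled(pio0, 0, true);
        }
    }
}

void dmx_edge_detect_init(void) {
    gpio_set_irq_enabled_with_callback(DMX_PIN,
        GPIO_IRQ_EDGE_RISE | GPIO_IRQ_EDGE_FALL, true, &dmx_edge_isr);
}
```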
Interesting idea. Let us know if you get time to try it (and it works 😉 )
Just look at falling edges to keep it simpler?
BTW, your library worked great for my project.
Thanks, Gavin
@kripton any updates on this? :)
Indeed, an update on this from my side is overdue :) It's not yet working the way I would like, but the main idea works. I created a new branch of the library (https://github.com/kripton/Pico-DMX/tree/6c52f0b97e081a2b6be07c17e9348788127acd00) and a corresponding test app (https://github.com/kripton/rp2040-dmxinput/tree/a85e14eff7f862e1d2a70787792a0db6b923d446)
How it works and what doesn't:
- The main decision on when a DMX frame is finished is no longer based on the number of bytes received but on the fact that the line stayed LOW for at least 88µs
- This is detected by creating an alarm timer that triggers after 88µs but is re-armed on every falling edge of the GPIO pin (a sketch of this follows after the list) (https://github.com/kripton/Pico-DMX/blob/6c52f0b97e081a2b6be07c17e9348788127acd00/src/DmxInput.cpp#L295)
- The alarm timer is created in the default alarm pool. This pool has space for 16 timers and we would use at most 8 of them
- This creates quite a number of interrupts, but the handler is rather short. It also needs higher-than-normal priority since otherwise the DMA could interrupt or delay it, leading to premature alarm interrupts
- The PIO program is shortened a bit by no longer waiting for/during the BREAK (https://github.com/kripton/Pico-DMX/blob/6c52f0b97e081a2b6be07c17e9348788127acd00/extras/DmxInput.pio#L13)
- When the alarm is triggered (= BREAK detected), the PIO is reset & started (it will stall until the line goes HIGH for the MAB) and the DMA is also reset
- When a user wants only the first X channels, the DMA will trigger its "transfer complete" interrupt after "num_channels + 1" transfers, so the library user gets access to the wanted data with less latency (compared to waiting for the BREAK-detect signal after all 512 channels have been sent) and the RX buffer can be made smaller as well
- I have removed the "start_channel" support since that's just not easy to do using the DMA-paced approach. What could be done later (a sketch follows below):
- On the BREAK, set the DMA's transfer count to "start_channel" and, until that completes (indicated by the "transfer finished" IRQ), write all data to a dummy location (using "advance_write_address=false")
- When that IRQ comes, reset the transfer count to "num_channels" and write the data to the user-supplied buffer until all relevant channels have been received
- However, the DMA reset is what still gives me some trouble:
- If num_channels is lower than the number of channels sent by the transmitter, it's all good:
- BREAK is detected, DMA is set up, data comes in, the DMA reaches transfer_count, the IRQ is asserted, the library-user callback is executed, and the DMA stays idle until the next BREAK detection
- However, if num_channels is higher than what comes in, it gets a bit weird:
- BREAK is detected, DMA is set up, data comes in, the DMA copies it and eventually the next BREAK is detected. The IRQ handler sees that the library-user callback has not yet been called, calls it, resets everything and it all begins again
- However, if I set "num_channels" to 600 with 512 channels on the wire, on the SECOND DMX frame the DMA triggers "transfer_complete" after about 87 bytes. It doesn't seem to reset its transfer_count correctly. Help welcome :)
- Also note that with this approach, when you tell the library to start reading, we don't start the DMA right away. We set up the BREAK-detect alarm, wait for the first BREAK and only then set up the DMA
- Also note that I'm currently testing my code against https://github.com/OpenLightingProject/rp2040-dmxsun, which always sends 512 bytes and has a comfortably long BREAK of about 2ms. I haven't yet tested DMX transmitters that:
- Have a BREAK that is as short as the spec allows (88µs)
- Send fewer than 512 channels + start code per frame
- Keep the line "idle high" after the last byte for some time. I don't understand why anyone would do this, but the spec allows it for up to 1 second
- I also didn't test multiple instances in parallel yet
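A minimal sketch of the re-armed BREAK-detect alarm described above, assuming the pico-sdk default alarm pool (`add_alarm_in_us`/`cancel_alarm`); the function names are illustrative:

```cpp
#include "pico/stdlib.h"
#include "hardware/gpio.h"

static uint dmx_pin;                  // set in dmx_break_detect_init()
static alarm_id_t break_alarm = -1;

// Fires only if no falling edge re-armed the alarm for 88 µs.
static int64_t on_break_alarm(alarm_id_t id, void *user_data) {
    if (!gpio_get(dmx_pin)) {
        // The line stayed LOW for 88 µs: that's a BREAK. Mark the previous
        // frame as complete and reset PIO + DMA here.
    }
    return 0;   // one-shot alarm: do not reschedule
}

static void on_falling_edge(uint gpio, uint32_t events) {
    if (break_alarm >= 0) cancel_alarm(break_alarm);
    break_alarm = add_alarm_in_us(88, on_break_alarm, NULL, true);
}

void dmx_break_detect_init(uint pin) {
    dmx_pin = pin;
    gpio_set_irq_enabled_with_callback(pin, GPIO_IRQ_EDGE_FALL,
                                       true, &on_falling_edge);
}
```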
All that being said, the code is not yet cleaned up and contains lots of "debugging GPIO pins" that indicate program and PIO state, which I capture with a logic analyzer.
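And a sketch of the two-stage DMA idea for re-adding "start_channel" support, assuming the pico-sdk DMA API; the globals and the reconfiguration details (read address, DREQ) are illustrative and would come from the existing setup code:

```cpp
#include "hardware/dma.h"

static int dma_chan;
static uint start_channel, num_channels;
static uint8_t dummy;                   // scratch byte for discarded data
static volatile uint8_t *user_buffer;
static volatile bool skipping;          // true while discarding channels

// Stage 1, run on BREAK detection: discard the first start_channel bytes
// by writing them all to a single dummy byte (write increment disabled).
static void dma_start_skip_stage(void) {
    dma_channel_config c = dma_channel_get_default_config(dma_chan);
    channel_config_set_write_increment(&c, false);  // "advance_write_address=false"
    // ... set read address / DREQ to the PIO RX FIFO as in the existing code ...
    dma_channel_set_config(dma_chan, &c, false);
    skipping = true;
    dma_channel_set_write_addr(dma_chan, &dummy, false);
    dma_channel_set_trans_count(dma_chan, start_channel, true);  // trigger
}

// Stage 2, run from the "transfer finished" IRQ: capture the channels the
// user actually asked for into their buffer.
static void dma_irq_handler(void) {
    dma_channel_acknowledge_irq0(dma_chan);
    if (skipping) {
        skipping = false;
        dma_channel_config c = dma_channel_get_default_config(dma_chan);
        channel_config_set_write_increment(&c, true);
        dma_channel_set_config(dma_chan, &c, false);
        dma_channel_set_write_addr(dma_chan, user_buffer, false);
        dma_channel_set_trans_count(dma_chan, num_channels, true);
    } else {
        // All relevant channels received: invoke the library-user callback.
    }
}
```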
The design looks good. Nice work. I look forward to testing it when it's done. I have an old Chauvet controller that I think sends only 24 or 48 channels that I could test with. I used your DmxOutput on the other core of my Pico for testing since I didn't like lugging that old box around when I didn't need to.
Keep up the good work.
Is there anything I can do to help test? I use the DmxInput class heavily in my own code (a servo and stepper controller that works off DMX signals).
Most of the time I'm using an Enttec DMX USB Pro hooked up to a Linux host running OpenLighting in a Docker container.
Thanks for your offers to test things. I'll let you know when I think it has reached a state where it should be usable. Not there yet, unfortunately.
Indeed, an update on this from my side is overdue :) It's not yet working as I would like to have it but the main idea works. I created a new branch on the library (https://github.com/kripton/Pico-DMX/tree/6c52f0b97e081a2b6be07c17e9348788127acd00) and a corresponding test app (https://github.com/kripton/rp2040-dmxinput/tree/a85e14eff7f862e1d2a70787792a0db6b923d446) ...
However, if num_channels is higher than what comes in, it gets a bit weird:
- BREAK is detected, DMA is set up, data comes in, the DMA copies it and eventually the next BREAK is detected. The IRQ handler sees that the library-user callback has not yet been called, calls it, resets everything and it all begins again
- However, if I set "num_channels" to 600 with 512 channels on the wire, on the SECOND DMX frame the DMA triggers "transfer_complete" after about 87 bytes. It doesn't seem to reset its transfer_count correctly. Help welcome :)
Hello! Novice here!
It seems from this Discussion that the counter doesn't clear with pio_sm_restart().
Would a line that manually writes 0 to the counter after resetting the PIO work?
I could be way off! I'm very much still learning!
Cheers, and thanks for all of your work
It seems from this Discussion that the counter doesn't clear with pio_sm_restart().
Would a line that manually writes 0 to the counter after resetting the PIO work?
@juicebox6030: Thanks for the hint, I was not aware of that issue until today. I might have already tried that in my experiments some months back, but I honestly can't remember anymore. I will keep this in mind when I come back to working on the DMX input. Definitely great that you mentioned it here!
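For reference, a minimal sketch of such a "full" state machine reset, assuming the counter lives in the Y scratch register (whether DmxInput.pio actually keeps its counter in Y is an assumption here). `pio_sm_restart()` clears the shift counters but leaves the scratch registers and the program counter untouched:

```cpp
#include "hardware/pio.h"

// `offset` is where pio_add_program() loaded the DMX input program.
static void dmx_sm_full_reset(PIO pio, uint sm, uint offset) {
    pio_sm_set_enabled(pio, sm, false);
    pio_sm_clear_fifos(pio, sm);
    pio_sm_restart(pio, sm);                        // clears shift counters etc.
    pio_sm_exec(pio, sm, pio_encode_set(pio_y, 0)); // manually zero the counter
    pio_sm_exec(pio, sm, pio_encode_jmp(offset));   // jump back to the entry point
    pio_sm_set_enabled(pio, sm, true);
}
```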
Some small steps forward; it's really work in progress and not cleaned up one bit, but it might be some inspiration: https://github.com/unitware/Pico-DMX/tree/feature/rdm-rx
On the RX path I'm using packet inspection to catch the RDM length and set the transfer count properly (see the sketch below), and I'm using a PIO program to detect BREAK in order to catch short packets.
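A hedged sketch of what such length inspection could look like (not unitware's actual code): in ANSI E1.20 framing, byte 0 of an RDM packet is the START code 0xCC, byte 1 the sub-START code 0x01, and byte 2 the message length counted from the START code through the end of the parameter data, with a 2-byte checksum following:

```cpp
#include "hardware/dma.h"

// Hypothetical helper, called once the first three bytes are in `buf`:
// re-arms the DMA channel for exactly the rest of this RDM packet.
static void rdm_set_remaining_length(const volatile uint8_t *buf, int dma_chan) {
    if (buf[0] == 0xCC && buf[1] == 0x01) {
        uint32_t total = (uint32_t)buf[2] + 2;                  // + checksum
        dma_channel_set_trans_count(dma_chan, total - 3, true); // 3 already read
    }
}
```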
On the TX path I have set up an interrupt on TX done to be able to switch between RX and TX properly without too much polling. There is also an option to send RDM discovery responses without the BREAK.
I do not expect to tidy it up and do a proper pull request; rather, I think I'm heading in the direction of a FreeRTOS, RP2040-only, bidirectional DMX device, so it might diverge a bit. Just wanted to share some inspiration.
At the time of writing the code is dirty, but a program built on it passes the Open Lighting RDM test suite.