[TC-IDM-7.1] Step 2. Timeout to subscribe attribute
Reproduction steps
- Pair the DUT into 5 fabrics.
- Do Step 1 (run RD*/step1.sh using the script below). Everything seems okay in this step.
- Use Ctrl-C to stop all chip-tool instances.
- Do Step 2 (run RD*/step2.sh). For RD1, the subscribe completes successfully. For RD2, the subscribe fails with a timeout.
The DUT cannot recover automatically. Once this happens, I have to reboot the ESP32, after which it recovers.
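The five-fabric pairing in the first step can be sketched with chip-tool's per-commissioner identities (a hedged sketch, not taken from the report's scripts: node ID 1 and the default setup code 20202021 are assumptions, and the commands are printed rather than executed since real hardware is needed):

```shell
#!/usr/bin/env bash
# Sketch only: commission the DUT onto 5 fabrics, one per chip-tool
# commissioner identity (alpha is the implicit default). Node ID 1 and
# setup code 20202021 are assumed defaults, not values from this report.
cmds=""
for name in alpha beta gamma 4 5; do
  c="./chip-tool pairing onnetwork 1 20202021 --commissioner-name $name"
  cmds="$cmds$c"$'\n'
  echo "$c"   # print instead of executing
done
```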
Bug prevalence
always
GitHub hash of the SDK that was being used
8b6c0c0a7337c76c016891e26cc7b8df066b0739 (sve2 rpi image v2.4)
Platform
esp32
Platform Version(s)
No response
Anything else?
Scripts used to reproduce this issue: TC-IDM-7.1.zip. Run step1 for RD1-5, then Ctrl-C to stop.
Log files: RD1.log RD2.log (<-- error message here) RD3.log RD4.log RD5.log dut.log
So, the DUT-side logs are incomplete: I can't find there some messages that I see on the chip-tool side.
For example, this message is not present in the DUT log, even though it was received from the DUT:
[1662602507.919170][9845:9850] CHIP:EM: Received message of type 0x10 with protocolId (0, 0) and MessageCounter:167217626 on exchange 31026i
Importantly, the set of steps is completely at odds with what the test plan says:
- Step 1 of IDM-7.1 requires multiple subs with 3 paths each. Instead, there are subs with just 1 path each.
- Step 2 requires the first sub from each RDx to have keepSubscriptions = false. Otherwise, it's going to trigger evictions of subs on the target, and that's not what we want.
- Step 2 requires RD1 to subscribe with 4 paths.
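For concreteness, a single subscribe carrying 3 paths (as opposed to three separate single-path subscribes) can be expressed with chip-tool's "any" cluster. This is a sketch only: the command shape and IDs are the ones used later in this thread, the keepSubscriptions value follows the test plan's stated requirement for the first sub, and the command is printed rather than run:

```shell
#!/usr/bin/env bash
# Sketch: one subscription carrying 3 attribute paths in a single
# subscribe-by-id, with keepSubscriptions false as the test plan requires
# for the first sub from each RDx. Cluster/attribute IDs are reused from
# a later comment in this thread, not independently verified here.
cmd="any subscribe-by-id '0x0028,0x0028,0x0030' '5,6,0' 100 1000 1 '0,0,0' --keepSubscriptions false"
echo "$cmd"   # print instead of executing
```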
While sending subscriptions from the different RDs, please make sure you're sending within the minimum and maximum interval.
On the RPi platform, we are not facing this issue.
PFA logs: [TC-IDM-7.1]step2-RD1.txt [TC-IDM-7.1]Step 2-RD2.txt [TC-IDM-7.1]step2-RD3.txt [TC-IDM-7.1]step 2-RD4.txt [TC-IDM-7.1]step2-RD5.txt TC-IDM-7.1 [DUT]step2.txt
@mrjerryjohns
Importantly, the set of steps is completely at odds with what the test plan says:
- Step 1 of IDM-7.1 requires multiple subs with 3 paths each. Instead, there are subs with just 1 path each.
No, there are 3 paths: node-label, location and start-up-on-off.
- Step 2 requires the first sub from each RDx to have
keepSubscriptions= false. Otherwise, it's going to trigger evictions of subs on the target and that's not what we want.
The test step document says basic subscribe node-label 100 1000 1 0 --keepSubscriptions 1, so keepSubscriptions = false does not match what the document says.
- Step 2 requires RD1 to subscribe with 4 paths.
Yes. But before the 4th subscribe is sent, RD2 gets a timeout.
By rebooting the DUT between test steps, the test scripts work.
No, there are 3 paths: node-label, location and start-up-on-off.
Please check the syntax of the command. Those are three separate subscribes, and not a single subscribe.
No, there are 3 paths: node-label, location and start-up-on-off.
Please check the syntax of the command. Those are three separate subscribes, and not a single subscribe.
Sorry, I did this step according to the Verification Step 1.4 documentation.
The command for this step in the documentation just sends 3 separate subscribes.
Could you please show me a chip-tool command example, so I can subscribe to exactly 3 paths in one single subscription?
@Kxuan Please follow the steps in the VS doc without those shell scripts and see if you hit the same issue that you reported earlier.
@mrjerryjohns The following is the command to send 3 paths using a single subscribe; the Verification Step document will be modified accordingly.
@Kxuan Please follow the below-mentioned steps to verify test step 2:
- Run this test in chip-tool interactive mode: ./chip-tool interactive start
- On the first reference device (RD1), send a Subscribe Request Message to the DUT. (Below is an example command to send a subscribe request with 3 different paths in a single command.) any subscribe-by-id '0x0028,0x0028,0x0028' '5,6,16' 100 1000 1 '0,0,0' --keepSubscriptions true
- On the 2nd reference device (RD2), send a Subscribe Request Message to the DUT. any subscribe-by-id '0x0028,0x0028,0x0028' '5,6,16' 100 1000 2 '0,0,0' --commissioner-name beta --keepSubscriptions true
- On the 3rd reference device (RD3), send a Subscribe Request Message to the DUT. any subscribe-by-id '0x0028,0x0028,0x0028' '5,6,16' 100 1000 3 '0,0,0' --commissioner-name gamma --keepSubscriptions true
- On the 4th reference device (RD4), send a Subscribe Request Message to the DUT. any subscribe-by-id '0x0028,0x0028,0x0028' '5,6,16' 100 1000 4 '0,0,0' --commissioner-name 4 --keepSubscriptions true
- On the 5th reference device (RD5), send a Subscribe Request Message to the DUT. any subscribe-by-id '0x0028,0x0028,0x0028' '5,6,16' 100 1000 5 '0,0,0' --commissioner-name 5 --keepSubscriptions true
Once all 15 subscriptions are active, send one more subscription command on RD1 and verify that the subscriptions from RD2, RD3, RD4 and RD5 are not affected.
- On the 1st reference device (RD1), send a Subscribe Request Message to the DUT. basic subscribe product-id 100 1000 1 0 --keepSubscriptions true
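Since the five per-RD commands above differ only in the fabric index and commissioner name, they can be generated mechanically. This is a convenience sketch, not part of the verification steps; the commands are printed rather than executed:

```shell
#!/usr/bin/env bash
# Sketch: generate the five per-RD subscribe commands listed above.
# Fabric index runs 1..5; chip-tool's default commissioner (alpha) is
# implied for RD1, and beta/gamma/4/5 are used for RD2..RD5.
names=(alpha beta gamma 4 5)
cmds=""
for i in 1 2 3 4 5; do
  cmd="any subscribe-by-id '0x0028,0x0028,0x0028' '5,6,16' 100 1000 $i '0,0,0'"
  if [ "$i" -gt 1 ]; then
    cmd="$cmd --commissioner-name ${names[$((i-1))]}"
  fi
  cmd="$cmd --keepSubscriptions true"
  cmds="$cmds$cmd"$'\n'
  echo "$cmd"   # print instead of executing
done
```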
Sounds like we cannot actually change the test steps at this point. Given that, the existing tests should work, even though they don't actually validate the right set of behaviors.
@Kxuan can you provide us complete logs from the DUT please?
I changed the command to use the "any" cluster, and sent only one subscribe request:
any subscribe-by-id '0x0028,0x0028,0x0030' '5,6,0' 10 20 1 '0,0,0' --keepSubscriptions true
It seems step 2 can be done successfully, and I do not need to reboot the DUT after step 1 (the step 1 command has also been replaced to use the any cluster).
@Kxuan From what you described above it seems we can close this issue. Could you please let us know if this is still an issue?