python-miio
The response time to get the current power usage varies a lot
Describe the bug
The response time when getting the current power usage from the device varies a lot: sometimes it is long, sometimes short.
Version information (please complete the following information):
- OS: Linux
- python-miio: miiocli, version 0.6.0.dev0
Device information:
If the issue is specific to a device [Use miiocli device --ip <ip address> --token <token> info]:
- Model: cuco.plug.v3
- Hardware version: ESP32C3
- Firmware version: 1.0.9
- Supported using: GenericMiot
- Supported by genericmiot: True
To Reproduce
Steps to reproduce the behavior:
- Run miiocli genericmiot --ip 1.1.1.1 --token abcdef get_property_by 11 2
- Measure the time to completion of the command: sometimes it takes more than 2000 ms to get the result, sometimes only around 100 ms.
Expected behavior
It should be possible to obtain power consumption data quickly (e.g. below 200 ms for all requests).
Hi,
There could be two reasons for this:
- The device ignores subsequent calls using the same sequence id, making it necessary to re-try the request. This is a bug (#1751); if this is the case, the easiest solution until that is fixed is to write a simple python script that keeps track of the id between calls (see the sketch after this list).
- The delay is caused by genericmiot loading the protocol schema files in the background. These are cached, but they are refreshed regularly. If you run your command with --debug you will see some output if that's the case. If the delay is indeed caused by this, and you just require some properties whose ids are known to you (like 11 and 2 in this case), you could try using miotdevice instead of genericmiot in your command line to see if that helps.
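For reference, here is a minimal sketch of that first workaround (assuming python-miio's MiotDevice class; the ip, token and the siid/piid values are placeholders taken from this report):

from miio import MiotDevice

# Reusing a single device object keeps the internal message id incrementing
# between calls, instead of starting from a fresh handshake/id on every CLI
# invocation, which is what runs into #1751.
dev = MiotDevice("1.1.1.1", "abcdef")  # placeholder ip/token

while True:
    # siid=11, piid=2 is the power reading queried above
    print(dev.get_property_by(11, 2))

For the second point, the equivalent command line would presumably just swap the device class, i.e. miiocli miotdevice --ip 1.1.1.1 --token abcdef get_property_by 11 2, assuming get_property_by is exposed for miotdevice as well.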
Let me know if that helps!
Thanks for the quick reply!
The problem I had seemed to be the first one, writing a simple python script to call the API solved the problem and now it works fine, thanks!
Sorry, I just tested it again and this weird issue came up again.
I wrote this python script:
from miio import DeviceFactory
import time

dev = DeviceFactory.create("1.1.1.1", "aaaa")

while True:
    start = time.time() * 1000
    print(dev.get_property_by(11, 2))
    print("Time used: " + str(int(time.time() * 1000 - start)) + " ms")
Its output is:
[{'did': '11-2', 'siid': 11, 'piid': 2, 'code': 0, 'value': 47}]
Time used: 281 ms
[{'did': '11-2', 'siid': 11, 'piid': 2, 'code': 0, 'value': 47}]
Time used: 233 ms
[{'did': '11-2', 'siid': 11, 'piid': 2, 'code': 0, 'value': 47}]
Time used: 222 ms
[{'did': '11-2', 'siid': 11, 'piid': 2, 'code': 0, 'value': 47}]
Time used: 229 ms
[{'did': '11-2', 'siid': 11, 'piid': 2, 'code': 0, 'value': 47}]
Time used: 230 ms
[{'did': '11-2', 'siid': 11, 'piid': 2, 'code': 0, 'value': 52}]
Time used: 291 ms
[{'did': '11-2', 'siid': 11, 'piid': 2, 'code': 0, 'value': 52}]
Time used: 220 ms
[{'did': '11-2', 'siid': 11, 'piid': 2, 'code': 0, 'value': 52}]
Time used: 225 ms
[{'did': '11-2', 'siid': 11, 'piid': 2, 'code': 0, 'value': 52}]
Time used: 221 ms
[{'did': '11-2', 'siid': 11, 'piid': 2, 'code': 0, 'value': 52}]
Time used: 230 ms
[{'did': '11-2', 'siid': 11, 'piid': 2, 'code': 0, 'value': 52}]
Time used: 244 ms
[{'did': '11-2', 'siid': 11, 'piid': 2, 'code': 0, 'value': 52}]
Time used: 237 ms
[{'did': '11-2', 'siid': 11, 'piid': 2, 'code': 0, 'value': 52}]
Time used: 218 ms
[{'did': '11-2', 'siid': 11, 'piid': 2, 'code': 0, 'value': 52}]
Time used: 229 ms
[{'did': '11-2', 'siid': 11, 'piid': 2, 'code': 0, 'value': 52}]
Time used: 229 ms
[{'did': '11-2', 'siid': 11, 'piid': 2, 'code': 0, 'value': 47}]
Time used: 299 ms
[{'did': '11-2', 'siid': 11, 'piid': 2, 'code': 0, 'value': 47}]
Time used: 238 ms
The time for each request is 200-300 ms, which is normal.
But when I added a sleep between each request, something seems to go wrong.
from miio import DeviceFactory
import time

dev = DeviceFactory.create("1.1.1.1", "aaaa")

while True:
    start = time.time() * 1000
    print(dev.get_property_by(11, 2))
    print("Time used: " + str(int(time.time() * 1000 - start)) + " ms")
    time.sleep(1)  # <= Only change here
Now there are weird spikes in the response time (some requests take more than 1000 ms, some less than 100 ms):
[{'did': '11-2', 'siid': 11, 'piid': 2, 'code': 0, 'value': 45}]
Time used: 84 ms
[{'did': '11-2', 'siid': 11, 'piid': 2, 'code': 0, 'value': 46}]
Time used: 130 ms
[{'did': '11-2', 'siid': 11, 'piid': 2, 'code': 0, 'value': 46}]
Time used: 359 ms
[{'did': '11-2', 'siid': 11, 'piid': 2, 'code': 0, 'value': 46}]
Time used: 2666 ms
[{'did': '11-2', 'siid': 11, 'piid': 2, 'code': 0, 'value': 45}]
Time used: 330 ms
[{'did': '11-2', 'siid': 11, 'piid': 2, 'code': 0, 'value': 45}]
Time used: 83 ms
[{'did': '11-2', 'siid': 11, 'piid': 2, 'code': 0, 'value': 46}]
Time used: 327 ms
[{'did': '11-2', 'siid': 11, 'piid': 2, 'code': 0, 'value': 46}]
Time used: 351 ms
[{'did': '11-2', 'siid': 11, 'piid': 2, 'code': 0, 'value': 46}]
Time used: 1396 ms
[{'did': '11-2', 'siid': 11, 'piid': 2, 'code': 0, 'value': 46}]
Time used: 299 ms
[{'did': '11-2', 'siid': 11, 'piid': 2, 'code': 0, 'value': 46}]
Time used: 338 ms
[{'did': '11-2', 'siid': 11, 'piid': 2, 'code': 0, 'value': 46}]
Time used: 87 ms
[{'did': '11-2', 'siid': 11, 'piid': 2, 'code': 0, 'value': 46}]
Time used: 332 ms
[{'did': '11-2', 'siid': 11, 'piid': 2, 'code': 0, 'value': 46}]
Time used: 358 ms
[{'did': '11-2', 'siid': 11, 'piid': 2, 'code': 0, 'value': 45}]
Time used: 1398 ms
[{'did': '11-2', 'siid': 11, 'piid': 2, 'code': 0, 'value': 45}]
Time used: 292 ms
[{'did': '11-2', 'siid': 11, 'piid': 2, 'code': 0, 'value': 47}]
Time used: 327 ms
[{'did': '11-2', 'siid': 11, 'piid': 2, 'code': 0, 'value': 47}]
Time used: 81 ms
[{'did': '11-2', 'siid': 11, 'piid': 2, 'code': 0, 'value': 46}]
Time used: 342 ms
[{'did': '11-2', 'siid': 11, 'piid': 2, 'code': 0, 'value': 60}]
Time used: 354 ms
[{'did': '11-2', 'siid': 11, 'piid': 2, 'code': 0, 'value': 60}]
Time used: 306 ms
[{'did': '11-2', 'siid': 11, 'piid': 2, 'code': 0, 'value': 61}]
Time used: 133 ms
[{'did': '11-2', 'siid': 11, 'piid': 2, 'code': 0, 'value': 61}]
Time used: 279 ms
[{'did': '11-2', 'siid': 11, 'piid': 2, 'code': 0, 'value': 49}]
Time used: 997 ms
[{'did': '11-2', 'siid': 11, 'piid': 2, 'code': 0, 'value': 48}]
Time used: 760 ms
Could you add a timestamp to the print output to see if those delays appear at regular intervals? Just wondering if it could have something to do with the handshake handling (i.e., the device expecting one at certain intervals), but could it also be that the device happens to be in some sort of power save mode and/or that the wifi connection is congested?
My device is very close to the router and there are no other devices on the LAN interfering with it, so I don't think network quality is causing the delay. (I've also repeated the experiment several times, and the response time of the device is always very stable if time.sleep(1) is removed.)
I changed the code to print the timestamp:
from miio import GenericMiot
import time
import logging

logging.basicConfig(level=logging.DEBUG)

dev = GenericMiot("1.1.1.1", "xxxx", debug=True)

while True:
    start = time.time() * 1000
    print("Start request: " + str(int(start)))
    print(dev.get_property_by(11, 2))
    print("Time used: " + str(int(time.time() * 1000 - start)) + " ms")
    time.sleep(1)
Output:
Start request: 1714881561474
[{'did': '11-2', 'siid': 11, 'piid': 2, 'code': 0, 'value': 51}]
Time used: 1301 ms
Start request: 1714881563777
[{'did': '11-2', 'siid': 11, 'piid': 2, 'code': 0, 'value': 51}]
Time used: 1398 ms
Start request: 1714881566176
[{'did': '11-2', 'siid': 11, 'piid': 2, 'code': 0, 'value': 50}]
Time used: 110 ms
Start request: 1714881567287
[{'did': '11-2', 'siid': 11, 'piid': 2, 'code': 0, 'value': 50}]
Time used: 1403 ms
Start request: 1714881569692
[{'did': '11-2', 'siid': 11, 'piid': 2, 'code': 0, 'value': 51}]
Time used: 1182 ms
Start request: 1714881571875
[{'did': '11-2', 'siid': 11, 'piid': 2, 'code': 0, 'value': 51}]
Time used: 79 ms
Start request: 1714881572956
[{'did': '11-2', 'siid': 11, 'piid': 2, 'code': 0, 'value': 49}]
Time used: 1196 ms
Start request: 1714881575153
[{'did': '11-2', 'siid': 11, 'piid': 2, 'code': 0, 'value': 48}]
Time used: 139 ms
Start request: 1714881576294
[{'did': '11-2', 'siid': 11, 'piid': 2, 'code': 0, 'value': 47}]
Time used: 1399 ms
Start request: 1714881578694
[{'did': '11-2', 'siid': 11, 'piid': 2, 'code': 0, 'value': 54}]
Time used: 587 ms
Start request: 1714881580282
[{'did': '11-2', 'siid': 11, 'piid': 2, 'code': 0, 'value': 54}]
Time used: 71 ms
Start request: 1714881581354
[{'did': '11-2', 'siid': 11, 'piid': 2, 'code': 0, 'value': 47}]
Time used: 1197 ms
Start request: 1714881583552
[{'did': '11-2', 'siid': 11, 'piid': 2, 'code': 0, 'value': 48}]
Time used: 1398 ms
Start request: 1714881585950
[{'did': '11-2', 'siid': 11, 'piid': 2, 'code': 0, 'value': 49}]
Time used: 141 ms
It does seem like the delays are somewhat cyclical?