v2.0.0 with Async Architecture
v2.0.0 - Async Architecture Introduction (MAJOR REVISION)
This major release introduces the foundation for native asyncio-based device communication while fully preserving the existing synchronous API for backward compatibility.
Highlights:
- Version bump to 2.x to signal new async subsystem (legacy sync classes unchanged).
- Planning document ASYNC.md added (vision, goals, milestones for XenonDeviceAsync & related classes).
- No behavioral changes to existing synchronous code paths in this initial 2.0.0 tag.
- Future minor releases (2.1.x+) will add new async classes and examples without removing sync support.
Compatibility:
- Existing imports and synchronous usage continue to work (API surface of 1.x retained).
- New async classes will live alongside current modules (no name collisions) and require explicit opt‑in.
- Officially removed Python 2.7 support.
Migration Guidance:
- You can adopt async incrementally—no action required if you stay with sync API.
- When async classes land, prefer await device.status_async() patterns in event loops for concurrency gains.
See ASYNC.md for roadmap details.
Re: #645 #646
That looks like a very long way to go. 😓
The #645 v.2.0.0-async branch works well for me with my small patches from #646. I've been running it in my real environment for over 8000 minutes with around 100 devices connected. There have been no issues so far with resource leaks, deadlocks, or delays—just an instant response without threading, and so on. So rather than discarding the entire branch, I think it would be better if we could reuse its logic where possible.
Yes, but I think it can be fast. And yes, @3735943886 - absolutely! Your code would be a good place to start.
In the #646 branch have you been using the direct Async classes or the wrappers?
Would you be willing to move those *Async classes over to this branch?
As a first step, I've committed XenonDeviceAsync as a foundational class for upcoming v2-async support, with minimal socket patches. It seems quite stable so far, likely because the original XenonDevice class was so robust. My only concern is whether my revised socket communication code works without issue.
In the #646 branch have you been using the direct Async classes or the wrappers?
Would you be willing to move those *Async classes over to this branch?
I'm currently running a DeviceAsync-based approach for 24/7 asynchronous multi-device monitoring. You can view the code I'm using at this link: https://github.com/3735943886/tuya2mqtt/blob/async-beta/core/tuya2mqtt.py
I've also tested some synchronous v1 examples and shared my feedback in the comments on PR #646.
I believe the major issue in PR #645 was that XenonDeviceAsync robustness was compromised when an AI refactored it. This caused a cascade of errors that propagated to Device and then to BulbDevice (and the other device-specific classes). After replacing XenonDeviceAsync in PR #646, the approach using AsyncWrapper seemed quite promising. (At least for me, there were no issues.) Also, as you can see from the last comment on PR #645, callback support for the sync API was somewhat feasible.
However, it's hard to say whether the AsyncWrapper is the ultimate endpoint. Then again, simply copying and pasting the Device class and just adding async/await, as was done initially with the DeviceAsync class, isn't fancy. So, what should we do with the DeviceAsync class for the next step? It seems we need a consensus here.
As a first step, I've committed XenonDeviceAsync as a foundational class for upcoming v2-async support, with minimal socket patches
Perfect! Are you able to push to v2-async?
I may have misunderstood the discussion in #645 but I came away thinking (and agreeing) that we want to keep the existing sync code intact. The attempt now will be to create an extension to the library that is async. It looks like you closed and deleted #646 - but likely we could start with some of your fixes there. My proposal is to start with copying in:
Classes:
- XenonDeviceAsync
- DeviceAsync
But I'm also wondering, do we need both? That inheritance paradigm came from pytuya and I often debated on why and where it made sense to put new functions.
Perhaps we just use XenonDeviceAsync and rename it as DeviceAsync and fold in the "Device" functions into that same class. Then we derive Outlet, Bulb, Cover, etc., from there.
+------------------------------+            +---------------------------+
| DeviceAsync (base)           |            | MessageHelper (shared)    |
| - state machine              |<--calls--> | pack/unpack (sync funcs)  |
| - connection supervisor      |            | crypto helpers            |
| - protocol v3.1..v3.5        |            +---------------------------+
| - send queue (asyncio.Queue) |
| - recv task (reader loop)    |            +---------------------------+
| - handshake coroutine        |<--uses---->| Crypto (AESCipher)        |
+--------------+---------------+            +---------------------------+
               | derives
    +----------+-----------+
    | Async Device Mixins  |
    | (Outlet/Bulb/etc.)   |
    +----------------------+
No sacred cows here. Love it, hate it? Cast your ideas out. 😁
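In code, the diagram above might map to something like the following sketch (stubbed methods and illustrative names only, not the committed API): one DeviceAsync base owning the connection machinery, with device-type classes deriving directly from it instead of going through a separate XenonDevice layer.

```python
import asyncio

class DeviceAsync:
    """Base class: would own the state machine, send queue, recv loop."""
    def __init__(self, dev_id):
        self.id = dev_id
        self._send_q = asyncio.Queue()  # outgoing payload queue (stub)

    async def status(self):
        # the real class would send a DP_QUERY payload and await the reply
        return {"dps": {}}

class BulbDevice(DeviceAsync):
    """Device-type class adds only type-specific helpers."""
    async def set_value(self, dp, value):
        # the real class would send a CONTROL payload; stubbed here
        return {"dps": {str(dp): value}}

    async def turn_on(self):
        return await self.set_value(20, True)

async def demo():
    bulb = BulbDevice("bf1234")
    return await bulb.turn_on()

result = asyncio.run(demo())
```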
My stance was to keep XenonDeviceAsync as close as possible to XenonDevice, which means preserving the core class's legacy code intact (not the whole sync code). I was not opposed to the AsyncWrapper and will, of course, defer to the opinions of the long-time contributors. However, I do think the AsyncWrapper may have the advantage of providing a non-blocking status callback for those who prefer a sync style without a running event loop.
Classes:
- XenonDeviceAsync
- DeviceAsync
But I'm also wondering, do we need both? That inheritance paradigm came from pytuya and I often debated on why and where it made sense to put new functions.
Got it. I'll create the DeviceAsync class based on XenonDeviceAsync and integrate the methods from the existing Device class. It'll be done shortly.
I may have misunderstood the discussion in #645 but I came away thinking (and agreeing) that we want to keep the existing sync code intact.
I wanted to temporarily keep it untouched for the first couple of releases until the async version has proven itself in the real-world, and then replace it with a wrapper later.
Perhaps we just use XenonDeviceAsync and rename it as DeviceAsync and fold in the "Device" functions into that same class. Then we derive Outlet, Bulb, Cover, etc., from there.
Sounds good to me. I think everything already derives from Device anyway, I don't remember anything using XenonDevice directly.
Success on getting true async classes and asyncio functions in place would mean that scanner.py can be rewritten to use that class to accomplish the same performance. It has always bugged me that scanner.py is essentially a completely separate algorithm from the library and how most people would use the library.
Yes, I planned on rewriting the scanner to use the new API once said API has stabilized. This is what, the 4th async PR? I didn't want to rewrite it while things were still changing so much. That said, the force-scan requires brute forcing both the device version and the local key, so it is always going to be a separate algorithm compared to code which already knows both.
Ok, I think I have my callback idea mostly fleshed out. Please note that this does not prevent you from sending/receiving as you've always done, this callback method is an option in addition to that.
import asyncio
import tinytuya

async def data_handler( device, data, tuya_message ):
    # do something with the received data here
    print( 'in data handler', device.id, data )
    print( 'device list', devices )

async def cmd_handler( device, tuya_message ):
    # received a payload-less TuyaMessage, you can check the retcode here if you want
    pass

async def connected( device, error ):
    # connection to device established, or failed
    if not error:
        await device.status()

devices = {}

async def main():
    async with asyncio.TaskGroup() as tg:
        # start up a background scan job in case we have multiple devices using auto-ip
        scanner = tinytuya.scanner.background_scan()
        scanner_task = tg.create_task( scanner.run_task() )
        for devdata in device_list:
            d = tinytuya.DeviceAsync( devdata )
            d.register_data_handler( data_handler )
            d.register_command_handler( cmd_handler )
            d.register_connect_handler( connected )
            d.register_scanner( scanner )
            task = tg.create_task( d.run_task() )
            devices[d.id] = (d, task)
        # The await is implicit when the context manager exits.

asyncio.run( main() )
Of note is the new scanner interface. The problem with the current scanner is it does not handle multiple devices with auto-ip well. You cannot simply fire up multiple instances (i.e. 1 for each device) because only 1 UDP receiver can receive broadcast packets at a time. Kicking off a single thread and then having each device register the ID of interest fixes that.
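The "register the ID of interest" idea could be sketched like this (class and method names here are hypothetical, not the committed interface): a single UDP receive loop parses every broadcast and resolves a future for whichever device registered that ID, since only one listener can receive the broadcast packets at a time.

```python
import asyncio

class BackgroundScanner:
    def __init__(self):
        self._waiters = {}  # device id -> Future resolving to its IP

    def register(self, dev_id):
        # each device using auto-ip registers the ID it cares about
        fut = asyncio.get_running_loop().create_future()
        self._waiters[dev_id] = fut
        return fut

    def _on_broadcast(self, payload):
        # called by the single UDP receive task for every broadcast seen
        fut = self._waiters.pop(payload["gwId"], None)
        if fut and not fut.done():
            fut.set_result(payload["ip"])

async def demo():
    scanner = BackgroundScanner()
    fut = scanner.register("bf1234")
    # simulate the receive loop seeing a broadcast from that device
    scanner._on_broadcast({"gwId": "bf1234", "ip": "192.168.1.20"})
    return await fut

ip = asyncio.run(demo())
```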
Can someone tell me where that @classmethod create(...) is supposed to be called? I have yet to see it actually get called from anywhere.
Also, I still plan on deleting all references to nowait and changing all functions to either send, or receive, but never both. The _send_receive() function is a hot mess and doing this will allow it to be cleanly broken up into 2 functions. Hopefully this will also help remind people that device communication is asynchronous and there are gotchas (i.e. calling data = d.status(nowait=False) may not actually return the status if the device slips an asynchronous DP update in before the status() result).
Can someone tell me where that @classmethod create(...) is supposed to be called? I have yet to see it actually get called from anywhere.
Since the __init__ method is synchronous, it can't perform the find_device and device_info I/O operations that the original XenonDevice class relied on. While using an async with block is the preferred way to handle initialization for asynchronous classes, I believe providing a separate @classmethod create(...) is a reasonable design choice for users who want to initialize the device without using the async with syntax.
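The generic shape of that pattern (stubbed I/O, hypothetical attribute names, not the committed implementation): __init__ stays synchronous, and the awaitable work the old class did at construction time moves into an initialize() coroutine that both create() and the async-with form call.

```python
import asyncio

class DeviceAsync:
    def __init__(self, dev_id):
        self.id = dev_id
        self.address = None  # filled in by initialize()

    async def initialize(self):
        # stands in for the async find_device()/device_info() lookups
        await asyncio.sleep(0)
        self.address = "192.168.1.20"
        return self

    @classmethod
    async def create(cls, dev_id):
        # factory for users who don't want the async-with syntax
        return await cls(dev_id).initialize()

    async def __aenter__(self):
        return await self.initialize()

    async def __aexit__(self, *exc):
        pass  # the real class would close sockets here

async def demo():
    d = await DeviceAsync.create("bf1234")
    return d.address

addr = asyncio.run(demo())
```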
I believe providing a separate @classmethod create(...) is a reasonable design choice for users who want to initialize the device without using the async with syntax.
But where/who is supposed to call it? I am not using async with in the callback example I posted above and ended up needing to call initialize() myself since create() was never called.
The @classmethod create(...) method is intended to be called by end users as an alternative to the async with context manager for initialization.
The previous README in PR #645 documented it, but it's not available in the current branch. We'd eventually need to restore the detailed documentation.
# Recommended: Using async context manager
async with tinytuya.DeviceAsync(...) as device:
    ...  # Use the device inside this block
    # Resources are automatically cleaned up when the block is exited.

# Alternative: Using the factory method
device = await tinytuya.DeviceAsync.create(...)
try:
    ...  # Use the device
finally:
    await device.close()  # You must manually call close() to release resources.
The previous README in PR #645 documented it
Ah, I never noticed that.
Anyway, I hated it, so I rewrote things to eliminate it.
The previous README in PR #645 documented it
Ah, I never noticed that.
Anyway, I hated it, so I rewrote things to eliminate it.
Great, much better. The create/initialize methods were based on my uncertainty about moving I/O operations out of the __init__ method. I tried to run those operations immediately after initialization whenever possible.
In this particular case I think the old behavior was actually a bug - why should a device being offline throw an exception when the program starts, but dropping offline later simply causes it to re-scan?
This is what, the 4th async PR?
Ha! Touché! I really liked the idea of preserving the sync classes, and reverting them in the other PRs would have been messy. I also like the idea of cleaning up the inheritance to something that makes more sense.
The previous README in PR https://github.com/jasonacox/tinytuya/pull/645 documented it, but it's not available in the current branch
Easy to restore and I'm happy to help with the docs when we get to that. Also, I really want to get good test coverage with regression. I pushed one out there (test-devices.py) that uses DeviceAsync. There is a bug in DeviceAsync.py for find_device() right now for Auto-IP discovery, but didn't want to step on any changes you may be doing on that already.
Love what I see so far. :)
PS - Code Coverage Report: https://app.codecov.io/gh/jasonacox/tinytuya/tree/v2-async/tinytuya
Welcome to Codecov :tada:
Once you merge this PR into your default branch, you're all set! Codecov will compare coverage reports and display results in all future pull requests.
Thanks for integrating Codecov - We've got you covered :open_umbrella:
I've renamed example files:
- async_send_and_receive.py -> non_blocking_send_receive.py - Renamed to prevent future confusion with asyncio-based functionality.
- threading.py -> multi-threading.py - Renamed to avoid a naming conflict with the built-in threading module, which could cause import issues.
Please feel free to re-commit or revert this if there's a better idea.
Is a callback really needed for an async base class? I think it’s sufficient to provide callbacks only for the sync API, since callbacks are not commonly used when the async/await pattern is available.
Is a callback really needed for an async base class?
I mean, kinda? Right now there's no way of getting the low-level TuyaMessage without them, though I guess I could add a new receive function which returns it (and the data) instead. Something like data, msg, err = d.receive_all(). While that would eliminate the data/command callbacks, using callbacks is the only way to receive connected/disconnected notifications (I forgot to add the disconnect callbacks originally). Both asyncio.start_server() and loop.create_server() (echo server example) use callbacks, so it's not like asyncio doesn't use them.
That makes sense. To be honest, I'm not yet familiar with the low-level TuyaMessage (I've mostly just used tinytuya as a library without a deep understanding of the Tuya protocol), but I think the connect/disconnect callbacks would be very useful.
I saw in the roadmap that asyncio support would be added under the tinytuya.asyncio namespace. This seems like a great approach to maintain backward compatibility for existing users. However, I'm curious about the long-term plan. If asyncio becomes the primary recommended way to use the library in the future, would the intention be to keep it within the submodule? Promoting it to the top-level namespace later on could, in turn, break backward compatibility for the early adopters of the asyncio implementation.
I was having issues where if you did something like call status from within the connect callback it was stepping on the toes of whatever caused it to connect in the first place, so I basically wrapped _send_receive() in a lock to avoid that. I'd still consider it bad form to do that in the connect callback, but at least it works now.
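A minimal sketch of that fix (illustrative names; per the comment above, the real change wraps _send_receive() in a lock): serializing the request/response exchange means a status() call issued from inside a callback waits its turn instead of interleaving with the exchange that triggered the callback.

```python
import asyncio

class Exchanger:
    def __init__(self):
        self._lock = asyncio.Lock()
        self.log = []

    async def send_receive(self, cmd):
        async with self._lock:        # one full exchange at a time
            self.log.append(("send", cmd))
            await asyncio.sleep(0)    # stand-in for the network round trip
            self.log.append(("recv", cmd))
            return cmd

async def demo():
    ex = Exchanger()
    # two concurrent exchanges; without the lock the send/recv pairs
    # could interleave on the shared connection
    await asyncio.gather(ex.send_receive("status"), ex.send_receive("dp_query"))
    return ex.log

log = asyncio.run(demo())
```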
I also got the start of a new scanner working, though it's not finished enough to commit yet. Instead of the 1000+ line monolith that the old one was, I'm breaking the actual network scanning part out into its own module separate from the part that polls the device status, and the force-scanner is also its own module. So far it works great for broadcast device discovery from within what used to be find_device() but everything else still needs to be written.
Ok, I did it. nowait has been removed. All functions either send, or receive, but never both at the same time.
Devices are inherently asynchronous. If they decide to send you an update right before you call status() then the result in data = d.status() will be really confusing:
data = d.receive() # returns with nothing received
"""device now decides to send you an asynchronous update for DP 8 here"""
data = d.status() # this result will only contain DP 8: data = { "dps": {"8": "..."} }
data = d.receive() # this will return the result of the above .status()
By separating commands from receives I'm hoping people are more likely to realize that the received data may not necessarily be a result of the sent command.
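One way for callers to cope with that (an illustrative helper, not library code): treat every received "dps" dict as a partial update and merge it into a cached copy of the device state, so it doesn't matter whether a given message was a command reply or an unsolicited DP push.

```python
def merge_dps(state, received):
    # merge a partial "dps" update into the cached device state
    if received and "dps" in received:
        state.setdefault("dps", {}).update(received["dps"])
    return state

state = {"dps": {"1": True, "8": "old"}}
merge_dps(state, {"dps": {"8": "new"}})   # unsolicited async DP update
merge_dps(state, {"dps": {"1": False}})   # reply to our own command
```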
Nice. Is there a way we can marry the seqno (TuyaMessage(seqno=X)) to match request to response? If so, perhaps the logic could be something like:
seqno = d.send(DP_QUERY)
response = d.receive(filter=seqno, timeout=5)
Only v3.3/v3.4 devices allow that. v3.5 devices use a global seqno that persists across connections and never matches what you sent and only resets to 0 on power cycle. Ugh.
Ah, good point. But maybe that could still work. My thought would be that "filter" would mean to not send back anything < seqno, to prevent getting an old response. As long as they increment with each payload it may work. If it is a 3.1 device, just ignore? :)
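The "drop anything older" filter could be as simple as this hypothetical helper (not library API): pop queued messages until one carries a seqno at least as large as the one send() returned, so a stale response is never mistaken for the new reply.

```python
def receive_filtered(inbox, min_seqno):
    """Return the first queued message with seqno >= min_seqno, else None."""
    while inbox:
        msg = inbox.pop(0)  # oldest first; older replies are discarded
        if msg["seqno"] >= min_seqno:
            return msg
    return None

# a stale reply (seqno 3) was queued before our request went out as seqno 5
inbox = [{"seqno": 3, "dps": {"1": False}}, {"seqno": 5, "dps": {"1": True}}]
msg = receive_filtered(inbox, 5)
```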
I don't know what v3.1 devices do offhand, I'd have to check.
Sadly v3.5 devices also increment when asynchronous updates are sent, so they're always incrementing. I.e.
data = d.receive() # previous update, received seqno=4181
"""device now decides to send you an asynchronous update for DP 8 here, received seqno=4182"""
data = d.status() # sent seqno=5, returns above asynchronous update with seqno=4182
data = d.receive() # this will return the result of the above .status(), received seqno=4183
Non-persistent sockets are now held open for 100ms to give the user a chance to .receive() after a send since those are separate operations now. The socket is closed and re-opened if 2 sends are sent back-to-back to keep the behavior similar to the old non-persistent mode.