Fix direct fade not stopping running coroutine
The direct fade class does not stop running coroutines when a light update that can be completed in a single set_brightness_and_fade
call is executed.
This causes race conditions, resulting in unpredictable and wrong behavior.
I've only tested it on my own machine, and it now behaves exactly as the mpf monitor suggests (which was not true previously).
Thanks for noticing and debugging this! Is there a way to unit/integration test this somehow? Might be tricky. I added one comment. Can you add that? Afterwards, we can safely merge this.
Is there a way to unit/integration test this somehow?
Theoretically you can start a show with fading and then one without fading right afterwards (maybe even with a higher priority). That should make the internal brightness of the interface differ from the one expected by mpf (the one shown in mpf monitor) if a fading coroutine is still running. Doing a double fade and a double non-fade would also check for double cancels.
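The suggested test can be sketched outside MPF's real test harness with a small stand-in class (FakeLight, fade_to, and set_instantly are hypothetical names, not MPF APIs): start a fade, immediately issue a single-call update, and assert the cancelled fade coroutine never touches the brightness again.

```python
import asyncio

class FakeLight:
    """Hypothetical stand-in for the direct-fade light interface."""

    def __init__(self):
        self.task = None
        self.brightness = 0.0

    def fade_to(self, target, loop):
        # long fade handled by a coroutine
        if self.task:
            self.task.cancel()
            self.task = None
        self.task = loop.create_task(self._fade(target))

    def set_instantly(self, target):
        # single set_brightness_and_fade-style update: must stop any running fade
        if self.task:
            self.task.cancel()
            self.task = None
        self.brightness = target

    async def _fade(self, target):
        while abs(self.brightness - target) > 0.01:
            self.brightness += (target - self.brightness) * 0.1
            await asyncio.sleep(0.01)

async def test_fade_then_instant():
    light = FakeLight()
    light.fade_to(1.0, asyncio.get_running_loop())
    light.set_instantly(0.5)          # should cancel the fade
    await asyncio.sleep(0.05)         # the fade loop would have run by now
    assert light.brightness == 0.5    # unchanged by the cancelled fade

asyncio.run(test_fade_then_instant())
```

Without the cancel in set_instantly, the still-running fade loop would keep pushing the brightness toward 1.0, which is exactly the mismatch the mpf monitor would show.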
we might double cancel it
@@ -62,12 +62,14 @@ class LightPlatformDirectFade(LightPlatformInterface, metaclass=abc.ABCMeta):
"""Implement a light which can set fade and brightness directly."""
- __slots__ = ["loop", "task"]
+ __slots__ = ["loop", "task", "double", "enable"]
def __init__(self, number, loop: AbstractEventLoop) -> None:
"""Initialise light."""
super().__init__(number)
self.loop = loop
+ self.double = False
+ self.enable = False
self.task = None # type: Optional[asyncio.Task]
@abc.abstractmethod
@@ -94,7 +96,14 @@ class LightPlatformDirectFade(LightPlatformInterface, metaclass=abc.ABCMeta):
self.task.cancel()
self.task = self.loop.create_task(self._fade(start_brightness, start_time, target_brightness, target_time))
self.task.add_done_callback(Util.raise_exceptions)
+ self.double = False
+ self.enable = True
else:
+ if self.task:
+ self.task.cancel()
+ if self.double and self.enable:
+ print("\n\n\nDOUBLE\n\n\n\n")
+ self.double = True
self.set_brightness_and_fade(target_brightness, max(fade_ms, 0))
async def _fade(self, start_brightness, start_time, target_brightness, target_time):
@@ -113,6 +122,7 @@ class LightPlatformDirectFade(LightPlatformInterface, metaclass=abc.ABCMeta):
brightness = target_brightness
self.set_brightness_and_fade(brightness, max(fade_ms, 0))
if target_fade_ms <= max_fade_ms:
+ self.enable = False
return
await asyncio.sleep(interval)
Doing the check above fills my screen with a lot of "doubles", and it would be a bit strange for me not to have encountered this problem. However, what you're saying is probably true; I can go ahead and add a set to None tomorrow.
Doing the check above fills my screen with a lot of "doubles", and it would be a bit strange for me not to have encountered this problem. However, what you're saying might be true.
I just googled it; it looks like you can cancel a task multiple times, but you should not: https://stackoverflow.com/questions/39688070/asyncio-prevent-task-from-being-cancelled-twice. I did not find anything about done tasks. I think I have seen exceptions with non-coroutine futures. Let's just set the task to None and be done with it ;-).
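The agreed fix can be reduced to a minimal sketch (DirectFadeSketch and its methods are illustrative names, not the real MPF class): after cancelling the fade task, drop the reference so a later update can never cancel the same task twice.

```python
import asyncio

class DirectFadeSketch:
    """Minimal sketch of the set-to-None pattern, not the real MPF class."""

    def __init__(self):
        self.task = None

    def start_fade(self, coro, loop):
        # stop any previous fade before scheduling a new one
        self.stop_fade()
        self.task = loop.create_task(coro)

    def stop_fade(self):
        if self.task:
            self.task.cancel()
            self.task = None  # forget the task so it cannot be cancelled twice

async def demo():
    light = DirectFadeSketch()
    light.start_fade(asyncio.sleep(10), asyncio.get_running_loop())
    light.stop_fade()
    light.stop_fade()  # harmless: the task reference was already cleared
    return light.task

result = asyncio.run(demo())
```

The second stop_fade is a no-op because self.task is already None, which is what makes a "double non-fade" update safe.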
Kudos, SonarCloud Quality Gate passed!
0 Bugs
0 Vulnerabilities
0 Security Hotspots
0 Code Smells
No Coverage information
0.0% Duplication
I've added the set to None. Checking for done should also happen on the task-creation side, so I would say that is an unrelated bug and should have its own issue/pull request. (Maybe this isn't the only place that should have this check?)
Here is a git diff if you want to add it yourself, but I can push my changes to this branch or create a new pull request if you want that.
I did some more testing with the diff I mentioned in my previous message, adding a check for self.task.done(). And it seems to not work. Not work as in I am having the same behavior I had before the fix (existing fade coroutines continue fading lights beyond a new non-coroutine light update).
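One plausible reason a done() check behaves unexpectedly, sketched below as an assumption rather than a confirmed diagnosis of the MPF code: task.cancel() only requests cancellation, so done() still returns False until the event loop actually delivers the CancelledError to the coroutine.

```python
import asyncio

async def fade():
    # stands in for a long-running fade coroutine
    await asyncio.sleep(10)

async def main():
    task = asyncio.get_running_loop().create_task(fade())
    await asyncio.sleep(0)        # let the task start and reach its await
    task.cancel()                 # only *requests* cancellation
    before = task.done()          # still False: the loop has not processed it
    try:
        await task                # give the loop a chance to finish cancelling
    except asyncio.CancelledError:
        pass
    after = task.done()           # True once the cancellation went through
    return before, after

before, after = asyncio.run(main())
```

So any code that checks done() right after calling cancel() will still see a "running" task, which would make such a check ineffective for deciding whether a fade is still active.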