xdg_activation_token_v1::destroy should not invalidate tokens
The documentation for the `xdg_activation_token_v1::destroy` request states the following:

> Notify the compositor that the xdg_activation_token_v1 object will no longer be used. The received token stays valid.
However, in Smithay's `destroyed` implementation the token is actually revoked.
The simplest solution here would be to just remove the `destroyed` method, which I was about to do, but then I noticed that there are actually methods to retain tokens that users are supposed to call to prevent leakage. Considering that users don't really have a good way to manage pending tokens, I don't think this API is as simple as it could be.
So since `destroyed` will no longer clear tokens, I think Smithay should also switch the `HashMap` to a `Vec` and limit it to a fixed length (255 maybe?). However, I wanted to make sure this was actually desired before sending a patch.
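To illustrate the proposed bounded store: a minimal sketch of a `Vec`-backed token list with a fixed cap, evicting the oldest entry once the limit is reached. The type and field names here are hypothetical, not Smithay's actual types.

```rust
use std::time::Instant;

// Hypothetical token entry; Smithay's real token data differs.
struct TokenEntry {
    token: String,
    issued: Instant,
}

struct TokenStore {
    tokens: Vec<TokenEntry>,
    cap: usize, // e.g. 255
}

impl TokenStore {
    fn new(cap: usize) -> Self {
        Self { tokens: Vec::new(), cap }
    }

    // Insert a new token; when full, evict the oldest entry. Entries are
    // pushed in arrival order, so the front of the Vec is the oldest.
    fn insert(&mut self, token: String) {
        if self.tokens.len() >= self.cap {
            self.tokens.remove(0);
        }
        self.tokens.push(TokenEntry {
            token,
            issued: Instant::now(),
        });
    }
}
```

A `Vec` keeps arrival order for free, which is what makes the "evict oldest" policy a one-liner here.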
> I noticed that there are actually methods to retain tokens that users are supposed to call to prevent leakage. Considering that users don't really have a good way to manage pending tokens, I don't think this API is as simple as it could be.
My idea for this API was that people would usually just retain tokens based on the timestamp provided in the token data, which you can inspect while retaining. Perhaps a calloop helper like `invalidate_token_after(duration)` could make that easier. One would call it in a new-token handler (which we don't have for some reason at the moment).
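The retain-by-timestamp idea could look roughly like this: a sketch of the suggested helper that drops every pending token older than a given age. The `TokenData` struct and the function name are assumptions for illustration, not Smithay's or calloop's real API.

```rust
use std::time::{Duration, Instant};

// Hypothetical token data carrying the issue timestamp; Smithay's real
// token data type differs.
struct TokenData {
    timestamp: Instant,
}

// Sketch of the proposed helper: keep only tokens whose age is within
// `max_age`, dropping the rest.
fn invalidate_tokens_older_than(tokens: &mut Vec<(String, TokenData)>, max_age: Duration) {
    let now = Instant::now();
    tokens.retain(|(_, data)| now.duration_since(data.timestamp) <= max_age);
}
```

Hooked up to a timer (e.g. a calloop timer source), this would give compositors automatic expiry without manual bookkeeping per token.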
That being said, hard-limiting the number of tokens sounds reasonable as well, no matter if it's an array, a `Vec`, or a map.
> My idea for this API was that people would usually just retain tokens based on the timestamp provided in the token data, which you can inspect while retaining. Perhaps a calloop helper like `invalidate_token_after(duration)` could make that easier. One would call it in a new-token handler (which we don't have for some reason at the moment).
I don't think there's anything wrong with that, but this protocol can basically be supported by just hooking up a method or two and you're good to go. Introducing "gotchas" like having to manually manage the tokens just creates potential for people to do it wrong.
I think having these APIs is great, but the "default" behavior should still protect compositors from leaking resources.
> no matter if it's an array, a `Vec`, or a map.
I was just saying `Vec` because that would make it easier to track the order in which the tokens arrived. And search performance isn't really an issue, since `rfind` should be extremely quick in 99.9% of cases.
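To show what that lookup would look like: since tokens are pushed in arrival order, `rfind` scans from the back and so checks the most recently issued tokens first, which is the common case. The `(token, serial)` tuple is illustrative, not Smithay's actual layout.

```rust
// Look up the most recent entry matching a token string by scanning
// the Vec from the back with rfind.
fn find_token(tokens: &[(String, u32)], needle: &str) -> Option<u32> {
    tokens.iter().rfind(|(t, _)| t == needle).map(|(_, serial)| *serial)
}
```

Because activation requests usually arrive shortly after the token is issued, the match is typically found within the first few entries scanned from the back.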