
queue.Full raised when using timer to read Scale value

gabycperezdias opened this issue 7 years ago • 7 comments

Hi,

I am getting this error:

File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/tkinter/__init__.py", line 3010, in get
    value = self.tk.call(self._w, 'get')
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/mttkinter/mtTkinter.py", line 136, in __call__
    self._tk._event_queue.put((self._attr, args, kwargs, response_queue), True, 1)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/queue.py", line 141, in put
    raise Full
queue.Full

I am getting this error when I try to get the position of a Scale. Basically, every second, I need to get the position of the Scale.

def update_lbl(self):
    if self.progress:
        p = self.progress.get()

def ext_timer(self):
    while self.keep_doing:
        time.sleep(1)
        self.update_lbl()

Both methods are implemented inside the same class (the tkinter class), but ext_timer is the target of the thread. I am new to this module... I'm just using it because 'after' is not accurate enough, although sleep is not that precise either... still...

I would like to know why the queue has a max size of 1, and whether this could be causing my issue. Maybe because I check if progress exists and then do a get? Or because I call the function and do a get? Maybe because of the frequency?

Will frequency be an issue? I might need to call the main thread a lot within 1 second...

Using Python 3.6 on macOS High Sierra.

Thank you

gabycperezdias avatar Jun 21 '18 03:06 gabycperezdias

Question

It has been a while since I spent time with this code, so I'm a bit 'out of it', but I will do my best to answer your question.

As I understand it, the queue has a maximum size of 1 because the thread controlling the Tk instance should be able to keep up with the threads sending the requests (that's the whole idea of off-loading CPU-time-intensive tasks to different threads, after all). If you manage to fill the queue, the only real reason I can think of is that you are inadvertently blocking the <MainThread> with some other task.
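The mechanism behind the traceback can be reproduced with a plain queue.Queue. This is a standalone sketch of the behaviour, not mtTkinter's actual code: a put with a timeout on a full size-1 queue raises queue.Full after the timeout if no consumer (the mainloop, in mtTkinter's case) drains it.

```python
import queue

# A size-1 queue, like mtTkinter's internal event queue.
event_queue = queue.Queue(maxsize=1)

# The first put succeeds immediately: the single slot is free.
event_queue.put(("get", (), {}), True, 1)

# With no consumer draining the queue, a second put blocks for the
# 1-second timeout and then raises queue.Full -- the same exception
# shown in the traceback above.
try:
    event_queue.put(("get", (), {}), True, 1)
except queue.Full:
    print("queue.Full: no consumer drained the queue in time")
```

In the real application the mainloop normally empties the queue between puts; the exception only appears when the producer outpaces it.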

mtTkinter depends on the Tk mainloop running in order to retrieve its tasks from the Queue. If you have some function call that blocks the mainloop from running for a longer time than it takes the other thread to prepare the next request to put in the queue, issues will arise, but that is by design (if you make requests faster than the loop can keep up, something is wrong with your design).

There is a second option, and that is that the task you are trying to execute in the mainloop actually takes longer than it takes you to insert a new task into the queue, but if it is just a simple value get, that should not be the case.

Accuracy

The accuracy of the after function depends on how much you use the mainloop. If you don't run many other tasks in it, it can be quite accurate. The time.sleep function is hardly any more accurate. The after function really seems more suitable for this rather simple task, and you don't have to worry about deadlocks (which might be an issue in your code right now, even with the GIL).

If you need real accuracy (down to a millisecond or better), then I'm afraid Python itself is just not suitable for the task. Python is not deterministic at all, its speed varies greatly across platforms, and because of the overhead of the Python interpreter you should not count on consistently hitting that single second within 1% variation. Even a slightly higher IO load or more memory usage than normal can influence the speed of Python and make the time.sleep function more inaccurate.
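The overshoot of time.sleep is easy to measure. A small sketch (the helper name is mine, not from any library): time.sleep guarantees only that it sleeps at least the requested interval, and the measured excess varies with system load.

```python
import time

def measure_sleep_error(interval, repeats=5):
    """Return the worst-case overshoot (in seconds) of time.sleep
    over a number of repeats, measured with a monotonic clock."""
    worst = 0.0
    for _ in range(repeats):
        start = time.perf_counter()
        time.sleep(interval)
        elapsed = time.perf_counter() - start
        # sleep never returns early, so elapsed - interval >= 0
        worst = max(worst, elapsed - interval)
    return worst

overshoot = measure_sleep_error(0.05)
print(f"worst overshoot over 5 runs: {overshoot * 1000:.3f} ms")
```

On a loaded system the overshoot can easily reach several milliseconds, which is exactly the kind of jitter described above.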

macOS

If you continue having issues, consider the possibility that this is an issue with the macOS version of the Tkinter/Tcl interpreter. But I'd need something reproducible for that, preferably something that can be run on Travis-CI (as they offer macOS environments and I don't have any Apple devices at all).

I hope this answered your question. If it does not, or if you have any more questions, do not hesitate to comment. If it does, then I'd appreciate it if you'd close the issue to let me know and to keep the repository tidy.

RedFantom avatar Jun 21 '18 14:06 RedFantom

About the max size queue, it makes sense... about the accuracy I really need that, so, I think python will not provide what I need...

Thanks for the time though.. I will close the issue

gabycperezdias avatar Jun 21 '18 14:06 gabycperezdias

I don't know your exact application, so not everything I say in this comment may apply, but I would like to pose a critical question in an attempt to help you further. It's a rhetorical question, so you don't have to answer it, but maybe thinking about it can help you.

Why would you need the timing accuracy? In a UI-based application, a real human is the one controlling the UI. Most screens don't go beyond 60 Hz, and in normal UI applications most users won't notice a difference above 30 FPS. That means that doing something in 1.000 seconds instead of 1.001 seconds doesn't produce any noticeable difference, so the timing should not be for the users' benefit.

Then a second option, perhaps, might be that you are using some sort of communication channel instead, and you need the value to be ready for sending at a particular point in time. If that is the case, then there are far better ways of retrieving the value in the first place. You could use a tk.IntVar, for example, to retrieve the value upon preparing the data to be sent, without the need for complicated (and fragile) timing schemes.

Now, perhaps you are controlling a real-time process, and that's what you need the accuracy for. However, the second you attach a UI to a real-time process, you introduce additional factors. Maybe the user is moving the slider at the moment you read it, or the mechanical hard drive hits a busy moment and you have to wait for 23ms until the OS is finished doing stuff. That not only applies to Python and Tkinter, but to any UI-based program.

Closing words

What I'm really saying is: the second you attach a UI to something, you lose timing precision. If you need that kind of timing precision, then maybe you should consider not using a UI at all, or, even more so, whether you actually need that precision. Maybe there are better ways of doing what you want (like queues, or tk.IntVar, or passing arguments around, or using more threads, or lambda functions, or...)

Like I said before, this might not apply to your particular case as I don't know the details, but if it does, consider critically evaluating why you need the timing accuracy in the first place, and whether there are better ways to achieve what you need.

RedFantom avatar Jun 24 '18 10:06 RedFantom

@RedFantom, we get hit with the queue.Full exception much more often than we'd like. You mention:

if you make requests faster than the loop can keep up, something is wrong with your design.

However, isn't the queue concept what provides value to mttkinter (over just using tkinter) and makes it thread safe? We have many threads running and many tkinter windows open producing operations at a very fast pace. When this happens, we get the queue.Full exception because mttkinter can't keep up. So, I guess my underlying question is what's the drawback behind bumping the max queue size to be something higher? Or just not providing a max size?

Or another suggestion, why not make the max queue size configurable? Then you can keep the default behavior as-is, but allow others to bump it up if needed. Happy to do the PR for it if you agree.

troygrosfield avatar Aug 18 '21 15:08 troygrosfield

@troygrosfield My apologies for the late response.

Let me start with the disclaimer that I have not used mtTkinter in years, as it's no longer necessary when using Python 3. If there's a practical application of what you're suggesting that I'm not aware of, one that invalidates the reasoning I set out below, please let me know, maybe even with a minimal, reproducible example.

Why I think the queue.Queue default size should remain 1

mtTkinter was written to work around bugs that exist in Python 2. It is, however, subject to the very same limitation that applies to all threads in Python: the Global Interpreter Lock (GIL). The GIL ensures that only a single thread can execute Python code at any given time. In practical terms, this means that while the interpreter is executing Tcl code, other threads are allowed to run Python code.

This does not mean, however, that increasing the size of the queue within mtTkinter is without issue. Increasing the queue size implicitly allows applications that use threads to generate more UI events than can be handled by the application. For the handling of events, the GIL must be acquired by the main thread every mt_check_period milliseconds. If additional events must be queued before the GIL can be acquired and the events processed, events will start stacking up rapidly.

There are two options. The first is to execute all queued events, possibly allowing the execution time to extend beyond the mt_check_period (which is already possible on a slow system, but preventing unresponsiveness on a slow system is difficult anyway); this would require processing even more events the next time events are executed, assuming the threads keep generating them, and inevitably causes the UI to become unresponsive. The second option is to keep executing only a single event per period, but that clearly causes even bigger issues. This is why your first course of action should be to decrease mt_check_period.

The only way I can see the first option yielding the desired result is if threads generate multiple UI events in short bursts, with plenty of time in between to allow for handling those events. This is a use case I can understand, but it does come with the mentioned caveats.
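The two draining strategies above can be modelled with a plain queue.Queue. This is a schematic sketch of the trade-off, not mtTkinter's actual implementation; the function names are mine.

```python
import queue

def drain_all(q):
    """Option 1: execute every queued event in one check period.
    Keeps the queue empty, but a burst can make the period overrun."""
    handled = 0
    while True:
        try:
            event = q.get_nowait()
        except queue.Empty:
            return handled
        event()  # run the queued callable in the "main thread"
        handled += 1

def drain_one(q):
    """Option 2: execute at most one event per check period.
    Bounds the work per period, but lets a backlog build up."""
    try:
        q.get_nowait()()
        return 1
    except queue.Empty:
        return 0

# Simulate a burst of 5 queued UI events (as no-op callables).
q = queue.Queue()
for _ in range(5):
    q.put(lambda: None)

print(drain_all(q))  # the whole burst is handled in a single period
```

With drain_one, the same burst would take five check periods to clear, which is exactly the backlog problem described above.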

So really, the issue I think you're running into is that either:

  • Tkinter is too slow to do what you need it to, or
  • Your application is generating so many events it is impossible to maintain a responsive UI.

If the latter is the case and you wish to ignore the UI responsiveness, then I suppose increasing the size of the queue could work. But really the queue.Full you're getting is basically Tkinter telling you that it can't keep up with what you're trying to do. If that is something you can accept, then I would not mind getting a PR.

TL;DR: Tkinter is not very fast and if you're running into queue.Full, you might simply be generating too many UI events. If you want to open a PR to make the change, I'll welcome it, but with the mentioned warnings.

RedFantom avatar Sep 08 '21 10:09 RedFantom

@RedFantom, I appreciate the lengthy and detailed response! You mention:

"mtTkinter is no longer necessary when using Python 3"

We're using python 3.7, 3.8, and 3.9 and we experience immediate, hard crashes with no stacktrace when we don't use mtTkinter in Linux environments at times. When mttkinter is installed and used, we don't experience this same type of crash.

Why Bump up the Max Queue Size?

The idea behind bumping up the max queue size is so mttkinter can handle short bursts of rapid activity. If the queue grows to 10-20 items at most, I don't think we'll see enough lag for it to really become an issue. Even if we see moderate slowness for a short period, that's more acceptable than seeing the queue.Full exception. If this were constant activity, then yes, I would agree that the system would never be able to keep up and would, as you mentioned, just feel frozen all the time.
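The burst argument can be illustrated with plain queue.Queue objects. This is a simplified model (non-blocking puts; the helper name is mine): the real mtTkinter uses a blocking put with a timeout, but a burst that outpaces the consumer behaves much the same way.

```python
import queue

def enqueue_burst(q, n):
    """Try to enqueue n events without blocking; return how many fit."""
    accepted = 0
    for i in range(n):
        try:
            q.put_nowait(i)
            accepted += 1
        except queue.Full:
            break
    return accepted

# A size-1 queue rejects a burst almost entirely...
print(enqueue_burst(queue.Queue(maxsize=1), 15))   # 1
# ...while a size-20 queue absorbs it, at the cost of the consumer
# having to work through a backlog afterwards.
print(enqueue_burst(queue.Queue(maxsize=20), 15))  # 15
```

The larger queue trades a short stretch of UI lag (while the backlog drains) for not raising queue.Full during the burst.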

Our Workaround

We're still in testing, but our short term workaround was to bump the queue maxsize up to handle the short bursts of activity:

from mttkinter import mtTkinter as tkinter

root = tkinter.Tk()
root._event_queue.maxsize = 100

This also approximates making the queue size configurable (via a future PR). Once we've let this change sit for a bit and proven that it works in our case, I'll work on a PR for the configurable queue size.

Thanks for making useful software available to the open source community!

troygrosfield avatar Sep 08 '21 14:09 troygrosfield

@troygrosfield Thank you for your reply!

We're using python 3.7, 3.8, and 3.9 and we experience immediate, hard crashes with no stacktrace when we don't use mtTkinter in Linux environments at times. When mttkinter is installed and used, we don't experience this same type of crash.

This part of your reply is of particular interest to me. Ever since I left Python 2, I have never encountered issues with Tkinter and Threads, but this makes me reconsider whether all the research available in #5 is complete. Unfortunately, I do not have sufficient time to investigate this problem in more detail, but if you are experiencing bugs then I do recommend opening a bug report in CPython if you can make the problem reproducible, as it is definitely not intended behaviour.

If you could test out this potential solution in your case and it works, I will be open to a PR for it 👍🏻 . Thank you for your work on this, you might be on to a serious issue and if mtTkinter can be a workaround, I would like for it to be as easy as possible to use.

RedFantom avatar Sep 08 '21 21:09 RedFantom