p4p.client.thread.Context.put does not always work as expected when used inside a monitor callback
I would like to do a put to a PV from within a monitor callback, but I've run into an issue where it fails for certain configurations. The following code works as intended:
```python
from p4p.client.thread import Context
from p4p.nt import NTScalar
from p4p.server import Server
from p4p.server.thread import SharedPV

ctxt = Context('pva')

class MyHandler:
    def put(self, pv, op):
        print(f'Putting value {op.value()} to {pv}')
        pv.post(op.value())
        op.done()

def cb(V):
    print(f'Monitor callback with value {V}.')
    ctxt.put('DEV:RW:DOUBLE3', '{}')

pv1 = SharedPV(nt=NTScalar('i'), initial=0.0, handler=MyHandler())
pv2 = SharedPV(nt=NTScalar('i'), initial=0.0, handler=MyHandler())
pv3 = SharedPV(nt=NTScalar('i'), initial=0.0, handler=MyHandler())

S = Server(providers=[{
    "DEV:RW:DOUBLE1": pv1,
    "DEV:RW:DOUBLE2": pv2,
    "DEV:RW:DOUBLE3": pv3
}])
print("Started server")

sub = ctxt.monitor('DEV:RW:DOUBLE1', cb)
sub2 = ctxt.monitor('DEV:RW:DOUBLE2', cb)
```
So if I do, for example, a put to DEV:RW:DOUBLE1, the monitor callback triggers a put to DEV:RW:DOUBLE3. If, however, things are ordered differently, like this:
```python
from p4p.client.thread import Context
from p4p.nt import NTScalar
from p4p.server import Server
from p4p.server.thread import SharedPV

ctxt = Context('pva')

class MyHandler:
    def put(self, pv, op):
        print(f'Putting value {op.value()} to {pv}')
        pv.post(op.value())
        op.done()

def cb(V):
    print(f'Monitor callback with value {V}.')
    ctxt.put('DEV:RW:DOUBLE1', '{}')

pv1 = SharedPV(nt=NTScalar('i'), initial=0.0, handler=MyHandler())
pv2 = SharedPV(nt=NTScalar('i'), initial=0.0, handler=MyHandler())
pv3 = SharedPV(nt=NTScalar('i'), initial=0.0, handler=MyHandler())

S = Server(providers=[{
    "DEV:RW:DOUBLE1": pv1,
    "DEV:RW:DOUBLE2": pv2,
    "DEV:RW:DOUBLE3": pv3
}])
print("Started server")

sub = ctxt.monitor('DEV:RW:DOUBLE2', cb)
sub2 = ctxt.monitor('DEV:RW:DOUBLE3', cb)
```
it fails when trying to create the monitor subscriptions with the following error:
```
Error processing Subscription event for DEV:RW:DOUBLE2
Traceback (most recent call last):
  File "/home/user/.local/lib/python3.10/site-packages/p4p/client/thread.py", line 363, in put
    value, i = done.get(timeout=timeout)
  File "/home/user/.local/lib/python3.10/site-packages/p4p/client/queue.py", line 183, in get
    raise Empty
_queue.Empty

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/user/.local/lib/python3.10/site-packages/p4p/client/thread.py", line 121, in _handle
    self._cb(E)
  File "/p4p_for_isis/examples/simple_monitor_test.py", line 16, in cb
    ctxt.put('DEV:RW:DOUBLE1','{}')
  File "/home/user/.local/lib/python3.10/site-packages/p4p/client/thread.py", line 366, in put
    raise TimeoutError()
TimeoutError
Error processing Subscription event for DEV:RW:DOUBLE3
Traceback (most recent call last):
  File "/home/user/.local/lib/python3.10/site-packages/p4p/client/thread.py", line 363, in put
    value, i = done.get(timeout=timeout)
  File "/home/user/.local/lib/python3.10/site-packages/p4p/client/queue.py", line 183, in get
    raise Empty
_queue.Empty

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/user/.local/lib/python3.10/site-packages/p4p/client/thread.py", line 121, in _handle
    self._cb(E)
  File "/p4p_for_isis/examples/simple_monitor_test.py", line 16, in cb
    ctxt.put('DEV:RW:DOUBLE1','{}')
  File "/home/user/.local/lib/python3.10/site-packages/p4p/client/thread.py", line 366, in put
    raise TimeoutError()
TimeoutError
```
(NB: I added a couple of print statements to thread.py and queue.py, so the line numbers in the tracebacks differ slightly from the out-of-the-box versions of those files.) This issue looks very similar to #164, but I don't understand why merely having the put and the monitor subscriptions in a different order would cause it to fail. I have compared configurations that work against configurations that don't, and the root cause is not clear to me. Any help would be appreciated :-)
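One workaround I'm considering (untested against p4p; `do_put` below is a stand-in for `ctxt.put`, and all names are illustrative) is to never call the blocking put from inside the monitor callback at all, and instead hand the request off to a dedicated worker thread via a queue, so the subscription's worker thread is never blocked waiting on its own put completion:

```python
import queue
import threading

# Requests from monitor callbacks; each item is (pv_name, value),
# with None used as a shutdown sentinel.
put_queue = queue.Queue()

def put_worker(do_put):
    # Runs on its own thread: performs the (potentially blocking) puts
    # so the monitor callback never has to.
    while True:
        item = put_queue.get()
        if item is None:
            break
        name, value = item
        do_put(name, value)

def cb(V):
    # Monitor callback: only enqueue, never block.
    put_queue.put(('DEV:RW:DOUBLE1', '{}'))

# Demo with a fake put that just records what it was asked to do.
calls = []
worker = threading.Thread(target=put_worker,
                          args=(lambda n, v: calls.append((n, v)),))
worker.start()
cb(object())          # simulate one monitor event arriving
put_queue.put(None)   # shut the worker down
worker.join()
print(calls)          # [('DEV:RW:DOUBLE1', '{}')]
```

In the real script `do_put` would be `ctxt.put`, so the timeout-prone wait happens on the worker thread rather than inside the subscription event handler. I'd still like to understand why the ordering matters, though.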