David Nadlinger

Results 149 comments of David Nadlinger

In my mind, the solution here is really just for the comms CPU (core0) not to interpret any of the payload, but to ship it to the kernel CPU (core1) instead,...
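To make that concrete, here is a minimal, generic Python sketch of "pass the payload through opaquely" – nothing ARTIQ-specific, and `forward_message`/`to_kernel_cpu` are just illustrative stand-ins for the core0 → core1 mailbox:

```python
import io
import queue
import struct

# Stand-in for the core0 -> core1 mailbox (illustrative only).
to_kernel_cpu = queue.Queue()

def forward_message(stream, mailbox):
    """Read one length-prefixed message and pass the payload through untouched."""
    (length,) = struct.unpack(">I", stream.read(4))
    payload = stream.read(length)   # opaque bytes, never decoded on the comms side
    mailbox.put(payload)            # interpretation happens on the kernel side

# Example: a 5-byte payload framed with a big-endian length prefix.
wire = io.BytesIO(struct.pack(">I", 5) + b"hello")
forward_message(wire, to_kernel_cpu)
assert to_kernel_cpu.get() == b"hello"
```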

Just for the record, the above test case still takes about 0.5 s on 1f40f3ce157f518ef03474d1408b08cd7c50e994/Kasli – but hey, at least it doesn't time out anymore.

For me, exposing messaging as a primitive to kernel code directly (that is, instead of only as a synchronous request/response pair) would be more important than a hard-earned latency improvement...
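As a rough illustration of the difference (plain Python, with two queues standing in for the kernel ↔ host channel; `rpc`, `send` and `poll` are hypothetical names, not ARTIQ API):

```python
import queue
import threading
from typing import Optional

to_host = queue.Queue()
from_host = queue.Queue()

def rpc(request: bytes) -> bytes:
    """Synchronous request/response: the caller blocks until the reply arrives."""
    to_host.put(request)
    return from_host.get()

def send(message: bytes) -> None:
    """Messaging primitive: fire and forget; the caller keeps running."""
    to_host.put(message)

def poll() -> Optional[bytes]:
    """Messaging primitive: pick up a pending message, or None if there is none."""
    try:
        return from_host.get_nowait()
    except queue.Empty:
        return None

# Minimal "host" that echoes everything back, so both styles can be exercised.
def host() -> None:
    while True:
        from_host.put(to_host.get())

threading.Thread(target=host, daemon=True).start()
print(rpc(b"ping"))                              # blocks for the reply
send(b"status?")                                 # returns immediately
print(poll() or "nothing yet, check again later")
```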

(The units should be µs, by the way.)

> It should be possible, but you still need a thread to wait for the messages, and I don't think you can do it with low latency in python due...

@pathfinder49 One to add to the list…

Reopening, as the fix was reverted in https://github.com/m-labs/artiq/commit/ae999db8f6814c63eae563aca69276feb59d305d (see the commit message for details).

#1464 should improve the situation considerably by always saving to HDF5 once the run stage is reached, i.e. even if it finishes with an exception. If the user crashes/deadlocks...
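The general pattern (not the actual #1464 change; `execute_run` is a made-up helper here, and h5py stands in for whatever writer the master uses) is simply to make the HDF5 write unconditional once the run stage has started:

```python
import h5py

def execute_run(run, datasets: dict, filename: str) -> None:
    """Once the run stage starts, always write the datasets out,
    even if the experiment dies with an exception part-way through."""
    try:
        run(datasets)
    finally:
        with h5py.File(filename, "w") as f:
            for name, value in datasets.items():
                f[name] = value

def flaky_run(datasets):
    datasets["counts"] = [1, 2, 3]
    raise RuntimeError("hardware went away")

try:
    execute_run(flaky_run, {}, "results.h5")
except RuntimeError:
    pass  # the partial results are on disk regardless
```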

You could also do the message processing in a fiber, context-switching back to normal execution every time you block waiting for input. Then again, requiring 2 x max_message_size in RAM...
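A generator makes for a cheap Python stand-in for that fiber (obviously not the Rust firmware): the processing coroutine yields whenever it would otherwise block, and the caller switches back to it only once input has actually arrived.

```python
def message_processor():
    """Fiber-like message processing: yield (context-switch back to the
    caller) every time we would otherwise block waiting for input."""
    while True:
        header = yield                  # suspended until the next chunk arrives
        length = int.from_bytes(header, "big")
        payload = yield                 # suspended again for the body
        print(f"processed {length}-byte message: {payload!r}")

proc = message_processor()
next(proc)                              # prime the generator

# "Normal execution" interleaves here; resume the fiber only when input is ready.
proc.send((5).to_bytes(4, "big"))
proc.send(b"hello")
```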

(Fibers/green threads/… would also give you proper handling of multiple connections for free, although I must admit I'm not quite sure what the current story with the Rust runtime is...
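For what it's worth, the effect is easy to see with cooperative tasks in Python's asyncio (a different runtime than the Rust one in question, and the port number below is arbitrary): each connection gets its own task, so handling several of them falls out of the model rather than needing extra bookkeeping.

```python
import asyncio

async def handle_connection(reader: asyncio.StreamReader,
                            writer: asyncio.StreamWriter) -> None:
    # Each connection runs in its own task; awaiting a read suspends only
    # this task, so the other connections keep being served.
    while data := await reader.read(4096):
        writer.write(data)              # echo, as a stand-in for real processing
        await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main() -> None:
    server = await asyncio.start_server(handle_connection, "127.0.0.1", 8888)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```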