Scheduler API additions
Describe the feature you need

I would like some extra Python features for schedulers:

- A default "all events" `.events()`, equal to `.events(0, float("inf"))`
- A join operator (preferably `|`; `+` might be confused with a translation of existing spike times)
- Access to the schedule of an `event_generator` or `spike_source_cell`
- Picklable `*_schedule` objects

If there are any internal advantages of using a `regular_schedule` over an `explicit_schedule` with the same spike times, it would be nice if these advantages could be maintained.
Explain what it is supposed to enable

Fire-and-forget composition of input signals on spike source cells and event generators from external formats like SONATA, from tools, or from user code. Allowing multiple schedules on a source cell or event generator might also solve this, but I discarded that as a change with more impact and possible overhead compared to more events on a single schedule.
Another problem point is that schedules aren't picklable: for shared Poisson schedules I have to create the schedule on one node and send it across MPI to the other nodes, which requires pickling. I have worked around this in the same way as the joining, by converting schedules to their explicit events.
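A minimal sketch of that workaround (the helper name `join_as_explicit` is mine; it assumes `schedule.events(t0, t1)`, which returns the event times in `[t0, t1)`, and an `explicit_schedule` built from a list of times — constructor signatures vary between Arbor versions):

```python
import arbor

def join_as_explicit(schedules, t0, t1):
    # Expand each schedule into its concrete event times on [t0, t1)
    # and rebuild one explicit_schedule from the time-ordered union.
    # The resulting list of times is also trivially picklable.
    times = sorted(t for sched in schedules for t in sched.events(t0, t1))
    return arbor.explicit_schedule(times)

# Constructor arguments are illustrative; adjust for your Arbor version.
joined = join_as_explicit(
    [arbor.regular_schedule(0.5), arbor.poisson_schedule(freq=10.0, seed=1)],
    0.0, 100.0,
)
```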
Additional context

I'm writing an interface between Arbor and my framework's network format, and I have definitions of devices that might impinge multiple schedules on the same spike source cell or event generator.
Hi @Helveg,

let's go over your suggestions:

- Sure, that's easy enough.
- A join operator `<>` on schedules would yield the time-ordered union, right? This is done internally when you define multiple sources (via `generators_on` or `spike_source` connections), so there's no need for this in Arbor and it would just duplicate internal functionality (as sketched after this list).
- Sure, but why? We are allowing at most R/O access.
- I am not sure what this is supposed to mean.
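For illustration, a sketch of where that internal merging comes from, assuming the Python `recipe` API with labelled targets; `lif_cell`, `event_generator`, and the schedule constructors may have different signatures in your Arbor version:

```python
import arbor

class TwoGeneratorRecipe(arbor.recipe):
    # Two event generators aimed at the same target: Arbor interleaves
    # their events in time order internally, which is exactly the union
    # a user-facing join operator would otherwise compute.
    def __init__(self):
        arbor.recipe.__init__(self)

    def num_cells(self):
        return 1

    def cell_kind(self, gid):
        return arbor.cell_kind.lif

    def cell_description(self, gid):
        # A LIF cell with a spike source labelled "src" and a synapse
        # target labelled "tgt".
        return arbor.lif_cell("src", "tgt")

    def event_generators(self, gid):
        return [
            arbor.event_generator("tgt", 0.01, arbor.regular_schedule(0.5)),
            arbor.event_generator("tgt", 0.01, arbor.poisson_schedule(freq=10.0, seed=1)),
        ]
```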
- Then it would be nice if this internal functionality were accessible.
- That's for the fire-and-forget part, so that I can read out what the final schedule looks like.
- That they can be serialized and sent across a wire in a distributed environment. It's already relatively simple to extract the state and initialize a new schedule on the other side of the wire, but it would be nicer if this were supported from your side.
- No, it's deep within the communications code where these events are interleaved with the incoming spikes. It's nothing we can make accessible.
- Sure, I got that before, but why?
- You could just serialise a type tag and the associated parameters (see the sketch after this list)? We do not support sending schedules over the wire, since Arbor's model of distributed computation is the `recipe`; we currently have no plans on changing that.
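A hedged sketch of that type-tag approach with mpi4py: broadcast plain data and rebuild the schedule on every rank. The parameter names below are illustrative and depend on your Arbor version:

```python
from mpi4py import MPI
import arbor

comm = MPI.COMM_WORLD

# Rank 0 decides the schedule; only a dict of plain data crosses the wire.
spec = {"kind": "poisson", "tstart": 0.0, "freq": 10.0, "seed": 42} if comm.Get_rank() == 0 else None
spec = comm.bcast(spec, root=0)

# Rebuild an identical schedule locally from the tag and parameters.
builders = {
    "poisson": lambda p: arbor.poisson_schedule(tstart=p["tstart"], freq=p["freq"], seed=p["seed"]),
    "regular": lambda p: arbor.regular_schedule(p["dt"]),
}
schedule = builders[spec["kind"]](spec)
```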
Just a heads up, but #1962 might make your life a bit easier here.
Stepping back a bit, your request for multiple interleaved schedules on a source cell sounds much more reasonable.
So, you now have access to spike sources with multiple schedules (see the sketch after this list). Regarding the remaining items:

- I'd rather not implement `schedule::events(from=0, to=time_max)`, since it can potentially eat all your memory. If you want to get all events into eternity, you should be thinking about what you are asking for. Typing out that query will hopefully make people realise that `regular_schedule(0, 0.5).events(0, end_of_the_universe)` is probably not going to end well.
- Merging events is covered on our side and not exposable. If you need it in C++, https://en.cppreference.com/w/cpp/algorithm/merge is a ready-made algorithm, and Python has `heapq.merge`.
- Accessing schedules: unfortunately these are stateful objects (for reasons of GPU support) and `events` manipulates that state, so we cannot really give you access without breaking things in Arbor.
- Similar for pickling. However, explicit generators are gone, making shipping the parameters easier. Is that enough?
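A sketch of the multi-schedule spike source mentioned above, assuming the constructor now accepts a list of schedules (exact signatures depend on your Arbor version):

```python
import arbor

# One spike source fed by several schedules at once; Arbor merges
# their events internally, so no user-side join is needed.
cell = arbor.spike_source_cell(
    "src",
    [arbor.regular_schedule(0.5), arbor.poisson_schedule(freq=10.0, seed=1)],
)
```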
Hey @Helveg,

it's been almost three weeks. Any thoughts?
There wasn't much of a problem to begin with; I requested this to avoid spurious `.events` calls when operating on schedules and generators. Fine for me :)
Good, so time to close this. If you need more than what's already in place, feel free to reopen and/or add new issues.