
API consistency, stability and deprecation cycles

Open jasmainak opened this issue 3 years ago • 4 comments

As we approach the 0.2 release, I wanted to discuss overall API consistency, the policies we will have for deprecation cycles, and the release from which we can consider the API stable.

First, regarding consistency, many API conventions were copied from MNE-Python -- e.g., plotting functions follow the pattern plot_xxxx, take ax as a handle, and return fig. However, there are other areas that are new and must be discussed in the HNN context. One example is the shape of data containers: CellResponse is an object of shape n_cells x n_trials x n_times, the Dipole object has shape n_times, and the LFP object shape is yet to be decided. Perhaps many of these objects should have a copy method, a save method, a read_xxx function, an average method, etc. I would also like to construct NEURON objects through a separate method called build so the objects stay picklable until the simulation starts. Anything else? The more we anticipate and decide now, the less we have to deprecate later.
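To make the picklability point concrete, here is a minimal sketch of the build pattern; the Cell class, its attributes, and the build signature are illustrative only, not hnn-core's actual API, and a threading lock stands in for unpicklable NEURON state:

```python
import pickle
import threading

class Cell:
    """Hypothetical sketch: heavy, unpicklable simulator state is only
    created in build(), so a freshly constructed object can still be
    pickled (e.g. shipped to MPI workers)."""

    def __init__(self, name):
        self.name = name
        self._sections = None  # no NEURON objects yet

    def build(self):
        # In real code this would create NEURON h.Section objects, which
        # cannot be pickled; a lock stands in for that unpicklable state.
        self._sections = threading.Lock()

cell = Cell('L5_pyramidal')
pickle.dumps(cell)        # fine before build()
cell.build()
try:
    pickle.dumps(cell)    # now fails: locks cannot be pickled
except TypeError:
    pass
```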

Until now, we have not added any deprecation warnings and have changed the code as we liked. Moving forward, we have to be conscious that this might break user code. This implies that any change that affects public-facing functions and breaks user code will need a deprecation warning for at least one release cycle. To keep the code as flexible as possible, we should use private functions/methods as much as possible. I'm willing to wait until the 0.3 release for this to happen though ... when we have at least a few confirmed users :) Any opinions?
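As a sketch of what a one-cycle deprecation could look like (the function name and the postproc parameter are made up for illustration, not hnn-core's real API):

```python
import warnings

def simulate_dipole(net, postproc=None):
    """Hypothetical example of deprecating a keyword argument.

    The parameter still works for one release cycle, but emits a
    DeprecationWarning pointing users to the replacement.
    """
    if postproc is not None:
        warnings.warn('postproc is deprecated and will be removed in the '
                      'next release; post-process the returned Dipole '
                      'instead.', DeprecationWarning)
    return net  # stands in for running the actual simulation
```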

jasmainak avatar May 26 '21 11:05 jasmainak

Very timely to bring this up. I agree that from 0.3 onwards we should deprecate gracefully.

Something I've been thinking about when considering our simulation outputs: could we have NumPy array-based data containers? I can totally see that MPI wouldn't work if net contained arbitrary objects on entry into parallel_backends, but what about on the way out? Any analysis of the time series we generate will involve np.something, so sooner or later the casting has to happen. We might find that large LFP arrays sampled at the default 40 kHz for several seconds and hundreds of trials become very inefficient to maintain as plain Python lists, with all the overhead involved.
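A rough back-of-the-envelope comparison of the two representations (per-object sizes are CPython implementation details, so treat the numbers as indicative):

```python
import sys
import numpy as np

# One second of a single channel sampled at 40 kHz.
n_samples = 40_000
as_array = np.zeros(n_samples)  # float64: 8 bytes per sample
as_list = as_array.tolist()

array_bytes = as_array.nbytes
# A list stores a pointer per element plus a full float object per element.
list_bytes = sys.getsizeof(as_list) + sum(sys.getsizeof(x) for x in as_list)

print(f'ndarray: {array_bytes / 1e3:.0f} kB, list: {list_bytes / 1e3:.0f} kB')
```

On a typical 64-bit CPython the list ends up several times larger than the array, before even counting the cost of converting back and forth for analysis.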

cjayb avatar May 26 '21 14:05 cjayb

To help build consistency, what do you guys think about creating a base class for the data containers that the other objects inherit from? CellResponse is probably the trickiest one to convert. In any case, I am definitely in support of making NumPy arrays a standard output.
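One possible shape for the base-class idea; the class names mirror the discussion, but the implementation is purely a sketch, not hnn-core's actual code:

```python
from copy import deepcopy
import numpy as np

class _BaseData:
    """Hypothetical shared base for simulation outputs."""

    def copy(self):
        """Return a deep copy, so downstream edits don't mutate the original."""
        return deepcopy(self)

    @property
    def data(self):
        """Subclasses store their payload as an ndarray in self._data."""
        return self._data

class Dipole(_BaseData):
    """Shape (n_times,), per the container shapes discussed above."""

    def __init__(self, times, data):
        self.times = np.asarray(times)
        self._data = np.asarray(data)
```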

ntolley avatar May 26 '21 18:05 ntolley

+1 for using NumPy arrays. We have to be careful how we index gids, though, when we deal with cell_response.vsoma etc. Regarding the base class, it's not clear to me how it would be done; I guess I would need to see the code to understand. In MNE, folks use mixins rather than plain inheritance.
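For reference, the mixin style looks roughly like this; all names here are hypothetical, the idea being that classes compose small, focused helpers rather than inheriting from one deep hierarchy:

```python
from copy import deepcopy

class CopyMixin:
    """Adds copy() to any class that inherits it."""

    def copy(self):
        return deepcopy(self)

class GidMixin:
    """Adds gid lookup helpers; a container opts in only if it has gids."""

    def gid_index(self, gid):
        # Map a global cell id to its row in the container's arrays.
        return self.gids.index(gid)

class CellResponse(CopyMixin, GidMixin):
    def __init__(self, gids, vsoma):
        self.gids = list(gids)
        self.vsoma = list(vsoma)  # one voltage trace per gid
```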

jasmainak avatar May 27 '21 00:05 jasmainak

Moved the container discussion elsewhere, as it was off-topic here.

cjayb avatar May 28 '21 13:05 cjayb

I think this can be closed: we now use deprecation warnings in our regular workflow, and NEURON objects are indeed separated from Python objects.

ntolley avatar Jul 31 '24 18:07 ntolley