Refactor docval shape validation
In the process of writing unit tests for shape validation in docval, I realized that the interaction between shape validation and default values is not clear-cut. For example, if TimeSeries.timestamps accepts the types ('array_data', 'data', 'TimeSeries') and has a required shape of (None,), then the default value of None is not compatible with the required shape. Should we check the shape only if a non-None value is provided?
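For reference, here is a minimal sketch of the kind of spec in question (the Example class is hypothetical and only approximates the PyNWB TimeSeries.timestamps entry):

```python
from hdmf.utils import docval, getargs

class Example(object):
    # Approximation of the TimeSeries.timestamps spec: shape=(None,) requires a
    # 1-D array of any length, but the default value None cannot satisfy any
    # shape constraint, so the two settings are in tension.
    @docval({'name': 'timestamps', 'type': ('array_data', 'data'), 'shape': (None,),
             'doc': 'timestamps for samples', 'default': None})
    def __init__(self, **kwargs):
        self.timestamps = getargs('timestamps', kwargs)

Example()                      # default None: the shape check is skipped
Example(timestamps=[1., 2.])   # 1-D data: passes the (None,) shape check
# Example(timestamps=[[1.]])   # 2-D data: expected to fail the shape check
```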
The interaction of shape validation and non-standard types, e.g. when TimeSeries.timestamps is another TimeSeries or a DataChunkIterator, also gets complicated and is not currently tested.
Lastly, in PyNWB, these checks are performed primarily in __init__, but the shapes of the stored objects can change later.
As per discussion with @ajtritt, I think it would make sense to move shape validation from docval to the setters created for the attribute names in __nwbfields__ (this mechanism will eventually move over to HDMF with NWBBaseType; see #100).
This enhancement would be nice to do in order to make the code easier to test and maintain.
The setters should also perform type validation if they do not already (it looks like they do not).
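A rough sketch of what a generated setter could look like if shape and type validation moved there. The factory name make_field_setter and its wiring are hypothetical; get_data_shape is the existing HDMF utility that docval already uses for shape checks:

```python
from hdmf.utils import get_data_shape

def make_field_setter(name, field_type=None, shape=None):
    """Hypothetical factory for the property setters generated from __nwbfields__.

    Validation runs on every assignment, not just in __init__, so shape/type
    errors are caught even when an attribute is changed after construction.
    """
    attr = '_' + name

    def setter(self, val):
        if val is not None:  # None is always allowed, mirroring the docval behavior
            if field_type is not None and not isinstance(val, field_type):
                raise TypeError("%s must be of type %s" % (name, field_type))
            if shape is not None:
                val_shape = get_data_shape(val)
                if (val_shape is None or len(val_shape) != len(shape) or
                        any(d is not None and d != v for d, v in zip(shape, val_shape))):
                    raise ValueError("%s must have shape %s, got %s" % (name, shape, val_shape))
        setattr(self, attr, val)

    return setter

# Hypothetical wiring, similar to what the __nwbfields__ machinery could generate:
class MyContainer(object):
    timestamps = property(lambda self: getattr(self, '_timestamps', None),
                          make_field_setter('timestamps', shape=(None,)))
```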
Shape validation is not performed on None. This is in line with type hints in the standard Python library, and I can't think of a better way to do it. Perhaps it would be nice to note this in the docs, but I think it's pretty clear from context, given that None is used as the default value regardless of whether there is a shape check.
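The analogy is to how an Optional annotation works in standard Python typing, for example:

```python
from typing import Optional, Sequence

# None is accepted by the Optional annotation no matter what the non-None type
# requires; the shape check follows the same convention.
def set_timestamps(timestamps: Optional[Sequence[float]] = None) -> None:
    ...
```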
How would moving shape validation from docval to the setters help us cover complex types like TimeSeries and H5DataIO?
I feel that this would be superseded by our desire to eventually integrate pydantic/refactored validation. @rly