
A framework for developing neural network models for 3D image processing.

Results: 80 nobrainer issues

## Types of changes
- [ ] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change...

Since the default metric is dice, resuming from a previously trained checkpoint won't pose any issue. However, if the model was initially trained using a different metric or loss, then...

bug
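A generic Keras illustration, not nobrainer's own checkpoint machinery: when training is resumed, the loss and metrics come from the re-compile step, so a mismatch with the original run is exactly where the problem above arises. The checkpoint path and model shape below are placeholders.

```
import numpy as np
import tensorflow as tf

def build_model():
    # Placeholder model standing in for a previously trained nobrainer model.
    inputs = tf.keras.Input(shape=(4,))
    return tf.keras.Model(inputs, tf.keras.layers.Dense(1)(inputs))

x = np.random.rand(8, 4).astype("float32")
y = np.random.rand(8, 1).astype("float32")

# Original run: trained and checkpointed under one loss/metric configuration.
model = build_model()
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(x, y, epochs=1, verbose=0)
model.save_weights("ckpt.weights.h5")  # placeholder checkpoint path

# Resumed run: the loss/metrics are whatever this re-compile specifies, so
# changing them here makes the continued history and "best model" tracking
# non-comparable with the original run.
resumed = build_model()
resumed.load_weights("ckpt.weights.h5")
resumed.compile(optimizer="adam", loss="mse", metrics=["mae"])
resumed.fit(x, y, epochs=1, verbose=0)
```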

https://github.com/neuronets/nobrainer/blob/976691d685824fd4bba836498abea4184cffd798/nobrainer/processing/base.py#L28

`What would you like changed/added and why?` Similar to `model_args`, I'd like to see `checkpoint_args` passed from outside.

`What would be the benefit?` For example, if I...
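A minimal sketch of what the proposed `checkpoint_args` could carry, assuming it would simply be forwarded to the underlying `tf.keras.callbacks.ModelCheckpoint`; the parameter does not exist in nobrainer today, and the keys and path below are only illustrative.

```
import tensorflow as tf

# Hypothetical dict, analogous to model_args, supplied by the caller.
checkpoint_args = dict(
    save_best_only=True,
    save_weights_only=False,
    save_freq="epoch",
)

# What the estimator might do internally: forward the dict to the callback.
checkpoint_cb = tf.keras.callbacks.ModelCheckpoint(
    filepath="checkpoints/model_{epoch:03d}.keras",  # illustrative path
    **checkpoint_args,
)
```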

https://github.com/neuronets/nobrainer/blob/976691d685824fd4bba836498abea4184cffd798/nobrainer/processing/checkpoint.py#L57

What am I trying to do? Initialize from a previous checkpoint, to resume training over more epochs. For example, the following snippet:

```
try:
    bem = Segmentation.init_with_checkpoints(
        "unet",
        model_args=dict(batchnorm=True),
        ...
```

https://github.com/neuronets/nobrainer/blob/976691d685824fd4bba836498abea4184cffd798/nobrainer/dataset.py#L115-L122

If the number of volumes in the shard is too large, this snippet of code can be time-consuming. Alternatives are:
- use a combination of `n_volumes` and number of...
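For comparison only (not nobrainer's code), counting the serialized examples in a TFRecord shard without decoding the volumes is usually much cheaper than iterating over parsed data; the shard path below is a placeholder.

```
import tensorflow as tf

def count_records(shard_path):
    """Count serialized examples in one TFRecord shard without parsing them."""
    return sum(1 for _ in tf.data.TFRecordDataset(shard_path))

# Example (placeholder path):
# n_examples = count_records("data_shard-000.tfrec")
```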

We decided to add an extra feature labeled "data_count" to each record/example. While we do this, we also need to add logic to adjust the number of volumes in each...
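A hedged sketch, using only standard TensorFlow protos, of what attaching a `data_count` feature to each serialized example could look like; the other feature names are placeholders, not nobrainer's actual schema.

```
import numpy as np
import tensorflow as tf

def to_example(volume, label, data_count):
    """Serialize one volume/label pair with an extra int64 'data_count' feature."""
    features = tf.train.Features(
        feature={
            # Placeholder feature names; the real schema may differ.
            "volume": tf.train.Feature(
                bytes_list=tf.train.BytesList(value=[volume.tobytes()])
            ),
            "label": tf.train.Feature(
                bytes_list=tf.train.BytesList(value=[label.tobytes()])
            ),
            "data_count": tf.train.Feature(
                int64_list=tf.train.Int64List(value=[int(data_count)])
            ),
        }
    )
    return tf.train.Example(features=features)

# Example with toy arrays:
example = to_example(np.zeros((2, 2, 2), np.float32), np.zeros((2, 2, 2), np.uint8), 10)
serialized = example.SerializeToString()
```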

The following is an example of how the steps per epoch are incorrect while the number of batches is correct. This is because `get_steps_per_epoch` uses `n_volumes`, whereas getting the number...

bug
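A toy illustration, independent of nobrainer's `get_steps_per_epoch`, of why a step count derived from the item count can disagree with the number of batches the pipeline actually yields (for example when a remainder batch is dropped).

```
import math
import tensorflow as tf

n_items, batch_size = 10, 4
ds = tf.data.Dataset.range(n_items).batch(batch_size, drop_remainder=True)

estimated_steps = math.ceil(n_items / batch_size)  # 3, computed from the item count
actual_batches = sum(1 for _ in ds)                # 2, what training will actually see
print(estimated_steps, actual_batches)
```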

```
File "/net/vast-storage/scratch/vast/gablab/hgazula/nobrainer/nobrainer/dataset.py", line 338, in None  *
    lambda x, y: (x, tf.one_hot(y, self.n_classes))
TypeError: Value passed to parameter 'indices' has DataType float32 not in list of allowed values: uint8,...
```

bug
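The error itself is generic TensorFlow behaviour: `tf.one_hot` requires integer indices, so float labels must be cast first. A minimal sketch of that kind of fix, not the actual nobrainer patch:

```
import tensorflow as tf

n_classes = 3
y = tf.constant([0.0, 2.0, 1.0])  # float labels, as they come out of the pipeline

# tf.one_hot(y, n_classes) would raise the TypeError above;
# casting to an integer dtype first resolves it.
one_hot = tf.one_hot(tf.cast(y, tf.int32), n_classes)
print(one_hot.numpy())
```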

https://github.com/neuronets/nobrainer/blob/cb855feaadd4ac354e1e2d1c760a649df3f61ab4/nobrainer/dataset.py#L249-L250

Suggestion:

```
def scalar_labels(self):
    temp_ds = self.dataset.map(
        lambda _, y: tf.experimental.numpy.isscalar(y),
        deterministic=False,
        num_parallel_calls=AUTOTUNE,
    )
    return tf.math.reduce_all(list(temp_ds.as_numpy_iterator())).numpy()
```

Notes:
1. The previous snippet collects all label volumes into a list...

For reasons described [here](https://github.com/neuronets/nobrainer/issues/313) and discussed elsewhere, I suggest that `repeat` be called just before `fit` rather than before any map operations in the `Dataset` class, where it is redundant.

bug
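A hedged tf.data sketch of the proposed ordering: build the mapped and batched pipeline once, and call `repeat` only on the dataset handed to `fit`, rather than repeating before the map stage. The toy model and shapes are placeholders.

```
import tensorflow as tf

def to_xy(i):
    # Toy feature/label pair standing in for a (volume, label) block.
    x = tf.reshape(tf.cast(i, tf.float32), [1])
    y = tf.reshape(tf.cast(i % 2, tf.float32), [1])
    return x, y

batched = tf.data.Dataset.range(8).map(to_xy).batch(4)

# Proposed ordering: repeat only on the dataset given to fit(), after map/batch.
train_ds = batched.repeat()

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(optimizer="adam", loss="mse")
model.fit(train_ds, epochs=2, steps_per_epoch=2, verbose=0)
```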