Project dependencies may have API risk issues
Hi, in MedicalNet, overly strict dependency version constraints can introduce risks.
Below are the dependencies and version constraints the project currently uses:
pip>=9.0.1
torch==0.4.1
numpy==1.15.4
nibabel==2.4.1
scipy==1.1.0
argparse==1.1
The version constraint == introduces a risk of dependency conflicts because the allowed version range is too narrow. Constraints with no upper bound (or *) introduce a risk of missing-API errors, because the latest version of a dependency may remove APIs the project relies on.
After further analysis of this project, the version constraint of scipy can be relaxed to >=0.9.0,<=1.7.3, and the version constraint of argparse can be relaxed to >=1.2.1,<=1.4.0.
These suggested changes reduce the chance of dependency conflicts as much as possible and allow the newest compatible versions to be installed without causing errors in the project.
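As a concrete sketch, only the two affected requirement lines would change, using the ranges suggested above (the remaining pins stay as listed earlier):
scipy>=0.9.0,<=1.7.3
argparse>=1.2.1,<=1.4.0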
The project invokes all of the following methods; a small compatibility check for the two affected third-party APIs is sketched after this list.
The calling methods from scipy:
scipy.ndimage.interpolation.zoom
The calling methods from argparse:
argparse.ArgumentParser.parse_args, argparse.ArgumentParser
The calling methods from all the methods:
torch.load self.bn1 self.__testing_data_process__ models.resnet.resnet101 self.__resize_data__.get_data torch.nn.DataParallel.load_state_dict os.path.isfile self.conv3 models.resnet.resnet10 self.bn3 m.bias.data.zero_ self.layer1 self.conv2 torch.optim.SGD torch.optim.lr_scheduler.ExponentialLR enumerate torch.cat torch.nn.functional.avg_pool3d self.modules str list argparse.ArgumentParser self.BasicBlock.super.__init__ torch.nn.MaxPool3d isinstance self.conv_seg torch.nn.Sequential self.__nii2tensorarray__ torch.nn.DataParallel.cuda new_parameters.append self.relu setting.parse_opts filter train layers.append models.resnet.resnet34 nibabel.load new_data.astype.astype model.load_state_dict volumes.cuda.cuda self.layer2 torch.nn.ReLU torch.nn.BatchNorm3d loss_seg.cuda.cuda loss.backward torch.nn.DataParallel.parameters self.layer3 self.ResNet.super.__init__ line.strip self.__random_center_crop__ torch.optim.SGD.zero_grad torch.nn.DataParallel.named_parameters format len self._make_layer models.resnet.resnet50 self.conv1 out.size.out.size.out.size.out.size.planes.out.size.torch.Tensor.zero_ datasets.brains18.BrainS18Dataset functools.partial os.makedirs models.resnet.resnet152 random.random numpy.random.normal scipy.ndimage.interpolation.zoom numpy.reshape torch.nn.DataParallel logging.getLogger exit ResNet model.state_dict torch.utils.data.DataLoader self.__training_data_process__ numpy.where m.weight.data.fill_ self.Bottleneck.super.__init__ new_label_masks.cuda.cuda model.state_dict.keys print block os.path.join logging.basicConfig models.resnet.resnet200 torch.optim.SGD.step os.path.exists range models.resnet.resnet18 model.state_dict.update argparse.ArgumentParser.add_argument self.__crop_data__ pname.find numpy.array numpy.min self.__resize_data__ torch.nn.Conv3d self.layer4 open pixels.std torch.Tensor os.path.dirname self.relu.size self.bn2 model.generate_model torch.nn.CrossEntropyLoss model map torch.autograd.Variable argparse.ArgumentParser.set_defaults torch.nn.init.kaiming_normal loss.item torch.save torch.optim.lr_scheduler.ExponentialLR.get_lr numpy.max numpy.zeros torch.nn.DataParallel.state_dict super torch.optim.SGD.state_dict new_label_masks.torch.tensor.to torch.optim.SGD.load_state_dict torch.manual_seed self.__itensity_normalize_one_volume__.get_data model.train loss_seg torch.load.items time.time argparse.ArgumentParser.parse_args id fio.read self.maxpool torch.nn.ConvTranspose3d self.downsample torch.tensor torch.optim.lr_scheduler.ExponentialLR.step fio.read.splitlines loss_seg.item utils.logger.log.info idx.self.img_list.split int self.__itensity_normalize_one_volume__ conv3x3x3 pixels.mean zero_pads.cuda.cuda self.__drop_invalid_range__
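For reference, here is a minimal smoke test of the two third-party call sites listed above, assuming an installed scipy/argparse version inside the suggested ranges. The array shape, zoom factors, and argument names below are arbitrary illustrations, not values taken from MedicalNet itself:

```python
# Minimal sketch: confirm the scipy and argparse APIs used by the project
# still resolve and behave as expected under the relaxed version constraints.
import argparse

import numpy as np
import scipy
from scipy.ndimage.interpolation import zoom  # call site used for volume resizing

print("scipy version:", scipy.__version__)

# scipy.ndimage.interpolation.zoom: resize a dummy 3D volume
# (shape and zoom factors here are arbitrary placeholders).
volume = np.random.rand(8, 8, 8)
resized = zoom(volume, zoom=(2.0, 2.0, 2.0), order=1)
assert resized.shape == (16, 16, 16)

# argparse.ArgumentParser / add_argument / set_defaults / parse_args:
# mirror the option-parsing pattern; the option names are illustrative.
parser = argparse.ArgumentParser()
parser.add_argument("--batch_size", default=1, type=int)
parser.set_defaults(phase="train")
opts = parser.parse_args(["--batch_size", "4"])
assert opts.batch_size == 4 and opts.phase == "train"

print("scipy.ndimage.interpolation.zoom and argparse APIs are available.")
```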
@developer Could you please help me check this issue? May I open a pull request to fix it? Thank you very much.