neural-style
Project dependencies may have API risk issues
Hi. In neural-style, inappropriate dependency versioning constraints can introduce risks.
Below are the dependencies and version constraints that the project is using:
numpy
Pillow
scipy
tensorflow
The version constraint == introduces a risk of dependency conflicts because the dependency scope is too strict. The constraints "no upper bound" and * introduce a risk of missing-API errors because the latest version of a dependency may remove some APIs.
After further analysis of this project, the version constraint of the dependency Pillow can be changed to ==9.2.0, or to >=2.0.0,<=9.1.1.
The above suggestions reduce dependency conflicts as much as possible while still allowing the latest versions that do not raise API errors in the project.
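For concreteness, here is a minimal sketch of how the suggested bounds could be expressed, assuming a hypothetical setup.py (this repository may only ship a plain requirements list); it only illustrates the report above, not the project's actual packaging.

```python
# Hypothetical packaging sketch, not the project's actual setup.py;
# it only illustrates the bounds suggested in the report above.
from setuptools import setup

setup(
    name="neural-style",
    install_requires=[
        "numpy",                  # reported with no upper bound
        "scipy",                  # reported with no upper bound
        "tensorflow",             # reported with no upper bound
        "Pillow>=2.0.0,<=9.1.1",  # range suggested by the bot
    ],
)
```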
The current project invokes all of the following methods.
Calling methods from Pillow:
PIL.Image.open, PIL.Image.fromarray
Calling methods from all modules:
tensorflow.compat.v1.Variable.eval, loss_vals.items, tensorflow.compat.v1.ConfigProto, int, imresize, key.loss_arrs.append, len, imsave, map, tensor.get_shape, get_loss_vals.items, vgg.load_net, numpy.mean, numpy.matmul, tensorflow.compat.v1.placeholder, fmt_imsave, iteration_times.append, numpy.uint8.styled_grayscale_rgb.astype.Image.fromarray.convert, build_parser.add_argument, functools.reduce, float, _tensor_size, ax.semilogy, numpy.dstack, epsilon.beta2.beta1.learning_rate.tf.train.AdamOptimizer.minimize.run, main, bias.reshape.reshape, collections.OrderedDict.keys, numpy.random.normal, loss.eval, tensorflow.compat.v1.Graph.device, style_losses.append, math.floor, numpy.transpose, time.time, numpy.array, join, numpy.clip, os.path.dirname, gray2rgb.astype, path.Image.open.np.array.astype, re.match, combined_yuv.Image.fromarray.convert, ax.set_ylabel, numpy.empty, tensorflow.compat.v1.random_normal.astype, hms, imread, tensorflow.compat.v1.Graph, argparse.ArgumentParser, scipy.io.loadmat, matplotlib.use, rgb2gray, list, vgg.net_preloaded, stylize.stylize, matplotlib.pyplot.subplots, tensorflow.compat.v1.nn.bias_add, tensorflow.compat.v1.Session, ax.set_xlabel, numpy.uint8.original_image.astype.Image.fromarray.convert, tensorflow.compat.v1.global_variables_initializer, vgg.unprocess, layer.get_shape, collections.OrderedDict.values, epsilon.beta2.beta1.learning_rate.tf.train.AdamOptimizer.minimize, options.checkpoint_output.options.checkpoint_iterations.count, tensorflow.compat.v1.nn.relu, tensorflow.compat.v1.Variable, build_parser, _conv_layer, numpy.reshape, PIL.Image.fromarray.resize, print_progress, vgg.preprocess, isinstance, tensorflow.compat.v1.matmul, val.eval, numpy.dot, fmt.format, range, numpy.zeros, tensorflow.compat.v1.nn.l2_loss, image.eval.reshape, gray2rgb, sum, format, tensorflow.compat.v1.reshape, collections.OrderedDict.items, tensorflow.compat.v1.transpose, layer.net.eval, tensorflow.compat.v1.nn.conv2d, tensorflow.compat.v1.disable_v2_behavior, tensorflow.compat.v1.Graph.as_default, tensorflow.compat.v1.constant, collections.OrderedDict, PIL.Image.fromarray, build_parser.error, tensorflow.compat.v1.nn.avg_pool, IOError, ax.legend, tensorflow.compat.v1.random_normal, build_parser.parse_args, get_loss_vals, sess.run, ValueError, numpy.clip.astype, _pool_layer, loss_vals.keys, img.Image.fromarray.save, content_losses.append, itr.append, tensorflow.compat.v1.train.AdamOptimizer, print, arr.np.clip.astype, numpy.savetxt, enumerate, tensorflow.compat.v1.nn.max_pool, os.path.isfile, PIL.Image.open, numpy.std, img.np.clip.astype, fig.savefig
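For reference, a minimal sketch of the two flagged Pillow entry points; the file names are illustrative, and both calls have been stable across the version range discussed above.

```python
# Minimal sketch of the flagged Pillow calls; file names are illustrative.
import numpy as np
from PIL import Image

img = np.array(Image.open("content.jpg").convert("RGB"), dtype=np.float32)  # PIL.Image.open
out = Image.fromarray(np.clip(img, 0.0, 255.0).astype(np.uint8))            # PIL.Image.fromarray
out.save("output.jpg")
```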
@developer Could you please help me check this issue? May I open a pull request to fix it? Thank you very much.
Hi! I'm guessing PyDeps is an academic research project?
Adding the correct (and reasonable) lower bounds and the right upper bounds makes sense. I don't understand why this bot is suggesting Pillow<=9.1.1 as an upper bound, though. If I were to choose something manually, I'd hope that the Pillow devs follow semver and go with Pillow<10.
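For what it's worth, here is a quick way to sanity-check such a cap, assuming the packaging library; the lower bound below is only illustrative, and the <10 cap only helps if Pillow actually follows semver.

```python
# Illustrative check of a semver-style cap; the lower bound is arbitrary.
from packaging.specifiers import SpecifierSet

spec = SpecifierSet(">=9.0,<10")
print("9.2.0" in spec)    # True  -> allowed
print("10.0.0" in spec)   # False -> excluded by the cap
```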