mRASP2
Project dependencies may have API risk issues
Hi, in mRASP2, inappropriate dependency version constraints can introduce risks.
Below are the dependencies and version constraints that the project is using:
subword-nmt
sacrebleu
sacremoses
kytea
six
A version constraint of == introduces a risk of dependency conflicts because the dependency scope is too strict. A constraint with no upper bound, or *, introduces a risk of missing-API errors because the latest version of a dependency may remove some APIs.
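To make the second risk concrete: sacrebleu 1.x exposes a top-level compute_bleu helper (it appears in the call list below), and later releases reorganized the metrics API, so with an unbounded constraint the resolved version may no longer provide that attribute. A minimal, hypothetical guard, not code from the project, could look like this:

```python
import sacrebleu

# sacrebleu 1.x exposes compute_bleu at module level; newer releases
# reorganized the metrics API, so under an unbounded version constraint
# this attribute may be missing at runtime.
if not hasattr(sacrebleu, "compute_bleu"):
    raise RuntimeError(
        f"sacrebleu {sacrebleu.__version__} does not provide compute_bleu; "
        "pin a compatible release"
    )
```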
After further analysis of this project, the version constraint of the dependency sacrebleu can be changed to >=1.1.0,<=1.1.1, or to >=1.1.3,<=1.4.5.
These suggested changes reduce dependency conflicts as much as possible while allowing the most recent versions that do not trigger errors in the project.
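As an illustration of how the suggested bound could be applied, here is a minimal, hypothetical packaging stub (not the project's actual setup file; the non-sacrebleu entries are placeholders left unpinned):

```python
from setuptools import setup, find_packages

# Hypothetical setup.py sketch: only the sacrebleu range follows the
# suggestion above; the remaining entries would need their own analysis.
setup(
    name="mrasp2",
    version="0.1.0",
    packages=find_packages(),
    install_requires=[
        "sacrebleu>=1.1.3,<=1.4.5",  # suggested upper-bounded range
        "subword-nmt",
        "sacremoses",
        "kytea",
        "six",
    ],
)
```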
The current project invokes all of the following methods.
Methods called from sacrebleu (a usage sketch follows the full list below):
sacrebleu.corpus_bleu sacrebleu.compute_bleu
All methods called across the project:
get_hypo_and_ref numpy.array counts.append fairseq.utils.strip_pad max all_dataset_upsample_ratio.strip fairseq.data.PrependTokenDataset self.swap_sample FileNotFoundError self.tgt_dict.string model.encoder.forward.transpose torch.no_grad inspect.getfullargspec log.get float tqdm.tqdm hyps.append json.loads torch.cat f.read.split self.temperature.anchor_dot_contrast.torch.div.nn.LogSoftmax.diag cls.load_dictionary.index eval.readlines self.dataset.size self.temperature.anchor_dot_contrast.torch.div.nn.LogSoftmax.diag.sum isinstance torch.LongTensor _sentence_embedding self.inference_step fairseq.criterions.label_smoothed_cross_entropy.LabelSmoothedCrossEntropyCriterion.add_args open.close mask.float cls recover_bpe open hasattr id_num.score_dict.append self.dataset.prefetch fairseq.data.TruncateDataset eval.read numpy.array.sum src_list.append fairseq.models.transformer.transformer_wmt_en_de_big_t2t self.tgt_dict.pad eval toks.int src_datasets.append format bpe_symbol.line.replace.rstrip cls.load_dictionary.eos fairseq.data.data_utils.infer_language_pair torch.cat.contiguous logging.getLogger.info argparse.Namespace fairseq.models.register_model_architecture j.line.split similarity_function str Exception ValueError self.set_epoch open.write mask.float.sum.unsqueeze fairseq.data.AppendTokenDataset fairseq.data.encoders.build_tokenizer super.set_epoch super.__init__ cls.load_dictionary.unk size_ratio.dataset.len.np.ceil.astype super self.padding_idx.src_tokens.int.sum numpy.argsort super.reduce_metrics self.padding_idx.src_tokens.int itertools.count os.path.join self.padding_idx.target.int super.build_model generator.generate super.valid_step round int len fairseq.data.indexed_dataset.dataset_exists refs.append os.path.dirname torch.nn.LogSoftmax toks.int.cpu logging.getLogger re.compile mask.unsqueeze self.tokenizer.decode numpy.ceil remove_bpe_fn fairseq.tasks.register_task fairseq.tasks.translation.TranslationTask.add_args re.search.span torch.nn.CosineSimilarity self.dataset.num_tokens totals.append fairseq.utils.deprecation_warning self.compute_loss cls.load_dictionary self.target_dictionary.index prefix_tokens.to.to split_exists fairseq.utils.eval_bool remove_bpe torch.transpose self.len.np.random.permutation.astype getattr fairseq.tasks.translation.load_langpair_dataset torch.div re.search target.contiguous sum_logs fairseq.metrics.log_scalar self.padding_idx.target.int.sum contrast_feature.expand numpy.random.permutation tgt_list.append self.dataset.__getitem__ numpy.random.RandomState cls.load_dictionary.bos src_tokens.size numpy.random.RandomState.choice load_langpair_dataset bpe_symbol.line.replace.rstrip.replace fairseq.data.data_utils.load_indexed_dataset cls.load_dictionary.pad sacrebleu.compute_bleu fairseq.options.eval_bool mask.float.sum map self.get_contrastive_loss fairseq.data.StripTokenDataset self.build_generator fairseq.utils.split_paths fairseq.data.ConcatDataset fairseq.metrics.log_derived decode data.SubsampleLanguagePairDataset model join parser.add_argument id_num.hypothesis_dict.append tgt_datasets.append math.log fairseq.data.plasma_utils.PlasmaArray prefix_tokens.to.expand self.similarity_function all_dataset_upsample_ratio.strip.split fairseq.data.LanguagePairDataset id_num.pos_score_dict.append mask.unsqueeze.encoder_output.sum numpy.arange fairseq.utils.item o.write sacrebleu.corpus_bleu reprocess sample.size re.search.group fairseq.models.transformer.transformer_wmt_en_de fairseq.criterions.register_criterion self._inference_with_bleu mono_datas.append 
range anchor_feature.expand prefix_tokens.torch.LongTensor.unsqueeze sum model.encoder.forward
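For reference, the sacrebleu entry points listed above are typically used along these lines; this is a hedged sketch against the sacrebleu 1.x API, not code copied from mRASP2:

```python
import sacrebleu

# Corpus-level BLEU over detokenized hypotheses and references.
# corpus_bleu expects a list of hypothesis strings and a list of
# reference streams (one stream per reference set).
hyps = ["the cat sat on the mat"]
refs = [["the cat is sitting on the mat"]]

bleu = sacrebleu.corpus_bleu(hyps, refs)
print(f"BLEU = {bleu.score:.2f}")
```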
@developer Could you please help me check this issue? May I open a pull request to fix it? Thank you very much.