
Recipe las-pyramidal.yaml does not work

Open SrutiBh opened this issue 6 years ago • 13 comments

Hi @neubig,

I have installed all the requirements, but whenever I try to install xnmt with "python setup.py install" it fails with "fatal: not a git repository". The full error is as follows.

(base) D:\anaconda_install_3\new path\pkgs\xnmt-master>python setup.py install
checking git revision in setup.py
fatal: not a git repository (or any of the parent directories): .git
Traceback (most recent call last):
  File "setup.py", line 36, in <module>
    open("./xnmt/git_rev.py", "w").write("CUR_GIT_REVISION = \"" + get_git_revision() + "\" # via setup.py")
TypeError: must be str, not NoneType

(base) D:\anaconda_install_3\new path\pkgs\xnmt-master>git init
Initialized empty Git repository in D:/anaconda_install_3/new path/pkgs/xnmt-master/.git/

(base) D:\anaconda_install_3\new path\pkgs\xnmt-master>python setup.py install
checking git revision in setup.py
fatal: Needed a single revision
Traceback (most recent call last):
  File "setup.py", line 36, in <module>
    open("./xnmt/git_rev.py", "w").write("CUR_GIT_REVISION = \"" + get_git_revision() + "\" # via setup.py")
TypeError: must be str, not NoneType

My conda version is 4.6.7. Please let me know if you require any additional information. I need to fix this as soon as possible.

Thanks,
Sruti

SrutiBh avatar Feb 26 '19 12:02 SrutiBh

@philip30 was the one who created the setup.py prompt I believe, so maybe he can help?

neubig avatar Apr 19 '19 18:04 neubig

That line was actually written by @msperber.

To quickly get past this, @SrutiBh, can you change line 36 to:

open("./xnmt/git_rev.py", "w").write("CUR_GIT_REVISION = \"" + str(get_git_revision()) + "\" # via setup.py")

philip30 avatar Apr 20 '19 10:04 philip30

@philip30 @neubig @msperber The line still shows the same problem, and errors now appear on the following lines as well. I think you missed a ')' after write("CUR_GIT_REVISION = \"", and since '#' starts a comment in Python, the closing bracket in '# via setup.py")' is treated as part of the comment and reported as an error.

SrutiBh avatar May 06 '19 05:05 SrutiBh

Ah yes, I forgot the closing bracket. Maybe the problem is that get_git_revision() returns None? Perhaps you can comment out the entire line, as it is only there for info. When not using conda, xnmt's setup.py should be run from the root of the XNMT directory.

philip30 avatar May 06 '19 06:05 philip30
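As an aside, a defensive version of that snippet could fall back to a placeholder instead of crashing when no revision is available. The sketch below is illustrative only: the get_git_revision() helper shown here is a hypothetical stand-in for the one in xnmt's setup.py, assumed to return None outside a git checkout, and the file write assumes you run it from the XNMT repository root.

```python
import subprocess

def get_git_revision():
    """Hypothetical stand-in for the helper in xnmt's setup.py:
    return the current commit hash, or None outside a git checkout."""
    try:
        return subprocess.check_output(
            ["git", "rev-parse", "HEAD"],
            stderr=subprocess.DEVNULL).decode().strip()
    except (subprocess.CalledProcessError, OSError):
        return None

# Fall back to a placeholder string rather than concatenating None.
revision = get_git_revision() or "unknown"
with open("./xnmt/git_rev.py", "w") as f:
    f.write('CUR_GIT_REVISION = "' + revision + '"  # via setup.py\n')
```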

@philip30 @neubig this is what I am getting now:

(base) E:\xnmt-master\examples>python -m xnmt.xnmt_run_experiments examples/01_standard.yaml
[dynet] random seed: 1416409111
[dynet] allocating memory: 512MB
[dynet] memory allocation done.
Traceback (most recent call last):
  File "e:\xnmt-master\xnmt\persistence.py", line 837, in experiment_names_from_file
    with open(filename) as stream:
FileNotFoundError: [Errno 2] No such file or directory: 'examples/01_standard.yaml'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\install_Anaconda3\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "D:\install_Anaconda3\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "e:\xnmt-master\xnmt\xnmt_run_experiments.py", line 123, in <module>
    sys.exit(main())
  File "e:\xnmt-master\xnmt\xnmt_run_experiments.py", line 56, in main
    config_experiment_names = YamlPreloader.experiment_names_from_file(args.experiments_file)
  File "e:\xnmt-master\xnmt\persistence.py", line 840, in experiment_names_from_file
    raise RuntimeError(f"Could not read configuration file {filename}: {e}")
RuntimeError: Could not read configuration file examples/01_standard.yaml: [Errno 2] No such file or directory: 'examples/01_standard.yaml'

(base) E:\xnmt-master\examples>

SrutiBh avatar Jun 28 '19 21:06 SrutiBh

Hi Sruti,

If you spend a little time reading the error message, it clearly says "FileNotFoundError: [Errno 2] No such file or directory: 'examples/01_standard.yaml'".

This is because you are running the program from the "examples" directory, so it tries to open "examples/01_standard.yaml" relative to that directory, and of course "examples/examples/01_standard.yaml" does not exist. Can you cd .. and run the same command again?

Thank you

philip30 avatar Jun 30 '19 14:06 philip30
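For reference, this is the same command as in the traceback above, issued from the repository root rather than from examples/ (prompt path assumed from the earlier output):

```
(base) E:\xnmt-master>python -m xnmt.xnmt_run_experiments examples/01_standard.yaml
```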

@philip30 @neubig Hi Philip, Thanks. It worked 👍 👍

Some other problems are occurring now. Whenever I try to run config.las-pyramidal.yaml from the recipes, it gives the error below. I am using a different dataset and want to train my data with the same LAS model given here; please have a look. I wonder whether the error comes from the YAML file structure, my file directories, or something else. Here it is showing an AutoRegressiveDecoder error. In this case I basically want to classify the data types that I have and see the accuracy.
[dynet] memory allocation done.
fatal: Not a git repository (or any of the parent directories): .git
running XNMT revision None on SRUTI on 2019-07-12 01:21:12
fatal: Not a git repository (or any of the parent directories): .git
=> Running las-pyramidal
Traceback (most recent call last):
  File "D:\install_anaconda3\Scripts\xnmt-script.py", line 11, in <module>
    load_entry_point('xnmt', 'console_scripts', 'xnmt')()
  File "e:\my xnmt\xnmt-master\xnmt\xnmt_run_experiments.py", line 108, in main
    raise e
  File "e:\my xnmt\xnmt-master\xnmt\xnmt_run_experiments.py", line 96, in main
    experiment = initialize_if_needed(uninitialized_exp_args)
  File "e:\my xnmt\xnmt-master\xnmt\persistence.py", line 1411, in initialize_if_needed
    return _YamlDeserializer().initialize_if_needed(root)
  File "e:\my xnmt\xnmt-master\xnmt\persistence.py", line 1141, in initialize_if_needed
    else: return self.initialize_object(deserialized_yaml_wrapper=obj)
  File "e:\my xnmt\xnmt-master\xnmt\persistence.py", line 1173, in initialize_object
    self.check_args(self.deserialized_yaml)
  File "e:\my xnmt\xnmt-master\xnmt\persistence.py", line 1190, in check_args
    _check_serializable_args_valid(node)
  File "e:\my xnmt\xnmt-master\xnmt\persistence.py", line 479, in _check_serializable_args_valid
    f"'{name}' is not a accepted argument of {type(node).__name__}.__init__(). Valid are {list(init_args.keys())}")
ValueError: 'rnn_layer' is not a accepted argument of AutoRegressiveDecoder.__init__(). Valid are ['input_dim', 'trg_embed_dim', 'input_feeding', 'bridge', 'rnn', 'transform', 'scorer', 'truncate_dec_batches']

SrutiBh avatar Jul 11 '19 19:07 SrutiBh

Hi Sruti,

The error is because the 'rnn_layer' field is not found in the constructor of the AutoRegressiveDecoder object. Can you map each object in the yaml file to the code and debug it? If you are using a specific version of the yaml file that was working and was written by @msperber, you should also check out the branch on which that code was working. (I believe the two have now somewhat diverged...)

philip30 avatar Jul 15 '19 05:07 philip30

Hi Sruti,

It seems that this recipe was not adjusted to the latest change in how decoders are specified. I can't send code right now unfortunately, but if you look at the examples (e.g. here: https://github.com/neulab/xnmt/blob/master/examples/01_standard.yaml ), you can see that all that needs to happen is to move a few things inside the decoder object.

-- Matthias (this comment is "not a contribution")

msperber avatar Jul 15 '19 08:07 msperber

Hi @msperber, @philip30, can you tell me specifically where I should edit and what I should include? I am afraid I am getting confused. I tried some of the example yaml files initially to test xnmt, but they were not working, so I wrote my own to make the h5 files. As @philip30 said, 'rnn_layer' is not in the constructor of the AutoRegressiveDecoder object, so where exactly should I edit to fix that? And @philip30, I don't understand how I should debug. Are you asking me to debug every line of las-pyramidal separately? If so, can you give me an example? I am not sure how to do this. Here is the yaml file I am using:

las-pyramidal: !Experiment
  exp_global: !ExpGlobal
    dropout: 0.2
    default_layer_dim: 512
    placeholders:
      DATA_DIR: E:\my xnmt\xnmt-master\recipes\mande
  preproc: !PreprocRunner
    overwrite: False
    tasks:
    - !PreprocExtract
      in_files:
      - '{DATA_DIR}/for_auditory_roughness/db/train.yaml'
      - '{DATA_DIR}/for_auditory_roughness/db/test.yaml'
      - '{DATA_DIR}/for_auditory_roughness/db/train.yaml'
      out_files:
      - '{DATA_DIR}/for_auditory_roughness/feat/devout.h5'
      - '{DATA_DIR}/for_auditory_roughness/feat/testout.h5'
      - '{DATA_DIR}/for_auditory_roughness/feat/trainout.h5'
      specs: !MelFiltExtractor {}
  model: !DefaultTranslator
    src_embedder: !NoopEmbedder
      emb_dim: 40
    encoder: !ModularSeqTransducer
      modules:
      - !PyramidalLSTMSeqTransducer
        layers: 4
        reduce_factor: 2
        downsampling_method: concat
        input_dim: 40
        hidden_dim: 512
    attender: !MlpAttender
      hidden_dim: 128
    trg_embedder: !SimpleWordEmbedder
      emb_dim: 64
      word_dropout: 0.1
      fix_norm: 1
    decoder: !AutoRegressiveDecoder
      rnn_layer: !UniLSTMSeqTransducer
        layers: 1
        hidden_dim: 512
      input_feeding: True
      bridge: !CopyBridge {}
      scorer: !Softmax
        label_smoothing: 0.1
    src_reader: !H5Reader
      transpose: true
    trg_reader: !PlainTextReader
      vocab: !Vocab
        vocab_file: '{EXP_DIR}/vocab.char'
      output_proc: join-char
  train: !SimpleTrainingRegimen
    src_file: '{DATA_DIR}/for_auditory_roughness/feat/trainout.h5'
    trg_file: '{DATA_DIR}/for_auditory_roughness/transcript/train.char'
    max_src_len: 1500
    max_trg_len: 350
    run_for_epochs: 500
    batcher: !WordSrcBatcher
      avg_batch_size: 24
      pad_src_to_multiple: 8
    trainer: !AdamTrainer
      alpha: 0.001
    lr_decay: 0.5
    lr_decay_times: 3
    patience: 8
    initial_patience: 15
    dev_every: 0
    restart_trainer: True
    dev_tasks:
    - !AccuracyEvalTask
      eval_metrics: wer,cer
      src_file: &dev_src '{DATA_DIR}/for_auditory_roughness/feat/devout.h5'
      ref_file: '{DATA_DIR}/for_auditory_roughness/transcript/dev.words'
      hyp_file: '{EXP_DIR}/logs/{EXP}.dev_hyp'
      inference: !AutoRegressiveInference
        max_src_len: 1500
        post_process: join-char
        search_strategy: !BeamSearch
          max_len: 500
          beam_size: 20
          len_norm: !PolynomialNormalization
            apply_during_search: true
            m: 1.5
    - !LossEvalTask
      max_src_len: 1500
      src_file: *dev_src
      ref_file: '{DATA_DIR}/for_auditory_roughness/transcript/dev.char'
  evaluate:
  - !AccuracyEvalTask
    eval_metrics: wer,cer
    src_file: '{DATA_DIR}/for_auditory_roughness/feat/testout.h5'
    ref_file: '{DATA_DIR}/for_auditory_roughness/transcript/test.words'
    hyp_file: '{EXP_DIR}/logs/{EXP}.test_hyp'
    inference: !AutoRegressiveInference
      max_src_len: 1500
      post_process: join-char
      search_strategy: !BeamSearch
        max_len: 500
        beam_size: 20
        len_norm: !PolynomialNormalization
          apply_during_search: true
          m: 1.5

Thanking you,
Sruti

SrutiBh avatar Jul 15 '19 13:07 SrutiBh

@SrutiBh it would help to read the documentation at https://xnmt.readthedocs.io/en/latest/getting_started.html so you won't need my help the next time you run into a problem (if you plan to use the code, I think it is essential to understand how xnmt works). And I am sorry, I can't help with the debugging, as the logic of that particular code was not written by me (and I might make a mistake in correcting it, a risk I don't want to take). BTW, as the error says, rnn_layers might refer to the number of RNN layers, so if you want to fix it, you may need to reflect that logic correctly in the code.

philip30 avatar Jul 17 '19 00:07 philip30

@philip30 the rnn bug is still there. I tried to find out whether any attribute names have changed, but I did not find the problem. Can you tell me who wrote las-pyramidal so that I can discuss it with them? However, I see that in place of rnn_layer it should be rnn... and I think for the Listen-Attend-Spell paper architecture we need to use a BiLSTMSeqTransducer, but in las-pyramidal it is structured with a UniLSTMSeqTransducer. Do both of them refer to the pyramidal LSTM?

SrutiBh avatar Aug 18 '19 11:08 SrutiBh

@SrutiBh you are correct that the rnn_layer parameter needs to be renamed. The config specifies a (bidirectional) pyramidal LSTM as encoder, and a unidirectional LSTM as decoder. This is correct, as on the decoder side only unidirectional LSTMs can be used. Hope this helps!

-- Matthias (this comment is "not a contribution")

msperber avatar Aug 21 '19 10:08 msperber
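For reference, the rename discussed above would presumably make the decoder block from the config quoted earlier look like the sketch below. This is untested and only adjusts the parameter name flagged in the ValueError; other parameters may still need to be aligned with examples/01_standard.yaml.

```yaml
decoder: !AutoRegressiveDecoder
  # 'rnn' replaces the old 'rnn_layer' key, matching the accepted
  # arguments listed in the error message above. Sketch only, untested.
  rnn: !UniLSTMSeqTransducer
    layers: 1
    hidden_dim: 512
  input_feeding: True
  bridge: !CopyBridge {}
  scorer: !Softmax
    label_smoothing: 0.1
```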