sequitur
Library of autoencoders for sequential data
Hello there, I have added batch support for the LSTM_AE. I don't know if this is 100% compatible with the library, but I'm only using the LSTM_AE in one of...
When I run the code under "3. Train the autoencoder" on Windows 10 (RTX 3070, torch 1.7), an error appears. What should I do?
Hi, I am using a trail dataset like [[x1, y1], [x2, y2]], reshaped it to [[x1, x2, x3], [y1, y2, y3]] (length about 150), and used LSTM_AE to train on it. I want...
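If it helps, the recurrent autoencoders appear to expect each training example as a tensor of shape [seq_len, num_features], as the README's examples suggest (this layout is an assumption, not confirmed here). A minimal sketch of packing a trail of (x, y) points into that shape:

```python
import torch

# A trail of (x, y) points: [[x1, y1], [x2, y2], ...]
trail = [[0.0, 1.0], [0.5, 1.2], [1.0, 1.5]]

# One tensor of shape [seq_len, num_features] = [3, 2] per sequence.
seq = torch.tensor(trail)
train_set = [seq]
```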
Fix deprecation issue in MSELoss (`reduction` is now the proper keyword; 'sum' is the mode it used to operate in, so I preserved it here). Fix issue where squeezing x...
Does it support batch size greater than 1?
Training models gives the warning:

```
/usr/local/lib/python3.6/dist-packages/torch/nn/_reduction.py:44: UserWarning: size_average and reduce args will be deprecated, please use reduction='sum' instead.
  warnings.warn(warning.format(ret))
```

replacing line 28 of quick_train.py with

```
criterion =...
```
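For reference, the non-deprecated way to request a summed loss in current PyTorch is shown below; whether this matches the exact replacement proposed in the issue is an assumption.

```python
import torch.nn as nn

# Sum the squared errors instead of averaging them, replacing the
# deprecated size_average/reduce arguments with reduction="sum".
criterion = nn.MSELoss(reduction="sum")
```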
When I try to run the LSTM_AE I get the following error:

```
IndexError                                Traceback (most recent call last)
c:\Users\sdblo\Mijn Drive\PhD\Publicaties\graph_node_autoencoder\sequitur_example.py in line 56
     31 # torch.use_deterministic_algorithms(True)
     32
     33 #...
```
How can I use CUDA to accelerate training when using quick_train? Does quick_train provide a parameter to specify the device?
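quick_train may not expose a device argument, so one workaround is to bypass the helper and run a plain PyTorch training loop on the GPU. A minimal sketch, assuming LSTM_AE can be imported from sequitur.models, is constructed with input_dim/encoding_dim/h_dims as in the README, and returns the reconstruction from its forward pass (all of these are assumptions, not confirmed library behaviour):

```python
import torch
from sequitur.models import LSTM_AE

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Constructor arguments follow the README example; names are assumptions.
model = LSTM_AE(input_dim=2, encoding_dim=7, h_dims=[64]).to(device)
criterion = torch.nn.MSELoss(reduction="sum")
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Dummy training data: 100 sequences of shape [seq_len, input_dim].
train_set = [torch.randn(10, 2) for _ in range(100)]

for epoch in range(50):
    total_loss = 0.0
    for x in train_set:
        x = x.to(device)
        optimizer.zero_grad()
        x_hat = model(x)            # assumed to return the reconstruction
        loss = criterion(x_hat, x)
        loss.backward()
        optimizer.step()
        total_loss += loss.item()
```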
Hi @shobrook, in the README of this repo it might help to link to an official academic paper. This one seems most fitting given the approach used here: https://arxiv.org/pdf/1607.00148.pdf
Oliver