TableNet-pytorch
could not find MARK
Hi, this project has been very helpful for writing my thesis. Could you fix the installation guide for the dependencies needed for local deployment?
Also, the link to the processed Marmot dataset is not working, so please provide a new link. Thank you!
The links for the processed data and trained models are dead. I will re-create the project over the weekend and provide fresh links.
Thank you very much!
> The links for the processed data and trained models are dead. I will rebuild the project over the weekend and provide new links.

Hello, has the project rebuild been completed yet?
It will take some time; I need to restart the project with new dependencies and will do that within a week. Both the raw and processed data are corrupted on my end, and the trained model is corrupted as well.
An alternative would be to follow along with the blog post I shared and redo everything on your own.
If there is any progress on rebuilding this project, I hope you can share it. I am also replicating it by following your blog, but unfortunately there are still errors I cannot solve.
Let me know up to which point you were able to rebuild the project; if possible, I shall guide you onward from there. Meanwhile, I am rebuilding the project using the same blog.
processed Marmot dataset: https://drive.google.com/file/d/1ZkLjqywNF5I_5IoQjElrqGaxITWXmbqM/view?usp=share_link
trained model: https://drive.google.com/file/d/13eDDMHbxHaeBbkIsQ7RSgyaf6DSx9io1/view?usp=share_link
training code: https://drive.google.com/file/d/1Ay-9ZBBKjCgSkoXLntI0iqIEIhn88tw6/view?usp=share_link
processed_data_v2.csv: https://drive.google.com/file/d/1ngihDDNo7ToKlqzC3w1oh8tgz9Z08b5h/view?usp=share_link
The processed Marmot data is generated using the steps mentioned in the blog, and the links to the raw data are already provided in the README. The process is to run EDA_v1 followed by EDA_v2 to generate processed_data_v2.csv.
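As a quick sanity check that the EDA_v1 → EDA_v2 step produced what you expect, you can inspect the generated CSV. This is just a sketch; the file path below assumes processed_data_v2.csv sits in the current directory, so adjust it to wherever your notebook wrote it.

```python
import pandas as pd

# Sanity-check the output of EDA_v2; adjust the path to wherever
# the notebook wrote processed_data_v2.csv.
df = pd.read_csv("processed_data_v2.csv")
print(df.shape)              # number of samples and columns
print(df.columns.tolist())   # inspect the actual column names
print(df.head())             # spot-check a few rows (image/mask paths)
```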
There is no change to the code. Training can be done using the training code linked above, which is exactly the same as the code in the repo; I used that code to train on Google Colab. To train your own model, you just need to update the data paths in config.py and dataset.py in the training code.
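For orientation, here is a minimal sketch of how the paths in processed_data_v2.csv typically feed a PyTorch dataset. The class name and the column names ("img_path", "table_mask_path", "col_mask_path") are illustrative assumptions, not the repo's actual dataset.py; check the CSV header and the code in the repo and adjust accordingly.

```python
import numpy as np
import pandas as pd
from PIL import Image
from torch.utils.data import Dataset


class MarmotTableDataset(Dataset):
    """Sketch of a CSV-driven dataset for TableNet-style training.

    Assumes processed_data_v2.csv holds one row per sample with paths to
    the document image, the table mask, and the column mask. The column
    names below are guesses -- adjust them to match your CSV.
    """

    def __init__(self, csv_path, transform=None):
        self.df = pd.read_csv(csv_path)
        self.transform = transform  # e.g. an albumentations Compose

    def __len__(self):
        return len(self.df)

    def __getitem__(self, idx):
        row = self.df.iloc[idx]
        image = np.array(Image.open(row["img_path"]).convert("RGB"))
        table_mask = np.array(Image.open(row["table_mask_path"]).convert("L"))
        column_mask = np.array(Image.open(row["col_mask_path"]).convert("L"))
        if self.transform is not None:
            # albumentations applies the same spatial transform to image and masks
            out = self.transform(image=image, masks=[table_mask, column_mask])
            image, (table_mask, column_mask) = out["image"], out["masks"]
        return image, table_mask, column_mask
```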
The requirements to train the model are fairly straightforward; I'll add the latest requirements here:
torch==2.0.0
seaborn==0.12.2
pytorch_model_summary==0.1.2
efficientnet_pytorch==0.7.1
albumentations==1.2.1
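After installing those packages, a quick environment check like the sketch below can confirm the versions line up before you start training; it only imports the listed packages and prints their versions.

```python
# Quick environment check after installing the requirements above.
import torch
import seaborn
import albumentations
import efficientnet_pytorch
import pytorch_model_summary  # importing successfully is enough here

print("torch:", torch.__version__)                                 # expect 2.0.0
print("seaborn:", seaborn.__version__)                             # expect 0.12.2
print("albumentations:", albumentations.__version__)               # expect 1.2.1
print("efficientnet_pytorch:", efficientnet_pytorch.__version__)   # expect 0.7.1
print("CUDA available:", torch.cuda.is_available())                # a GPU is strongly recommended for training
```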
Let me know if you face any other challenges.