Update pack and unpack methods
Update of the pack and unpack methods. https://github.com/pyiron/pyiron_base/issues/775
- [x] `pack` includes the csv file inside the archive
- [x] `pack` filename is optional, uses ".tar.gz"
- [x] `pack` with same name as project should not delete the project
- [ ] `pack` selected jobs by id
- [x] `pack` all files in a job
- [x] `pack` from a different directory than where project is located
- [x] `unpack` method can be called as `pr = Project(filename.tar.gz, unpack=True)` or `pr = Project(filename, unpack=True)`
- [x] `unpack` should not nest project automatically
- [ ] `unpack` jobs into existing Project
- [ ] Update tests
- [ ] Update docstrings
- [ ] Update workflow template
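To make the checklist above concrete, here is a minimal, self-contained sketch of what a `pack` along these lines conceptually does: bundle the whole project directory, csv job table included, into a tar.gz, with an optional filename that defaults to `"<project name>.tar.gz"`. This is an illustration built on the stdlib `tarfile` module, not the actual `pyiron_base` implementation; the function name `pack` and the file name `export.csv` are assumptions for the demo.

```python
import tarfile
import tempfile
from pathlib import Path

def pack(project_dir, filename=None):
    """Illustrative pack(): archive a project directory (csv job table
    included) into a tar.gz file.

    If *filename* is omitted, default to "<project name>.tar.gz",
    mirroring the checklist item above. This is a sketch, not the
    pyiron_base API.
    """
    project_dir = Path(project_dir)
    if filename is None:
        filename = project_dir.name + ".tar.gz"
    elif not str(filename).endswith(".tar.gz"):
        filename = str(filename) + ".tar.gz"
    with tarfile.open(filename, "w:gz") as tar:
        # everything under the project goes in, csv job table included
        tar.add(project_dir, arcname=project_dir.name)
    return str(filename)

# --- tiny demo: a fake project with a csv job table and one job file ---
root = Path(tempfile.mkdtemp())
proj = root / "demo_project"
proj.mkdir()
(proj / "export.csv").write_text("id,job\n1,job_a\n")  # job table
(proj / "job_a.h5").write_text("fake job data")

archive = pack(proj, filename=root / "demo_project")
with tarfile.open(archive) as tar:
    members = tar.getnames()
```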
I am going to take over @srmnitc's work, so I copied the PR from his forked branch
ok I just realized that I had a different PR open and I opened this one on top. Let me correct this first XD
Thanks @samwaseda for picking this up. There are a couple of fixes also included in https://github.com/pyiron/pyiron_base/pull/1401 so maybe it makes sense to merge those as well.
Ok this thing is now feature-complete. I might want to do some cleaning, but otherwise it is ready for a review.
@pmrv @jan-janssen @srmnitc I guess it's now feature-complete, and this is probably going to be the last PR in the pack/unpack series. After this one a minor release can be made, and it's hopefully settled.
As far as I understand the csv file is now included in the tar archive, correct? How do you handle the backwards compatibility for old archives which do not include the csv file? Maybe it makes sense to have a short example either in the Docstring or the jupyter notebook to handle this backwards compatibility. Finally, I liked the option to take a look at the csv file to see which jobs are included in the archive before importing the corresponding jobs. Previously, I did this by loading the csv file with pandas. I can still load the csv file manually from the tar archive, but for other users it would be great to take a look at the archive and get the job table of the contained project from the python side.
> As far as I understand the csv file is now included in the tar archive, correct? How do you handle the backwards compatibility for old archives which do not include the csv file? Maybe it makes sense to have a short example either in the Docstring or the jupyter notebook to handle this backwards compatibility.
That's a good point that I should mention in the code. I guess it helps future generations understand the origin of some of the code.
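The backwards-compatibility concern raised above could be handled by probing the archive first: if the csv job table is inside the tarball, use it; otherwise fall back to a csv sitting next to the archive, as old archives shipped it. A rough stdlib sketch, assuming the table is named `export.csv` (the actual names and layout in pyiron_base may differ):

```python
import io
import tarfile
import tempfile
from pathlib import Path

def find_job_table(archive_path):
    """Locate the csv job table for a packed archive.

    New-style archives carry the csv inside the tarball; old ones kept
    it as a separate file next to the archive. Returns a (location,
    name) pair; "export.csv" is an assumed name for illustration.
    """
    archive_path = Path(archive_path)
    with tarfile.open(archive_path) as tar:
        csv_members = [m for m in tar.getnames() if m.endswith(".csv")]
    if csv_members:
        return ("inside", csv_members[0])
    # legacy layout: csv file next to the tar.gz archive
    external = archive_path.parent / "export.csv"
    if external.exists():
        return ("outside", str(external))
    raise FileNotFoundError("no job table found for " + str(archive_path))

# --- demo: build a new-style archive with the csv inside ---
root = Path(tempfile.mkdtemp())
new_style = root / "new.tar.gz"
with tarfile.open(new_style, "w:gz") as tar:
    data = b"id,job\n1,job_a\n"
    info = tarfile.TarInfo("proj/export.csv")
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))

location, name = find_job_table(new_style)
```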
> Finally, I liked the option to take a look at the csv file to see which jobs are included in the archive before importing the corresponding jobs. Previously, I did this by loading the csv file with pandas. I can still load the csv file manually from the tar archive, but for other users it would be great to take a look at the archive and get the job table of the contained project from the python side.
That sounds good but I guess it's a different PR.
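For reference, the manual inspection described in the quote, peeking at the job table without unpacking the archive, can be sketched with the stdlib alone (with pandas one could equally call `pd.read_csv` on the extracted file object). The member name and csv columns here are made up for the demo:

```python
import csv
import io
import tarfile
import tempfile
from pathlib import Path

def job_table(archive_path):
    """Read the csv job table straight out of a packed archive,
    without extracting it to disk. Illustrative sketch only."""
    with tarfile.open(archive_path) as tar:
        member = next(m for m in tar.getnames() if m.endswith(".csv"))
        raw = tar.extractfile(member).read().decode()
    return list(csv.DictReader(io.StringIO(raw)))

# --- demo: build a small archive containing a job table ---
root = Path(tempfile.mkdtemp())
archive = root / "proj.tar.gz"
with tarfile.open(archive, "w:gz") as tar:
    data = b"id,status,job\n1,finished,job_a\n"
    info = tarfile.TarInfo("proj/export.csv")
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))

rows = job_table(archive)
```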
Can I merge this one or should I still wait for a review?