Error uploading from takeouts to immich server
Hi,
I'm trying to upload the takeouts downloaded from Google Photos to the Immich server, and I'm getting the following error:
ERR upload error file=takeout-20250411T100635Z-001:Takeout/Google Fotos/Archivo/IMG_20181115_160436.jpg error=io: read/write on closed pipe
I don't know what to do and it's driving me mad :(
Without a broader view of the context, I'd say it's a problem with the network configuration.
Hi,
First, I'd like to say I appreciate the good work; immich-go really does help a lot. Thank you.
I'm also having the same issue: it runs smoothly with --dry-run, but I get the error when uploading .mp4 files (probably due to their size). Is there a specific flag, something like --include-unmatched, to force the upload to continue despite the errors?
Maybe the system integrity check causes this too; see https://immich.app/docs/administration/system-integrity/, which reads:
The above error messages show that the server has previously (successfully) written .immich files to each folder, but now does not detect them. This could be because of any of the following:
- Permission error - unable to read the file, but it exists
- File does not exist - volume mount has changed and should be corrected
- File does not exist - user manually deleted it and should be manually re-created (touch .immich)
- File does not exist - user restored from a backup, but did not restore each folder (user should restore all folders or manually create .immich in any missing folders)
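If the missing-marker case applies, the docs' `touch .immich` fix can be scripted rather than done folder by folder. A minimal sketch, assuming the library is mounted at /mnt/immich/library (that path is a guess — adjust it to your own volume):

```shell
LIB=/mnt/immich/library   # assumed mount point - adjust to your own volume
# Re-create the .immich marker in every top-level upload folder that lost it.
if [ -d "$LIB" ]; then
  find "$LIB" -maxdepth 1 -type d -exec sh -c 'touch "$1/.immich"' _ {} \;
fi
```

This only covers the top level; drop the `-maxdepth 1` if your folders are nested.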
If it is the network, I'd love to read some best practices on how to solve the issue. Worst case: I just re-open the cmd and resume the process every 10-100 successful uploads.
Best,
@Giordeano I guess the size of the files makes the server less responsive for a while. I'd suggest increasing the --client-timeout value to 20m.
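Concretely, that suggestion would look something like the following (server URL, API key, and archive name are placeholders; verify the exact flag spelling on your version with `immich-go upload --help`):

```shell
# Raise the client timeout so large MP4 uploads aren't cut off mid-transfer.
# Flag name and default may differ per version - check `immich-go upload --help`.
./immich-go upload from-google-photos \
  --server=http://immich.server.net:2283 \
  --api-key=***** \
  --client-timeout=20m \
  ./takeout-001.zip
```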
Hi, will just add my experience here in case it helps find a good solution.
I'm pretty sure my network config isn't at fault, as I'm pushing from the same machine to the same immich server through the same router as I have done many times before. This time, however, I have rebuilt TrueNAS and installed immich fresh to version 2.1.0 as upgrading to the new folder structure was defeating me and this is just a dev instance.
Pushing takeout zips with 8000+ files previously worked without any issue on immich-go version 0.25.3.
With the new server build and immich-go version 0.28.0, I was running into issues at least every 1000 files and having to restart:
./immich-go upload from-google-photos --server=http://immich.server.net:2283 --api-key=***** ./takeout-001.zip
I seem to have fixed this by adding:
--concurrent-uploads=4
I couldn't see anything in the logs to pin the error to specific file types. There were errors on MP4 files, but also sessions throwing errors without any MP4s in the history.
Update: this fix lasted a bit longer before failing; I seem to get more uploads through before hitting an issue. Is it possible to include/exclude specific file types from the upload, so I can test whether excluding MP4 files removes the trigger point?
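In the meantime, one way to correlate failing sessions with MP4 content is to list what's inside each takeout archive before uploading it. A sketch using standard `unzip` (the archive name is a placeholder):

```shell
# List the archive's entries, keep only MP4s, sort ascending by size (column 1),
# so the largest files - the likeliest suspects - appear last.
unzip -l takeout-001.zip | grep -i '\.mp4' | sort -n -k1
```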
Second update: increasing the resources for the immich container also seems to help, but nothing has let me upload my entire takeout archive in one run.
If large MP4 files are identified as a problem, is it feasible to limit large file uploads to 1 at a time rather than allowing all concurrent uploads to be 'problem' files?
> If large MP4 files are identified as a problem, is it feasible to limit large file uploads to 1 at a time rather than allowing all concurrent uploads to be 'problem' files?
Right idea — there's lots of potential in "prioritizing" the upload queue. It could also take an --order-by=size,asc argument (or something similar) that uploads photos ordered by file size, minimizing the problem scope.
In fact, since this is causing problems, surely the default value for --concurrent-uploads should be 1? Those with beefy enough servers could then increase the value to their own limits.
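Until something like an --order-by flag exists in the tool, a rough approximation at the archive level is to feed the takeout parts to immich-go smallest-first. A sketch with placeholder server/key values and a hypothetical `takeout-*.zip` naming pattern (assumes no spaces in the filenames):

```shell
# ls -S sorts largest-first; -r reverses it, so parts go up smallest-first.
# Any size-related failure then shows up late, narrowing the problem scope.
for z in $(ls -rS takeout-*.zip); do
  ./immich-go upload from-google-photos \
    --server=http://immich.server.net:2283 \
    --api-key=***** \
    "$z"
done
```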
My own testing was done against an Immich container running on HexOS/TrueNAS. The container is permitted 12 CPU cores and 16 GB RAM (which I consider way too high), but I've not done any more tuning than that (so I don't know, for example, whether the container is able to increase multithreading). The hardware TrueNAS sits on is an 8-core, 16-thread AMD Ryzen 7 7840HS with 64 GB RAM, and storage is provided by two 2 TB NVMe drives in a mirrored RAID. So I don't know of any bottlenecks on my server side.
Is there any way of figuring out where the server-side bottleneck is?
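One starting point, assuming a Docker-based install (on TrueNAS apps the container runtime and names may differ — list them first): watch whether the Immich containers actually hit their CPU/memory ceiling while an upload run is in progress.

```shell
# Show running containers to find the actual names on your install.
docker ps
# Stream live CPU/memory usage while immich-go is uploading.
# "immich_server" is a typical name, not guaranteed on every setup.
docker stats immich_server
```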