docker-icloudpd
WARNING: failsafe file not found
I never get to the download stage because it says the failsafe file does not exist. The problem is that the .mounted file is there; it just isn't being seen.
Can anyone help me get this to work? Would be forever grateful.
Thanks
I've been at this for the last 3 days solid. I'm also unable to get the .mounted file picked up. I've tried various combinations of folders and permissions, but no luck. Does anyone have a screenshot of the variables they set related to file paths?
I've set 'download_path=/volume1/homes/freben/iCloud' as per the recommendation and moved the .mounted file there, but the script still gets stuck on WARNING Failsafe...
I am using this on Synology and haven't had this issue. I would recommend checking the user_id, group, and group_id settings. They should match those of the Synology host user that you want to run under. I haven't set the download_path setting at all in my config.
Thanks for the response. I'm about to throw in the towel as I just can't get it right.
Here's my config:
[image: Screenshot 2024-04-09 at 14.35.33.png] Should I be adding the .mounted file in the volume settings?
[image: Screenshot 2024-04-09 at 14.34.41.png]
A few notes.
- You can remove synology_photos_app_fix, it doesn't work
- Is the user_id the same as your synology user?
- group should be a name like "users", group_id should be the id
- Try not using a home folder for the download_path - I would suggest using a base shared folder (see the sketch below)
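Putting those notes together, here is a rough sketch of the relevant settings (a sketch only; the id values and username are placeholders, so substitute the output of the id command for the Synology user you actually run under):

# On the Synology host, check the user's ids first
id freben
# example output: uid=1027(freben) gid=100(users) groups=100(users)

# Then mirror those values in the container's configuration:
user=freben
user_id=1027
group=users
group_id=100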
Thanks, I have done as suggested. I've even gone so far as to remove everything I'd done and start from scratch. Here is the log it produced after I completed the 2FA.
Folder permissions set to allow user freben
Folder with .mounted file as per the config
Config of the container. Note that I cannot add anything related to group or group_id, as the process identifies that it is already there and exits:
Thanks for your help!
Following this as well. I was able to work around this issue by running sudo docker exec -it icloudpd touch /home/user1/iCloud/.mounted which worked successfully. I believe this proves that the container was successfully able to create this file with its given permissions.
Once my photos started downloading, I could see all the file names in the log, but nothing was actually writing to the disk!
Edit: I threw the kitchen sink at it and still no luck. FYI, I'm also on Synology. I've tried setting the user to my local user/ID, Docker's user/ID, toggling force_gid on and off... no idea where to go next with this.
Edit 2: I went into Container Manager and opened a new terminal with ash. I was able to create a password for my user using passwd user, then login user. From there, I could cd to /home/user1/iCloud and create and destroy files with no permission problems. I'm really beginning to suspect there's a real bug here. Happy to provide more info if needed.
This is exactly what I thought the issue was. I used the command you suggested and it seems to have created the file, though I can't see it; I assume it is hidden. In any case, I no longer get the waiting for failsafe file warning --- whooohooo!!!
It logged into the icloud account and found the photos. However now I'm waiting to see if it actually downloads them. I specifically used an iCloud account with like 50 images to ensure it works before I download the 2TB library. So waiting for the 50 images to come through first. Once they're there I'll amend and start the big one...
Thanks for your help so far! Much appreciated.
If you create the failsafe file using sudo docker exec -it icloudpd touch /home/user1/iCloud/.mounted or something similar, then you are creating that file inside the container. This defeats the purpose of the failsafe mechanism.
It's likely the photos will be downloaded inside the container, rather than on a volume, so all photos will be deleted whenever the container is re-created/upgraded.
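As a sketch of the alternative (the path here is a placeholder for whichever shared folder you actually map into the container), the marker should be created on the host side instead:

# Run this over SSH on the Synology host, not via docker exec
touch /volume1/photos/icloud/.mounted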
@boredazfcuk yeah, I totally agree and understand this, but I did this merely as a debugging exercise to see if it made any progress. The peculiar part is that no assets are being written to disk, even within the container, so there's some sort of write error going on. Whether it's a permissions problem or something else is what I'm not clear on.
Edit: I'm diving into the code now. I see the check for .mounted here, but I don't see where the .mounted file is created at the ${download_path}.
Edit 2: I tried to create a completely new instance locally using Docker on my Mac and - would you know it - I ran into the exact same issue!
@frebens when you create a file with a . preceding its name, it is a hidden file. You should still be able to see it with ls -a, or by turning on the option to view hidden files in your GUI.
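For example, over SSH on the NAS (using the download folder mentioned earlier in the thread):

ls -la /volume1/homes/freben/iCloud
# .mounted should show up in the listing alongside . and ..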
The script doesn't create the .mounted file. It has to be a manual process performed outside of the container.
It's basically a marker so you can say "this is the volume I want my photos to appear in".
When the container launches, if it can't see the .mounted file, it knows that the volume isn't attached to the container correctly. This way it avoids downloading all the photos inside the container and losing them when the container is upgraded.
Also, the Docker container files are often contained in the root partition. If you don't map a volume correctly, then download 2TB to the container, chances are that you will fill the root partition of your server.
The only other way I could test for this would be to mount the Docker sock file inside the container and start querying that. Seems a bit overly complicated and also gives permission to the script to interact with the Docker system.
I feel that's a major security concern for users as it would mean I could start querying all sorts of stuff from within the container, with a few modifications to the script.
Creating the mounted marker file is a simpler and more secure way of telling the container it's looking at the correct place.
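As a rough illustration of the kind of check being described (a simplified sketch, not the project's actual startup script; the five-minute retry interval is an assumption):

# Block until the failsafe marker exists in the download directory
while [ ! -f "${download_path}/.mounted" ]; do
   echo "WARNING  Failsafe file ${download_path}/.mounted file is not present. Waiting for failsafe file to be created..."
   sleep 300
done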
Thanks for the reply. Pardon my ignorance but could you elaborate a bit about when the .mounted file is created? I now understand the script doesn't create it, but at what point during the process is it created and by what function?
Edit: okkkkk...so maybe I figured this whole thing out. It sounds like the .mounted file needs to be created...manually by the user? If this is the case, I would suggest it be added to the README or better yet, include it as part of the Initialization process - even if it's just a warning to the user to create it manually along with the command to do so.
Once my photos started downloading, I could see all the file names in the log, but nothing was actually writing to the disk!
I realized that this was just part of the process where the file names were being logged to the console but not actually downloading yet. After leaving it for some time, the download has now started.
What's the status on this?
What's the status on this?
I was unfortunately unable to make this work on my Synology NAS. I am hoping someone much more clever than me can solve it.
I am using this on Synology and haven't had this issue. I would recommend checking the user_id, group, and group_id settings. They should match those of the Synology host user that you want to run under. I haven't set the download_path setting at all in my config.
Can you please elaborate on this? Maybe provide screenshots or something so I'm clear on this. I'm fairly certain all the permissions are correct.
The script doesn't create the .mounted file. It has to be a manual process performed outside of the container.
It's basically a marker so you can say "this is the volume I want my photos to appear in".
When the container launches, if it can't see the .mounted file, it knows that the volume isn't attached to the container correctly. This way it avoids downloading all the photos inside the container and losing them when the container is upgraded.
Also, the Docker container files are often contained in the root partition. If you don't map a volume correctly, then download 2TB to the container, chances are that you will fill the root partition of your server.
The only other way I could test for this would be to mount the Docker sock file inside the container and start querying that. Seems a bit overly complicated and also gives permission to the script to interact with the Docker system.
I feel that's a major security concern for users as it would mean I could start querying all sorts of stuff from within the container, with a few modifications to the script.
Creating the mounted marker file is a simpler and more secure way of telling the container it's looking at the correct place.
Can you please elaborate on mapping the volume? I'm not clear on this. Every time I try to add a folder, the second field is looking for another folder? Another file? No matter what I put in the second field, the container settings will not save and I get a form error. I'm also finding that when I go to map the volume I don't get the option of selecting any shared folders... I don't have a clue what is going on here.. very frustrating.
I would suggest it be added to the README or better yet, include it as part of the Initialization process - even if it's just a warning to the user to create it manually along with the command to do so.
It's already part of the configuration guide. It will also tell you about it if you run sync-icloud.sh --help
Can you please elaborate on mapping the volume? I'm not clear on this.
When working with containers, if you delete the container, or upgrade it, you will lose everything inside it. For this reason, you need to map external volumes to directories inside the container which act as persistent storage. The container needs two of these persistent volumes. One to store the /config data and another for your download location, which would default to /home/user/iCloud if the download_path variable is not configured.
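In docker run terms, that would look something like this (a sketch; the host paths are examples only, and download_path=/photos would then be set in the configuration):

docker run -d \
  --name icloudpd \
  --volume /volume1/docker/icloudpd/config:/config \
  --volume /volume1/photos/icloud:/photos \
  boredazfcuk/icloudpd:latest
# host path on the left of each colon, path inside the container on the right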
Every time I try to add a folder, the second field is looking for another folder? Another file? No matter what I put in the second field, the container settings will not save and I get a form error. I'm also finding that when I go to map the volume I don't get the option of selecting any shared folders... I don't have a clue what is going on here.. very frustrating.
I'm not sure what any of this means. It sounds like a problem with the container managing system on the device you're using to be honest. That's not something I'm familiar with as I don't use any UnRAID, TrueNAS or Synology type NAS devices.
Okay.. I'm on Synology. I've attached a screenshot of the fields in Container Manager that frebens had. He added /config to the second field. So I would map the folder docker/icloudpd to /config for the config data and leave the download path as default. I understand now.
My other issue is still creating the .mounted file. I read that you need to make the file outside of the container, as the user, with full permissions. I'm not exactly sure how to do that. I've tried two different ways and neither of them has worked.
I tried SSHing in as the user, went to the folder, and did touch .mounted. It creates the file just fine, but I'm still getting the error.
I tried using a text editor to create a text doc .mounted.txt, removed the .txt, and placed it in the folder; that didn't work either.
I'm just not clear how to create the file outside of the container. Could you please help me understand that part? Not sure what I'm not doing...
thanks
I'm not familiar with the Synology Container Manager, but try creating two volumes and attach them to the container. One called "config" and attach it to "/config" and a second one, called "photos" and mapped to "/photos". Then set the "download_path" variable to "/photos"
Then, on the NAS, find the location that the volume lives using whatever file manager is on there, and create a file called ".mounted" in it.
I don't have a Synology so I don't really know what to advise beyond that.
I did that but permissions still seemed to be an issue. I fixed it by getting to the /bin/sh shell of the container and running these commands:
# Change ownership of the directory to the user
sudo chown -R user /path/to/directory
# Grant full access (read, write, execute) to the user
sudo chmod -R u+rwx /path/to/directory
The container does pretty much both of those on every launch as part of its initialisation. Just need to set the directory_permissions and file_permissions variables with the permissions you want.
I have mine set to 750 for directories and 640 for files, which is the default. What you've done there is the equivalent of setting both directory and file permissions to 777. That would make sense for directories, as they need to be executable to be able to move through them. By setting them to 777, you're allowing everyone with access to the system to browse the folders.
Setting file permissions to 777 is a little overkill though. They're photos, not programs that need to be executed, so should really be set to 666 tops. This would allow read/write access for everyone who has access to the system, but doesn't set an unnecessary executable bit.
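For example (a sketch; 750 and 640 are the defaults mentioned above, set alongside the other variables in your configuration):

directory_permissions=750
file_permissions=640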
Hey guys, sorry for bringing this topic up again. I've been stuck at the fu****g failsafe thingy on Synology for hours now. I've tried nearly everything I could find about this without any success. Is there anybody here who finally got it working on a Synology NAS? This .mounted file is where it should be, but the script seems to ignore it.
Here is where it always ends up for me:
2024-07-07 18:54:45 INFO Container initialisation complete
2024-07-07 18:54:45 DEBUG Group, users:100, already created
2024-07-07 18:54:45 DEBUG User, michael:1026, already created
2024-07-07 18:54:45 DEBUG Set owner and group on icloudpd temp directory
2024-07-07 18:54:45 DEBUG Set owner and group on config directory
2024-07-07 18:54:45 INFO Directory is writable: /config/python_keyring/
2024-07-07 18:54:45 DEBUG Configure password
2024-07-07 18:54:45 DEBUG Using password stored in keyring file: /config/python_keyring/keyring_pass.cfg
2024-07-07 18:54:45 INFO Check download directory mounted correctly...
2024-07-07 18:54:45 WARNING Failsafe file /volume1/homes/michael/Photos/icloud_M/.mounted file is not present. Waiting for failsafe file to be created...
I am on a Synology and this part is working fine for me. Be sure to check:
- User id in the container config - should match the user id from Synology
- Username in the container config - should match the user in Synology
- Group id in the container config - should match the user's group
- Permissions on the photos folder should be correct for that user
Thanks for the quick reply.
User_id, username, group_id, and group in the config are set according to this output of the id command:
uid=1026(michael) gid=100(users) groups=100(users),101(administrators)
in the config file:
...
group=users
group_id=100
...
user=michael
user_id=1026
...
Even changing group to administrators and group_id to 101 doesn't help.
The folder containing the .mounted file is owned by this user; permissions are set to full access for this user and read/write for the "everyone" group, and the same for the file itself.
Something else must be wrong.
The folder path is set in the config and in my docker run as well; maybe I missed a "/" somewhere? (Although I copied and pasted the path from the file manager.)
config: download_path=/volume1/homes/michael/Photos/icloud_M
docker run -d \
--name iCloudPD \
--restart=always \
--env TZ=Europe/Berlin \
--volume /volume1/docker/icloudPD/config:/config \
--volume /volume1/homes/michael/Photos/icloud_M:/home/boredazfcuk/iCloud \
boredazfcuk/icloudpd:latest
Your download path is wrong.
You've mounted /volume1/homes/michael/Photos/icloud_M to /home/boredazfcuk/iCloud inside the container, so your download path would need to be /home/boredazfcuk/iCloud.
But for that to be valid, you'd need your username to be boredazfcuk, but I know from another post that it's michael.
For simplicity, just mount your volume like --volume /volume1/homes/michael/Photos/icloud_M:/photos and set download_path=/photos.
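Applied to the docker run command above, that suggestion would look roughly like this (a sketch; only the second volume mapping changes, plus download_path=/photos in the config):

docker run -d \
  --name iCloudPD \
  --restart=always \
  --env TZ=Europe/Berlin \
  --volume /volume1/docker/icloudPD/config:/config \
  --volume /volume1/homes/michael/Photos/icloud_M:/photos \
  boredazfcuk/icloudpd:latest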
Thank you for your time. Now it seems to work. I'll let it run and hope to see files appearing in my folder tomorrow morning. ;-) I don't know how I ended up with that wrong configuration; I changed and tried so much this weekend, probably some copy & paste from somewhere. But anyhow: it's finally Sunday evening and the project is done so far. Thank you very much.
Btw: Is there a way to map the log into the docker folder?
Just for my understanding: by setting download_path=/photos in the config file, I tell the script to create this folder /photos within the container? (It is not already there, so I could name it whatever I want, as long as I do the volume mapping to the same folder name.)
And with the volume mapping in the "docker run" I make it permanently available for direct access.
Did I get it right?
(I'm quite new to all this Linux and server stuff and still have a lot to learn.)
Can you tell me where the config for the container is, or how I find it? I'm not familiar with it..