docker-crashplan-pro

Share additional information for Docker on QNAP devices

Open johnripper1 opened this issue 7 years ago • 10 comments

Hi there.

I really like the project, but for QNAP users a few modifications are required, which I want to share:

a) When creating the container, the --rm option must not be present, otherwise it will give an error. This was already discussed here: https://github.com/jlesage/docker-crashplan-pro/issues/5#issuecomment-348257069

b) QNAP users need to specify a USER_ID and GROUP_ID of 0 (zero) to access the files; otherwise authorization problems might occur:
-e USER_ID=0 \
-e GROUP_ID=0 \

c) The location of the data files on a standard QNAP installation is /share, so the volume should be mapped as "-v /share:/share:rw".

d) The location of the config files is also adjusted, to make sure it sits in the standard container folder: "/share/Container/appdata/crashplan-pro:/config:rw".

e) Changing the mapping from ro to rw enables restoring directly to the original file location: "-v /share:/share:rw".

For QNAP users the standard config should be:

docker run -d \
    --name=crashplan-pro-for-QNAP \
    -e USER_ID=0 \
    -e GROUP_ID=0 \
    -p 5800:5800 \
    -p 5900:5900 \
    -v /share/Container/appdata/crashplan-pro:/config:rw \
    -v /share:/share:rw \
    jlesage/crashplan-pro
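
As a quick sanity check (a minimal sketch, assuming the container name from the command above; <NAS_IP> is a placeholder), the container can be verified from an SSH session and the UI reached on port 5800:

docker ps --filter name=crashplan-pro-for-QNAP   # the container should be listed as "Up"
docker logs crashplan-pro-for-QNAP               # print the container's log output
# Web UI: http://<NAS_IP>:5800, VNC: <NAS_IP>:5900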

Maybe there is a place in your readme to share this information to make it easier for other QNAP users. If not, just close this ticket.

Thank you for the project.

johnripper1 avatar Nov 30 '17 18:11 johnripper1

Thanks for the info. Any reason why /share is not mapped to /storage? /storage is an already-defined volume of the container.

-v /share:/storage:rw
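
For illustration, a minimal sketch of the same run command with the pre-defined /storage volume used instead of the /share:/share mapping (all other values taken from the command above):

docker run -d \
    --name=crashplan-pro-for-QNAP \
    -e USER_ID=0 \
    -e GROUP_ID=0 \
    -p 5800:5800 \
    -p 5900:5900 \
    -v /share/Container/appdata/crashplan-pro:/config:rw \
    -v /share:/storage:rw \
    jlesage/crashplan-pro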

jlesage avatar Nov 30 '17 18:11 jlesage

I use "-v /share:/share:rw " because the (user)data on QNAP devices actually lives on "share/CE_CACHEDEV1_DATA/folders"*. Therefore I keep it in sync with the file system on device itself and to an old system.

For a new installation it doesn't matter, apart from the fact that the Linux file system has your files under /share while the CrashPlan installation sees them under /storage.

But if you come from an old system, you need to adjust your backup sets. If you add the new paths ("storage/CE_CACHEDEV1_DATA/folder") and remove the old ones ("share/CE_CACHEDEV1_DATA/folders"), you might lose old versions and deleted files with the next archive maintenance.

*) CE_CACHEDEV1_DATA might also be CACHEDEV1_DATA or MD0_DATA.
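
To check which data volume name a given QNAP uses, a minimal sketch (plain shell from an SSH session, assuming the usual /share layout):

ls -l /share/         # shared folders are symlinks into the data volume
ls -d /share/*_DATA   # e.g. CACHEDEV1_DATA, CE_CACHEDEV1_DATA or MD0_DATA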

johnripper1 avatar Nov 30 '17 22:11 johnripper1

Also, using USER_ID=0 and GROUP_ID=0 means that the container will run as root. Running things as root is generally not recommended... Is it because data on QNAP devices is owned by multiple users?
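
If the data happened to be owned by a single user, a minimal sketch of how to find the IDs to pass instead of 0 (standard shell commands; "backupuser" is a placeholder):

id -u backupuser    # numeric user ID for USER_ID
id -g backupuser    # numeric group ID for GROUP_ID
ls -ln /share       # numeric owner/group of the top-level shares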

jlesage avatar Dec 01 '17 01:12 jlesage

I have more than one user. Without root, I can't access the users' data folders if the owner is not admin (which is root on QNAP devices).

johnripper1 avatar Dec 01 '17 08:12 johnripper1

I had this problem (USER_ID) on a Synology (I also had the "not 127.0.0.1" issue when connecting to the engine on reboot, as per the other thread) and tried several things to fix it, but running with an ID of 0 was the only way to "see" my files inside the container.

Bibbleq avatar Dec 05 '17 09:12 Bibbleq

According to the main page, the parameter CRASHPLAN_SRV_MAX_MEM defaults to 1024M. If it wasn't supplied in the command as listed above, can it be modified at a later time? If not, can the container be removed and redeployed without losing our CrashPlan install and having to perform a resync?

tribunal88 avatar Dec 18 '17 19:12 tribunal88

All persistent data is saved in the appdata folder, so you can remove the container and re-run it with an additional parameter without problems. Nothing will be lost.
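
Concretely, a minimal sketch of that procedure (container name and paths follow johnripper1's command above; the memory value is only an example):

# Stop and remove the existing container; the /config volume keeps all state
docker stop crashplan-pro-for-QNAP
docker rm crashplan-pro-for-QNAP

# Re-create it with the additional memory setting
docker run -d \
    --name=crashplan-pro-for-QNAP \
    -e USER_ID=0 \
    -e GROUP_ID=0 \
    -e CRASHPLAN_SRV_MAX_MEM=2048M \
    -p 5800:5800 \
    -p 5900:5900 \
    -v /share/Container/appdata/crashplan-pro:/config:rw \
    -v /share:/share:rw \
    jlesage/crashplan-pro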

jlesage avatar Dec 18 '17 19:12 jlesage

I can fully confirm johnripper1's findings. On a QNAP, you need to map the /share path to the /storage container folder, or use multiple mappings as I demonstrate below. In my first container configuration I missed the /share:/share mapping and only mapped /share/CACHEDEV1_DATA to the /storage container folder. This caused CrashPlan to display /share with a "Missing files" alert, and the online CrashPlan configuration added a bunch of new /storage/... paths alongside all the existing /share/... paths. That did not seem right, even with deduplication.

After making the changes suggested by johnripper1, the /storage container folder was still present, so I tried deleting the container, the image, and even the CrashPlan config folder, and started over. No luck. So far I am stuck with the /storage container folder, so I ended up simply creating mappings for both the /storage and /share container folders and just ignoring /storage.

Lastly, I deselected everything under the /storage branch in the backup selection and only kept folders and files under /share selected.

Now the online CrashPlan backup is back to normal, with the backup continuing from the point it had reached in the last backup :-)

In the end, I used the following process to create the container on the QNAP (via PuTTY):

mkdir -p /share/CACHEDEV1_DATA/Virtualize/Containers/appdata/crashplan-pro/config

docker pull jlesage/crashplan-pro

docker run -d \
	--name=CrashplanPro \
	-e USER_ID=0 \
	-e GROUP_ID=0 \
	-p 5800:5800 \
	-p 5900:5900 \
	-e TZ=Europe/Copenhagen \
	-e CRASHPLAN_SRV_MAX_MEM=5120m \
	-v /share/CACHEDEV1_DATA/Virtualize/Containers/appdata/crashplan-pro/config:/config:rw \
	-v /share/CACHEDEV1_DATA:/storage:ro \
	-v /share:/share:rw \
	jlesage/crashplan-pro

Thanks for publishing the container and documentation, jlesage, much appreciated!

jakobon avatar Jan 10 '18 20:01 jakobon

Hello,

I have opened a new issue: https://github.com/jlesage/docker-crashplan-pro/issues/41 because of a problem on my QNAP where CrashPlan exceeds inotify's max watch limit.

It would be great if someone could take a look into it.
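
For reference, a minimal sketch of how to inspect and temporarily raise the watch limit (generic Linux sysctl commands run as root on the NAS; the value is only an example):

sysctl fs.inotify.max_user_watches              # show the current limit
sysctl -w fs.inotify.max_user_watches=1048576   # raise it until the next reboot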

Thank you!

AlexMihe avatar Jan 17 '18 10:01 AlexMihe

I think this is resolved... no longer waiting. I started again following @johnripper1's thread and it seems to be good now.

LeeTaylorX12 avatar Feb 15 '18 13:02 LeeTaylorX12