docker-crashplan-pro
Update has stopped backup
I had to update to the latest Docker image for CrashPlan PRO, which is 8.5, because prior to that the Docker image had stopped doing any backups. I updated the image and had to adopt my existing backup, as the update was treated as a new install. Everything looked like it was working well, and for the last several days it appeared to be verifying the data between the NAS and CrashPlan. However, all backups have stopped.
On the CrashPlan server, the log shows this on startup:
10/24/20 12:20AM [MYNAS 2020_10] Scanning for files completed in 1 day: 2,126,088 files (4.50TB) found
10/24/20 03:00AM [MYNAS 2020_10] Scanning for files to back up
10/24/20 05:46AM [MYNAS 2020_10] Starting backup to CrashPlan Central: 265,023 files (1.10TB) to back up
10/24/20 05:47AM [MYNAS 2020_10] Scanning for files stopped
10/24/20 05:47AM [MYNAS 2020_10] Scanning for files to back up
10/24/20 05:47AM [MYNAS 2020_10] Stopped backup to CrashPlan Central in 0h:00m:06s: 0 files (0MB) backed up, 0MB encrypted and sent
10/24/20 05:47AM - Reason for stopping backup: Full filesystem scan started.
And then for the next three days, all that shows is this repeated hundreds of times:
10/27/20 06:16PM Code42 started, version 8.5.0, GUID [REDACTED]
10/27/20 06:16PM [MYNAS 2020_10] Scanning for files to back up
10/27/20 06:16PM [MYNAS 2020_10] Starting backup to CrashPlan Central: 3 files (0MB) to back up
The log from the Docker container shows this repeated for days:
[CrashPlanEngine] starting.... MAX_MEM=1976 USED_MEM=56
Repeated over and over. (The USED_MEM value does change, but doesn't appear to go above 170.)
First, if after the update your device is considered as new, you probably have not mapped /config to the host, or didn't map it to the same folder as before.
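For reference, a minimal run command with that mapping might look like the sketch below; the host path, container name, and the /storage mapping are placeholders to adapt to your setup:

```shell
# Sketch: keep /config on a host folder so the device identity
# survives image updates. "/path/to/crashplan/config" is a placeholder.
docker run -d \
    --name=crashplan-pro \
    -p 5800:5800 \
    -v /path/to/crashplan/config:/config:rw \
    -v /volume1:/storage:ro \
    jlesage/crashplan-pro
```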
Can you look at engine_error.log and engine_output.log (located under /config/log inside the container)?
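If it's easier, the logs can also be read from the host with something like this (the container name is a placeholder):

```shell
# Dump the engine logs from the running container.
docker exec crashplan-pro cat /config/log/engine_error.log
docker exec crashplan-pro cat /config/log/engine_output.log
```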
I think I'm seeing the same behavior as dontask2035. My upgrade was done automatically by Watchtower. My log is full of pairs of MAX_MEM=2048 and USED_MEM=49, 51, 48, or 74. The frequency varies: sometimes ~30 seconds between each pair of log entries, other times more than 60 minutes. I do not see a pattern, but there are log entries every hour throughout the 24-hour day, and it appears somewhat regular: 55 or 56 pairs of log entries per day.
CrashPlan does back up newly created files: I created a file and confirmed it was backed up.
My daily "CrashPlan for Small Business Backup Report" says:
Last backup activity: 52 minutes
Last completed backup: 19.9 days ago
Selected for backup: 1.4TB
I don't understand why it indicates "Last completed backup" was 19.9 days ago.
I believe the update to v8.5 was ~6 days ago (within a couple hours after you provided the update).
The two logs you requested should be attached.
NOTE: I did restart Crashplan a couple days ago, to see if that would fix the "Last completed backup" message, but it did not.
Thanks, Jocelyn.
I'm also seeing this same behavior. Let me know if you need any additional logs.
Any errors in service.log.0?
[10.27.20 17:40:08.368 INFO 1WeDftWkr169 ervice.peer.PeerVersionValidator] WE have an old version, localVersion=1525200006822, remoteVersion=1525200006860, remoteGuid=4200
[10.27.20 17:40:08.368 INFO 1WeDftWkr169 42.service.history.HistoryLogger] HISTORY:: Downloading a new version of Code42.
[10.27.20 17:40:08.382 INFO 1WeDftWkr169 .storage.UniversalStorageService] SM:: Saved ServiceModel of 41157 bytes to UDB in 13.58 ms.
[10.27.20 17:40:08.394 INFO 1WeDftWkr169 om.backup42.service.AppLogWriter] WRITE app.log in 12ms
[10.27.20 17:40:08.395 INFO 1WeDftWkr169 .service.upgrade.DownloadManager] UPGRADE:: DOWNLOAD:: DownloadManager already running, wakeup...; DownloadManager[patchDir = upgrade, patchFiles = [1525200006850 (2018-05-01T18:40:06:850+0000)], patchProblem = false]
[10.27.20 17:40:08.395 INFO 5417_DwldMgr .service.upgrade.DownloadManager] UPGRADE:: DOWNLOAD:: Downloading patch from https://download.code42.com/installs/agent/cloud/8.5.0/446/upgrade/linux/upgrade5/crashplansmb/1525200006850_446.jar?guid=855668565426245403; DownloadManager[patchDir = upgrade, patchFiles = [1525200006850 (2018-05-01T18:40:06:850+0000)], patchProblem = false]
[10.27.20 17:40:09.097 WARN 5417_DwldMgr .service.upgrade.DownloadManager] UPGRADE:: DOWNLOAD:: Unable to download patch. url=https://download.code42.com/installs/agent/cloud/8.5.0/446/upgrade/linux/upgrade5/crashplansmb/1525200006850_446.jar?guid=855668565426245403, java.io.IOException: Directory 'upgrade' could not be created; DownloadManager[patchDir = upgrade, patchFiles = [1525200006850 (2018-05-01T18:40:06:850+0000)], patchProblem = false], java.io.IOException: Directory 'upgrade' could not be created
STACKTRACE:: java.io.IOException: Directory 'upgrade' could not be created
at org.apache.commons.io.FileUtils.openOutputStream(FileUtils.java:356)
at org.apache.commons.io.FileUtils.openOutputStream(FileUtils.java:319)
at org.apache.commons.io.FileUtils.copyToFile(FileUtils.java:1552)
at org.apache.commons.io.FileUtils.copyInputStreamToFile(FileUtils.java:1528)
at org.apache.commons.io.FileUtils.copyURLToFile(FileUtils.java:1506)
at com.code42.net.WGet.download(WGet.java:77)
at com.backup42.service.CPService.downloadByHttp(CPService.java:2750)
at com.backup42.service.upgrade.DownloadManager.downloadPatch(DownloadManager.java:247)
at com.backup42.service.upgrade.DownloadManager.downloadPatches(DownloadManager.java:188)
at com.backup42.service.upgrade.DownloadManager.doWork(DownloadManager.java:134)
at com.code42.utils.AWorker.run(AWorker.java:153)
at java.base/java.lang.Thread.run(Unknown Source)
[10.27.20 17:40:09.097 WARN 5417_DwldMgr 42.service.history.HistoryLogger] HISTORY:: Download of upgrade failed - version 1525200006850, connection lost.
[10.27.20 17:40:09.110 INFO 5417_DwldMgr .storage.UniversalStorageService] SM:: Saved ServiceModel of 41130 bytes to UDB in 12.38 ms.
[10.27.20 17:40:09.111 WARN 5417_DwldMgr 42.service.history.HistoryLogger] HISTORY:: Unable to apply upgrade(s), retrying in one hour
That is the only obvious error.
CrashPlan wants to update itself. Looks like you need to upgrade to the latest Docker image...
I'm at the latest Docker image according to "docker pull"
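For what it's worth, `docker pull` only updates the local image; here is a sketch of how one might confirm the running container was actually created from that image (the container name is a placeholder):

```shell
# Compare the image ID the container was created from...
docker inspect --format '{{.Image}}' crashplan-pro
# ...with the ID of the latest pulled image.
docker images --no-trunc jlesage/crashplan-pro
```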
Yes, there are a few ERRORs and WARNs.
Here are the ERRORs:
[10.25.20 19:13:11.038 ERROR eduledSrvc-0 ] [ScheduledSrvc-0] INFO org.eclipse.jetty.server.AbstractConnector - Stopped ServerConnector@191a8997{SSL, (ssl, http/1.1)}{127.0.0.1:4244}
[10.25.20 19:13:11.129 ERROR eduledSrvc-0 ] [ScheduledSrvc-0] INFO org.eclipse.jetty.server.handler.ContextHandler - Stopped o.e.j.s.ServletContextHandler@10fa7b74{/,null,UNAVAILABLE}
[10.25.20 19:13:57.769 ERROR main ] [main] INFO org.mozilla.jss.CryptoManager - CryptoManager: loading JSS library
[10.25.20 19:13:57.926 ERROR main ] [main] INFO org.mozilla.jss.CryptoManager - CryptoManager: loaded JSS library from java.library.path
[10.25.20 19:13:57.929 ERROR main ] [main] INFO org.mozilla.jss.CryptoManager - CryptoManager: initializing NSS database at NotUsed
[10.25.20 19:14:22.496 ERROR main ] [main] INFO org.eclipse.jetty.util.log - Logging initialized @57203ms to org.eclipse.jetty.util.log.Slf4jLog
[10.25.20 19:14:46.831 ERROR main ] [main] INFO org.eclipse.jetty.server.Server - jetty-9.4.31.v20200723; built: 2020-07-23T17:57:36.812Z; git: 450ba27947e13e66baa8cd1ce7e85a4461cacc1d; jvm 11.0.8+10
[10.25.20 19:14:47.421 ERROR main ] [main] INFO org.eclipse.jetty.server.handler.ContextHandler - Started o.e.j.s.ServletContextHandler@58e9cc9a{/,null,AVAILABLE}
[10.25.20 19:14:47.452 ERROR main ] [main] INFO org.eclipse.jetty.util.ssl.SslContextFactory - x509=X509@7382ec67(servicecert,h=[],w=[]) for Server@5cf81e6e[provider=null,keyStore=null,trustStore=null]
[10.25.20 19:14:47.728 ERROR main ] [main] INFO org.eclipse.jetty.server.AbstractConnector - Started ServerConnector@6f7a7030{SSL, (ssl, http/1.1)}{127.0.0.1:4244}
[10.25.20 19:14:47.732 ERROR main ] [main] INFO org.eclipse.jetty.server.Server - Started @82442ms
Here are some of the WARNs:
[10.27.20 18:31:51.850 WARN inot-hand-2 ode42.jna.inotify.InotifyManager] Unable to remove watch for path /volume1/homes/@eaDir/@tmp/recycle.1603823511/admin, errno: 22
[10.27.20 18:31:51.852 WARN inot-hand-2 ode42.jna.inotify.InotifyManager] Unable to remove watch for path /volume1/homes/@eaDir/@tmp/recycle.1603823511/guest, errno: 22
[10.27.20 18:31:51.854 WARN inot-hand-2 ode42.jna.inotify.InotifyManager] Unable to remove watch for path /volume1/homes/@eaDir/@tmp/recycle.1603823511/julia, errno: 22
[10.27.20 18:31:51.862 WARN inot-hand-2 ode42.jna.inotify.InotifyManager] Unable to remove watch for path /volume1/homes/@eaDir/@tmp/recycle.1603823511/Server-PC, errno: 22
[10.27.20 18:31:51.864 WARN inot-hand-2 ode42.jna.inotify.InotifyManager] Unable to remove watch for path /volume1/homes/@eaDir/@tmp/recycle.1603823511/steve, errno: 22
[10.27.20 18:31:51.866 WARN inot-hand-2 ode42.jna.inotify.InotifyManager] Unable to remove watch for path /volume1/homes/@eaDir/@tmp/recycle.1603823511/StorageCraft, errno: 22
[10.27.20 18:31:51.868 WARN inot-hand-2 ode42.jna.inotify.InotifyManager] Unable to remove watch for path /volume1/homes/@eaDir/@tmp/recycle.1603823511, errno: 22
[10.27.20 22:21:01.014 WARN Thread-198 ode42.os.posix.PosixFileCommands] Problem in lchown() for file=/volume1/Steve/OneDrive/Downloads/Crashplan Test.txt, uid=1024, gid=100, Operation not permitted (null)
STACKTRACE:: Operation not permitted (null)
at jtux.UFile.lchown(Native Method)
at com.code42.os.posix.PosixFileCommands.lchown(PosixFileCommands.java:249)
at com.code42.os.linux.metadata.LinuxMetadataHandler.applyMetadata(LinuxMetadataHandler.java:64)
at com.code42.backup.handler.RestoreHandler.applyMetadata(RestoreHandler.java:611)
at com.code42.backup.handler.RestoreHandler.applyMetadata(RestoreHandler.java:548)
at com.code42.backup.handler.RestoreHandler.finishRestore(RestoreHandler.java:515)
at com.code42.backup.restore.BackupClientRestoreDelegate.processPendingFinishRestores(BackupClientRestoreDelegate.java:1282)
at com.code42.backup.restore.BackupClientRestoreDelegate$ProcessPendingRestoresThread.run(BackupClientRestoreDelegate.java:1426)
at java.base/java.lang.Thread.run(Unknown Source)
[10.27.20 23:00:42.968 WARN 796_ScanWrkr com.code42.backup.path.BackupSet] BSM:: SET-1: Given backup path does not exist - path=V3[cs]+/volume1/Backup/Steve-Note4/;w
[10.28.20 00:45:45.003 WARN BQTodoWkr-42 m.code42.backup.save.BackupQueue] BQ:: Access denied to file /volume1/photo/@eaDir/SYNOPHOTO_SMART_ALBUM_CONFIG; BC[608310129156686095>42, sameOwner=f, backupConnected=t, authorized=t, usingForBackup=t, backupNotReadyCode=null, backingUp=t, validating=f, closing=f, keepBlockState=0, con=2020-10-25T19:15:02:906, val=2020-10-28T00:45:29:031, readyCheckTime=2020-10-27T19:17:33:913, MM[BT 608310129156686095>42: openCount=2, initialized = true, dataFiles.open = true, /usr/local/crashplan/cache/42], session=977953487777052962, hosted=t, hostedCloud=t, replacing=f, selectedForBackup=t, selectedForRestore=f, validationNeeded=f, backupUsageTime=2020-10-28T00:45:29:052, cacheMaintenanceState=null, restoreUtil=BackupClientRestoreDelegate [restoreControl=RestoreReceiveControl@2072849202[ , hasRestoreEnv=false, hasRestoreJob=false ], selectedForRestore=false, remoteRestoreStats=RestoreStats[BT sourceId = 608310129156686095, targetId = 42, restoreId = 978262114832365858, selectedForRestore = false, restoring = false, completed = true, completedTimeInMillis = 1603837262026, stopwatch = 2hr 24min, numFilesToRestore = 1, numBytesToRestore = 8, numFilesRestored = 1, numBytesRestored = 8, %complete = 100.00%, receiveRateInBytesPerSec(B/s) = 293.000, sendRateInBytesPerSec(B/s) = 537.000, estimatedTimeRemaining = 0.000, fileNumBytesToRestore = 0, fileNumBytesRestored = 0, %completeCurrentFile = 100.00%, numSessionFilesRestored = 1, numSessionBytesRestored = 8, problemCount = 0], pendingRestoresCount=0], BackupQueue[608310129156686095>42, running=t, #tasks=0, sets=[BackupFileTodoSet[backupSetId=1, guid=42, doneLoadingFiles=t, doneLoadingTasks=f, FileTodoSet@1869362708[ path = /usr/local/crashplan/cache/cpft1_42, closed = false, dataSize = 1282, headerSize = 0], numTodos = 1, numBytes = 508]]], env=BackupEnv[envTime = 1603845929052, near = false, todoSharedMemory = SharedMemory[b.length = 2359296, allocIndex = -1, freeIndex = 0, closed = false, waitingAllocLength = 0], taskSharedMemory = SharedMemory[b.length = 2359296, allocIndex = -1, freeIndex = 0, closed = false, waitingAllocLength = 0]], TodoWorker@561038317[ threadName = BQTodoWkr-42, stopped = false, running = true, thread.isDaemon = false, thread.isAlive = true, thread = Thread[W1045370917_BQTodoWkr-42,5,main]], TaskWorker@1474535864[ threadName = BQTaskWrk-42, stopped = false, running = true, thread.isDaemon = false, thread.isAlive = true, thread = Thread[W408375652_BQTaskWrk-42,5,main]]] ]
[10.28.20 00:45:45.012 WARN BQTaskWrk-42 com.code42.backup.BackupClientV1] BC[608310129156686095>42:: Backup completed but 1 files todo; BC[608310129156686095>42, sameOwner=f, backupConnected=t, authorized=t, usingForBackup=t, backupNotReadyCode=null, backingUp=f, validating=f, closing=f, keepBlockState=0, con=2020-10-25T19:15:02:906, val=2020-10-28T00:45:29:031, readyCheckTime=2020-10-27T19:17:33:913, MM[BT 608310129156686095>42: openCount=2, initialized = true, dataFiles.open = true, /usr/local/crashplan/cache/42], session=977953487777052962, hosted=t, hostedCloud=t, replacing=f, selectedForBackup=t, selectedForRestore=f, validationNeeded=f, backupUsageTime=2020-10-28T00:45:29:052, cacheMaintenanceState=null, restoreUtil=BackupClientRestoreDelegate [restoreControl=RestoreReceiveControl@2072849202[ , hasRestoreEnv=false, hasRestoreJob=false ], selectedForRestore=false, remoteRestoreStats=RestoreStats[BT sourceId = 608310129156686095, targetId = 42, restoreId = 978262114832365858, selectedForRestore = false, restoring = false, completed = true, completedTimeInMillis = 1603837262026, stopwatch = 2hr 24min, numFilesToRestore = 1, numBytesToRestore = 8, numFilesRestored = 1, numBytesRestored = 8, %complete = 100.00%, receiveRateInBytesPerSec(B/s) = 293.000, sendRateInBytesPerSec(B/s) = 537.000, estimatedTimeRemaining = 0.000, fileNumBytesToRestore = 0, fileNumBytesRestored = 0, %completeCurrentFile = 100.00%, numSessionFilesRestored = 1, numSessionBytesRestored = 8, problemCount = 0], pendingRestoresCount=0], BackupQueue[608310129156686095>42, running=t, #tasks=0, sets=[BackupFileTodoSet[backupSetId=1, guid=42, doneLoadingFiles=t, doneLoadingTasks=t, FileTodoSet@1869362708[ path = /usr/local/crashplan/cache/cpft1_42, closed = false, dataSize = 1282, headerSize = 0], numTodos = 1, numBytes = 508]]], env=BackupEnv[envTime = 1603845929052, near = false, todoSharedMemory = SharedMemory[b.length = 2359296, allocIndex = -1, freeIndex = 0, closed = false, waitingAllocLength = 0], taskSharedMemory = SharedMemory[b.length = 2359296, allocIndex = -1, freeIndex = 0, closed = false, waitingAllocLength = 0]], TodoWorker@561038317[ threadName = BQTodoWkr-42, stopped = false, running = true, thread.isDaemon = false, thread.isAlive = true, thread = Thread[W1045370917_BQTodoWkr-42,5,main]], TaskWorker@1474535864[ threadName = BQTaskWrk-42, stopped = false, running = true, thread.isDaemon = false, thread.isAlive = true, thread = Thread[W408375652_BQTaskWrk-42,5,main]]] ];
I hadn't mapped the /config folder, but I saw your answer to another user and made the adjustment this time. Hopefully, the next update will be a little cleaner (though adopting isn't that big of an issue for me).
Attached are the two logs you requested. engine_output.log engine_error.log
Looks like my issue was due to Docker user error: the image was updated, but I didn't destroy the old container and create a new one from the new image.
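For anyone else hitting this: pulling a new image does not update an existing container; the container has to be recreated from it. Roughly (names are placeholders, and the final run command should reuse your original options and volume mappings):

```shell
# Fetch the new image, then rebuild the container from it.
docker pull jlesage/crashplan-pro
docker stop crashplan-pro
docker rm crashplan-pro
docker run -d --name=crashplan-pro ...  # same -p/-v/-e options as before
```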
FYI: My latest "Your Backup Status Report" from CrashPlan now indicates "Last completed backup: 1.8 hrs ago." The only change I've made is the backup folder selection (I removed a folder), so I'm not concerned about this.
I still have dozens of MAX_MEM and USED_MEM log messages every day. But no other issues. Thanks.
@dontask2035, you should also check in service.log.0 for errors.
@SJLBoulder, it seems that CrashPlan doesn't have the proper permission to access all the files it needs to backup... Also, note that the latest image fixes the annoying log messages.
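If permissions do turn out to be the problem, one thing to try with this image is running the app as the user that owns the files, via the USER_ID/GROUP_ID environment variables. A sketch, with 1024/100 taken from the lchown warning above (verify them with `id` on the NAS):

```shell
# Sketch: run the container as the UID/GID that owns the backed-up files.
# 1024/100 come from the lchown warning above; confirm with `id <user>`.
docker run -d \
    --name=crashplan-pro \
    -e USER_ID=1024 \
    -e GROUP_ID=100 \
    -v /path/to/crashplan/config:/config:rw \
    -v /volume1:/storage:ro \
    jlesage/crashplan-pro
```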
These are some of the errors that are in service.log.0:
[03.11.20 18:59:02.065 INFO WeDftWkr1645 ging.direct.DirectNetworkChannel] Channel became inactive. closedBy=THIS_SIDE, reason='Session close', channel=DIRECT
[03.11.20 18:59:34.588 INFO DefaultGroup .code42.messaging.peer.PeerGroup] PG::DefaultGroup DONE Managing connected remote peers. numConnected=0, numFailedConnectedCheck=0, duration(ms)=0
[03.11.20 18:59:45.848 INFO Impl RUNNING .service.auth.ReauthorizeService] REAUTH:: SKIP reauthorize - not authenticated
[03.11.20 18:59:45.849 INFO Impl RUNNING rvice.auth.ReauthCustomScheduler] REAUTH:: Reauthorization scheduled for 5min 0sec from now.
[03.11.20 18:59:47.061 INFO WeDftWkr1651 up42.service.peer.PeerController] DENIED! Ignore connect, we are not authorized.
[03.11.20 18:59:47.061 INFO WeDftWkr1651 ging.direct.DirectNetworkChannel] Channel became inactive. closedBy=THIS_SIDE, reason='Session close', channel=DIRECT
[03.11.20 19:00:32.061 INFO WeDftWkr1657 up42.service.peer.PeerController] DENIED! Ignore connect, we are not authorized.
[03.11.20 19:00:32.061 INFO WeDftWkr1657 ging.direct.DirectNetworkChannel] Channel became inactive. closedBy=THIS_SIDE, reason='Session close', channel=DIRECT
[03.11.20 19:01:17.131 INFO WeDftWkr1663 up42.service.peer.PeerController] DENIED! Ignore connect, we are not authorized.
[03.11.20 19:01:17.131 INFO WeDftWkr1663 ging.direct.DirectNetworkChannel] Channel became inactive. closedBy=THIS_SIDE, reason='Session close', channel=DIRECT
[03.11.20 19:02:02.107 INFO WeDftWkr1669 up42.service.peer.PeerController] DENIED! Ignore connect, we are not authorized.
[03.11.20 19:02:02.129 INFO WeDftWkr1669 ging.direct.DirectNetworkChannel] Channel became inactive. closedBy=THIS_SIDE, reason='Session close', channel=DIRECT
[03.11.20 19:02:47.364 INFO WeDftWkr1675 up42.service.peer.PeerController] DENIED! Ignore connect, we are not authorized.
[03.11.20 19:02:47.399 INFO WeDftWkr1675 ging.direct.DirectNetworkChannel] Channel became inactive. closedBy=THIS_SIDE, reason='Session close', channel=DIRECT
[03.11.20 19:03:32.308 INFO WeDftWkr1681 up42.service.peer.PeerController] DENIED! Ignore connect, we are not authorized.
[03.11.20 19:03:32.308 INFO WeDftWkr1681 ging.direct.DirectNetworkChannel] Channel became inactive. closedBy=THIS_SIDE, reason='Session close', channel=DIRECT
[03.11.20 19:04:17.354 INFO WeDftWkr1687 up42.service.peer.PeerController] DENIED! Ignore connect, we are not authorized.
[03.11.20 19:04:17.355 INFO WeDftWkr1687 ging.direct.DirectNetworkChannel] Channel became inactive. closedBy=THIS_SIDE, reason='Session close', channel=DIRECT
Thanks, Jocelyn. Yes, with the new image, the annoying log messages are gone. The access/permission errors are caused by the way Microsoft OneDrive handles OneNote notebooks. In fact, seeing that error has inspired me to make some changes in how I use OneNote. Thanks for your help.
Are there any ideas on why it is not working?
Since the engine is restarting in a loop, you should see a repeating sequence of messages in service.log.0. Can you try to run a tail -f on the log file?
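From the host, something like this should work (the container name is a placeholder):

```shell
# Follow the service log in real time to catch the repeating sequence.
docker exec crashplan-pro tail -f /config/log/service.log.0
```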
You could also try to clear the cache as instructed here : https://github.com/jlesage/docker-crashplan-pro#device-status-is-waiting-for-connection
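That procedure boils down to roughly this (a sketch, assuming /config is mapped to a host folder as discussed above):

```shell
# Stop the container, clear the CrashPlan cache, then start it again.
docker stop crashplan-pro
rm -rf /path/to/crashplan/config/cache/*
docker start crashplan-pro
```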
I cleared the cache (it was empty anyway). Tail shows the same errors as in the service.log.0.
I can send you the service.log.0, but I am hesitant to put it on a public message board.
I am two weeks without a backup. Am I just SOL?
Maybe you can try to contact the CrashPlan support team with the errors you get from service.log.0. Just don't tell them you are running CP under Docker, else they will be quick to tell you it's not supported.