firestore-backup-restore
GoogleError, Timeout in version 1.0.0
I had an older version of this package, v0.10.0, and I made a backup with it (it took a couple of hours but worked; the database is quite large). I then upgraded to v1.0.0 and got the following errors:
GoogleError: Total timeout of API google.firestore.v1.Firestore exceeded 60000 milliseconds before any response was received.
at repeat (/path/to/project/node_modules/google-gax/build/src/normalCalls/retries.js:66:31)
at Timeout._onTimeout (/path/to/project/node_modules/google-gax/build/src/normalCalls/retries.js:101:25)
at listOnTimeout (node:internal/timers:559:17)
at processTimers (node:internal/timers:502:7) {
code: 4
}
Error: Error: Total timeout of API google.firestore.v1.Firestore exceeded 60000 milliseconds before any response was received.
at /path/to/project/node_modules/firestore-export-import/dist/export.js:24:1740
at step (/path/to/project/node_modules/firestore-export-import/dist/export.js:13:9)
at Object.throw (/path/to/project/node_modules/firestore-export-import/dist/export.js:7:402)
at rejected (/path/to/project/node_modules/firestore-export-import/dist/export.js:5:53)
at runMicrotasks (<anonymous>)
at runNextTicks (node:internal/process/task_queues:61:5)
at processTimers (node:internal/timers:499:9)
I ran it twice; the following was the error from the first run:
Error: 1 CANCELLED: Call cancelled
at Object.callErrorFromStatus (/path/to/project/node_modules/@grpc/grpc-js/build/src/call.js:31:26)
at Object.onReceiveStatus (/path/to/project/node_modules/@grpc/grpc-js/build/src/client.js:180:52)
at Object.onReceiveStatus (/path/to/project/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:365:141)
at Object.onReceiveStatus (/path/to/project/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:328:181)
at /path/to/project/node_modules/@grpc/grpc-js/build/src/call-stream.js:187:78
at processTicksAndRejections (node:internal/process/task_queues:78:11) {
code: 1,
details: 'Call cancelled',
metadata: Metadata { internalRepr: Map(0) {}, options: {} },
note: 'Exception occurred in retry method that was not classified as transient'
}
Then I reverted to the older version, v0.10.0, and it works fine again.
I used the same Node version and the same setup; the only thing that changed was the package version.
Hi @janoshr, I just updated the version of google-gax. Can you try it again in v1.0.1? I saw a potential fix for retry & timeout on their end:
https://github.com/googleapis/gax-nodejs/pull/1100
Unfortunately, the issue persists:
Error: 1 CANCELLED: Call cancelled
at Object.callErrorFromStatus (/path/to/project/node_modules/@grpc/grpc-js/build/src/call.js:31:26)
at Object.onReceiveStatus (/path/to/project/node_modules/@grpc/grpc-js/build/src/client.js:180:52)
at Object.onReceiveStatus (/path/to/project/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:365:141)
at Object.onReceiveStatus (/path/to/project/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:328:181)
at /path/to/project/node_modules/@grpc/grpc-js/build/src/call-stream.js:187:78
at processTicksAndRejections (node:internal/process/task_queues:78:11) {
code: 1,
details: 'Call cancelled',
metadata: Metadata { internalRepr: Map(0) {}, options: {} },
note: 'Exception occurred in retry method that was not classified as transient'
}
Error: Error: 1 CANCELLED: Call cancelled
at /path/to/project/node_modules/firestore-export-import/dist/export.js:24:1740
at step (/path/to/project/node_modules/firestore-export-import/dist/export.js:13:9)
at Object.throw (/path/to/project/node_modules/firestore-export-import/dist/export.js:7:402)
at rejected (/path/to/project/node_modules/firestore-export-import/dist/export.js:5:53)
at runMicrotasks (<anonymous>)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
You don't see this error anymore?
Total timeout of API google.firestore.v1.Firestore exceeded 60000 milliseconds before any response was received
In the meantime, I think you can use the old version until this one is fixed.
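If you need that workaround, pinning back to the last known-good release is a one-liner (v0.10.0 is the version reported working above; adjust if a different old release works for you):

npm install firestore-export-import@0.10.0 --save-exact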
I was running into the same thing, and adding google-gax to my project fixed the issue.
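For anyone else trying this, it just means declaring google-gax as a direct dependency, presumably so npm resolves a newer build than the one pulled in transitively; which exact version you end up with matters, as discussed below:

npm install google-gax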
Hi @mijiapps, which version of google-gax did you install?
@dalenguyen [email protected].
Thanks, I upgraded the package with [email protected] in v1.1.0. Hopefully, that solves this issue.
@janoshr Same problem here. Have you managed to fix it?
@bruno-machado I reverted; it's not the best solution, but it works for now.
@janoshr What version? V0.30?
@bruno-machado v0.10.0
This still happens in version v1.35. How can I fix it?
Also getting the same error on v1.35.
It seems related to the size of the collections' data: specifying only one small collection in the call to the "backups" function, instead of many big collections, avoids the issue.
Also, the issue doesn't occur when running the same function locally using functions-framework.
I tried adding "google-gax" to package.json, but the error still triggers. Has anyone else made progress on this? @yamauchieduardo @dalenguyen?
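For reference, the single-collection workaround looks roughly like this; the initializeFirebaseApp/backups names and signatures follow my reading of the package README and may differ between versions, and the collection names are placeholders:

const { initializeFirebaseApp, backups } = require('firestore-export-import');
const serviceAccount = require('./serviceAccount.json'); // hypothetical path

const firestore = initializeFirebaseApp(serviceAccount);

// Exporting one small collection per call completes, whereas asking for many
// large collections in a single backups() call is what hits the 60s total timeout.
async function backupOneAtATime(collectionNames) {
  const result = {};
  for (const name of collectionNames) {
    const data = await backups(firestore, [name]);
    Object.assign(result, data);
  }
  return result;
}

backupOneAtATime(['smallCollection']).then(console.log).catch(console.error);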
@DavenMartel Could you please provide:
- snapshot
- OS
- Node version
@dalenguyen
Sure, it works just fine on:
- macOS 12.5.1
- node v17.7.1
But it won't run once the function has been deployed on:
- GCP Cloud Functions gen2
Also, the error only happens when trying to run the export, not when the app gets initialized.
Thanks, @DavenMartel. I will check it this weekend.
Hi @DavenMartel, your zip file contains a service_account file. I deleted the link from your comment. However, you should disable that service account ASAP.
Hi @DavenMartel, just tried, and it's working for me.
Deploy script
gcloud functions deploy scheduledFirestoreExportJSON --trigger-http --gen2 --region=us-central1 --runtime=nodejs16 --entry-point=scheduledFirestoreExportJSON --project YOUR_PROJECT_ID
Dependencies
"dependencies": {
"@google-cloud/functions-framework": "^3.2.0",
"@google-cloud/storage": "^6.10.0",
"firestore-export-import": "^1.3.6",
"google-gax": "^3.6.0"
},
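A minimal sketch of an entry point that would match this deploy command and these dependencies; the bucket name, credential handling, collection list, and the firestore-export-import signatures are placeholders you'd adjust, not the exact code that was deployed:

const functions = require('@google-cloud/functions-framework');
const { Storage } = require('@google-cloud/storage');
const { initializeFirebaseApp, backups } = require('firestore-export-import');

const serviceAccount = require('./serviceAccount.json'); // hypothetical path; prefer default credentials or Secret Manager in production
const firestore = initializeFirebaseApp(serviceAccount);
const storage = new Storage();
const BUCKET = 'my-firestore-backups'; // placeholder bucket name

// HTTP entry point matching the --entry-point flag in the deploy command above.
functions.http('scheduledFirestoreExportJSON', async (req, res) => {
  try {
    // Export the selected collections as JSON and write the result to GCS.
    const data = await backups(firestore, ['users']); // collection list is an example
    const fileName = `backup-${new Date().toISOString()}.json`;
    await storage.bucket(BUCKET).file(fileName).save(JSON.stringify(data));
    res.status(200).send(`Backup written to gs://${BUCKET}/${fileName}`);
  } catch (err) {
    console.error(err);
    res.status(500).send(err.message);
  }
});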
@dalenguyen Thanks a lot for the heads-up and the removal. I swapped the file just after uploading it but didn't realize it stayed in the history. We've rotated the credentials, and thankfully this wasn't prod data.
As for the issue here, I ended up allocating more CPU and memory resources to the associated Cloud Run container, and that solved it for me. I've seen in other threads that it might be related to ECONNRESET and/or unhandled promises, but I haven't found a clear solution.
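In case it helps anyone else: gen2 functions run on Cloud Run, so the limits can be raised either with the --memory/--cpu flags on gcloud functions deploy or directly on the underlying Cloud Run service (whose name normally matches the lowercased function name). The values below are only examples, not the ones I used:

gcloud run services update scheduledfirestoreexportjson --region=us-central1 --memory=1Gi --cpu=1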
I'm glad that you solved this issue 🙌
Hey, it looks like I'm facing the same timeout issue. I'm running the code from a Cloud Build pipeline with 32 CPUs and 32 GB of RAM, so it's definitely not a resource issue.
I'm using the latest version of the library, but I'm trying to back up a pretty big collection.
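One thing I'm considering as a workaround, independent of this package, is paging through the large collection with the Admin SDK and assembling the export in chunks, so no single streaming query has to stay open long enough to hit the client-side total timeout. A rough sketch (collection name and page size are placeholders):

const admin = require('firebase-admin');
admin.initializeApp(); // uses application default credentials

const db = admin.firestore();

// Read a large collection page by page instead of in one long-running query.
async function exportInPages(collectionName, pageSize = 500) {
  const docs = {};
  let last = null;
  for (;;) {
    let query = db.collection(collectionName)
      .orderBy(admin.firestore.FieldPath.documentId())
      .limit(pageSize);
    if (last) query = query.startAfter(last);
    const snap = await query.get();
    if (snap.empty) break;
    snap.docs.forEach((d) => { docs[d.id] = d.data(); });
    last = snap.docs[snap.docs.length - 1];
  }
  return docs;
}

exportInPages('bigCollection').then((d) => console.log(Object.keys(d).length)).catch(console.error);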