[Bug]: Apple File Provider with bigger files result in NSFileProviderErrorDomain (-2005)
⚠️ Before submitting, please verify the following: ⚠️
- [X] This is a bug, not a question or a configuration issue.
- [X] This issue is not already reported on Github (I've searched it).
- [X] Nextcloud Server and Desktop Client are up to date. See Server Maintenance and Release Schedule and Desktop Releases for supported versions.
- [X] I agree to follow Nextcloud's Code of Conduct
Bug description
When adding a larger file (in my case about 1 GB in size) through the Apple File Provider, the file immediately gets a cloud symbol with an exclamation mark and the error "NSFileProviderErrorDomain error -2005".
It works flawlessly when syncing or manually uploading the file.
Steps to reproduce
- Enable Apple File Provider
- Drag file in file provider
Expected behavior
The file should be uploaded.
Which files are affected by this bug
Sorbet_Plus.zip
Operating system
Mac OS
Which version of the operating system you are running.
13.4
Package
Appimage
Nextcloud Server version
26.0.2
Nextcloud Desktop Client version
3.9-RC1
Is this bug present after an update or on a fresh install?
Fresh desktop client install
Are you using the Nextcloud Server Encryption module?
Encryption is Enabled
Are you using an external user-backend?
- [ ] Default internal user-backend
- [ ] LDAP/ Active Directory
- [ ] SSO - SAML
- [ ] Other
Nextcloud Server logs
The log doesn't contain anything.
Additional info
No response
Hi, more information is needed in order to debug this -- could you provide some logs?
The file provider module uses Apple’s unified logging system, so you can extract relevant logs using the Console app or by using the log utility from the terminal. To make it easier to find the Nextcloud File Provider related logs, filter by process FileProviderExt and by subsystem com.nextcloud.cloud.FileProviderExt; if you're using the Console app, make sure to check "Include info messages" and "Include debug messages".
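For reference, the same filter can be applied from the terminal with the `log` utility (macOS only; the one-hour window is just an example):

```shell
# Show recent logs from the Nextcloud File Provider extension,
# including info- and debug-level messages.
log show --last 1h --info --debug \
  --predicate 'process == "FileProviderExt" AND subsystem == "com.nextcloud.cloud.FileProviderExt"'
```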


The error is simply "file too large":
standard 18:42:16.200067+0200 fileproviderd ┳110e1b ✅ done executing <FS1 ✅ fetch-content(docID(450698)) why:itemChangedRemotely sched:utility#1685205736.1839154> → <item:<s:docID(450698) p:root n:"S{9}s.zip" doc sz:1420777599 m:rw- ct:1684362621.4011796 mt:1684362927.5042572 qtn v:sver:root/S{9}s.zip cver:104558555@1:sz:1420777599> content:fid(104558557) unchanged:false>
standard 18:42:16.200746+0200 fileproviderd ✍️ persist job: <J1 ⏳ create-item(propagated:<docID(450698) dbver:0 domver:<nil>>) why:itemChangedRemotely|contentUpdate sched:utility#1685205736.1839154 ⧗persisted>
standard 18:42:16.200789+0200 fileproviderd ┗110e1b
standard 18:42:16.209678+0200 FileProviderExt container_create_or_lookup_app_group_path_by_app_group_identifier: success
standard 18:42:16.210041+0200 FileProviderExt (501) Adopting Voucher for accountID:F82E00EE-8357-4165-84A9-100076B145A4
standard 18:42:16.210194+0200 FileProviderExt (501) No Cached Copy of voucher for Account:F82E00EE-8357-4165-84A9-100076B145A4, generating one from usermanagerd
standard 18:42:16.210248+0200 FileProviderExt (501) kernel voucher port is :206927
standard 18:42:16.211819+0200 FileProviderExt (501) retrieveReplacementVoucherFor failed with error:Error Domain=NSPOSIXErrorDomain Code=2 "No such file or directory"
standard 18:42:16.211884+0200 FileProviderExt (501) Adopting Voucher for accountID:F82E00EE-8357-4165-84A9-100076B145A4
standard 18:42:16.211909+0200 FileProviderExt (501) No Cached Copy of voucher for Account:F82E00EE-8357-4165-84A9-100076B145A4, generating one from usermanagerd
standard 18:42:16.211924+0200 FileProviderExt (501) kernel voucher port is :206931
standard 18:42:16.212511+0200 FileProviderExt (501) retrieveReplacementVoucherFor failed with error:Error Domain=NSPOSIXErrorDomain Code=2 "No such file or directory"
standard 18:42:16.215545+0200 FileProviderExt [WARNING] <private> must be called with a task in suspended (1) state, but task <private> has state 0. NSFileProviderManager will suspend the task and resume it again to work around this. To avoid this warning, resume the task from the completion handler.
standard 18:42:16.215542+0200 FileProviderExt Task <C3620D76-DA69-4D15-AD2F-DC1CFCA1435C>.<1602> resuming, timeouts(60.0, 604800.0) QOS(0x11) Voucher <private>
standard 18:42:16.215589+0200 FileProviderExt (501) Adopting Voucher for accountID:F82E00EE-8357-4165-84A9-100076B145A4
standard 18:42:16.215607+0200 FileProviderExt (501) No Cached Copy of voucher for Account:F82E00EE-8357-4165-84A9-100076B145A4, generating one from usermanagerd
standard 18:42:16.215626+0200 FileProviderExt (501) kernel voucher port is :327831
standard 18:42:16.215954+0200 FileProviderExt [Telemetry]: Activity <nw_activity 12:2[FD6FE0D5-B040-4610-A79E-38723B38EF85] (reporting strategy default)> on Task <C3620D76-DA69-4D15-AD2F-DC1CFCA1435C>.<1602> was not selected for reporting
standard 18:42:16.216101+0200 FileProviderExt (501) retrieveReplacementVoucherFor failed with error:Error Domain=NSPOSIXErrorDomain Code=2 "No such file or directory"
standard 18:42:16.216516+0200 FileProviderExt Task <C3620D76-DA69-4D15-AD2F-DC1CFCA1435C>.<1602> {strength 1, tls 8, sub 0, sig 0, ciphers 1, bundle 0, builtin 0}
standard 18:42:16.216566+0200 FileProviderExt [C112] event: client:connection_reused @1602.224s
standard 18:42:16.216906+0200 FileProviderExt Task <C3620D76-DA69-4D15-AD2F-DC1CFCA1435C>.<1602> now using Connection 112
standard 18:42:16.233359+0200 fileproviderd [NOTICE] ⏱ com.apple.fileprovider.indexing: new watcher registered for c{51}m.dev
standard 18:42:16.233402+0200 fileproviderd [NOTICE] ⏱ com.apple.fileprovider.indexing: registering xpc_activity
standard 18:42:16.243324+0200 FileProviderExt Task <C3620D76-DA69-4D15-AD2F-DC1CFCA1435C>.<1602> received response, status 413 content K
standard 18:42:16.243359+0200 FileProviderExt Task <C3620D76-DA69-4D15-AD2F-DC1CFCA1435C>.<1602> done using Connection 112
standard 18:42:16.243396+0200 FileProviderExt [C112] event: client:connection_idle @1602.251s
standard 18:42:16.243530+0200 FileProviderExt Task <C3620D76-DA69-4D15-AD2F-DC1CFCA1435C>.<1602> response ended
standard 18:42:16.243880+0200 FileProviderExt Task <C3620D76-DA69-4D15-AD2F-DC1CFCA1435C>.<1602> summary for task success {transaction_duration_ms=27, response_status=413, connection=112, reused=1, request_start_ms=0, request_duration_ms=0, response_start_ms=26, response_duration_ms=0, request_bytes=65733, response_bytes=289, cache_hit=false}
standard 18:42:16.243985+0200 FileProviderExt Task <C3620D76-DA69-4D15-AD2F-DC1CFCA1435C>.<1602> finished successfully
error 18:42:16.245987+0200 FileProviderExt Could not upload item with filename: Sorbet_Plus.zip, received error: The file is too large
Is this issue still relevant using 3.13.0-macOS-vfs?
Yes, it is. I am experiencing this issue too on MacOS Sonoma with 3.13.0. However it's interesting to see that the file is however actually uploaded to the server, but it shows an error on Finder.
Same here. I'm on desktop version 3.13.2-macOS-vfs; the server is on version 28.0.7. The file is uploaded to the server (I can see it in the network usage on the Mac and on the server), but after the upload finishes, Finder reports the error (NSFileProviderErrorDomain error -2005) and the file is not displayed in the cloud (web interface).
Looks like there's a limit on the request size in the PHP ini and/or in the web server configuration; the stricter of the two applies. Uploads from the macOS-vfs client are not done using chunked I/O yet, from what I can tell, so it's important to check your request limits on the server side (request size and timeouts too). Hope this helps, cheers!
I'm assuming the file won't show up in Nextcloud Files web interface either...
I changed the PHP memory limit and now I don't get this error anymore. But when I try to upload a file > 1 GB, I get the error:
BadRequest Expected filesize of 2048000000 bytes but read (from Nextcloud client) and wrote (to Nextcloud storage) 1161461760 bytes. Could either be a network problem on the sending side or a problem writing to the storage on the server side.
I have tried a lot of different configs with different memory and upload sizes. Apache is also configured with an unlimited LimitRequestBody, and my nginx reverse proxy should be configured correctly with client_max_body_size 0;.
It works when I upload the file via the web interface (probably because the file gets chunked?).
Do you have any idea what could be wrong?
Timeouts are just as important as sizes. A snippet from my nginx conf:

```
client_max_body_size 0;
proxy_buffering off;
proxy_redirect off;
proxy_connect_timeout 1000;
proxy_read_timeout 1000;
proxy_send_timeout 1000;
```
Hope this helps. Regards!
There are other important things like buffering that get in the way when you allow very big uploads. Every site has its own needs and there's no one-size-fits-all configuration. In general I'd suggest making sure you're up to date with the server install documentation: https://docs.nextcloud.com/server/latest/admin_manual/installation/nginx.html
The PHP ini also includes upload_max_filesize and post_max_size; I've set them to high values.
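For reference, the relevant php.ini directives look like this (the values below are purely illustrative, not a recommendation):

```ini
; Maximum size of a single uploaded file
upload_max_filesize = 16G
; Maximum size of the whole request body; should be >= upload_max_filesize
post_max_size = 16G
; Script memory limit; some setups raise this as well
memory_limit = 512M
```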
I have a feeling that there is a timeout of 60s somewhere, but I can't find it. I've tried increasing every timeout that could have been too low, but it doesn't work. This is probably because the file is not being chunked, because over the web interface and the Nautilus WebDAV integration on my Linux machine, uploading a file >1 GB works just fine.
I'm still having this issue, has anyone managed to find a solution yet?
I have now implemented chunked uploads on NextcloudFileProviderKit, which should avoid hitting the memory limit and max upload size limit (the default chunk size is 100MB like on other clients).
The changes should be effective in NCFPK 2.0, which we will ship with desktop client version 3.16.0.
Thanks for the reports!
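For context, chunked uploading splits the file into fixed-size parts that are uploaded individually and reassembled on the server, so no single request exceeds the size or timeout limits discussed above. The splitting step can be illustrated with coreutils `split` (a rough illustration only, not NCFPK's actual implementation; sizes are scaled down from the real ~100 MB chunks):

```shell
# Demonstrate the idea at small scale: a 25 MiB file split into
# 10 MiB chunks, which can later be reassembled losslessly.
dd if=/dev/zero of=bigfile.bin bs=1M count=25 2>/dev/null
split -b 10M bigfile.bin chunk_
ls chunk_*   # three chunks: 10M, 10M, 5M
```

Concatenating the chunks in order (`cat chunk_* > rejoined.bin`) reproduces the original file byte for byte, which is what the server-side assembly step does after the last chunk arrives.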
@claucambra Thank you for the correction, awaiting the update to 3.16, have a great day!
I'm still experiencing the issue with 3.17.2
I described the details in this post: https://help.nextcloud.com/t/error-2005-when-uploading-bigger-files/232368
@drybx You should probably open a separate issue since this one has been addressed. Also, more details will be needed, specifically: https://github.com/nextcloud/desktop/issues/5737#issuecomment-1565539136