[Bug]: Excessive memory consumption for webdav: Request used more than 300 MB of RAM: 304.1 MB
⚠️ This issue respects the following points: ⚠️
- [x] This is a bug, not a question or a configuration/webserver/proxy issue.
- [x] This issue is not already reported on Github OR Nextcloud Community Forum (I've searched it).
- [x] Nextcloud Server is up to date. See Maintenance and Release Schedule for supported versions.
- [x] I agree to follow Nextcloud's Code of Conduct.
Bug description
Since upgrading to 31.0.5.1 (from 31.0.4.1) I get these:
Request used more than 300 MB of RAM: 304.1 MB
The underlying problem started somewhere in v30 (I had just decreased the max thread count to reduce contention after failing to identify any single clear root cause), but I did not see the backported warning before this upgrade.
Steps to reproduce
- install or upgrade to 31.0.5.1
- use python
requests(2.25.1+dfsg-2ubuntu0.1) to call PROPFIND on a non-empty folder (Depth: 1suffices) - observe excessive memory consumption (already the case earlier) and a warning (newly introduced)
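For reference, this is roughly what the listing request looks like; a hedged sketch, where the URL, credentials, and folder path are placeholders rather than values from the actual report:

```python
#!/usr/bin/env python3
# Hedged repro sketch: URL, credentials, and folder are placeholders.
import requests

BASE = "https://cloud.example.com"                    # hypothetical instance
USER, APP_PASSWORD = "alice", "app-password"          # placeholders
FOLDER = f"/remote.php/dav/files/{USER}/SomeFolder/"  # a non-empty folder

# Ask only for the resource type; Depth: 1 lists the folder's children.
body = """<?xml version="1.0" encoding="utf-8"?>
<d:propfind xmlns:d="DAV:">
  <d:prop><d:resourcetype/></d:prop>
</d:propfind>"""

resp = requests.request(
    "PROPFIND",
    BASE + FOLDER,
    data=body,
    headers={"Depth": "1", "Content-Type": "application/xml"},
    auth=(USER, APP_PASSWORD),
)
print(resp.status_code, len(resp.content))  # expect 207 Multi-Status
```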
Expected behavior
Clearly, listing a directory of 1,200 files should not need 300 MB. Also, the warning does not point to any commands that could be run to diagnose it once emitted, and I could not find specific references in the documentation either.
Nextcloud Server version
31
Operating system
Debian/Ubuntu
PHP engine version
PHP 8.1
Web server
Nginx
Database engine version
PostgreSQL
Is this bug present after an update or on a fresh install?
None
Are you using the Nextcloud Server Encryption module?
Encryption is Disabled
What user-backends are you using?
- [ ] Default user-backend (database)
- [ ] LDAP/ Active Directory
- [x] SSO - SAML
- [ ] Other
Configuration report
{
"system": {
"version": "31.0.5.1",
"connectivity_check_domains": [],
"check_for_working_wellknown_setup": false,
"secret": "***REMOVED SENSITIVE VALUE***",
"trusted_domains": [
"***WHY DID occ config:list system NOT CONSIDER THIS SENSITIVE?***"
],
"datadirectory": "***REMOVED SENSITIVE VALUE***",
"dbtype": "pgsql",
"dbname": "***REMOVED SENSITIVE VALUE***",
"dbhost": "***REMOVED SENSITIVE VALUE***",
"dbport": "",
"dbtableprefix": "oc_",
"dbuser": "***REMOVED SENSITIVE VALUE***",
"dbpassword": "***REMOVED SENSITIVE VALUE***",
"installed": true,
"maintenance_window_start": 21,
"debug": false,
"default_phone_region": "DE",
"instanceid": "***REMOVED SENSITIVE VALUE***",
"updatechecker": false,
"upgrade.disable-web": true,
"updater.server.url": "***REMOVED SENSITIVE VALUE***",
"updater.release.channel": "stable",
"skeletondirectory": "",
"default_language": "de",
"overwrite.cli.url": "***WHY DID occ config:list system NOT CONSIDER THIS SENSITIVE?***",
"htaccess.RewriteBase": "\/",
"allow_local_remote_servers": true,
"loglevel": 2,
"log_type": "syslog",
"syslog_tag": "nextcloud",
"logfile": "",
"logdateformat": "Y-m-d H:i:s",
"logtimezone": "UTC",
"log.condition": {
"apps": [
"admin_audit"
]
},
"log_type_audit": "syslog",
"syslog_tag_audit": "nextcloud",
"logfile_audit": "",
"memcache.local": "\\OC\\Memcache\\APCu",
"memcache.distributed": "\\OC\\Memcache\\Memcached",
"memcache.locking": "\\OC\\Memcache\\Memcached",
"memcached_servers": [
[
"\/run\/nextcloud\/memcached.sock",
0
]
],
"mail_from_address": "***REMOVED SENSITIVE VALUE***",
"mail_smtpmode": "smtp",
"maintenance": false,
"knowledgebaseenabled": false,
"enable_previews": true,
"allow_user_to_change_display_name": false,
"mail_domain": "***REMOVED SENSITIVE VALUE***",
"mail_smtpauthtype": "LOGIN",
"mail_smtpsecure": "ssl",
"mail_smtpauth": 1,
"trashbin_retention_obligation": "14, 60",
"mail_send_plaintext_only": true,
"theme": "",
"passwordsalt": "***REMOVED SENSITIVE VALUE***",
"has_internet_connection": false
}
}
List of activated Apps
Enabled:
- activity: 4.0.0
- admin_audit: 1.21.0
- app_api: 5.0.2
- calendar: 5.2.4
- circles: 31.0.0
- cloud_federation_api: 1.14.0
- comments: 1.21.0
- contactsinteraction: 1.12.0
- dav: 1.33.0
- federatedfilesharing: 1.21.0
- files: 2.3.1
- files_accesscontrol: 2.0.0
- files_automatedtagging: 2.0.0
- files_downloadlimit: 4.0.0
- files_reminders: 1.4.0
- files_retention: 2.0.1
- files_sharing: 1.23.1
- files_trashbin: 1.21.0
- files_versions: 1.24.0
- logreader: 4.0.0
- lookup_server_connector: 1.19.0
- notifications: 4.0.0
- oauth2: 1.19.1
- privacy: 3.0.0
- profile: 1.0.0
- provisioning_api: 1.21.0
- related_resources: 2.0.0
- richdocuments: 8.6.5
- serverinfo: 3.0.0
- settings: 1.14.0
- systemtags: 1.21.1
- tasks: 0.16.1
- text: 5.0.0
- theming: 2.6.1
- twofactor_backupcodes: 1.20.0
- user_saml: 6.6.0
- viewer: 4.0.0
- webhook_listeners: 1.2.0
- workflowengine: 2.13.0
Disabled:
- bruteforcesettings: 4.0.0 (installed 1.0.3)
- dashboard: 7.11.0 (installed 7.0.0)
- encryption: 2.19.0
- federation: 1.21.0 (installed 1.2.0)
- files_external: 1.23.0
- files_pdfviewer: 4.0.0 (installed 2.10.0)
- nextcloud_announcements: 3.0.0 (installed 1.1)
- password_policy: 3.0.0 (installed 1.3.0)
- photos: 4.0.0-dev.1 (installed 1.0.0)
- recommendations: 4.0.0 (installed 0.8.0)
- support: 3.0.0 (installed 1.1.0)
- suspicious_login: 9.0.1
- twofactor_nextcloud_notification: 5.0.0
- twofactor_totp: 13.0.0-dev.0
- updatenotification: 1.21.0 (installed 1.2.0)
- user_ldap: 1.22.0
- user_status: 1.11.0 (installed 1.8.1)
- weather_status: 1.11.0 (installed 1.7.0)
Nextcloud Signing status
No errors have been found.
Nextcloud Logs
{"reqId":"redacted","level":2,"time":"2025-05-20 13:35:35","remoteAddr":"2a01::redacted","user":"redacted","app":"core","method":"PROPFIND","url":"/remote.php/dav/files/redacted/path/","message":"Request used more than 300 MB of RAM: 304.1 MB","userAgent":"python-requests/2.25.1","version":"31.0.5.1","data":{"app":"core"}}
Additional info
@juliusknorr What information do I need to provide to make something useful out of this?
Given that the warning has zero Google hits, I seem to have hit some less-than-ideal code path. I just don't see how that listing could possibly be that expensive. Maybe something about share permissions?
I have a similar issue while uploading 1,700 photos from my phone, all in one directory. The upload stopped, and the start-again (sync) button did not work.
The error also started occurring after the update to version 31.0.5.
[core] Error: Request used more than 300 MB of RAM: 384.2 MB GET /nextcloud/index.php/avatar/USER-HASH/256?v=2%22 from ip to: user-hash o DATE
After a couple of hours, Nextcloud consumes all RAM and dies.
Clearly, listing a directory of 1,200 files should not need 300 MB. Also, the warning does not point to any commands that could be run to diagnose it once emitted, and I could not find specific references in the documentation either.
It's a work in progress. The intent is to make it easier to find optimization candidates in the field.
We should perhaps add a backtrace to it automatically, which would make it easier to dig deeper into what is going on.
The upload stopped, and the start-again (sync) button did not work.
@Krysztal It's unlikely this was due to the new logging, but the logging may be indicating something that is a factor in whatever is causing this for you.
After a couple of hours, Nextcloud consumes all RAM and dies.
@SpacemanFly Same
I am seeing the same issue on Nextcloud 31.0.5 on Debian with Apache2 and MariaDB. This seems to be related to the Log Reader app. Not sure if this helps, but I thought I'd leave it here.
Another data point: this appears to be related to a failing initial sync against Nextcloud News from the API consumers NewsFlash and Pulp. It is potentially dependent on the size of that initial sync (the number of unread articles).
Seems to have started with 31.0.5:
Ubuntu Linux 24.04.2, MariaDB 10.11.13, Apache 2.4.58, Nextcloud Hub 10 (31.0.5)
[core] Fatal: Request used more than 300 MB of RAM: 662.6 MB GET /nextcloud/index.php/apps/logreader/api/poll?lastReqId=ZmqDm124g4rENczdNTOe from ip by jon at 3 Jun 2025, 1:05:26 am
Same for me. I told FPM to use -1 for max memory, but it did not help.
Yep, I also have the same issue with the 31.0.5 version of Nextcloud Hub:
{
"reqId": "DfPoxcBIkexA2xtvhhju",
"level": 4,
"time": "2025-06-04T09:31:35+00:00",
"remoteAddr": "something",
"user": "admin",
"app": "core",
"method": "GET",
"url": "/index.php/core/preview?fileId=789102&x=256&y=256&a=1&mode=cover&forceIcon=0",
"message": "Request used more than 300 MB of RAM: 528.8 MB",
"userAgent": "Mozilla/5.0 (Android) Nextcloud-android/3.31.3",
"version": "31.0.5.1",
"data": {
"app": "core"
},
"id": "68401291b65e5"
}
What I have noticed is that even though I'm getting these errors in the logs (on the server), the upload still continues. After some time on the mobile side (I am uploading images from my phone), I get the push notification "Upload Failed - An error occurred while waiting for the server. Could not complete the operation", but for some reason the error is from 15 minutes earlier. Sometimes the upload stops, sometimes it keeps going without stopping.
I don't know whether this will be helpful: I'm running Nextcloud on Docker. Today, by mistake, I deleted the container and recreated it, and after that I have had no issues; so far the error is not present.
I deleted the container and recreated it, and after that I have had no issues; so far the error is not present.
That is probably due to the fact that the Log Reader is reading nextcloud.log, and after re-creating the container the log file is small again.
The errors stopped happening after emptying the log file. Thanks for the helpful clues.
I also tried emptying the log file, but unfortunately the error still appears.
Same here on 31.0.5. The PHP limit is set to 3 GB, and the whole container is limited to 4 GB.
Version 31.0.6, Nextcloud via Docker Compose:
Request used more than 300 MB of RAM: 475.4 MB
Cron job used more than 300 MB of ram after executing job OC\FilesMetadata\Job\UpdateSingleMetadata (id: 6293, arguments: ["admin",16238]): 526.5 MB (before: 115.7 MB)
The message comes from https://github.com/nextcloud/server/blob/master/lib/base.php#L856, and as far as I can tell it is not actually a failure at all: there are three hard-coded values (300 MB, 400 MB, 500 MB) that trigger a Warning, Error, or Fatal message whenever "peak memory usage" has exceeded one of them at some point during the request.
IMHO it's a false positive; at the very least, the condition does not seem to check any failure status at all.
Also: I get many of these, with "no app in context", such as
[core] Fatal: Request used more than 300 MB of RAM: 1.6 GB
from ? by -- at 15 June 2025, 00:14:14
{
"reqId": "RCG3dT3nt1IcNUtpAls2",
"level": 4,
"time": "2025-06-14T22:14:14+00:00",
"remoteAddr": "",
"user": "--",
"app": "core",
"method": "",
"url": "--",
"message": "Request used more than 300 MB of RAM: 1.6 GB",
"userAgent": "--",
"version": "31.0.6.2",
"data": {
"app": "core"
},
"id": "684fa6655c2cf"
}
The message was introduced by https://github.com/nextcloud/server/pull/52219. Perhaps it should be logged as a Warning or Info instead. My setup has Recognize activated, which does need a huge amount of memory, and my server is adequately dimensioned to handle this.
But perhaps I misunderstand the point of this shutdown function?
(@juliusknorr)
This log message was introduced to help developers identify endpoints that need further profiling and optimization. As an admin, there is nothing you can do about it, and it is not about general memory usage but about the peak usage that individual requests may have. This is generally nothing to worry about. I guess the log level might be debatable indeed.
Its level should be no more than a warning at most, and arguably configurable: it could default to info, let admins set the levels at which it escalates from info to warning, or even be disabled entirely.
It certainly is useful in the stated context, but once known and accounted for, it adds nothing but log noise, and quite overly alarming noise at that.
@juliusknorr
I reported this one because it seemed more likely to be an intentional request for feedback on when this gets triggered, but as of now (links to mkcol_with_body log for a recent integration-litmus.yml run) there is at least one more Warning emitted during nominal operation, yet the CI checks keep showing green.
A simple line demanding no output (except severity == 1), such as ! grep -v '"level":1' data/nextcloud.log, added to the workflow file should do the trick, alerting the next time debugging helpers land on test-covered branches at excessive severity. A rough equivalent of that check is sketched below.
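For illustration, a rough Python equivalent of that grep, assuming the log is JSON lines and that anything above level 1 (info) should fail the job; the log path and the cutoff are assumptions to adjust for the actual workflow:

```python
#!/usr/bin/env python3
"""Fail a CI job if nextcloud.log contains entries above level 1 (info).

Rough Python equivalent of the proposed grep one-liner, not the actual
CI code; tweak LOG_PATH and MAX_LEVEL to match the real workflow.
"""
import json
import sys

LOG_PATH = "data/nextcloud.log"  # assumed location in the CI checkout
MAX_LEVEL = 1                    # 0 = debug, 1 = info; anything higher fails

offending = []
with open(LOG_PATH, encoding="utf-8") as fh:
    for line in fh:
        line = line.strip()
        if not line:
            continue
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            offending.append(line)  # unparseable lines are suspicious too
            continue
        if entry.get("level", 0) > MAX_LEVEL:
            offending.append(line)

print("\n".join(offending))
sys.exit(1 if offending else 0)
```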
Hi all, I am running into this same error when uploading about 130 PNG files totalling about 3.2 GB over WebDAV. Over a few minutes I get a bunch of warnings, errors, and fatals.
My main concern is whether something is failing to complete and is therefore leaving data in an invalid state. So I would like to know whether there is any kind of cleanup or file/data scan I need to run. According to some of the comments above, nothing is actually wrong, but sometimes I get a Fatal, which generally means there was an unrecoverable error.
By the way, only the warnings have information about the UpdateSingleMetadata job that was running. The error and fatal messages do not have any details about what was requesting more memory. They also have an application of 'core', whereas the warnings have an application of 'cron'.
{
"reqId": "KrjvFMERu0HlXOflC6CU",
"level": 2,
"time": "2025-06-27T16:56:16+00:00",
"remoteAddr": "",
"user": "--",
"app": "cron",
"method": "",
"url": "--",
"message": "Cron job used more than 300 MB of ram after executing job OC\\FilesMetadata\\Job\\UpdateSingleMetadata (id: 268118, arguments: [\"userId\",2370579]): 290.3 MB (before: 285.8 MB)",
"userAgent": "--",
"version": "31.0.6.2",
"data": {
"app": "cron"
},
"id": "686022953f840"
}
{
"reqId": "oLSvE4x1wxij9zUeBFx7",
"level": 4,
"time": "2025-06-27T16:55:36+00:00",
"remoteAddr": "",
"user": "--",
"app": "core",
"method": "",
"url": "--",
"message": "Request used more than 300 MB of RAM: 689.5 MB",
"userAgent": "--",
"version": "31.0.6.2",
"data": {
"app": "core"
},
"id": "686022953f843"
}
@prairietree: these messages are confusing. They do not imply that any request failed. You should be good to go.
I think these messages should just be warnings. If you run Recognize, you get them regularly, which is expected and acceptable. But it is always shocking to open the admin panel and see a number of errors that are not actually errors.
I think these messages should just be warnings. If you run Recognize, you get them regularly, which is expected and acceptable. But it is always shocking to open the admin panel and see a number of errors that are not actually errors.
Yes, warnings with a configurable threshold ideally.
The issue at present is that it diminishes the signal-to-noise ratio: real errors can be overlooked amid the sheer volume of errors that can safely be ignored.
I have the same issue, please fix. I'm using Nextcloud AIO.
https://raju.dev/nextcloud-aio-install-with-docker-compose-and-reverse-proxy/
Install according to this guide and everything will be fine. But now I have only one error: an HMAC exception. Somehow I need to shrink (truncate) the oc_credentials table in the PostgreSQL Nextcloud database (SQL: TRUNCATE TABLE oc_credentials;), but I still don't know how to get into psql itself.
In the register_shutdown_function in lib/base.php there are arbitrary values:
$memoryPeak > 500_000_000 => ILogger::FATAL,
$memoryPeak > 400_000_000 => ILogger::ERROR,
$memoryPeak > 300_000_000 => ILogger::WARN,
Should these thresholds be derived from the memory_limit value in php.ini according to the server configuration? Or, failing that, should the administrator be able to set the warning/error/fatal thresholds in the Nextcloud settings? A rough sketch of the first idea follows below.
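A minimal sketch of that suggestion, in Python purely for illustration (the real implementation would live in PHP in lib/base.php); the 60 %/80 %/100 % fractions are invented for this example:

```python
# Illustration only: derive warn/error/fatal thresholds from a
# php.ini-style memory_limit instead of hard-coded constants.
UNITS = {"K": 1024, "M": 1024 ** 2, "G": 1024 ** 3}


def parse_memory_limit(value: str) -> int | None:
    """Parse php.ini shorthand such as '512M' or '3G'; '-1' means unlimited."""
    value = value.strip()
    if value == "-1":
        return None  # unlimited: no threshold can be derived
    if value[-1].upper() in UNITS:
        return int(value[:-1]) * UNITS[value[-1].upper()]
    return int(value)


def thresholds(memory_limit: str) -> dict[str, int] | None:
    """Map a memory_limit to warning/error/fatal byte thresholds."""
    limit = parse_memory_limit(memory_limit)
    if limit is None:
        return None
    return {
        "warning": int(limit * 0.6),  # fractions are made up for the example
        "error": int(limit * 0.8),
        "fatal": limit,
    }


print(thresholds("512M"))
# {'warning': 322122547, 'error': 429496729, 'fatal': 536870912}
```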
Greetings. At least in my use case, I only get this error when the Recognize app is activated. According to its README, this is expected:
- Object recognition: 1 GB
- Landmark recognition: 300 MB
- Video action recognition: 50 MB
- Music genre recognition: 50 MB
Maybe this info helps others. At the very least, it should be considered when choosing the level of the message. I would recommend logging a warning only above 1 GB of usage (maybe starting from 1.2 GB), and FATAL above 2 GB. If you don't have enough resources, the Recognize app itself should warn that using it in such a configuration (a Raspberry Pi, for example) is not a good idea.
Additionally, the source of the memory consumption should be noted in the log file; in my case, the Recognize app. Currently I see only "core", which is misleading.
I agree with @Offerel.
Level | App | Message
-- | -- | --
Error | core | Request used more than 300 MB of RAM: 392.8 MB
Warning | cron | Cron job used more than 300 MB of ram after executing job OCA\Recognize\BackgroundJobs\ClusterFacesJob (id: 11598, arguments: {"userId":"
Fatal | core | Request used more than 300 MB of RAM: 777.6 MB
Warning | cron | Cron job used more than 300 MB of ram after executing job OCA\Recognize\BackgroundJobs\ClusterFacesJob (id: 11589, arguments: {"userId":"Sebastian"}): 777.6 MB (before: 33.3 MB)
I do not use or have Recognize installed; this happens to me with plain uploads. I also agree that the Fatal level is too much for these kinds of warnings. This should be a warning at most, or even something logged only when explicitly debugging the system.
In general, my feeling is that Nextcloud logs too many things that are simply irrelevant to the administrator, and most of the time the logs do not highlight any inherent problem with the system. For this reason, as the admin, I stopped even looking at the log, because I know that in most cases I will just waste time on warnings/fatals that do not matter. For the log to be effective and useful for administrators, I would suggest that the dev team seriously reconsider what is being logged and at which level, and reserve the fatal level for genuinely serious issues.
I have the exact same issue even on a fresh Nextcloud Hub 25 Autumn (32.0.1) installation with no plugins/add-ons installed. When my phone tries to upload videos/photos via the Nextcloud app (WebDAV), the error is logged. My photo folder holds 287 GB of photos/videos.