flood crashes after running OOM
Type: Bug Report
- [ ] Try to follow the update procedure described in the README and try again before opening this issue.
Your Environment
- Flood v4.6.1 (probably related to #399, so also affecting v4.7.0)
- CentOS 8.3.2011
- node v14.16.0
- npm 6.14.11
Summary
flood terminated with:
env[1084]: <--- Last few GCs --->
env[1084]: [1084:0x55b961134df0] 13502876253 ms: Scavenge 969.4 (987.1) -> 968.5 (987.1) MB, 2.6 / 0.0 ms (average mu = 0.998, current mu = 0.999) allocation failure
env[1084]: [1084:0x55b961134df0] 13502878283 ms: Scavenge 969.4 (984.1) -> 968.6 (985.1) MB, 3.0 / 0.0 ms (average mu = 0.998, current mu = 0.999) allocation failure
env[1084]: [1084:0x55b961134df0] 13502880303 ms: Scavenge 969.4 (984.1) -> 968.7 (985.1) MB, 2.7 / 0.0 ms (average mu = 0.998, current mu = 0.999) allocation failure
env[1084]: <--- JS stacktrace --->
env[1084]: FATAL ERROR: MarkCompactCollector: young object promotion failed Allocation failed - JavaScript heap out of memory
env[1084]: 1: 0x55b95ef3a2a4 node::Abort() [node]
env[1084]: 2: 0x55b95ee20b9c node::OnFatalError(char const*, char const*) [node]
env[1084]: 3: 0x55b95f0cf32a v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [node]
env[1084]: 4: 0x55b95f0cf5b6 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [node]
env[1084]: 5: 0x55b95f27d7a9 [node]
env[1084]: 6: 0x55b95f2bb1d8 v8::internal::EvacuateNewSpaceVisitor::Visit(v8::internal::HeapObject, int) [node]
env[1084]: 7: 0x55b95f2bb5ee void v8::internal::LiveObjectVisitor::VisitBlackObjectsNoFail<v8::internal::EvacuateNewSpaceVisitor, v8::internal::MajorNonAtomicMarkingState>(v8::internal::MemoryChunk*, v8::internal::MajorNonAtomicMarkingState*, v8::internal::Evacuat>
env[1084]: 8: 0x55b95f2c2e4e v8::internal::FullEvacuator::RawEvacuatePage(v8::internal::MemoryChunk*, long*) [node]
env[1084]: 9: 0x55b95f2aac55 v8::internal::Evacuator::EvacuatePage(v8::internal::MemoryChunk*) [node]
env[1084]: 10: 0x55b95f2aaf9f v8::internal::PageEvacuationTask::RunInParallel(v8::internal::ItemParallelJob::Task::Runner) [node]
env[1084]: 11: 0x55b95f2a0229 v8::internal::ItemParallelJob::Task::RunInternal() [node]
env[1084]: 12: 0x55b95f2a06c9 v8::internal::ItemParallelJob::Run() [node]
env[1084]: 13: 0x55b95f2bd62c void v8::internal::MarkCompactCollectorBase::CreateAndExecuteEvacuationTasks<v8::internal::FullEvacuator, v8::internal::MarkCompactCollector>(v8::internal::MarkCompactCollector*, v8::internal::ItemParallelJob*, v8::internal::MigrationO>
env[1084]: 14: 0x55b95f2c179f v8::internal::MarkCompactCollector::EvacuatePagesInParallel() [node]
env[1084]: 15: 0x55b95f2c1bb4 v8::internal::MarkCompactCollector::Evacuate() [node]
env[1084]: 16: 0x55b95f2d9c67 v8::internal::MarkCompactCollector::CollectGarbage() [node]
env[1084]: 17: 0x55b95f293868 v8::internal::Heap::MarkCompact() [node]
env[1084]: 18: 0x55b95f2942a0 v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::GCCallbackFlags) [node]
env[1084]: 19: 0x55b95f2947cc v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [node]
env[1084]: 20: 0x55b95f2f2c2c v8::internal::ScavengeJob::Task::RunInternal() [node]
env[1084]: 21: 0x55b95f1c36b0 non-virtual thunk to v8::internal::CancelableTask::Run() [node]
env[1084]: 22: 0x55b95efb8375 node::PerIsolatePlatformData::RunForegroundTask(std::unique_ptr<v8::Task, std::default_delete<v8::Task> >) [node]
env[1084]: 23: 0x55b95efba0e8 node::PerIsolatePlatformData::FlushForegroundTasksInternal() [node]
env[1084]: 24: 0x55b95f8a777e [node]
env[1084]: 25: 0x55b95f8b9df4 [node]
env[1084]: 26: 0x55b95f8a7f58 uv_run [node]
env[1084]: 27: 0x55b95ef8aa62 node::NodeMainInstance::Run() [node]
env[1084]: 28: 0x55b95ef064cc node::Start(int, char**) [node]
env[1084]: 29: 0x7f1413ec67b3 __libc_start_main [/lib64/libc.so.6]
env[1084]: 30: 0x55b95ee97efe _start [node]
systemd[1]: flood.service: Main process exited, code=dumped, status=6/ABRT
systemd[1]: flood.service: Failed with result 'core-dump'.
After updating to the latest version, it refused to start with:
env[716867]: Flood server 4.7.0 starting on http://0.0.0.0:3000
env[716867]: Starting without builtin authentication
env[716867]: [Axios v4.7.0] Transitional option 'clarifyTimeoutError' has been deprecated since v1.0.0 and will be removed in the near future
env[716867]: [Axios v4.7.0] Transitional option 'forcedJSONParsing' has been deprecated since v1.0.0 and will be removed in the near future
env[716867]: [Axios v4.7.0] Transitional option 'silentJSONParsing' has been deprecated since v1.0.0 and will be removed in the near future
env[716867]: FATAL internal error. Please open an issue.
env[716867]: Uncaught exception: Error: Cannot create a string longer than 0x1fffffe8 characters
Deleting fiveMinSnapshot.db from $HOME/.local/share/flood/db/_config/history/ fixed the latter.
fiveMinSnapshot.db had grown to over 600 MB.
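For context, 0x1fffffe8 characters (~512 MiB) is V8's hard limit on string length, and the datafile is presumably decoded into a single string on load, so any history file past that size kills startup. A minimal sketch of the limit itself; the path below is just an example, not Flood's actual loader:

```ts
// Node/V8 caps strings at 0x1fffffe8 characters (~512 MiB), so decoding a
// larger file into a string throws the exact error shown above.
// "fiveMinSnapshot.db" here is only an example path.
import { readFile } from "node:fs/promises";

readFile("fiveMinSnapshot.db", "utf8")
  .then((text) => console.log(`loaded ${text.length} characters`))
  .catch((err) => console.error(err.message));
// -> "Cannot create a string longer than 0x1fffffe8 characters"
//    once the file grows past roughly 512 MiB
```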
This raises the question of whether it is really necessary to keep this much historical data.
Looking at server/services/historyService.ts, I don't see any retention logic; adding some would probably fix both problems for good.
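For illustration, a periodic retention pass could look something like the sketch below. This is only a hypothetical example against an NeDB-style datastore; the `timestamp` field, the one-day window, and the filename are assumptions, not the actual historyService schema.

```ts
// Hypothetical retention pass for the five-minute snapshot datastore.
// Assumes an NeDB datastore whose records carry a `timestamp` field;
// the real historyService schema and file layout may differ.
import Datastore from "nedb";

const RETENTION_MS = 24 * 60 * 60 * 1000; // keep one day of 5-minute samples

const fiveMinSnapshot = new Datastore({
  filename: "fiveMinSnapshot.db",
  autoload: true,
});

function pruneOldSnapshots(): void {
  const cutoff = Date.now() - RETENTION_MS;
  fiveMinSnapshot.remove({ timestamp: { $lt: cutoff } }, { multi: true }, (err) => {
    if (err) {
      console.error(err);
      return;
    }
    // Compaction rewrites the datafile, so removed records actually free
    // disk space instead of lingering as appended lines.
    fiveMinSnapshot.persistence.compactDatafile();
  });
}

// Run the prune periodically, e.g. once per hour.
setInterval(pruneOldSnapshots, 60 * 60 * 1000);
```

Without the compaction step, NeDB only appends removal markers to the datafile, so the on-disk size would keep growing even after pruning.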
Expected Behavior
flood not running OOM
Current Behavior
flood running OOM
Possible Solution
Delete the bloated fiveMinSnapshot.db, or better, add retention/compaction logic so the history databases cannot grow without bound.
Steps to Reproduce
Run flood long enough for the history databases to grow to hundreds of megabytes.
Context
Running flood
Just noticed this as well. Added to crontab (runs twice a day):
0 */12 * * * rm $HOME/.local/share/flood/db/_config/history/fiveMinSnapshot.db >/dev/null 2>&1
I just ran into the same issue after OOM as well:
Apr 09 16:15:59 cadoth env[87523]: Flood server 4.7.0 starting on http://127.0.0.1:3001
Apr 09 16:16:00 cadoth env[87523]: FATAL internal error. Please open an issue.
Apr 09 16:16:00 cadoth env[87523]: Uncaught exception: Error: Cannot create a string longer than 0x1fffffe8 characters
In my case, it was feeds.db that grew to 693 MB; moving it out of the way fixed the issue.
This was resolved by af8de75e0593a48864d64c84c2e1c12d9ff0a7e1 a while ago, but I haven't published a new release yet.
@jesec anything holding you up from cutting a release?
I was on vacation. Apart from that, there was a bunch of stuff I wanted to get done before the release; let's just leave that to the next one. I'll cut a release once I've done a new release for pkg.