FUXA
[BUG] Automatic shutdown of device connection
Describe the bug In the device connection interface, the connection is shown as enabled, but the status column reports it as closed. Even after manually re-enabling it, the status briefly shows OK, then the connection drops again after a while.
To Reproduce Steps to reproduce the behavior:
- In the device connection, import devices or re-enable the device connection;
- The status will display OK;
- Wait a moment or save the project, and the status will change to closed;
Expected behavior The connection should remain open once enabled; please fix this bug or investigate the cause.
Please provide the relevant logs and environment information. It seems the devices you have are using OPC UA, so it should be something related to that.
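The requested details can be gathered from the command line; a minimal sketch, assuming FUXA runs under pm2 (the `0|yarn` prefix in the crash log posted later in this thread suggests a pm2-managed process):

```shell
# Report the tool versions asked for in the environment checklist.
node --version
npm --version
# Dump the most recent FUXA process output without tailing (pm2-managed setups).
pm2 logs --lines 200 --nostream
```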
Environment
- FUXA version: 1.1.14-1243
- Node.js version: 16.20.1
- npm version: 8.19.4
- Platform/OS: Linux
- Browser: Edge/Chrome
Note: in my testing environment, the software runs normally.
Hi, I followed your description but cannot reproduce the issue; anyway, we are at version 1.1.18.
I have the same problem. Logs:

```
0|yarn | <--- Last few GCs --->
0|yarn | [3170:0x556e8b662d40] 4846194 ms: Mark-sweep 1990.2 (2084.1) -> 1980.1 (2088.9) MB, 2801.4 / 0.2 ms (average mu = 0.340, current mu = 0.361) task scavenge might not succeed
0|yarn | [3170:0x556e8b662d40] 4849621 ms: Mark-sweep 1990.5 (2092.6) -> 1983.0 (2092.1) MB, 2859.1 / 0.1 ms (average mu = 0.262, current mu = 0.166) allocation failure scavenge might not succeed
0|yarn | <--- JS stacktrace --->
0|yarn | FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
0|yarn |  1: 0x7f0951d0e34c node::Abort() [/lib64/libnode.so.93]
0|yarn |  2: 0x7f0951a555d0 node::OOMErrorHandler(char const*, bool) [/lib64/libnode.so.93]
0|yarn |  3: 0x7f0952320d84 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [/lib64/libnode.so.93]
0|yarn |  4: 0x7f0952321042 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [/lib64/libnode.so.93]
0|yarn |  5: 0x7f09524bba79 [/lib64/libnode.so.93]
0|yarn |  6: 0x7f09524bbb5d [/lib64/libnode.so.93]
0|yarn |  7: 0x7f09524d3f8e v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::GCCallbackFlags) [/lib64/libnode.so.93]
0|yarn |  8: 0x7f09524d4961 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/lib64/libnode.so.93]
0|yarn |  9: 0x7f09524d784a v8::internal::Heap::AllocateRawWithLightRetrySlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/lib64/libnode.so.93]
0|yarn | 10: 0x7f09524d78cb v8::internal::Heap::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/lib64/libnode.so.93]
0|yarn | 11: 0x7f095249dd30 v8::internal::Factory::AllocateRaw(int, v8::internal::AllocationType, v8::internal::AllocationAlignment) [/lib64/libnode.so.93]
0|yarn | 12: 0x7f0952497498 v8::internal::FactoryBase<v8::internal::Factory>::AllocateRawWithImmortalMap(int, v8::internal::AllocationType, v8::internal::Map, v8::internal::AllocationAlignment) [/lib64/libnode.so.93]
0|yarn | 13: 0x7f0952499388 v8::internal::FactoryBase<v8::internal::Factory>::NewRawTwoByteString(int, v8::internal::AllocationType) [/lib64/libnode.so.93]
0|yarn | 14: 0x7f09526f016d v8::internal::String::SlowFlatten(v8::internal::Isolate*, v8::internal::Handle<v8::internal::ConsString>, v8::internal::AllocationType) [/lib64/libnode.so.93]
```
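The fatal error in the log above is V8 exhausting its default old-space heap (roughly 2 GB on 64-bit Node 16), which kills the server process and drops every device connection with it. As a workaround while the cause is investigated, the heap ceiling can be raised at launch; a minimal sketch, assuming a 4 GB budget is acceptable on the host:

```shell
# NODE_OPTIONS is inherited by any node process started from this shell,
# including ones launched via pm2, yarn, or npm scripts.
export NODE_OPTIONS="--max-old-space-size=4096"
# Confirm the new ceiling from inside Node (value is printed in MB).
node -e 'console.log(Math.round(require("v8").getHeapStatistics().heap_size_limit / 1048576))'
```

This only postpones the crash if the process is genuinely leaking memory, but it helps confirm whether the disconnects correlate with the out-of-memory abort.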
To take a look at the issue, more details are needed.
This issue should be fixed in the latest version. Let me know if you have any issues.