openhab-core
                        Measures to use less RAM by default
I assume a significant number of openHAB instances run on single-board computers. These usually have a limited amount of memory, especially the Raspberry Pi 3 based ones.
For me, openHAB 4.0.x runs successfully with some modifications on a Raspberry Pi 3A+ (512 MB). I would like to share these changes in order to upstream them and to find further improvements.
- Set -Dxtext.qn.interning=true or call QualifiedNameInterning#enable() to intern the strings inside the QualifiedName instances. This removes a lot of Strings (~50%) from memory; it could be exposed in the UI and enabled by default.
- Disable Use IPv6 in the Network Settings of openHAB to get rid of a second jmDNS instance. This might be a good default value for most users?
- Set -Xss1024k; this halves the thread stack size from the 2048k default. This is a more advanced topic but could be documented somewhere. openHAB uses 200+ threads, so this has quite an impact.
- Set -XX:+UseG1GC and -XX:+UseStringDeduplication to enable string deduplication at the GC level.
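Taken together, the flags above could be set via EXTRA_JAVA_OPTS. The path /etc/default/openhab is the one used by the Debian/openHABian packages (adjust for other install types); the combination shown is only a sketch of the suggestions in this thread, not a tuned or tested profile:

```shell
# Sketch: /etc/default/openhab (Debian/openHABian package installs).
# Combines the suggestions above; treat as a starting point, not a recommendation.
EXTRA_JAVA_OPTS="-Dxtext.qn.interning=true \
  -Xss1024k \
  -XX:+UseG1GC -XX:+UseStringDeduplication"
```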
Do you think memory usage is an issue at all, or are most users currently running an SBC with 2+ GB RAM? Are there further tips and tricks to reduce memory consumption?
Your Environment
- Version used: openHAB 4.0.x
- Environment name and version: Raspi 3a+
- Operating System and version: Raspbian Bookworm
This might be a nice place to add some of these flags:
https://github.com/openhab/openhabian/blob/65cbf0d8819941e539520db780844e057832ebc5/functions/openhab.bash#L221-L228
Very interesting. Your suggestions have to be validated by experts. This is only a hypothesis (a feeling), but I guess we have many users using an RPi with 1 GB of RAM.
Maybe we should propose two sets of settings, one for very small machines and one for others.
PS: thread pool sizes could also be set up differently depending on how much RAM you have.
> but I guess we have many users using a RPI with 1 GB of RAM.
Interesting - I would have guessed differently. The Pi 4 came out in June 2019 - that's over four years ago. The 4 GB version has been the sweet spot for a long time, and I would have thought that many people had upgraded their old hardware by now. Would it be possible to run an anonymous survey in the forums?
@joerg1985 I would have expected that these defaults were picked so they make at least some sense. What's the trade-off for these options? Is it more CPU time vs. RAM usage?
1. & 4. are indeed CPU vs. memory. But most of these systems have zram active, which is the same situation with one difference: these flags invest CPU time once to reduce memory, while zram has to recompress it all the time.
2. disables something that is most likely not used in private networks.
3. limits the stack, so you would run into a StackOverflowError earlier. But these Java defaults are chosen very conservatively, as Java is often used in business-critical applications.
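To put the stack-size trade-off in numbers, here is a small sketch. The thread count of 220 is illustrative (matching the "200+ threads" mentioned above); you can check your own instance's actual thread count with `ps -o nlwp=`:

```shell
# Illustrative sketch: virtual memory reserved for thread stacks.
# Check the real thread count of your instance with:
#   ps -o nlwp= -p "$(pgrep -f openhab | head -n1)"
stack_reservation_mb() {
  threads=$1
  stack_kb=$2
  echo $(( threads * stack_kb / 1024 ))
}
stack_reservation_mb 220 2048   # default -Xss2048k: 440 MB reserved
stack_reservation_mb 220 1024   # with -Xss1024k:    220 MB reserved
```

Note that stack reservations are virtual memory; only pages that are actually touched become resident, so the real RAM saving is smaller than this upper bound.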
> Interesting - I would have guessed different. Pi 4 came out in June 2019 - that's over four years ago.
Based on the number of users who show up on the forum after upgrading to OH 4 and running out of RAM I'd say it is a significant portion of users who are still on RPi 3s.
I could get rid of some unneeded threads (#3874 & #3884) in the core; this should free at least some memory.
I was also struggling with memory issues recently due to the 1GB limit on older RPIs.
What can also help is to increase the swap file size: https://pimylifeup.com/raspberry-pi-swap-file/
It will only work for applications that load a lot of stuff in memory which is infrequently used.
You can check how well it works by monitoring the CPU load of the kswapd0 process using top.
If it continuously swaps pages to/from the swap file, it can wear out your SD card pretty quickly of course. :wink:
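A quick way to quantify swap pressure, beyond watching kswapd0 in top, is to read /proc/meminfo. The sample values below are made up so the example is self-contained; on a real system, point the same awk program at /proc/meminfo instead:

```shell
# Sketch: compute swap usage from /proc/meminfo-style input (values in kB).
# On a real system, replace the printf with: awk '…' /proc/meminfo
printf 'SwapTotal: 204800 kB\nSwapFree: 153600 kB\n' |
  awk '/SwapTotal/ {t=$2} /SwapFree/ {f=$2} END {printf "%.0f%% of swap used\n", (t-f)*100/t}'
```

With the sample numbers this prints "25% of swap used".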
@wborn using zram might be an option to avoid swapping to the SD card. I am using zram with zstd compression and a 200% ratio on my 3A+ with 512 MB RAM.
I have been running openHAB 4 with the following flags since I updated:
    -XX:MaxHeapSize=384M
    -XX:+UseG1GC -XX:+UseStringDeduplication
    -XX:MaxHeapFreeRatio=10 -XX:MinHeapFreeRatio=5
I have very good results on a Synology ARMv7 (a 32-bit platform) with 1 GB RAM. The 384M heap is rarely used more than 50% (usually only 34%-43%), but then I also have only a very small number of items and rules and only very few bindings/add-ons (hue, livisi, system info, JS scripting, homekit).
The Min/MaxHeapFreeRatio settings made a big difference, as I want openHAB to consume as little RAM as possible so that the Synology NAS can also perform its normal duties unaffected. Reference: https://docs.oracle.com/en/java/javase/17/gctuning/factors-affecting-garbage-collection-performance.html#GUID-7FB2D1D5-D75F-4AA1-A3B1-4A17F8FF97D0
I have now changed my settings to include -Xss1024k as well.
Since OpenHAB 4, these are the flags I use:
    -XX:MinHeapFreeRatio=4 \
    -XX:MaxHeapFreeRatio=12 \
    -Xms64m \
    -Xmx1024m \
    -XX:-TieredCompilation \
    -XX:G1PeriodicGCInterval=60000 \
    -XX:G1PeriodicGCSystemLoadThreshold=4
A few notes:
- I run more than 10 openHAB 4 instances on one server, and each requires about 500-700 MB RAM with these settings. With the defaults, some instances grew to several gigabytes after a few days (the default -Xmx was 25% of system memory, which was 8 GB).
- -XX:-TieredCompilation reduced memory by ~100 MB for me.
- If I remember correctly, -XX:+UseStringDeduplication is typically enabled by default, so it does not make any difference.
- -Xms64m allows openHAB to start with as little memory as possible, and -Xmx1024m allows it to grow if required but not indefinitely (otherwise the JVM defers GC too much to avoid GC pauses).
- -XX:MinHeapFreeRatio=4 and -XX:MaxHeapFreeRatio=12 make the JVM allocate less memory by default. It will try to allocate 8% more heap memory when it has only 4% free. The defaults are 40 and 70 if I remember correctly, which means that the JVM will use between 40% and 70% more heap memory than it needs.
- The most important new parameters for JVM 17 are -XX:G1PeriodicGCInterval=60000 and -XX:G1PeriodicGCSystemLoadThreshold=4. These allow the G1 garbage collector to reclaim unused memory and return it to the host. This task is performed only if both parameters are set. Reference: JEP 346: Promptly Return Unused Committed Memory from G1. This is only available in JVM 17 (and therefore OH4). In OH3, the only setting affecting garbage collection was -Xmx set as low as possible (e.g. 512M) in order to force the JVM to run GC frequently. But this led to crashes when the heap was not enough.
- The G1 garbage collector runs concurrent collection by default. In extremely limited memory scenarios, one can also set -XX:G1PeriodicGCInvokesConcurrent to false to invoke full GC, but I would not recommend it in general because it will probably lead to full "stop-the-world" pauses.
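The effect of the heap-free-ratio bounds can be sketched with simple arithmetic: with a free ratio of f percent, the committed heap is roughly live * 100 / (100 - f). The 120 MB live-data figure below is made up purely for illustration:

```shell
# Illustrative sketch: committed-heap range implied by Min/MaxHeapFreeRatio.
# With free ratio f (percent), committed heap ~= live * 100 / (100 - f).
committed_range_mb() {
  live=$1; minfree=$2; maxfree=$3
  echo "$(( live * 100 / (100 - minfree) ))-$(( live * 100 / (100 - maxfree) )) MB"
}
committed_range_mb 120 40 70   # JVM defaults 40/70: 200-400 MB committed
committed_range_mb 120 4 12    # with 4/12:          125-136 MB committed
```

This is why the lower ratios keep the committed heap hugging the live-data size instead of leaving a large free cushion.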
 
> This is only available in JVM 17 (and therefore OH4). In OH3, the only setting affecting garbage collection was -Xmx.
I also have a rule to run a full GC every night when I am not using OH. This reclaims some extra memory from some instances that like to grow a bit more every few days.
/**
 * This rule triggers the garbage collector process manually using a Cron Trigger. To avoid running
 * GC for all openHAB instances at the same time, a slight randomness is used in the trigger.
 */
const System = Java.type("java.lang.System");
const randomSecond = Math.floor(Math.random() * 60);
const randomMinute = Math.floor(Math.random() * 60);
rules.JSRule({
  name: "Run Garbage Collector Process",
  description: "Triggers the garbage collector manually",
  triggers: [
    triggers.GenericCronTrigger(`${randomSecond} ${randomMinute} 0 * * ?`),
  ],
  execute: (event) => {
    System.gc();
  },
  tags: [],
  id: "triggerGarbageCollector",
});
I inserted some randomness to avoid all openHAB instances running GC at exactly the same time (probably not needed since it only takes a few milliseconds).
A final note: I compared the JVM memory when running openHAB 4 in Docker vs. on the host, and there was no difference at all when using these settings. I mention this because I had read someone in the forum suggesting that the JVM consumes more memory in Docker, but according to my experiments this is not the case.
When running with the default JVM settings you may observe differences, because the JVM uses different defaults when running in Docker, not because of Docker itself.
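One concrete reason the defaults differ in a container: the JVM caps the default heap at roughly 25% of the memory it can see (the default MaxRAMPercentage on recent JVMs, matching the "25% of system memory" observation above), and inside a container that is the container's memory limit rather than the host's RAM. A sketch of that arithmetic:

```shell
# Sketch: default max heap is ~25% of visible memory (default MaxRAMPercentage).
# In a container, "visible memory" is the container's limit, not the host's RAM.
default_xmx_mb() {
  echo $(( $1 * 25 / 100 ))
}
default_xmx_mb 8192   # 8 GB host:                2048 MB default heap cap
default_xmx_mb 1024   # container capped at 1 GB:  256 MB default heap cap
```

You can confirm the value a given JVM actually picked with `java -XX:+PrintFlagsFinal -version | grep -i maxheapsize`.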
> I mention this because I had read in the forum someone suggesting that the JVM consumes more memory in Docker, but according to my experiments this is not the case.
I think that is a misreading of what was said, and it lacks context.
The JVM isn't going to use any more or less memory because it's running in Docker; the JVM doesn't even know it's running in Docker. But the Docker container overall is going to consume more RAM than running openHAB on bare metal, because there is additional overhead.
@joerg1985 I've experimented with a standalone parser for the thing syntax and got some results. My goal was literally to bypass the whole complexity of Xtext and EMF. It's doable. I see you use JS for rules; does the Rule DSL remain a concern for you?
@splatch I was looking for low-hanging fruit to reduce memory consumption, e.g. like PR https://github.com/openhab/openhab-addons/pull/16379
I guess tweaking the JVM options will not be universally good: for some users the changes will be beneficial, for others not.
Moving more things from the core to optional add-ons is the right way. What comes to mind: voice things, the Language Server, the DSL, Blockly.
Sure, enabling flags should be done with caution, e.g. like https://github.com/openhab/openhabian/pull/1859, only for low-memory devices.