
Cassandra doesn't run with low memory on Docker

rodrigorodrigues opened this issue 1 year ago

Hi all,

I'm trying to run Cassandra with as little memory as possible on Docker, but the server gets killed after a while. Is there any way to run Cassandra with minimal memory? It could disable most features like authorization and clustering; I just need a simple single node that stores data quickly.

cassandra:
  image: 'cassandra:latest'
  environment:
    - 'HEAP_NEWSIZE=10M'
    - 'MAX_HEAP_SIZE=200M'
  ports:
    - '9042:9042'
  deploy:
    resources:
      limits:
        cpus: "0.5"
        memory: "300MB"

rodrigorodrigues avatar Feb 10 '24 10:02 rodrigorodrigues

Hmm, I'm not sure. I've never had much luck with limiting Cassandra explicitly -- using --env MAX_HEAP_SIZE='128m' --env HEAP_NEWSIZE='32m' is the best I've found to keep the memory usage low (which is very similar to what you're using). If you get rid of the explicit limit (or raise it), does that help?

(This might be better suited to a Java- or Cassandra-specific forum, since it's not really specific to the container image, and there you might find folks with more knowledge of Java and Cassandra memory usage and keeping it within sane thresholds. 😅)
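
Spelled out as a full command, that suggestion would look something like this (the container name and port mapping are just placeholders):

docker run -d --name some-cassandra \
  --env MAX_HEAP_SIZE='128m' --env HEAP_NEWSIZE='32m' \
  -p 9042:9042 cassandra:latest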

tianon avatar Feb 12 '24 21:02 tianon

I haven't tried it yet, but according to the docs, that isn't how you set those settings.

https://cassandra.apache.org/doc/latest/cassandra/getting-started/configuring.html#environment-variables

LaurentGoderre avatar Apr 23 '24 17:04 LaurentGoderre

This is the smallest I have managed to get it:

services:
  cassandra:
    image: 'cassandra:latest'
    environment:
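      # -Xmn sets the young generation size; -Xms/-Xmx set the initial/max heap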
      JVM_OPTS: -Xmn64m -Xms128m -Xmx500m
    ports:
      - '9042:9042'
    deploy:
      resources:
        limits:
          cpus: "0.5"
          memory: "500MB"

LaurentGoderre avatar Apr 23 '24 17:04 LaurentGoderre

I think if you push it hard enough it'll balloon higher than 500M though, so your limit will likely trigger the OOM killer. I'm not familiar enough with memory management in Java to say for sure exactly how much higher it'll go, but I've definitely seen it go higher.
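
One way to watch this in practice is to compare the -Xmx cap against what the container actually consumes (a quick sketch; some-cassandra is the placeholder name from earlier):

# heap + metaspace + thread stacks + direct buffers all count toward the container limit
docker stats --no-stream some-cassandra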

tianon avatar Apr 23 '24 17:04 tianon

I wouldn't be surprised. I think the lesson here is that 300MB is just too low for the latest Cassandra to run.

LaurentGoderre avatar Apr 23 '24 17:04 LaurentGoderre

The Cassandra Docker container did not start. I added those values (MAX_HEAP_SIZE and HEAP_NEWSIZE) to the environment variables when creating the container, and now it starts. I don't know why this happens now; it used to start without them.

thanks, it worked for me :D

carlosucros avatar May 28 '24 09:05 carlosucros

By default, if you do not specify a limit, Cassandra queries the host to use some (very large) percentage of all available resources. The values I provided are intentionally pretty unreasonably low, and I would not recommend them for a real deployment (you'll probably want/need higher values there).
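
If memory serves, the auto-sizing logic in the stock cassandra-env.sh is roughly the following (a paraphrased sketch, not the literal script):

# approximate paraphrase of calculate_heap_sizes() in cassandra-env.sh:
# MAX_HEAP_SIZE = max( min(1/2 RAM, 1024MB), min(1/4 RAM, 8192MB) )
ram_mb=$(free -m | awk '/^Mem:/ {print $2}')
half=$(( ram_mb / 2 ));    [ "$half" -gt 1024 ] && half=1024
quarter=$(( ram_mb / 4 )); [ "$quarter" -gt 8192 ] && quarter=8192
max_heap_mb=$(( half > quarter ? half : quarter ))
echo "auto-sized MAX_HEAP_SIZE would be ~${max_heap_mb}M"

So on a 32GB host an unconfigured node would take an 8GB heap, which is the "very large percentage" behavior described above.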

tianon avatar May 28 '24 23:05 tianon

Closing, since this is the nature of Cassandra (and Java-based applications) and there is a sufficient workaround: set hard limits (e.g. --memory or memory:) higher than the heap values given to the JVM (e.g. -Xmx) to prevent OOM kills.
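
Concretely, that might look like the following (a sketch reusing numbers from this thread; the 800MB limit is an illustrative guess with headroom for off-heap usage, not a tested value):

services:
  cassandra:
    image: 'cassandra:latest'
    environment:
      JVM_OPTS: -Xmx500m  # heap cap
    deploy:
      resources:
        limits:
          memory: "800MB"  # hard limit kept above -Xmx to avoid OOM kills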

yosifkit avatar May 29 '24 23:05 yosifkit