Kubernetes Memory is constant

I deployed my code to a GCP Kubernetes cluster. When I monitor its performance, the memory graph is steady and constant at a certain level. It does not change at all whether I increase or decrease the load on my code; it remains steady. I would be grateful if someone could suggest the reason behind this. It is an issue because the steady, constant level is in the GB range. I am using Akka Streams in my code, which might be helpful in explaining the logic behind this behaviour.

Hi.
My first question is: which memory graph are you looking at?
If you are looking at the system memory, or the Java process memory, it is not really indicative of the memory performance of your code.
There is a configuration value for the maximum amount of memory a JVM will allocate; it will never go over this value. Also, the JVM usually doesn’t release the memory it has allocated, even if internally it doesn’t need as much any more. So, from outside of the JVM, the memory graph will just grow for a while and then stay at that level.
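For instance, you can see that ceiling from inside the application itself (it is controlled by flags such as -Xmx or -XX:MaxRAMPercentage; the snippet below is just a quick sketch):

```scala
object MaxHeap extends App {
  // The ceiling set by -Xmx / -XX:MaxRAMPercentage: the JVM will never
  // grow its heap beyond this, no matter how the load changes.
  val maxHeapMb = Runtime.getRuntime.maxMemory / (1024 * 1024)
  println(s"Max heap the JVM may use: $maxHeapMb MB")
}
```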

If you are looking at the memory usage inside the JVM, a simple computation on each element should not impact memory much.
Objects are created to process each element, and they quickly become eligible for garbage collection. The garbage collector is triggered at a threshold and removes those objects. When the throughput increases, the garbage collector is simply triggered more often, so the memory usage stays stable.
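As a toy illustration (not your pipeline): the intermediate objects created per element are garbage almost immediately, so pushing more elements through mostly changes how often the collector runs, not the steady-state heap size.

```scala
import akka.actor.ActorSystem
import akka.stream.scaladsl.{Sink, Source}

object PerElementGarbage extends App {
  implicit val system: ActorSystem = ActorSystem("gc-demo")
  import system.dispatcher

  // Each element allocates a short-lived intermediate String; it becomes
  // garbage as soon as the next stage is done with it.
  Source(1 to 1000000)
    .map(i => s"element-$i".reverse)
    .runWith(Sink.ignore)
    .onComplete(_ => system.terminate())
}
```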

Hi @SkyLuc
I am looking at the GCP memory graph. In GCP we have allocated a maximum of 2 GB of memory for our application. As soon as our application is deployed, it starts consuming around 1.2 GB of memory, and that doesn’t vary when the load increases. So what might be the reason behind this behaviour?

Is my application taking around 1.2 GB of memory by default? If yes, can it be decreased?

My application is based on Akka Streams; it takes input from Kafka and, after some processing, sends it back to Kafka.
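Roughly this shape, using Alpakka Kafka (topic names, bootstrap servers and the processing step below are just placeholders):

```scala
import akka.actor.ActorSystem
import akka.kafka.scaladsl.{Consumer, Producer}
import akka.kafka.{ConsumerSettings, ProducerSettings, Subscriptions}
import org.apache.kafka.clients.producer.ProducerRecord
import org.apache.kafka.common.serialization.{StringDeserializer, StringSerializer}

object KafkaPassThrough extends App {
  implicit val system: ActorSystem = ActorSystem("kafka-stream")

  val consumerSettings =
    ConsumerSettings(system, new StringDeserializer, new StringDeserializer)
      .withBootstrapServers("kafka:9092") // placeholder
      .withGroupId("my-group")            // placeholder

  val producerSettings =
    ProducerSettings(system, new StringSerializer, new StringSerializer)
      .withBootstrapServers("kafka:9092") // placeholder

  def process(value: String): String = value // the real processing goes here

  Consumer
    .plainSource(consumerSettings, Subscriptions.topics("input-topic"))
    .map(record => new ProducerRecord[String, String]("output-topic", process(record.value())))
    .runWith(Producer.plainSink(producerSettings))
}
```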

The GCP memory graph is not precise enough to give you a good view of the memory usage of a JVM application. You would need to look at the JVM metrics to see how your application actually performs.
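In practice you would export those through whatever metrics library you already use (Kamon, a Prometheus JMX exporter, …), but just to sketch where the numbers come from (names below are illustrative):

```scala
import java.lang.management.ManagementFactory

object JvmHeapMetrics {
  // 'used' is what your objects currently occupy; 'committed' is roughly
  // what the container-level graph sees (plus off-heap memory).
  def logHeap(): Unit = {
    val heap = ManagementFactory.getMemoryMXBean.getHeapMemoryUsage
    val mb = 1024 * 1024
    println(s"heap used=${heap.getUsed / mb} MB, " +
      s"committed=${heap.getCommitted / mb} MB, max=${heap.getMax / mb} MB")
  }
}
```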

Also, are you running a Cloudflow application with Akka Streams components, or a generic Akka Streams application?
This section of the Discuss forum is more about Cloudflow applications.
It is not really a problem, but I was making the assumption that you were running Akka Streams in a Cloudflow application, and from your description it looks more like you are running a generic Akka Streams application.

I have deployed my generic Akka Streams application to Google Cloud and I am monitoring it there.

The JVM does ‘internal’ memory management: it allocates big chunks of memory from the OS, and then internally allocates objects from that memory. Most objects are allocated in the area the JVM calls the ‘heap’. When garbage collection occurs, there will be more free space in the heap, but the JVM holds on to that OS memory anyway, so it can use it when new objects are created. For that reason the memory usage visible on the OS level may seem rather constant, even if the JVM is creating and freeing objects within the heap all the time.
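To make that concrete, here is a small experiment you could run locally (numbers will vary per JVM version and garbage collector; some collectors do hand memory back to the OS):

```scala
object HeapStaysCommitted extends App {
  private val rt = Runtime.getRuntime
  private def report(label: String): Unit = {
    val mb = 1024 * 1024
    val used = (rt.totalMemory - rt.freeMemory) / mb
    val committed = rt.totalMemory / mb
    println(s"$label: used=$used MB, committed (what the OS sees)=$committed MB")
  }

  report("start")
  var data = Array.ofDim[Byte](200 * 1024 * 1024) // force the heap to grow
  report("after allocating 200 MB")
  data = null                                     // the array becomes garbage
  System.gc()                                     // request a collection
  report("after GC") // 'used' drops, but 'committed' typically stays high
}
```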

To tune your memory usage, I would recommend reading up on JVM memory management a bit. You can then reduce the size of the heap with a flag (such as -Xmx) when you start the JVM. You’ll have to get familiar with the tools to look at the various JVM memory sizes, though, to find out what the proper settings for your application are.
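For example, if you package the application with sbt-native-packager and the JavaAppPackaging archetype (an assumption on my part; adjust to however you build your image), you could cap the heap in build.sbt along these lines:

```scala
// build.sbt: the "-J" prefix tells the generated start script to pass
// these options on to the JVM rather than to the application itself.
Universal / javaOptions ++= Seq(
  "-J-Xms512m", // initial heap size
  "-J-Xmx512m"  // maximum heap size; hypothetical value, tune for your app
)
```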

(moved the topic to Akka Streams so it’s easier to find - though essentially it’s more of a general JVM question)