Types of Memory
About
The Java Virtual Machine (JVM) memory structure plays a crucial role in the performance and execution of Java applications. JVM memory is divided into different regions, each serving a specific purpose. Understanding these memory areas is essential for debugging memory leaks, optimizing performance, and configuring JVM memory settings effectively.
JVM memory is broadly categorized into two types:
Heap Memory → Stores objects and class instances.
Non-Heap Memory → Stores metadata, method area, and thread-related structures.
Each of these is further divided into sub-regions. Here's a high-level breakdown:

| Memory Area | Purpose |
| --- | --- |
| Heap Memory | Stores Java objects and dynamically allocated data |
| Stack Memory | Stores method call frames, local variables, and execution state |
| Metaspace | Stores class metadata and method definitions (Java 8+) |
| Code Cache | Stores JIT-compiled code for optimized execution |
| Native Memory | Memory allocated outside the JVM, used by the OS, JNI, and threads |
1. Heap Memory
The largest memory area in the JVM, used for storing objects and class instances.
Divided into Young Generation (Eden, Survivor Spaces) and Old Generation (Tenured Space).
Garbage Collection (GC) periodically removes unused objects.
Objects with longer lifetimes eventually move to the Old Generation.
Memory issues like OutOfMemoryError: Java heap space occur if the heap is exhausted.
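A minimal sketch of observing heap sizing at runtime using the standard Runtime API; the printed numbers depend on the -Xms/-Xmx settings of the JVM that runs it:

```java
public class HeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long maxMb = rt.maxMemory() / (1024 * 1024);     // upper bound, roughly -Xmx
        long totalMb = rt.totalMemory() / (1024 * 1024); // currently committed heap
        long freeMb = rt.freeMemory() / (1024 * 1024);   // unused part of the committed heap
        System.out.println("max=" + maxMb + " MB, committed=" + totalMb
                + " MB, free=" + freeMb + " MB");
    }
}
```

When used heap approaches the max value and GC cannot reclaim enough space, the OutOfMemoryError above is thrown.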
2. Stack Memory
Stores method call stack frames, including local variables and method execution data.
Each thread has its own stack, ensuring thread isolation.
Memory is allocated and deallocated automatically as methods are called and return.
Limited in size, causing StackOverflowError when exceeded (e.g., deep recursion).
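The deep-recursion failure mode can be reproduced with a short sketch; each call pushes a new frame onto the thread's stack until the -Xss limit is hit:

```java
public class StackDemo {
    // each call adds a stack frame; unbounded recursion eventually
    // exhausts the thread stack and throws StackOverflowError
    static int depth(int n) {
        return depth(n + 1);
    }

    public static void main(String[] args) {
        try {
            depth(0);
        } catch (StackOverflowError e) {
            System.out.println("stack exhausted");
        }
    }
}
```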
3. Metaspace (Replaced PermGen in Java 8)
Stores class metadata, method definitions, and runtime constant pools.
Unlike PermGen, Metaspace resides in native memory and can dynamically grow.
Memory issues may result in OutOfMemoryError: Metaspace if it reaches the system limit.
Controlled using the -XX:MetaspaceSize and -XX:MaxMetaspaceSize JVM flags.
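Metaspace usage can be inspected at runtime via the standard management API; a minimal sketch, assuming a HotSpot JVM where the pool is named "Metaspace":

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class MetaspaceInfo {
    public static void main(String[] args) {
        // On HotSpot (Java 8+), class metadata is reported as a
        // non-heap memory pool named "Metaspace"
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if ("Metaspace".equals(pool.getName())) {
                long usedKb = pool.getUsage().getUsed() / 1024;
                System.out.println("Metaspace used: " + usedKb + " KB");
            }
        }
    }
}
```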
4. Code Cache
Stores JIT-compiled native code to improve execution performance.
Reduces the need to repeatedly interpret Java bytecode.
Optimized by JVM to ensure efficient execution of frequently used code paths.
5. Native Memory
Memory allocated outside the JVM heap, used by JNI (Java Native Interface) and direct byte buffers.
Managed by the operating system rather than the JVM.
Excessive native memory usage can lead to system-level OutOfMemoryError.
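One common source of native memory usage is direct byte buffers, which are allocated off-heap; a small sketch contrasting them with ordinary heap buffers:

```java
import java.nio.ByteBuffer;

public class DirectBufferDemo {
    public static void main(String[] args) {
        // Direct buffers live in native memory, outside the Java heap;
        // their total is capped by -XX:MaxDirectMemorySize, not -Xmx
        ByteBuffer direct = ByteBuffer.allocateDirect(1024 * 1024); // 1 MB off-heap
        ByteBuffer onHeap = ByteBuffer.allocate(1024 * 1024);       // 1 MB on the heap
        System.out.println("direct: " + direct.isDirect());  // true
        System.out.println("onHeap: " + onHeap.isDirect());  // false
    }
}
```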
Percentage Allocation of JVM Memory Types
The percentage of total memory allocated to each JVM memory type depends on the JVM implementation, configuration, and runtime workload. However, typical allocations follow these general guidelines.
| Memory Type | Typical Share | Notes |
| --- | --- | --- |
| Heap Memory | 50% - 80% | The largest portion of JVM memory, used for object storage and garbage collection. The exact size is controlled using -Xms (initial size) and -Xmx (maximum size). |
| Stack Memory | 1% - 10% per thread | Small but critical. Stores method call stacks, local variables, and function execution details. Size can be adjusted using -Xss. |
| Metaspace | 5% - 20% | Stores class metadata, method data, and runtime constant pools. Unlike the old PermGen, it grows dynamically. Controlled with -XX:MetaspaceSize and -XX:MaxMetaspaceSize. |
| Code Cache | 5% - 15% | Stores compiled JIT code for optimized execution. Can be tuned using -XX:ReservedCodeCacheSize. |
| Native Memory | Varies (10% - 30%) | Used by JNI, direct buffers, thread stacks, and OS-level memory allocations. Not directly managed by the JVM but can impact overall system performance. |
The heap takes the largest portion as it stores most runtime objects.
The stack is relatively small per thread but scales with the number of active threads.
Metaspace usage depends on the number of loaded classes and can grow dynamically.
The Code Cache holds JIT-compiled code and varies based on execution patterns.
Native Memory usage depends on external libraries, threads, and OS interactions.
These allocations can be adjusted using JVM options to optimize performance based on the application's needs.
JVM Memory Allocation in a Spring Boot Service
Let's consider a Spring Boot microservice running in an OpenShift pod with the following JVM memory configuration:

Total available memory for the container: 2 GB
JVM Heap Size (-Xmx): 1 GB
JVM Initial Heap Size (-Xms): 512 MB
Stack Size (-Xss): 512 KB per thread
Metaspace Size (-XX:MaxMetaspaceSize): 256 MB
Code Cache Size (-XX:ReservedCodeCacheSize): 128 MB
Estimated Memory Breakdown
| Memory Type | Size Allocation | Percentage (%) |
| --- | --- | --- |
| Heap Memory | 1024 MB (1 GB) | 50% |
| Stack Memory | 128 MB (for ~256 threads) | 6% |
| Metaspace | 256 MB | 12% |
| Code Cache | 128 MB | 6% |
| Native Memory | 512 MB (remaining for OS, buffers, JNI, etc.) | 25% |
Heap Memory (1 GB)
Used for storing objects created by the Spring Boot application, such as controllers, service beans, repository objects, and caches.
Garbage Collection (GC) will periodically reclaim unused objects.
Stack Memory (~128 MB for multiple threads)
Each thread gets a fixed stack size (512 KB per thread).
A Spring Boot app handling concurrent requests may spawn ~256 threads, requiring around 128 MB in total.
More threads increase stack memory usage, potentially leading to OutOfMemoryError: unable to create new native thread.
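The back-of-the-envelope stack arithmetic can be written out as a quick sketch; the thread count and -Xss value are the example assumptions above, not measurements:

```java
public class StackBudget {
    public static void main(String[] args) {
        // rough stack budget: each thread reserves the -Xss amount
        int threads = 256; // assumed concurrent worker threads
        int xssKb = 512;   // -Xss512k
        int totalMb = threads * xssKb / 1024;
        System.out.println("~" + totalMb + " MB of stack memory"); // ~128 MB
    }
}
```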
Metaspace (256 MB)
Stores class metadata, including Spring Boot's dynamically generated classes (proxies, Hibernate entities, etc.).
Since Spring Boot loads many classes dynamically, a larger Metaspace allocation helps avoid OutOfMemoryError: Metaspace.
Code Cache (128 MB)
Stores JIT-compiled methods to speed up execution.
If insufficient, JIT optimizations may suffer, leading to slower application performance.
Native Memory (512 MB)
Required for OS-level functions, thread management, buffers, socket connections, and JNI (e.g., database drivers, native libraries like Netty for networking).
If the pod runs multiple services or many threads, native memory usage will be higher.
Considerations for OpenShift Perspective
OpenShift enforces memory limits at the container level. If the JVM exceeds the limit, OOMKilled events occur.
Tuning Garbage Collection (G1GC, ZGC) can help balance heap allocation and GC overhead.
Kubernetes/OpenShift resource requests and limits should be properly defined (requests.memory, limits.memory).
Using -XX:MaxRAMPercentage=75 allows the JVM to adapt its heap size dynamically based on available pod memory.
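The heap size the JVM derives from -XX:MaxRAMPercentage is simple arithmetic; a sketch using the example pod's 2 GB limit (an assumed value, matching the scenario above):

```java
public class RamPercentage {
    public static void main(String[] args) {
        // With -XX:MaxRAMPercentage=75 in a 2 GB pod, the derived
        // max heap is roughly 75% of the container memory limit
        long containerMb = 2048;        // pod memory limit (example)
        double maxRamPercentage = 75.0; // -XX:MaxRAMPercentage value
        long maxHeapMb = (long) (containerMb * maxRamPercentage / 100);
        System.out.println("max heap ~" + maxHeapMb + " MB"); // 1536 MB
    }
}
```

The remaining ~25% stays available for Metaspace, stacks, code cache, and native allocations, which is why the percentage is kept well under 100.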
Commonly Configured JVM Memory Parameters
In a containerized environment like OpenShift, not all JVM memory settings are explicitly configured. Many are left to default values or dynamically managed based on the container's available memory. However, some key parameters are frequently set to control memory usage and avoid OutOfMemoryError (OOM) or excessive garbage collection.
1. Commonly Configured JVM Memory Parameters
| Parameter | Description | Common Usage in OpenShift |
| --- | --- | --- |
| -Xmx (Max Heap) | Defines the maximum heap size | Set to 50-75% of the container's memory (-Xmx512m for a 1 GB pod) |
| -Xms (Initial Heap) | Defines the initial heap size | Usually the same as -Xmx to avoid heap resizing overhead |
| -Xss (Thread Stack Size) | Defines the memory per thread stack | Typically 512 KB - 1 MB per thread (-Xss512k) |
| -XX:MetaspaceSize | Initial size for Metaspace | Usually 128 MB - 256 MB, auto-expands if needed |
| -XX:MaxMetaspaceSize | Maximum Metaspace limit | Not always set, but useful to prevent unbounded growth |
| -XX:ReservedCodeCacheSize | JIT code cache size | Defaults to 240 MB, may be tuned for high-performance apps |
| -XX:MaxRAMPercentage | Allows the JVM to allocate heap as a percentage of container memory | Preferred over -Xmx in containers (e.g., -XX:MaxRAMPercentage=75) |
| -XX:+UseContainerSupport | Enables the JVM to respect container limits | Enabled by default in Java 10+ (no need to set manually) |
| -XX:+HeapDumpOnOutOfMemoryError | Dumps heap memory on OOM for debugging | Often enabled for production (-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/dump.hprof) |
| -Duser.timezone=UTC | Ensures timezone consistency across containers | Common for global deployments |
2. Less Commonly Set (But Useful in Specific Cases)
| Parameter | When to Use It |
| --- | --- |
| -XX:+UseG1GC | Default GC for Java 9+, good for moderate heap sizes (1 GB - 4 GB) |
| -XX:+UseZGC | For low-latency applications, needs Java 11+ |
| -XX:NewRatio=2 | Controls the Old-to-Young Generation size ratio (useful for tuning young generation collection) |
| -XX:SurvivorRatio=8 | Fine-tunes object survival rate before promotion to the Old Generation |
| -XX:+ExitOnOutOfMemoryError | Ensures pod restart on OOM instead of getting stuck |
| -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap | Java 8 workaround to respect container memory limits |
3. Which Parameters Are Typically Left as Default?
Code Cache (-XX:ReservedCodeCacheSize) → The JVM manages it well unless JIT optimizations are required.
Native Memory (OS-level allocations) → The JVM handles it dynamically.
Garbage Collector (-XX:+UseG1GC or the default GC) → Unless tuning for low latency, JVM defaults are good.
4. Example Configuration for OpenShift (1GB Memory Pod)
JAVA_OPTS="-XX:MaxRAMPercentage=75 -XX:MetaspaceSize=128m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/dump.hprof -Duser.timezone=UTC"
or
JAVA_OPTS="-Xms512m -Xmx750m -Xss512k -XX:+UseG1GC -XX:MetaspaceSize=128m -XX:+HeapDumpOnOutOfMemoryError"