Memory Sharing Between Threads
About
In Java (and most modern programming languages), when we create multiple threads within the same process, they share a common memory space. This shared memory model allows threads to communicate and coordinate their actions by reading from and writing to shared variables or objects.
While this is powerful and efficient, it introduces complexity in terms of thread safety, data consistency, and visibility of changes. Understanding how memory is shared and managed across threads is essential for writing correct and performant multithreaded applications.
Why Threads Share Memory
All threads in a Java application run in the same process. A process is the operating system abstraction that provides memory, file handles, and other resources.
Since Java threads are lightweight and managed by the Java Virtual Machine (JVM), they:
Run within the same memory space (same heap).
Can reference the same objects.
Use their own execution stacks (method calls, local variables, etc.).
This design enables efficient communication between threads, unlike in multi-process architectures where communication requires IPC (Inter-Process Communication) mechanisms like sockets or pipes.
What’s Shared and What’s Not
| Memory Area | Shared? | Details |
| --- | --- | --- |
| Heap Memory | Yes | Includes all objects and class variables. Threads can read/write shared objects. |
| Stack Memory | No | Each thread has its own stack for local method variables; not visible to other threads. |
| Static Variables | Yes | Belong to the class, not the instance, hence shared among all threads. |
| Instance Variables | Conditional | Shared only if multiple threads hold a reference to the same object. |
| ThreadLocal Values | No | Each thread has its own isolated copy via ThreadLocal. |
| CPU Registers & Caches | No (by default) | Each CPU core/thread may cache values and not reflect them in main memory unless synchronized. |
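To make the table concrete, here is a minimal sketch (class and variable names are illustrative): the Counter object lives on the heap and is shared because both threads hold a reference to it, while the ThreadLocal value stays private to each thread.

```java
// Illustrative sketch: a heap object shared by reference vs. a ThreadLocal copy.
public class SharingDemo {
    // One object on the heap; both threads hold a reference to it, so it is shared.
    static class Counter { int value; }

    // Each thread gets its own independent copy via ThreadLocal.
    static final ThreadLocal<Integer> perThread = ThreadLocal.withInitial(() -> 0);

    public static void main(String[] args) throws InterruptedException {
        Counter shared = new Counter();

        Runnable task = () -> {
            shared.value++;                      // mutates the shared heap object
            perThread.set(perThread.get() + 1);  // mutates this thread's private copy
            System.out.println(Thread.currentThread().getName()
                    + " sees its perThread value as " + perThread.get());
        };

        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();

        // Both threads wrote to the same heap object (unsynchronized, so an update
        // could even be lost), but each thread's perThread value stayed at 1.
        System.out.println("shared.value = " + shared.value);
    }
}
```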
Dangers of Shared Memory
1. Race Condition
A race condition occurs when two or more threads access shared data and try to change it at the same time. The final outcome depends on the unpredictable timing of thread execution.
Threads “race” against each other to access or modify the same variable.
The program may produce different results on different runs even with the same input.
Happens due to lack of synchronization.
Example Scenario:
Two threads incrementing a shared counter simultaneously without locking it. One increment might get lost.
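A runnable sketch of this scenario (the class name and iteration count are illustrative): run it a few times and the final count is usually less than 2,000,000, and often different on each run.

```java
// Illustrative sketch: two threads increment a shared counter with no synchronization.
public class RaceConditionDemo {
    static int counter = 0;   // shared, unsynchronized

    public static void main(String[] args) throws InterruptedException {
        Runnable increment = () -> {
            for (int i = 0; i < 1_000_000; i++) {
                counter++;    // read -> add 1 -> write: not atomic
            }
        };

        Thread t1 = new Thread(increment);
        Thread t2 = new Thread(increment);
        t1.start(); t2.start();
        t1.join(); t2.join();

        // Expected 2,000,000, but the printed value is usually smaller
        // and typically differs from run to run.
        System.out.println("counter = " + counter);
    }
}
```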
2. Lost Update
This is a specific type of race condition where an update to a shared variable by one thread is overwritten or ignored because another thread wrote to it just before or after.
Multiple threads read the same value, update it, and write it back.
Since both used the same original value, one update "loses" the effect of the other.
The final result reflects only one update.
Example Scenario:
Both threads read a counter as 5, increment it to 6, and save it. Final value remains 6 instead of 7.
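Spelled out step by step (thread names are illustrative), the interleaving looks like this:

```java
// counter == 5; both threads execute counter++ (read -> compute -> write)
// Thread A: reads counter        -> sees 5
// Thread B: reads counter        -> also sees 5
// Thread A: computes 5 + 1       -> 6
// Thread B: computes 5 + 1       -> 6
// Thread A: writes counter = 6
// Thread B: writes counter = 6   -> A's increment is lost; final value is 6, not 7
```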
3. Visibility Problem
Even if operations happen in order, one thread might not see the updated value of a shared variable written by another thread due to CPU caching or compiler optimization.
Java threads may cache variables locally.
Changes in one thread might not be visible to others unless synchronization or volatile is used.
The result: a thread acts on stale data.
Example Scenario:
Thread A updates a flag to true, but Thread B keeps seeing it as false because it's using a cached copy.
4. Atomicity Violation
Even reading and writing a primitive (like int) isn't always safe when compound operations are involved.
Occurs when compound operations (like read-modify-write) are not performed atomically, i.e., they are interrupted between steps by other threads.
Even if visibility is fine, operations like x++ are not atomic.
They break down into read → compute → write.
Without synchronization, another thread might interleave between steps.
Example Scenario:
Multiple threads increment a value concurrently without locking. Final value is lower than expected due to lost increments.
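One common remedy, shown here as an illustrative sketch rather than a fix prescribed by this section, is to make the read-modify-write step atomic with AtomicInteger from java.util.concurrent:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch: the same two-thread increment, but with an atomic counter.
public class AtomicCounterDemo {
    static final AtomicInteger counter = new AtomicInteger(0);

    public static void main(String[] args) throws InterruptedException {
        Runnable increment = () -> {
            for (int i = 0; i < 1_000_000; i++) {
                counter.incrementAndGet();   // read -> add 1 -> write as one atomic step
            }
        };

        Thread t1 = new Thread(increment);
        Thread t2 = new Thread(increment);
        t1.start(); t2.start();
        t1.join(); t2.join();

        // Always prints 2000000: no interleaving can split the atomic increment.
        System.out.println("counter = " + counter.get());
    }
}
```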
5. Data Corruption
Multiple threads mutate a data structure without proper synchronization, causing inconsistent state.
This is the most severe outcome. When multiple threads modify shared data in an uncoordinated way, it may lead to inconsistent, invalid, or broken data.
Happens when updates are partially completed.
Can lead to invalid program state, crashes, or security vulnerabilities.
Common in data structures not designed for concurrency.
Example Scenario:
Two threads modify a shared list at the same time. One adds and the other removes, leading to a corrupted internal state or a ConcurrentModificationException.
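A sketch of how this can surface (class name and loop counts are illustrative; the exact failure depends on timing): one thread keeps adding to a plain ArrayList while another iterates and removes, which may throw a ConcurrentModificationException or leave the list in an inconsistent state.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Illustrative sketch: uncoordinated mutation of a non-thread-safe ArrayList.
public class DataCorruptionDemo {
    public static void main(String[] args) {
        List<Integer> list = new ArrayList<>();

        // Thread A keeps adding elements.
        Thread adder = new Thread(() -> {
            for (int i = 0; i < 100_000; i++) {
                list.add(i);
            }
        });

        // Thread B iterates and removes even numbers at the same time.
        Thread remover = new Thread(() -> {
            for (int round = 0; round < 100; round++) {
                Iterator<Integer> it = list.iterator();
                while (it.hasNext()) {
                    if (it.next() % 2 == 0) {
                        it.remove();
                    }
                }
            }
        });

        adder.start();
        remover.start();
        // Depending on timing, this may throw ConcurrentModificationException,
        // fail with another runtime exception, or silently corrupt the list's contents.
    }
}
```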
What is Memory Visibility?
In a multithreaded Java program, memory visibility refers to whether a change made by one thread is visible to another thread.
Java threads do not always see the most updated value of a shared variable.
This happens because threads can cache variables locally (e.g., in registers or CPU caches), and those cached values may not be in sync with main memory.
So, even if Thread A updates a variable, Thread B might continue to see an old value.
Example of Memory Visibility Problem
In the sketch below, the writer thread sets flag = true, but the reader thread may not see the update because the value of flag might be cached.
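A minimal version of such a writer/reader pair (class and variable names are illustrative): the reader spins until it observes flag == true, but without synchronization it may keep reading a stale cached value and never terminate.

```java
// Illustrative sketch: a visibility problem between a writer and a reader thread.
public class VisibilityDemo {
    static boolean flag = false;   // plain field: no visibility guarantee

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!flag) {
                // Busy-wait. The JIT may hoist the read of flag out of the loop,
                // so this thread can keep seeing the stale value false forever.
            }
            System.out.println("Reader finally saw flag = true");
        });

        Thread writer = new Thread(() -> {
            flag = true;
            System.out.println("Writer set flag = true");
        });

        reader.start();
        Thread.sleep(100);   // give the reader time to start spinning
        writer.start();
    }
}
```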
How to Fix It?
We can use:
The volatile keyword.
Proper synchronization (synchronized blocks or locks).
Classes from the java.util.concurrent package.
Declaring flag as volatile ensures visibility — changes to flag by one thread are immediately visible to others.
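Applied to the sketch above, the change is a single keyword (again illustrative):

```java
public class VisibilityFixedDemo {
    // volatile: writes to flag are published to main memory and reads are not
    // cached indefinitely, so the reader's while (!flag) loop is guaranteed
    // to see the writer's update and terminate.
    static volatile boolean flag = false;

    // ... writer and reader threads exactly as in the previous sketch ...
}
```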
What is the Java Memory Model (JMM)?
The Java Memory Model is the formal set of rules that:
Define how threads interact with memory.
Specify when changes to variables made by one thread become visible to others.
Define synchronization rules to prevent race conditions, visibility problems, and instruction reordering issues.
It governs:
Reads and writes of variables.
Synchronization actions (volatile, synchronized, locks, atomic operations).
Thread safety and ordering guarantees.
Without JMM:
The behavior of multithreaded code would be undefined or inconsistent across JVMs and hardware architectures.
CPUs and compilers can reorder instructions for optimization.
Without a memory model, there is no way to reason about correct synchronization.
JMM gives developers a well-defined contract for writing concurrent code.
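As an illustration of the reordering the JMM has to account for (class and variable names are illustrative): with no synchronization between the two threads below, the JMM permits the reader to observe y == 1 while still seeing x == 0, because nothing forces the two writes to become visible in program order.

```java
// Illustrative sketch: without a happens-before relationship, writes may appear out of order.
public class ReorderingDemo {
    static int x = 0;
    static int y = 0;   // making y volatile would forbid the surprising outcome below

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> {
            x = 1;
            y = 1;   // the JMM allows other threads to observe this write before x = 1
        });

        Thread reader = new Thread(() -> {
            if (y == 1 && x == 0) {
                // Permitted by the JMM when the threads are not synchronized.
                System.out.println("Saw y == 1 but x == 0");
            }
        });

        writer.start();
        reader.start();
        writer.join();
        reader.join();
    }
}
```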