Java Memory Model (JMM) Tutorial

In this section, we will learn what the Java memory model is and how it works.

Note: we’re assuming you’re already familiar with multithreading and concurrency in Java.

What is the Java Memory Model (JMM)?

When you write a program, it has instances of classes, instance variables, static variables, and so on. Each of these members is stored in the main memory (in the heap area as well as the Metaspace).

Now, when we create a thread in a program, each thread has a working memory, which consists of the CPU registers and the caches that sit between the CPU and the main memory. Let’s say a thread starts working and one of its instructions updates the value of an instance variable. What the thread does is take a copy of that instance variable into its working memory and update the value there, in its working memory.

In other words, when multiple threads are working in a program, a read or write to a member that lives in the main memory (the heap area) does not necessarily reach the main memory immediately.

So, the problem that may happen is that one thread takes a copy of an instance variable, runs some operations on it, and finally updates the variable with a new value, but stores the updated value only in its working memory instead of the main memory.

This is not a bug or a mistake in the design of Java. It is intentional: CPU registers and caches are much closer to the CPU, so reading and writing data there is a lot faster than going all the way to the main memory (RAM).

The problem, though, is that threads cannot see each other’s working memory. Their only way of sharing data and communicating is through the shared main memory. So if one thread replaces the old value of an instance variable with a new one but temporarily keeps the result in its working memory, and another thread then requests the value of the same instance variable, the second thread will read stale (outdated) data, because the first thread hasn’t updated the main memory yet.

The Java Memory Model (JMM) specifies how, when, and in what order program variables are read from and written to the main memory. If we follow the rules of the JMM, we can design a program so that no thread reads an outdated value of a variable and no thread modifies a variable while another thread is still working on it. Hence, we can create a multithreaded and predictable program.
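
To make this concrete, here is a minimal sketch (the class and field names are our own, and whether the reader thread actually observes the stale value depends on the JVM, the JIT compiler, and the hardware; the point is only that nothing in this code guarantees visibility):

public class StaleReadDemo {

    // NOT volatile: a reader thread is allowed to keep a cached copy of this
    // flag in its working memory.
    private static boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            // May spin forever on some JVMs, because the write made by the
            // main thread below might never become visible to this thread.
            while (running) {
                // busy-wait
            }
            System.out.println("Reader finally saw running == false");
        });
        reader.start();

        Thread.sleep(1000);   // give the reader time to start looping
        running = false;      // this write may stay in the main thread's working memory
        System.out.println("Main thread set running to false");
    }
}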

Thread Working Memory (Stack Memory as well as Caches and CPU Registers)

When a thread starts to work, it gets a small portion of the main memory (RAM) known as its stack. This is the area where it invokes the bodies of the methods it calls and runs their instructions.

But a thread also uses other memory areas, such as the CPU caches and registers. These memories are much closer to the CPU, and working with them takes far less time than communicating directly with the main memory. So it makes sense to use them to store data temporarily in order to complete tasks faster, and then, once the work is done (for example, when the instructions of a method have run completely), move that data to the main memory and update any related instance or static variables.

In short, the CPU registers and the different levels of cache that a thread may use during its execution are called the working memory of that thread.

Java Memory Model and Atomicity, Visibility, and Ordering

According to JMM, when the instructions of a program are executed, there are three important aspects to be considered:

  • Atomicity
  • Visibility
  • Ordering

What is Atomicity in Java?

According to the JMM, basic reads and writes execute atomically. This means that reading or writing an instance variable, a static variable, or an array element is an all-or-nothing operation: another thread can never observe a half-written value.

Note that if the data type of the target variable is long or double, a read or write is not guaranteed to be atomic unless we declare the variable as volatile. (Volatile variables are explained in a later section.)
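
For example, here is a short sketch of the difference (the class and field names are just for illustration):

public class Counters {

    // A plain long: the JMM allows a write to be split into two 32-bit halves,
    // so another thread could, in theory, observe a half-written ("torn") value.
    long plainCounter;

    // Declared volatile: the JMM guarantees that reads and writes of this
    // 64-bit value are atomic (and also immediately visible to other threads).
    volatile long safeCounter;
}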

What is Visibility in Java?

Visibility tells us when and how a value written by one thread becomes visible to other threads.

We will get into more details of visibility later, but for now, some of its rules are as follows:

  • The first time a thread reads the value of a variable, it reads it from the main memory (RAM). This might be the initial value or a value written by another thread.
  • Variables can be made volatile using the `volatile` keyword. Writing to a volatile variable or reading from it always happens against the main memory; such variables are never cached in the working memory of a thread. So if a thread writes a value to this type of variable, that value is immediately flushed to the main memory (RAM). (See the sketch after this list.)
  • When a thread finishes its work, any values remaining in its working memory are flushed back to the target variables in the main memory. At that point, other threads (if any) can see the updated values. (For example, when a thread has invoked the body of a method in its stack and then exits that method, its work on that method is done, so it flushes the values from its working memory to the main memory.)
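
Here is the promised sketch: if the flag from the earlier StaleReadDemo example is declared volatile, the reader thread is guaranteed to see the update (again, the names are our own):

public class VolatileFlagDemo {

    // volatile: every read comes from the main memory and every write is
    // flushed there immediately, so the update below is guaranteed to be seen.
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (running) {
                // busy-wait until the main thread clears the flag
            }
            System.out.println("Reader saw running == false and stopped");
        });
        reader.start();

        Thread.sleep(1000);
        running = false; // guaranteed to become visible to the reader thread
    }
}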

What is Ordering in Java Multithreading?

The JMM states that within a single thread, instructions execute in program order, from top to bottom. For example, when a thread starts the `run()` method, it runs the statements one by one from top to bottom, just like a single-threaded program.

But according to the JMM, there is no ordering guarantee between multiple threads. They compete with each other, and it’s not clear which one will run first and which one comes next. However, Java provides tools, such as the `synchronized` keyword, that we can use to impose some ordering between the threads of a program.

Java Multithreading and Critical Section:

A part of a program (a code block or a method, for example) that may be executed by multiple threads concurrently, and hence cause an unpredictable result, is called a critical section.

In Java, there are multiple built-in constructs that could be applied to a critical section to make sure that part of the program is accessed only by one thread at a time in order to prevent an unpredictable result.

This process of controlling and coordinating threads around a critical section is known as thread synchronization.

For example, let’s say we have a program with two threads named t1 and t2, and two methods called `increment()` and `decrement()`. Both of these methods have access to a class variable named `count`, which is of type int and currently has the value 0. Calling `increment()` increases the value of the variable by one, and calling `decrement()` decreases it by one. Let’s say the t1 thread runs the `increment()` method and the t2 thread runs the `decrement()` method. Now, if you run such a program for about 3 seconds, do you think the final value of the `count` variable will stay 0, or will it be something different? Hint: we can’t control which thread gets the CPU’s time, or for how long!

Example: critical section and multithreading in Java

public class Main {
    public static int count = 0;

    public static void main(String[] args) {

        // Two threads work on the same shared variable `count` concurrently.
        Thread t1 = new Thread(Main::increment);
        Thread t2 = new Thread(Main::decrement);
        t1.start();
        t2.start();
    }

    public static void increment() {
        while (true) {
            count++;
            try {
                Thread.sleep(250);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            System.out.println("The value of the count variable is: " + count);
        }
    }

    public static void decrement() {
        while (true) {
            count--;
            try {
                Thread.sleep(200);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            System.out.println("The value of the count variable is: " + count);
        }
    }
}

Output:

The value of the count variable is: -1
The value of the count variable is: 0
The value of the count variable is: -1
The value of the count variable is: -2
The value of the count variable is: -1
The value of the count variable is: -2
The value of the count variable is: -1
The value of the count variable is: -2
The value of the count variable is: -1
The value of the count variable is: -2
The value of the count variable is: -1
The value of the count variable is: -2
The value of the count variable is: -1
The value of the count variable is: -2
The value of the count variable is: -3
The value of the count variable is: -2
The value of the count variable is: -3
The value of the count variable is: -2

The code that reads and modifies the `count` variable in this program is a critical section. Two threads compete with each other to access the variable and change its value, so we get an unpredictable result. (Note: the underlying thread scheduler decides which thread gets the CPU’s time and for how long. One thread may get the CPU for 10 microseconds while the other gets it for 5 microseconds. For this reason, unless we properly synchronize a critical section, the result will be unpredictable.)

Note: in the example above, we’ve used the `sleep()` method; you can learn about the sleep() method in the next section.

Types of Thread Synchronization:

There are two approaches that can be used to synchronize threads around a critical section:

  • Mutual exclusion synchronization
  • Conditional synchronization

Java Thread Synchronization: Mutual Exclusion

With mutual exclusion, only one thread at a time can access a critical section. For example, let’s say there’s a method in your program that should not be accessed by multiple threads at the same time (hence, a critical section). By applying mutual-exclusion synchronization to this method, we can rest assured that only one thread will access that method at a time.

Let’s explain theoretically how mutual exclusion is done in Java:

Mutual exclusion is done using something called a `lock`. We will explain where this lock comes from, but for now, just remember that a lock has two operations:

  • Acquire
  • Release

Basically, when synchronization is applied to a critical section (for example, to a method or a block of code), the threads competing to access that critical section first have to get the lock (that is, acquire the lock). The first thread that gets the lock has permission to enter the critical section and run the instructions there. If another thread then wants to access that critical section, it will be blocked until the thread currently inside the critical section is done. After that, the finishing thread will `release` the acquired lock, giving other threads the opportunity to take the lock and run the critical section.

Note: we still can’t predict which of the competing threads will take the lock, but at least we’re now assured that only one thread at a time can access the critical section.

(The practical implementation of mutual-exclusion synchronization is explained in the rest of this section).
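
As a quick preview of the acquire/release idea, here is a minimal sketch that uses the standard java.util.concurrent.locks.ReentrantLock class to make the two steps explicit (the BankAccount class and its fields are just illustrative; the rest of this section uses the `synchronized` keyword rather than an explicit lock object):

import java.util.concurrent.locks.ReentrantLock;

public class BankAccount {

    private final ReentrantLock lock = new ReentrantLock();
    private int balance = 0;

    public void deposit(int amount) {
        lock.lock();           // acquire: only one thread at a time passes this point
        try {
            balance += amount; // the critical section
        } finally {
            lock.unlock();     // release: a blocked thread can now take the lock
        }
    }
}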

Java Thread Synchronization: Conditional Synchronization

The second method of synchronization is achieved using a condition-variable and three operations:

  • Wait
  • Signal
  • Broadcast

Here’s how it works:

The condition variable defines the condition on which the threads synchronize around a critical section. The `wait` operation makes a thread wait until a condition becomes true. The `signal` operation notifies one of the waiting threads that the condition is now true, so it can enter the critical section and continue running the instructions there. If there are multiple threads all waiting for a condition to become true, the `broadcast` operation wakes up all of them, causing them to compete with each other so that one of them can enter the critical section.

Note: in the wait(), notify(), and notifyAll() section, we’ve explained the practical implementation of this method of synchronization.
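
As a rough preview of how these three operations map onto Java’s built-in wait(), notify(), and notifyAll() methods, here is a minimal sketch (the MessageBox class and its fields are just illustrative; the full explanation comes in that later section):

public class MessageBox {

    private String message;        // the shared data
    private boolean ready = false; // the "condition"

    // Wait: block until the condition (ready == true) holds.
    public synchronized String take() throws InterruptedException {
        while (!ready) {
            wait();                // releases the monitor's lock and waits
        }
        ready = false;
        return message;
    }

    // Signal: make the condition true, then wake up one waiting thread.
    public synchronized void put(String msg) {
        message = msg;
        ready = true;
        notify();                  // or notifyAll() to "broadcast" to all waiters
    }
}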

Object’s Monitor and Thread Synchronization:

A monitor is a program construct that the JVM associates with each object you create in a Java program.

A monitor has a `lock`, a `condition-variable`, and the operations associated with that condition variable (the wait, signal, and broadcast operations).

We achieve thread synchronization in Java using this monitor!

Also, note that a monitor has two sections called `Entry-Set` and `Waiting-Set`. We will talk about these two sets later in this section and a couple of upcoming sections.

Alright, now, let’s move on and see how to practically implement synchronization in Java.

Java synchronized

The first way of synchronization is done using the `synchronized` keyword.

This keyword is used when we want to lock a method or a block of instructions and allow only one thread at a time to execute that method/block.

As mentioned before, a lock is part of a program construct that is called `monitor` and each object in Java has a monitor associated with it.

Now, rule number one to remember: when a thread gets the lock of an object’s monitor for a critical section (a method or a block), then as long as that thread holds the lock, no other thread can get that lock, neither for the same critical section nor for any other synchronized critical section (if any) of the same object.

Note that a thread acquires the monitor’s lock automatically the moment it enters a synchronized critical section. (Of course, if multiple threads compete for a critical section, it’s not clear which thread will win the monitor’s lock; but it is clear that once one of them gets the lock, the rest will be blocked until the one holding the lock exits the critical section. After that, the JVM chooses another thread that wanted to enter the critical section and gives it the monitor’s lock.)

Note: threads that are blocked from entering a critical section wait in the `Entry-Set` of the monitor whose lock they are trying to acquire.

Alright, let’s first move on to see how we can use this keyword and synchronize a critical section, and after that, we will continue our discussions related to the acquisition and release of a lock.

Java synchronized Syntax:

This is how we can add the synchronized keyword to a method or a block of instructions:

access-modifier synchronized return-type methodName(){…}

For example:

public synchronized void test(){…}

Here’s the syntax for blocks:

synchronized (instance-object){…}

Note that synchronization happens relative to an object’s monitor. So if you have a block of code somewhere in your program and want to synchronize access to it, you need to make clear which object the synchronization is relative to, so that the JVM knows which object’s monitor lock to use. For this reason, we put an object reference in the parentheses of `synchronized` when it is used on a block of instructions.
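
For example, here is a minimal sketch of the block form (the Inventory class and the lock object are just illustrative):

public class Inventory {

    // A dedicated object whose monitor's lock guards access to `stock`.
    private final Object lock = new Object();
    private int stock = 0;

    public void addStock(int amount) {
        synchronized (lock) { // acquire the lock of the `lock` object's monitor
            stock += amount;  // only one thread at a time runs this block
        }                     // the lock is released automatically here
    }
}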

But when we use the `synchronized` keyword on an instance method, the target object is already clear: it is the object on which the method is invoked.

Alright, things might seem a bit confusing at first, so let’s move on and run an example to see how this synchronized keyword works in Java programs.

Example: using Java synchronized

public class Main {

    public static void main(String[] args) {

        Main main = new Main();

        // Both threads call the same synchronized method on the same object,
        // so they compete for the lock of the `main` object's monitor.
        Thread t1 = new Thread(main::m1, "Thread-one");
        Thread t2 = new Thread(main::m1, "Thread two");

        t1.start();
        t2.start();
    }

    public synchronized void m1() {
        for (int i = 0; i < 5; i++) {
            try {
                Thread.sleep(500);
                System.out.println(Thread.currentThread().getName());
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}

Output:

Thread-one
Thread-one
Thread-one
Thread-one
Thread-one
Thread two
Thread two
Thread two
Thread two
Thread two

How Does the synchronized Keyword Work in Java?

In the last example, there are two threads named `t1` and `t2` that want to access a method named `m1`. If you look at this method, you can see that it has been synchronized using the `synchronized` keyword. Here’s what happened when we launched the program:

  1. The two threads started to compete with each other to access the `m1` method.
  2. Because this method belongs to the `main` object and is synchronized, the threads need to get the lock of this object’s monitor. So both of them enter the monitor of the `main` object (they get into the entry-set of the monitor) and ask for its lock.
  3. At this point, one of them gets the lock, which is the permission to enter the method and run its instructions in its own stack.
  4. Meanwhile, the other thread is blocked for as long as the thread holding the lock is running the method.
  5. Note that the second thread is now blocked in the entry-set of the object’s monitor, so it can’t do anything.
  6. After the first thread has finished executing the method and exits from it, it releases the lock (so the monitor gets the lock back), and at this point the object’s monitor passes the lock to the thread that was blocked in its entry-set. Now the second thread starts to run the method’s instructions in its stack.

This is how the synchronization happened in this example.

Note that if we hadn’t used the `synchronized` keyword on the `m1` method, both threads would have accessed the method and run it concurrently, which would have produced an unpredictable result (at the very least, the order of the output wouldn’t be the way we saw in the last example).
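
Tying this back to the earlier `count` example, one possible fix (a sketch, not the only option) is to declare both methods as synchronized. Because they are static, they share the lock of the class object’s monitor, so the increments and decrements can no longer interleave:

public class SafeCounter {

    private static int count = 0;

    // Both methods are static and synchronized, so they share the lock of the
    // SafeCounter.class monitor: only one thread at a time can be inside
    // either method.
    public static synchronized void increment() {
        count++;
    }

    public static synchronized void decrement() {
        count--;
    }

    public static int value() {
        return count;
    }
}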

More to Read:

In the next couple of sections, we will continue our discussions related to multithreading and the tools that Java provided related to threads.
