Multithreading principle

Posted May 27, 2020 · 28 min read

1. Processes and threads

A. Process

In a computer, each running task is a process: a browser is one process, a video player is another, and a music player and Word are likewise separate processes.

B. Thread

Some processes need to perform several subtasks at the same time; we call these subtasks threads.

C. Process VS Thread

A process contains its threads. Multitasking can be achieved with multiple processes, with multiple threads inside a single process, or with a mix of multiple processes and multiple threads.

Advantages of multi-process:
Stability is higher than with multi-threading: if one process crashes, the other processes are unaffected, whereas in a multi-threaded program a crash in any thread brings down the whole process.

Disadvantages of multi-process:

(1) Creating a process is more expensive than creating a thread, especially on Windows.
(2) Inter-process communication is slower than inter-thread communication, because threads communicate simply by reading and writing the same variables, which is very fast.

2. Multithreading

A Java program is actually a JVM process. The JVM process uses a main thread to execute the main() method. Inside the main() method, we can start multiple threads.
In addition, the JVM has other worker threads responsible for garbage collection.

Compared with single-threaded programs, the defining characteristic of multi-threaded programming is that threads often need to read and write shared data, which requires synchronization.

A. Thread creation

Java uses a Thread object to represent a thread; a new thread is started by calling start(), and a thread object may call start() only once.
The code the thread executes is written in its run() method; thread scheduling is determined by the operating system.
Thread.sleep() suspends the current thread for a period of time.
1 . Derive a custom class from Thread, and then override the run() method.
Thread t = new MyThread();
t.start();

class MyThread extends Thread {
    @Override
    public void run() {}
}
2 . When creating a Thread instance, pass in a Runnable instance.
Thread t = new Thread(new MyRunnable());
t.start();

class MyRunnable implements Runnable {
    @Override
    public void run() {}
}
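Both creation styles can be combined into one runnable sketch; the volatile flag (a name chosen for this example) shows that run() actually executed on the new thread:

```java
public class ThreadDemo {
    static volatile boolean ran = false;

    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> ran = true); // Runnable passed as a lambda
        t.start();                               // start() may be called only once
        t.join();                                // wait for the new thread to finish
        System.out.println("ran = " + ran);      // ran = true
    }
}
```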

B. Thread status

In a Java program, a thread object can only call the start() method once to start a new thread, and execute the run() method in the new thread.
Once the run() method is executed, the thread ends.
1 . The states of a Java thread are as follows
New
The thread has been created but not yet started.
Runnable
The running thread is executing the Java code of the run() method.
Waiting (waiting indefinitely)
The running thread is waiting for some operation, with no time limit:

The Object.wait() method without a timeout parameter.
The Thread.join() method without a timeout parameter.
The LockSupport.park() method.
Timed Waiting (waiting with a time limit)
The running thread is waiting for a timed operation:

The Thread.sleep() method.
The Object.wait() method with a timeout parameter.
The Thread.join() method with a timeout parameter.
The LockSupport.parkNanos() method.
The LockSupport.parkUntil() method.
Blocked
The running thread is suspended, waiting to acquire a monitor lock.
Terminated
The thread has terminated because the run() method has finished executing.
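The state transitions above can be observed with Thread.getState(); a sketch (the middle print depends on timing, so it is only what you would typically see):

```java
public class StateDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> {
            try { Thread.sleep(200); } catch (InterruptedException ignored) {}
        });
        System.out.println(t.getState()); // NEW: created but not started
        t.start();
        Thread.sleep(50);                 // give the thread time to enter sleep()
        System.out.println(t.getState()); // typically TIMED_WAITING (inside sleep)
        t.join();
        System.out.println(t.getState()); // TERMINATED: run() has finished
    }
}
```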

C. Threading methods

Thread.start()
Start a thread.
Thread.join()
Wait for a thread to finish execution.
Thread.interrupt()
Interrupt the thread; use the isInterrupted() method to check whether the thread has been interrupted.
Thread.setDaemon(true)
Mark the thread as a daemon thread; daemon threads are not waited for when the JVM exits.
Thread.setPriority(int n)
Set the thread priority (1 ~ 10; the default is 5).
Thread.currentThread()
Get the current thread.
synchronized
Lock and unlock:

Find the code block that modifies the shared variable.
Select a shared instance as the lock.
Wrap that code in synchronized(lockObject) { }.
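The three steps above can be sketched as a shared counter (the class and lock names are illustrative):

```java
public class SyncCounter {
    static int count = 0;
    static final Object lock = new Object(); // a shared instance chosen as the lock

    static void increment() {
        synchronized (lock) { // only one thread at a time may enter this block
            count++;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> { for (int i = 0; i < 10_000; i++) increment(); };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(count); // always 20000; without the lock, often less
    }
}
```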

3. Thread pool

A thread pool is a form of multi-threaded processing: tasks are added to a queue and are executed automatically by a set of pre-created threads. Each thread uses the default stack size and runs at the default priority.

A. Advantages of using thread pools

1. Thread creation and destruction carry system overhead; creating and destroying threads too frequently hurts overall processing efficiency. A thread pool reduces this cost by reusing threads.
2. It improves the responsiveness of the system: when a task arrives, it can be executed immediately on an existing thread instead of waiting for a new thread to be created.
3. It makes it easy to bound the number of concurrent threads; creating threads without limit can exhaust memory (OOM) and cause excessive CPU context switching.

B. Thread pool constructor

The thread pool abstraction is the Executor interface; the concrete implementation is the ThreadPoolExecutor class.
ThreadPoolExecutor provides four constructors:
1. public ThreadPoolExecutor
       (
        int corePoolSize,
        int maximumPoolSize,
        long keepAliveTime,
        TimeUnit unit,
        BlockingQueue<Runnable> workQueue
       )

2. public ThreadPoolExecutor
       (
        int corePoolSize,
        int maximumPoolSize,
        long keepAliveTime,
        TimeUnit unit,
        BlockingQueue<Runnable> workQueue,
        ThreadFactory threadFactory
       )

3. public ThreadPoolExecutor
       (
        int corePoolSize,
        int maximumPoolSize,
        long keepAliveTime,
        TimeUnit unit,
        BlockingQueue<Runnable> workQueue,
        RejectedExecutionHandler handler
       )

4. public ThreadPoolExecutor
       (
        int corePoolSize,
        int maximumPoolSize,
        long keepAliveTime,
        TimeUnit unit,
        BlockingQueue<Runnable> workQueue,
        ThreadFactory threadFactory,
        RejectedExecutionHandler handler
       )

C. Detailed explanation of constructor parameters

1 . int corePoolSize (thread pool basic size)
The maximum number of core threads in the pool.

Core threads:
When the pool creates a new thread and the current total number of threads is less than corePoolSize, the new thread is a core thread.
If the total exceeds corePoolSize, the new thread is a non-core thread.

By default, core threads stay alive in the pool even when they are doing nothing (idle).
If the allowCoreThreadTimeOut attribute of ThreadPoolExecutor is set to true, idle core threads are destroyed once they have been idle longer than the keep-alive time.
2 . int maximumPoolSize (the maximum size of the thread pool)
The maximum number of threads allowed in the pool. With an unbounded queue (LinkedBlockingQueue) this parameter is effectively ignored.

Total number of threads = core threads + non-core threads.

Non-core threads:
When the queue is full and the number of threads is still less than maximumPoolSize, the pool creates a new non-core thread to execute the task.
A non-core thread that stays idle longer than keepAliveTime is destroyed.
3 . long keepAliveTime (keep-alive time for non-core threads)
When a non-core thread in the pool has been idle longer than this time, it is destroyed.
4 . TimeUnit unit (unit of keepAliveTime)
NANOSECONDS: 1 nanosecond = 1/1000 microsecond
MICROSECONDS: 1 microsecond = 1/1000 millisecond
MILLISECONDS: 1 millisecond = 1/1000 second
SECONDS: seconds
MINUTES: minutes
HOURS: hours
DAYS: days
5 . BlockingQueue workQueue (task queue)
The task queue of the pool holds the Runnable objects waiting to be executed.
When all core threads are busy, newly added tasks go into this queue; when the queue is full, a new non-core thread executes the task.
Common types of workQueue:
SynchronousQueue
This queue hands each task directly to a thread without holding it.
If all core threads are busy, a new thread is created to handle the task.
To avoid errors (the total number of threads reaching maximumPoolSize so that no new thread can be created), maximumPoolSize is usually set to Integer.MAX_VALUE with this queue.
LinkedBlockingQueue
When a task arrives and the current number of threads is less than the number of core threads, a new core thread is created to process it.
If the number of threads equals the number of core threads, the task enters the queue and waits.
Because this queue has no upper bound, every task beyond the capacity of the core threads is queued, so the maximumPoolSize setting becomes irrelevant.
ArrayBlockingQueue
This queue has a bounded length. When a task arrives and corePoolSize has not been reached, a new core thread executes it.
If corePoolSize has been reached, the task is queued and waits.
If the queue is full, a new non-core thread executes the task.
If the total number of threads has reached maximumPoolSize and the queue is full, the task is rejected.
DelayQueue
Elements in this queue must implement the Delayed interface; that is, every task added must implement Delayed.
When this queue receives a task it is enqueued first, and the task is executed only once its specified delay has elapsed.
6 . ThreadFactory threadFactory (thread factory)
Used to create new threads:
Threads created by the threadFactory are still constructed with new Thread().
Thread names created by the default factory follow a uniform style: pool-m-thread-n (m is the pool number, n is the thread number within the pool).
You can call Thread.currentThread().getName() to view the current thread's name.
Implement public Thread newThread(Runnable r), or use Executors.defaultThreadFactory().
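A minimal custom ThreadFactory might look like the sketch below; the "worker-n" naming scheme is an assumption for illustration, not the default pool-m-thread-n style:

```java
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

public class NamedFactory implements ThreadFactory {
    private final AtomicInteger n = new AtomicInteger(1);

    @Override
    public Thread newThread(Runnable r) {
        // Still built with new Thread(), just with our own name and settings
        Thread t = new Thread(r, "worker-" + n.getAndIncrement());
        t.setDaemon(false);
        return t;
    }

    public static void main(String[] args) {
        Thread t = new NamedFactory().newThread(() -> {});
        System.out.println(t.getName()); // worker-1
    }
}
```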
7 . RejectedExecutionHandler handler (saturation policy)
When both the pool and the queue are full, newly submitted tasks are handled by this policy.

ThreadPoolExecutor.AbortPolicy(): do not execute the new task; throw RejectedExecutionException to signal that the pool is full.
ThreadPoolExecutor.DiscardPolicy(): do not execute the new task, and do not throw an exception.
ThreadPoolExecutor.DiscardOldestPolicy(): discard the oldest task at the head of the queue and try to execute the current task instead.
ThreadPoolExecutor.CallerRunsPolicy(): run the task directly on the thread that called execute().

D. The execution strategy of ThreadPoolExecutor

As shown in the figure above, when a task is added to the thread pool:
First, check whether any thread in the pool is idle; if so, execute the task on it directly.
If not, check whether the number of threads has reached corePoolSize; if not, create a new (core) thread to execute the task.
If the number of threads has reached corePoolSize, move the task into the queue to wait.
When the queue is full and the total number of threads has not reached maximumPoolSize, create a new (non-core) thread to execute the task.
When the queue is full and the total number of threads has reached maximumPoolSize, invoke the handler's rejection policy.
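The decision sequence above can be observed with a small pool; the sizes (2 core, 4 max, queue of 2) are arbitrary illustration values:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolDemo {
    // Submits 6 slow tasks: 2 occupy core threads, 2 wait in the bounded
    // queue, and 2 force non-core threads, growing the pool to its max of 4.
    static int run() throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 60L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(2),
                new ThreadPoolExecutor.AbortPolicy()); // a 7th task would be rejected
        for (int i = 0; i < 6; i++) {
            pool.execute(() -> {
                try { Thread.sleep(200); } catch (InterruptedException ignored) {}
            });
        }
        int size = pool.getPoolSize(); // 2 core + 2 non-core = 4
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return size;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("pool size at saturation: " + run());
    }
}
```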

E. Four common thread pools in JAVA

Java provides four common thread pools through Executors; all four are implemented directly or indirectly by configuring the parameters of ThreadPoolExecutor.
1 . CachedThreadPool()
Cacheable thread pool.
The maximum number of threads is Integer.MAX_VALUE, i.e. effectively unbounded.
If an idle thread exists it is reused; otherwise a new thread is created.
Suitable for large numbers of short-lived tasks.
Source code:
public static ExecutorService newCachedThreadPool() {
    return new ThreadPoolExecutor(0, Integer.MAX_VALUE, 60L, TimeUnit.SECONDS, new SynchronousQueue<Runnable>());
}
Creation method:
ExecutorService cachedThreadPool = Executors.newCachedThreadPool();
2 . FixedThreadPool()
Fixed-size thread pool.
Controls the maximum number of concurrent threads (the number executing simultaneously).
Excess tasks wait in the queue.
Source code:
public static ExecutorService newFixedThreadPool(int nThreads) {
    return new ThreadPoolExecutor(nThreads, nThreads, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>());
}
Creation method:
ExecutorService fixedThreadPool = Executors.newFixedThreadPool(int nThreads);
3 . ScheduledThreadPool()
Thread pool for scheduled and periodic task execution.
Source code:
public static ScheduledExecutorService newScheduledThreadPool(int corePoolSize) {
    return new ScheduledThreadPoolExecutor(corePoolSize);
}

public ScheduledThreadPoolExecutor(int corePoolSize) {
    super(corePoolSize, Integer.MAX_VALUE, DEFAULT_KEEPALIVE_MILLIS, MILLISECONDS, new DelayedWorkQueue());
}
Creation method:
ScheduledExecutorService scheduledThreadPool = Executors.newScheduledThreadPool(int corePoolSize);
Usage:
Run a task after a 1 second delay:
    scheduledThreadPool.schedule(new Task(), 1, TimeUnit.SECONDS);

Start a periodic task after 2 seconds, then every 3 seconds (counted from each start, no matter how long the task takes to run):
    scheduledThreadPool.scheduleAtFixedRate(new Task(), 2, 3, TimeUnit.SECONDS);

Start a periodic task after 2 seconds, with a 3 second interval after each run completes:
    scheduledThreadPool.scheduleWithFixedDelay(new Task(), 2, 3, TimeUnit.SECONDS);
4 . SingleThreadExecutor()
Single-threaded thread pool.
There is only one worker thread to execute tasks.
All tasks are executed in order, following the queue's enqueue/dequeue rules.
Source code:
public static ExecutorService newSingleThreadExecutor() {
    return new FinalizableDelegatedExecutorService(new ThreadPoolExecutor(1, 1, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>()));
}
Creation method:
ExecutorService singleThreadPool = Executors.newSingleThreadExecutor();

F. Execution and close

execute()
Execute a task with no return value.
submit()
Submit a task and get a Future for its return value.
shutdown()
Shut down the pool: previously submitted tasks are allowed to finish first, then the pool closes; new tasks are rejected.
shutdownNow()
Attempt to stop all actively executing tasks immediately.
awaitTermination()
Wait up to the specified time for the pool to shut down.
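A sketch of submit() returning a Future, followed by an orderly shutdown (the computed value is arbitrary):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class SubmitDemo {
    static int compute() throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        Future<Integer> future = pool.submit(() -> 21 * 2); // submit() returns a Future
        int result = future.get();      // blocks until the task's return value is ready
        pool.shutdown();                // no new tasks; running/queued tasks finish
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return result;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(compute()); // 42
    }
}
```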

4. Java Memory Model

The main goal of the Java memory model is to define the access rules for variables in a program, i.e. the rules for storing variables into main memory and reading them back out of main memory.

A. Main memory

The main memory of the virtual machine is a portion of the virtual machine's memory.
The Java virtual machine stipulates that all variables (here meaning instance fields, static fields, and the elements of arrays, but not local variables or method parameters) must reside in main memory.

B. Working memory

Each thread in the Java virtual machine has its own working memory, which is private to the thread.
The working memory of the thread keeps a copy of the variables needed by the thread in the main memory.

The virtual machine stipulates that a thread's modifications to main-memory variables must be performed in the thread's working memory; a thread cannot read or write main-memory variables directly.
Different threads cannot access each other's working memory; if variable values need to be passed between threads, the transfer must go through main memory as an intermediary.

C. Interaction between main memory and working memory in the Java virtual machine

This defines how a variable is transferred from main memory into working memory, and how a modified variable is synchronized from working memory back to main memory.

lock
Marks a variable as exclusively owned by one thread; acts on a variable in main memory.
unlock
Releases a locked variable so that other threads can lock it; acts on a variable in main memory.
read
Transfers the value of a variable from main memory into the thread's working memory for the subsequent load operation; acts on a variable in main memory.
load
Puts the value obtained by read into the working-memory copy of the variable; acts on a variable in working memory.
use
Passes the value of a working-memory variable to the execution engine; performed whenever the virtual machine encounters a bytecode instruction that uses the variable's value; acts on a variable in working memory.
assign
Assigns a value received from the execution engine to a working-memory variable; performed whenever the virtual machine encounters a bytecode instruction that assigns to the variable; acts on a variable in working memory.
store
Transfers the value of a working-memory variable to main memory for the subsequent write operation; acts on a variable in working memory.
write
Puts the value obtained by store into the main-memory variable; acts on a variable in main memory.

D. Specifications for the above 8 operations

The Java memory model only requires that these operations be performed in order; it does not guarantee that they are performed consecutively.

Example: variables a and b
Copying variable a from main memory to working memory requires, in order: read a, load a.
Synchronizing variable b from working memory back to main memory requires, in order: store b, write b.
If both a and b are copied from main memory to working memory, one possible order is: read a, read b, load b, load a.
1 . None of the read, load, store, and write operations may appear alone
That is, a thread may not read a value from main memory that working memory fails to accept, nor write a value from working memory that main memory fails to accept.
2 . A thread may not discard its latest assign operation
That is, a thread may not modify a variable in its working memory without eventually writing the change back to main memory.
3 . A thread may not write an unmodified variable back to main memory
That is, if no assign operation has occurred on a variable in the thread's working memory, the thread may not synchronize that variable's value back to main memory.
4 . Variables can only originate in main memory
An uninitialized variable may not be used directly in working memory; that is, use must be preceded by a load or assign operation.
5 . A variable may be locked by only one thread at a time
Once a thread locks a variable, no other thread can lock it until the first thread releases the lock.
However, the same thread may lock a variable repeatedly; the number of unlock operations must then equal the number of lock operations.
6 . Locking a variable clears its value from working memory
Before the execution engine uses the variable, a load or assign operation must re-initialize its value.
7 . A variable that is not locked may not be unlocked
If a variable is not locked, it cannot be unlocked; likewise, a thread may not unlock a variable locked by another thread.
8 . Before unlocking a variable, it must be synchronized back to main memory
That is, the store and write operations must be performed.

E. Reordering

Reordering is a mechanism that adjusts the order of instruction execution, at compile time or at run time, to improve performance.
1 . Compile-time reordering
The compiler analyzes the execution order when compiling the source code and may adjust it, provided the as-if-serial principle is followed.
As-if-serial principle
In a single-threaded environment, no matter how instructions are reordered, the result of executing the code must not change.
2 . Run-time reordering
To increase execution speed, the processor may adjust the order in which machine instructions are executed.

F. volatile keyword

volatile is the most lightweight synchronization mechanism provided by the Java virtual machine.
Accessing a volatile variable performs no locking, so it never blocks the executing thread; however, it cannot guarantee the atomicity of compound operations.

Variables modified by volatile are visible to all threads: when a thread modifies such a variable, the new value is synchronized to main memory immediately, and every read refreshes the value from main memory first.
volatile also prohibits instruction reordering around accesses to the variable.

Reading a volatile variable costs about the same as reading an ordinary variable, but writes are slightly slower,
because memory barrier instructions must be inserted into the native code to prevent the processor from executing out of order.
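The visibility guarantee can be sketched with a stop flag; without volatile, the worker's loop may never observe the write (the names and timings here are illustrative):

```java
public class VolatileFlag {
    // Without volatile, the worker's cached copy of 'stop' might never be
    // refreshed from main memory, and the loop could run forever.
    private static volatile boolean stop = false;

    static boolean run() throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (!stop) { }   // busy-waits until the write becomes visible
        });
        worker.start();
        Thread.sleep(100);
        stop = true;            // volatile write: flushed to main memory immediately
        worker.join(5000);      // returns promptly because the worker sees the new value
        return !worker.isAlive();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("worker stopped: " + run());
    }
}
```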

5. Thread synchronization

The principle of multi-threaded coordination is: when a condition is not met, a thread enters a waiting state; when the condition is met, the thread is woken up and continues its task.
The keyword volatile is the most lightweight synchronization mechanism provided by the Java virtual machine.
volatile guarantees the visibility of a variable to all threads, but its operations are not atomic and are not safe under concurrent writes.
The heavyweight synchronization mechanism is synchronized.

A. Features to be noted in concurrent operations

1 . Atomicity
An atom is the smallest unit: indivisible.

Example: a = 0; (where a is not long or double) is indivisible, so it is an atomic operation.
Example: a++; (equivalently a = a + 1) is divisible into a read, an add, and a write, so it is not an atomic operation.

Non-atomic operations have thread-safety issues; synchronized is needed to make them effectively atomic.
Java also provides atomic classes under the java.util.concurrent.atomic package.
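The difference can be sketched by incrementing a plain int and an AtomicInteger from two threads; only the atomic counter is guaranteed to reach the full total:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicDemo {
    static int plain = 0;                                   // plain++ is not atomic
    static final AtomicInteger atomic = new AtomicInteger();

    static int run() throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                plain++;                  // read-modify-write: updates can be lost
                atomic.incrementAndGet(); // atomic increment
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        return atomic.get();
    }

    public static void main(String[] args) throws InterruptedException {
        int safe = run();
        System.out.println("plain = " + plain + " (may be < 20000), atomic = " + safe);
    }
}
```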
2 . Visibility
Visibility means visibility between threads: state modified by one thread is visible to another.
That is, the result of one thread's modification can be seen immediately by another thread; the main implementation is to synchronize the value to main memory after modifying it.
In Java, variables modified by volatile, synchronized, and final are all visible.

volatile only makes the modified value visible; it cannot guarantee atomicity.
3 . Ordering
Within a single thread, all operations appear ordered.
Observed from another thread, a thread's operations may appear out of order.

The keywords volatile and synchronized can both guarantee ordering between threads:
volatile, because it carries the semantics of "prohibiting instruction reordering";
synchronized, through the rule that "a variable may be locked by only one thread at a time".

B. The cost of thread blocking

1. If you want to block or wake up a thread, the operating system needs to intervene, and you need to switch between user mode and core mode. Such switching will consume a lot of system resources.
2. Because user mode and kernel mode have their own dedicated memory space, dedicated registers, etc.
3. Switching from user mode to kernel mode needs to pass many variables and parameters to the kernel.
4. The kernel also needs to protect some register values, variables, etc. in the user mode when switching, so that the kernel mode is switched back to the user mode to continue working after the call.

C. Types of locks

1 . Optimistic locking
Optimistic locking reflects an optimistic assumption: reads dominate writes, and concurrent writes are unlikely.
Each time data is fetched, the thread assumes no one else will modify it, so it does not lock.
On update, however, it checks whether anyone else has changed the data in the meantime:
it reads the current version number before writing,
then compares it with the version seen earlier; if they match, the write is performed, otherwise the read ~ compare ~ write sequence is repeated.
Optimistic locking in Java is basically implemented with CAS (compare-and-swap) operations. CAS is an atomic update: it compares the current value with the expected value and updates only if they match; otherwise it fails.
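The read ~ compare ~ write retry loop can be sketched with AtomicInteger.compareAndSet; the account/deposit framing is purely illustrative:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasRetry {
    static final AtomicInteger balance = new AtomicInteger(100);

    // Optimistic update: read the current value (the "version"), compute the
    // new value, then CAS; if another thread changed it in between, retry.
    static void deposit(int amount) {
        while (true) {
            int current = balance.get();
            int updated = current + amount;
            if (balance.compareAndSet(current, updated)) {
                return;     // no other thread interfered; the write succeeded
            }
            // CAS failed: re-read and try again
        }
    }

    public static void main(String[] args) {
        deposit(50);
        System.out.println(balance.get()); // 150
    }
}
```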
2 . Pessimistic locking
Pessimistic locking reflects a pessimistic assumption: writes dominate, and concurrent writes are likely.
Each time data is fetched, the thread assumes someone else will modify it.
Every read and write therefore takes a lock, so any other thread that wants to read or write the data blocks until it obtains the lock.
3 . Spin lock
If the thread holding a lock can release it within a short time, the threads contending for the lock need not switch into the blocked, suspended state (a kernel-mode/user-mode transition).
They simply wait for a while (spin, i.e. keep the CPU) until the holder releases the lock, thereby avoiding the cost of switching between user threads and the kernel.
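A toy spin lock can be built on a CAS loop; this is an illustration of the idea, not a production lock (waiters burn CPU instead of blocking, so it only pays off when the lock is held very briefly):

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        while (!locked.compareAndSet(false, true)) {
            // spin: keep retrying without yielding to the kernel
        }
    }

    public void unlock() {
        locked.set(false);
    }

    static int run() throws InterruptedException {
        SpinLock lock = new SpinLock();
        int[] count = {0};
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                lock.lock();
                try { count[0]++; } finally { lock.unlock(); }
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        return count[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // 20000: the spin lock made the increments exclusive
    }
}
```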

D. Principle of JVM lock implementation

1 . Introduction to the heavyweight synchronized lock
The synchronized lock state is stored in the Java object header.
2 . The scope of the synchronized lock
synchronized can use any non-null object as a lock.
When synchronized is applied to an instance method, it locks the object instance (this).
When synchronized is applied to a static method, it locks the Class object.
When synchronized is applied to an object, it locks all code blocks that use that object as the lock.
3 . How synchronized works
The JVM implements method synchronization and code-block synchronization by entering and exiting monitor objects.
Code-block synchronization uses the monitorenter and monitorexit instructions:
after compilation, monitorenter is inserted at the beginning of the synchronized block, and monitorexit at its end and at each exception exit.
Every object has a monitor associated with it; while the monitor is held, the object is locked.
4 . Classification and explanation of JVM locks

No-lock state
No resource is locked: all threads can access the same resource, but only one thread at a time succeeds in modifying it.

In this state the object header uses 25 bits to store the object's hash code and 4 bits for the generational age,
1 bit for the biased-lock flag, and 2 bits for the lock flag, which is 01.
Biased locking

(disable with -XX:-UseBiasedLocking)

A biased lock is biased toward the first thread that acquires it: if, throughout the run, the synchronization lock is only ever accessed by one thread,
there is no multi-thread contention and the thread does not need to trigger full synchronization; in this case the lock is biased to that thread.

A biased lock uses 23 bits to store the owning thread ID and 2 bits for the epoch,
4 bits for the generational age, 1 bit for the biased-lock flag (0 - no, 1 - yes), and 2 bits for the lock flag, which is 01.

epoch:
roughly, a timestamp used to judge whether the bias is still valid.
Lightweight lock
When the current lock is biased and a second thread accesses it, the biased lock is upgraded to a lightweight lock; the other thread tries to acquire the lock by spinning rather than blocking, which improves performance.

A lightweight lock uses 30 bits to store the pointer to the lock record in the stack, and 2 bits for the lock flag, which is 00.
Heavyweight lock
This is the usual synchronized object lock: the pointer points to the start address of the monitor object. While the monitor is held by a thread, the object is locked.

A heavyweight lock uses 30 bits to store the pointer to the monitor, and 2 bits for the lock flag, which is 10.
GC mark
The GC mark uses 30 bits of space that is otherwise unoccupied, and 2 bits for the lock flag, which is 11.
5 . synchronized execution process
Check whether the current thread ID is in the Mark Word.
If it is, the current thread already holds the biased lock.
If not, use CAS to install the current thread ID into the Mark Word.
If the CAS succeeds, the current thread has obtained the biased lock, and the bias flag is set to 1.
If it fails, contention has occurred: the biased lock is revoked and upgraded to a lightweight lock.

The current thread then uses CAS to replace the object header's Mark Word with a pointer to its lock record.
If this succeeds, the current thread acquires the lock.
If it fails, another thread is competing for the lock, and the current thread tries to acquire it by spinning.
If the spin succeeds, the lock remains lightweight.
If the spin fails, the lock is upgraded to a heavyweight lock.
6 . Locking process
The locking process is implemented internally by the JVM.
When executing a synchronized block, the JVM decides how to synchronize based on which lock kinds are enabled and on the contention among the current threads.
With all lock kinds enabled:
When a thread enters the critical section, it first tries to acquire the biased lock.
If a bias toward another thread already exists, it tries to acquire a lightweight lock, spinning while it waits.
If spinning fails to acquire the lock, a heavyweight lock is used.
Threads that fail to acquire the lock are suspended until the lock holder finishes the synchronized block and wakes them.

If thread contention is intense, biased locking should be disabled (-XX:-UseBiasedLocking).
Locking steps
When the code enters the synchronized block, if the lock state of the synchronization object is lock-free (lock flag 01, bias bit 0):
The virtual machine first creates a space called the Lock Record in the current thread's stack frame, used to store a copy of the lock object's current Mark Word,
and copies the Mark Word from the object header into the Lock Record.

If the update succeeds
The virtual machine uses a CAS operation to try to update the object's Mark Word to a pointer to the Lock Record,
and points the owner pointer in the Lock Record at the object's Mark Word.
If this action succeeds, the thread owns the object's lock,
and the lock flag in the object's Mark Word is set to 00, indicating that the object is in the lightweight-lock state.

If the update fails
The virtual machine first checks whether the object's Mark Word already points into the current thread's stack frame.
If so, the current thread already holds this object's lock and can enter the synchronized block directly.
Otherwise, multiple threads are competing for the lock: the lightweight lock must be inflated to a heavyweight lock, and the lock flag changes to 10.
The Mark Word then stores the pointer to the heavyweight lock, and threads waiting for the lock enter the blocked state.
7 . Synchronized lock contention states
When a Java thread executes synchronized, the compiler emits a pair of bytecode instructions (monitorenter and monitorexit), which correspond to a monitor guarding the synchronized region. The monitor maintains several queues and states to distinguish requesting threads.

ContentionList
The contention queue; all threads requesting the lock are first placed here.
EntryList
Threads in the ContentionList that qualify as candidates are moved to the EntryList.
WaitSet
Threads blocked by calling the wait() method are placed here.
OnDeck
At any time, at most one thread is actively competing for the lock; this thread is called OnDeck.
Owner
The thread that currently holds the lock.
!Owner
The thread that has just released the lock.
7.1) The JVM takes one thread at a time from the tail of the queue as the lock-competition candidate (OnDeck).
Under heavy concurrency, however, the ContentionList is accessed via CAS by a large number of threads.
To reduce contention on the tail element, the JVM moves some threads into the EntryList and designates one of them as the OnDeck thread (generally the thread at the head).
The Owner thread does not pass the lock directly to the OnDeck thread; it only grants OnDeck the right to compete for the lock, so OnDeck must re-compete.
Although this sacrifices some fairness, it greatly improves throughput; in the JVM this selection behavior is called "competitive switching".
7.2) The OnDeck thread becomes the Owner thread after acquiring the lock.
Threads that fail to get the lock remain in the EntryList.
If the Owner thread is blocked by wait(), it is moved to the WaitSet queue until it is awakened by notify() or notifyAll() at some point, after which it re-enters the EntryList.
7.3) Threads in the ContentionList, EntryList, and WaitSet are all blocked.
The blocking is performed by the operating system (implemented via the pthread_mutex_lock kernel call on Linux).
7.4) Synchronized is an unfair lock.
When threads are already queued in the ContentionList,
a newly arriving thread first tries to spin to acquire the lock, and only enters the ContentionList if it fails.
This is obviously unfair to threads already queued.
Another unfairness is that a spinning thread may directly seize the lock from the OnDeck thread.
8 . Lock optimization
Reduce lock hold time.
Code that does not need to execute under the lock should be moved out of the synchronized block, so that the lock can be released as soon as possible.
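A minimal sketch of this idea (the class and method names are mine, not from the original): the expensive formatting work touches no shared state, so only the list mutation needs to hold the lock.

```java
import java.util.ArrayList;
import java.util.List;

public class ReduceLockTime {
    private final List<String> log = new ArrayList<>();

    // Bad: the expensive work runs while holding the lock.
    public synchronized void appendSlow(int value) {
        String line = format(value);   // costly, but needs no shared state
        log.add(line);
    }

    // Better: only the shared mutation is synchronized.
    public void appendFast(int value) {
        String line = format(value);   // done outside the lock
        synchronized (this) {
            log.add(line);
        }
    }

    private String format(int value) {
        return "value=" + value;       // stands in for costly work
    }

    public synchronized int size() {
        return log.size();
    }

    public static int demo() {
        ReduceLockTime r = new ReduceLockTime();
        Thread[] ts = new Thread[4];
        for (int i = 0; i < ts.length; i++) {
            final int id = i;
            ts[i] = new Thread(() -> {
                for (int j = 0; j < 100; j++) r.appendFast(id * 100 + j);
            });
            ts[i].start();
        }
        try {
            for (Thread t : ts) t.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return r.size();   // 4 threads x 100 appends
    }

    public static void main(String[] args) {
        System.out.println(demo());   // 400
    }
}
```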
Reduce the granularity of the lock.
Split one physical lock into multiple logical locks to increase parallelism and reduce contention; the idea is to trade space for time.
ConcurrentHashMap
ConcurrentHashMap in Java (in versions before JDK 8) uses a Segment array.
Segment inherits from ReentrantLock,
so each segment is a reentrant lock.
Each segment holds a HashEntry<K, V> array to store the data. On a put, the segment the key maps to is determined first, and only that segment is locked while the put executes; other segments are not locked.
The number of segments is thus the number of threads that can store data at the same time, which increases concurrency.
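A small sketch of this in use (class name is mine): several threads write disjoint keys into one ConcurrentHashMap at once; because the locking is fine-grained, the writers rarely block each other, yet no update is lost.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ConcurrentMapDemo {
    public static int demo() {
        Map<Integer, Integer> map = new ConcurrentHashMap<>();
        Thread[] ts = new Thread[4];
        for (int i = 0; i < ts.length; i++) {
            final int id = i;
            ts[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) {
                    map.put(id * 1000 + j, j);   // disjoint keys per thread
                }
            });
            ts[i].start();
        }
        try {
            for (Thread t : ts) t.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return map.size();   // 4 threads x 1000 distinct keys
    }

    public static void main(String[] args) {
        System.out.println(demo());   // 4000
    }
}
```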
LongAdder
The implementation idea is similar to ConcurrentHashMap's segmentation.
LongAdder maintains a Cell array whose size grows dynamically with the current contention.
Each Cell object holds a long value; the sum of all cells (plus a base value) is the total.
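A quick sketch of LongAdder in use (class name is mine): increments are spread over the cells, and sum() adds them all up, so after the threads join the total is exact.

```java
import java.util.concurrent.atomic.LongAdder;

public class LongAdderDemo {
    public static long demo() {
        LongAdder counter = new LongAdder();
        Thread[] ts = new Thread[4];
        for (int i = 0; i < ts.length; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < 10_000; j++) counter.increment();
            });
            ts[i].start();
        }
        try {
            for (Thread t : ts) t.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return counter.sum();   // base + all cells
    }

    public static void main(String[] args) {
        System.out.println(demo());   // 40000
    }
}
```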
LinkedBlockingQueue
LinkedBlockingQueue uses two separate locks, one for the head (take) and one for the tail (put), so producers and consumers do not block each other; ArrayBlockingQueue, by contrast, guards both ends with a single lock.
CopyOnWriteArrayList / CopyOnWriteArraySet
Writes copy the underlying array, so reads never need a lock; this suits read-mostly workloads.
Use CAS instead of locks
Where possible, replace blocking locks with lock-free CAS operations.

The Atomic classes, for example, are implemented with volatile variables plus CAS.

Third, AQS

A. Lock API

1.  void lock(): acquires the lock, blocking until it is available.
2.  void lockInterruptibly(): like lock(), but responds to interruption while waiting by throwing java.lang.InterruptedException.
3.  boolean tryLock(): attempts to acquire the lock without blocking; returns true on success.
4.  boolean tryLock(long timeout, TimeUnit timeUnit): attempts to acquire the lock, waiting up to the given timeout.
5.  void unlock(): releases the lock.

B. Lock implementations

ReentrantLock
ReentrantLock is a reentrant exclusive lock
that implements the Lock interface.
ReentrantReadWriteLock
ReentrantReadWriteLock is a reentrant read-write lock.
ReentrantReadWriteLock exposes a ReadLock and a WriteLock, both of which implement the Lock interface; the read lock is shared and the write lock is exclusive.
StampedLock
StampedLock was added in JDK 8 as an improvement on the read-write lock.

StampedLock supports optimistic reads: a reader first obtains a stamp without blocking writers, validates the stamp after reading, and falls back to a pessimistic read lock if a write occurred in between.
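The optimistic-read pattern just described can be sketched like this (class and field names are mine): read without locking, validate, and only fall back to the read lock when a write intervened.

```java
import java.util.concurrent.locks.StampedLock;

public class StampedDemo {
    private final StampedLock sl = new StampedLock();
    private int x = 1;

    public int read() {
        long stamp = sl.tryOptimisticRead();  // no lock taken yet
        int cur = x;
        if (!sl.validate(stamp)) {            // a write happened in between
            stamp = sl.readLock();            // fall back to pessimistic read
            try {
                cur = x;
            } finally {
                sl.unlockRead(stamp);
            }
        }
        return cur;
    }

    public void write(int v) {
        long stamp = sl.writeLock();
        try {
            x = v;
        } finally {
            sl.unlockWrite(stamp);
        }
    }

    public static int demo() {
        StampedDemo d = new StampedDemo();
        d.write(42);
        return d.read();
    }

    public static void main(String[] args) {
        System.out.println(demo());   // 42
    }
}
```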

C. AQS

AQS
AbstractQueuedSynchronizer, the queue synchronizer.
AQS is the foundational framework in Java for building locks and other synchronizers; most synchronization tools in java.util.concurrent are built on it.
AQS uses an int member variable to represent the synchronization state: volatile int state.
AQS uses a FIFO queue to manage the threads waiting to acquire the synchronization state.
state
Accessed via getState(), setState(), and compareAndSetState().
FIFO
A first-in-first-out wait queue.

A thread that fails to acquire the state is wrapped in a Node and appended to the queue; when the state is released, a queued thread is woken to retry.

D. AQS-based lock implementations

Exclusive mode: ReentrantLock
Exclusive + shared mode: ReentrantReadWriteLock

E. AQS source code analysis

1. Exclusive acquisition
waitStatus
waitStatus is a field of Node that describes the state of a queued thread:

CANCELLED (1): the queued thread has timed out or been interrupted; the node is cancelled and will not change state again.
SIGNAL (-1): the successor node is (or will soon be) parked, so when this node releases or is cancelled it must wake its successor; a node asks to be woken by setting its predecessor's status to SIGNAL.
CONDITION (-2): the node is waiting in a condition queue on a Condition object.
PROPAGATE (-3): in shared mode, a release must be propagated onward to subsequent nodes.
0: the default state of a newly created node.
acquire(int)
acquire is the exclusive-mode entry point for obtaining the synchronization state; it ignores interrupts while waiting (recording them instead).

acquire flow:
tryAcquire() attempts to obtain the state directly; on success the method returns at once.
addWaiter() wraps the current thread in an exclusive Node and appends it to the tail of the wait queue.
acquireQueued() keeps the queued thread waiting (parked) until it obtains the state, returning true if the thread was interrupted while waiting and false otherwise.

If the thread was interrupted while waiting, the interrupt is not acted on immediately; only at the end does selfInterrupt() re-assert the thread's interrupt status.
tryAcquire(int)
tryAcquire attempts to obtain the exclusive state, returning true on success and false on failure.
AQS itself only defines the state field and its get/set/CAS accessors; the actual acquisition policy is left to the concrete synchronizer.

Why is tryAcquire not declared abstract?
Because AQS supports both exclusive and shared modes:
an exclusive synchronizer only needs to implement tryAcquire and tryRelease,
while a shared synchronizer only needs tryAcquireShared and tryReleaseShared.
Making all of them abstract would force every subclass to implement the hooks of the mode it does not use, so the default implementations simply throw UnsupportedOperationException.

addWaiter(Node)
addWaiter wraps the current thread in a Node of the given mode and appends it to the tail of the wait queue.
Node.EXCLUSIVE marks exclusive mode.
Node.SHARED marks shared mode.

It first tries the fast path: compareAndSetTail uses CAS to install the new node as the tail directly.
If the queue is empty or the CAS fails, it falls back to enq(node) to enqueue in a loop.

enq(Node)
enq appends the Node to the tail of the queue in a spin loop.
If the queue is empty, it first creates a dummy head node so that head and tail both point to it, then links the new node behind it.
Each step uses CAS and retries until the node is enqueued successfully.

acquireQueued(Node, int)
Once the thread is in the queue, acquireQueued keeps it waiting until it obtains the state; acquire then returns whether the thread was interrupted.

In a spin loop, a node whose predecessor is the head calls tryAcquire to attempt the acquisition;
otherwise the thread is park()ed into the waiting state until unpark() or interrupt() wakes it.

On success the node is installed as the new head, the old head is unlinked, and the loop exits.
Nodes whose predecessor is not the head simply remain waiting.

shouldParkAfterFailedAcquire(Node, Node)
After a failed acquisition attempt, shouldParkAfterFailedAcquire decides whether the current thread can safely be parked.

If the predecessor's status is Node.SIGNAL, the predecessor has undertaken to wake this node, so parking is safe.
If not, cancelled predecessors are skipped, or CAS is used to set the predecessor's status to Node.SIGNAL, and the outer loop retries before parking.

parkAndCheckInterrupt()
parkAndCheckInterrupt suspends the current thread.
park() puts the thread into the waiting state.

The thread remains suspended until it is woken by unpark() or by interrupt().

On waking, it calls Thread.interrupted() to return, and clear, the thread's interrupt status.

acquire summary
ReentrantLock.lock() ends up in acquire, calling acquire(1).

First, tryAcquire() attempts to obtain the state directly; on success the method returns immediately.
On failure, addWaiter() wraps the thread in an EXCLUSIVE Node and appends it to the queue.
acquireQueued() keeps the thread parked until a release unpark()s it and it obtains the state;
it returns true if the thread was interrupted while waiting, false otherwise.
If it was interrupted, the interrupt is only re-asserted at the end via selfInterrupt().

2. Exclusive release
release(int)
release(int) is the exclusive-mode entry point for releasing the synchronization state.
release(int) calls tryRelease(int); when the state is fully released (state = 0) it wakes the successor node so that it can retry the acquisition.

tryRelease(int)
Like tryAcquire(), tryRelease() is implemented by the concrete synchronizer rather than by AQS itself.
tryRelease() releases the EXCLUSIVE state,
typically by decrementing it: state -= arg.

release() relies on the return value of tryRelease() to decide whether the resource is fully freed:
the custom synchronizer returns true when the state is completely released (state = 0) and false otherwise.

unparkSuccessor(Node)
unparkSuccessor wakes the node's nearest non-cancelled successor (normally next; if that node is cancelled or null, it scans backwards from the tail),
resuming it with unpark().

3. Shared acquisition
acquireShared(int)
acquireShared is the shared-mode entry point for obtaining the synchronization state.

tryAcquireShared attempts the acquisition;
in AQS the convention for its return value is:

negative: the acquisition failed;
0: the acquisition succeeded, but no resources remain;
positive: the acquisition succeeded and resources remain, so subsequent waiters may also succeed.
acquireShared flow:
tryAcquireShared() tries to obtain the state; a non-negative result means success and an immediate return.
On failure, doAcquireShared() enqueues the thread and park()s it until unpark()/interrupt() wakes it and it obtains the state.
doAcquireShared(int)
doAcquireShared() enqueues the current thread and makes it wait in shared mode.

A shared node only attempts the acquisition when its predecessor is the head, i.e. when it is head.next;
when head.next succeeds, the wake-up is propagated to the following shared nodes.
A node that is not head.next simply park()s and waits.

This propagation is what makes AQS shared mode "shared": one successful acquisition can wake further waiting threads.

setHeadAndPropagate(Node, int)
setHeadAndPropagate calls setHead() to install the node as the new head and, if resources remain, propagates the wake-up to the following shared nodes.

4. Shared release
releaseShared()
releaseShared() is the shared-mode entry point for releasing the synchronization state.

doReleaseShared()
doReleaseShared performs the actual wake-up of successor nodes.

releaseShared() is similar in shape to the exclusive release():
but exclusive release only wakes the successor once tryRelease() reports the state fully freed (state = 0),
whereas releaseShared() wakes waiting threads whenever any amount of resource is returned, so they can retry.
For example, suppose there are 10 resources and threads A, B, C need 5, 3 and 4 of them respectively.
A and B together acquire 8, leaving 2, so C blocks.
When A releases 1, 3 are free and C still cannot proceed;
when B then releases 1, 4 are free and C can acquire them.
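The A/B/C scenario above can be sketched with a Semaphore, which is an AQS shared-mode synchronizer (class name is mine; tryAcquire is used to keep the example single-threaded and deterministic):

```java
import java.util.concurrent.Semaphore;

public class SharedModeDemo {
    public static boolean[] demo() {
        Semaphore s = new Semaphore(10);
        boolean a = s.tryAcquire(5);   // A takes 5 -> 5 left
        boolean b = s.tryAcquire(3);   // B takes 3 -> 2 left
        boolean c1 = s.tryAcquire(4);  // C needs 4 -> fails, only 2 left
        s.release(1);                  // A returns 1 -> 3 left
        boolean c2 = s.tryAcquire(4);  // still fails
        s.release(1);                  // B returns 1 -> 4 left
        boolean c3 = s.tryAcquire(4);  // now C succeeds
        return new boolean[] { a, b, c1, c2, c3 };
    }

    public static void main(String[] args) {
        boolean[] r = demo();
        System.out.println(r[0] + " " + r[1] + " " + r[2] + " " + r[3] + " " + r[4]);
        // true true false false true
    }
}
```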

F. Custom synchronizers

A custom synchronizer decides what the state value means and overrides the template hooks; AQS supplies the queuing, parking and waking. **The hooks a custom synchronizer may override are:**
isHeldExclusively()
Whether the current thread holds the resource exclusively; only needs implementing if Conditions are used.
tryAcquire(int)
Exclusive acquisition; returns true on success, false on failure.
tryRelease(int)
Exclusive release; returns true on success, false on failure.
tryAcquireShared(int)
Shared acquisition; negative means failure, 0 means success with nothing left, positive means success with resources remaining.
tryReleaseShared(int)
Shared release; returns true on success, false on failure.
ReentrantLock
state starts at 0, meaning unlocked.
When thread A calls ReentrantLock.lock(), tryAcquire() succeeds and sets state to 1 (state+1).
Other threads' tryAcquire() then fails and they wait in the queue,
until A calls ReentrantLock.unlock() and state returns to 0, giving other threads a chance to acquire the lock.
Before releasing, A may acquire the lock again; state is incremented on each reacquisition, which is how reentrancy works:

the thread must release the lock the same number of times it acquired it, so that state finally returns to 0.
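A minimal sketch of such a custom synchronizer: a non-reentrant mutex built on AQS (the class is mine, loosely modeled on the Mutex example in the AQS Javadoc). Only tryAcquire/tryRelease are overridden; AQS supplies the queue and the parking.

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

public class Mutex {
    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int arg) {
            // moving state 0 -> 1 via CAS means we got the lock
            return compareAndSetState(0, 1);
        }
        @Override
        protected boolean tryRelease(int arg) {
            setState(0);   // fully released
            return true;
        }
        @Override
        protected boolean isHeldExclusively() {
            return getState() == 1;
        }
    }

    private final Sync sync = new Sync();
    public void lock()   { sync.acquire(1); }
    public void unlock() { sync.release(1); }

    public static int demo() {
        Mutex m = new Mutex();
        final int[] counter = {0};
        Thread[] ts = new Thread[4];
        for (int i = 0; i < ts.length; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) {
                    m.lock();
                    try { counter[0]++; } finally { m.unlock(); }
                }
            });
            ts[i].start();
        }
        try {
            for (Thread t : ts) t.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return counter[0];   // increments never lost under the mutex
    }

    public static void main(String[] args) {
        System.out.println(demo());   // 4000
    }
}
```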

CAS

CAS (Compare And Swap) is an atomic compare-and-exchange operation and the basis of optimistic locking.
1. How CAS works
An optimistic lock assumes no conflict will occur, so it does not block;
if the CAS fails because another thread changed the value first, it simply rereads and retries in a loop.

CAS involves three operands: the memory value V, the expected old value A, and the new value B.
The update proceeds as follows:
compare V with the expected value A;
if they are equal, atomically set V to B;
otherwise do nothing and let the caller retry with a fresh read of V.
2. The ABA problem (threads 1 and 2 doing CAS)
Step 1: thread 1 reads the value A and is then delayed.
Step 2: thread 2 uses CAS to change the value from A to B.
Step 3: thread 2 uses CAS again to change the value from B back to A.
Step 4: thread 1 resumes; its CAS sees the expected A and succeeds, unaware that thread 2 modified the value twice in between.
3. Solving ABA
Attach a version number to the value: the sequence A-B-A becomes 1A-2B-3A, so a CAS carrying the stale version 1 fails.
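The version-number fix can be sketched with AtomicStampedReference (class name is mine), which pairs the value with a stamp; the A-B-A sequence below bumps the stamp twice, so the stale CAS is rejected.

```java
import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    public static boolean demo() {
        AtomicStampedReference<String> ref =
                new AtomicStampedReference<>("A", 1);
        int staleStamp = ref.getStamp();          // "thread 1" remembers stamp 1

        // "thread 2" performs A -> B -> A, bumping the stamp each time
        ref.compareAndSet("A", "B", 1, 2);
        ref.compareAndSet("B", "A", 2, 3);

        // "thread 1"'s CAS with the stale stamp now fails, exposing the ABA
        return ref.compareAndSet("A", "C", staleStamp, staleStamp + 1);
    }

    public static void main(String[] args) {
        System.out.println(demo());   // false
    }
}
```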

Fourth, Java synchronization tools

1. Synchronized
An exclusive lock implemented inside the JVM.
2. wait()/notify()/notifyAll()
wait(): releases the monitor and blocks the current thread.
notify(): wakes one thread waiting on the monitor.
notifyAll(): wakes all threads waiting on the monitor.
3. ReadWriteLock
ReadWriteLock lets multiple readers proceed concurrently while writes remain exclusive; it suits read-mostly workloads.
4. StampedLock
Compared with ReadWriteLock, StampedLock adds optimistic reads, so a read need not block a write.
5. ReentrantLock
Built on AQS.
Its advantages over synchronized: (1) it can respond to interruption while waiting; (2) it supports timed tryLock.
Lock lock = new ReentrantLock();

lock.lock(); // acquire the lock
lock.unlock(); // release the lock
6. ReentrantReadWriteLock
Built on AQS.
It splits the 32-bit state in two: the high 16 bits count read locks and the low 16 bits count write locks.
7. Condition
Condition provides wait()/notify()-style coordination for Lock objects.
A Condition is bound to a Lock and is created by calling newCondition() on that Lock.

private final Lock lock = new ReentrantLock();
private final Condition condition = lock.newCondition();
condition.await(); // release the lock and wait
condition.signal(); // wake one waiting thread
condition.signalAll(); // wake all waiting threads
A thread woken from await() must reacquire the lock before continuing.
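The classic use of Condition is the bounded buffer; a minimal sketch (class name is mine), using two Conditions on one Lock in place of wait()/notifyAll():

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class BoundedBuffer {
    private final Deque<Integer> items = new ArrayDeque<>();
    private final int capacity = 2;
    private final Lock lock = new ReentrantLock();
    private final Condition notFull  = lock.newCondition();
    private final Condition notEmpty = lock.newCondition();

    public void put(int v) throws InterruptedException {
        lock.lock();
        try {
            while (items.size() == capacity) notFull.await(); // releases lock
            items.addLast(v);
            notEmpty.signal();   // wake one waiting consumer
        } finally {
            lock.unlock();
        }
    }

    public int take() throws InterruptedException {
        lock.lock();
        try {
            while (items.isEmpty()) notEmpty.await();
            int v = items.removeFirst();
            notFull.signal();    // wake one waiting producer
            return v;
        } finally {
            lock.unlock();
        }
    }

    public static int demo() {
        BoundedBuffer buf = new BoundedBuffer();
        final int[] sum = {0};
        Thread producer = new Thread(() -> {
            try { for (int i = 1; i <= 5; i++) buf.put(i); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        Thread consumer = new Thread(() -> {
            try { for (int i = 0; i < 5; i++) sum[0] += buf.take(); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        producer.start();
        consumer.start();
        try { producer.join(); consumer.join(); }
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return sum[0];   // 1+2+3+4+5
    }

    public static void main(String[] args) {
        System.out.println(demo());   // 15
    }
}
```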
8. Atomic classes
The Atomic classes provide lock-free, thread-safe operations on single variables.
They are implemented with CAS on a volatile value.
9. Future
A task submitted as a Runnable cannot return a result;
a task that needs one implements Callable, whose call() method returns a value.

Submitting a Callable to an executor returns a Future;
through the Future you can poll whether the task is done, or block until the result is available.
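A short sketch of the Callable/Future flow just described (class name is mine): submit a Callable to an executor and block on Future.get() for its result.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FutureDemo {
    public static int demo() {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            Callable<Integer> task = () -> {
                int sum = 0;
                for (int i = 1; i <= 10; i++) sum += i;
                return sum;          // Callable returns a value
            };
            Future<Integer> future = pool.submit(task);
            return future.get();     // blocks until call() finishes
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(demo());   // 55
    }
}
```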
10. CompletableFuture
CompletableFuture extends Future with completion callbacks and composition, so results can be handled asynchronously instead of by blocking.
11. ForkJoin
ForkJoin recursively splits a large task into small subtasks, runs them in parallel on a work-stealing pool, and merges the results.
12. ThreadLocal
ThreadLocal gives each thread its own copy of a variable; because the copy is confined to one thread, no synchronization is needed.
A ThreadLocal value lives as long as its thread, so in thread pools stale values can leak into later tasks;
use ThreadLocal inside try ... finally and call remove() in the finally block.
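A small sketch of the try/finally pattern above (class name is mine): each thread sees only its own copy, and remove() in finally prevents stale values from leaking when threads are pooled.

```java
public class ThreadLocalDemo {
    private static final ThreadLocal<Integer> LOCAL =
            ThreadLocal.withInitial(() -> 0);

    public static int demo() {
        final int[] results = new int[2];
        Thread[] ts = new Thread[2];
        for (int i = 0; i < 2; i++) {
            final int id = i;
            ts[i] = new Thread(() -> {
                try {
                    LOCAL.set(id + 1);           // per-thread value
                    results[id] = LOCAL.get();   // unaffected by the other thread
                } finally {
                    LOCAL.remove();              // avoid leaks in pooled threads
                }
            });
            ts[i].start();
        }
        try {
            for (Thread t : ts) t.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return results[0] + results[1] * 10;   // encodes both values
    }

    public static void main(String[] args) {
        System.out.println(demo());   // 21 (thread 0 saw 1, thread 1 saw 2)
    }
}
```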

Fifth, Java concurrent containers

A. Concurrent collections

CopyOnWriteArrayList
Suited to read-many/write-few scenarios: every write copies the whole underlying array, so reads need no lock, at the cost of expensive writes.
ConcurrentHashMap
A thread-safe HashMap that locks at segment granularity (in versions before JDK 8).
The map is divided into Segments, 16 by default.
When ConcurrentHashMap writes, it locks only the segment the key hashes to,
so different segments can be written at the same time, which raises concurrency.
CopyOnWriteArraySet
CopyOnWriteArraySet is built on top of **CopyOnWriteArrayList** and has the same read/write characteristics.
ArrayBlockingQueue/LinkedBlockingQueue
ArrayBlockingQueue/LinkedBlockingQueue are blocking queues, typically used to hand work between producers and consumers.
ArrayBlockingQueue is array-backed and bounded; it guards both ends with a single ReentrantLock.
LinkedBlockingQueue is linked-list-backed; it uses two ReentrantLocks, one for put and one for take.
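A minimal producer-consumer sketch with LinkedBlockingQueue (class name is mine): put() blocks when the bounded queue is full and take() blocks when it is empty, and internally the two sides use separate locks.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class QueueDemo {
    public static int demo() {
        BlockingQueue<Integer> q = new LinkedBlockingQueue<>(2); // bounded
        final int[] sum = {0};
        Thread producer = new Thread(() -> {
            try { for (int i = 1; i <= 5; i++) q.put(i); }        // blocks when full
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        Thread consumer = new Thread(() -> {
            try { for (int i = 0; i < 5; i++) sum[0] += q.take(); } // blocks when empty
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        producer.start();
        consumer.start();
        try { producer.join(); consumer.join(); }
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return sum[0];   // 1+2+3+4+5
    }

    public static void main(String[] args) {
        System.out.println(demo());   // 15
    }
}
```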
LinkedBlockingDeque
LinkedBlockingDeque is a blocking deque backed by a doubly linked list, supporting insertion and removal at both ends;
it is implemented with a single ReentrantLock and two Conditions.