Thread Synchronization Related Problems

-          Deadlocks: Occur when two threads wait for each other and are suspended indefinitely.

For a deadlock to occur, each thread must own a resource and wait for the release of the other's resource in order to advance. This means neither thread releases its resource and neither continues its execution. To avoid deadlocks, thread execution should be designed so that threads obtain resources in the same order; that way, a circular wait, in which each thread holds one resource while waiting for the other, can never arise.
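
A minimal sketch of this lock-ordering idea, assuming two hypothetical resources guarded by lockA and lockB (the class and lock names are illustrative, not part of the original example):

public class LockOrderingExample {

    private final Object lockA = new Object();
    private final Object lockB = new Object();

    // Every method acquires lockA first and lockB second. Because all
    // threads obtain the locks in the same order, no thread can hold
    // lockB while waiting for lockA, so a circular wait cannot form.
    void firstOperation() {
        synchronized (lockA) {
            synchronized (lockB) {
                // ... work that needs both resources ...
            }
        }
    }

    void secondOperation() {
        synchronized (lockA) {
            synchronized (lockB) {
                // ... work that needs both resources ...
            }
        }
    }
}
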
-          Livelock: Happens when a thread acts in response to the action of another thread, and the other thread in turn responds to that action. In this way the threads keep each other busy, preventing one another from doing their actual work and thus producing nothing.
-          Starvation: Happens when a low-priority thread never gets a chance to obtain a specific resource because higher-priority threads are always acquiring it first.
-          Thread Interference: May occur when two threads are working on the same data. Consider the following example:
int a=7;

void commonMethod(){
    a+=5;
}

Suppose Thread 1 and Thread 2 read the value of variable a at the same time, both obtaining the value 7. Each thread then adds 5 to the value and writes the result back to variable a. At the end, the value of variable a will be 12. However, since the two threads represent two separate additions, the result should have been 17. This is a simple example of how a lack of thread synchronization can produce incorrect results.
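
A minimal sketch of one way to prevent this interference, assuming the field and method live in a hypothetical Counter class: declaring commonMethod as synchronized makes the read-modify-write atomic, so the two additions can no longer overlap.

public class Counter {

    private int a = 7;

    // Threads enter this method one at a time, so the read of a, the
    // addition and the write back happen as a single atomic step and
    // two concurrent calls always leave a increased by 10 in total.
    synchronized void commonMethod() {
        a += 5;
    }
}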

-          Memory Consistency Errors: In a multiprocessor environment each thread may run on a different processor with its own local cache. Memory consistency errors occur when a thread's modifications in its local cache are not visible to other threads working on the same data.
int a=7;

void commonMethod(){
    a+=5;
}

Consider the same example we used for thread interference. This time suppose everything goes fine: Thread 1 arrives, does the calculation and assigns the value 12 to variable a. Then Thread 2 comes, finds that the value of variable a is 12 and does its part, leaving a result of 17, which is what is really expected. However, this outcome is not guaranteed. Even though the threads arrive without interference, it is possible that each thread has its own copy of variable a in its local cache. In that case, even though Thread 1 assigned the value 12 to its copy of variable a, the change won't be visible to Thread 2. Thread 2 will still use variable a, actually its local copy of variable a, with the value 7.
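
A minimal sketch of how this visibility problem is commonly addressed, assuming the field lives in a hypothetical SharedData class: declaring a as volatile forces each write to be published to other threads rather than kept in a local cache.

public class SharedData {

    // volatile guarantees that a value written by one thread becomes
    // visible to other threads instead of staying in a local cache.
    private volatile int a = 7;

    // Note: volatile fixes visibility only; the read-modify-write below
    // is still not atomic, so the interference problem from the previous
    // section would additionally require synchronization.
    void commonMethod() {
        a += 5;
    }
}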

Unfortunately, there is more to memory consistency errors. Consider the following example:

int a=7;
boolean done=false;

void increment(){
    a+=5;
    done=true;
}

int get(){
    if(done){
        return a;
    }
   
    return -1;//indicating result is not ready yet
}

Suppose the get and increment methods are executed by separate threads. Thread 1 increments a by 5 and sets the done flag to indicate that the operation is finished. Thread 2 checks the done flag, and if it is true it can be sure the result is ready, because the value of a is calculated first and the flag is set afterwards. Everything looks perfect.

However, in the real world it is not so bright. Compilers, in the name of optimization, may change the order of instructions as they see fit, as long as the semantics of the code (as observed by a single thread) remain the same. As a result, it is possible that the compiler swaps a+=5 and done=true if it decides this will perform better. Let's consider the thread execution again. Thread 1 sets the done flag first, and before it calculates the value of a, Thread 2 comes along, checks the flag, observes that it is set and assumes the value of variable a is ready. But it is not ready yet. This is a totally undesired situation.
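
A minimal sketch of one common remedy, assuming the fields live in a hypothetical Result class: declaring done as volatile establishes a happens-before relationship, so the write to a cannot be moved after the write to done, and a reader that sees done as true is guaranteed to also see the updated a.

public class Result {

    private int a = 7;

    // The write to done is a volatile write: the assignment to a cannot
    // be reordered past it, and a thread that reads done as true is
    // guaranteed to also see the updated value of a.
    private volatile boolean done = false;

    void increment() {
        a += 5;
        done = true;
    }

    int get() {
        if (done) {
            return a;
        }
        return -1; // indicating the result is not ready yet
    }
}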


Conclusion: To deal with memory consistency errors and the other synchronization-related problems mentioned above, access to shared resources must be correctly synchronized.
