Technology Corner

Category Archives: Multithreading

Multithreading Best Practices

Multithreading requires careful programming. For most tasks, you can reduce complexity by queuing requests for execution by thread pool threads. This topic addresses more difficult situations, such as coordinating the work of multiple threads, or handling threads that block.

Deadlocks and Race Conditions

Multithreading solves problems with throughput and responsiveness, but in doing so it introduces new problems: deadlocks and race conditions.

Deadlocks

A deadlock occurs when each of two threads tries to lock a resource the other has already locked. Neither thread can make any further progress.

Many methods of the managed threading classes provide time-outs to help you detect deadlocks. For example, the following code attempts to acquire a lock on the current instance. If the lock is not obtained in 300 milliseconds, Monitor.TryEnter returns false.

if (Monitor.TryEnter(this, 300))
{
    try
    {
        // Place code protected by the Monitor here.
    }
    finally
    {
        Monitor.Exit(this);
    }
}
else
{
    // Code to execute if the attempt times out.
}

Race Conditions

A race condition is a bug that occurs when the outcome of a program depends on which of two or more threads reaches a particular block of code first. Running the program many times produces different results, and the result of any given run cannot be predicted.

Race conditions can also occur when you synchronize the activities of multiple threads. Whenever you write a line of code, you must consider what might happen if a thread were preempted before executing the line (or before any of the individual machine instructions that make up the line), and another thread overtook it.

Number of Processors

Multithreading solves different problems for the single-processor computers that run most end-user software and for the multiprocessor computers typically used as servers.

Single-Processor Computers

Multithreading provides greater responsiveness to the computer user and uses idle time for background tasks. If you use multithreading on a single-processor computer:

  • Only one thread runs at any instant.
  • A background thread executes only when the main user thread is idle. A foreground thread that executes constantly starves background threads of processor time.
  • When you call the Thread.Start method on a thread, that thread does not start executing until the current thread yields or is preempted by the operating system.
  • Race conditions typically occur because the programmer did not anticipate the fact that a thread can be preempted at an awkward moment, sometimes allowing another thread to reach a code block first.

Multiprocessor Computers

Multithreading provides greater throughput. Ten processors can do ten times the work of one, but only if the work is divided so that all ten can be working at once; threads provide an easy way to divide the work and exploit the extra processing power. If you use multithreading on a multiprocessor computer:

  • The number of threads that can execute concurrently is limited by the number of processors.
  • A background thread executes only when the number of foreground threads executing is smaller than the number of processors.
  • When you call the Thread.Start method on a thread, that thread might or might not start executing immediately, depending on the number of processors and the number of threads currently waiting to execute.
  • Race conditions can occur not only because threads are preempted unexpectedly, but because two threads executing on different processors might be racing to reach the same code block.

Static Members and Static Constructors

A class is not initialized until its class constructor (static constructor in C#, Shared Sub New in Visual Basic) has finished running. To prevent the execution of code on a type that is not initialized, the common language runtime blocks all calls from other threads to static members of the class (Shared members in Visual Basic) until the class constructor has finished running.

For example, if a class constructor starts a new thread, and the thread procedure calls a static member of the class, the new thread blocks until the class constructor completes.

This applies to any type that can have a static constructor.
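As a minimal sketch of this behavior (the class and member names below are illustrative, not from any particular library), the static constructor starts a thread whose call back into the class blocks until initialization completes:

using System;
using System.Threading;

static class Initialized
{
    // The static constructor starts a thread that calls back into the class.
    static Initialized()
    {
        var t = new Thread(() => Report("from the static constructor's thread"));
        t.Start();
        Thread.Sleep(2000); // Simulate slow initialization.
        Console.WriteLine("Static constructor finished");
        // The call to Report on thread t stays blocked until this point.
    }

    public static void Report(string source)
    {
        Console.WriteLine("Report called " + source);
    }
}

class Program
{
    static void Main()
    {
        Initialized.Report("from Main"); // Triggers the static constructor.
        Console.ReadLine();
    }
}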

General Recommendations

Consider the following guidelines when using multiple threads:

  • Don’t use Thread.Abort to terminate other threads. Calling Abort on another thread is akin to throwing an exception on that thread, without knowing what point that thread has reached in its processing.
  • Don’t use Thread.Suspend and Thread.Resume to synchronize the activities of multiple threads. Do use Mutex, ManualResetEvent, AutoResetEvent, and Monitor.
  • Don’t control the execution of worker threads from your main program (using events, for example). Instead, design your program so that worker threads are responsible for waiting until work is available, executing it, and notifying other parts of your program when finished. If your worker threads do not block, consider using thread pool threads. Monitor.PulseAll is useful in situations where worker threads block.
  • Don’t use types as lock objects. That is, avoid code such as lock(typeof(X)) in C#, or the use of Monitor.Enter with Type objects. For a given type, there is only one instance of System.Type per application domain. If the type you take a lock on is public, code other than your own can take locks on it, leading to deadlocks.
  • Use caution when locking on instances, for example lock(this). If other code in your application, external to the type, takes a lock on the object, deadlocks could occur.
  • Do ensure that a thread that has entered a monitor always leaves that monitor, even if an exception occurs while the thread is in the monitor. The C# lock statement provides this behavior automatically, employing a finally block to ensure that Monitor.Exit is called. If you cannot ensure that Exit will be called, consider changing your design to use Mutex. A mutex is automatically released when the thread that currently owns it terminates.
  • Do use multiple threads for tasks that require different resources, and avoid assigning multiple threads to a single resource. For example, any task involving I/O benefits from having its own thread, because that thread will block during I/O operations and thus allow other threads to execute. User input is another resource that benefits from a dedicated thread. On a single-processor computer, a task that involves intensive computation coexists with user input and with tasks that involve I/O, but multiple computation-intensive tasks contend with each other.
  • Consider using methods of the Interlocked class for simple state changes, instead of using the lock statement. The lock statement is a good general-purpose tool, but the Interlocked class provides better performance for updates that must be atomic. Internally, it executes a single lock prefix if there is no contention. In code reviews, watch for code like that shown in the following examples. In the first example, a state variable is incremented:
lock (lockObject)
{
    myField++;
}

You can improve performance by using the Increment method instead of the lock statement, as follows:

System.Threading.Interlocked.Increment(ref myField);

In the second example, a reference type variable is updated only if it is a null reference (Nothing in Visual Basic).

if (x == null)
{
    lock (lockObject)
    {
        if (x == null)
        {
            x = y;
        }
    }
}

Performance can be improved by using the CompareExchange method instead, as follows:

System.Threading.Interlocked.CompareExchange(ref x, y, null);
  • Use a ReaderWriterLock when many threads read shared data and only a few threads write it. Taking a full lock in that case can hamper read and write performance; a sketch follows this list.
  • Background threads: When the application terminates (that is, when all foreground threads end), any running background threads are terminated abruptly, and the finally blocks on their execution stacks are bypassed. This is a problem if your program employs finally (or using) blocks to perform cleanup work such as releasing resources or deleting temporary files. To avoid this, you can explicitly wait for such background threads when exiting the application. There are two ways to accomplish this:
    • If you’ve created the thread yourself, call Join on the thread.
    • If you’re on a pooled thread, use an event wait handle.
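Here is a minimal sketch of the ReaderWriterLock recommendation above; the cache class, field names, and infinite timeouts are illustrative assumptions:

using System.Collections.Generic;
using System.Threading;

class ReadMostlyCache
{
    private readonly ReaderWriterLock _rwLock = new ReaderWriterLock();
    private readonly Dictionary<string, string> _data = new Dictionary<string, string>();

    public string Read(string key)
    {
        _rwLock.AcquireReaderLock(Timeout.Infinite); // Many readers may hold this at once.
        try
        {
            string value;
            _data.TryGetValue(key, out value);
            return value;
        }
        finally
        {
            _rwLock.ReleaseReaderLock();
        }
    }

    public void Write(string key, string value)
    {
        _rwLock.AcquireWriterLock(Timeout.Infinite); // Writers get exclusive access.
        try
        {
            _data[key] = value;
        }
        finally
        {
            _rwLock.ReleaseWriterLock();
        }
    }
}

ReaderWriterLockSlim (available from .NET 3.5 onward) is generally preferred for new code, but the pattern is the same.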

Recommendations for Class Libraries

Consider the following guidelines when designing class libraries for multithreading:

  • Avoid the need for synchronization, if possible. This is especially true for heavily used code. For example, an algorithm might be adjusted to tolerate a race condition rather than eliminate it. Unnecessary synchronization decreases performance and creates the possibility of deadlocks and race conditions.
  • Make static data (Shared in Visual Basic) thread safe by default; a minimal sketch follows this list.
  • Do not make instance data thread safe by default. Adding locks to create thread-safe code decreases performance, increases lock contention, and creates the possibility for deadlocks to occur. In common application models, only one thread at a time executes user code, which minimizes the need for thread safety. For this reason, the .NET Framework class libraries are not thread safe by default.
  • Avoid providing static methods that alter static state. In common server scenarios, static state is shared across requests, which means multiple threads can execute that code at the same time. This opens up the possibility of threading bugs. Consider using a design pattern that encapsulates data into instances that are not shared across requests. Furthermore, if static data are synchronized, calls between static methods that alter state can result in deadlocks or redundant synchronization, adversely affecting performance.
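As a minimal sketch of the "make static data thread safe by default" guideline above (the counter class and lock field are illustrative assumptions):

using System.Threading;

public class RequestCounter
{
    // Static state is shared across all threads, so guard it with a private static lock.
    private static readonly object s_lock = new object();
    private static int s_count;

    public static int NextRequestId()
    {
        lock (s_lock)
        {
            return ++s_count; // Read-modify-write made atomic by the lock.
        }
    }
}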

Volatile and Thread.MemoryBarrier

Use a volatile variable or the Thread.MemoryBarrier method when you want to access a variable across threads without taking a lock.

You can read more about Volatile and Thread.MemoryBarrier on MSDN.

Volatile keyword:


Excerpts from MSDN

The volatile modifier is usually used for a field that is accessed by multiple threads without using the lock statement to serialize access.

The volatile keyword can be applied to fields of these types:

  • Reference types.

  • Pointer types (in an unsafe context). Note that although the pointer itself can be volatile, the object that it points to cannot. In other words, you cannot declare a "pointer to volatile."

  • Types such as sbyte, byte, short, ushort, int, uint, char, float, and bool.

  • An enum type with one of the following base types: byte, sbyte, short, ushort, int, or uint.

  • Generic type parameters known to be reference types.

  • IntPtr and UIntPtr.

The volatile keyword can only be applied to fields of a class or struct. Local variables cannot be declared volatile.
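As a minimal sketch of a volatile field in use (the stop-flag pattern below is illustrative, not part of the MSDN excerpt):

using System;
using System.Threading;

class Worker
{
    // volatile ensures every read sees the most recent write across threads.
    private volatile bool _shouldStop;

    public void DoWork()
    {
        while (!_shouldStop)
        {
            // Without volatile, the JIT could cache _shouldStop in a register
            // and this loop might never observe the stop request.
        }
        Console.WriteLine("Worker stopped.");
    }

    public void RequestStop()
    {
        _shouldStop = true;
    }
}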


Thread.MemoryBarrier Method

MemoryBarrier is a memory fence that prevents any kind of instruction reordering or caching around that fence.

A good approach is to start by putting memory barriers before and after every instruction that reads or writes a shared field, and then strip away the ones that you don’t need. If you’re uncertain of any, leave them in. Or better: switch back to using locks!


 

class MemoryBarrierExample
{
    int _valueTobeSet;
    bool _flag;

    public void RunOnThread1()
    {
        _valueTobeSet = 1900;
        _flag = true;
    }

    public void RunOnThread2()
    {
        if (_flag)
        {
            Console.WriteLine(_valueTobeSet);
        }
    }
}

If “RunOnThread1” and “RunOnThread2” run concurrently on different threads, the output could be “0”, because the compiler, the CLR, or the CPU may apply caching and reordering optimizations, so an assignment made on one thread is not necessarily visible to other threads right away. This happens mostly on multiprocessor systems.

If you put a MemoryBarrier before and after each read and write of the variables, the most up-to-date values become visible to all threads.

class MemoryBarrierExample
{
    int _valueTobeSet;
    bool _flag;

    public void RunOnThread1()
    {
        _valueTobeSet = 1900;
        Thread.MemoryBarrier();
        _flag = true;
        Thread.MemoryBarrier();
    }

    public void RunOnThread2()
    {
        Thread.MemoryBarrier();
        if (_flag)
        {
            Thread.MemoryBarrier();
            Console.WriteLine(_valueTobeSet);
        }
    }
}

You can also achieve this functionality by using “lock”, but a lock carries a performance penalty.

Note: you can also introduce memory barriers by using Thread.VolatileRead/Thread.VolatileWrite (these two methods effectively replace the volatile keyword), Thread.MemoryBarrier, or even the C# lock keyword.
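A minimal sketch of the Thread.VolatileRead/Thread.VolatileWrite alternative mentioned in the note; the class and field names are illustrative:

using System;
using System.Threading;

class VolatileReadWriteExample
{
    private int _sharedValue; // Not declared volatile.

    public void Writer()
    {
        // VolatileWrite inserts a barrier so the write is immediately visible.
        Thread.VolatileWrite(ref _sharedValue, 1900);
    }

    public void Reader()
    {
        // VolatileRead inserts a barrier so the latest written value is read.
        int value = Thread.VolatileRead(ref _sharedValue);
        Console.WriteLine(value);
    }
}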

Multithreading Concept using .Net – Part III

Uses of Threads

Foreground and Background Threads

By default, threads are foreground threads. If the application tries to close while any foreground thread is still running, it closes only after all foreground threads have finished, whereas background threads are aborted automatically once all foreground threads have closed.

Therefore, you should use foreground threads to execute tasks that you really want to complete, like flushing data from a memory buffer out to disk. And you should use background threads for tasks that are not mission-critical, like recalculating spreadsheet cells or indexing records, because this work can continue again when the application restarts, and there is no need to force the application to stay active if the user wants to terminate it.

public static class Program
{
    public static void Main()
    {
        // Create a new thread (defaults to foreground)
        Thread t = new Thread(Worker);
        // Make the thread a background thread
        t.IsBackground = true;
        t.Start(); // Start the thread
        // If t is a foreground thread, the application won't die for about 10 seconds
        // If t is a background thread, the application dies immediately
        Console.WriteLine("Returning from Main");
    }

    private static void Worker()
    {
        Thread.Sleep(10000); // Simulate doing 10 seconds of work
        // The line below only gets displayed if this code is executed by a foreground thread
        Console.WriteLine("Returning from Worker");
    }
}

Thread Scheduling and Priorities

Every thread is assigned a priority level ranging from 0 (the lowest) to 31 (the highest). When the system decides which thread to assign to a CPU, it examines the priority 31 threads first and schedules them in a round-robin fashion. If a priority 31 thread is schedulable, it is assigned to a CPU. At the end of this thread’s time-slice, the system checks to see whether there is another priority 31 thread that can run; if so, it allows that thread to be assigned to a CPU. So long as priority 31 threads are schedulable, the system never assigns any thread with a priority of 0 through 30 to a CPU. This condition is called starvation, and it occurs when higher-priority threads use so much CPU time that they prevent lower-priority threads from executing. Starvation is much less likely to occur on a multiprocessor machine, because there a priority 31 thread and a priority 30 thread can run simultaneously. The system always tries to keep the CPUs busy, and CPUs sit idle only if no threads are schedulable.

.Net exposes priority levels as the ThreadPriority enumeration, so developers need not assign raw 1-31 values. There are five priority levels: Lowest, BelowNormal, Normal, AboveNormal, and Highest. Normal is the default and is therefore by far the most common priority.

static void Main(string[] args)
{
    sample s = new sample();
    Thread th1 = new Thread(s.Go1);
    th1.Name = "Thread1";
    th1.Priority = ThreadPriority.Lowest;
    Thread th2 = new Thread(s.Go2);
    th2.Name = "Thread2";
    th2.Priority = ThreadPriority.Highest;
    th1.Start();
    th2.Start();
    th1.Join();
    th2.Join();
    Console.WriteLine("Thread execution Over");
    Console.ReadLine();
}

class sample
{
    public void Go1()
    {
        Console.WriteLine("Running thread {0}", Thread.CurrentThread.Name);
    }

    public void Go2()
    {
        Console.WriteLine("Running thread {0}", Thread.CurrentThread.Name);
    }
}

Output:

Running thread Thread2

Running thread Thread1

In the above code, Thread1 has the lowest priority and Thread2 the highest; the CPU runs the highest-priority thread first.

Thread Pooling

Creating and destroying a thread is an expensive operation in terms of time. In addition, having lots of threads wastes memory resources and also hurts performance due to the operating system having to schedule and context switch between the runnable threads. To improve this situation, the CLR contains code to manage its own thread pool. You can think of a thread pool as being a set of threads that are available for your application’s own use. There is one thread pool per CLR; this thread pool is shared by all AppDomains controlled by that CLR. If multiple CLRs load within a single process, then each CLR has its own thread pool.

The great thing about the thread pool is that it manages the tension between having a few threads, to keep from wasting resources, and having more threads, to take advantage of multiprocessors, hyper threaded processors, and multi-core processors. And the thread pool is heuristic. If your application needs to perform many tasks and CPUs are available, the thread pool creates more threads. If your application’s workload decreases, the thread pool threads kill themselves.
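As a minimal sketch of handing work to the CLR thread pool (the work item and state value are illustrative):

using System;
using System.Threading;

class ThreadPoolExample
{
    static void Main()
    {
        // Queue a work item; the pool supplies (and later recycles) the thread.
        ThreadPool.QueueUserWorkItem(state =>
        {
            Console.WriteLine("Working on a pool thread, state = {0}", state);
        }, "hello");

        Console.ReadLine(); // Keep the process alive; pool threads are background threads.
    }
}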

I hope this post gave you good clarity on uses of threads.

Multithreading Concept using .Net – Part II

Synchronization Concepts

There are different strategies for making your code thread safe and for synchronizing threads.

Some of these are described below:

Basic Synchronization

Thread.Sleep: Blocks execution for the specified time period.

Thread.Sleep(0); // yields, allowing a context switch

Thread.Sleep(100); // blocks execution for 100 milliseconds

Thread.Sleep(TimeSpan.FromMinutes(1)); // blocks for 1 minute

Thread.Join: Blocks the calling thread until another thread ends.

Thread th1 = new Thread(Go);

th1.Start();

th1.Join(); // blocks here until thread th1 completes its execution

Advanced Synchronization

Lock:

The lock statement ensures that only one thread at a time can execute a critical section of code. The lock keyword expects the synchronization object to be a reference type.

NOTE: It is highly recommended that the synchronization object be privately scoped (for example, a private field) to prevent unintentional interaction from external code locking the same object.

A very common example of using lock is guarding a collection while items are read from and written to it.

Here is an example of a thread-safe generic list:

public class SynchronizedCollection<T> : IList<T>
{
    object locker = new object();
    private List<T> _list = new List<T>();

    #region IList<T> Members

    public int IndexOf(T item)
    {
        return _list.IndexOf(item);
    }

    public void Insert(int index, T item)
    {
        lock (locker)
            _list.Insert(index, item);
    }

    public void RemoveAt(int index)
    {
        lock (locker)
            _list.RemoveAt(index);
    }

    public T this[int index]
    {
        get
        {
            return _list[index];
        }
        set
        {
            _list[index] = value;
        }
    }

    #endregion

    #region ICollection<T> Members

    public void Add(T item)
    {
        lock (locker)
            _list.Add(item);
    }

    public void Clear()
    {
        lock (locker)
            _list.Clear();
    }

    public bool Contains(T item)
    {
        return _list.Contains(item);
    }

    public void CopyTo(T[] array, int arrayIndex)
    {
        _list.CopyTo(array, arrayIndex);
    }

    public int Count
    {
        get { return _list.Count; }
    }

    public bool IsReadOnly
    {
        get { return false; }
    }

    public bool Remove(T item)
    {
        lock (locker)
            return _list.Remove(item);
    }

    #endregion

    #region IEnumerable<T> Members

    public IEnumerator<T> GetEnumerator()
    {
        return _list.GetEnumerator();
    }

    #endregion

    #region IEnumerable Members

    System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator()
    {
        return _list.GetEnumerator();
    }

    #endregion
}

Monitor: This class is similar to the lock statement, but it has additional methods that help with synchronization and with avoiding deadlock.

Monitor.Enter(object obj): Acquires an exclusive lock on the specified object.

Monitor.Exit(object obj): Releases the exclusive lock on the specified object.

Monitor.TryEnter(object obj): Tries to acquire an exclusive lock; returns false if it fails, true otherwise. Overloads also accept a time period to wait while acquiring the lock.

Monitor.Wait(object obj): Releases the lock and blocks the current thread until it reacquires the lock on the specified object.

Monitor.Pulse(object obj): Notifies a thread in the waiting queue of a change in the locked object’s state.

Monitor.PulseAll(object obj): Notifies all threads in the waiting queue of a change in the locked object’s state.
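As a minimal sketch of how Monitor.Wait and Monitor.Pulse cooperate (a one-shot producer/consumer handshake; the lock object and flag are illustrative):

using System;
using System.Threading;

class WaitPulseExample
{
    static readonly object _locker = new object();
    static bool _ready;

    static void Consumer()
    {
        lock (_locker)
        {
            while (!_ready)            // Guard against spurious wakeups.
                Monitor.Wait(_locker); // Releases the lock and blocks until pulsed.
            Console.WriteLine("Consumer: signal received");
        }
    }

    static void Main()
    {
        new Thread(Consumer).Start();
        Thread.Sleep(1000); // Give the consumer time to start waiting.
        lock (_locker)
        {
            _ready = true;
            Monitor.Pulse(_locker); // Wake one waiting thread.
        }
        Console.ReadLine();
    }
}

A simpler use of Monitor, equivalent to the lock statement, looks like this: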

try
{
    Monitor.Enter(lock1);
    counter++;
}
finally
{
    Monitor.Exit(lock1);
}

You cannot call Monitor.Exit without a matching Monitor.Enter; doing so causes a runtime exception. The best practice is to call Monitor.Exit in a finally block, as this ensures the synchronization object is safely released. NOTE: The lock statement is a shortcut for the Monitor.Enter/Monitor.Exit pattern; the compiler converts a lock statement into exactly this in MSIL.

Mutex: Ensures that just one thread can access a resource or section of code. It can also work for inter-process synchronization.

// Do not request initial ownership; acquire explicitly via WaitOne.
Mutex mt = new Mutex(false, "test");
if (!mt.WaitOne(TimeSpan.FromSeconds(10)))
{
    Console.WriteLine("Another instance of this application is running");
    return;
}
try
{
    // Protected work goes here.
}
finally
{
    mt.ReleaseMutex(); // Release only after a successful WaitOne.
}

Note: A common use of Mutex is to ensure that only one instance of an application runs on a machine.

Semaphore: Limits the number of threads that can access a resource or pool of resources concurrently. Use the Semaphore class to control access to a pool of resources. Threads enter the semaphore by calling the WaitOne method, which is inherited from the WaitHandle class, and release the semaphore by calling the Release method.

The count on a semaphore is decremented each time a thread enters the semaphore, and incremented when a thread releases the semaphore. When the count is zero, subsequent requests block until other threads release the semaphore. When all threads have released the semaphore, the count is at the maximum value specified when the semaphore was created.

A thread can enter the semaphore multiple times by calling the WaitOne method repeatedly. To release some or all of these entries, the thread can call the parameterless Release() method overload multiple times, or it can call the Release(Int32) overload that specifies the number of entries to be released.

The Semaphore class does not enforce thread identity on calls to WaitOne or Release. It is the programmer’s responsibility to ensure that threads do not release the semaphore too many times. For example, suppose a semaphore has a maximum count of two, and that thread A and thread B both enter the semaphore. If a programming error in thread B causes it to call Release twice, both calls succeed. The count on the semaphore is then full, and when thread A eventually calls Release, a SemaphoreFullException is thrown.

class sample
{
    public int counter;
    Semaphore sm = new Semaphore(1, 1);

    public void SemaphoreExample()
    {
        try
        {
            sm.WaitOne();
            counter++;
            Console.WriteLine("Counter {0} increased by Thread {1}", counter, Thread.CurrentThread.Name);
            Thread.Sleep(1000);
            sm.Release();
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);
        }
    }
}

class Program
{
    static void Main(string[] args)
    {
        sample s = new sample();
        Thread th1 = new Thread(s.SemaphoreExample);
        th1.Name = "Thread1";
        Thread th2 = new Thread(s.SemaphoreExample);
        th2.Name = "Thread2";
        th1.Start();
        th2.Start();
        th1.Join();
        th2.Join();
        Console.WriteLine("Thread execution Over");
        Console.ReadLine();
    }
}

In the above code, only one thread at a time can access the critical section between sm.WaitOne() and sm.Release().

Wait Handles: Wait handles are a synchronization mechanism used for signaling. If one task is dependent on another task, you should use wait handles: one thread waits to be signaled, and another thread signals the first thread to resume its task. There are three classes derived from the WaitHandle class: Mutex, Semaphore, and EventWaitHandle. I have already covered the Mutex and Semaphore classes. EventWaitHandle has two subclasses: AutoResetEvent and ManualResetEvent.

AutoResetEvent

AutoResetEvent allows threads to communicate with each other by signaling. Typically, this communication concerns a resource to which threads need exclusive access. A thread waits for a signal by calling WaitOne on the AutoResetEvent. If the AutoResetEvent is in the non-signaled state, the thread blocks, waiting for the thread that currently controls the resource to signal that the resource is available by calling Set. Calling Set signals the AutoResetEvent to release a waiting thread. The AutoResetEvent remains signaled until a single waiting thread is released, and then automatically returns to the non-signaled state. If no threads are waiting, the state remains signaled indefinitely. If a thread calls WaitOne while the AutoResetEvent is in the signaled state, the thread does not block; the AutoResetEvent releases the thread immediately and returns to the non-signaled state.

class sample
{
    AutoResetEvent _waitHandle = new AutoResetEvent(false);

    public void RunThread1()
    {
        for (int cntr = 0; cntr < 2; cntr++)
        {
            Thread.Sleep(1000);
            _waitHandle.Set();
        }
    }

    public void RunThread2()
    {
        for (int cntr = 0; cntr < 2; cntr++)
        {
            Console.WriteLine("Waiting to be Signaled by thread1 Time:" + DateTime.Now.ToString("HH:mm:ss"));
            _waitHandle.WaitOne();
            Console.WriteLine("Signaled by thread1 Time:" + DateTime.Now.ToString("HH:mm:ss"));
        }
    }
}

static void Main(string[] args)
{
    sample s = new sample();
    Thread th1 = new Thread(s.RunThread1);
    th1.Name = "Thread1";
    Thread th2 = new Thread(s.RunThread2);
    th2.Name = "Thread2";
    th1.Start();
    th2.Start();
    th1.Join();
    th2.Join();
    Console.WriteLine("Thread execution Over");
    Console.ReadLine();
}

Output:

Waiting to be Signaled by thread Time:11:17:32

Signaled by thread1 Time: 11:17:33

Waiting to be Signaled by thread Time: 11:17:34

Signaled by thread1 Time: 11:17:34

In the above code, thread1 calls the “Set” method of the AutoResetEvent object, and thread2 waits to be signaled by thread1. Unlike a ManualResetEvent, an AutoResetEvent automatically resets to the non-signaled state after releasing a single thread blocked on WaitOne.

If you replace the AutoResetEvent with a ManualResetEvent in the above example, the output will be:

Waiting to be Signaled by thread Time:11:17:32

Signaled by thread1 Time: 11:17:33

Waiting to be Signaled by thread Time: 11:17:33

Signaled by thread1 Time: 11:17:33

If you do not call the Reset method on the ManualResetEvent instance, the WaitOne method will not block execution, because the event stays signaled.

To restore the blocking behavior, put _waitHandle.Reset(); after _waitHandle.Set();

In the next post I’ll cover uses of threads.

Multithreading with C#

This is the first post on multithreading concepts in .Net. In this post I discuss the basics of multithreading.

The first question in our minds is what a thread is and why we should use it.

A thread is an independent execution path that runs in parallel with the main task. A thread is contained inside a process, and multiple threads can execute in the same process, so they can share the process’s memory and resources. On a single processor, multithreading generally occurs by time-division multiplexing: the processor switches between different threads. On a multiprocessor system, the threads or tasks generally run at the same time, with each processor or core running a particular thread or task.

Below are some differences between a process and a thread:

  • A process is independent and runs within its own isolated boundary, with an address space assigned by the OS, while a thread is a subset of a process and shares the process’s memory and resources.
  • Context switching between threads in the same process is typically faster than context switching between processes.
  • Processes interact only through system-provided inter-process communication (such as remoting or message queues).

Advantages and uses of multithreading

Multithreading is a widespread programming model that allows multiple threads to run in the same process context. Its advantage is faster execution of tasks on multi-CPU machines, where it enables true concurrent execution. When implementing parallelism in a program, we need to take care of race conditions and deadlocks.

Race Condition:

A race condition is a bug that occurs when the outcome of a program depends on which of two or more threads reaches a particular block of code first. Running the program many times produces different results, and the result of any given run cannot be predicted.

class Program
{
    static void Main(string[] args)
    {
        sample s = new sample();
        Thread th1 = new Thread(s.Increment);
        th1.Name = "Thread1";
        Thread th2 = new Thread(s.Increment);
        th2.Name = "Thread2";
        th1.Start();
        th2.Start();
        Console.ReadLine();
    }
}

class sample
{
    public int counter;

    public void Increment()
    {
        for (int i = 0; i < 4; i++)
        {
            counter++;
            Console.WriteLine("{0}:{1}", Thread.CurrentThread.Name, counter);
        }
    }
}

Output:

Thread1:2

Thread2:2

Thread1:3

Thread2:4

Thread1:5

Thread2:6

Thread2:8

Every time you run this program the output will be different because of the race condition: Thread1 and Thread2 race to execute the code, which results in incorrect data.

To handle this situation we can use synchronization, for example the lock statement.

public void Increment()
{
    lock (this)
    {
        for (int i = 0; i < 4; i++)
        {
            counter++;
            Console.WriteLine("{0}:{1}", Thread.CurrentThread.Name, counter);
            Debug.WriteLine(Thread.CurrentThread.Name + ":" + counter.ToString());
        }
    }
}


Output

Thread1:1

Thread1:2

Thread1:3

Thread1:4

Thread2:5

Thread2:6

Thread2:7

Thread2:8

Now the above output is synchronized and will always be correct, because the lock block uses the sample class instance as its key. Thread1 acquires the lock first and releases it after executing the code, while Thread2 waits for the lock to be released. In this way we avoid the race condition.

Deadlock

A deadlock occurs when each of two threads tries to lock a resource the other has already locked. Neither thread can make any further progress.

class sample
{
    public int counter;
    private object lock1 = new object();
    private object lock2 = new object();

    public void Increment()
    {
        lock (lock1)
        {
            Thread.Sleep(1000);
            lock (lock2)
            {
                for (int i = 0; i < 4; i++)
                {
                    counter++;
                    Console.WriteLine("{0}:{1}", Thread.CurrentThread.Name, counter);
                    DeadLockExample();
                }
            }
        }
    }

    public void DeadLockExample()
    {
        lock (lock2)
            lock (lock1)
                Console.WriteLine("DeadLock Example Called");
    }
}

static void Main(string[] args)
{
    sample s = new sample();
    Thread th1 = new Thread(s.Increment);
    th1.Name = "Thread1";
    Thread th2 = new Thread(s.DeadLockExample);
    th2.Name = "Thread2";
    th1.Start();
    th2.Start();
    th1.Join();
    th2.Join();
    Console.WriteLine("Thread execution Over");
    Console.ReadLine();
}

In the above example, the Increment method calls the DeadLockExample method, which takes the two locks in the reverse order from Increment. Thread2 tries to acquire the lock on the lock2 object, but that object is already locked by Thread1, which leads to a deadlock.

Avoiding Deadlocks with Lock Leveling

A common approach to avoiding deadlock is lock leveling (also called lock ordering). This strategy factors all locks into numeric levels, permitting components at specific architectural layers in the system to acquire locks only at lower levels.

public void Increment()
{
    lock (lock1)
    {
        Thread.Sleep(1000);
        lock (lock2)
        {
            for (int i = 0; i < 4; i++)
            {
                counter++;
                Console.WriteLine("{0}:{1}", Thread.CurrentThread.Name, counter);
                DeadLockExample();
            }
        }
    }
}

public void DeadLockExample()
{
    // Locks are now taken in the same order as in Increment: lock1, then lock2.
    lock (lock1)
        lock (lock2)
            Console.WriteLine("DeadLock Example Called");
}

You can detect and avoid deadlock by using the Monitor.TryEnter method, which returns true or false to indicate whether the lock was acquired.

Monitor.TryEnter(this, 300); // waits up to 300 milliseconds to acquire the lock

How To Start Thread

Threads in .Net can be created using the ThreadStart or ParameterizedThreadStart delegates.

Thread th1 = new Thread(new ThreadStart(Run));
th1.Start();

Run is the method that will execute on the thread. There are other, more convenient syntaxes for running a method on a thread:

1.

Thread th1 = new Thread(Run);
th1.Start();

2.

new Thread(Run).Start();

3. Running a thread using an anonymous method or lambda

new Thread(()=>{Console.WriteLine("Running in thread");}).Start();

Passing data while starting Thread

You can use ParameterizedThreadStart delegate to pass object while starting Thread.

Thread th1 = new Thread(new ParameterizedThreadStart(Run));
th1.Start("hello");

Thread th1 = new Thread(Run);
th1.Start("hello");

new Thread(Run).Start("Hello");

Thread th1 = new Thread(delegate() { Run("hello"); });
th1.Start();

 

Next post will cover Synchronization concepts.