@onyb
Last active September 12, 2015 01:38
Operating Systems (Third Edition)
Harvey M. Deitel, Paul J. Deitel, David R. Choffnes
-------------------------------------------------------------------------------
CHAPTER 5: Asynchronous Concurrent Execution
-------------------------------------------------------------------------------
# Introduction
Asynchronous execution of threads
* threads usually run independently of one another
* must occasionally communicate and synchronize to perform cooperative
tasks
* such interactions are complex and difficult to manage
# Mutual Exclusion
- Problem of two or more threads updating shared data simultaneously.
- Data can be left in an inconsistent state.
- Context switch can occur at any time, such as before a thread finishes
modifying the value.
- Need to access such data with mutual exclusion.
- Only one thread allowed access at a time.
- Other threads must wait until shared resource is unlocked.
- Serialized access
- Must be managed such that wait time is reasonable.
# Producer-Consumer Relationship in Java
- One thread (producer) creates data to store in a shared object.
- Other thread (consumer) reads from that object.
/* Buffer.java
* Buffer interface specifies methods to access buffer data
*/
public interface Buffer
{
public void set(int value); /* Place value into Buffer*/
public int get(); /* Return value from Buffer */
}
/* Producer.java
* Producer's run() method controls a producer thread that stores values
from 1 to 4 in Buffer sharedLocation
*/
public class Producer extends Thread
{
private Buffer sharedLocation; /* Reference to shared object */
/* Producer constructor */
public Producer(Buffer shared)
{
super("Producer"); /* Create thread named "Producer" */
sharedLocation = shared; /* initialize sharedLocation */
}
/* run() stores values from 1 to 4 in Buffer sharedLocation */
public void run()
{
for(int count=1; count<=4; count++)
{
/* Sleep 0 to 3 seconds, then place value in Buffer */
try
{
Thread.sleep((int)(Math.random()*3001));
sharedLocation.set(count);
}
catch(InterruptedException e)
{
e.printStackTrace();
}
}
System.err.println(getName() + " done producing");
System.err.println("Terminating " + getName());
}
}
/* Consumer.java
* Consumer's run() method controls a consumer thread that loops 4 times
and reads a value from sharedLocation each time.
*/
public class Consumer extends Thread
{
private Buffer sharedLocation; /* Reference to shared object */
/* Consumer constructor */
public Consumer(Buffer shared)
{
super("Consumer"); /* create thread named "Consumer" */
sharedLocation = shared;
}
/* read sharedLocation's value four times and sum the values */
public void run()
{
int sum = 0;
for(int count=1; count<=4; count++)
{
try
{
Thread.sleep((int)(Math.random()*3001));
sum += sharedLocation.get();
}
catch(InterruptedException e)
{
e.printStackTrace();
}
}
System.err.println(getName() + " read values totaling: " + sum);
System.err.println("Terminating " + getName());
}
}
/* UnsynchronizedBuffer.java
* UnsynchronizedBuffer represents a single shared integer
*/
public class UnsynchronizedBuffer implements Buffer
{
private int buffer = -1; /* shared by Producer and Consumer */
public void set(int value)
{
System.err.println(Thread.currentThread().getName() + " writes "
+ value);
buffer = value;
}
public int get()
{
System.err.println(Thread.currentThread().getName() + " reads "
+ buffer);
return buffer;
}
}
/* SharedBufferTest.java
* Driver program
*/
public class SharedBufferTest
{
public static void main(String[] args)
{
Buffer sharedLocation = new UnsynchronizedBuffer();
Producer producer = new Producer(sharedLocation);
Consumer consumer = new Consumer(sharedLocation);
producer.start();
consumer.start();
}
}
Sample output 1:
Consumer reads -1
Producer writes 1
Consumer reads 1
Producer writes 2
Consumer reads 2
Producer writes 3
Consumer reads 3
Consumer read values totaling: 5
Terminating Consumer
Producer writes 4
Producer done producing
Terminating Producer
Sample Output 2:
Consumer reads -1
Producer writes 1
Producer writes 2
Producer writes 3
Consumer reads 3
Producer writes 4
Producer done producing
Terminating Producer
Consumer reads 4
Consumer reads 4
Consumer read values totaling: 10
Terminating Consumer
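The incorrect totals above disappear if the buffer itself enforces synchronization. The notes do not show a corrected buffer, so the following is a sketch (class names SynchronizedBufferDemo and SynchronizedBuffer are mine) using Java's built-in monitor: set() blocks while an unconsumed value is present and get() blocks while the buffer is empty, so the consumer reads each of the values 1 to 4 exactly once and always totals 10.

```java
/* Sketch (not from the text): a buffer whose accessors use Java's
 * built-in monitor so every produced value is read exactly once. */
public class SynchronizedBufferDemo
{
    interface Buffer
    {
        void set(int value);
        int get();
    }

    static class SynchronizedBuffer implements Buffer
    {
        private int buffer = -1;
        private boolean occupied = false; /* true while an unread value is held */

        public synchronized void set(int value)
        {
            while (occupied) /* block producer until last value is consumed */
            {
                try { wait(); }
                catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            }
            buffer = value;
            occupied = true;
            notifyAll(); /* wake a waiting consumer */
        }

        public synchronized int get()
        {
            while (!occupied) /* block consumer until a value is produced */
            {
                try { wait(); }
                catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            }
            occupied = false;
            notifyAll(); /* wake a waiting producer */
            return buffer;
        }
    }

    static int sum = 0; /* total read by the consumer */

    public static void main(String[] args)
    {
        Buffer shared = new SynchronizedBuffer();
        Thread producer = new Thread(() -> {
            for (int count = 1; count <= 4; count++)
                shared.set(count);
        });
        Thread consumer = new Thread(() -> {
            int s = 0;
            for (int count = 1; count <= 4; count++)
                s += shared.get();
            sum = s;
        });
        producer.start();
        consumer.start();
        try { producer.join(); consumer.join(); }
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        System.out.println("Consumer read values totaling: " + sum); /* always 10 */
    }
}
```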
# Critical Sections
- Most code is safe to run concurrently
- Sections where shared data is modified must be protected. These sections
are called critical sections.
- Only one thread should be allowed to be in a critical section at once.
- Caution should be exercised to avoid infinite loops and blocking inside a
critical section.
- A thread could terminate while in its critical section and never
release mutual exclusion; the threads waiting to enter that critical
section would never be able to enter. Thus the OS must perform
termination housekeeping if a thread terminates in its critical section.
- Need to use Mutual Exclusion Primitives to indicate when data is about to
be accessed.
- Such primitives are provided by programming languages or libraries.
- Delimit beginning and end of critical section as:
enterMutualExclusion()
exitMutualExclusion()
# Implementing Mutual Exclusion Primitives
- Each mutual exclusion machine language instruction is executed
indivisibly, i.e. once started it completes without interruption.
- Cannot make assumptions about relative speed of thread execution.
- Thread not in its critical section cannot block other threads from
entering their critical sections.
- A thread may not be indefinitely postponed from entering its critical
section.
# Dekker's Algorithm v1
- Succeeds in enforcing mutual exclusion.
- Uses a variable to which both threads have access, to control which
thread can execute.
- Constantly tests whether critical section is available
* busy waiting
* wastes significant processor time
- Major drawback is that threads must enter and leave their critical
sections in strict alternation. If one thread needs to enter its critical
section more frequently than the other, the faster thread will be
constrained to operate at the speed of the slower thread. This is called
the lockstep synchronization problem.
System:
int threadNumber = 1;
startThreads(); /* Initialize and launch two threads */
Thread T1:
void main()
{
while(!done)
{
while(threadNumber == 2); /* enterMutualExclusion() */
/* Critical section code */
threadNumber = 2; /* exitMutualExclusion() */
/* Code outside critical section */
}
}
Thread T2:
void main()
{
while(!done)
{
while(threadNumber == 1); /* enterMutualExclusion() */
/* Critical section code */
threadNumber = 1; /* exitMutualExclusion() */
/* Code outside critical section */
}
}
# Dekker's Algorithm v2
- Removes lockstep synchronization
- Violates mutual exclusion
- Thread could be preempted while updating flag variable, in which case
both threads will be executing in the critical section simultaneously.
System:
boolean T1_inside = false;
boolean T2_inside = false;
startThreads(); /* initialize and launch both threads */
Thread T1:
void main()
{
while(!done)
{
while(T2_inside); /* enterMutualExclusion() */
T1_inside = true;
/* Critical section code */
T1_inside = false; /* exitMutualExclusion() */
}
}
Thread T2:
void main()
{
while(!done)
{
while(T1_inside); /* enterMutualExclusion() */
T2_inside = true;
/* Critical section code */
T2_inside = false; /* exitMutualExclusion() */
}
}
# Dekker's Algorithm v3
- Set critical section flag before entering critical section test
- Guarantees mutual exclusion, but introduces possibility of deadlock.
- Deadlock can occur when both threads set flag simultaneously. Neither
would ever be able to break out of loop.
System:
boolean T1_WantsToEnter = false;
boolean T2_WantsToEnter = false;
startThreads(); /* initialize and launch both threads */
Thread T1:
void main()
{
while(!done)
{
T1_WantsToEnter = true; /* enterMutualExclusion() */
while(T2_WantsToEnter);
/* critical section code */
T1_WantsToEnter = false; /* exitMutualExclusion() */
}
}
Thread T2:
void main()
{
while(!done)
{
T2_WantsToEnter = true; /* enterMutualExclusion() */
while(T1_WantsToEnter);
/* critical section code */
T2_WantsToEnter = false; /* exitMutualExclusion() */
}
}
# Dekker's Algorithm v4
- This version is able to break out of deadlocks by forcing each looping
thread to repeatedly set its flag to false for brief periods.
- Guarantees mutual exclusion, but introduces indefinite postponement.
- Both threads could set flags to same values at same time.
- Unacceptable in mission-critical and business-critical systems.
System:
boolean T1_WantsToEnter = false;
boolean T2_WantsToEnter = false;
startThreads(); /* initialize and launch both threads */
Thread T1:
void main()
{
while(!done)
{
T1_WantsToEnter = true; /* enterMutualExclusion() */
while(T2_WantsToEnter) /* handle deadlock here */
{
T1_WantsToEnter = false;
/* wait for some small random amount of time */
T1_WantsToEnter = true;
}
/* critical section code */
T1_WantsToEnter = false; /* exitMutualExclusion() */
}
}
Thread T2:
void main()
{
while(!done)
{
T2_WantsToEnter = true; /* enterMutualExclusion() */
while(T1_WantsToEnter) /* handle deadlock here */
{
T2_WantsToEnter = false;
/* wait for some small random amount of time */
T2_WantsToEnter = true;
}
/* critical section code */
T2_WantsToEnter = false; /* exitMutualExclusion() */
}
}
# Dekker's Algorithm v5 - Proper Solution
- Proper, but very complicated solution.
- Uses notion of favored threads to determine entry into critical sections
in case of a contention.
- Resolves conflict over which thread should execute first.
- Favored status alternates between threads.
- Thread temporarily unsets critical section request flag in case it is not
the favored thread.
- Guarantees mutual exclusion.
- Avoids previous problems of deadlock, and indefinite postponement.
System:
int favoredThread = 1;
boolean T1_WantsToEnter = false;
boolean T2_WantsToEnter = false;
startThreads(); /* initialize and launch both threads */
Thread T1:
void main()
{
while(!done)
{
T1_WantsToEnter = true; /* enterMutualExclusion() */
while(T2_WantsToEnter) /* handle deadlock here */
{
if(favoredThread == 2)
{
T1_WantsToEnter = false;
while(favoredThread == 2); /* busy wait */
T1_WantsToEnter = true;
}
}
/* critical section code */
favoredThread = 2;
T1_WantsToEnter = false; /* exitMutualExclusion() */
}
}
Thread T2:
void main()
{
while(!done)
{
T2_WantsToEnter = true; /* enterMutualExclusion() */
while(T1_WantsToEnter) /* handle deadlock here */
{
if(favoredThread == 1)
{
T2_WantsToEnter = false;
while(favoredThread == 1); /* busy wait */
T2_WantsToEnter = true;
}
}
/* critical section code */
favoredThread = 1;
T2_WantsToEnter = false; /* exitMutualExclusion() */
}
}
# Peterson's Algorithm
- Simpler and less convoluted than Dekker's algorithm v5.
- Still uses busy waiting technique, and favored thread.
- Requires fewer steps to perform mutual exclusion primitives.
- Easier to demonstrate correctness.
- Does not exhibit indefinite postponement, or deadlock.
System:
int favoredThread = 1;
boolean T1_WantsToEnter = false;
boolean T2_WantsToEnter = false;
startThreads(); /* initialize and launch both threads */
Thread T1:
void main()
{
while(!done)
{
T1_WantsToEnter = true;
favoredThread = 2;
while(T2_WantsToEnter && favoredThread == 2);
/* Critical section code */
T1_WantsToEnter = false;
}
}
Thread T2:
void main()
{
while(!done)
{
T2_WantsToEnter = true;
favoredThread = 1;
while(T1_WantsToEnter && favoredThread == 1);
/* Critical section code */
T2_WantsToEnter = false;
}
}
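Peterson's algorithm can be run as real Java, provided the shared variables are declared volatile so that their reads and writes are sequentially consistent (otherwise the JVM may reorder or cache them, breaking the algorithm). The demo below is a sketch, not from the text: two threads each increment a shared counter inside the Peterson-protected critical section, and the final count is deterministic.

```java
/* Sketch (not from the text): Peterson's algorithm with volatile shared
 * variables; the counter is only ever modified inside the critical
 * section, so the final total is exact. */
public class PetersonDemo
{
    static volatile boolean t1WantsToEnter = false;
    static volatile boolean t2WantsToEnter = false;
    static volatile int favoredThread = 1;
    static int counter = 0; /* shared data protected by the Peterson lock */
    static final int N = 50000; /* iterations per thread */

    public static void main(String[] args)
    {
        Thread t1 = new Thread(() -> {
            for (int i = 0; i < N; i++)
            {
                t1WantsToEnter = true;
                favoredThread = 2;
                while (t2WantsToEnter && favoredThread == 2); /* busy wait */
                counter++; /* critical section */
                t1WantsToEnter = false;
            }
        });
        Thread t2 = new Thread(() -> {
            for (int i = 0; i < N; i++)
            {
                t2WantsToEnter = true;
                favoredThread = 1;
                while (t1WantsToEnter && favoredThread == 1); /* busy wait */
                counter++; /* critical section */
                t2WantsToEnter = false;
            }
        });
        t1.start(); t2.start();
        try { t1.join(); t2.join(); }
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        System.out.println("counter = " + counter); /* always 2 * N */
    }
}
```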
# N-Thread Mutual Exclusion: Lamport's Bakery Algorithm
- Lamport's algorithm is modeled on a bakery in which one employee services
customer requests for baked goods, one customer at a time
- When multiple customers concurrently request service, service is given in
FCFS order on the basis of a numbered ticket
- Unlike a real-world ticket dispenser, Lamport's algorithm allows
multiple threads to obtain the same ticket number; ties can occur when a
context switch happens during ticket dispensation. A tie-breaking
mechanism ensures that only one thread can access the critical section
at a time.
- Unlike Dekker's and Peterson's Algorithms, the Bakery Algorithm works in
multiprocessor systems and for 'n' threads
- Relatively simple to understand due to its real-world analog
- When Tx exits its critical section, it sets its ticket value to 0 to
indicate that it is no longer executing in its critical section, nor is
it attempting to enter its critical section.
- Mutual exclusion is violated if thread Tx does not wait when Ti is
selecting a ticket. This might happen if a thread is preempted during
ticket dispensation, and some other thread is granted a ticket. The
violation arises if the other thread is preempted in its critical section
and the first thread is granted the processor.
- Lamport's Bakery Algorithm does not require its instructions to be
executed atomically, unlike Dekker's and Peterson's algorithms, which
need multiple threads to modify a shared variable in order to control
access to their critical sections.
- Though all threads in the system share the arrays choosing[] and ticket[],
thread Tx is the only thread that can modify values in choosing[x] and
ticket[x]. This prevents threads from reading inconsistent data.
- FCFS order, except when multiple threads possess the same ticket; ties
are broken by thread number.
System:
/* Array to record which threads are taking a ticket */
boolean choosing[n];
/* Value of the ticket for each thread, initialized to 0 */
int ticket[n];
/* Initialize and launch all threads */
startThreads();
Thread Tx:
void main()
{
x = ThreadNumber();
while(!done)
{
/* Take a ticket */
choosing[x] = true; /* begin ticket selection process */
ticket[x] = maxValue(ticket)+1;
choosing[x] = false; /* end ticket selection process */
/* Wait until turn of current ticket comes */
for(i=0; i < n; i++)
{
if(i == x)
continue;
/* Busy wait while Ti is choosing a ticket */
while(choosing[i] != false);
/* Busy wait while current ticket value is lowest */
while(ticket[i] != 0 && ticket[i] < ticket[x]);
/* In case of tie, favor smaller thread number */
if(ticket[i] == ticket[x] && i < x)
{
/* Busy wait until Ti exits critical section*/
while(ticket[i] != 0);
}
}
/* Critical section code */
ticket[x] = 0; /* exitMutualExclusion() */
}
}
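The Bakery Algorithm can also be sketched as runnable Java (this demo is mine, not from the text). AtomicIntegerArray stands in for choosing[] and ticket[] because ordinary Java array elements do not have volatile semantics; the thread count N and the shared counter are illustrative choices.

```java
import java.util.concurrent.atomic.AtomicIntegerArray;

/* Sketch (not from the text) of Lamport's Bakery Algorithm for N threads. */
public class BakeryDemo
{
    static final int N = 4; /* number of threads */
    /* AtomicIntegerArray gives each element volatile read/write semantics */
    static final AtomicIntegerArray choosing = new AtomicIntegerArray(N);
    static final AtomicIntegerArray ticket = new AtomicIntegerArray(N);
    static int counter = 0; /* shared data protected by the bakery lock */

    static void enterMutualExclusion(int x)
    {
        choosing.set(x, 1); /* begin ticket selection process */
        int max = 0;
        for (int i = 0; i < N; i++)
            max = Math.max(max, ticket.get(i));
        ticket.set(x, max + 1);
        choosing.set(x, 0); /* end ticket selection process */
        for (int i = 0; i < N; i++)
        {
            if (i == x) continue;
            while (choosing.get(i) != 0); /* busy wait while Ti is choosing */
            /* busy wait while Ti holds a lower ticket, or the same ticket
             * and a smaller thread number (the tie-breaking rule) */
            while (ticket.get(i) != 0
                    && (ticket.get(i) < ticket.get(x)
                        || (ticket.get(i) == ticket.get(x) && i < x)));
        }
    }

    static void exitMutualExclusion(int x)
    {
        ticket.set(x, 0);
    }

    public static void main(String[] args)
    {
        final int perThread = 5000;
        Thread[] threads = new Thread[N];
        for (int t = 0; t < N; t++)
        {
            final int id = t;
            threads[t] = new Thread(() -> {
                for (int i = 0; i < perThread; i++)
                {
                    enterMutualExclusion(id);
                    counter++; /* critical section */
                    exitMutualExclusion(id);
                }
            });
            threads[t].start();
        }
        try { for (Thread t : threads) t.join(); }
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        System.out.println("counter = " + counter); /* always N * perThread */
    }
}
```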
# Hardware Solutions for Mutual Exclusion
* Disabling Interrupts
- Works only on uniprocessor systems
- Threads are usually preempted via interrupts, so disabling interrupts
is a possible solution: it prevents the currently executing thread
from being preempted.
- Possible deadlock: thread waiting for I/O event in critical section
- Rarely used
* Test and Set instruction
- Shared data can become corrupted because a system may preempt a thread
after it has read the value at a memory location, but before the
thread can write a new value to the location.
- Test-and-set instructions enable a thread to perform Read Modify Write
(RMW) memory operations atomically (without interruptions).
- testAndSet(a, b) -> copies the value of b to a, then sets b to true
- testAndSet(a, b) ensures mutual exclusion, but does not by itself
prevent indefinite postponement. The algorithm requires a favoredThread
variable to prevent indefinite postponement.
System:
boolean occupied = false;
startThreads(); /* initialize and launch both threads */
Thread T1:
void main()
{
boolean p1MustWait = true;
while(!done)
{
while(p1MustWait)
testAndSet(p1MustWait, occupied);
/* Critical section code */
p1MustWait = true;
occupied = false;
}
}
Thread T2:
void main()
{
boolean p2MustWait = true;
while(!done)
{
while(p2MustWait)
testAndSet(p2MustWait, occupied);
/* Critical section code */
p2MustWait = true;
occupied = false;
}
}
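Java exposes a test-and-set-style primitive through java.util.concurrent.atomic.AtomicBoolean: getAndSet(true) atomically returns the old value and stores true. Below is a sketch (class and method names are mine) of the spin lock above built on it; note the argument order differs from the notes' testAndSet(a, b).

```java
import java.util.concurrent.atomic.AtomicBoolean;

/* Sketch (not from the text): a test-and-set spin lock using
 * AtomicBoolean.getAndSet, which atomically reads the old value and
 * writes true. */
public class TestAndSetDemo
{
    /* occupied plays the role of the shared variable in the notes */
    static final AtomicBoolean occupied = new AtomicBoolean(false);
    static int counter = 0; /* shared data protected by the spin lock */
    static final int PER_THREAD = 100000;

    static void enterMutualExclusion()
    {
        while (occupied.getAndSet(true)); /* spin until we observe false */
    }

    static void exitMutualExclusion()
    {
        occupied.set(false);
    }

    public static void main(String[] args)
    {
        Runnable work = () -> {
            for (int i = 0; i < PER_THREAD; i++)
            {
                enterMutualExclusion();
                counter++; /* critical section */
                exitMutualExclusion();
            }
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        try { t1.join(); t2.join(); }
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        System.out.println("counter = " + counter); /* always 2 * PER_THREAD */
    }
}
```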
* Swap Instruction
- swap(a, b) -> Exchanges the value of a and b atomically
System:
boolean occupied = false;
startThreads(); /* initialize and launch both threads */
Thread T1:
void main()
{
boolean p1MustWait = true;
while(!done)
{
do
{
swap(p1MustWait, occupied);
}while(p1MustWait);
/* Critical section code */
p1MustWait = true;
occupied = false;
}
}
Thread T2:
void main()
{
boolean p2MustWait = true;
while(!done)
{
do
{
swap(p2MustWait, occupied);
}while(p2MustWait);
/* Critical section code */
p2MustWait = true;
occupied = false;
}
}
# Semaphores
- A semaphore contains a protected variable whose integer value, once
initialized, can be accessed and altered by only one of two operations,
P and V.
- A thread calls P (wait) operation when it wants to enter its critical
section, and calls V (signal) operation when it wants to exit its critical
section.
- P -> enterMutualExclusion()
V -> exitMutualExclusion()
- Semaphore must be initialized first. Initialization sets the value of the
protected variable to indicate that no thread is executing in its critical
section.
- Wait (P)
If the protected variable's value is greater than zero, decrement it and
allow the thread to enter its critical section. Otherwise place the
thread in a waiting queue.
- Signal (V)
Indicate that the thread has exited its critical section, and increment
the value of the protected variable. A waiting thread may now enter.
System:
/* Create semaphore and initialize value to 1 */
Semaphore occupied = new Semaphore(1);
startThreads(); /* initialize and launch both threads */
Thread Tx:
void main()
{
while(!done)
{
P(occupied); /* Wait */
/* Critical section code */
V(occupied); /* Signal */
}
}
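java.util.concurrent.Semaphore provides P and V directly as acquire() and release(). A sketch (class name mine) of the Tx loop above, with two threads sharing a binary semaphore:

```java
import java.util.concurrent.Semaphore;

/* Sketch (not from the text): mutual exclusion with a binary semaphore. */
public class SemaphoreMutexDemo
{
    /* Initial value 1 means the critical section starts out free */
    static final Semaphore occupied = new Semaphore(1);
    static int counter = 0; /* shared data protected by the semaphore */
    static final int PER_THREAD = 100000;

    public static void main(String[] args)
    {
        Runnable work = () -> {
            try
            {
                for (int i = 0; i < PER_THREAD; i++)
                {
                    occupied.acquire(); /* P: wait */
                    try { counter++; } /* critical section */
                    finally { occupied.release(); } /* V: signal */
                }
            }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        try { t1.join(); t2.join(); }
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        System.out.println("counter = " + counter); /* always 2 * PER_THREAD */
    }
}
```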
# Thread Synchronization with Semaphores
- Semaphores can be used to synchronize two or more concurrent threads
- A thread can notify other threads that events have occurred by using
the P and V operations
- Producer-Consumer relationship using semaphores
* A variable, sharedValue shared between producer and consumer
* Producer generates value, and assigns it to sharedValue in critical
section
* Consumer is blocked until producer finishes (wait)
* Consumer enters its critical section to read value
* Producer cannot update value until it is consumed (wait)
System:
int sharedValue; /* Variable shared between producer and consumer */
/* Semaphores that synchronize access to shared value */
Semaphore valueProduced = new Semaphore(0);
Semaphore valueConsumed = new Semaphore(1);
Producer Thread:
void main()
{
int nextValueProduced;
while(!done)
{
nextValueProduced = generateTheValue(); /* produce value */
P(valueConsumed); /* wait until value consumed */
sharedValue = nextValueProduced; /* critical section */
V(valueProduced); /* signal when value is produced */
}
}
Consumer Thread:
void main()
{
int nextValueConsumed; /* variable to store value consumed */
while(!done)
{
P(valueProduced); /* wait until value produced */
nextValueConsumed = sharedValue; /* critical section */
V(valueConsumed); /* signal when value is consumed */
processTheValue(nextValueConsumed); /* process the value */
}
}
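The same producer-consumer structure runs unchanged with java.util.concurrent.Semaphore; the sketch below (class name and the choice of the values 1 to 4, echoing the earlier example, are mine) totals 10 on every run because each value is produced and consumed exactly once.

```java
import java.util.concurrent.Semaphore;

/* Sketch (not from the text): producer-consumer with two semaphores. */
public class SemaphoreProducerConsumerDemo
{
    static int sharedValue; /* variable shared between producer and consumer */
    /* Semaphores that synchronize access to sharedValue */
    static final Semaphore valueProduced = new Semaphore(0);
    static final Semaphore valueConsumed = new Semaphore(1);
    static int sum = 0; /* total read by the consumer */

    public static void main(String[] args)
    {
        Thread producer = new Thread(() -> {
            try
            {
                for (int next = 1; next <= 4; next++)
                {
                    valueConsumed.acquire(); /* P: wait until last value consumed */
                    sharedValue = next;      /* critical section */
                    valueProduced.release(); /* V: signal value produced */
                }
            }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        Thread consumer = new Thread(() -> {
            try
            {
                int s = 0;
                for (int i = 1; i <= 4; i++)
                {
                    valueProduced.acquire(); /* P: wait until a value is produced */
                    s += sharedValue;        /* critical section */
                    valueConsumed.release(); /* V: signal value consumed */
                }
                sum = s;
            }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        producer.start(); consumer.start();
        try { producer.join(); consumer.join(); }
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        System.out.println("Consumer read values totaling: " + sum); /* always 10 */
    }
}
```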
# Counting Semaphores
- Semaphores initialized with values greater than one, to control access to
a pool of identical resources
* Decrement the semaphore's counter when taking resource from pool
* Increment the semaphore's counter when returning it to pool
* If no resources are available, thread is blocked until a resource
becomes available
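A counting-semaphore sketch (not from the text; the pool size, thread count, and class name are my choices): eight threads compete for a pool of three resources guarded by Semaphore(3), and the recorded peak of concurrent holders can never exceed the pool size.

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

/* Sketch (not from the text): a counting semaphore guarding a pool of
 * identical resources. */
public class CountingSemaphoreDemo
{
    static final int POOL_SIZE = 3;
    static final Semaphore pool = new Semaphore(POOL_SIZE);
    static final AtomicInteger inUse = new AtomicInteger(0);
    static final AtomicInteger maxInUse = new AtomicInteger(0); /* peak holders */

    static void useResource() throws InterruptedException
    {
        pool.acquire(); /* P: take a resource; blocks when all are taken */
        try
        {
            int now = inUse.incrementAndGet();
            maxInUse.accumulateAndGet(now, Math::max); /* record peak usage */
            Thread.sleep(1); /* simulate using the resource */
        }
        finally
        {
            inUse.decrementAndGet();
            pool.release(); /* V: return the resource to the pool */
        }
    }

    public static void main(String[] args)
    {
        Thread[] threads = new Thread[8]; /* more threads than resources */
        for (int t = 0; t < threads.length; t++)
        {
            threads[t] = new Thread(() -> {
                try { for (int i = 0; i < 5; i++) useResource(); }
                catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            });
            threads[t].start();
        }
        try { for (Thread t : threads) t.join(); }
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        System.out.println("peak concurrent holders: " + maxInUse.get());
    }
}
```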
# Implementing Semaphores
- Semaphores can be implemented at application or kernel level
- Application level: typically implemented by busy waiting
  * inefficient
- Kernel level
  * avoids busy waiting by blocking waiting threads until they are ready
  * may briefly disable interrupts to make the P and V operations atomic