
Explanation of concurrent programming. Part 3: implementation

So far we have covered some of the theory of concurrent programming and given a few examples of it. Now let’s discuss how it works inside.

We’ll start with the scheduler. There are two common approaches: cooperative tasks, which give away control themselves, and pre-emptive task switching, where the scheduler has the power to stop execution of any task at any moment. For an interpreted language, a scheduler is easier to implement as a part of the interpreter: it can simply count instructions and switch to another task when the number of processed instructions reaches some threshold. Making the scheduler a part of the interpreter means that the scheduler works on a lower abstraction level (and on a higher capability level) than the logic of the tasks. For tasks which are executed as machine code, the scheduler is also implemented in machine code, because there is no lower abstraction level available. When the scheduler and the tasks work at the same abstraction level, switching happens in the following way: at some point execution is interrupted and control is transferred to the scheduler, and the scheduler runs another task.
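The cooperative variant can be sketched in a few lines. Here is a minimal, hypothetical round-robin scheduler (the names `CooperativeTask` and `RoundRobinScheduler` are made up for illustration): each task runs one small slice of work in `step()` and then voluntarily returns control to the scheduler loop, which puts it at the back of the queue.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// A task cooperates by doing only a slice of work per call.
interface CooperativeTask {
    boolean step(); // do one slice of work; return true if more work remains
}

class RoundRobinScheduler {
    private final Queue<CooperativeTask> queue = new ArrayDeque<>();

    void add(CooperativeTask task) {
        queue.add(task);
    }

    void run() {
        while (!queue.isEmpty()) {
            CooperativeTask task = queue.poll();
            if (task.step()) {
                queue.add(task); // the task yielded: put it back in line
            }
        }
    }
}
```

With two tasks that each need two slices, the scheduler interleaves them: first a slice of task A, then a slice of task B, then A again, then B. A pre-emptive scheduler achieves the same interleaving without the tasks’ cooperation, which is what the interrupt mechanism below is for.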

Usually this is achieved using a built-in CPU facility called an “interrupt”. Originally this facility was designed for on-demand processing of events on external devices: an interrupt takes place when a device sends an electrical signal to the CPU. Since one of those devices can be a timer, the interrupt facility can be used to interrupt a task. So even though the scheduler is programmed in machine code, the interrupt facility is part of the CPU, and therefore sits on a lower abstraction level.

A low-level scheduler for CPU-native tasks is usually a part of the OS. The main reason is that OSes already use the CPU’s interrupt facility for I/O operations, so giving a user program the ability to use (and misuse) the interrupt facility would be dangerous. Another reason is that OSes follow a design philosophy that all “system-level” code should be a part of the OS. As a side effect of including a scheduler, the OS itself takes advantage of multi-tasking by splitting itself into several tasks.

Now let’s talk about task interaction facilities. Message-passing facilities are usually a part of the OS, because message passing requires memory manipulation, which is usually done by the OS. However, this is not necessary.

We already explained that if tasks use unprotected shared memory, they should use some facility to ensure that the shared data stay consistent.

Some shared objects of small size can be implemented without any facility at all, by using “atomic” CPU operations. To understand the importance of atomic operations, let’s use the classic “array traversal” code as an example:

int[] a;   // shared
int i = 0; // shared
for (i = 0; i < a.length; i++) { /* do something with a[i] */ }

This code works well in a single-tasking environment, but if we want to run several instances of this loop simultaneously, we get in trouble: nothing protects several tasks from processing the same array element several times. Let’s try to modify the code:

int[] a;   // shared
int i = 0; // shared
for (i = 0; i++ < a.length; ) { /* do something with a[i-1] */ }

Here we have tried to increase the index right after the check, so that another instance of the task will process the next element. However, this will not always work as expected, because the check and the increase are two separate operations. The following sequence is possible: task A checks i, task B checks i, task A increases i, task B increases i. Now one element is processed twice while another is skipped. To solve this problem, we need an atomic “check and increase” operation. Modern CPUs provide such operations. Using them is the recommended way to protect small objects, because they are fast and don’t depend on the OS.
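To make this concrete, here is a sketch of the traversal using Java’s `AtomicInteger.getAndIncrement`, which maps to such an atomic CPU operation. The class and method names (`AtomicTraversal`, `parallelSum`) are made up for illustration; “do something with a[i]” is stood in by summing the elements.

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;

class AtomicTraversal {
    static long parallelSum(int[] a, int workers) throws InterruptedException {
        AtomicInteger next = new AtomicInteger(0); // shared index
        AtomicLong total = new AtomicLong(0);      // shared accumulator
        Thread[] threads = new Thread[workers];
        for (int w = 0; w < workers; w++) {
            threads[w] = new Thread(() -> {
                while (true) {
                    // Claim an index atomically: no two tasks can ever
                    // obtain the same value of i.
                    int i = next.getAndIncrement();
                    if (i >= a.length) break;
                    total.addAndGet(a[i]); // "do something with a[i]"
                }
            });
            threads[w].start();
        }
        for (Thread t : threads) t.join();
        return total.get();
    }
}
```

Because the fetch and the increment happen as one indivisible step, the problematic interleaving from the paragraph above (two tasks both reading the old index) simply cannot occur.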

Achieving atomic behaviour for larger objects is easy once we have atomic operations for small ones: we just need a small “flag” for each large object. A task which modifies the object should perform these four steps:

  1. Check if the flag is in the “ready” state. Repeat until this condition is satisfied.
  2. Switch the flag into the “object is being modified” state. Note that the check and the switch must be performed as a single atomic operation.
  3. Read or modify the large object as needed.
  4. Finally, put the flag back into the “ready” state.

This flag is usually called a “lock”, because it controls access to some other object. Its states are called “locked” and “unlocked”, and the operations are called “try to lock” and “unlock”.
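The four steps above can be sketched as a spinlock built on an atomic compare-and-set, which performs the “check and switch as one operation” from step 2. This is a minimal illustration, not a production lock; the class name `SpinLock` is made up.

```java
import java.util.concurrent.atomic.AtomicBoolean;

class SpinLock {
    // false = "ready" (unlocked), true = "object is being modified"
    private final AtomicBoolean locked = new AtomicBoolean(false);

    void lock() {
        // Steps 1-2: atomically check for "ready" and switch to locked;
        // repeat (spin) until the switch succeeds.
        while (!locked.compareAndSet(false, true)) {
            // another task holds the lock; keep trying
        }
    }

    void unlock() {
        locked.set(false); // step 4: back to "ready"
    }
}
```

Step 3, the actual read or modification of the large object, goes between `lock()` and `unlock()` in the calling task.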

In many multi-tasking implementations locks are used for ad-hoc critical sections. If a task is unable to lock an object because it is locked by another task, it “spins around” the lock, repeatedly trying to lock it. So the task which has locked the object is virtually uninterruptible: the scheduler switches between tasks as usual, but the other tasks effectively do nothing, they just spin around the lock.

To summarize the above: unprotected shared memory can be used without any interaction with the scheduler, and therefore without any OS-related facility. But sometimes using the OS can be beneficial.

Since spinning around a lock is just a waste of time, it is a good idea to put all waiting tasks to sleep and wake them up when the object is unlocked. The usual approach is to keep a list of pending tasks for each lock. If the object is locked, the current task is added to the list, and the scheduler is asked to put it to sleep. When the lock is released, the scheduler is asked to wake up all pending tasks. Since interaction with the scheduler is necessary, such “blocking locks” have to be implemented as part of the OS.
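In Java this scheme is already available through the built-in monitor of every object: `wait()` asks the runtime (and ultimately the OS scheduler) to put the current task to sleep on the lock’s pending list, and `notifyAll()` wakes all pending tasks. Here is a sketch of a blocking lock built on that facility; the class name `BlockingLock` is made up for illustration.

```java
class BlockingLock {
    private boolean locked = false;

    synchronized void lock() throws InterruptedException {
        while (locked) {
            wait(); // sleep until some task unlocks; no CPU time is wasted
        }
        locked = true;
    }

    synchronized void unlock() {
        locked = false;
        notifyAll(); // wake up all pending tasks; each re-checks the flag
    }
}
```

Note the `while` loop around `wait()`: since all pending tasks are woken up, only one will succeed in re-taking the lock, and the others must go back to sleep.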

The ability to wake up a particular sleeping task is useful regardless of locks, so it is also available through the OS.

Let’s summarize it all. The cornerstone of a multi-tasking implementation is the scheduler, which is usually a part of the OS. The simplest form of inter-task communication is unprotected shared memory. Data consistency can be achieved with atomic operations or spinlocks, without dealing with the scheduler. If objects stay locked for a long time, it is a good idea to use blocking locks, which are available from the OS.
