Explanation of concurrent programming. Part 1: theory

I’ll start this introduction to concurrent programming with an example. Imagine a real-time multiplayer strategy game. Lots of things are going on simultaneously: your units move toward their destinations, AI-controlled units move as well, animated scenery is drawn, the count of harvested resources grows, and the player’s mouse pointer moves across the screen. How is such a program written? One traditional approach to structuring it is the “main loop”: the program first checks whether the mouse has moved and updates the pointer, then the AI analyzes the situation and makes decisions, then all units move according to their orders, then the screen is updated, and then the loop starts again. Several tasks are executed sequentially in a loop. The loop itself is quite simple; the main difficulty is implementing the tasks themselves. They must be short, because otherwise it becomes noticeable that the actions happen sequentially. All tasks are coded along the same pattern: restore a context (like the last mouse position, the last drawn frame of a unit, the last amount of a resource), perform an action, which usually changes the context, then store the context back. Temporary data generated during the execution of a task is simply thrown away. Writing such games is not easy, because the developer must remember what belongs in the context and how to store and restore it. Making a task too short is also bad: the context grows to hold data that would otherwise be temporary. For example, it is a good idea to draw a whole unit sprite in one iteration and just increment the unit’s frame number; otherwise it would be necessary to remember the screen position of the last drawn pixel.
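
Here is a minimal sketch of that main-loop pattern in Go (the language I’ll use for examples in this post). All the task and field names are hypothetical, invented for illustration:

```go
package main

// gameContext holds everything each task must restore on entry
// and store back on exit; temporary data lives only inside a task.
type gameContext struct {
	mouseX, mouseY int // last known mouse position
	unitFrame      int // last drawn animation frame
	resources      int // harvested resources so far
}

func pollMouse(ctx *gameContext)  { /* read input, update ctx.mouseX/Y */ }
func runAI(ctx *gameContext)      { /* analyze situation, issue orders */ }
func moveUnits(ctx *gameContext)  { /* advance every unit a small step */ }
func drawScreen(ctx *gameContext) { ctx.unitFrame++ /* draw one full frame */ }

func main() {
	ctx := &gameContext{}
	for { // the "main loop": short tasks executed sequentially, forever
		pollMouse(ctx)
		runAI(ctx)
		moveUnits(ctx)
		drawScreen(ctx)
	}
}
```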

Concurrent programming is an attempt to make the storing and restoring of context automatic and transparent. Instead of a specific context for each task, a low-level “execution context” is introduced. The main loop is replaced with a “scheduler”, which works on a time basis and switches tasks, so each task gets a “slice” of CPU time for execution. The main benefit of concurrent programming is the transparency of context switching: the code becomes much simpler.
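
For contrast with the main loop above, here is a sketch of the same idea done concurrently: each task is written as a plain sequential loop, and the Go runtime’s scheduler interleaves them with no manual context save/restore:

```go
package main

import (
	"fmt"
	"time"
)

// Each task is an ordinary sequential loop; the scheduler switches
// between tasks transparently, giving each a slice of CPU time.
func task(name string) {
	for i := 0; i < 3; i++ {
		fmt.Println(name, "step", i)
		time.Sleep(10 * time.Millisecond) // pretend to do some work
	}
}

func main() {
	go task("AI")      // run concurrently
	go task("drawing") // run concurrently
	time.Sleep(100 * time.Millisecond) // crude wait; real code would synchronize
}
```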

Another benefit of concurrent programming is that it works transparently on several CPUs. Tasks are really executed in parallel, and automatic context switching allows a task to continue its execution on another CPU.

The introduction of concurrent programming raised several interesting issues. One of them is the consistency of common data. In sequential programming a task is continuous and cannot be interrupted, so it may do anything as long as it leaves shared data in a consistent state upon exit. In concurrent programming, a task can be interrupted at any time, possibly leaving data in a broken state.
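
A sketch of the problem: two tasks increment a shared counter with no protection. The read-modify-write can be interrupted between the read and the write, so updates get lost (running this with Go’s -race flag reports the data race):

```go
package main

import (
	"fmt"
	"sync"
)

var counter int // shared data, deliberately unprotected

func main() {
	var wg sync.WaitGroup
	for t := 0; t < 2; t++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := 0; i < 100000; i++ {
				counter++ // read-modify-write: can be interrupted in the middle
			}
		}()
	}
	wg.Wait()
	fmt.Println(counter) // usually less than 200000: some updates were lost
}
```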

Let’s stop at this point and summarize what has been said so far. Concurrent programming is a solution for a certain kind of program. It works by moving the context outside the program’s scope. If the program is compiled into machine language, then the context lives at the hardware level. A concurrent program works as if it had several sequential tasks executed in parallel by time-slicing a single CPU. There is an entity called a “thread”, which is the sequential executor of some task. At this point a “thread” may look like a completely artificial entity, but there are reasons for its existence, and they will be revealed soon. Some tasks are completely independent, some tasks work together, so there must be means for task isolation and for task interaction. And there is an entity called the “scheduler”, which chooses what task (or tasks, on multi-CPU machines) should be executed at each moment.

If tasks are completely isolated, then we simply have a single machine that works as several independent machines. For example, if we launch several Internet server daemons (for the FTP, HTTP, and e-mail protocols), then we have a single machine that is totally equivalent to several machines, each running a single daemon. Absolutely nothing needs to be done to convert a simple sequential task into an “isolated task”. However, isolated tasks can still take some actions to benefit from the concurrent environment.

Tasks are isolated from each other, but they can interact with the scheduler. For example, if a task is sure there will be no work for it for some time, it can ask the scheduler to put it to sleep and give the CPU to other tasks. Another possibility is to modify a task’s “priority”, which defines how much CPU time this task should receive compared to other tasks. All these actions use the “thread” entity to identify a particular task. So a “thread” is an identification entity used when a task interacts with the scheduler; it lets the scheduler work with tasks as abstractions.
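
As a sketch, Go exposes both kinds of interaction: time.Sleep asks the scheduler to suspend the calling task, and runtime.Gosched voluntarily yields the rest of the current time slice. (Go doesn’t expose per-goroutine priorities, so that part has no direct equivalent here.)

```go
package main

import (
	"fmt"
	"runtime"
	"time"
)

func main() {
	go func() {
		for i := 0; i < 5; i++ {
			fmt.Println("background work", i)
			runtime.Gosched() // voluntarily yield the rest of the time slice
		}
	}()
	// This task knows it has nothing to do for a second,
	// so it asks the scheduler to put it to sleep.
	time.Sleep(time.Second)
	fmt.Println("woke up")
}
```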

Now let’s discuss interaction between tasks. There are two well-known approaches. The first is to keep all of a task’s data isolated from other tasks and provide a special facility for interaction between tasks. For example, in a computer game the task that tracks the mouse state can detect that the mouse has moved, compute the rectangle between the old and the new position, and signal the task that paints the screen to redraw this rectangle, because the mouse pointer must be drawn in a different place. This approach is called “message passing”, because tasks pass messages to each other. When a task sends a message, the delivery facility copies all the supplied data from the local memory of the sending task into an internal buffer. The receiving task can then copy the data from the facility’s buffer into its own local memory.
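
Go’s channels are one concrete form of such a delivery facility. Below is a sketch of the mouse/redraw scenario; the message type and task names are invented for illustration. Note that sending a struct over a channel copies its value, which matches the copy-in/copy-out semantics described above.

```go
package main

import "fmt"

// redrawRequest is the message: a rectangle that needs repainting.
type redrawRequest struct {
	x, y, w, h int
}

// mouseTracker detects mouse movement and sends redraw requests.
func mouseTracker(out chan<- redrawRequest) {
	out <- redrawRequest{x: 10, y: 20, w: 16, h: 16} // value is copied on send
	close(out)                                       // no more messages
}

// screenPainter receives requests and repaints the given rectangles.
func screenPainter(in <-chan redrawRequest) {
	for r := range in { // blocks until a message arrives
		fmt.Printf("redraw rectangle at (%d,%d), size %dx%d\n", r.x, r.y, r.w, r.h)
	}
}

func main() {
	ch := make(chan redrawRequest)
	go mouseTracker(ch)
	screenPainter(ch) // runs until the channel is closed
}
```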

This approach shows that task interaction adds a new entity to the whole picture: besides tasks and the scheduler, we now need a message delivery facility. But must tasks and the scheduler change in order to make interaction possible? No, it is not necessary, but it can be very useful. For example, instead of having the receiving task poll the facility in a loop waiting for a message, the scheduler can put the task to sleep by not scheduling it for execution, so the CPU time can be used by other tasks. Programmers say that the “non-blocking” receive operation is turned into a “blocking” one, which avoids “spinning” around it.
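
In Go terms, the difference looks like this sketch: a select with a default clause is a non-blocking receive (the spinning version), while a plain receive blocks and lets the scheduler run other tasks in the meantime.

```go
package main

import (
	"fmt"
	"time"
)

func produce(ch chan<- string) {
	time.Sleep(50 * time.Millisecond)
	ch <- "message"
}

func main() {
	// Non-blocking receive: the task spins, burning CPU until data arrives.
	ch1 := make(chan string)
	go produce(ch1)
poll:
	for {
		select {
		case m := <-ch1:
			fmt.Println("polled:", m)
			break poll
		default: // nothing yet; loop and try again (this is the "spinning")
		}
	}

	// Blocking receive: the scheduler puts this task to sleep until
	// a message arrives, so the CPU is free for other tasks.
	ch2 := make(chan string)
	go produce(ch2)
	fmt.Println("blocked, then got:", <-ch2)
}
```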

This approach solves the data integrity problem by copying the data twice. Once the data has been copied into the internal buffer, nobody can change it, and the buffer becomes available to the receiver only when the “send” operation has completed.

Message passing is somewhat similar to the manual multi-tasking described at the beginning: instead of context store/restore there are send/receive operations. However, this approach doesn’t restrict “task granularity”: tasks can send messages whenever they want, which is very convenient.

Now let’s discuss the second approach to task interaction, called “shared memory”. Tasks still have some local data that is isolated from access by other tasks, and there is some common data that can be accessed by all tasks simultaneously. For example, to signal the need for a screen refresh, one task can set a common flag, and the screen painter task will detect this and redraw the screen.
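
A sketch of the shared-flag example, with invented task and variable names. It uses an atomic boolean so that the unprotected-access problem discussed below doesn’t bite us here:

```go
package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

var needsRedraw atomic.Bool // common data, visible to all tasks

func mouseTracker() {
	time.Sleep(20 * time.Millisecond) // pretend the mouse moved
	needsRedraw.Store(true)           // set the common flag
}

func screenPainter(done chan<- struct{}) {
	for !needsRedraw.Load() { // watch the flag
		time.Sleep(time.Millisecond)
	}
	fmt.Println("redrawing the screen")
	done <- struct{}{}
}

func main() {
	done := make(chan struct{})
	go mouseTracker()
	go screenPainter(done)
	<-done
}
```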

Let’s investigate whether this approach requires a special facility or not. First, a special facility may (or may not) be needed to define which memory is shared and which is not. Second, access to shared memory may happen through some facility. Both statements use the word “may”, meaning it is an implementation detail, not something required by the theory.

The reason for possible data inconsistency is the uncertain ordering of read and write operations. A writer task can be interrupted in the middle of a “write” operation, and a reader task will then try to read the shared data. So the problem should be solved by making access to shared data non-interruptible. Two solutions exist.

The first solution is to route all shared memory access through a special facility. This facility interacts with the scheduler so that all access operations are non-interruptible. Obviously this approach is very similar to message passing: instead of a “receive” operation that either blocks or returns nothing when there are no messages, there is a “read” operation that always returns the last written (possibly old) value of the shared data.
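
A sketch of such a facility: a guarded cell whose value can only be reached through Read and Write methods, with a mutex standing in for the interaction with the scheduler. The Guarded type is invented for illustration.

```go
package main

import (
	"fmt"
	"sync"
)

// Guarded is a hypothetical facility that mediates every access to a
// shared value. The mutex makes each access effectively non-interruptible.
type Guarded[T any] struct {
	mu    sync.Mutex
	value T
}

// Write stores a new value; no task can observe a half-written value.
func (g *Guarded[T]) Write(v T) {
	g.mu.Lock()
	defer g.mu.Unlock()
	g.value = v
}

// Read always returns the last fully written (possibly old) value.
func (g *Guarded[T]) Read() T {
	g.mu.Lock()
	defer g.mu.Unlock()
	return g.value
}

func main() {
	var score Guarded[int]
	var wg sync.WaitGroup
	wg.Add(1)
	go func() { defer wg.Done(); score.Write(42) }()
	wg.Wait()
	fmt.Println(score.Read())
}
```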

The second solution is a facility that simply makes a task non-interruptible, regardless of whether it accesses shared data or not. The part of a task during which it cannot be interrupted is called a “critical section”. The simplest approach is for this facility just to ask the scheduler, but on multi-CPU machines that means some CPUs will waste their time. This problem is solved by having several shared areas and naming a specific shared area when entering a non-interruptible section, so other CPUs can execute tasks that don’t access this shared area. Since dividing a task into interruptible and non-interruptible parts is the developer’s responsibility, this solution lacks a unified mechanism for enforcing data integrity.
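
In a sketch with mutexes (one lock per shared area, as described above), the code between Lock and Unlock is the critical section, and it is up to the developer to remember which lock guards which data:

```go
package main

import "sync"

// One lock per shared area: tasks touching unitData don't block
// tasks touching score, so other CPUs aren't wasted.
var (
	unitsMu  sync.Mutex
	unitData []int

	scoreMu sync.Mutex
	score   int
)

func moveUnit(i, dx int) {
	unitsMu.Lock() // enter the critical section for the "units" area
	unitData[i] += dx
	unitsMu.Unlock() // leave the critical section
}

func addScore(points int) {
	scoreMu.Lock() // independent critical section: only blocks score access
	score += points
	scoreMu.Unlock()
}

func main() {
	unitData = make([]int, 4)
	var wg sync.WaitGroup
	wg.Add(2)
	go func() { defer wg.Done(); moveUnit(0, 5) }()
	go func() { defer wg.Done(); addScore(10) }()
	wg.Wait()
}
```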

That’s all for today. Let’s summarize. Concurrent programming is an approach to solving the following tasks: emulating several sequential machines on one through “time sharing” of the CPU, simplifying the internal structure of a certain kind of program, and taking advantage of multi-CPU machines. Concurrent programming introduces the following entities: tasks, a scheduler, and interaction facilities. Task interaction can be done using one of three approaches: message passing, guarded shared memory, or shared memory with non-interruptible code sections. Facilities for task interaction usually (but not always) interact internally with the scheduler.
