Concurrency and Operating Systems

An operating system can have a very simple design if the computer it controls has just a single user running a single process, the whole of which fits into memory, on a single processor; many design problems are thereby avoided. Such a system, however, is far too simplistic to be useful: it is extremely wasteful of resources and operates far below the machine's potential.

An operating system can be:

and together with hardware can also support:

These features can be achieved by giving each process a slice of time on a processor and a slice of memory in which to run, and by allocating other resources as necessary.

A scheduler allocates time. It decides when a process has used up enough time and should be forced to relinquish a processor. Often processes are forced off a processor before their allotted time is up because they are doing I/O and must wait for the I/O to complete; devices are typically very slow compared to the CPU. Processes can also be forced off as a result of signals, or because higher-priority processes want time.
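The time-slicing described above can be sketched with a minimal round-robin simulation. This is an illustrative model only, not any real kernel's scheduler; the process names and CPU demands are invented for the example, and "preemption" here is just requeueing at the back of the ready queue.

```python
from collections import deque

def round_robin(processes, quantum):
    """Simulate round-robin scheduling.

    processes: dict mapping a (hypothetical) process name to its
    remaining CPU demand in time units; quantum is the time slice.
    Returns the order in which processes finish.
    """
    ready = deque(processes.items())
    finished = []
    while ready:
        name, remaining = ready.popleft()
        if remaining > quantum:
            # Time slice used up: preempt and requeue at the back.
            ready.append((name, remaining - quantum))
        else:
            # Process completes within (or exactly at) its slice.
            finished.append(name)
    return finished

# Example: P1 needs 3 units, P2 needs 1, P3 needs 5; quantum of 2.
print(round_robin({"P1": 3, "P2": 1, "P3": 5}, 2))  # ['P2', 'P1', 'P3']
```

A real scheduler would also handle the cases mentioned above: a process blocking early on I/O, or being displaced by a signal or a higher-priority process.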

A pager moves pages (the contents of blocks of memory) between memory and disk so that a process can appear to have a large address space, independent of all other processes (but shared where desired) and much larger than physical memory. A swapper moves whole processes to and from memory, by moving the process's pages and the data the kernel keeps for that process.
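One decision a pager must make is which resident page to evict when memory is full. A common policy is least-recently-used (LRU); the sketch below counts page faults under LRU for a given number of physical frames. It is a toy model, assuming a simple list of page references rather than real hardware page tables.

```python
from collections import OrderedDict

def lru_faults(reference_string, frames):
    """Count page faults under LRU replacement with `frames`
    physical frames. A minimal sketch of the policy only."""
    resident = OrderedDict()  # pages in memory, least recently used first
    faults = 0
    for page in reference_string:
        if page in resident:
            resident.move_to_end(page)        # mark as most recently used
        else:
            faults += 1                       # page not resident: fault
            if len(resident) == frames:
                resident.popitem(last=False)  # evict least recently used
            resident[page] = True
    return faults

# Example: 3 frames and a classic reference string.
print(lru_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # 10
```

Swapping an entire process out is the same idea taken to its extreme: all of the process's resident pages, plus the kernel's per-process data, move to disk at once.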

Concurrency
Concurrency cannot be avoided because:

Whenever concurrency is involved, certain issues, discussed below, arise. An understanding of these issues is important when:

Concurrency issues (expressed using processes):

Various kinds of locks can be used to avoid these problems.
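A small demonstration of why locks are needed: when several threads perform the read-modify-write `counter += 1` on shared data, updates can be lost unless the operation is made atomic. The sketch below uses Python's `threading.Lock`; the counter and thread counts are chosen arbitrarily for the example.

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    """Increment the shared counter n times, holding a lock so the
    read-modify-write is atomic with respect to other threads."""
    global counter
    for _ in range(n):
        with lock:          # without this, concurrent updates can be lost
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000: no updates lost while the lock is held
```

Other kinds of locks (read-write locks, semaphores, condition variables) refine this basic idea for the different concurrency problems listed above.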

Last update: 2001 April 5