Operating Systems Basics

    An operating system is a vital component of almost every computer system.

    Users want to run application software on their hardware. The operating system acts as a mediating layer between the applications and the hardware. This brings us to our first problem: how exactly is an operating system defined?

    Introduction

    What is an operating system?

    In general, there are two approaches to defining what an operating system (OS) is.

    • The top-down approach
    • The bottom-up approach

    The Top-Down Approach

    The first approach is that of a software developer. An average software developer does not want to be concerned with hardware intricacies, such as the rotational speed or the position of the read head of a hard drive. And what about floppy disks, which function entirely differently? Here the operating system comes to the rescue and provides an easy-to-understand interface that abstracts the physical hardware. Eventually, the programmer has access to a set of convenient functions that allow read, write and execute access to files on the physical drive. In a sense, we can say that the operating system provides the programmer/user with a virtual machine to work on.

    The Bottom-Up Approach

    The second approach is that of a hardware manufacturer. The operating system is the layer that manages all of the available resources. These include, but are not limited to, the processor, memory, timers, network interfaces, printers and a huge variety of other peripherals. Ideally, the OS should manage these resources in the most ordered, controlled and efficient way possible. What exactly it manages, and how, is covered in a later section.

    Design Goals of an OS

    Let’s first summarize the purposes of an operating system:

    • providing reasonable programming interfaces for programmers
    • process management
    • hardware organization
    • resource management

    Criteria to evaluate the quality of an operating system:

    • efficiency (memory usage and time efficiency)
    • reliability
    • maintainability
    • portability

    Thus, in order to successfully realize a high-quality operating system, the following design goals are defined:

    • layered structure (same reasoning as for networks)
    • component interfaces universal and complete
    • insensitive to machine and user errors
    • consequences of errors local
    • no loss of persistently saved data
    • compatibility of adjacent versions
    • adaptable to environmental changes (new hardware, new programming languages)
    • adaptable to different hardware and software choices

    OS Types

    Protection rings
    One way to increase system stability and security is to prevent “misleading user behavior”, which includes both erroneous user behavior and attacks from the outside. Without delving too deep into the matter, this means that different privilege levels, so-called rings, are introduced. The innermost ring (ring 0) is the most privileged ring, the kernel ring. The farther a ring is away from the center, the less privileged it is.

    Simplifying this model, we have two privilege levels: one is the supervisor or kernel mode, the other is the user mode. The supervisor mode is a hardware-mediated flag which is set by system-level code. Programs running in supervisor mode are trusted never to fail, since a failure there could crash the whole system. Programs running in user mode are less privileged, but can still access system functions indirectly through system calls. These lines of trusted code run in supervisor mode and then return the result to the program, which continues to run in user mode. This way both the security and the stability of the system can be increased.
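    To make this tangible, here is a minimal sketch in Python (used for all examples in these notes) of what a system call looks like from the user side, assuming a Unix-like system where os.write() wraps the write() system call:

        import os

        # This process runs in user mode. os.write() is a thin wrapper
        # around the write() system call: executing it traps into the
        # kernel, which performs the privileged I/O in supervisor mode
        # and then hands control (and a result) back to user mode.
        bytes_written = os.write(1, b"hello from user mode\n")  # fd 1 = stdout

        # The return value comes from trusted kernel code: the number
        # of bytes actually written.
        print(bytes_written)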

    OS structures

    Monolithic operating systems
    Monolithic operating systems have no real structure. They consist of a set of procedures that can call each other, and these procedures are linked into one big program. As a result, there is no information hiding. However, there is still some structure in monolithic operating systems. In order to call system functions, programs must put specific pre-defined parameters into specific places, such as the stack or processor registers. Then the program makes a system call (also called a supervisor call) with the TRAP (assembly language) instruction. When TRAP is executed, the desired service routine is looked up in a dispatch table. Eventually, the service routine returns a value and hands control back to the user program. The combination of high-level languages and compilers provides convenient interfaces, though, so that as a programmer you do not really need to worry about this.
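    The dispatch mechanism can be sketched as follows; the system call numbers and service routines here are invented purely for illustration (a real kernel indexes a table of routine addresses with a number placed in a register before the TRAP):

        # Toy model of a system call dispatch table; the numbers and
        # service routines are invented for illustration only.

        def sys_read(fd, count):
            return f"read {count} bytes from fd {fd}"

        def sys_write(fd, data):
            return f"wrote {len(data)} bytes to fd {fd}"

        # Maps system call numbers to service routines.
        DISPATCH_TABLE = {0: sys_read, 1: sys_write}

        def trap(syscall_number, *args):
            # Stand-in for the TRAP instruction: look up the requested
            # service routine, run it "in supervisor mode", and hand
            # the result back to the user program.
            service_routine = DISPATCH_TABLE[syscall_number]
            return service_routine(*args)

        print(trap(1, 1, b"hello"))  # wrote 5 bytes to fd 1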

    Layered systems
    A more abstract, generalized version of the aforementioned monolithic approach is the layered systems approach. Similar to the OSI reference model for networks, a hierarchy of functional levels is established. For example, if the lowest level is responsible for scheduling the processor, the levels above it can act as if they were the exclusive users of the processor.

    Client-Server system
    Modern operating systems take a slightly different approach. User programs run as client processes in user mode. These client processes send requests to server processes, which also run in user mode and return the desired results. In this model, the operating system is solely responsible for the communication between clients and servers. Advantages include the reduced scope of errors: an error in the file system server does not affect the memory server. Another advantage is the easy adaptability to distributed systems. A client does not need to know whether its message is processed on a local or a remote server.
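    The message-passing idea can be reduced to a toy model in which the “operating system” is nothing but two message queues; the file server and the message format below are invented for illustration:

        import queue
        import threading

        # The OS, reduced to its communication role: two message queues.
        requests, replies = queue.Queue(), queue.Queue()

        def file_server():
            # Server process (also running in user mode): serve one
            # request, then stop.
            op, path = requests.get()
            replies.put(f"{op} {path}: ok")

        threading.Thread(target=file_server).start()

        # Client process: it neither knows nor cares whether the server
        # is local or remote; it only sends and receives messages.
        requests.put(("read", "/tmp/example.txt"))
        print(replies.get())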

    Resource and Process Management

    Motivation

    The purpose of resource and process management should be quite obvious at this point. The most basic goal is to make multitasking possible. Basically, this means that you run a process for a short amount of time and then move on to the next one. This creates the illusion of multiple processes running at the same time. Besides enabling multitasking, you also want robustness, high efficiency, approximate fairness and the avoidance of errors (which can otherwise be costly).

    Robustness in this context means that you do not want your system to freeze if a single task fails. Our keywords here are preemption and (the avoidance and prevention – yes, there is a nuance here – of) deadlocks. Both topics are discussed in a later section.

    Each time we switch the context – which is a fancy way of saying hopping between the processes that currently get processing time on the processor – we inevitably lose a small amount of time, which is required to conduct said context switch. Obviously, we can avoid these time losses by reducing the number of context switches, thereby increasing the overall efficiency – or not? It depends on how you define efficiency. This is discussed in the scheduling algorithms section.
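    To put rough numbers on this: if each process runs for a time slice $q$ before a context switch costing $s$, the fraction of processor time spent on useful work is $q / (q + s)$. With, say, a slice of $q = 4$ ms and a switch cost of $s = 1$ ms, only 80% of the time is productive; quadrupling the slice to $q = 16$ ms raises this to about 94%, at the price of longer waiting times for everyone else.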

    If you did no context switching at all, you would not be able to do any multitasking. If you switch between processes at a very low rate or prefer specific tasks over others, that might not be fair. However, you might want to prioritize some processes over others, e.g. if someone is willing to pay more for a higher priority. On the other hand, you would not want your free customers to “starve”. The intricacies of this subject are discussed in the section about priority-based scheduling algorithms.

    Resources and Their Classification

    A resource is virtually any hardware or software “component” that is needed to complete an action. Example: If you want to print a digital file, the printer would be a hardware resource, whereas the file would be a software resource.

    Here is an incomplete table of hardware and software resources:

    Overview: Hardware and Software Resources

    • hardware resources: CPU, GPU, RAM, printer and other devices
    • software resources: compiler, semaphores, process control blocks (PCB), files

    These resources can be further classified as demonstrated next:

    Resource Classification

    • (non-)reusable resources: reusable, e.g. memory segments and datasets; non-reusable, e.g. printing paper and write-once ROM
    • (in)alienable resources: alienable, e.g. processors, memory and I/O devices; inalienable, e.g. printers
    • (non-)exclusive (access) resources: exclusive, e.g. printers; non-exclusive, e.g. read access to files

    Explanation

    reusable resource: resource that can be used more than once
    alienable resource: resource that can be taken away from the process
    exclusive resource: resource that can only be used by one process at a time

    Resource Scheduling

    Resource Scheduling Goals

    The most basic goal that comes to mind is the avoidance of conflicts. For instance, you probably would not want multiple processes to access the same file and write to the same lines in unpredictable ways. Here is a more or less comprehensive list:

    • avoidance of conflicts
    • correct execution in limited time
    • consistently high utilization of resources
    • high performance (jobs/time unit)
    • low dwell time (time units)
    • high reliability/availability (productive time/total time)

    Resource Scheduling Responsibilities

    • manage existing resources
    • bookkeep occupied resources
    • queue resource requests
    • check access permissions to resources
    • select and coordinate resource scheduling strategy
    • allot resources
    • withdraw resources
    • release unused resources

    Scheduling Algorithms

    Alienable resources, such as the prominent CPU, can be scheduled in such a way that different processes each get some attention for a defined period of time, even before the previously active one has finished.

    There are different scheduling algorithms available to realize the different goals mentioned earlier.

    Scheduling algorithms

    • FIFO/FCFS (first in, first out / first come, first served): efficient (jobs/time): no; efficient (dwell time): no; fair: partially; reliable: yes. Easiest to implement; fair only in the FCFS sense.
    • SJF (shortest job first): efficient (jobs/time): yes; efficient (dwell time): yes; fair: no; reliable: yes.
    • RR (Round Robin) with “little” $\delta$: efficient (jobs/time): no; efficient (dwell time): no; fair: yes; reliable: yes. Very inefficient, due to the large overhead compared to the processing time.
    • RR (Round Robin) with “big” $\delta$: efficient (jobs/time): no; efficient (dwell time): no; fair: yes; reliable: yes. Smaller overhead compared to RR with a “little” $\delta$.
    • priority based: efficient (jobs/time): no; efficient (dwell time): no; fair: no; reliable: yes.
    • priority weight based: efficient (jobs/time): no; efficient (dwell time): no; fair: partially; reliable: yes. Makes paying for faster processing time possible; the degree of fairness depends on the weights.

    Explanation

    FIFO/FCFS means that the job that comes in first is processed first. This is a queue concept, which is fair insofar as whoever comes first is served first. However, if you define fairness such that every process waiting for processing time gets served every once in a while – no “starvation” – then this algorithm is very unfair. Implementation is very easy: all that is needed is a queue.
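    A minimal sketch of that queue (the job names are invented):

        from collections import deque

        # FIFO/FCFS: jobs are processed strictly in arrival order.
        ready_queue = deque(["job_a", "job_b", "job_c"])  # arrival order

        while ready_queue:
            job = ready_queue.popleft()  # whoever came first is served first
            print(f"running {job} to completion")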

    SJF will always process the shortest job available, but it will not preempt a running job when a shorter one arrives. This is very efficient in terms of jobs per time. If your jobs are all equally important, this might be a good choice. Since it also optimizes (reduces) the number of jobs waiting for processing time, it is also efficient in terms of a low dwell time.
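    A non-preemptive SJF sketch, assuming the job lengths are known in advance (the usual caveat with SJF) and using invented jobs and durations:

        # Non-preemptive shortest job first: always pick the shortest
        # waiting job; a running job is never interrupted.
        jobs = [("backup", 90), ("mail", 5), ("compile", 30)]

        clock = 0
        while jobs:
            shortest = min(jobs, key=lambda job: job[1])
            jobs.remove(shortest)
            name, duration = shortest
            clock += duration  # the job runs to completion
            print(f"{name} finished at t={clock}")

    With all three jobs arriving at $t = 0$, they finish at $t = 5$, $t = 35$ and $t = 125$, for an average dwell time of 55; in FIFO order (backup first) the average would be about 103.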

    The Round Robin algorithm is fair insofar as it periodically switches between processes. As mentioned above, this is called context switching. Each time the context is switched, there is a small loss of time. The period $\delta$ determines after how many clock cycles / how much time these switches take place. A smaller $\delta$ means more context switches, thus dropping performance; on the other hand, it is “more fair” due to lower “starvation times”.
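    A minimal Round Robin sketch with a fixed $\delta$ and a simulated context switch cost (all numbers are invented):

        from collections import deque

        # Round Robin: each process runs for at most delta time units,
        # is then preempted and re-queued; every switch costs overhead.
        delta, switch_cost = 4, 1
        ready = deque([["p1", 10], ["p2", 6]])  # [name, remaining time]

        clock = 0
        while ready:
            process = ready.popleft()
            run = min(delta, process[1])
            clock += run + switch_cost  # useful work plus switch overhead
            process[1] -= run
            if process[1] > 0:
                ready.append(process)  # back to the end of the queue
            else:
                print(f"{process[0]} finished at t={clock}")

    Halving $\delta$ roughly doubles the number of switches, so a larger share of the total time is pure overhead.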

    In case you have (time) critical processes (e.g. in nuclear power plants) you would always want to execute them first. This can be achieved by priority-based scheduling. Naturally, this scheduling algorithm cannot be considered fair.

    If you want to prioritize important jobs but still do not want to starve the others, you can introduce weights and penalties into the algorithm. The scheme presented in the lecture works as follows. Each job has a base priority, e.g. 10. The process with the lowest priority value (in this example, the lowest value marks the most important process) gets a defined time slot of computing time. After that, a penalty of e.g. 60 points is added to that process only. Then the priorities of all processes are halved. The last step is adding the base priority to all processes again.
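    Here is a sketch of that scheme, with the base priority and penalty taken from the example above and the number of rounds invented; the process that currently holds the lowest value runs next:

        # Priority scheme with penalties: lowest value = most important.
        BASE, PENALTY = 10, 60
        priorities = {"p1": BASE, "p2": BASE, "p3": BASE}

        for _ in range(6):  # six scheduling rounds
            running = min(priorities, key=priorities.get)
            print(f"time slot for {running}")
            priorities[running] += PENALTY  # penalize only the running process
            for name in priorities:
                # halve every priority, then re-add the base priority
                priorities[name] = priorities[name] // 2 + BASE

    Because the repeated halving makes old penalties decay while the base priority is re-added every round, a penalized process climbs back toward the front of the line instead of starving.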
