Tuesday, April 15, 2014

How to measure the time spent in a context switch

Problem

How can you measure the time spent in a context switch?

Solution

This is a tricky question, but let’s start with a possible solution.

A context switch is the act of switching between two processes (e.g., bringing a waiting process into execution and sending an executing process into the waiting or terminated state). In other words, it is the process of storing and restoring the state (context) of the CPU so that execution can be resumed from the same point at a later time; the time spent doing this is the context switch time we want to measure.

There are three potential triggers for a context switch:

Multitasking

Most commonly, within some scheduling scheme, one process needs to be switched out of the CPU so another process can run. This context switch can be triggered by the process making itself unrunnable, such as by waiting for an I/O or synchronization operation to complete. On a pre-emptive multitasking system, the scheduler may also switch out processes which are still runnable. To prevent other processes from being starved of CPU time, preemptive schedulers often configure a timer interrupt to fire when a process exceeds its time slice. This interrupt ensures that the scheduler will gain control to perform a context switch.
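As a quick way to observe these two kinds of switches from user space, here is a minimal sketch: getrusage() reports how many voluntary and involuntary context switches the calling process has incurred (the ru_nvcsw/ru_nivcsw counters are provided by Linux and the BSDs, not by every POSIX system).

#include <stdio.h>
#include <unistd.h>
#include <sys/resource.h>

/* Print how many times this process has been switched out so far. */
static void print_switches(const char *label)
{
    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    printf("%s: voluntary=%ld involuntary=%ld\n",
           label, ru.ru_nvcsw, ru.ru_nivcsw);
}

int main(void)
{
    print_switches("start");

    sleep(1);                     /* blocking gives up the CPU: voluntary switch */
    print_switches("after sleep");

    for (volatile unsigned long i = 0; i < 500000000UL; i++)
        ;                         /* never blocks: any switches here are preemptions */
    print_switches("after busy loop");
    return 0;
}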

Interrupt handling

Modern architectures are interrupt driven. This means that if the CPU requests data from a disk, for example, it does not need to busy-wait until the read is over; it can issue the request and continue with some other execution. When the read is over, the CPU can be interrupted and presented with the result. For interrupts, a program called an interrupt handler is installed, and it is the interrupt handler that handles the interrupt from the disk.

When an interrupt occurs, the hardware automatically switches a part of the context (at least enough to allow the handler to return to the interrupted code). The handler may save additional context, depending on details of the particular hardware and software designs. Often only a minimal part of the context is changed in order to minimize the amount of time spent handling the interrupt. The kernel does not spawn or schedule a special process to handle interrupts, but instead the handler executes in the (often partial) context established at the beginning of interrupt handling. Once interrupt servicing is complete, the context in effect before the interrupt occurred is restored so that the interrupted process can resume execution in its proper state.
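A loose user-space analogy of this save/run-handler/restore pattern is a POSIX signal handler: the kernel saves the interrupted context, runs the installed handler, and restores the original context when the handler returns. This is only an analogy, not the real hardware interrupt path; the sketch below just illustrates the pattern.

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t got_alarm = 0;

/* The "interrupt handler": keep it minimal, just record that the event happened. */
static void on_alarm(int signo)
{
    (void)signo;
    got_alarm = 1;
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_alarm;
    sigaction(SIGALRM, &sa, NULL);   /* install the handler */

    alarm(1);                        /* the "device" will interrupt us in 1 second */

    while (!got_alarm)
        pause();                     /* sleep until a signal arrives */

    printf("handler ran, interrupted flow resumed\n");
    return 0;
}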

User and kernel mode switching

When a transition between user mode and kernel mode is required in an operating system, a context switch is not necessary; a mode transition is not by itself a context switch. However, depending on the operating system, a context switch may also take place at this time. 
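To see that a mode switch alone is much cheaper than a full context switch, one can time a trivial system call round trip. A minimal sketch, assuming Linux; syscall(SYS_getpid) is used to force a real kernel entry rather than any library-cached value.

#define _GNU_SOURCE
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    const long iters = 1000000;
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    for (long i = 0; i < iters; i++)
        syscall(SYS_getpid);          /* user -> kernel -> user, same process */
    clock_gettime(CLOCK_MONOTONIC, &end);

    double ns = (end.tv_sec - start.tv_sec) * 1e9
              + (end.tv_nsec - start.tv_nsec);
    printf("~%.1f ns per system call round trip\n", ns / iters);
    return 0;
}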

The operating system must bring the state information of waiting processes into memory and save the state information of the running process.
So, roughly, a context switch happens when:
  1. A user process enters the kernel via a system call or a trap (e.g., a page fault) and the requested data (e.g., file contents) is not yet available, so the kernel puts the process to sleep and switches to another runnable process.
  2. The kernel detects that the running process has consumed its full time quantum (this happens in code invoked from the timer interrupt).
  3. Data becomes available for a higher-priority process that is currently sleeping (this happens in code invoked from/around I/O interrupts).

In order to solve this problem, we would like to record the timestamp of the last instruction of the outgoing process and the timestamp of the first instruction of the incoming process. The context switch time is the difference between these two timestamps.

Let’s take an easy example: Assume there are only two processes, P1 and P2.
P1 is executing and P2 is waiting for execution. At some point, the OS must swap P1 and P2; let's assume this happens at the Nth instruction of P1. The context switch time would then be Time_Stamp(P2_1) – Time_Stamp(P1_N), i.e., the timestamp of P2's first instruction minus the timestamp of P1's Nth (last executed) instruction.
Easy enough.

The tricky part is this: how do we know when this swapping occurs? Swapping is governed by the scheduling algorithm of the OS, and we cannot, of course, record the timestamp of every instruction in the process.
Another issue: there are many kernel-level threads which are also doing context switches, and the user does not have any control over them.
Overall, we can say that this is mostly an approximate calculation which depends on the underlying OS. One approximation could be to record the start and end timestamps of each process along with its time spent waiting in the ready queue.
If the total time of execution of all the processes is T, then the context switch time = T – (sum over all processes of (waiting time + execution time)).
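One practical way to approximate this on a real system (not exactly the bookkeeping above, but the same idea of timestamping around forced swaps) is a pipe "ping-pong": two processes alternately block on reads from each other, so every hop forces the kernel to switch between them. The measured round-trip time includes the pipe read/write overhead, so the result is an upper bound on the cost of two context switches; pinning both processes to a single CPU (for example with taskset on Linux) makes the numbers more meaningful. A minimal POSIX sketch:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>
#include <sys/wait.h>

#define ITERS 100000

int main(void)
{
    int p2c[2], c2p[2];               /* parent->child and child->parent pipes */
    char buf = 'x';

    if (pipe(p2c) == -1 || pipe(c2p) == -1) {
        perror("pipe");
        return 1;
    }

    pid_t pid = fork();
    if (pid == 0) {                   /* child: echo every byte back */
        for (int i = 0; i < ITERS; i++) {
            read(p2c[0], &buf, 1);
            write(c2p[1], &buf, 1);
        }
        _exit(0);
    }

    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);
    for (int i = 0; i < ITERS; i++) { /* parent: send, then wait for the echo */
        write(p2c[1], &buf, 1);
        read(c2p[0], &buf, 1);
    }
    clock_gettime(CLOCK_MONOTONIC, &end);
    wait(NULL);

    double ns = (end.tv_sec - start.tv_sec) * 1e9
              + (end.tv_nsec - start.tv_nsec);
    /* Each iteration is one round trip: at least two context switches
     * plus two pipe read/write pairs of overhead. */
    printf("~%.1f ns per round trip (>= 2 switches + pipe overhead)\n",
           ns / ITERS);
    return 0;
}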
