
G22.2250 Operating Systems
2010-11 Fall
Allan Gottlieb
Tuesdays 5:00-6:50pm Room 109 CIWW

Start Lecture #1

Chapter -1: Administrivia

I start at -1 so that when we get to chapter 1, the numbering will agree with the text.

(-1).1: Contact Information

  • my-last-name AT nyu DOT edu (best method)
  • http://cs.nyu.edu/~gottlieb
  • 715 Broadway, Room 712
  • 212 998 3344

(-1).2: Course Web Page

There is a web site for the course. You can find it from my home page, which is http://cs.nyu.edu/~gottlieb

  • You can also find these lecture notes on the course home page. Please let me know if you can't find it.
  • The notes are updated as bugs are found or improvements made.
  • I will also produce a separate page for each lecture after the lecture is given. These individual pages might not get updated as quickly as the large page.

(-1).3: Textbook

The course text is Tanenbaum, "Modern Operating Systems", Third Edition (3e).

  • We will cover nearly all of the first six chapters, plus some material from later chapters.

(-1).4: Computer Accounts and Mailman Mailing List

  • You are entitled to a computer account on one of the NYU sun machines. If you do not have one already, please get it asap.
  • Sign up for the Mailman mailing list for the course. You can do so via the link on the course home page.
  • If you want to send mail just to me, use my-last-name AT nyu DOT edu not the mailing list.
  • Questions on the labs should go to the mailing list. You may answer questions posed on the list as well. Note that replies are sent to the list.
  • I will respond to all questions; if another student has answered the question before I get to it, I will confirm if the answer given is correct.
  • Please use proper mailing list etiquette.
    • Send plain text messages rather than (or at least in addition to) html.
    • Use Reply to contribute to the current thread, but NOT to start another topic.
    • If quoting a previous message, trim off irrelevant parts.
    • Use a descriptive Subject: field when starting a new topic.
    • Do not use one message to ask two unrelated questions.
    • Do NOT make the mistake of sending your completed lab assignment to the mailing list. This is not a joke; several students have made this mistake in past semesters.
  • As a favor to me, please do NOT top post; that is, when replying, I ask that you either place your reply after the original text or intersperse it with the original.
    • I know that there are differing opinions on top posting, but I find it very hard to follow conversations that way.
    • Exception: I realize Blackberry users top post.

(-1).5: Grades

Grades are based on the labs and the final exam, with each very important. The weighting will be approximately
40%*LabAverage + 60%*FinalExam (but see homeworks below).

(-1).6: The Upper Left Board

I use the upper left board for lab/homework assignments and announcements. I should never erase that board. Viewed as a file it is group readable (the group is those in the room), appendable by just me, and (re-)writable by no one. If you see me start to erase an announcement, let me know.

I try very hard to remember to write all announcements on the upper left board and I am normally successful. If, during class, you see that I have forgotten to record something, please let me know. HOWEVER, if I forgot and no one reminds me, the assignment has still been given.

(-1).7: Homeworks and Labs

I make a distinction between homeworks and labs.

Labs are

  • Required.
  • Due several lectures later (date given on assignment).
  • Graded and form part of your final grade.
  • Penalized for lateness. The penalty is 1 point per day up to 30 days; then 2 points per day.
  • This penalty is much too mild; but it is enforced.
  • Computer programs you must write.

Homeworks are

  • Optional.
  • Due at the beginning of the next lecture.
  • Not accepted late.
  • Mostly from the book.
  • Collected and returned.
  • Able to help, but not hurt, your grade.

(-1).7.1: Homework Numbering

Homeworks are numbered by the class in which they are assigned. So any homework given today is homework #1. Even if I do not give homework today, the homework assigned next class will be homework #2. Unless I explicitly state otherwise, all homeworks assignments can be found in the class notes. So the homework present in the notes for lecture #n is homework #n (even if I inadvertently forgot to write it to the upper left board).

(-1).7.2: Doing Labs on non-NYU Systems

You may solve lab assignments on any system you wish, but ...

  • You are responsible for any non-nyu machine. I extend deadlines if the nyu machines are down, not if yours are.
  • Be sure to test your assignments on the nyu systems. In an ideal world, a program written in a high level language like Java, C, or C++ that works on your system would also work on the NYU system used by the grader. Sadly, this ideal is not always achieved despite marketing claims that it is achieved. So, although you may develop your lab on any system, you must ensure that it runs on the nyu system assigned to the course.
  • If somehow your assignment is misplaced by me and/or a grader, we need to have some timestamp ON AN NYU SYSTEM that can be used to verify the date the lab was completed. So once the lab is completed and safely located on an NYU system, do not touch it, since that will change the time stamps on the file.
  • When you complete a lab and have it on an nyu system, email the lab to the grader and copy yourself on that message. Please use one of the following two methods of mailing the lab.
    1. Send the mail from your CIMS account. (Not all students have a CIMS account.)
    2. Use the mail facility at home.nyu.edu or mail.nyu.edu.
    Keep the copy until you have received your grade on the assignment. I realize that I am being paranoid about this. It is rare for labs to get misplaced, but they sometimes do and I really don't want to be in the middle of a debate. Thank you.

(-1).7.3: Obtaining Help with the Labs

Good methods for obtaining help include

  1. Asking me during office hours (see web page for my hours).
  2. Asking the mailing list.
  3. Asking another student, but ...
    Your lab must be your own.
    That is, each student must submit a unique lab. Naturally, simply changing comments, variable names, etc. does not produce a unique lab.

(-1).7.4: Computer Language Used for Labs

You may write your lab in Java, C, or C++.

(-1).8: A Grade of Incomplete

The rules set by GSAS state:

3.6. Incomplete Grades: An unresolved grade, I, reverts to F one year after the beginning of the semester in which the course was taken unless an extension of the incomplete grade has been approved by the Vice Dean.

3.6.1. At the request of the departmental DGS and with the approval of the course instructor, the Vice Dean will review requests for an extension of an incomplete grade.

3.6.2. A request for an extension of incomplete must be submitted before the end of one year from the beginning of the semester in which the course was taken.

3.6.3. An extension of an incomplete grade may be requested for a period of up to, but not exceeding, one year.

3.6.4. Only one one-year extension of an incomplete may be granted.

3.6.5. If a student is approved for a leave of absence (see 4.4) any time the student spends on that leave of absence will not count toward the time allowed for completion of the coursework.

(-1).9 Academic Integrity Policy

This email from the assistant director describes the policy.

Dear faculty,

The vast majority of our students comply with the department's academic integrity policies; see
www.cs.nyu.edu/web/Academic/Undergrad/academic_integrity.html
www.cs.nyu.edu/web/Academic/Graduate/academic_integrity.html

Unfortunately, every semester we discover incidents in which students copy programming assignments from those of other students, making minor modifications so that the submitted programs are extremely similar but not identical. To help in identifying inappropriate similarities, we suggest that you and your TAs consider using Moss, a system that automatically determines similarities between programs in several languages, including C, C++, and Java. For more information about Moss, see: http://theory.stanford.edu/~aiken/moss/

Feel free to tell your students in advance that you will be using this software or any other system. And please emphasize, preferably in class, the importance of academic integrity.

Rosemary Amico
Assistant Director, Computer Science
Courant Institute of Mathematical Sciences

(-1).10: An Introductory OS Course with a Programming Prerequisite

(-1).10.1: An Introductory Course ...

I do not assume you have had an OS course as an undergraduate, and I do not assume you have had extensive experience working with an operating system.

If you have already had an operating systems course, this course is probably not appropriate. For example, if you can explain the following concepts/terms, the course is probably too elementary for you.

  • Process Scheduling
  • Round Robin Scheduling
  • Shortest Job First
  • Context Switching
  • Demand Paging
  • Segmentation
  • Page Fault
  • Memory Management Unit
  • Processes and Threads
  • Virtual Machine
  • Virtual Memory
  • Least Recently Used (page replacement)
  • Device Driver
  • Direct Memory Access (DMA)
  • Interrupt
  • Seek Time / Rotational Latency / (Disk) Transfer Rate

(-1).10.2: ... with a Programming Prerequisite

I do assume you are an experienced programmer, at least to the extent that you are comfortable writing modest size (several hundred line) programs.

Chapter 0: Interlude on Linkers

Originally called a linkage editor by IBM.

A linker is an example of a utility program included with an operating system distribution. Like a compiler, the linker is not part of the operating system per se, i.e. it does not run in supervisor mode. Unlike a compiler it is OS dependent (what object/load file format is used) and is not (normally) language dependent.

0.1: What does a Linker Do?

Link, of course.

When the compiler and assembler have finished processing a module, they produce an object module that is almost runnable. There are two remaining tasks to be accomplished before object modules can be run. Both are involved with linking (that word, again) together multiple object modules. The tasks are relocating relative addresses and resolving external references.

The output of a linker is called a load module because, with relative addresses relocated and the external addresses resolved, the module is ready to be loaded and run.

0.1.1: Relocating Relative Addresses

The compiler and assembler (mistakenly) treat each module as if it will be loaded at location zero.

  • For example, the machine instruction

        jump 120

    is used to indicate a jump to location 120 of the current module.

To convert this relative address to an absolute address, the linker adds the base address of the module to the relative address. The base address is the address at which this module will be loaded.

  • Example: Module A is to be loaded starting at location 2300 and contains the instruction

        jump 120

    The linker changes this instruction to

        jump 2420

How does the linker know that Module A is to be loaded starting at location 2300?

  • It processes the modules one at a time. The first module is to be loaded at location zero. So relocating the first module is trivial (adding zero). We say that the relocation constant is zero.
  • After processing the first module, the linker knows its length (say that length is L1).
  • Hence the next module is to be loaded starting at L1, i.e., the relocation constant is L1.
  • In general the linker keeps the sum of the lengths of all the modules it has already processed; this sum is the relocation constant for the next module.

0.1.2: Resolving External References

If a C (or Java, or Pascal, or Ada, etc.) program contains a call to a function f()
that is compiled separately, the resulting object module must contain some kind of jump to the beginning of f().

  • But this is impossible!
  • When the C program is compiled, the compiler and assembler do not know the location of f(), so there is no way they can supply the starting address.
  • Instead a dummy address is supplied and a notation made that this address needs to be filled in with the location of f(). This is called a use of f().
  • The object module containing the definition of f() contains a notation that f() is being defined and gives the relative address of the definition, which the linker converts to an absolute address (as above).
  • The linker then changes all uses of f() to the correct absolute address.

0.1.3: An Example from the Lab

To see how a linker works, let's consider the following example, which is the first dataset from lab #1. The description in lab #1 is more detailed.

The target machine is word addressable and each word consists of 4 decimal digits. The first (leftmost) digit is the opcode and the remaining three digits form an address.

Each object module contains three parts, a definition list, a use list, and the program text itself. Each definition is a pair (sym, loc). Each use is also a pair (sym, loc), but sym is used in other loc's as well (see below).

The program text consists of a count N followed by N pairs (type, word), where word is a 4-digit instruction described above and type is a single character indicating if the address in the word is Immediate, Absolute, Relative, or External.

The actions taken by the linker depend on the type of the instruction, as we now illustrate. Consider the first input set from the lab.

Input set #1

    1 xy 2
    2 z xy
    5 R 1004  I 5678  E 2000  R 8002  E 7001

    0
    1 z
    6 R 8001  E 1000  E 1000  E 3000  R 1002  A 1010

    0
    1 z
    2 R 5001  E 4000

    1 z 2
    2 xy z
    3 A 8000  E 1001  E 2000

The first pass simply finds the base address of each module and produces the symbol table giving the values for xy and z (2 and 15 respectively). The second pass does the real work using the symbol table and base addresses produced in pass one.

The resulting output (shown below) is more detailed than I expect you to produce. The detail is there to help me explain what the linker is doing. All I would expect from you is the symbol table and the rightmost column of the memory map.

Symbol Table
    xy=2
    z=15

Memory Map
    +0
    0:     R 1004  1004+0 = 1004
    1:     I 5678           5678
    2: xy: E 2000  ->z      2015
    3:     R 8002  8002+0 = 8002
    4:     E 7001  ->xy     7002
    +5
    0      R 8001  8001+5 = 8006
    1      E 1000  ->z      1015
    2      E 1000  ->z      1015
    3      E 3000  ->z      3015
    4      R 1002  1002+5 = 1007
    5      A 1010           1010
    +11
    0      R 5001  5001+11= 5012
    1      E 4000  ->z      4015
    +13
    0      A 8000           8000
    1      E 1001  ->z      1015
    2: z:  E 2000  ->xy     2002

You must process each module separately, i.e., except for the symbol table and memory map, your space requirements should be proportional to the largest module, not to the sum of the modules. This does NOT make the lab harder.

Remark: It is faster (less I/O) to do a one-pass approach, but it is harder since you need to remember, and later patch, each use that occurs in a module preceding the module with the definition.

The linker on unix was mistakenly called ld (for loader), which is unfortunate since it links but does not load.

Historical remark: Unix was originally developed at Bell Labs; the seventh edition of unix was made publicly available (perhaps earlier ones were somewhat available). The 7th ed man page for ld begins (see http://cm.bell-labs.com/7thEdMan):

    .TH LD 1
    .SH NAME
    ld \- loader
    .SH SYNOPSIS
    .B ld
    [ option ] file ...
    .SH DESCRIPTION
    .I Ld
    combines several object programs into one, resolves external references, and searches libraries.

By the mid 80s the Berkeley version (4.3BSD) man page referred to ld as a "link editor", and this more accurate name is now standard in unix/linux distributions.

During the 2004-05 fall semester a student wrote to me

BTW - I have meant to tell you that I know the lady who wrote ld. She told me that they called it loader, because they just really didn't have a good idea of what it was going to be at the time.


Lab #1: Implement a two-pass linker. The specific assignment is detailed on the class home page.

Chapter 1: Introduction

Homework: Read Chapter 1 (Introduction)

Levels of abstraction (virtual machines)

Software (and hardware, but that is not this course) is often implemented in layers. The higher layers use the facilities provided by lower layers.

Alternatively said, the upper layers are written using a more powerful and more abstract virtual machine than the lower layers.

In other words, each layer is written as though it runs on the virtual machine supplied by the lower layers and in turn provides a more abstract (pleasant) virtual machine for the higher layers to run on.

Using a broad brush, the layers are:

  1. Applications (e.g., web browser) and utilities (e.g., compiler, linker).
  2. User interface (UI). It may be text oriented (Unix/Linux shell, DOS prompt) or graphical (GUI, e.g., MS Windows, Gnome/KDE, MAC).
  3. Libraries (e.g., libc).
  4. The OS proper (the kernel).
  5. Hardware.

An important distinction is that the kernel runs in privileged (a.k.a. kernel or supervisor) mode, whereas compilers, editors, shells, linkers, browsers, etc. run in user mode.

The kernel itself is normally layered, e.g.

  1. Filesystems
  2. Machine independent I/O
  3. Machine dependent device drivers

The machine-independent I/O part is written assuming an idealized device. For example, it simply reads a block from a "disk". But in reality one must deal with the specific disk controller.

Often the machine independent part is more than one layer.

The term OS is not well defined. Is it just the kernel? How about the libraries? The utilities? All these are certainly system software but it is not clear how much is part of the OS.

1.1: What is an operating system?

As mentioned above, the OS raises the abstraction level by providing a higher level virtual machine. A second related key objective for the OS is to manage the resources provided by this virtual machine.

1.1.1: The Operating System as an Extended Machine

The kernel itself raises the level of abstraction and hides details. For example a user (of the kernel) can write to a file (a concept not present in hardware) and ignore whether the file resides on a floppy, a CD-ROM, or a hard disk. The user can also ignore issues such as whether the file is stored contiguously or is broken into blocks.

Well designed abstractions are a key to managing complexity.

1.1.2: The Operating System as a Resource Manager

The kernel must manage the resources to resolve conflicts between users. Note that when we say users, we are not referring directly to humans, but instead to processes (typically) running on behalf of humans.

Typically the resource is shared or multiplexed between the users. This can take the form of time-multiplexing, where the users take turns (e.g., the processor resource) or space-multiplexing, where each user gets a part of the resource (e.g., a disk drive).

With sharing comes various issues such as protection, privacy, fairness, etc.

Homework: What are the two main functions of an operating system?

How is an OS fundamentally different from a compiler (say)?

Answer: Concurrency! Per Brinch Hansen, in Operating Systems Principles (Prentice Hall, 1973), writes:

The main difficulty of multiprogramming is that concurrent activities can interact in a time-dependent manner, which makes it practically impossible to locate programming errors by systematic testing. Perhaps, more than anything else, this explains the difficulty of making operating systems reliable.

Homework: 1. (Unless otherwise stated, problem numbers are from the end of the current chapter in Tanenbaum.)

1.2: History of Operating Systems

The subsection headings describe the hardware as well as the OS; we are naturally more interested in the latter. These two development paths are related: the improving hardware enabled the more advanced OS features.

1.2.1: The first Generation (1945-55) Vacuum Tubes (and No OS)

One user (program; perhaps several humans) at a time. Although this time frame predates my own usage, computers without serious operating systems existed during the second generation and were now available to a wider (but still very select) audience.

I have fond memories of the Bendix G-15 (paper tape) and the IBM 1620 (cards; typewriter; decimal). During the short time you had the machine, it was truly a personal computer.

1.2.2: The Second Generation (1955-65) Transistors and Batch Systems

Many jobs were batched together, but the systems were still uniprogrammed: a job, once started, was run to completion without interruption and then flushed from the system.

A change from the previous generation is that the OS was not reloaded for each job and hence needed to be protected from the user's execution. Previously, the beginning of your job contained the trivial OS-like support features.

The batches of user jobs were prepared offline (cards to magnetic tape) using a separate computer (an IBM 1401 with a 1402 card reader/punch). The tape was brought to the main computer (an IBM 7090/7094) where the output to be printed was written on another tape. This tape went back to the 1401 and was printed on a 1403.

Start Lecture #2

1.2.3: The Third Generation (1965-1980) and Multiprogramming

In my opinion this is the biggest change from the OS point of view. It is with multiprogramming (many processes executing concurrently) that we have the operating system fielding requests whose arrival order is non-deterministic (at the pragmatic level). Now operating systems become notoriously hard to get right due to the inability to test a significant percentage of the possible interactions and the inability to reproduce bugs on request.

The Purpose of Multiprogramming

The purpose of multiprogramming is to overlap CPU and I/O activity and thus greatly improve CPU utilization. Recall that these computers, in particular the processors, were very expensive.

Multiple Batch Streams

  • IBM OS/MFT (Multiprogramming with a Fixed number of Tasks).
    • OS for IBM system 360.
    • The (real) memory is partitioned and a batch is assigned to a fixed partition.
    • The memory assigned to a partition does not change.
    • Jobs are queued for a specific partition. These jobs are not multiprogrammed: When a job is loaded into a partition, it stays there until completion, other jobs destined for that partition wait their turn.
    • Jobs residing in separate partitions are multiprogrammed. When a job residing in one partition starts a (slow) I/O, the CPU switches to executing a job in another partition.
    • Jobs were spooled from cards into the memory; similarly output was spooled from the memory to a printer.
  • IBM OS/MVT (Multiprogramming with a Variable number of Tasks) (then other names).
    • Each job gets just the amount of memory it needs. That is, the partitioning of memory changes as jobs enter and leave.
    • MVT is a more efficient user of resources, but is more difficult to implement.
    • When we study memory management, we will see that, with varying size partitions, questions like compaction and which hole to use arise.

Time Sharing

This is multiprogramming with rapid switching between jobs (processes) and with individual users executing their own jobs from a remote terminal. Deciding when to switch and which process to switch to is called scheduling.

We will study scheduling when we do processor management.

1.2.4: The Fourth Generation (1980-Present) Personal Computers

Serious PC Operating systems such as Unix/Linux, Windows NT/2000/XP/Vista/7 and (the newer) MacOS are multiprogrammed OSes.

GUIs have become important. What is not clear is whether the GUI should be part of the kernel.

Early PC operating systems were uniprogrammed and their direct descendants lasted for quite some time (e.g., Windows ME), but now all (non-embedded) OS are multiprogrammed.

Homework: Why was timesharing not widespread on second generation computers?

Homework: 2.

Remark: I very much recommend reading all of 1.2, not for this course especially, but for general interest. Tanenbaum writes well and is my age so lived through much of the history himself.

1.3: Computer Hardware Review

The picture above is very simplified. (For one thing, today separate buses are used to Memory and Video.)

A bus is a set of wires that connect two or more devices. Only one message can be on the bus at a time. All the devices receive the message: there are no switches in between to steer the message to the desired destination, but often some of the wires form an address that indicates which devices should actually process the message.

1.3.1: Processors

Only at a few points will we get into sufficient detail to need to understand the various processor registers such as the program counter (a.k.a. instruction pointer), stack pointers, and Program Status Words (PSWs). We will ignore computer design issues such as pipelining and superscalar execution.

We do, however, need the notion of a trap, that is an instruction that atomically switches the processor into privileged mode and jumps to a pre-defined physical address. We will have much more to say about traps later in the course.

Multithreaded and Multicore Chips

Many of the OS issues introduced by multi-processors of any flavor are also found in a uni-processor, multi-programmed system. In particular, successfully handling the concurrency offered by the second class of systems, goes a long way toward preparing for the first class. The remaining multi-processor issues are not covered in this course.

1.3.2: Memory

We will ignore caches, but will (later) discuss demand paging, which is very similar (although demand paging and caches use largely disjoint terminology). In both cases, the goal is to combine large, slow memory with small, fast memory to achieve the effect of large, fast memory. We cover caches in our computer design (aka architecture) courses (you can access my class notes off my home page).

The central memory in a system is called RAM (Random Access Memory). A key point is that it is volatile, i.e. the memory loses its data if power is turned off.


ROM (Read Only Memory) is used for (low-level control) software that often comes with devices on general purpose computers, and for the entire software system on non-user-programmable devices such as microwaves and wristwatches. It is also used for non-changing data. A modern, familiar ROM is CD-ROM (or the denser DVD, or the even denser Blu-ray). ROM is non-volatile.

But often this unchangeable data needs to be changed (e.g., to fix bugs). This gives rise first to PROM (Programmable ROM), which, like a CD-R, can be written once (as opposed to being mass produced already written like a CD-ROM), and then to EPROM (Erasable PROM), which is like a CD-RW. Early EPROMs needed UV light for erasure; EEPROM (Electrically Erasable PROM, or Flash RAM) can be erased by normal circuitry, which is much more convenient.

Memory Protection and Context Switching

As mentioned above when discussing OS/MFT and OS/MVT, multiprogramming requires that we protect one process from another. That is, we need to translate the virtual addresses (a virtual address is the address in the program itself) into physical addresses (a physical address is the address in the actual computer) such that, at any point in time, the physical addresses of each process are disjoint. The hardware that performs this translation is called the MMU or Memory Management Unit.

When context switching from one process to another, the translation must change, which can be an expensive operation.

1.3.3: Disks

When we do I/O for real, I will show a real disk opened up and illustrate the components

  • Platter
  • Surface
  • Head
  • Track
  • Sector
  • Cylinder
  • Seek time
  • Rotational latency
  • Transfer time

Devices are often quite difficult to manage and a separate computer, called a controller, is used to translate OS commands into what the device requires.

1.3.4: Tapes

The bottom of the memory hierarchy, tapes have large capacities, tiny cost per byte, and very long access times. Tapes are becoming less important since their technology improvement has not kept up with the improvement in disks.

1.3.5: I/O Devices

In addition to the disks and tapes just mentioned, I/O devices include monitors (and graphics controllers), NICs (Network Interface Controllers), Modems, Keyboards, Mice, etc.

The OS communicates with the device controller, not with the device itself. For each different controller, a corresponding device driver is included in the OS. Note that, for example, many graphics controllers are capable of controlling a standard monitor, and hence the OS needs many graphics device drivers.

In theory any SCSI (Small Computer System Interface) controller can control any SCSI disk. In practice this is not true as SCSI gets improved to wide SCSI, ultra SCSI, etc. The newer controllers can still control the older disks and often the newer disks can run in degraded mode with an older controller.

How Does the OS Know When the I/O Is Complete?

Three methods are employed.

  1. The OS can busy wait, constantly asking the controller if the I/O is complete. This is the easiest method, but can have low performance. It is also called polling or PIO (Programmed I/O).
  2. The OS can tell the controller to start the I/O and then switch to other tasks. The controller must then interrupt the OS when the I/O is done. This method involves less waiting, but is harder to program (concurrency!). Moreover, on modern processors a single interrupt is rather costly, much more than a single memory reference, but much, much less than a disk I/O.
  3. Some controllers can do DMA (Direct Memory Access) in which case they deal directly with memory after being started by the CPU. This takes work from the CPU and halves the number of bus accesses.

We discuss these alternatives more in chapter 5. In particular, we explain the last point about halving bus accesses.

Homework: 3.

1.3.6: Buses

On the right is a figure showing the specifications for an Intel chip set introduced in 2000. Many different names are used, e.g., hubs are often called bridges. Most likely due to their location on the diagram to the right, the Memory Controller Hub is often called the Northbridge and the I/O Controller Hub the Southbridge.

As shown on the right this chip set has two different width PCI buses. The figure below, which includes some devices themselves, does not show the two different PCI buses. This particular chip set supplies USB. An alternative is to have a PCI USB controller.

The use of ISA (Industry Standard Architecture) is decreasing.

Note that the diagram below, which is close to figure 1-12 of the 3e differs from the figure to the right in at least two respects. The connection between the bridges is a proprietary bus on the top figure and the PCI bus is generated by the Northbridge itself on the bottom figure. The figure on the right is definitely correct for the specific chip set described and is very similar to the Wikipedia article.

Remark: In January 2008, I received an email reply from Tanenbaum stating that he will try to fix the diagram in the book in the next printing.

1.3.7 Booting the Computer

When the power button is pressed, control starts at the BIOS, a (typically flash) ROM in the system. Control is then passed to (the tiny program stored in) the MBR (Master Boot Record), which is the first 512-byte block on the disk. Control then proceeds to the first block in the partition and from there the OS (normally via an OS loader) is finally invoked.

The above assumes that the boot medium selected by the BIOS was the hard disk. Other possibilities include: floppy, CD-ROM, NIC.

1.4: OS Zoo

There is not much difference between mainframe, server, multiprocessor, and PC OS's. Indeed the 3e has considerably softened the differences given in the 2e. For example, Unix/Linux and Windows run on all of them.

This course covers these four classes (which are, in effect, one class).

1.4.1 Mainframe Operating Systems

Used in data centers, these systems offer tremendous I/O capabilities and extensive fault tolerance.

1.4.2 Server Operating Systems

Perhaps the most important servers today are web servers. Again I/O (and network) performance are critical.

1.4.3 Multiprocessor Operating systems

A multiprocessor (as opposed to a multi-computer or multiple computers or computer network or grid) means multiple processors sharing memory and controlled by a single instance of the OS, which typically can run on any of the processors. Often it can run on several simultaneously.

These existed almost from the beginning of the computer age, but now are not exotic. Indeed even my laptop is a multiprocessor.

Multiple computers

The operating system(s) controlling a system of multiple computers are often classified as either a Network OS or a Distributed OS. A Network OS is basically a collection of ordinary PCs on a LAN that use the network facilities available on PC operating systems. Some extra utilities are often present to ease running jobs on other processors.

A more sophisticated Distributed OS is a version of the above in which the boundaries between the processors are made nearly invisible to users (except for performance).

This subject is not part of our course (but often is covered in G22.2251).

1.4.4 PC Operating Systems (client machines)

In the recent past some operating systems (e.g., ME) were claimed to be tailored to client operation; others felt they were restricted to client operation. This distinction seems to be gone now; a modern PC OS is fully functional. I guess for marketing reasons some of the functionality can be disabled.

1.4.5 Handheld Computer Operating Systems

This includes PDAs and phones, which are rapidly merging.

The only real difference between this class and the above is the restriction to very modest memory. However, the available memory keeps getting bigger, and some phones now include a stripped-down Linux.

1.4.6 Embedded Operating Systems

The OS is built into the device, e.g., microwave ovens and cardiac monitors. The OS is in ROM, so it is not changed.

Since no user code is run, protection is not as important. In that respect the OS is similar to the very earliest computers. Embedded OSes are very important commercially, but not covered much in this course.

1.4.7 Sensor Node Operating Systems

Embedded systems that also contain sensors and communication devices so that the systems in an area can cooperate.

1.4.8 Real-time Operating Systems

As the name suggests, time (more accurately, timeliness) is an important consideration. There are two classes: soft and hard real time. In the latter, missing a deadline is a fatal error, sometimes literally. Very important commercially, but not covered much in this course.

1.4.9 Smart Card Operating Systems

Very limited in power (both meanings of the word).

1.5 Operating System Concepts

This will be very brief. Much of the rest of the course will consist of expanding on these concepts.

1.5.1 Processes

A process is a program in execution. If you run the same program twice, you have created two processes. For example, if you have two editors running in two windows, each instance of the editor is a separate process.

Often one distinguishes the state or context of a process, i.e., its address space (roughly its memory image), open files, etc., from the thread of control. If one has many threads running in the same task, the result is a multithreaded process.

The OS keeps information about all processes in the process table. Indeed, the OS views the process as the entry in the process table. This is an example of an active entity being viewed as a data structure (cf. discrete event simulations), an observation made by Finkel in his (out of print) OS textbook.

The Process Tree

The set of processes forms a tree via the fork system call. The forker is the parent of the forkee, which is called a child. If the system always blocks the parent until the child finishes, the tree is quite simple, just a line.

However, in modern OSes, the parent is free to continue executing and in particular is free to fork again, thereby producing another child.

A process can send a signal to another process to cause the latter to execute a predefined function (the signal handler). It can be tricky to write a program with a signal handler since the programmer does not know when in the program the signal handler will be invoked.
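The point that the program cannot predict when its handler will run can be seen with a short sketch using Python's signal module (a thin wrapper over the Unix signal machinery discussed here); the handler name and the choice of SIGUSR1 are mine, for illustration only.

```python
import os
import signal

events = []  # written only by the handler

def handler(signum, frame):
    # Invoked whenever SIGUSR1 arrives; the main program cannot
    # predict at which point in its execution this will happen.
    events.append(signum)

signal.signal(signal.SIGUSR1, handler)   # install the handler
os.kill(os.getpid(), signal.SIGUSR1)     # send the signal to ourselves

while not events:                        # spin until the handler has run
    pass
print("handler saw signal", events[0])
```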

Each user is assigned a User IDentification (UID) and all processes created by that user have this UID. A child has the same UID as its parent. It is sometimes possible to change the UID of a running process. A group of users can be formed and given a Group IDentification, GID. One UID is special (the superuser or administrator) and has extra privileges.

Access to files and devices can be limited to a given UID or GID.


A set of processes is deadlocked if each of the processes is blocked by a process in the set. The automotive equivalent, shown below, is called gridlock. (The photograph below was sent to me by Laurent Laor.)

1.5.2 Address Spaces

Clearly, each process requires memory, but there are other issues as well. For example, your linkers (will) produce a load module that assumes the process is loaded at location 0. The result is that every load module has the same (virtual) address space. The operating system must ensure that the virtual addresses of concurrently executing processes are assigned disjoint physical memory.

For another example note that current operating systems permit each process to be given more (virtual) memory than the total amount of (real) memory on the machine.
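A sketch of the disjoint-memory point: after a fork, parent and child see the same virtual address, yet the OS backs it with separate physical memory, so a write by the child does not affect the parent. (This relies on CPython's id() exposing an object's virtual address, which is an implementation detail, not a language guarantee.)

```python
import os

value = [0]
addr = id(value)            # the object's (virtual) address in this process

pid = os.fork()
if pid == 0:
    # Child: sees the same virtual address as the parent ...
    ok = (id(value) == addr)
    value[0] = 99           # ... but the write touches the child's memory only
    os._exit(0 if ok else 1)

_, status = os.waitpid(pid, 0)
same_addr_in_child = (os.WEXITSTATUS(status) == 0)
print(value[0], same_addr_in_child)   # the parent's copy is unchanged
```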

1.5.3 Files

Modern systems have a hierarchy of files: a file system tree.

  • In MSDOS the hierarchy is a forest, not a tree. There is no file or directory that is an ancestor of both a:\ and c:\.
  • In recent versions of Windows, My Computer is the parent of a:\ and c:\.
  • In unix the existence of (hard) links weakens the tree to a DAG (directed acyclic graph).
  • Unix also has symbolic links, which when used indiscriminately, permit directed cycles (i.e., the result is not a DAG).
  • Windows has shortcuts, which are somewhat similar to symbolic links.

You can name a file via an absolute path starting at the root directory or via a relative path starting at the current working directory.
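A small sketch of absolute versus relative naming, using Python's wrappers around the chdir system call (the file name notes.txt is just an example):

```python
import os
import tempfile

d = tempfile.mkdtemp()                   # scratch directory
absolute = os.path.join(d, "notes.txt")  # absolute path: starts at the root
with open(absolute, "w") as f:
    f.write("hi")

old = os.getcwd()
os.chdir(d)                              # change the current working directory
with open("notes.txt") as f:             # relative path: resolved against cwd
    text = f.read()

os.chdir(old)                            # clean up
os.unlink(absolute)
os.rmdir(d)
print(text)
```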

In Unix, one file system can be mounted on (attached to) another. When this is done, access to an existing directory on the second filesystem is temporarily replaced by the entire first file system. Most often the directory chosen is empty before the mount so no files become (temporarily) invisible.

In addition to regular files and directories, Unix also uses the file system namespace for devices (called special files), which are typically found in the /dev directory. Often, utilities that are normally applied to (ordinary) files can be applied as well to some special files. For example, when you are accessing a unix system using a mouse and do not have anything serious going on (e.g., right after you log in), type the following command

cat /dev/mouse and then move the mouse. On my more modern system the command is cat /dev/input/mice. You kill the cat (sorry) by typing ctrl-C. I tried this on my Linux box (using a text console) and no damage occurred. Your mileage may vary.

Before a file can be accessed, it must be opened and a file descriptor obtained. Subsequent I/O system calls (e.g., read and write) use the file descriptor rather than the file name. This is an optimization that enables the OS to find the file once and save the information in a file table accessed by the file descriptor. Many systems have standard files that are automatically made available to a process upon startup. These (initial) file descriptors are fixed.

  • standard input: fd=0
  • standard output: fd=1
  • standard error: fd=2
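Two properties of descriptors can be demonstrated directly; the sketch below uses Python's os module, which exposes the underlying Unix calls, and assumes no other thread opens a descriptor in between: fd 1 really is standard output, and open hands out the lowest-numbered free descriptor, so a just-closed number is reused.

```python
import os
import tempfile

# File descriptor 1 is standard output: writing to it directly
# has the same effect as printing.
os.write(1, b"hello via fd 1\n")

# open returns the lowest-numbered free descriptor, so after a
# close the same number is handed out again.
fd, path = tempfile.mkstemp()
first = fd
os.close(first)
second = os.open(path, os.O_RDONLY)
os.close(second)
os.unlink(path)
print(first, second)
```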

A convenience offered by some command interpreters is a pipe or pipeline. The pipeline

dir | wc
which pipes the output of dir into a character/word/line counter, will give the number of files in the directory (plus other info).
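Underneath the shell's | is the pipe system call, which creates a kernel buffer with a read end and a write end. A minimal sketch via Python's os wrappers (counting newlines stands in for the line count wc reports):

```python
import os

r, w = os.pipe()                   # kernel buffer: read end r, write end w
os.write(w, b"one\ntwo\nthree\n")  # the producer's output
os.close(w)                        # closing gives the reader an EOF

data = b""
while True:
    chunk = os.read(r, 4096)
    if not chunk:                  # empty read means end of file
        break
    data += chunk
os.close(r)

line_count = data.count(b"\n")     # what wc -l would report
print(line_count)
```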

1.5.4: Input/Output

There are a wide variety of I/O devices that the OS must manage. For example, if two processes are printing at the same time, the OS must not interleave the output.

The OS contains device specific code (drivers) for each device (really each controller) as well as device-independent I/O code.

1.5.5 Protection

Files and directories have associated permissions.

  • Most systems supply at least rwx (readable, writable, executable).
  • Separate permissions can be defined for the file's owner (files have UIDs), for other users with the same GID as the file, and for everyone else.
  • A more general mechanism is to provide an access control list for each file.
  • When a file is opened, permissions are checked and, if the open is permitted, a file descriptor is returned that is used for subsequent operations.
  • Often files have attributes as well. For example, the linux ext2 and ext3 file systems support a no-dump attribute that is a hint to the dump program not to back up this file.
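The owner/group/other rwx bits can be inspected with the stat system call. A sketch using Python's os and stat modules; the mode 0o640 (rw- for the owner, r-- for the group, nothing for others) is an arbitrary example:

```python
import os
import stat
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)

os.chmod(path, 0o640)              # owner rw-, group r--, others ---
mode = stat.S_IMODE(os.stat(path).st_mode)

# Decode the nine permission bits into the familiar rwx string.
shown = "".join(
    ch if mode & (1 << (8 - i)) else "-"
    for i, ch in enumerate("rwxrwxrwx")
)
print(oct(mode), shown)
os.unlink(path)
```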

Memory assigned to a process, i.e., an address space, must also be protected.


Security has, of course, sadly become a very serious concern. The topic is quite deep, and I do not feel that the necessarily superficial coverage that time would permit would be useful, so we are not covering it at all.

1.5.6 The Shell or Command Interpreter (DOS Prompt)

The command line interface to the operating system. The shell permits the user to

  • Invoke commands.
  • Pass arguments to the commands.
  • Redirect the output of a command to a file or device.
  • Pipe one command to another (as illustrated above via dir | wc).

Instead of a shell, one can have a more graphical interface.

Homework: 7.

Ontogeny Recapitulates Phylogeny

Some concepts become obsolete and then reemerge due in both cases to technology changes. Several examples follow. Perhaps the cycle will repeat with smart card OS.

Large Memories (and Assembly Language)

The use of assembly languages greatly decreased when memories got larger. When minicomputers and microcomputers (early PCs) were first introduced, each had a small memory and for a while assembly language again became popular.

Protection Hardware (and Monoprogramming)

Multiprogramming requires protection hardware. Once the hardware became available, monoprogramming became obsolete. Again, when minicomputers and microcomputers were introduced, they had no such hardware, so monoprogramming revived.

Disks (and Flat File Systems)

When disks are small, they hold few files and a flat (single directory) file system is adequate. Once disks get large, a hierarchical file system is necessary. When mini and microcomputers were introduced, they had tiny disks and the corresponding file systems were flat.

Virtual Memory (and Dynamically Linked Libraries)

Virtual memory, among other advantages, permits dynamically linked libraries; so as VM hardware appears, so does dynamic linking.

1.6 System Calls

System calls are the way a user (i.e., a program) directly interfaces with the OS. Some textbooks use the term envelope for the component of the OS responsible for fielding system calls and dispatching them to the appropriate component of the OS. On the right is a picture showing some of the OS components and the external events for which they are the interface.

Note that the OS serves two masters. The hardware (at the bottom) asynchronously sends interrupts and the user (at the top) synchronously invokes system calls and generates page faults.

Homework: 14.

What happens when a user executes a system call such as read()? We show a more detailed picture below, but at a high level what happens is

  1. Normal function call (in C, Ada, Pascal, Java, etc.).
  2. Library routine (probably in the C, or similar, language).
  3. Small assembler routine.
    1. Move arguments to predefined place (perhaps registers).
    2. Poof (a trap instruction) and then the OS proper runs in supervisor mode.
    3. Fix up result (move to correct place).

The following actions occur when the user executes the (Unix) system call

count = read(fd,buffer,nbytes)
which reads up to nbytes bytes from the file described by fd into buffer. The actual number of bytes read is returned (it might be less than nbytes if, for example, an eof was encountered).
  1. Push the third parameter onto the stack.
  2. Push the second parameter onto the stack.
  3. Push the first parameter onto the stack.
  4. Call the library routine, which involves pushing the return address onto the stack and jumping to the routine.
  5. Machine/OS dependent actions. One is to put the system call number for read in a well defined place, e.g., a specific register. This requires assembly language.
  6. Trap to the kernel. This enters the operating system proper and shifts the computer to privileged mode. Assembly language is again used.
  7. The envelope uses the system call number to access a table of pointers to find the handler for this system call.
  8. The read system call handler processes the request (see below).
  9. Some magic instruction returns to user mode and jumps to the location right after the trap.
  10. The library routine returns (there is more; e.g., the count must be returned).
  11. The stack is popped (ending the function invocation of read).

A major complication is that the system call handler may block. Indeed, the read system call handler is likely to block. In that case a context switch is likely to occur to another process. This is far from trivial and is discussed later in the course.
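Steps 4-6 above (library routine, then trap) can be made visible by calling the C library's read wrapper directly instead of going through Python's I/O layer. This sketch uses ctypes and assumes a typical Linux/glibc system; the wrapper-then-trap path is exactly what the list describes.

```python
import ctypes
import os

libc = ctypes.CDLL(None)           # the process's C library (Linux/glibc)

r, w = os.pipe()
os.write(w, b"hi")                 # put two bytes where read can find them
os.close(w)

buf = ctypes.create_string_buffer(16)
# The libc wrapper moves the arguments into place, traps into the
# kernel, and returns the count of bytes actually read.
count = libc.read(r, buf, 16)
os.close(r)
print(count, buf.raw[:count])
```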

1.6.1 System Calls for Process Management

Unix          Win32                  Description

Process Management
  fork          CreateProcess          Clone current process
  exec(ve)      (none)                 Replace current process
  wait(pid)     WaitForSingleObject    Wait for a child to terminate
  exit          ExitProcess            Terminate process & return status

File Management
  open          CreateFile             Open a file & return descriptor
  close         CloseHandle            Close an open file
  read          ReadFile               Read from file to buffer
  write         WriteFile              Write from buffer to file
  lseek         SetFilePointer         Move file pointer
  stat          GetFileAttributesEx    Get status info

Directory and File System Management
  mkdir         CreateDirectory        Create new directory
  rmdir         RemoveDirectory        Remove empty directory
  link          (none)                 Create a directory entry
  unlink        DeleteFile             Remove a directory entry
  mount         (none)                 Mount a file system
  umount        (none)                 Unmount a file system
  chdir         SetCurrentDirectory    Change the current working directory
  chmod         (none)                 Change permissions on a file
  kill          (none)                 Send a signal to a process
  time          GetLocalTime           Elapsed time since 1 jan 1970

We describe very briefly some of the unix (Posix) system calls. A short description of the Windows interface is in the book.

To show how the four process management calls enable much of process management, consider the following highly simplified shell.

while (true)
    display_prompt()
    read_command(command)
    if (fork() != 0)
        waitpid(...)
    else
        execve(command)
    endif
endwhile

The fork system call duplicates the process. That is, we now have a second process, which is a child of the process that actually executed the fork. The parent and child are very, very nearly identical. For example, they have the same instructions, they have the same data, and they are both currently executing the fork system call.

But there is a difference!

The fork system call returns a zero in the child process and returns a positive integer in the parent. In fact, the value returned is the PID (process ID) of the child.

Since zero represents false in C and a positive integer represents true, the parent and child execute different branches of the if in the code above.

Note that simply removing the waitpid(...) gives background jobs.
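The fork/waitpid half of the shell sketch can be run for real with Python's os wrappers. The exit status 7 is an arbitrary marker of mine; a real shell's child would call execve instead of exiting.

```python
import os

pid = os.fork()                    # clone the current process
if pid == 0:
    # Child: fork returned 0. A shell would call execve(command) here;
    # we just exit with a recognizable status.
    os._exit(7)
else:
    # Parent: fork returned the child's PID (a positive integer).
    child, status = os.waitpid(pid, 0)
    exit_code = os.WEXITSTATUS(status)
    print("child", child, "exited with", exit_code)
```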

1.6.2 System Calls for File Management

Most files are accessed sequentially from beginning to end. In this case the operations performed are

  • open -- possibly creating the file
  • multiple reads and writes
  • close

For non-sequential access, lseek is used to move the file pointer, which is the location in the file where the next read or write will take place.
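A sketch of the whole sequence (open, writes, lseek, reads, close) through Python's os wrappers; it also shows the earlier point that read returns the actual count, which may be less than the number of bytes requested.

```python
import os
import tempfile

fd, path = tempfile.mkstemp()      # open, creating the file
os.write(fd, b"abcdef")            # sequential writes ...

os.lseek(fd, 0, os.SEEK_SET)       # move the file pointer to the start
data = os.read(fd, 100)            # asks for 100 bytes, gets only 6

os.lseek(fd, 2, os.SEEK_SET)       # non-sequential access
tail = os.read(fd, 100)            # reads from the new position

os.close(fd)
os.unlink(path)
print(data, tail)
```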

1.6.3 System Calls for Directory Management

Directories are created and destroyed by mkdir and rmdir. Directories are changed by the creation and deletion of files. As mentioned, open can create files. Files can have several names: link is used to give a file another name and unlink to remove one. When the last name is gone (and the file is no longer open by any process), the file data is destroyed. This description is approximate; we give the details later in the course when we explain Unix i-nodes.
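The several-names behavior can be checked directly with the link and unlink wrappers in Python's os module (the names a and b are arbitrary):

```python
import os
import tempfile

d = tempfile.mkdtemp()
name1 = os.path.join(d, "a")
name2 = os.path.join(d, "b")

with open(name1, "w") as f:
    f.write("data")

os.link(name1, name2)                  # a second directory entry, same file
links_after = os.stat(name1).st_nlink  # the file now has two names

os.unlink(name1)                       # remove one name ...
with open(name2) as f:                 # ... the data is still reachable
    survived = f.read()

os.unlink(name2)                       # last name gone: file data destroyed
os.rmdir(d)
print(links_after, survived)
```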

Homework: 18.

1.6.4 Miscellaneous System Calls


1.6.5 The Windows Win32 API


1.6A Addendum on Transfer of Control

The transfer of control between user processes and the operating system kernel can be quite complicated, especially in the case of blocking system calls, hardware interrupts, and page faults. Before tackling these issues later, we begin with the familiar example of a procedure call within a user-mode process.

An important OS objective is that, even in the more complicated cases of page faults and blocking system calls requiring device interrupts, simple procedure call semantics are observed from a user process viewpoint. The complexity is hidden inside the kernel itself, yet another example of the operating system providing a more abstract, i.e., simpler, virtual machine to the user processes.

More details will be added when we study memory management (and know officially about page faults) and more again when we study I/O (and know officially about device interrupts).

A number of the points below are far from standardized. Such items as where to place parameters, which routine saves the registers, exact semantics of trap, etc., vary as one changes language/compiler/OS. Indeed some of these are referred to as calling conventions, i.e., their implementation is a matter of convention rather than logical requirement. The presentation below is, we hope, reasonable, but must be viewed as a generic description of what could happen instead of an exact description of what does happen with, say, C compiled by the Microsoft compiler running on Windows XP.

1.6A.1 User-mode procedure calls

Procedure f calls g(a,b,c) in process P. An example is above where a user program calls read.

Actions by f Prior to the Call

  1. Save the registers by pushing them onto the stack (in some implementations this is done by g instead of f).

  2. Push arguments c,b,a onto P's stack.
    Note: Stacks usually grow downward from the top of P's segment, so pushing an item onto the stack actually involves decrementing the stack pointer, SP.
    Note: Some compilers store arguments in registers not on the stack.

Executing the Call Itself

  3. Execute CALL <start-address of g>.
    This instruction pushes the program counter PC (a.k.a. the instruction pointer IP) onto the stack, and then jumps to the start address of g. The value pushed is actually the updated program counter, i.e., the location of the next instruction (the instruction to be executed by f when g returns).

Actions by g upon Being Called

  4. Allocate space for g's local variables by suitably decrementing SP.

  5. Start execution from the beginning of g, referencing the parameters as needed. The execution may involve calling other procedures, possibly including recursive calls to f and/or g.

Actions by g When Returning to f

  6. If g is to return a value, store it in the conventional place.

  7. Undo step 4: Deallocate local variables by incrementing SP.

  8. Undo step 3: Execute RET, i.e., pop the stack and set PC to the value popped, which is the return address pushed in step 3.

Actions by f upon the Return from g:

  9. (We are now at the instruction in f immediately following the call to g.)
    Undo step 2: Remove the arguments from the stack by incrementing SP.

  10. Undo step 1: Restore the registers while popping them off the stack.

  11. Continue the execution of f, referencing the returned value of g, if any.

Properties of (User-Mode) Procedure Calls

  • Predictable (often called synchronous) behavior: The author of f knows where and hence when the call to g will occur. There are no surprises, so it is relatively easy for the programmer to ensure that f is prepared for the transfer of control.
  • LIFO structure of control transfer: we can be sure that control will return to this f when this call to g exits. This holds even if, via recursion, g calls f. (We are ignoring language features such as exceptions and the use of unstructured assembly coding. In the latter cases all bets are off.)
  • Entirely in user mode and user space.

1.6A.2 Kernel-mode procedure calls

We mean one procedure running in kernel mode calling another procedure, which will also be run in kernel mode. Later, we will discuss switching from user to kernel mode and back.

There is not much difference between the actions taken during a kernel-mode procedure call and during a user-mode procedure call. The procedures executing in kernel-mode are permitted to issue privileged instructions, but the instructions used for transferring control are all unprivileged so there is no change in that respect.

One difference is that often a different stack is used in kernel mode, but that simply means that the stack pointer must be set to the kernel stack when switching from user to kernel mode. But we are not switching modes in this section; the stack pointer already points to the kernel stack. Often there are two stack pointers, one for kernel mode and one for user mode.

1.6A.3 The Trap instruction

The trap instruction, like a procedure call, is a synchronous transfer of control: We can see where, and hence when, it is executed. In this respect, there are no surprises. Although not surprising, the trap instruction does have an unusual effect: processor execution is switched from user-mode to kernel-mode. That is, the trap instruction normally itself is executed in user-mode (it is naturally an UNprivileged instruction), but the next instruction executed (which is NOT the instruction written after the trap) is executed in kernel-mode.

Process P, running in unprivileged (user) mode, executes a trap. The code being executed is written in assembler since there are no high level languages that generate a trap instruction. There is no need to name the function that is executing. Compare the following example to the explanation of f calling g given above.

Actions by P prior to the trap

  1. Save the registers by pushing them onto the stack.

  2. Store any arguments that are to be passed. The stack is not normally used to store these arguments since the kernel has a different stack. Often registers are used.

Executing the trap itself

  3. Execute TRAP <trap-number>.
    This instruction switches the processor to kernel (privileged) mode, jumps to a location in the OS determined by trap-number, and saves the return address. For example, the processor may be designed so that the next instruction executed after a trap is at physical address 8 times the trap-number.
    The trap-number can be thought of as the name of the code-sequence to which the processor will jump rather than as an argument to trap.

Actions by the OS upon being TRAPped into

  4. Jump to the real code.
    Recall that trap instructions with different trap numbers jump to locations very close to each other. There is not enough room between them for the real trap handler. Indeed one can think of the trap as having an extra level of indirection; it jumps to a location that then jumps to the real start address. If you learned about writing jump tables in assembler, this is very similar.

  5. Check all arguments passed. The kernel must be paranoid and assume that the user-mode program is evil and written by a bad guy.

  6. Allocate space by decrementing the kernel stack pointer.
    The kernel and user stacks are separate.

  7. Start execution from the jumped-to location.

Actions by the OS when returning to user mode

  8. Undo step 6: Deallocate space by incrementing the kernel stack pointer.

  9. Undo step 3: Execute (in assembler) another special instruction, RTI or ReTurn from Interrupt, which returns the processor to user mode and transfers control to the return location saved by the trap. The word Interrupt appears because an RTI is also used when the kernel is returning from an interrupt, as well as in the present case, when it is returning from a trap.

Actions by P upon the return from the OS

  10. (We are now at the instruction right after the trap.)
    Undo step 1: Restore the registers by popping the stack.

  11. Continue the execution of P, referencing the returned value(s) of the trap, if any.

Properties of TRAP/RTI

  • Synchronous behavior: The author of the assembly code in P knows where and hence when the trap will occur. There are no surprises, so it is relatively easy for the programmer to prepare for the transfer of control.
  • Trivial control transfer when viewed from P: The next instruction of P that will be executed is the one following the trap. As we shall see later, other processes may execute between P's trap and the next P instruction.
  • Starts and ends in user mode and user space, but executed in kernel mode and kernel space in the middle.

Remark: A good way to use the material in the addendum is to compare the first case (user-mode f calls user-mode g) to the TRAP/RTI case line by line so that you can see the similarities and differences.

Start Lecture #3


  1. The final exam will be as expected in this room at 5pm on Tuesday 17 May. The final is required (naturally) so don't make travel plans for that day.
  2. Kushank Wad*wa N14190946 is not listed on the registrar's list.

1.7: Operating System Structure

I must note that Tanenbaum is a big advocate of the so-called microkernel approach, in which as much as possible is moved out of the (supervisor mode) kernel into separate processes. The (hopefully small) portion left in supervisor mode is called a microkernel.

In the early 90s this was popular. Digital Unix (now called Tru64) and Windows NT/2000/XP/Vista/7 are examples. Digital Unix is based on Mach, a research OS from Carnegie Mellon University. Lately, the growing popularity of Linux has called into question the belief that microkernels are the wave of the future.

1.7.1 Monolithic approach

The previous picture: one big program

The system switches from user mode to kernel mode during the poof and then back when the OS executes an RTI (return from interrupt).

But of course we can structure the system better, which brings us to layered systems.

1.7.2 Layered Systems

Some systems have more layers and are more strictly structured.

An early layered system was THE operating system by Dijkstra and his students at Technische Hogeschool Eindhoven. This was a simple batch system, so the operator was the user.

  5. The operator process
  4. User programs
  3. I/O management
  2. Operator-console communication
  1. Memory and drum management
  0. Processor allocation and multiprogramming

The layering was done by convention, i.e., there was no enforcement by hardware, and the entire OS was linked together as one program. This is true of many modern OSes as well (e.g., Linux).

The Multics system was layered in a more formal manner. The hardware provided several protection layers and the OS used them. That is, arbitrary code could not jump into or access data in a more protected layer.

1.7.3 Microkernels

The idea is to have the kernel, i.e. the portion running in supervisor mode, as small as possible and to have most of the operating system functionality provided by separate processes. The microkernel provides just enough to implement processes.

This does have advantages. For example, an error in the file server cannot corrupt memory in the process server since they have separate address spaces (they are, after all, separate processes). Confining the effect of errors makes them easier to track down. Also, an error in the Ethernet driver can corrupt or stop network communication, but it cannot crash the system as a whole.

But the microkernel approach does mean that when a (real) user process makes a system call there are more process switches. These are not free.

Related to microkernels is the idea of putting the mechanism in the kernel, but not the policy. For example, the kernel would know how to select the highest priority process and run it, but some user-mode process would assign the priorities. One could envision changing the priority scheme being a relatively minor event compared to the situation in monolithic systems where the entire kernel must be relinked and rebooted.

Microkernels Not So Different In Practice

Dennis Ritchie, the inventor of the C programming language and co-inventor, with Ken Thompson, of Unix was interviewed in February 2003. The following is from that interview.

What's your opinion on microkernels vs. monolithic?

Dennis Ritchie: They're not all that different when you actually use them. "Micro" kernels tend to be pretty large these days, and "monolithic" kernels with loadable device drivers are taking up more of the advantages claimed for microkernels.

I should note, however, that the Minix microkernel (excluding the processes) is quite small, about 4000 lines.

1.7.4 Client-Server

When implemented on one computer, a client-server OS often uses the microkernel approach in which the microkernel just handles communication between clients and servers, and the main OS functions are provided by a number of separate processes.

A distributed system can be thought of as an extension of the client server concept where the servers are remote.

Today, with plentiful memory, each machine would have all the different servers. So the only reason an OS-internal message would go to another computer is if the originating process wished to communicate with a specific process on that computer (for example, wanted to access a remote disk).

Distributed systems are becoming increasingly important for application programs. Perhaps the program needs data found only on certain machines (no one machine has all the data). For example, think of (legal, of course) file sharing programs.

Homework: 24

1.7.5 Virtual Machines

Use a hypervisor (i.e., a mode beyond supervisor, i.e., beyond a normal OS) to switch between multiple operating systems. A more modern name for a hypervisor is a Virtual Machine Monitor (VMM).


The hypervisor idea was made popular by IBM's CP/CMS (now VM/370). CMS stood for Cambridge Monitor System since it was developed at IBM's Cambridge (MA) Science Center. It was renamed, with the same acronym (an IBM specialty, cf. RAID) to Conversational Monitor System.

  • Each App/CMS runs on a virtual 370.
  • CMS is a single user OS.
  • A system call in an App (application) traps to the corresponding CMS.
  • CMS believes it is running on the actual hardware so issues I/O instructions but ...
  • ... I/O instructions in CMS trap to VM/370.
  • This idea is still used but the guest OS is now normally a full-featured operating system rather than a simple system like CMS. For example, the newest IBM systems can run multiple instances of linux as well as multiple instances of traditional IBM Operating Systems on a single hardware platform.

Virtual Machines Rediscovered

Recently, virtual machine technology has moved to machines (notably x86) that are not fully virtualizable. Recall that when CMS executed a privileged instruction, the hardware trapped to the real operating system. On x86, privileged instructions are ignored when executed in user mode, so running the guest OS in user mode won't work. Bye bye (traditional) hypervisor. But a new style emerged where the hypervisor runs, not on the hardware, but on the host operating system. See the text for a sketch of how this (and another idea, paravirtualization) works. An important academic advance was Disco from Stanford, which led to the successful commercial product VMware.

Sanity Restored

Both AMD and Intel have extended the x86 architecture to better support virtualization. The newest processors produced today (2008) by both companies now support an additional (higher) privilege mode for the VMM. The guest OS now runs in the old privileged mode (for which it was designed) and the hypervisor/VMM runs in the new higher privileged mode from which it is able to monitor the usage of hardware resources by the guest operating system(s).

The Java Virtual Machine

The idea is that a new (rather simple) computer architecture called the Java Virtual Machine (JVM) was invented but not built (in hardware). Instead, interpreters for this architecture are implemented in software on many different hardware platforms. Each interpreter is also called a JVM. The java compiler transforms java into instructions for this new architecture, which then can be interpreted on any machine for which a JVM exists.

This has portability as well as security advantages, but at a cost in performance.

Of course java can also be compiled to native code for a particular hardware architecture and other languages can be compiled into instructions for a software-implemented virtual machine (e.g., pascal with its p-code).

1.7.6 Exokernels

Similar to VM/CMS but the virtual machines have disjoint resources (e.g., distinct disk blocks) so less remapping is needed.

1.8 The World According to C

1.8.1 The C Language

Assumed knowledge.

1.8.2 Header Files

Assumed knowledge.

1.8.3 Large Programming Projects

Mostly assumed knowledge. Linkers are very briefly discussed. Our earlier discussion was much more detailed.

1.8.4 The Model of Run Time

Extremely brief treatment with only a few points made about the running of the operating system itself.

  • The text (code) segment is normally read only.
  • The stack is initially of size zero; it grows and shrinks as functions are called and return.
  • The data segment is initially not empty, some of it initialized. It can grow under program control and perhaps can shrink.

1.9 Research on Operating Systems


1.10 Outline of the Rest of this Book


1.11 Metric Units

Assumed knowledge. Note that what is covered is just the prefixes, i.e. the names and abbreviations for various powers of 10.

1.12 Summary

Skipped, but you should read and be sure you understand it (about 2/3 of a page).

Chapter 2 Process and Thread Management

Tanenbaum's chapter title is "Processes and Threads". I prefer to add the word management. The subject matter is processes, threads, scheduling, interrupt handling, and IPC (InterProcess Communication—and Coordination).

2.1 Processes

Definition: A process is a program in execution.

  • We are assuming a multiprogramming OS that can switch from one process to another.
  • Sometimes this is called pseudoparallelism since one has the illusion of a parallel processor.
  • The other possibility is real parallelism in which two or more processes are actually running at once because the computer system is a parallel processor, i.e., has more than one processor.
  • We do not study real parallelism (parallel processing, distributed systems, multiprocessors, etc) in this course.

2.1.1 The Process Model

Even though in actuality there are many processes running at once, the OS gives each process the illusion that it is running alone.

  • Virtual time: The time used by just this process. Virtual time progresses at a rate independent of other processes. (Actually, this is false: virtual time is typically incremented a little during the system calls used for process switching, so if there are other processes, some of that switching overhead shows up in the process's virtual time.)

  • Virtual memory: The memory as viewed by the process. Each process typically believes it has a contiguous chunk of memory starting at location zero. Of course this can't be true of all processes (or they would be using the same memory) and in modern systems it is actually true of no processes (the memory assigned to a single process is not contiguous and does not include location zero).

    Think of the individual modules that are input to the linker. Each numbers its addresses from zero; the linker eventually translates these relative addresses into absolute addresses. That is, the linker provides to the assembler a virtual memory in which addresses start at zero.

Virtual time and virtual memory are examples of abstractions provided by the operating system to the user processes so that the latter experiences a more pleasant virtual machine than actually exists.

2.1.2 Process Creation

From the users' or external viewpoint there are several mechanisms for creating a process.

  1. System initialization, including daemon (see below) processes.
  2. Execution of a process creation system call by a running process.
  3. A user request to create a new process.
  4. Initiation of a batch job.

But looked at internally, from the system's viewpoint, the second method dominates. Indeed in early Unix only one process is created at system initialization (the process is called init); all the others are descendants of this first process.

Why have init? That is, why not have all processes created via method 2?
Ans: Because without init there would be no running process to create any others.

Definition of Daemon

Many systems have processes lurking around to perform tasks when they are needed. I was pretty sure the terminology was related to mythology, but didn't have a reference until a student found one at http://developer.syndetic.org/query_jargon.pl?term=demon

daemon: /day'mn/ or /dee'mn/ n. [from the mythological meaning, later rationalized as the acronym `Disk And Execution MONitor'] A program that is not invoked explicitly, but lies dormant waiting for some condition(s) to occur. The idea is that the perpetrator of the condition need not be aware that a daemon is lurking (though often a program will commit an action only because it knows that it will implicitly invoke a daemon). For example, under {ITS}, writing a file on the LPT spooler's directory would invoke the spooling daemon, which would then print the file. The advantage is that programs wanting (in this example) files printed need neither compete for access to nor understand any idiosyncrasies of the LPT. They simply enter their implicit requests and let the daemon decide what to do with them. Daemons are usually spawned automatically by the system, and may either live forever or be regenerated at intervals. Daemon and demon are often used interchangeably, but seem to have distinct connotations. The term `daemon' was introduced to computing by CTSS people (who pronounced it /dee'mon/) and used it to refer to what ITS called a dragon; the prototype was a program called DAEMON that automatically made tape backups of the file system. Although the meaning and the pronunciation have drifted, we think this glossary reflects current (2000) usage.

As is often the case, wikipedia.org proved useful. Here is the first paragraph of a more thorough entry. The wikipedia also has entries for other uses of daemon.

In Unix and other computer multitasking operating systems, a daemon is a computer program that runs in the background, rather than under the direct control of a user; they are usually instantiated as processes. Typically daemons have names that end with the letter "d"; for example, syslogd is the daemon which handles the system log.

2.1.3 Process Termination

Again from the outside there appear to be several termination mechanisms.

  1. Normal exit (voluntary).
  2. Error exit (voluntary).
  3. Fatal error (involuntary).
  4. Killed by another process (involuntary).

And again, internally the situation is simpler. In Unix terminology, there are two system calls, kill and exit, that are used. Kill (poorly named in my view) sends a signal to another process. If this signal is not caught (via the signal system call) the process is terminated. There is also an uncatchable signal. Exit is used for self termination and can indicate success or failure.

2.1.4 Process Hierarchies

Modern general purpose operating systems permit a user to create and destroy processes.

  • In unix this is done by the fork system call, which creates a child process, and the exit system call, which terminates the current process.
  • After a fork both parent and child keep running (indeed they have the same program text) and each can fork off other processes.
  • A process tree results. The root of the tree is a special process created by the OS during startup.
  • A process can choose to wait for children to terminate. For example, if C issued a wait() system call it would block until G finished.

Old or primitive operating systems like MS-DOS are not fully multiprogrammed, so when one process starts another, the first process is automatically blocked and waits until the second is finished. This implies that the process tree degenerates into a line.

2.1.5 Process States and Transitions

The diagram on the right contains much information. I often include it on exams.

  • Consider a running process P that issues an I/O request
    • The process blocks
    • At some later point, a disk interrupt occurs and the driver detects that P's request is satisfied.
    • P is unblocked, i.e. is moved from blocked to ready
    • At some later time the operating system scheduler looks for a ready job to run and picks P.
  • A preemptive scheduler has the dotted line preempt;
    A non-preemptive scheduler doesn't.
  • The number of processes changes only for two arcs: create and terminate.
  • Suspend and resume are medium term scheduling
    • Done on a longer time scale.
    • Involves memory management as well. As a result we study it later.
    • Sometimes called two level scheduling.

Homework: 1.

One can organize an OS around the scheduler.

  • Write a minimal kernel (a micro-kernel) consisting of the scheduler, interrupt handlers, and IPC (interprocess communication).

  • The rest of the OS consists of kernel processes (e.g. memory, filesystem) that act as servers for the user processes (which of course act as clients).

  • The system processes also act as clients (of other system processes).

  • The above is called the client-server model and is one Tanenbaum likes. His operating system works this way.

  • Indeed, there was reason to believe that the client-server model would dominate OS design. But that hasn't happened.

  • Such an OS is sometimes called server based.

  • Systems like traditional unix or linux would then be called self-service since the user process serves itself. That is, the user process switches to kernel mode (via the TRAP instruction) and performs the system call itself without transferring control to another process.

2.1.6 Implementation of Processes

The OS organizes the data about each process in a table naturally called the process table. Each entry in this table is called a process table entry or process control block (PCB).

I have often referred to a process table entry as a PTE, but this is bad since I also use PTE for Page Table Entry. Because the latter usage is very common, I must stop using PTE to abbreviate the former. Please correct me if I slip up.

Characteristics of the process table.

  • One entry per process.
  • The central data structure for process management.
  • A process state transition (e.g., moving from blocked to ready) is reflected by a change in the value of one or more fields in the PCB.
  • We have converted an active entity (process) into a data structure (PCB). Finkel calls this the level principle.
  • The PCB contains a great deal of information about the process. For example,
    • Saved value of registers including the program counter (i.e., the address of the next instruction) when the process is not running.
    • Stack pointer
    • CPU time used
    • Process id (PID)
    • Process id of parent (PPID)
    • User id (uid and euid)
    • Group id (gid and egid)
    • Pointer to text segment (memory for the program text)
    • Pointer to data segment
    • Pointer to stack segment
    • UMASK (default permissions for new files)
    • Current working directory
    • Many others

2.1.6A: An Addendum on Interrupts

This should be compared with the addenda on transfer of control and trap.

In a well defined location in memory (specified by the hardware) the OS stores an interrupt vector, which contains the address of the interrupt handler.

  • Tanenbaum calls the interrupt handler the interrupt service routine.
  • Actually one can have different priorities of interrupts and the interrupt vector then contains one pointer for each level. This is why it is called a vector.

Assume a process P is running and a disk interrupt occurs for the completion of a disk read previously issued by process Q, which is currently blocked. Note that disk interrupts are unlikely to be for the currently running process (because the process that initiated the disk access is likely blocked).

Actions by P Just Prior to the Interrupt:

  1. Who knows??
    This is the difficulty of debugging code depending on interrupts, the interrupt can occur (almost) anywhere. Thus, we do not know what happened just before the interrupt. Indeed, we do not even know which process P will be running when the interrupt does occur.
    We cannot (even for one specific execution) point to an instruction and say "the interrupt occurred right after this instruction executed".

Executing the interrupt itself:

  1. The hardware saves the program counter and some other registers (or switches to using another set of registers, the exact mechanism is machine dependent).

  2. The hardware loads new program counter from the interrupt vector.
    • Loading the program counter causes a jump.
    • Steps 2 and 3 are similar to a procedure call. But the interrupt is asynchronous.

  3. As with a trap, the hardware automatically switches the system into privileged mode. (It might have been in supervisor mode already. That is, an interrupt can occur in supervisor or user mode.)

Actions by the interrupt handler (et al) upon being activated

  1. An assembly language routine saves registers.

  2. The assembly routine sets up a new stack. (These last two steps are often called setting up the C environment.)

  3. The assembly routine calls a procedure in a high level language, often the C language (Tanenbaum forgot this step).

  4. The C procedure does the real work.
    • Determines what caused the interrupt (in this case a disk completed an I/O).
    • How does it figure out the cause?
      • It might know the priority of the interrupt being activated.
      • The controller might write information in memory before the interrupt.
      • The OS might read registers in the controller.
    • Mark process Q as ready to run.
      • That is move Q to the ready list (note that again we are viewing Q as a data structure).
      • Q is now in ready state; it was in the blocked state before.
      • The code that Q needs to run initially is likely to be OS code. For example, the data just read is probably now in kernel space and Q needs to copy it into user space.
    • Now we have at least two processes ready to run, namely P and Q. There may be arbitrarily many others.

  5. The scheduler decides which process to run, P or Q or something else. (This very loosely corresponds to g calling other procedures in the simple f calls g case we discussed previously). Eventually the scheduler decides to run P.

Actions by P when control returns

  1. The C procedure (that did the real work in the interrupt processing) continues and returns to the assembly code.

  2. Assembly language restores P's state (e.g., registers) and starts P at the point it was when the interrupt occurred.

Properties of interrupts

  • Phew.
  • Unpredictable (to an extent). We cannot tell what was executed just before the interrupt occurred. That is, the control transfer is asynchronous; it is difficult to ensure that everything is always prepared for the transfer.
  • The user code is unaware of the difficulty and cannot (easily) detect that it occurred. This is another example of the OS presenting the user with a virtual machine environment that is more pleasant than reality (in this case synchronous rather than asynchronous behavior).
  • Interrupts can also occur when the OS itself is executing. This can cause difficulties since both the main line code and the interrupt handling code are from the same program, namely the OS, and hence might well be using the same variables. We will soon see how this can cause great problems even in what appear to be trivial cases.
  • The interprocess control transfer is neither stack-like nor queue-like. That is if first P was running, then Q was running, then R was running, then S was running, the next process to be run might be any of P, Q, or R (or some other process).
  • The system might have been in user-mode or supervisor mode when the interrupt occurred. The interrupt processing starts in supervisor mode.

The OS Running a User Process

In traditional Unix and Linux, if an interrupt occurs while a user process with PID=P is running, the system switches to kernel mode and OS code is executed, but the PID is still P. The owner of process P is charged for this execution. Try running the time program on one of the Unix systems and noting the output.

2.1.7 Modeling Multiprogramming (Crudely)

Consider a job that is unable to compute (i.e., it is waiting for I/O) a fraction p of the time.

  • With monoprogramming, the CPU utilization is 1-p.
  • Note that p is often > .5, so CPU utilization is poor.
  • But, if n jobs are in memory, then the probability that all n are waiting for I/O is approximately p^n. So, with a multiprogramming level (MPL) of n, the CPU utilization is approximately 1-p^n.
  • If p=.5 and n=4, then the utilization 1-p^n=15/16 is much better than the monoprogramming (n=1) utilization of 1/2.

There are at least two causes of inaccuracy in the above modeling procedure.

  • Some CPU time is spent by the OS in switching from one process to another. So the "useful utilization", i.e. the proportion of time the CPU is executing user code, is lower than predicted.
  • The model assumes that the probability that one process is waiting for I/O is independent of the probability that another process is waiting for I/O. This assumption was used when we asserted that the probability all n jobs are waiting for I/O is .

Nonetheless, it is correct that increasing MPL does increase CPU utilization up to a point.

An important limitation is memory. That is, we assumed that we have many jobs loaded at once, which means we must have enough memory for them. There are other memory-related issues as well and we will discuss them later in the course.

Homework: 5.

2.2 Threads

Per process items              Per thread items
-----------------              ----------------
Address space                  Program counter
Global variables               Machine registers
Open files                     Stack
Child processes
Pending alarms
Signals and signal handlers
Accounting information
The idea behind threads is to have separate threads of control (hence the name) running in the address space of a single process, as shown in the diagram to the right. An address space is a memory management concept. For now think of an address space as the memory in which a process runs. (In reality it also includes the mapping from virtual addresses, i.e., addresses in the program, to physical addresses, i.e., addresses in the machine.) The table above shows which properties are common to all threads in a given process and which properties are thread specific.

Each thread is somewhat like a process (e.g., it shares the processor with other threads) but a thread contains less state than a process (e.g., the address space belongs to the process in which the thread runs.)

2.2.1 Thread Usage

Often, when a process P executing an application is blocked (say for I/O), there is still computation that can be done for the application. Another process can't do this computation since it doesn't have access to P's memory. But two threads in the same process do share memory so that problem doesn't occur.

An important modern example is a multithreaded web server. Each thread is responding to a single WWW connection. While one thread is blocked on I/O, another thread can be processing another WWW connection.
Question: Why not use separate processes, i.e., what is the shared memory?
Answer: The cache of frequently referenced pages.

A common organization for a multithreaded application is to have a dispatcher thread that fields requests and then passes each request on to an idle worker thread. Since the dispatcher and worker share memory, passing the request is very low overhead.

Another example is a producer-consumer problem (see below) in which we have 3 threads in a pipeline. One thread reads data from an I/O device into an input buffer, the second thread performs computation on the input buffer and places results in an output buffer, and the third thread outputs the data found in the output buffer. Again, while one thread is blocked the others can execute.

Really you want 2 (or more) input buffers and 2 (or more) output buffers. Otherwise the middle thread would be using all the buffers and would block both outer threads.

Question: When does each thread block?

  1. The first thread blocks while waiting for the device to supply the data. It also blocks if all input buffers for the computational thread are full.
