
System and Device Programming - Theory

Antonio Mignano
Contents

11 September 2017
17 July 2017
7 July 2017
12 September 2016
15 July 2016
1 July 2016
25 June 2015
15 July 2014



11 September 2017

Windows

1. Explain the role of semaphores and mutexes, as thread synchronization primitives […]

Solution:

Mutexes can be named, are referenced through HANDLEs, and can be used for interprocess synchronization; a mutex is owned by a thread rather than by a process. If two processes call CreateMutex() with the same name, only the first one actually creates it; the other one just obtains a handle to the existing object.

A semaphore has no concept of owner. A semaphore is just a counter: if it is greater than 0, the semaphore is signaled. Any thread can release a semaphore (increment its value by one or more), even a thread that has not acquired it. A semaphore also has a maximum count value that is specified at creation time.

For both mutexes and semaphores, WaitForSingleObject() is used to acquire them. To release them, ReleaseMutex() is used for a mutex and ReleaseSemaphore() for a semaphore.

Yes, a mutex is equivalent to a binary semaphore except for the characteristics described above.
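A minimal usage sketch of the calls mentioned above (object names, initial and maximum counts are illustrative assumptions, not part of the exam text):

HANDLE hMutex, hSem;
hMutex = CreateMutex(NULL, FALSE, TEXT("myMutex"));   // named: can be opened by another process
hSem = CreateSemaphore(NULL, 0, 10, TEXT("mySem"));   // initial count 0, maximum count 10

WaitForSingleObject(hMutex, INFINITE);   // acquire: the calling thread becomes the owner
// ... critical section ...
ReleaseMutex(hMutex);                    // only the owning thread may release it

WaitForSingleObject(hSem, INFINITE);     // decrement the count (blocks while it is 0)
// ... consume the resource ...
ReleaseSemaphore(hSem, 1, NULL);         // any thread may increment the count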

2. A producer and two (2) consumers interact by means of a […]

Solution:
MSG_T buf[2];
HANDLE canWrite, canRead;   // semaphores (canWrite initialized to 1, canRead to 0)
int toWrite = 0;
int toRead = 0;

void buf_write(MSG_T *msg) {
    WaitForSingleObject(canWrite, INFINITE);
    WriteMessage(buf[toWrite], msg);
    toWrite = (toWrite + 1) % 2;
    ReleaseSemaphore(canRead, 1, NULL);
}

void buf_read(MSG_T *msg) {
    WaitForSingleObject(canRead, INFINITE);
    ReadMessage(msg, buf[toRead]);
    toRead = (toRead + 1) % 2;
    ReleaseSemaphore(canWrite, 1, NULL);
}

Unix

1. List and illustrate five scenarios that the buffer cache mechanism has to deal with.

Solution:

Slides 27/28 of Week7/IO.pdf

The getblk() function tests five different conditions before returning a block:

If the block is present in its hash queue:

1) If the buffer is not locked: mark the buffer as locked, remove it from the free list and return the buffer.
2) If the buffer is locked: sleep until the buffer becomes unlocked.

If the block is not present in its hash queue:

3) If the free list is empty: sleep until a free buffer becomes available, then return it.
4) If the free list is not empty but the buffer at its head is marked as delayed write: start an asynchronous disk write of that buffer and continue with the next buffer on the free list.
5) If the free list is not empty and the buffer is not marked as delayed write: return that buffer, which can now be used.

2. List the sequence of disk block accessed, and detail the data information […]

Solution:

ls –l “/usr/local/bin”

The kernel starts from the inode number of the root directory “/” (a known, fixed number; the location of the inode list is found through the superblock), reads the corresponding inode from the inode list, and then accesses the directory file in the data blocks to search for the entry “usr” and its inode number. It then accesses the “usr” inode in the inode list, opens the corresponding directory file in the data blocks looking for “local”, and so on down to “bin”.

The directory file of “bin” contains all the entries of that directory; ls -l then accesses the inodes of the files and subdirectories inside “bin” to print their attributes.

3. List the differences between the canonical, raw, and cbreak terminal […]

Solution:

Canonical: the line discipline performs all the required line editing before passing the line to the process. Character sequences related to “interrupt” or “quit” are translated into the corresponding signals instead of being passed to the process.

Raw: as the name suggests, no line editing is performed. The sequences for “interrupt” or “quit” are passed to the process as normal characters.

Cbreak: similar to raw mode, but the “interrupt” and “quit” control characters and the modem flow-control characters are still handled specially and stripped from the input stream.

Every process that owns a terminal can switch it to another mode.

The traditional programmatic interface for querying and modifying all of these modes and control characters was the ioctl() system call. The POSIX standard defines a new structure, used by all terminal library calls (tcgetattr(), tcsetattr(), ...):

struct termios {
    tcflag_t c_iflag;   // input modes
    tcflag_t c_oflag;   // output modes
    tcflag_t c_cflag;   // control modes
    tcflag_t c_lflag;   // local modes
    cc_t c_cc[NCCS];    // control characters
};
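A minimal sketch of how a process could switch the terminal to a raw-like mode and restore it afterwards (the flag combination shown is one possible choice, not the exam's reference solution):

#include <termios.h>
#include <unistd.h>

struct termios saved;

void set_raw_mode(int fd) {
    struct termios t;
    tcgetattr(fd, &saved);                  // save the current settings
    t = saved;
    t.c_lflag &= ~(ICANON | ECHO | ISIG);   // no line editing, no echo, no signal generation
    t.c_cc[VMIN] = 1;                       // read() returns after 1 byte
    t.c_cc[VTIME] = 0;                      // no inter-byte timeout
    tcsetattr(fd, TCSANOW, &t);
}

void restore_mode(int fd) {
    tcsetattr(fd, TCSANOW, &saved);         // put back the saved (e.g. canonical) settings
}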



17 July 2017
No theory questions found.

7 July 2017
Unix

1. Using counters and semaphores, write the pseudo-code synchronization prologue and epilogue of the four functions represented in the precedence graph. Notice that the nodes represent functions, not processes. The graph imposes precedence on the function usage by an unknown set of processes. The template of the functions is:

Fi: { synch_prologue; body; synch_epilogue; }

[…]

Solution:

F1: wait(s1); body; signal(s2);
F2: wait(s2); body; signal(s4);
F3: wait(s2); body; signal(s4);
F4: wait(s4); body; signal(s1);

2. Given this sequence of system calls in a Linux C language program:

fd = open(“f123”, O_RDWR | O_CREAT, 0777);
lseek(fd, 0x1000000L, 0);
write(fd, &fd, sizeof(int));

draw the file system data structure and disk blocks that are modified by these operations, considering a 4 Kbytes block size, and index block pointers of 4 bytes.

Solution:

If the file does not exist, it is created with permission 0777. An entry is created in the user file descriptor table, which points to an entry in the file table and from there to the inode table.

lseek moves the file offset to 0x1000000L bytes from the start of the file; this position may be beyond the end of the file. write then stores the value of fd (sizeof(int) bytes) at that offset.

If the write offset does not correspond to an already allocated block, the kernel allocates a new block and updates the inode pointer structure (with an offset of 0x1000000 = 16 MB, a 4 KB block size and 4-byte pointers, the target block index is 4096, beyond the range covered by the direct and single indirect pointers, so the double indirect block is involved). If instead the kernel has to write only part of an existing block, it must first read the block from disk.

3. What is the meaning of each instruction in this C language sequence?

1 - asm volatile(“mov %0, %%cr3”:: “r”(&x));
2 - asm volatile(“mov %%cr0, %0”: “=r”(cr0));
3 - cr0 |= 0x80000000;
4 - asm volatile(“mov %0, %%cr0”:: “r”(cr0));

Solution:

1 – Loads CR3, the register that holds the base (physical) address of the page directory used to translate linear addresses into physical ones. The variable x contains the page directory, so &x is the address of its first entry.
2 – Loads the current value of CR0 into the variable cr0.
3 – Sets the highest bit (PG, paging enable) of that value to 1.
4 – Writes the value back into register CR0.

In this way paging is enabled, and the CPU starts using the page directory pointed to by CR3 for translating addresses.

Windows

1. Given two processes that share data using memory mapping (by functions createFileMapping and
MapViewofFile) […]

Solution:

• The MMF view address will almost certainly be different the next time the file is mapped or a new view of the same region is created. For this reason, pointers stored in the file should be based, i.e. expressed as offsets relative to the view address, rather than absolute.
• If the file is smaller than the available address space (e.g. less than 2 GB on a 32-bit system) it can be mapped entirely and sorted in place. If it is larger, it cannot be mapped in a single view and must be processed by mapping (and sorting/merging) successive views of smaller regions.
• A memory-mapped file can be shared, so yes, multithreading can improve sorting (e.g. different threads sorting different regions of the same view).
• HEAP_NO_SERIALIZE is used when the process uses only a single thread, when each thread has its own heap that no other thread can access, or when a mutual exclusion mechanism is already provided by the program. In these cases performance improves because the default serialization of heap accesses is avoided.
• Already answered above.
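A minimal sketch of two processes sharing data through the functions named in the question (the mapping name and size are illustrative assumptions; here the mapping is backed by the paging file rather than by a real file):

// process A: create a named mapping and write into it
HANDLE hMap = CreateFileMapping(INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE,
                                0, sizeof(int), TEXT("Local\\mySharedInt"));
int *p = (int *) MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS, 0, 0, sizeof(int));
*p = 42;    // visible to any process that opens a view of the same mapping

// process B: open the same mapping by name and read the shared value
HANDLE hMap2 = OpenFileMapping(FILE_MAP_ALL_ACCESS, FALSE, TEXT("Local\\mySharedInt"));
int *q = (int *) MapViewOfFile(hMap2, FILE_MAP_ALL_ACCESS, 0, 0, sizeof(int));
// *q == 42 (the two views refer to the same physical pages)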

2. Explain the behavior of WaitForMultipleObject in WIN32. In particular, explain the impact, […]

Solution:

When more than one object (process, thread, event, ...) has to be waited for, it is possible to use this function:

WaitForMultipleObjects(DWORD nCount, LPHANDLE lpHandles, BOOL bWaitAll, DWORD dwMilliseconds)

It can be blocking or not depending on the value of dwMilliseconds, which specifies how long the call will wait before returning if the object(s) is(are) not ready (INFINITE makes it fully blocking).

The maximum number of objects that can be waited for in a single call is MAXIMUM_WAIT_OBJECTS. The bWaitAll parameter specifies whether to wait for all the objects or just for the first one that becomes ready. In the latter case the function returns WAIT_OBJECT_0 + n, where n is the index, inside the array, of the object that is ready.
#define N 200
HANDLE totHandles[N];        // handles of the N running threads
THREAD_DATA data[N];
int i, active, completedThread;

for (i = 0; i < N; i++) {
    active = N - i;          // handles still being waited for (kept at the front of the array)
    completedThread = WaitForMultipleObjects(min(MAXIMUM_WAIT_OBJECTS, active),
                                             totHandles, FALSE, INFINITE);
    completedThread -= WAIT_OBJECT_0;

    processResult(&data[completedThread]);
    CloseHandle(totHandles[completedThread]);

    // move the last active handle (and its data) into the freed slot, so that
    // the first 'active-1' entries always refer to still-running threads
    totHandles[completedThread] = totHandles[active - 1];
    data[completedThread] = data[active - 1];
}



12 September 2016
Unix

1. How is organized the list of free blocks in a Unix filesystem? What is the function of an index block in the
Unix filesystem, what happens to an index block when all the data it contains have been used?
Solution:

The free block list is organized as a linked list of index blocks. Each index block contains the pointer to the next index block and some pointers to free data blocks. The first index block is stored inside the superblock of the partition. When a data block is requested, a pointer to a free data block is taken from (and removed from) the index block in the superblock. When the last data block indexed in the superblock is allocated, that block is returned and the superblock is refilled with the contents of the next index block of the list, so the pointer to the next index block is also updated.

2. Explain the meaning of the following command line, detailing the function of each argument:
qemu -serial mon:stdio -hda fs.img xv6.img -smp 2 -m 512 -S -gdb tcp::2600
Solution:

The command calls qemu, a machine emulator, and the parameters are the following:

- serial mon:stdio specifies to duplicate the stdin/stdout/stderr of the virtual machine in the current
terminal window, to enable the user to send commands and read output not only in the emulator
window, but also on the terminal window
- hda fs.img xv6.img specifies the image files to attach to the virtual machine
- smp 2 specifies the number of virtual processors to the emulator
- m 512 specifies the amount of RAM in MB that the machine will be able to see
- S the machine is launched in suspended state, waiting for a debugger to connect and send continue/next
commands
- gdb tcp::2600 specifies that qemu will accept debugger connections on tcp port 2600

Windows

1. Explain, within the framework of win32 events, the difference between pulse and set behaviour. How is it
possible to implement a semaphore using a counter variable and an event? Describe the kind of event used,
and how the two semaphore operations can be implemented (code is allowed but not required).
Solution:
SetEvent sets the state of the event to signaled, while PulseEvent sets the state to signaled and resets it after the waiting threads have been released.

SetEvent + auto-reset: exactly one waiting thread is released; if no thread is waiting, the first one that arrives will be released (the event stays signaled until then).
SetEvent + manual-reset: all waiting threads are released, and threads arriving later are also released, until someone calls ResetEvent.
PulseEvent + auto-reset: exactly one waiting thread is released; if no thread is waiting, none will be released (threads arriving later block).
PulseEvent + manual-reset: all the threads currently waiting are released; threads arriving later will block.
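The question also asks how a semaphore can be implemented with a counter variable and an event. A minimal sketch of one possible scheme (not the official solution; it additionally uses a CRITICAL_SECTION to protect the counter, together with an auto-reset event):

typedef struct {
    CRITICAL_SECTION cs;   // protects count
    LONG count;
    HANDLE event;          // auto-reset, initially non-signaled
} MY_SEM;

void my_sem_init(MY_SEM *s, LONG initial) {
    InitializeCriticalSection(&s->cs);
    s->count = initial;
    s->event = CreateEvent(NULL, FALSE, FALSE, NULL);
}

void my_sem_wait(MY_SEM *s) {            // P()
    EnterCriticalSection(&s->cs);
    while (s->count == 0) {
        LeaveCriticalSection(&s->cs);
        WaitForSingleObject(s->event, INFINITE);   // sleep until a signal arrives
        EnterCriticalSection(&s->cs);
    }
    s->count--;
    LeaveCriticalSection(&s->cs);
}

void my_sem_signal(MY_SEM *s) {          // V()
    EnterCriticalSection(&s->cs);
    s->count++;
    LeaveCriticalSection(&s->cs);
    SetEvent(s->event);                  // wake (at most) one waiting thread
}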

2. Which are the roles of file pointers and of the overlapped structures in direct file access on WIN32 systems?
Briefly describe common aspects and differences. The prototype of function ReadFile is:
BOOL ReadFile (HANDLE hFile, LPVOID lpBuffer, DWORD nNumberOfBytesToRead, LPDWORD
lpNumberOfBytesRead, LPOVERLAPPED lpOverlapped)



How can we handle pointers to large files? What is a LARGE_INTEGER type in Win32? How can we increment
a LARGE_INTEGER variable by 1? Given a file of size 40 GB, containing records of fixed size 256 Bytes, how
can we read record n. 2^27 (records are numbered starting from 0)?

Provide the solution with file pointers and with overlapped structure.
Solution:
File pointers and overlapped structures are used to perform direct (random-access) I/O operations on a file. Every file HANDLE stores the offset (file pointer) at which the next I/O operation will be performed.
The difference is that the file pointer, once set to the desired position, advances automatically after each operation, while with an overlapped structure the offset must be updated manually.

The overlapped structure is passed as a parameter to ReadFile or WriteFile; the system then ignores the file pointer and uses the position specified in the Offset and OffsetHigh fields of the overlapped structure.
Asynchronous I/O requires an overlapped structure for direct I/O; furthermore, it is possible to perform more than one asynchronous operation at a time, since the overlapped structure does not update any global information.

Pointers into large files (more than 4 GB) require 64 bits. A large file pointer can be realized by combining two 32-bit DWORD variables. These two values can be passed to SetFilePointer, or combined in one data structure and passed to SetFilePointerEx.

This data structure is LARGE_INTEGER (or ULARGE_INTEGER). These two types are a union of a 64-bit field (QuadPart) and a structure containing two 32-bit fields (LowPart and HighPart). Since it is a union, QuadPart can be set directly and LowPart and HighPart will automatically contain the proper values.

A large integer can be incremented with:


LARGE_INTEGER li = …;
li.QuadPart++;

Given a file of 40 GB, containing records of fixed size 256 bytes, to read record n. 2^27 two strategies can be applied:

HANDLE hFile;
DWORD nRead;
RECORD record;        // sizeof(RECORD) = 256
LARGE_INTEGER li;
li.QuadPart = 256LL << 27;   // 64-bit arithmetic: record size * record number

// file pointers
SetFilePointerEx(hFile, li, NULL, FILE_BEGIN);
// or: SetFilePointer(hFile, li.LowPart, &li.HighPart, FILE_BEGIN);
ReadFile(hFile, &record, sizeof(record), &nRead, NULL);

// overlapped
OVERLAPPED ov = {0, 0, 0, 0, NULL};
ov.Offset = li.LowPart;
ov.OffsetHigh = li.HighPart;
ReadFile(hFile, &record, sizeof(RECORD), &nRead, &ov);

3. In the following code section, se is a global semaphore and TraverseAndSaveList is a thread function.
HANDLE se;
static void TraverseAndSaveList (LPTSTR fullPath, LPTSTR dirName, HANDLE fileHandle);

TraverseAndSaveList recursively visits the directory named dirName and it stores all encountered entry
names in the file referenced by fileHandle.
The variable fullPath represents the full file system path to dirName. If, for instance, the target directory is
a/b/c, fullPath is a/b and dirName is c.
As TraverseAndSaveList is a thread (concurrent) function, each write operation on the fileHandle file has to
be propery synchronized through the se semaphore. The format of the file fileHandle is free.
Write function TraverseAndSaveList to perform the requested operations.

Solution:
HANDLE se;
static void TraverseAndSaveList (LPTSTR fullPath, LPTSTR dirName, HANDLE fileHandle) {
    DWORD fType, nWritten;
    HANDLE hSearch;
    WIN32_FIND_DATA wfd;
    TCHAR searchPath[MAX_PATH], newPath[MAX_PATH];
    TCHAR fileName[MAX_PATH];

    _stprintf(searchPath, _T("%s\\%s\\*"), fullPath, dirName);
    hSearch = FindFirstFile(searchPath, &wfd);
    if (hSearch == INVALID_HANDLE_VALUE)
        return;
    do {
        _stprintf(newPath, _T("%s\\%s"), fullPath, dirName);
        fType = FileType(&wfd);
        if (fType == TYPE_DIR) {
            // skip "." and ".." to avoid infinite recursion
            if (_tcscmp(wfd.cFileName, _T(".")) != 0 && _tcscmp(wfd.cFileName, _T("..")) != 0)
                TraverseAndSaveList(newPath, wfd.cFileName, fileHandle);
        } else if (fType == TYPE_FILE) {
            memset(fileName, 0, sizeof(TCHAR) * MAX_PATH);
            _stprintf(fileName, _T("%s\\%s"), newPath, wfd.cFileName);
            // every write on the shared file is serialized through the semaphore
            WaitForSingleObject(se, INFINITE);
            WriteFile(fileHandle, fileName, sizeof(TCHAR) * MAX_PATH, &nWritten, NULL);
            ReleaseSemaphore(se, 1, NULL);
        }
    } while (FindNextFile(hSearch, &wfd));
    FindClose(hSearch);
}



15 July 2016
Unix

1. Which are the Unix commands for installing a new module, for listing the installed modules, for removing
modules? Standard IO functions, such as printf cannot be used in module programs, why? Which function
can be used instead? In which file does the output of a module go?
Solution:
- insmod mymodule.ko installs the new module contained in the file mymodule.ko
- lsmod lists the currently installed modules
- rmmod mymodule removes the module named mymodule

Standard I/O functions can't be used because:

- The code is executed in the memory space of the kernel, so the standard I/O library is not available there
- Modules don't have the standard descriptors (stdin, stdout, stderr) because they don't have a terminal attached
- Modules are not designed to perform I/O on a terminal

The only output a module can produce goes to the kernel log, /var/log/messages, which a module writes using the printk function. This file can be read with a normal editor or with the dmesg command. A message written with printk can also appear on the current console (a real tty, not a pts) if its priority level is high enough.
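A minimal module skeleton (a sketch, not from the original notes) showing where printk fits; it is built with the usual kernel Kbuild makefile and loaded/removed with the commands above:

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>

static int __init hello_init(void) {
    printk(KERN_INFO "hello: module loaded\n");    // visible with dmesg or in /var/log/messages
    return 0;
}

static void __exit hello_exit(void) {
    printk(KERN_INFO "hello: module removed\n");
}

module_init(hello_init);
module_exit(hello_exit);
MODULE_LICENSE("GPL");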

2. Given this reference string: 112233444412441221444322555543


Compute the page fault frequency and the mean resident set for the VMIN strategy with control parameter
t=3. The VMIN strategy is similar to the Working-set strategy, but it looks forward in time in the reference
string rather than backward as the Working-set strategy does.

TODO

Windows

1. Briefly describe similarities and differences between WIN32 critical sections and mutexes. Can critical
sections be used to provide mutual exclusion between two threads? And two processes? (motivate the
answer).
Solution:
Critical sections (CS) are user-space objects that reside in the memory (stack, heap, or static data) of a single process. Mutexes are instead kernel objects, which can be accessed only through a handle. Both are used to protect shared variables by giving mutual exclusion to sections of code that must be executed without interleaving different flows of execution. The functions used are different: with CSs we use EnterCriticalSection and LeaveCriticalSection, while with mutexes we use WaitForSingleObject and ReleaseMutex.
CSs can be used to synchronize threads belonging to the same process, because the address space is the same, so all the threads can access the same CRITICAL_SECTION variable.
CSs can't be used to synchronize different processes, because each process has its own address space.
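A minimal sketch contrasting the two APIs (the mutex name is an illustrative assumption; a named mutex also works across processes, while the CS only works inside one process):

CRITICAL_SECTION cs;    // user-space object, lives in the process memory
HANDLE hMutex;          // kernel object, referenced through a handle
LONG counter;           // shared variable to protect

void init(void) {
    InitializeCriticalSection(&cs);
    hMutex = CreateMutex(NULL, FALSE, TEXT("Local\\myCounterMutex"));
}

void incr_with_cs(void) {          // threads of the same process
    EnterCriticalSection(&cs);
    counter++;
    LeaveCriticalSection(&cs);
}

void incr_with_mutex(void) {       // also usable by threads of different processes
    WaitForSingleObject(hMutex, INFINITE);
    counter++;                     // across processes, the counter would have to live in shared memory
    ReleaseMutex(hMutex);
}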



2. Explain the role of filter expressions in try...except blocks. Why/when do the following block call the Filter
functions?
__try {
__try {
// statements
...
} __except(Filter1(GetExceptionCode()));
} __except(Filter2(GetExceptionCode()));
Are Filter1 and Filter2 system routines or user-generated functions?
What is the role of GetExceptionCode() ? Under which conditions is Filter2 called?
Is Filter2 called when Filter1 return EXCEPTION_CONTINUE_EXECUTION ? (motivate all yes/no answers)

Solution:
The filter expressions must be written to determine if the corresponding __except block must be executed.
Their return value can be:

- EXCEPTION_EXECUTE_HANDLER: __except block is executed and the search is stopped


- EXCEPTION_CONTINUE_SEARCH: the search for a filter that captures this exception continues in outer
stack frames
- EXCEPTION_CONTINUE_EXECUTION: if the exception is continuable, the execution proceeds normally

Filter1 is called if an exception is generated inside the inner __try block.

Filter2 is called if Filter1 returns EXCEPTION_CONTINUE_SEARCH, or if an exception is generated inside Filter1 itself.
Filter1 and Filter2 are user-written functions (filter expressions can also be plain values or macros).
GetExceptionCode provides the filter with the exception code; it can only be called inside the filter expression or the __except block, not from within the filter function itself.
If Filter1 returns EXCEPTION_CONTINUE_EXECUTION, Filter2 is not executed for this exception (execution resumes where the exception was raised); it may still be called later for another exception.
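A minimal self-contained sketch of a user-written filter (the filter only handles integer division by zero; anything else is passed to outer frames):

#include <windows.h>
#include <stdio.h>

static int Filter1(DWORD code) {
    if (code == EXCEPTION_INT_DIVIDE_BY_ZERO)
        return EXCEPTION_EXECUTE_HANDLER;   // run the __except block
    return EXCEPTION_CONTINUE_SEARCH;       // let an outer frame handle it
}

int main(void) {
    int a = 1, b = 0;
    __try {
        printf("%d\n", a / b);              // raises EXCEPTION_INT_DIVIDE_BY_ZERO
    } __except (Filter1(GetExceptionCode())) {
        printf("division by zero caught\n");
    }
    return 0;
}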

3. Explain the role of completion functions in extended (alertable) asynchronous IO. A given file contains a
sequence of NR fixed length records that can be described by the following C struct:
typedef struct {
data_t value;
int next;
}record;

Solution:
Completion functions execute some code when the corresponding asynchronous I/O operation completes.
Instead of waiting for the end of the operation synchronously, or waiting for an event to become signaled (as in normal overlapped asynchronous I/O), the programmer writes a function that will be executed on completion of the operation and manages the synchronization details there.
The file must be opened specifying the FILE_FLAG_OVERLAPPED flag. The asynchronous operation must be started by calling an extended function (ReadFileEx, WriteFileEx), and at some point the thread must enter an alertable state by calling WaitForSingleObjectEx, WaitForMultipleObjectsEx or SleepEx.

typedef struct {
data_t value;
int next;
} record; // give a name to the record struct

// global variables
DWORD start_global;
HANDLE hFile_global;
BOOL done;
DWORD count_global;
record record_global;

DWORD countCircularList (HANDLE hFile, DWORD start) {
    OVERLAPPED ov = {0, 0, 0, 0, NULL};
    LARGE_INTEGER li;

    count_global = 0;
    start_global = start;
    hFile_global = hFile;
    done = FALSE;

    li.QuadPart = (LONGLONG) start * sizeof(record);
    ov.Offset = li.LowPart;
    ov.OffsetHigh = li.HighPart;
    // ov.hEvent is not used by extended I/O; the completion routine relies on the globals
    ReadFileEx(hFile, &record_global, sizeof(record), &ov, readDone);

    while (!done) {
        SleepEx(INFINITE, TRUE);   // alertable wait: lets the completion routine run
    }
    return count_global;
}

// readDone prototype as required by the Win32 API (FileIOCompletionRoutine)
VOID WINAPI readDone(DWORD Code, DWORD nBytes, LPOVERLAPPED pOv) {
    LARGE_INTEGER li;

    count_global++;
    if (record_global.next == start_global) {
        // back to the starting record: end of the circular list
        done = TRUE;
        return;
    }
    li.QuadPart = (LONGLONG) record_global.next * sizeof(record);   // offset of the next record
    memset(pOv, 0, sizeof(OVERLAPPED));
    pOv->Offset = li.LowPart;
    pOv->OffsetHigh = li.HighPart;
    ReadFileEx(hFile_global, &record_global, sizeof(record), pOv, readDone);
    return;
}



1 July 2016
Unix

1. Write the sequence of instructions that allow the bash process to interpret and execute the command:
p1 < f1.txt | p2
where p1 and p2 are two executable files.
Solution:
int fds[2];
pipe(fds);
if (!fork()) {                    // child (p1)
    close(0);                     // close stdin
    close(fds[0]);                // close the read end of the pipe (unused here)
    open("f1.txt", O_RDONLY);     // takes position 0 (stdin), freed above
    close(1);                     // close stdout
    dup(fds[1]);                  // the write end of the pipe takes position 1 (stdout)
    close(fds[1]);                // close the original write-end descriptor
    execlp("p1", "p1", NULL);
    exit(-1);                     // reached only if execlp fails
}
close(fds[1]);                    // the parent no longer needs the write end
if (!fork()) {                    // child (p2)
    close(0);                     // close stdin
    dup(fds[0]);                  // the read end of the pipe takes position 0 (stdin)
    close(fds[0]);                // close the original read-end descriptor
    execlp("p2", "p2", NULL);
    exit(-2);                     // reached only if execlp fails
}
close(fds[0]);
wait(NULL);
wait(NULL);                       // wait for both children

2. Describe the logical sequence of instructions that allow an Intel CPU to switch from the real to the protected
mode during booting, […]
Solution:

From real mode to protected mode:

1) Disable interrupts (including non-maskable ones)
2) Load the GDT (Global Descriptor Table) with lgdt
3) Set the lowest bit (PE) of register CR0 to 1
4) Perform a long jump in order to reload register CS

Enable paging:

1) Load register CR3 with the page directory address
2) Set the highest bit (PG) of register CR0 to 1
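A compilable sketch of these steps using GCC inline assembly (symbol names such as gdt_desc and page_dir are illustrative assumptions; this code only makes sense in boot/kernel context, and the far jump itself must be written in assembly):

static unsigned int cr0;

static void enable_protected_mode(void *gdt_desc) {
    asm volatile("cli");                            // 1. disable interrupts
    asm volatile("lgdt (%0)" :: "r"(gdt_desc));     // 2. load the GDT
    asm volatile("mov %%cr0, %0" : "=r"(cr0));
    cr0 |= 0x1;                                     // 3. set the PE bit of CR0
    asm volatile("mov %0, %%cr0" :: "r"(cr0));
    // 4. a far jump (e.g. ljmp $0x08, $start32) must follow to reload CS
}

static void enable_paging(void *page_dir) {
    asm volatile("mov %0, %%cr3" :: "r"(page_dir)); // 1. physical address of the page directory
    asm volatile("mov %%cr0, %0" : "=r"(cr0));
    cr0 |= 0x80000000;                              // 2. set the PG bit of CR0
    asm volatile("mov %0, %%cr0" :: "r"(cr0));
}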

3. Show the syntax and describe the function of the system call popen.
Solution:
Prototype: FILE *popen(const char *command, const char *type);
It creates a pipe connected to the standard input or standard output of command (which is run through the shell) and returns a stream on which the caller can either read the command's output or write its input, depending on whether type is "r" or "w".
The stream should be closed with pclose(FILE *stream), which also waits for the command to terminate.
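A minimal usage sketch (the command "ls -l" is just an example):

#include <stdio.h>

int main(void) {
    char line[256];
    FILE *fp = popen("ls -l", "r");       // run the command, read its stdout
    if (fp == NULL)
        return 1;
    while (fgets(line, sizeof(line), fp) != NULL)
        printf("%s", line);               // forward each line of output
    pclose(fp);                           // close the pipe and wait for the command
    return 0;
}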



Windows

1. Briefly explain the difference between implicit and explicit linking within the framework of WIN32 DLLs. […]
Solution:

Implicit linking requires some work during code writing and compilation.
The source that uses the DLL must apply the __declspec(dllimport) modifier to the prototypes of the functions imported from the DLL. At link time, the program must be linked with the corresponding .lib file, which contains the stubs for the functions of the DLL. At load/run time the stubs cause the library that actually contains the procedures to be loaded.

Explicit linking is different because it doesn't need extra compile-time work. When a function from the DLL is needed, the steps are the following (a minimal sketch follows the list):

- Call LoadLibrary passing the name of the DLL file. It returns an HMODULE handle.
- Call GetProcAddress passing the HMODULE and the name of the required function. It returns a FARPROC.
- Cast the FARPROC to the actual prototype of the function.
- Invoke the function.
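A minimal sketch of explicit linking (the DLL name mymath.dll and the exported function add are illustrative assumptions):

typedef int (*ADD_FUNC)(int, int);          // actual prototype of the imported function
HMODULE hDll;
ADD_FUNC myAdd;

hDll = LoadLibrary(TEXT("mymath.dll"));
if (hDll != NULL) {
    myAdd = (ADD_FUNC) GetProcAddress(hDll, "add");   // function exported as "add"
    if (myAdd != NULL)
        printf("%d\n", myAdd(2, 3));
    FreeLibrary(hDll);                      // release the library when done
}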

A DLL should not include routines that are not thread safe. We don't know which processes (and how many threads) will load our DLL, so it is better to be prepared for every possible condition by making it thread safe.

A DLL can't be used to share global variables between processes, because every process loads its own copy of the DLL data. Memory-mapped files can be used instead to share variables; the processes just have to agree on the file path or on the mapping name of the MMF.

Explicit linking can also reduce memory usage: if the DLL is very large, it can be loaded only when (and for as long as) the needed functions are actually used.

2. Given the (partial) Win32 producer/consumer code in figure, […]


Solution:
• If an exception occurs while executing MessageFill, the program evaluates the filter expression that captures the exception (in this case there is no filter function, just EXCEPTION_EXECUTE_HANDLER), then executes the termination handler (__finally), which releases the guard, and finally the __except block. So mguard is always released, unless some other event occurs, like someone calling TerminateThread.
• It is possible only by having the event signaled with PulseEvent on a manual-reset event.
• Yes, the producer would have the same behavior. The only difference is that the exception message would be printed before releasing mguard.



25 June 2015
Unix

1. Write the sequence of instructions that allow the bash process to interpret and execute the command:
p1 | p2 > f1.txt
Where p1 and p2 are two executable files.

Solution:
int myPipe[2];
pipe(myPipe);
if (!fork()) {                 // child (p1)
    close(1);                  // close stdout
    close(myPipe[0]);          // close the read end of the pipe (unused here)
    dup(myPipe[1]);            // the write end of the pipe becomes stdout
    close(myPipe[1]);
    execlp("p1", "p1", NULL);
    exit(-1);                  // reached only if execlp fails
}
if (!fork()) {                 // child (p2)
    int fd;
    close(0);                  // close stdin
    dup(myPipe[0]);            // the read end of the pipe becomes stdin
    close(myPipe[0]);
    fd = open("f1.txt", O_WRONLY | O_CREAT | O_TRUNC, 0777);
    close(1);                  // close stdout
    dup(fd);                   // f1.txt becomes stdout
    close(fd);
    close(myPipe[1]);          // close the inherited write end
    execlp("p2", "p2", NULL);
    exit(-2);                  // reached only if execlp fails
}
close(myPipe[0]);
close(myPipe[1]);
wait(NULL);
wait(NULL);                    // wait for both children

2. List the steps to create a bootdisk with a minimal Linux kernel.


Solution:

The steps necessary to create a bootdisk including a minimal Linux kernel are:
1. Create a disk image file (using dd)
2. Attach the created disk image file to a loop device e.g. /dev/loop0 (using losetup)
3. Create a new partition into the disk image file created above (using fdisk)
4. Mark the created partition to be bootable through fdisk
5. Create a new file system into the disk image file attached to the loop device /dev/loop0
6. Mount the created file system
7. Insert a GRUB configuration file into the created file system
8. Mount the file system
9. Install GRUB stage 1 and stage 2 code to the partition
10. Copy kernel's executable to the partition

Step 2 is done because it makes it easier to mount the disk image file in the later steps.



Windows

1. Explain the behaviour of WaitForSingleObject and WaitForMultipleObjects in WIN32. Are calls to the two
functions blocking? What can we wait for, with the two functions? How many, and which, different
synchronization schemes are possible through WFMO? Is it possible to use WFMO in order to wait for one
among multiple events/objects? What does constant WAIT_OBJECT_0 represent?

Given the following loop, where the tHandles array is an array of handles of running threads, and
processThreadResult works on the result produced by a thread, explain what the loop does.

/* wait thread completion 1 */


for (iThrd = 0; iThrd < N; iThrd++) {
WaitForSingleObject (tHandles[iThrd], INFINITE);
processThreadResult (tData[iThrd]);
}

Since the loop forces a given order in waiting for thread completion, write an alternative loop, based on
WaitForMultipleObjects, where thread results are processed following an order given by thread completions.

Solution:

WaitForSingleObject and WaitForMultipleObjects are two functions used to wait for events, semaphores, mutexes or threads to become signaled.

Both are blocking if the timeout parameter is INFINITE; with a finite timeout they return after at most that amount of time.

WaitForMultipleObjects has a flag (bWaitAll): when it is set to FALSE the function, which receives an array of handles to wait on, returns a value equal to WAIT_OBJECT_0 + n, where n is the index in the array of the object that became signaled; when it is TRUE the function returns only when all the objects are signaled.

The loop above blocks until each thread, in the fixed order of the array, is in the signaled state (has terminated); after each wait it calls processThreadResult. An alternative that processes results in completion order is the following:

for (iThrd = 0; iThrd < N; iThrd++) {
    completedIndex = WaitForMultipleObjects(N - iThrd, tHandles, FALSE, INFINITE) - WAIT_OBJECT_0;
    processThreadResult(tData[completedIndex]);
    CloseHandle(tHandles[completedIndex]);
    // compact the arrays so the first N-iThrd-1 entries still refer to running threads
    tHandles[completedIndex] = tHandles[N - iThrd - 1];
    tData[completedIndex] = tData[N - iThrd - 1];
}

2. Explain the main features of dynamic libraries in Win32. Motivate the main advantages of dynamic libraries vs.
static ones. Explain the difference between implicit and explicit linking. What kind of modification is required by
a program in order to become a dynamic library (answer for both implicit and explicit linking).

Solution:

Static libraries are libraries whose code is copied into the executable during the linking phase. The advantages are that there is no extra complication at build time and the result is portable among systems; the drawbacks are a large executable (in terms of file size) and the fact that, if the library changes, all the code must be relinked.

Dynamic libraries, instead, are not contained in the executable but are provided by the system, or shipped separately. The advantages are that the executable is smaller, that in case of changes the library can simply be replaced, and that parts of these DLLs can be shared by several applications. The drawbacks are a little overhead for the run-time linking and a more complex build of the final program.

DLLs can be linked in two different ways, implicitly or explicitly. The difference lies in when the code is loaded.



Implicit linking requires some work during code writing and compilation.
The source that uses the DLL must apply the __declspec(dllimport) modifier to the prototypes of the functions imported from the DLL. At link time, the program must be linked with the corresponding .lib file, which contains the stubs for the functions of the DLL. At load/run time the stubs cause the library that actually contains the procedures to be loaded.

Explicit linking is different because it doesn't need extra compile-time work. When a function from the DLL is needed, the steps are the following:

- Call LoadLibrary passing the name of the DLL file. It returns an HMODULE handle.
- Call GetProcAddress passing the HMODULE and the name of the required function. It returns a FARPROC.
- Cast the FARPROC to the actual prototype of the function.
- Invoke the function.

3. Which are the roles of files pointers and of the overlapped structures in direct file access on WIN32 systems.
Briefly describe common aspects and differences. Provide a brief example of application for both of them. How
can we increment by 100 bytes a file pointer in an overlapped structure? (provide an example) Does an
overlapped structure include an event? Is it automatically created? When is it signaled?

Solution:

File pointers and overlapped structures are used to access specific parts of an open file (direct access).

When a call to ReadFile(…) is performed, the file pointer is automatically advanced. It can be set to a particular location by calling SetFilePointer(…).

An overlapped structure, instead, is not automatically updated: it is up to the programmer to update it. Overlapped structures are also used for asynchronous I/O; in that case it is necessary to assign an event to the structure, which will be signaled when the I/O operation completes. The event must be created manually with CreateEvent(…).

Examples of usage:

File pointer:

BOOL ReadRecord (HANDLE hFile, DWORD i, RECORD_T *myRec) {
    DWORD byteRead;
    LARGE_INTEGER li;
    li.QuadPart = (LONGLONG) i * sizeof(RECORD_T);
    SetFilePointer(hFile, li.LowPart, &li.HighPart, FILE_BEGIN);   // set the file pointer
    return ReadFile(hFile, myRec, sizeof(RECORD_T), &byteRead, NULL);
}

Overlapped structure:

BOOL ReadRecord (HANDLE hFile, DWORD i, RECORD_T *myRec) {
    DWORD byteRead;
    LARGE_INTEGER li;
    OVERLAPPED ov = {0, 0, 0, 0, NULL};
    li.QuadPart = (LONGLONG) i * sizeof(RECORD_T);
    ov.Offset = li.LowPart;           // set the proper position in the overlapped structure
    ov.OffsetHigh = li.HighPart;
    return ReadFile(hFile, myRec, sizeof(RECORD_T), &byteRead, &ov);
}

Incrementing the position stored in the overlapped structure by 100 bytes:

li.QuadPart += 100;
ov.Offset = li.LowPart;
ov.OffsetHigh = li.HighPart;



15 July 2014
Windows

1. Describe events in the Windows system and their related system calls. Describe the 4 cases of event
signal/release, related to manual/auto reset and set/pulse event conditions.

b) Given a set of threads working in master-slave mode, with one master and N slaves, we want to use events for
two purposes:

- for the master, in order to wait for task completion from a slave (any slave can complete a task under execution
at a given time): events are used to communicate completion from a slave to the master

- for the master, to enable a specific slave to start a new task

How many events are necessary? 1, 2, N, N+1, 2N, 2N+1, 2N+2? (motivate your answer) Which kind of events
(manual/auto, set/pulse)? Why? (handling of tasks and data is NOT required, just propose a solution for
synchronization)

Solution:

a) Events are created with the function CreateEvent(lpEventAttributes, bManualReset, bInitialState, lpName), which returns a handle usable by functions like WaitForSingleObject or WaitForMultipleObjects. The creation function also allows specifying whether the handle is inheritable, whether the event needs a manual reset, its initial state and a name for the object.

To signal an event two functions can be used, SetEvent(…) and PulseEvent(…), and they behave differently in combination with manual/auto reset:

SetEvent + auto-reset: exactly one waiting thread is released; if no thread is waiting, the first one that arrives will be released.
SetEvent + manual-reset: all waiting threads are released, and threads arriving later are also released, until someone calls ResetEvent.
PulseEvent + auto-reset: exactly one waiting thread is released; if no thread is waiting, none will be released (threads arriving later block).
PulseEvent + manual-reset: all the threads currently waiting are released; threads arriving later will block.
To release the resources allocated for the event, CloseHandle(…) is used.

b) We need N events, one for each slave thread. Each event is used in both directions: the master pulses it to start the slave, and the slave pulses it again to signal completion back to the master:

HANDLE eventH[N];

int _tmain() {
    HANDLE threadsH[N];
    int i;
    for (i = 0; i < N; i++) {
        threadsH[i] = CreateThread(...);
        eventH[i] = CreateEvent(...);   // manual reset, initially not signaled
    }

    // activate slave thread number 3
    PulseEvent(eventH[3]);
    // wait for its completion
    WaitForSingleObject(eventH[3], INFINITE);
}

void slave(int thId) {
    while (1) {
        WaitForSingleObject(eventH[thId], INFINITE);
        // do the assigned task
        PulseEvent(eventH[thId]);
    }
}



2. Describe the main differences between static and dynamic libraries, and answer the following questions:

• What are implicit and explicit linking, within the framework of dynamic libraries?
• When are (explicitly/implicitly linked) libraries linked: at compile- load- or execution-time?
• Are DLLs required to be thread-safe (motivate the yes/no answer, and also explain what is thread-
safety)?
• Can a DLL be shared, at runtime, among processes?
o If (the answer is) yes, can it be used to share data among processes using them (e.g. to hold
a shared array for reading/writing)?
o If (the answer is) no, how can a library routine be shared (a single copy is resident in
memory) among several processes?

Solution:
Static libraries are libraries that are bound into the executable by the linker. This leads to a larger final file; moreover, if a change is made in the library it is necessary to compile and link everything again.

Dynamic libraries, instead, are not bound into the executable and are loaded at run time; this can be done in two different ways: implicit linking and explicit linking.

With implicit linking it is necessary to put, before the prototype of the function, a __declspec(dllimport) modifier (or dllexport inside the DLL) in order to import it. With implicit linking the whole DLL is loaded when the program starts.

Explicit linking is a little different. In order to use a function from a DLL, the code has to call the LoadLibrary(…) function, which returns an HMODULE that has to be passed to GetProcAddress(…) along with the name of the function to load. This function returns a FARPROC that needs to be cast to the actual prototype of the function.

DLLs should be thread safe because they may work on shared data, even if each process that loads a DLL has its own copy of the DLL data (thread safety means that the routines behave correctly when called concurrently by several threads). The DLL code can be shared at runtime among processes, but it cannot be used to share data; a way to share data among processes is to use file mapping.

3. Explain the behaviour of WaitForSingleObject and WaitForMultipleObject in WIN32. Are calls to the two
functions blocking? What can we wait for, with the two functions? How many, and which, different
synchronization schemes are possible through WFMO? Is it possible to use WFMO in order to wait for one
among multiple events/objects?
Given the two wait loops written below, explain their wait scheme. Are the calls to WFMO in loop1/loop2
waiting for completion of all, or for one, among many processes/threads? If yes, is the waiting order fixed? Is
it possible to detect which of the waited processes/threads has completed? What does constant
WAIT_OBJECT_0) represent? Why the first parameter in loop1 is min (MAXIMUM_WAIT_OBJECTS, argc - 2 -
iProc), instead of nProc? What are hProc and tHandle?
/* loop 1 */
for (iProc = 0; iProc < nProc; iProc += MAXIMUM_WAIT_OBJECTS)
WaitForMultipleObjects (min(MAXIMUM_WAIT_OBJECTS, argc-2-iProc),&hProc[iProc], TRUE,
INFINITE);
/* loop 2 */
while (ThdCnt > 0) {
ThdIdxP = WaitForMultipleObjects (ThdCnt, tHandle, FALSE,INFINITE);
iThrd = (int) ThdIdxP - (int) WAIT_OBJECT_0;
if (iThrd < 0 || iThrd >= ThdCnt)
ReportError (_T ("Thread wait error."), 5, TRUE);
GetExitCodeThread (tHandle [iThrd], &ExitCode);
CloseHandle (tHandle [iThrd]);
}
Solution:
WaitForSingleObject and WaitForMultipleObjects are two functions used to wait for threads, events, semaphores and mutexes to become signaled.
Both block (up to the specified timeout, INFINITE meaning forever); WFMO can additionally wait either for all the objects or for just one of them, depending on the bWaitAll parameter.

If bWaitAll is set to FALSE, the function returns as soon as one of the objects is signaled, and it returns WAIT_OBJECT_0 + n, where n is the index, in the array of waited objects, of the object that is signaled.

In loop 1 there are two cases:

- If nProc <= MAXIMUM_WAIT_OBJECTS, all the handles are waited for in a single WFMO call (bWaitAll is TRUE), so completion of all the processes is awaited at once.
- If nProc > MAXIMUM_WAIT_OBJECTS, the handles are waited for in groups of MAXIMUM_WAIT_OBJECTS. This is why the count is min(MAXIMUM_WAIT_OBJECTS, argc-2-iProc) instead of simply nProc: a single WFMO call cannot wait on more than MAXIMUM_WAIT_OBJECTS handles.

In loop 2 the threads are not waited for as a block: WFMO returns as soon as one of them terminates, and the returned index (after subtracting WAIT_OBJECT_0) identifies the completed thread, whose exit code is read and whose handle is closed.

WAIT_OBJECT_0 is the base value from which the indexes returned by WFMO are offset. hProc and tHandle are the arrays of process and thread handles being waited on.

Unix

1. Detail the steps performed by the system call close, with reference to all the objects and data structures
involved by its operations.
Solution:
close() closes a file descriptor, so that it no longer refers to any file and may be reused. Any record locks held on the associated file and owned by the process are removed.
If the file descriptor was the last reference to the underlying open file description, the resources associated with it are freed; if it was also the last reference to a file that had been removed with unlink, the file is deleted.
In detail, it releases:
- The entry in the user file descriptor table
- The corresponding entry in the open file table, if its reference count drops to zero
- The in-core inode obtained by the open() system call, if no other file table entry references it

2. Explain all the arguments of this command line:


qemu -hdb fs.img xv6.img -serial mon:stdio -S -gdb tcp::26000 -smp 2 -m 512
Solution:
Qemu is an emulator able to emulate different system architectures.
The parameters in the command above are:
- hdb fs.img – use fs.img as the second hard disk image
- xv6.img – image of disk 0 (the boot disk)
- serial mon:stdio – redirect the virtual serial port (multiplexed with the qemu monitor) to stdio
- S – do not start the CPU at startup (it waits for a continue command from the debugger)
- gdb tcp::26000 – accept gdb connections on TCP port 26000
- smp 2 – use 2 virtual CPUs
- m 512 – give the machine 512 MB of RAM

