
Q2. Compare Memory Allocation methods used in Android and iOS.

The Android Runtime (ART) and the Dalvik virtual machine use memory mapping and paging for memory management. Any memory an app modifies, whether by allocating objects or by touching memory-mapped pages, stays resident in RAM and cannot be paged out; the only way to release it is to release the object references the app holds. Once ART determines that a piece of memory is no longer used by the program, the garbage collector reclaims it and returns it to the heap without any programmer intervention. The one exception is files that are memory mapped without modification: their pages can be dropped from RAM and the space used elsewhere, since they can be reloaded from disk when needed.
The Dalvik heap is constrained to a single virtual memory range for each app process. This
defines the logical heap size, which can grow as it needs to but only up to a limit that the system
defines for each app. Android’s memory heap is a generational one, meaning that there are
different buckets of allocations that it tracks. Each heap generation has its own dedicated upper
limit on the amount of memory that objects there can occupy. Any time a generation starts to fill
up, the system executes a garbage collection event in an attempt to free up memory. The
duration of the garbage collection depends on which generation of objects it is collecting and
how many active objects are in each generation.
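The trigger described above, a generation filling up and causing a collection event, can be sketched in a few lines. This is a hypothetical illustration of the bookkeeping only; the names (generation_t, gen_alloc) and the "collect everything" behavior are simplifications, not ART internals.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical sketch: each heap generation has its own size limit, and
 * filling it up triggers a garbage collection event for that generation.
 * Not ART internals; illustrative names and a simplified "collect all". */
typedef struct {
    size_t used;
    size_t limit;
    int collections;          /* how many GC events this generation caused */
} generation_t;

/* Account an allocation of `size` bytes against a generation; when the
 * generation would overflow, simulate a GC event that empties it. */
int gen_alloc(generation_t *gen, size_t size) {
    if (gen->used + size > gen->limit) {
        gen->collections++;   /* GC event: reclaim everything (simplified) */
        gen->used = 0;
        if (size > gen->limit)
            return 0;         /* allocation can never fit in this generation */
    }
    gen->used += size;
    return 1;
}
```

In the real runtime a collection only frees objects with no live references, and surviving objects are promoted to an older generation; the sketch keeps only the fill-up-then-collect shape of the mechanism.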
In order to fit everything it needs in RAM, Android tries to share RAM pages across processes. It does so in three ways: through the Zygote process, by memory mapping static code, and by sharing dynamic RAM.
Each app process is forked from an existing process called Zygote. The Zygote process starts
when the system boots and loads common framework code and resources (such as activity
themes). To start a new app process, the system forks the Zygote process then loads and runs
the app's code in the new process.
Most static data is mmapped into a process. This technique allows data to be shared between processes, and also allows it to be paged out when needed. Examples of static data include Dalvik code (placed in a pre-linked .odex file for direct mmapping) and traditional project elements such as native code in .so files.
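The mmap technique above can be shown with a minimal sketch: mapping a file read-only gives the kernel clean, file-backed pages that can be shared between processes and evicted at any time, because they can always be reloaded from disk. The helper name map_readonly is illustrative.

```c
#include <assert.h>
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Minimal sketch of the mmap technique described above: map a file
 * read-only so its pages are shareable and reclaimable by the kernel. */
const char *map_readonly(const char *path, size_t *len_out) {
    int fd = open(path, O_RDONLY);
    if (fd < 0) return NULL;
    off_t len = lseek(fd, 0, SEEK_END);      /* file size */
    /* PROT_READ + MAP_PRIVATE: clean, file-backed pages the kernel may drop */
    void *p = mmap(NULL, (size_t)len, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);                /* the mapping keeps the contents accessible */
    if (p == MAP_FAILED) return NULL;
    *len_out = (size_t)len;
    return (const char *)p;
}
```

This is the same mechanism behind mapping a pre-linked .odex file or a .so: the pages are never written, so they never need to be swapped out, only discarded.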
In many places, Android shares the same dynamic RAM across processes using explicitly
allocated shared memory regions (either with ashmem or gralloc). For example, window
surfaces use shared memory between the app and screen compositor, and cursor buffers use
shared memory between the content provider and client.
Memory management in iOS centers on Automatic Reference Counting (ARC), a feature of the compiler toolchain used in Apple application development, under which it is no longer necessary to manually retain and release objects. ARC is often mistakenly assumed to be the same as the garbage collection of Android's Java runtime, but it is a compile-time feature: the compiler inserts the appropriate retain, release, and autorelease calls automatically. Quoting the Apple documentation, the two memory management issues are “Freeing or overwriting data that
is still in use. It causes memory corruption and typically results in your application crashing, or
worse, corrupted user data.
Not freeing data that is no longer in use causes memory leaks. When allocated memory is not
freed even though it is never going to be used again, it is known as memory leak. Leaks cause
your application to use ever-increasing amounts of memory, which in turn may result in poor
system performance or (in iOS) your application being terminated.”
The rules of iOS memory management are as follows. We own the objects we create, and we must subsequently release them when they are no longer needed. We use retain to take ownership of an object that we did not create, and we must release those objects too once we are done with them. But we must not release objects we do not own.
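The ownership rules above are exactly what ARC automates, and they reduce to plain reference counting. This is a hypothetical sketch in C (the names object_create, object_retain, object_release are illustrative, not the Objective-C runtime): creating takes ownership, retain adds an owner, and the object is freed when the last owner releases it.

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical sketch of the ownership rules as plain reference counting,
 * which is what ARC automates at compile time. Illustrative names only. */
typedef struct {
    int refcount;
} object_t;

object_t *object_create(void) {      /* the creator owns the object (count 1) */
    object_t *obj = malloc(sizeof *obj);
    obj->refcount = 1;
    return obj;
}

void object_retain(object_t *obj) {  /* take ownership of an object we did not create */
    obj->refcount++;
}

/* Returns 1 if this release destroyed the object, 0 otherwise. */
int object_release(object_t *obj) {
    if (--obj->refcount == 0) {
        free(obj);
        return 1;
    }
    return 0;
}
```

Both failure modes from the Apple quote map onto this directly: releasing an object you do not own corrupts memory still in use, and forgetting a release leaks it.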
When handling memory under ARC, we do not have to call retain or release ourselves. All of a view controller's objects are released when the view controller itself is removed, and similarly, any object's sub-objects are released when that object is released. However, as long as another class holds a strong reference to an object, that object cannot be released.
iOS provides memory management tools such as allocators. Allocators are opaque objects that
allocate and deallocate memory for us and there is no need to allocate, reallocate, or deallocate
memory directly for Core Foundation objects. Allocators are passed into functions that create
objects; these functions have “Create” embedded in their names, for
example, CFStringCreateWithPascalString. The creation functions use the allocators to allocate
memory for the objects they create.
The allocator is associated with the object through its life span. If reallocation of memory is
necessary, the object uses the allocator for that purpose and when the object needs to be
deallocated, the allocator is used for the object’s deallocation. The allocator is also used to
create any objects required by the originally created object. Some functions also let you pass in
allocators for special purposes, such as deallocating the memory of temporary buffers. The
Core Foundation framework allows creation of custom allocators.
Heap use is as follows. All Objective-C objects are stored in a part of memory called the heap.
An alloc message to a class allocates a chunk of memory from the heap. This chunk includes
space for the object’s instance variables.
Stack use is as follows. When a method (or function) is executed, it allocates a chunk of
memory from the stack. This chunk of memory is called a frame, and it stores the values for
variables declared inside the method. A variable declared inside a method is called a local
variable. When an application launches and runs the main function, the frame for main is put at
the bottom of the stack. When main calls another method (or function), the frame for that
method is added to the top of the stack.
Memory in iOS devices is especially limited, so the heap is also restricted; this is why the strict ownership and memory management rules above were established. With ARC, iOS can, according to some estimates, run efficiently with half as much memory as Android needs, in keeping with its limited memory. Android and Java's garbage collection, as explained above, recycles memory in a way that requires more memory headroom to perform well; once that much free memory is not available, performance suffers.
Q4. How does Windows detect and handle thrashing?
When the system has to swap pages at such a high rate that a major chunk of CPU time is spent on swapping, the state is known as thrashing. Effectively, during thrashing the CPU spends less time on actual productive work and more time on swapping.
Windows uses a paging file of variable length that may be grown in low-memory situations. Its page replacement algorithm is based on the working set concept: the set of memory pages a process needs for its execution at a given moment is called its working set. Windows also treats computer resources as objects, and it does not use a dedicated swap partition. To detect thrashing, the system should monitor CPU utilization levels alongside paging activity.
In experiments investigating Windows' behavior under numerous write operations per write cycle (25 new processes created, each allocated 100 MB, on hardware with less than 2 GB of RAM), Windows XP's reaction to memory thrashing was more aggressive than the other systems', shown by page-out peaks at the beginning and the middle of the experiment. Windows XP maintained the lowest level of CPU activity, and some consumer processes finished much later than expected. Paging activity was also the highest seen in Windows. The system took quite long to recover from the thrashing, from a user-interaction perspective and otherwise. In spite of this, its usability did not suffer nearly as much as that of the other systems in the study, except for Linux. This suggests that Windows XP's thrashing management uses a scheme based on delaying the most active processes, akin to what FreeBSD does.
System behavior under memory thrashing is still a relevant problem. Adding more memory to the system does not solve it; it merely pushes the thrashing threshold forward, making thrashing less probable to occur (under the same memory usage conditions), but does not improve system behavior during a memory thrashing episode. Furthermore, the recent trend toward multi-core CPUs increases processing power and the multiprogramming level; more processes and larger memory page sizes may lead to thrashing even faster. There is a lack of official documentation about how Windows (any version) handles thrashing. One piece of code to detect it in DirectX is given below (next page).
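Given the lack of official documentation, a plausible detection scheme can only be sketched as an assumption: sample a hard page-fault counter at fixed intervals and flag thrashing when the fault rate stays high for several consecutive samples. The window length and threshold below are illustrative values, not Windows internals.

```c
#include <assert.h>

/* Hypothetical rate-based thrashing detector: not Windows internals.
 * Flags thrashing when the hard page-fault rate stays above a threshold
 * for WINDOW consecutive samples, distinguishing a sustained storm from
 * a one-off spike in paging activity. */
#define WINDOW 4               /* consecutive over-threshold samples required */

int detect_thrashing(const long faults_per_sec[], int n, long threshold) {
    int streak = 0;
    for (int i = 0; i < n; i++) {
        streak = (faults_per_sec[i] > threshold) ? streak + 1 : 0;
        if (streak >= WINDOW)
            return 1;          /* sustained fault storm: thrashing */
    }
    return 0;
}
```

Requiring a sustained streak rather than a single sample matches the definition above: thrashing is a state the system stays in, not a momentary burst of paging.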
Q5. Write a note on the NTFS file system.
The NTFS file system is supported by Microsoft Windows on basic and dynamic disks. When
you choose the NTFS file system, the formatting process places the key NTFS file data
structures on the volume, regardless of whether it is a basic or dynamic volume. During format
and setup of a volume file system on a hard disk, a master boot record (MBR) is created. It
contains a small amount of executable code called the master boot code as well as a partition
table for the disk. When a volume is mounted, the MBR executes the master boot code and
transfers control to the boot sector on the disk, allowing the server to boot the operating system
on the file system of that specific volume. The boot sector of a bootable partition stores information about the layout of the volume and the file system structures, as well as the boot code that loads Ntldr. Ntldr switches the CPU to protected mode, starts the file system, and then reads the contents of the Boot.ini file, which determines the startup options and initial boot menu selections. Other components involved in the NTFS file system are Ntfs.sys and Ntoskrnl.exe, operating across user mode and kernel mode.
NTFS stores all objects in the file system using a record called the Master File Table (MFT),
similar in structure to a database.
On NTFS volumes, clusters start at sector zero; therefore, every cluster is aligned on a cluster boundary. Contiguous clusters allow for faster processing of a file. The default cluster size for a volume between 2 GB and 2 TB is 4 KB.
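The alignment property above can be expressed directly as arithmetic: with clusters starting at sector zero, the cluster containing any byte offset is a plain division, and every cluster starts on a cluster-size boundary. The 4096 constant is the default 4 KB cluster size just mentioned.

```c
#include <assert.h>
#include <stdint.h>

/* Cluster arithmetic for the default 4 KB cluster size mentioned above:
 * because clusters start at sector zero, offset-to-cluster mapping is a
 * plain division and every cluster start is cluster-aligned. */
#define CLUSTER_SIZE 4096u

uint64_t cluster_of(uint64_t byte_offset) {
    return byte_offset / CLUSTER_SIZE;       /* which cluster holds this byte */
}

uint64_t cluster_start(uint64_t cluster) {
    return cluster * CLUSTER_SIZE;           /* always cluster-aligned */
}
```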
The NTFS volume is organized as follows.
- NTFS boot sector: This contains the BIOS parameter block that stores information about
the layout of the volume and the file system structures, as well as the boot code.
- Master File Table: This contains the information necessary to retrieve files from the
NTFS partition, such as the attributes of a file.
- File System Data: This stores data that is not contained within the Master File Table.
- Master File Table Copy: This includes copies of the records essential for the recovery of
the file system if there is a problem with the original copy.
There is a table Boot Sector Sections on an NTFS Volume which describes the boot sector of a
volume that is formatted with NTFS. When an NTFS volume is formatted, the format program
allocates the first 16 sectors for the boot sector and the bootstrap code.
When NTFS detects a bad sector, NTFS dynamically remaps the cluster containing the bad
sector — a recovery technique called cluster remapping — and allocates a new cluster for the
data. If the error occurred during a read, NTFS returns a read error to the calling program, and
the data is lost. If the error occurs during a write, NTFS writes the data to the new cluster, and
no data is lost.
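The cluster-remapping recovery just described can be sketched as a small translation table. This is a hypothetical illustration of the idea, not NTFS's actual on-disk structures (NTFS records bad clusters in a system file): a bad cluster is paired with a freshly allocated spare, and later accesses are redirected through the table.

```c
#include <assert.h>

/* Hypothetical sketch of cluster remapping, not NTFS's real structures:
 * map each detected bad cluster to a newly allocated spare cluster so
 * subsequent accesses are redirected transparently. */
#define MAX_REMAPS 16

typedef struct {
    unsigned long bad[MAX_REMAPS];
    unsigned long spare[MAX_REMAPS];
    int           count;
    unsigned long next_spare;  /* next free spare cluster to hand out */
} remap_table_t;

/* Returns the replacement cluster, or 0 on failure (table full). */
unsigned long remap_bad_cluster(remap_table_t *t, unsigned long cluster) {
    if (t->count >= MAX_REMAPS) return 0;
    t->bad[t->count] = cluster;
    t->spare[t->count] = t->next_spare++;
    return t->spare[t->count++];
}

/* Translate a logical cluster through the remap table. */
unsigned long resolve_cluster(const remap_table_t *t, unsigned long cluster) {
    for (int i = 0; i < t->count; i++)
        if (t->bad[i] == cluster)
            return t->spare[i];
    return cluster;            /* not remapped: use as-is */
}
```

The read/write asymmetry in the text falls out of this picture: on a write, the data being written simply goes to the spare cluster; on a read, the spare is empty, so the remap saves the cluster but not the data that was in it.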
Q3. Compare any famous brand's latest model Hard Disk Drive and Solid State drive from every
angle you can think of.
An SSD (Solid-State Drive) is a nonvolatile mass storage device, just as an HDD (Hard Disk Drive) is: it supports reading and retaining stored data even without power, and it connects to a computer like a hard drive. But, as the name suggests, SSDs have no mechanical moving parts; they store data in flash memory, so no spinning or rotation is needed to access data.
An HDD (Hard Disk Drive) is a rigid, nonremovable magnetic disk with a large data storage
capacity. It uses magnetic storage to store and retrieve digital information using one or more
rapidly rotating disks (platters) coated with magnetic material.
According to a test run on a Western Digital Travelstar hard drive (750 GB, 5400 rpm) in a Toshiba laptop, with results measured by the Notebook WorldBench 8.1 benchmark suite, the score with the hard drive was 279, which was considered good performance. For comparison, the hard drive was then replaced by a 500 GB Samsung EVO solid-state drive, and the score leapt to 435, a 56% improvement.
An SSD of the same brand but with half the storage was also tested in an older Maingear tower PC, formerly fitted with a 1 TB, 7200 rpm Seagate Barracuda, with results measured by Desktop WorldBench 8.1. The score with the hard drive was 162; with the upgrade to the SSD it more than doubled, to 325. The upgrade also dramatically improved the tower's boot time, reducing it from 63 seconds to 23.
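The improvement figures above follow from simple relative-change arithmetic, (after - before) / before, expressed as a percentage; the helper below (an illustrative name, with integer rounding) reproduces them from the reported scores.

```c
#include <assert.h>

/* Relative improvement as a rounded integer percentage:
 * 100 * (after - before) / before, with half-up rounding. */
int improvement_percent(int before, int after) {
    return ((after - before) * 100 + before / 2) / before;
}
```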
For another comparison, consider Samsung's 64 GB flash SSD against a 60 GB HDD. As far as weight goes, the SSD's 47 grams compares favorably with the HDD's 60 grams.
According to tests run on a Samsung Seas NT, the boot-up time with the SSD was 00:36:29, or about 36 seconds, while that with the HDD was 01:03:00, or 1 minute and 3 seconds.
Comparing data read speed using a 25 MB PDF file, the results were 00:04:04 for the SSD and 00:12:04 for the HDD. For a 40 MB Photoshop file, the SSD yielded a read time of 00:13:24 while the HDD yielded 00:20:01.
For shutdown speed, the same laptop showed 00:13:02 with the HDD and 00:09:03 with the SSD.
In a vibration test on two identical laptops playing an HD video file, the HDD after some time reported an error and was unable to keep playing. The vibration-tolerance measurements were 73.5 for the SSD and 35.5 for the HDD.
It was noted that a higher-capacity drive would deliver better performance than a lower-capacity
model from the same family, due to the fact that higher-capacity drives have more NAND chips
and more channels for data to travel over. Modern SSD controllers use wear-leveling techniques to spread write operations evenly across all the memory cells, so warnings about prematurely wearing out SSDs through too many program/erase cycles can safely be ignored. However, SSDs do come with limited capacity and are relatively expensive on a per-gigabyte basis.
Q1. Does Linux detect CPU starvation of processes? If yes then explain how it handles
starvation
The Linux kernel contains several scheduling classes, each with its own priority. To select a new process to run, the process scheduler iterates from the highest-priority class to the lowest. If a runnable process is found in a class, the highest-priority process in that class is selected to run.
If some processes are waiting in lower-priority classes while processes keep being added to higher-priority classes, there would certainly be starvation. The Linux kernel has mechanisms to prevent this. One is the concept of aging: the time a process spends waiting is monitored, and the longer it sits in the wait queue, the more its priority is steadily increased, which eventually allows it to be serviced.
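The aging idea can be sketched as a priority boost proportional to waiting time. This is a hypothetical model, not the kernel's actual formula: larger numbers mean higher priority here, and the boost rate (one level per ten ticks waited) is an illustrative value.

```c
#include <assert.h>

/* Hypothetical sketch of aging, not the kernel's actual formula: a
 * waiting process's effective priority rises with its wait time, so
 * even low-priority work is eventually serviced. Higher number = higher
 * priority; the boost rate is an illustrative value. */
typedef struct {
    int base_priority;
    int wait_ticks;            /* how long this process has been waiting */
} proc_t;

int effective_priority(const proc_t *p) {
    return p->base_priority + p->wait_ticks / 10;  /* +1 level per 10 ticks */
}
```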
Linux schedulers may also give every process a time quantum for using the CPU. The quantum varies: interactive processes are usually given a smaller quantum, since they spend most of their time doing I/O, while time-consuming computational processes are given a bigger one. After a process exhausts its quantum, it is placed on an expired queue until no active processes remain in the system; then the expired queue becomes the active queue and vice versa. These are two ways of preventing starvation.
Linux may also use a fair scheduling algorithm based on a virtual clock. A closer look at the sched_entity structure associated with each scheduling entity is given below.
struct sched_entity {
    ...
    u64 exec_start;
    u64 sum_exec_runtime;
    u64 vruntime;
    u64 prev_sum_exec_runtime;
    ...
};
These fields are updated by the update_curr() method. The CPU time a process has consumed is recorded in sum_exec_runtime, and vruntime stores the amount of time that has elapsed on the virtual clock during its execution. vruntime increases more slowly for higher-priority processes than for lower-priority ones. The runqueue is maintained as a red-black tree, and each runqueue has a min_vruntime variable associated with it that holds the smallest vruntime among all the processes in the runqueue (min_vruntime can only increase, never decrease, as processes are scheduled).
The key for a node in the red-black tree is process->vruntime - min_vruntime.
When the scheduler is invoked, the kernel simply picks the task with the smallest key (the leftmost node) and assigns it the CPU. Elements with smaller keys are placed further to the left, and are thus scheduled sooner.
There is little chance of starvation: if a lower-priority process is deprived of the CPU, its vruntime, and hence its key, will become the smallest, so it moves toward the left of the tree quickly and is therefore scheduled.
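The selection rule above can be sketched compactly. A linear scan stands in for the red-black tree walk here (the ordering is identical, only the lookup cost differs), and the task_t type is illustrative rather than the kernel's structures.

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the CFS selection rule above: the key is
 * vruntime - min_vruntime, and the task with the smallest key (the
 * "leftmost node") runs next. A linear scan stands in for the red-black
 * tree walk; illustrative types, not kernel structures. */
typedef struct {
    const char *name;
    uint64_t    vruntime;
} task_t;

const task_t *pick_next(const task_t *tasks, int n, uint64_t min_vruntime) {
    const task_t *best = &tasks[0];
    for (int i = 1; i < n; i++) {
        /* smaller key = further left in the tree = scheduled sooner */
        if (tasks[i].vruntime - min_vruntime < best->vruntime - min_vruntime)
            best = &tasks[i];
    }
    return best;
}
```

Subtracting min_vruntime keeps the keys small and well-ordered even as the virtual clock grows without bound, which is why the kernel uses the difference rather than raw vruntime as the tree key.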
