
THE COMPLETE MAGAZINE ON OPEN SOURCE

September 2014 OPEN SOURCE FOR YOU VOLUME: 02 ISSUE: 12


Contents

Developers
26  Improve Python Code by Using a Profiler
30  Understanding the Document Object Model (DOM) in Mozilla
35  Experimenting with More Functions in Haskell
40  Introducing AngularJS
45  Use Bugzilla to Manage Defects in Software
48  An Introduction to Device Drivers in the Linux Kernel
52  Creating Dynamic Web Portals Using Joomla and WordPress
56  Compile a GPIO Control Application and Test It On the Raspberry Pi

Admin
59  Use Pound on RHEL to Balance the Load on Web Servers
63  Why We Need to Handle Bounced Emails
67  Boost the Performance of CloudStack with Varnish
74  Use Wireshark to Detect ARP Spoofing
77  Make Your Own PBX with Asterisk

Open Gurus
80  How to Make Your USB Boot with Multiple ISOs
86  Contiki OS: Connecting Microcontrollers to the Internet of Things

Regular Features
08   You Said It...
09   Offers of the Month
10   New Products
13   FOSSBytes
25   Editorial Calendar
100  Tips & Tricks
105  FOSS Jobs

4 | September 2014 | OPEN SOURCE For You | www.OpenSourceForU.com


YOU SAID IT

Online access to old issues
I want all the issues of OSFY from 2011, right up to the current issue. How can I get these online, and what would be the cost?
C. Kiran Kumar; kirru.chappidi@gmail.com

ED: It feels great to know that we have such valuable readers. Thank you, Kiran, for bringing this request to us. You can avail all the back issues of Open Source For You in e-zine format from www.ezines.efyindia.com

Request for a sample issue
I am with a company called Relia-Tech, which is a brick-and-mortar computer service company. We are interested in subscribing to your magazine. Would you be willing to send us a magazine to check out before we commit to anything?
Lindsay Steele; lsteele@relia-tech.net

ED: Thanks for your mail. You can visit our website www.ezine.lfymag.com and access our sample issue.

A thank-you and a request for more help
I began reading your magazine in my college library and thought of offering some feedback. I was facing a problem with Oracle VirtualBox, but after reading an article on the topic in OSFY, the task became so easy. Thanks for the wonderful help. I am also trying to set up my local (LAN-based) Git server, but I have no idea how to set it up; I have only worked a little with GitHub. I do wish your magazine would feature content on this topic in upcoming editions.
Abhinav Ambure; adambure21@gmail.com

ED: Thank you so much for your valuable feedback. We really value our readers and are glad that our content proves helpful to them. We will surely look into your request and try to include the topic you have asked for in upcoming issues. Keep reading OSFY and continue sending us your feedback!

Annual subscription
I've bought the July 2014 issue of OSFY and I loved it. I want the latest version of Ubuntu 14.04 LTS and the programming tools (JDK and other tools for C, C++, Java and Python). Also, how can I subscribe to your magazine for one year, and can I get it at my village (address enclosed)?
Parveen Kumar; parveen199214@gmail.com

ED: Thank you for the compliments. We're glad to know that you enjoy reading our magazine. We will definitely look into your request. Also, I am forwarding your query regarding subscribing to the magazine to the concerned team. Please feel free to get back to us in case of any other suggestions or questions. We're always happy to help.

Availability of OSFY in your city
I want to purchase Open Source For You for the library in my organisation, but I am unable to find copies in the city I live in (Jabalpur in Madhya Pradesh). I cannot go in for the subscription as well. Please give me the name of the distributor or dealer in my city through whom I can purchase the magazine.
Gaurav Singh; gaurav_kumar_singh@hotmail.com

ED: We have a website where you can locate the nearest store in your city that supplies Open Source For You. Do log on to http://ezine.lfymag.com/listwholeseller.asp. You will find there are two dealers of the magazine in your city: Sahu News Agency (Sanjay Sahu, Ph: 09301201157) and Janta News Agency (Harish, Ph: 09039675118). They can ensure regular supply of the magazine to your organisation.

Share Your Feedback
Please send your comments or suggestions to:
The Editor, Open Source For You,
D-87/1, Okhla Industrial Area, Phase I, New Delhi 110020
Phone: 011-26810601/02/03, Fax: 011-26817563, Email: osfyedit@efy.in



OFFERS OF THE MONTH

One month free (ESDS)
Subscribe for our Annual Package of Dedicated Server Hosting and enjoy one month of free service. Offer valid till 30th September 2014! For more information, call us on 1800-209-3006 / +91-253-6636500. www.esds.co.in

Rupees 2000 coupon: free Dedicated Server Hosting (CloudOye)
Free trial coupon; no conditions attached for a one-month trial of our cloud platform. Enjoy, and please share feedback at sales@cloudoye.com. Offer valid till 30th September 2014! For more information, call us on 1800-212-2022 / +91-120-666-7718. www.cloudoye.com

Get 10% discount (Space2Host)
Reseller package special offer! Free dedicated hosting/VPS for one month: subscribe for the annual package of dedicated hosting/VPS and get one month free. Offer valid till 30th September 2014! Contact us at 09841073179 or write to sales@space2host.com. www.space2host.com

35% off and more (Vectra Technologies)
Do not wait! Be a part of the winning team. Get 35% off on course fees, and if you appear for two Red Hat exams, the second shot is free. Offer valid till 30th September 2014! Contact us at 98409 82184/85 or write to enquiry@vectratech.in. www.vectratech.in

Get 12 months free (GoForHosting)
Pay annually and get 12 months of free services on Dedicated Server Hosting: subscribe for the annual packages of Dedicated Server Hosting and enjoy the next 12 months of services free. Offer valid till 30th September 2014! For more information, call us on 1800-212-2022 / +91-120-666-7777. www.goforhosting.com

Get 25% off ProX plans (PackWeb Hosting)
Time to go PRO now. Considering a VPS or a dedicated server? Save big and go with our ProX plans: 25% off on ProX plans, ideal for running a high-traffic or e-commerce website. Coupon code: OSFY2014. Offer valid till 30th September 2014! Contact us at 98769-44977 or write to support@packwebhosting.com. www.prox.packwebhosting.com

Embedded software development courses and workshops
Embedded RTOS: architecture, internals and programming, on the ARM platform. Date: 20-21 September 2014 (a two-day programme). Faculty: Mr Babu Krishnamurthy, visiting faculty, CDAC/ACTS, with 18 years of industry and teaching experience. Course fee: Rs 5,620 (all inclusive), the most competitive fee. Contact us at +91-98453-65845 or write to babu_krishnamurthy@yahoo.com

To advertise here, contact Omar on +91-995 888 1862 or 011-26810601/02/03, or write to omar.farooq@efy.in. www.opensourceforu.com
FOSSBYTES Powered by www.efytimes.com

Ubuntu 14.04.1 LTS is out
Ubuntu 14.04 LTS has been around for quite some time now and most people must have upgraded to it. Another, smaller update is now ready: 14.04.1. Canonical has announced that this Ubuntu update fixes many bugs and includes security updates. There is also a list of bugs and other updates in Ubuntu 14.04.1 that you might want to have a look at, in order to see the scope of this update. If you haven't upgraded to 14.04.1 yet, do so as soon as possible. It is a worthy upgrade if you use an older version of Ubuntu.

Android Device Manager makes it easier to search for lost phones!
Google has released an update to Android Device Manager that gives users better device security. This latest version is called 1.3.8. It lets users add a phone number to the remote locking screen, and the lock screen password can also be changed. An optional message can also be set up. If the phone number is added, then a big green button will appear on the lock screen saying 'Call owner'. If the lost phone is found by someone, the owner can then be easily contacted. Earlier, only a message could be added by users. The call-back number can be set up through the Android Device Manager app as well as the Web interface, if another Android device is not at hand. Both the message and call-back features are optional, though. But it's highly recommended that these features are used so that a lost phone can be easily found.

VLC 2.1.5 has been released
VideoLAN has announced the release of the final update in the 2.1.x series of its popular open source, cross-platform media player and streaming media server: the VLC media player. VLC 2.1.5 is now available for download and installation on Windows, Mac and Linux operating systems. Notably, the next big release for the VLC media player will be that of the 2.2.x branch. A careful look at the change log reveals that although the VLC 2.1.5 update has been released across multiple platforms, the most noticeable improvements are for OS X users. Others could consider it a minor update.
For OS X users, VLC 2.1.5 brings additional stability to the Qtsound capture module as well as improved support for Retina displays. Other notable changes include compilation fixes for the OS/2 operating system. Also, MP3 file conversions will no longer be renamed .raw under the Qt interface following the update. A few decoder fixes benefit DxVA2 sample decoding, MAD resilience in broken MP3 streams and PGS alignment tweaks for MKV. In terms of security, the new release comes with fixes for GnuTLS and libpng as well. One should remember that VLC is a portable, free and open source, cross-platform media player and streaming media server written by the VideoLAN project that supports many audio and video compression methods and file formats. It comes with a large number of free decoding and encoding libraries, thereby eliminating the need to find or calibrate proprietary plugins.

Ubuntu's Amazon shopping feature complies with UK Data Protection Act
The independent body investigating the implementation of Ubuntu's Unity Shopping Lens feature and its compliance with the UK Data Protection Act (DPA) of 1998 has found no instances of Canonical being in breach of the act. Ubuntu's controversial Amazon shopping feature has been found to be compliant with relevant data protection and privacy laws in the UK, something that was checked in response to a complaint filed by blogger Luis de Sousa last year. Notably, the feature sends queries made in the Dash to an intermediary Canonical server, which forwards them to Amazon. The e-commerce giant then returns product suggestions matching the query back to the Dash. The feature also sends out non-identifiable location data in the process.




According to Sousa, the Shopping Lens implementation contravened a 1995 EU directive on the protection of users' personal data. Sousa had provided a number of instances to put forward his point. Initially, Sousa began by reaching out to Canonical for clarification, but to no avail. He was finally forced to file a complaint with the Information Commissioner's Office (ICO) regarding his security concerns. Finally, the ICO responded to Sousa's need for clarification by clearly stating that the Shopping Lens feature complies with the DPA very well and in no way breaches users' privacy.

Here's what's new in Linux 3.16
The founder of Linux, Linus Torvalds, announced the release of the stable build of Linux 3.16 recently. This version is known to developers as 'Shuffling Zombie Juror'. There are a host of improvements and new features in this new stable build of Linux. These include new and improved drivers, and some complex internal improvements like a unified control group hierarchy. This new Linux 3.16 stable version will be the basis of the Ubuntu 14.10 kernel. LTS version users will get this update once the 14.10 kernel is released.

Calendar of forthcoming events
- 4th Annual Datacenter Dynamics Converged; September 18, 2014; Bengaluru. The event aims to assist the community in the data centre domain by exchanging ideas, accessing market knowledge and launching new initiatives. Contact: Praveen Nair; Email: Praveen.nair@datacenterdynamics.com; Ph: +91 9820003158; Website: http://www.datacenterdynamics.com/
- Gartner Symposium IT Xpo; October 14-17, 2014; Grand Hyatt, Goa. CIOs and senior IT executives from across the world will gather at this event, which offers talks and workshops on new ideas and strategies in the IT industry. Website: http://www.gartner.com
- Open Source India; November 7-8, 2014; NIMHANS Center, Bengaluru. Asia's premier open source conference, which aims to nurture and promote the open source ecosystem across the sub-continent. Contact: Omar Farooq; Email: omar.farooq@efy.in; Ph: 09958881862; Website: http://www.osidays.com
- CeBIT; November 12-14, 2014; BIEC, Bengaluru. One of the world's leading business IT events, offering a combination of services and benefits that will strengthen the Indian IT and ITES markets. Website: http://www.cebit-india.com/
- 5th Annual Datacenter Dynamics Converged; December 9, 2014; Riyadh. The event aims to assist the community in the data centre domain by exchanging ideas, accessing market knowledge and launching new initiatives. Contact: Praveen Nair; Email: Praveen.nair@datacenterdynamics.com; Ph: +91 9820003158; Website: http://www.datacenterdynamics.com/
- HostingCon India; December 12-13, 2014; NCPA, Jamshedji Bhabha Theatre, Mumbai. This event will be attended by Web hosting companies, Web design companies, domain and hosting resellers, ISPs and SMBs from across the world. Website: http://www.hostingcon.com/contact-us/

Shutter 0.92 for Linux released and fixes a number of bugs
Users have had some trouble using the popular Shutter screenshot tool for Linux, owing to the many irritating bugs and stability issues that came along. But they are in for a pleasant surprise, as developers have now released a new bug-fix version of the tool that aims to address some of its more prominent issues. The new bug fix, Shutter 0.92, is now available for download for the Linux platform, and a number of stability issues have been dealt with for good.

Open source community irked by broken Linux kernel patches
One of the many fine threads that bind the open source community is avid participation and cooperation between developers across the globe, with the common goal of improving the Linux kernel. However, not everyone out there is actually trying to help, as recent happenings suggest. Trolls exist even in the Linux community, and one who has managed to make a big impression is Nick Krause. Krause's recent antics have led to significant bouts of frustration among Linux kernel maintainers. Krause continuously tries to get broken patches past the maintainers; only, his goals are not very clear at the moment. Many developers believe that Krause aims to damage the Linux kernel. While that might be a distant dream for him (at least for now), he has managed to irk quite a lot of people, slowing down the whole development process because of the need to keep fixing the broken patches he introduces.

Oracle launches Solaris 11.2 with OpenStack support
Oracle Corp recently launched the latest version of its Solaris enterprise UNIX platform: Solaris 11.2. Notably, this new version had been in beta since April. The latest release comes with several key enhancements: support for OpenStack as well as software-defined networking (SDN). Additionally, there are various security, performance and compliance enhancements introduced in Oracle's new release. Solaris 11.2 comes with OpenStack integration, which is perhaps its most crucial enhancement. The latest version runs the most recent version of the popular toolbox for building clouds: OpenStack Havana. Meanwhile, the inclusion of software-defined networking support is seen as part of Oracle's ongoing effort to transform its Exalogic Elastic Cloud into one-stop data centres. Until now, Exalogic boxes were being increasingly used in the form of massive servers or for transaction processing. They were therefore not fulfilling their real purpose, which is to work




as cloud-hosting systems. However, with SDN support added, Oracle is aiming to change all this. Oracle plans to take on network equipment makers like Cisco, Hewlett-Packard and Brocade directly with the introduction of Solaris 11.2. Enterprises using Solaris can now simply purchase a handful of Solaris boxes and run their mission-critical clouds. In addition, they can also use bits of OpenStack without acquiring additional hardware.

Canonical launches Ubuntu 12.04.5 LTS
Marking its fifth point release, Canonical has announced that Ubuntu 12.04.5 LTS is available for download and installation. Ubuntu 12.04 LTS was first released back in April 2012, and Canonical will continue supporting the LTS until 2017 with regular updates. This is also the first major release for Canonical since the debut of Ubuntu 14.04 LTS earlier this year. The most notable improvement in the new release is the inclusion of an updated kernel (3.13) and X.org stack, both carried over from Ubuntu 14.04 LTS. The new release is out now for desktop, server, cloud and core products, as well as other flavours of Ubuntu with long-term support. In addition, the new release also comes with security updates and corrections for other high-impact bugs, with a focus on maintaining stability and compatibility with Ubuntu 12.04 LTS. Meanwhile, Kubuntu 12.04.5 LTS, Edubuntu 12.04.5 LTS and Ubuntu Studio 12.04.5 LTS are also available for download and installation.

Storm Energy's SunSniffer charmed by Raspberry Pi!
The humble Raspberry Pi single board computer is indeed going places, receiving critical acclaim for, well, being downright awesome. The latest to be smitten by it is the German company Storm Energy, which builds products like SunSniffer, a solar plant monitoring system. The SunSniffer system is designed to monitor photovoltaic (PV) solar power installations of varied sizes. The company has now upgraded the system to a Linux-based platform running on a Raspberry Pi. In addition to this, the latest SunSniffer version also comes with a custom expansion board and a customised Linux OS. The SunSniffer is IP65-rated, and the new Connection Box's custom Raspberry Pi expansion board comes with five RS-485 ports and eight analogue/digital I/O interfaces to help simultaneously monitor a wide variety of solar inverters (Refusol, Huawei and Kostal, among others). In short, the new system can remotely control solar inverters via a radio ripple control receiver, as against earlier versions, where users could only monitor their data. The Raspberry Pi-based SunSniffer also offers SSL encryption and optional integrated anti-theft protection.

Android-x86 4.4 R1 Linux distro available for download and testing
The team behind Android-x86 recently launched version 4.4 R1 of the port of the Android OS designed specifically for the x86 platform. Android-x86 4.4 KitKat is now available for download and testing on your PC. Android is actually based on a modified Linux kernel, with many believing it to be a standalone Linux distribution in its own right. That said, developers have managed to tweak Android to port it to the PC for x86 platforms; that's what Android-x86 is really all about.

Linux Mint Debian edition to switch from snapshot cycle to Debian stable package base
The team behind Linux Mint has decided to let go of the current snapshot cycle in the Debian edition of the Linux distribution and instead switch over to a Debian stable package base. The current Linux Mint editions are based on Ubuntu, and the team is most likely to stick to that for at least a couple of years. The team recently launched the latest iteration of Linux Mint, a.k.a. Qiana. Both the Cinnamon and MATE versions are now available for download, with the KDE and Xfce versions expected to come out soon. Meanwhile, it has been announced that the next three Linux Mint releases would also, in all probability, be based on Ubuntu 14.04 LTS.

Italian city of Turin switching to open source technology
In a recent development, the Italian city of Turin is considering ditching all Microsoft products in favour of open source alternatives. The move is directly aimed at cutting government costs, while not compromising on functionality. If at all Turin gets rid of all proprietary software, it will go on to become one of the first Italian open source cities and save itself at least a whopping six million Euros. A report suggests that as many as 8,300 computers of the local administration in Turin will soon have Ubuntu under the hood and will be shipped with the Mozilla Firefox




Web browser and OpenOffice, the two joys of the open source world. The local government has argued that a large amount of money is spent on buying licences in the case of proprietary software, wasting a lot of the local tax payers' money. Therefore, a decision to drop Microsoft in favour of cost-effective open source alternatives seems to be a viable option.

LibreOffice coming to Android
LibreOffice needs no introduction. The Document Foundation's popular open source office suite is widely used by millions of people across the globe. Therefore, news that the suite could soon be launched on Android is something to watch out for. You heard that right! A new report by Tech Republic suggests that the Document Foundation is currently working hard to make this happen. However, as things stand, there is still some time before that happens for real. Even as the Document Foundation came out with the first Release Candidate (RC) version of the upcoming LibreOffice 4.2.5 recently (it has been quite consistent in updating its stable version on a timely basis), work is on to make LibreOffice available for Google's much loved Android platform as well, the report says. The buzz is that developers are currently talking about (and working at) getting the file size right, that is, something well below the Google limit. Until they are able to do that, LibreOffice for Android is a distant dream, sadly. However, as and when this happens, LibreOffice would be in direct competition with Google Docs. Since there is a genuine need for Open Document Format (ODF) support in Android, the release might just be what the doctor ordered for many users. This is more of a rumour at the moment, and things will get clearer in time. There is no official word from either Google or the Document Foundation about this, but we will keep you posted on developments. The recent release, LibreOffice 4.2.5 RC1, meanwhile tries to curb many key bugs that plagued the last 4.2.4 final release. This, in turn, has improved its usability and stability to a significant extent.

RHEL 6.6 beta is released; draws major inspiration from RHEL 7
Just so RHEL 6.x users (who wish to continue with this branch of the distribution for a bit longer) don't feel left out, Red Hat has launched a beta release of its Red Hat Enterprise Linux 6.6 (RHEL 6.6) platform. Taking much of its inspiration from the recently released RHEL 7, the move is directed towards RHEL 6.x users so that they benefit from new platform features. At the same time, it comes with some really cool features that are quite independent of RHEL 7 and which make the 6.6 beta stand out on its own merits. Red Hat offers Application Binary Interface (ABI) compatibility for RHEL for a period of ten years, so, technically speaking, it cannot drastically change major elements of an in-production release. Quite simply put, it can't and won't change an in-production release in a way that could alter stability or existing compatibility. This would eventually mean that the new release on offer cannot go much against the tide with respect to RHEL 6. Although the feature list for the RHEL 6.6 beta ties in closely with the feature list of the major release (6.0), it doesn't mean the RHEL 6.6 beta is simply old wine served in a new bottle. It does manage to introduce some key improvements for RHEL 6.x users. To begin with, the RHEL 6.6 beta includes some features that were first introduced with RHEL 7, the most notable being Performance Co-Pilot (PCP). The new beta release will also offer RHEL 6.x users more integrated Remote Direct Memory Access (RDMA) capabilities.

Khronos releases OpenGL NG
The Khronos Group recently announced the release of the latest iteration of OpenGL (the oldest high-level 3D graphics API still in popular use). Although OpenGL 4.5 is a noteworthy release in its own right, it is the Group's next generation OpenGL initiative that is garnering widespread appreciation. While OpenGL 4.5 is what some might call a fairly standard annual OpenGL update, OpenGL NG is a complete rebuild of the OpenGL API, designed with the idea of building an entirely new version of OpenGL. This new version will have significantly reduced overhead owing to the removal of a lot of abstraction. Also, it will do away with the major inefficiencies of older versions when working at a low level with the bare metal GPU hardware. Being a very high-level API, earlier versions of OpenGL made it hard to efficiently run code on the GPU directly. While this didn't matter so much earlier, things have now changed. Fuelled by more mature GPUs, developers today tend to ask for graphics APIs that allow them to get much closer to the bare metal. The next generation OpenGL initiative is directed at developers who are looking to improve performance and reduce overhead.

Dropbox's updated Android app offers improved features
A major update has been announced by Dropbox in connection with its official Android app, and it is available on Google Play. This new update carries version number 2.4.3 and comes with a lot of improved features. As the Google Play listing suggests, this new Dropbox version supports in-app previews of Word, PowerPoint and PDF files. A better search experience is also offered in this new version, which tracks recent queries and displays suggestions. One can also search in specific folders from now onwards.



Buyer's Guide

Motherboards: The Lifeline of Your Desktop
If you are a gamer, or like to customise your PC and build it from scratch, the motherboard is what you require to link all the key components together. Let's find out how to select the best desktop motherboard.

The central processing unit (CPU) can be considered to be the brain of a system, or a PC in layman's language, but it still needs a nervous system to be connected with all the other components in your PC. A motherboard plays this role, as all the components are attached to it and to each other with the help of this board. It can be defined as a PCB (printed circuit board) that has the capability of expanding. As the name suggests, a motherboard is believed to be the mother of all the components attached to it, including network cards, sound cards, hard drives, TV tuner cards, slots, etc. It holds the most significant sub-systems, such as the processor, along with other important components. A motherboard is found in all electronic devices like TVs, washing machines and other embedded systems. Since it provides the electrical connections through which other components are connected and linked with each other, it needs the most attention. Unlike a backplane, it hosts other devices and subsystems and also contains the central processing unit.

There are quite a lot of companies that deal in motherboards, and Simmtronics is one of the leading players. According to Dr Inderjeet Sabbrawal, chairman, Simmtronics, "Simmtronics has been one of the exclusive manufacturers of motherboards in the hardware industry over the last 20 years. We strongly believe in creativity, innovation and R&D. Currently, we are fulfilling our commitment to provide the latest mainstream motherboards. At Simmtronics, the quality of the motherboards is strictly controlled. At present, the market is not growing. India still has a varied market for older generation models as well as the latest models of motherboards."

Factors to consider while buying a motherboard
In a desktop, several essential units and components are attached directly to the motherboard, such as the microprocessor, main memory, etc. Other components, such as the external storage controllers for sound and video display and various peripheral devices, are attached to it through slots, plug-in cards or cables. There are a number of factors to keep in mind while buying a motherboard, and these depend on your specific requirements. Linux is slowly taking over the PC world and, hence, people now look for Linux-supported motherboards. As a result, almost every motherboard now supports Linux. The main factors to keep in mind when buying a Linux-supported motherboard are discussed below.

CPU socket
The central processing unit is the key component of a motherboard, and a board's performance is primarily determined by the kind of processor it is designed to hold. The CPU socket can be defined as an electrical component that attaches to the motherboard and is designed to house a microprocessor. So, when you're buying a motherboard, you should look for a CPU socket that is compatible with the CPU you have planned to use. Most of the time, motherboards use one of the following five sockets: LGA1155, LGA2011, AM3, AM3+ and FM1. Some of the sockets are backward compatible and some of the chips are interchangeable. Once you opt for a motherboard, you will be limited to using the processors that offer similar specifications.

Form factor
A motherboard's capabilities are broadly determined by its shape, size and how much it can be expanded; these aspects are known as form factors. Although there is no fixed design or form for motherboards, and they are available in many variations, two form factors have always been the favourites: ATX and microATX. The ATX motherboard measures around 30.5cm x 24.4cm (12 x 9.6 inches) and offers the highest number of expansion slots, RAM bays and data connectors. MicroATX motherboards measure 24.38cm x 24.38cm (9.6 x 9.6 inches) and have fewer expansion slots, RAM bays and other components. The form factor of a motherboard can be decided according to what purpose the motherboard is expected to serve.

RAM bays
Random access memory (RAM) is considered the most important workspace in a motherboard; it is where data is processed after being read from the hard disk drive or solid state drive. The efficiency of your PC directly depends on the speed and size of your RAM. The more space you have on your RAM, the more efficient your computing will be. But it's no use having RAM with greater capability than your motherboard can support, as that will just be a waste of the extra potential. Neither can you have RAM with lesser capability than the motherboard supports, as then the PC will not work well due to the bottlenecks caused by mismatched capabilities. Choosing a motherboard that supports just the right RAM is vital.

Apart from these factors, there are many others to consider before selecting a motherboard. These include the audio system, display, LAN support, expansion capabilities and peripheral interfaces.




A few desktop motherboards


with the latest chipsets
Intel: DZ87KLT-75K
motherboard
Supported CPU: Fourth generation Intel Core i7
processor, Intel Core i5 processor and other Intel
processors in the LGA1150 package
Memory supported: 32GB of system memory, dual
channel DDR3 2400+ MHz, DDR3 1600/1333 MHz
Form factor: ATX form factor

Asus: Z87-K motherboard
Supported CPU: Fourth generation Intel Core
i7 processor, Intel Core i5 processor and other
Intel processors
Memory supported: Dual channel memory
architecture supports Intel XMP
Form factor: ATX form factor

Simmtronics SIMM-INT H61 (V3) motherboard
CPU supported: Intel 2nd and 3rd generation Core i7/i5/i3/Pentium/Celeron
Main memory supported: Dual channel DDR3 1333/1066
BIOS: 132MB Flash ROM
Connectors: 14-pin ATX 12V power connector
Chipset: Intel H61 (B3 version)

Gigabyte Technology: GA-Z87X-OC motherboard
CPU supported: Fourth generation Intel Core i7
processor, Intel Core i5 processor and other Intel
processors
Memory supported: Supports DDR3 3000
Form factor: MicroATX

By: Manvi Saxena


The author is a part of the editorial team at EFY.



CodeSport

In this month's column, we continue our discussion on natural language processing.

Sandya Mannarswamy

For the past few months, we have been discussing information retrieval and natural language processing, as well as the algorithms associated with them. This month, we continue our discussion on natural language processing (NLP) and look at how NLP can be applied in the field of software engineering. Given one or many text documents, NLP techniques can be applied to extract information from them. The software engineering (SE) lifecycle gives rise to a number of textual documents, to which NLP can be applied.

So what are the software artifacts that arise in SE? During the requirements phase, a requirements document is an important textual artifact. This specifies the expected behaviour of the software product being designed, in terms of its functionality, user interface, performance, etc. It is important that the requirements being specified are clear and unambiguous, since during product delivery, customers would like to confirm that the delivered product meets all their specified requirements.

Having vague, ambiguous requirements can hamper requirement verification. So text analysis techniques can be applied to the requirements document to determine whether there are any ambiguous or vague statements. For instance, consider a statement like, "Servicing of user requests should be fast, and request waiting time should be low." This statement is ambiguous since it is not clear what exactly the customer's expectations of fast service or low waiting time may be. NLP tools can detect such ambiguous requirements. It is also important that there are no logical inconsistencies in the requirements. For instance, a requirement that "Login names should allow a maximum of 16 characters" and another that "The login database will have a field for login names which is 8 characters wide" conflict with each other. While the user interface allows up to a maximum of 16 characters, the backend login database will support fewer characters, which is inconsistent with the earlier requirement. Though currently such inconsistent requirements are flagged by human inspection, it is possible to design text analysis tools to detect them.

The software design phase also produces a number of SE artifacts, such as the design document, design models in the form of UML documents, etc, which can also be mined for information. Design documents can be analysed to generate automatic test cases in order to test the final product. During the development and maintenance phases, a number of textual artifacts are generated. Source code itself can be considered a textual document. Apart from source code, source code control system logs such as SVN/GIT logs, Bugzilla defect reports, developers' mailing lists, field reports, crash reports, etc, are the various SE artifacts to which text mining can be applied.

Various types of text analysis techniques can be applied to SE artifacts. One popular method is duplicate or similar document detection. This technique can be applied to find duplicate bug reports in bug tracking systems. A variation of this technique can be applied to detect code clones and copy-and-paste snippets.

Automatic summarisation is another popular technique in NLP. These techniques try to generate a summary of a given document by looking for the key points contained in it. There are two approaches to automatic summarisation. One is known as extractive summarisation, in which key phrases and sentences in the given document are extracted and put back together to provide a summary of the document. The other is the abstractive summarisation technique, which builds an internal semantic representation of the given document, from which key concepts are extracted and a summary is generated using natural language understanding. The abstractive summarisation technique is close to how humans would summarise a given document. Typically, we would proceed by building a knowledge representation of the document in our minds and then using our own words to provide a summary of the key concepts. Abstractive summarisation is obviously more complex than extractive summarisation, but yields better summaries.

Coming to SE artifacts, automatic summarisation techniques can be applied to generate summaries of large bug reports. They can also be applied to generate high-level comments for the methods contained in source code. In this case, each method can be treated as an independent document, and the high-level comment associated with that method or function is nothing but a short summary of the method.

Another popular text analysis technique involves the use of language models, which enable predicting what the next word would be in a particular sentence. This technique is typically used in optical character recognition (OCR) generated documents, where, due to OCR errors, the next word is not visible or gets lost, and hence the tool needs to make a best-case estimate of the word that may appear there. A similar need also arises in the case of speech recognition systems. In the case of poor speech quality, when a sentence is being transcribed by the speech recognition tool, a particular word may not be clear or could get lost in transmission. In such a case, the tool needs to predict what the missing word is and add it automatically. Language modelling techniques can also be applied in intelligent development environments (IDEs) to provide auto-completion suggestions to developers. Note that in this case, the source code itself is being treated as text and is analysed.

Classifying a set of documents into specific categories is another well-known text analysis technique. Consider a large number of news articles that need to be categorised based on topics or their genre, such as politics, business, sports, etc. A number of well-known text analysis techniques are available for document classification. Document classification techniques can also be applied to defect reports in SE to classify the category to which a defect belongs. For instance, security-related bug reports need to be prioritised. While people currently inspect bug reports, or search for specific key words in a bug category field in Bugzilla reports in order to classify bug reports, more robust and automated techniques are needed to classify defect reports in large-scale open source projects. Text analysis techniques for document classification can be employed in such cases.

Another important need in the SE lifecycle is to trace source code to its origin in the requirements document. If a feature X is present in the source code, what is the requirement Y in the requirements document which necessitated the development of this feature? This is known as traceability of source code to requirements. As source code evolves over time, maintaining traceability links automatically through tools is essential to scale out large software projects. Text analysis techniques can be employed to connect a particular requirement from the requirements document to a feature in the source code and hence automatically generate the traceability links.

We have now covered automatic summarisation techniques for generating summaries of bug reports and generating header-level comments for methods. Another possible use for such techniques in SE artifacts is to enable the automatic generation of user documentation associated with a software project. A number of text mining techniques have been employed to mine Stack Overflow and mailing lists to generate automatic user documentation or FAQ documents for different software projects.

Regarding the identification of inconsistencies in the software requirements document, inconsistency detection techniques can be applied to source code comments also. It is a general expectation that source code comments express the programmer's intent. Hence, the code written by the developer and the comment associated with that piece of code should be consistent with each other. Consider the simple code sample shown below:

/* linux/drivers/scsi/in2000.c: */
/* caller must hold instance lock */
static int reset_hardware()
{
    ...
}
static int in2000_bus_reset()
{
    ...
    reset_hardware();
    ...
}

In the above code snippet, the developer has expressed the intention, as a code comment, that instance_lock must be held before the function reset_hardware is called. However, in the actual source code, the lock is not acquired before the call to reset_hardware is made. This is a logical inconsistency, which can arise either due to: (a) comments being outdated with respect to the source code; or (b) incorrect code. Hence, flagging such errors is useful to the developer, who can fix either the comment or the code, depending on which is incorrect.

My must-read book for this month
This month's book suggestion comes from one of our readers, Sharada, and her recommendation is very appropriate to the current column. She recommends an excellent resource for natural language processing: a book called Speech and Language Processing: An Introduction to Natural Language Processing by Jurafsky and Martin. The book describes different algorithms for NLP techniques and can be used as an introduction to the subject. Thank you, Sharada, for your valuable recommendation.

If you have a favourite programming book or article that you think is a must-read for every programmer, please do send me a note with the book's name, and a short write-up on why you think it is useful, so I can mention it in the column. This would help many readers who want to improve their software skills. If you have any favourite programming questions/software topics that you would like to discuss on this forum, please send them to me, along with your solutions and feedback, at sandyasm_AT_yahoo_DOT_com. Till we meet again next month, happy programming!

By: Sandya Mannarswamy
The author is an expert in systems software and is currently working with Hewlett Packard India Ltd. Her interests include compilers, multi-core and storage systems. If you are preparing for systems interviews, you may find it useful to visit Sandya's LinkedIn group 'Computer Science Interview Training India' at http://www.linkedin.com/groups?home=&gid=2339182
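As a concrete aside, the extractive technique discussed in this column is easy to prototype. The sketch below is a toy of my own, not the algorithm of any particular tool: it scores each sentence by the document-wide frequencies of the words it contains and keeps the top-scoring sentences, in their original order.

```python
import re
from collections import Counter

def extractive_summary(text, n=1):
    """Pick the n highest-scoring sentences, returned in original order.

    A sentence's score is the sum of the document-wide frequencies of
    its words -- the simplest possible extractive summariser.
    """
    sentences = [s.strip() for s in re.split(r'(?<=[.!?])\s+', text) if s.strip()]
    freq = Counter(re.findall(r'[a-z]+', text.lower()))
    ranked = sorted(sentences,
                    key=lambda s: sum(freq[w] for w in re.findall(r'[a-z]+', s.lower())),
                    reverse=True)
    chosen = set(ranked[:n])
    return [s for s in sentences if s in chosen]

doc = ("NLP can be applied to software artifacts. "
       "Bug reports are one such artifact. "
       "NLP techniques can summarise bug reports.")
print(extractive_summary(doc))
```

Real systems add stop-word removal, TF-IDF weighting and positional features on top of this, but the skeleton is the same.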



Exploring Software Guest Column

Anil Seth Exploring Big Data on a Desktop


Getting Started with Hadoop
Hadoop is a large-scale, open source storage and processing framework for data sets. In this article, the author sets up Hadoop on a single node, takes the reader through testing it, and later tests it on multiple nodes.

Fedora 20 makes it easy to install Hadoop. Version 2.2 is packaged and available in the standard repositories. It will place the configuration files in /etc/hadoop, with reasonable defaults so that you can get started easily. As you may expect, managing the various Hadoop services is integrated with systemd.

Setting up a single node
First, start an instance, with the name h-mstr, in OpenStack using a Fedora Cloud image (http://fedoraproject.org/get-fedora#clouds). You may get an IP like 192.168.32.2. You will need to choose at least the m1.small flavour, i.e., 2GB RAM and 20GB disk. Add an entry in /etc/hosts for convenience:

192.168.32.2 h-mstr

Now, install and test the Hadoop packages on the virtual machine by following the article at http://fedoraproject.org/wiki/Changes/Hadoop:

$ ssh fedora@h-mstr
$ sudo yum install hadoop-common hadoop-common-native hadoop-hdfs \
hadoop-mapreduce hadoop-mapreduce-examples hadoop-yarn

It will download over 200MB of packages and take about 500MB of disk space.
Create an entry in the /etc/hosts file for h-mstr using the name in /etc/hostname, e.g.:

192.168.32.2 h-mstr h-mstr.novalocal

Now, you can test the installation. First, run a script to create the needed hdfs directories:

$ sudo hdfs-create-dirs

Then, start the Hadoop services using systemctl:

$ sudo systemctl start hadoop-namenode hadoop-datanode \
hadoop-nodemanager hadoop-resourcemanager

You can find out the hdfs directories created as follows. The command may look complex, but you are running the hadoop fs command in a shell as Hadoop's internal user, hdfs:

$ sudo runuser hdfs -s /bin/bash /bin/bash -c "hadoop fs -ls /"
Found 3 items
drwxrwxrwt - hdfs supergroup 0 2014-07-15 13:21 /tmp
drwxr-xr-x - hdfs supergroup 0 2014-07-15 14:18 /user
drwxr-xr-x - hdfs supergroup 0 2014-07-15 13:22 /var

Testing the single node
Create a directory with the right permissions for the user, fedora, to be able to run the test scripts:

$ sudo runuser hdfs -s /bin/bash /bin/bash -c "hadoop fs -mkdir /user/fedora"
$ sudo runuser hdfs -s /bin/bash /bin/bash -c "hadoop fs -chown fedora /user/fedora"

Disable the firewall and iptables and run a mapreduce example. You can monitor the progress at http://h-mstr:8088/. Figure 1 shows an example running on three nodes.
The first test is to calculate pi using 10 maps and 1,000,000 samples. It took about 90 seconds to estimate the value of pi to be 3.1415844.

$ hadoop jar /usr/share/java/hadoop/hadoop-mapreduce-examples.jar pi 10 1000000

In the next test, you create 10 million records of 100 bytes each, that is, 1GB of data (~1 min). Then, sort it (~8 min) and, finally, verify it (~1 min). You may want to clean up the directories created in the process:

$ hadoop jar /usr/share/java/hadoop/hadoop-mapreduce-examples.jar teragen 10000000 gendata
$ hadoop jar /usr/share/java/hadoop/hadoop-mapreduce-examples.jar terasort gendata sortdata
$ hadoop jar /usr/share/java/hadoop/hadoop-mapreduce-examples.jar teravalidate sortdata reportdata
$ hadoop fs -rm -r gendata sortdata reportdata

Stop the Hadoop services before creating and working with multiple data nodes, and clean up the data directories:

$ sudo systemctl stop hadoop-namenode hadoop-datanode \
hadoop-nodemanager hadoop-resourcemanager
$ sudo rm -rf /var/cache/hadoop-hdfs/hdfs/dfs/*

Figure 1: OpenStack-Hadoop

Testing with multiple nodes
The following steps simplify the creation of multiple instances. Generate ssh keys for password-less login from any node to any other node:

$ ssh-keygen
$ cat .ssh/id_rsa.pub >> .ssh/authorized_keys

In /etc/ssh/ssh_config, add the following to ensure that ssh does not prompt for authenticating a new host the first time you try to log in:

StrictHostKeyChecking no

In /etc/hosts, add entries for slave nodes yet to be created:

192.168.32.2 h-mstr h-mstr.novalocal
192.168.32.3 h-slv1 h-slv1.novalocal
192.168.32.4 h-slv2 h-slv2.novalocal

Now, modify the configuration files located in /etc/hadoop. Edit core-site.xml and modify the value of fs.default.name by replacing localhost with h-mstr:

<property>
<name>fs.default.name</name>
<value>hdfs://h-mstr:8020</value>
</property>

Edit mapred-site.xml and modify the value of mapred.job.tracker by replacing localhost with h-mstr:

<property>
<name>mapred.job.tracker</name>
<value>h-mstr:8021</value>
</property>

Delete the following lines from hdfs-site.xml:

<!-- Immediately exit safemode as soon as one DataNode checks in.
On a multi-node cluster, these configurations must be removed. -->
<property>
<name>dfs.safemode.extension</name>
<value>0</value>
</property>
<property>
<name>dfs.safemode.min.datanodes</name>
<value>1</value>
</property>

Edit or create, if needed, slaves with the host names of the data nodes:

[fedora@h-mstr hadoop]$ cat slaves
h-slv1
h-slv2

Add the following lines to yarn-site.xml so that multiple node managers can be run:

<property>
<name>yarn.resourcemanager.hostname</name>
<value>h-mstr</value>
</property>

Now, create a snapshot, Hadoop-Base. Its creation will take time. It may not give you an indication of an error if it runs out of disk space!
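Edits like the XML changes above are mechanical, so they lend themselves to scripting. Below is a hedged sketch, using only Python's standard xml.etree.ElementTree module, that rewrites a property value in a Hadoop-style configuration file; the helper name and the trimmed sample snippet are mine, not part of the distribution.

```python
import xml.etree.ElementTree as ET

def set_property(xml_text, prop_name, new_value):
    """Return the configuration XML with the named property's value replaced."""
    root = ET.fromstring(xml_text)
    for prop in root.findall('property'):
        if prop.findtext('name') == prop_name:
            prop.find('value').text = new_value
    return ET.tostring(root, encoding='unicode')

core_site = """<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:8020</value>
  </property>
</configuration>"""

print(set_property(core_site, 'fs.default.name', 'hdfs://h-mstr:8020'))
```

The same helper could be pointed at mapred-site.xml or yarn-site.xml. One caveat: ElementTree does not preserve XML comments, so for files where the comments matter, a text-based edit may be preferable.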


Launch instances h-slv1 and h-slv2 serially, using Hadoop-Base as the instance boot source. Launching of the first instance from a snapshot is pretty slow. In case the IP addresses are not the same as your guess in /etc/hosts, edit /etc/hosts on each of the three nodes to the correct value. For your convenience, you may want to make entries for h-slv1 and h-slv2 in the desktop's /etc/hosts file as well.

The following commands should be run from Fedora on h-mstr. Reformat the namenode to make sure that the single node tests are not causing any unexpected issues:

$ sudo runuser hdfs -s /bin/bash /bin/bash -c "hadoop namenode -format"

Start the hadoop services on h-mstr:

$ sudo systemctl start hadoop-namenode hadoop-datanode hadoop-nodemanager hadoop-resourcemanager

Start the datanode and yarn services on the slave nodes:

$ ssh -t fedora@h-slv1 sudo systemctl start hadoop-datanode hadoop-nodemanager
$ ssh -t fedora@h-slv2 sudo systemctl start hadoop-datanode hadoop-nodemanager

Create the hdfs directories and a directory for user fedora, as on a single node:

$ sudo hdfs-create-dirs
$ sudo runuser hdfs -s /bin/bash /bin/bash -c "hadoop fs -mkdir /user/fedora"
$ sudo runuser hdfs -s /bin/bash /bin/bash -c "hadoop fs -chown fedora /user/fedora"

You can run the same tests again. Although you are using three nodes, the improvement in performance compared to the single node is not expected to be noticeable, as the nodes are running on a single desktop. The pi example took about one minute on the three nodes, compared to the 90 seconds taken earlier. Terasort took 7 minutes instead of 8.

Note: I used an AMD Phenom II X4 965 with 16GB RAM to arrive at the timings. All virtual machines and their data were on a single physical disk.

Both OpenStack and MapReduce are collections of interrelated services working together. Diagnosing problems, especially in the beginning, is tough, as each service has its own log files. It takes a while to get used to realising where to look. However, once these are working, it is incredible how easy they make distributed processing!

By: Dr Anil Seth
The author has earned the right to do what interests him. You can find him online at http://sethanil.com, http://sethanil.blogspot.com, and reach him via email at anil@sethanil.com
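For the curious, the pi job used in these tests estimates π by Monte Carlo sampling: each map task counts random points that land inside the unit quarter circle, and the reduce step combines the counts. The same map/reduce structure can be sketched in plain Python; the function names here are mine and purely illustrative.

```python
import random

def map_task(samples, seed):
    """One 'mapper': count random points falling inside the unit quarter circle."""
    rng = random.Random(seed)
    return sum(1 for _ in range(samples)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)

def estimate_pi(maps=10, samples=100000):
    """The 'reduce' step: combine the mappers' counts into one estimate."""
    inside = sum(map_task(samples, seed) for seed in range(maps))
    return 4.0 * inside / (maps * samples)

print(estimate_pi())  # close to 3.14
```

Hadoop's version distributes the map tasks across the cluster's node managers; the arithmetic is the same.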

OSFY Magazine Attractions During 2014-15

Month          | Theme                               | Featured List                      | Buyers Guide
March 2014     | Network monitoring                  | Security                           | -
April 2014     | Android Special                     | Anti Virus                         | Wifi Hotspot Devices
May 2014       | Backup and Data Storage             | Certification                      | External Storage
June 2014      | Open Source on Windows              | Mobile Apps                        | UTMs for SMEs
July 2014      | Firewall and Network security       | Web Hosting Solutions Providers    | MFD Printers for SMEs
August 2014    | Kernel Development                  | Big Data Solution Providers        | SSDs for Servers
September 2014 | Open Source for Start-ups           | Cloud                              | Android Devices
October 2014   | Mobile App Development              | Training on Programming Languages  | Projectors
November 2014  | Cloud Special                       | Virtualisation Solutions Providers | Network Switches and Routers
December 2014  | Web Development                     | Leading Ecommerce Sites            | AV Conferencing
January 2015   | Programming Languages               | IT Consultancy Service Providers   | Laser Printers for SMEs
February 2015  | Top 10 of Everything on Open Source | Storage Solutions Providers        | Wireless Routers



Developers Insight

Improve Python Code by Using a Profiler

The line_profiler gives a line-by-line analysis of Python code and can thus identify bottlenecks that slow down the execution of a program. By making modifications to the code based on the results of this profiler, developers can improve the code and refine the program.

Have you ever wondered which module is slowing down your Python program and how to optimise it? Well, there are profilers that can come to your rescue.
Profiling, in simple terms, is the analysis of a program to measure the memory used by a certain module, the frequency and duration of function calls, and the time complexity of the same. Such profiling tools are termed profilers. This article will discuss the line_profiler for Python.

Installation
Installing pre-requisites: Before installing line_profiler, make sure you install these pre-requisites:
a) For Ubuntu/Debian-based systems (recent versions):

sudo apt-get install mercurial python python3 python-pip python3-pip cython cython3

b) For Fedora systems:

sudo yum install -y mercurial python python3 python-pip

Note: 1. I have used the -y argument so that yum installs the packages automatically, without asking for confirmation.
2. Mac users can use Homebrew to install these packages.

Cython is a pre-requisite because the source releases require a C compiler. If the Cython package is not found or is too old in your current Linux distribution version, install it by running the following command in a terminal:

sudo pip install Cython

Note: Mac OS X users can install Cython using pip.


Cloning line_profiler: Let us begin by cloning the line_profiler source code from Bitbucket. To do so, run the following command in a terminal:

hg clone https://bitbucket.org/robertkern/line_profiler

The above repository is the official line_profiler repository, with support for Python 2.4 - 2.7.x.
For Python 3.x support, we will need to clone a fork of the official source code that provides Python 3.x compatibility for line_profiler and kernprof:

hg clone https://bitbucket.org/kmike/line_profiler

Installing line_profiler: Navigate to the cloned repository by running the following command in a terminal:

cd line_profiler

To build and install line_profiler on your system, run the following command:
a) For the official source (supports Python 2.4 - 2.7.x):

sudo python setup.py install

b) For the forked source (supports Python 3.x):

sudo python3 setup.py install

Using line_profiler
Adding the profiler to your code: Since line_profiler has been designed to be used as a decorator, we need to decorate the specified function using a @profile decorator. We can do so by adding an extra line before a function, as follows:

@profile
def foo(bar):
    .....

Running line_profiler: Once the slow module is profiled, the next step is to run the line_profiler, which will give a line-by-line computation of the code within the profiled function.
Open a terminal, navigate to the folder where the .py file is located and type the following command:

kernprof.py -l example.py; python3 -m line_profiler example.py.lprof

Figure 1: line_profiler output

Note: I have combined both the commands in a single line, separated by a semicolon ';', to immediately show the profiled results.

You can run the two commands separately, or run kernprof.py with the -v argument to view the formatted result in the terminal.
kernprof.py -l profiles the function in example.py line by line; the -l argument stores the result in a binary file with a .lprof extension (here, example.py.lprof).
We then run line_profiler on this binary file by using the -m line_profiler argument. Here, -m is followed by the module name, i.e., line_profiler.

Case study: We will use the Gnome-Music source code for our case study. There is a module named _connect_view in the view.py file, which handles the different views (artists, albums, playlists, etc) within the music player. This module is reportedly running slow because a variable is initialised each time the view is changed.
By profiling the source code, we get the following result:

Wrote profile results to gnome-music.lprof
Timer unit: 1e-06 s

File: ./gnomemusic/view.py
Function: _connect_view at line 211
Total time: 0.000627 s

Line #  Hits  Time  Per Hit  % Time  Line Contents
==================================================
211                                  @profile
212                                  def _connect_view(self):
213     4     205   51.2     32.7        vadjustment = self.view.get_vadjustment()
214     4     98    24.5     15.6        self._adjustmentValueId = vadjustment.connect(
215     4     79    19.8     12.6            'value-changed',
216     4     245   61.2     39.1            self._on_scrolled_win_change)

In the above code, line no. 213, vadjustment = self.view.get_vadjustment(), is called too many times, which makes the process slower than expected. After caching (initialising) it in the init function, we get the following result, tested under the same conditions. You can see that there is a significant improvement in the results (Figure 2).

Figure 2: Optimised code line_profiler output

Wrote profile results to gnome-music.lprof
Timer unit: 1e-06 s

File: ./gnomemusic/view.py
Function: _connect_view at line 211
Total time: 0.000466 s

Line #  Hits  Time  Per Hit  % Time  Line Contents
==================================================
211                                  @profile
212                                  def _connect_view(self):
213     4     86    21.5     18.5        self._adjustmentValueId = vadjustment.connect(
214     4     161   40.2     34.5            'value-changed',
215     4     219   54.8     47.0            self._on_scrolled_win_change)

Understanding the output
Here is an analysis of the output shown in the above snippet.
Function: Displays the name of the function that is profiled and its line number.
Line #: The line number of the code in the respective file.
Hits: The number of times the code in the corresponding line was executed.
Time: The total amount of time spent in executing the line, in Timer units (i.e., 1e-06 s here). This may vary from system to system.
Per hit: The average amount of time spent in executing the line once, in Timer units.
% time: The percentage of time spent on the line with respect to the total amount of recorded time spent in the function.
Line contents: Displays the actual source code.

Note: If you make changes in the source code, you need to run kernprof and line_profiler again in order to profile the updated code and get the latest results.

Advantages
line_profiler helps us profile our code line by line, giving the number of hits, the time taken for each hit and the % time. This helps us understand which part of our code is running slow. It also helps in testing large projects and measuring the time spent by modules in executing a particular function. Using this data, we can commit changes and improve our code to build faster and better programs.

References
[1] http://pythonhosted.org/line_profiler/
[2] http://jacksonisaac.wordpress.com/2013/09/08/using-line_profiler-with-python/
[3] https://pypi.python.org/pypi/line_profiler
[4] https://bitbucket.org/robertkern/line_profiler
[5] https://bitbucket.org/kmike/line_profiler

By: Jackson Isaac
The author is an active open source contributor to projects like gnome-music, Mozilla Firefox and Mozillians. Follow him on jacksonisaac.wordpress.com or email him at jacksonisaac2008@gmail.com
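One practical note on the @profile decorator used in this article: the name profile only exists when the script is run under kernprof, so running the same script directly with python raises a NameError. A common hedge (a convention of mine, not something line_profiler mandates) is a no-op fallback:

```python
# example.py -- runs both under kernprof and as a plain Python script
try:
    profile  # kernprof defines this name when it runs the script
except NameError:
    def profile(func):
        """No-op stand-in so 'python example.py' works without kernprof."""
        return func

@profile
def count_even(limit):
    return sum(1 for i in range(limit) if i % 2 == 0)

print(count_even(10))  # -> 5
```

With this guard in place, the same file can be profiled with kernprof.py -l example.py or executed normally, without editing the decorator in and out.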



Developers Insight

Understanding the Document Object Model (DOM) in Mozilla

This article is an introduction to the DOM programming interface and the DOM inspector,
which is a tool that can be used to inspect and edit the live DOM of any Web document or
XUL application.

T
he Document Object Model (DOM) is a programming objects. For example, the document object that represents the
interface for HTML and XML documents. It provides document itself, the tableObject that implements the special
a structured representation of a document and it HTMLTableElement DOM interface to access the HTML
defines a way that the structure can be accessed from the tables, and so forth.
programs so that they can change the document structure,
style and content. The DOM provides a representation of the Why is DOM important?
document as a structured group of nodes and objects that have Dynamic HTML (DHTML) is a term used by some vendors
properties and methods. Essentially, it connects Web pages to to describe the combination of HTML, style sheets and
scripts or programming languages. scripts that allow documents to be animated. The W3C DOM
A Web page is a document that can either be displayed in working group is aiming to make sure interoperable and
the browser window or as an HTML source that is in the same language-neutral solutions are agreed upon.
document. The DOM provides another way to represent, store As Mozilla claims the title of Web Application Platform,
and manipulate that same document. In simple terms, we can support for the DOM is one of the most requested features; in
say that the DOM is a fully object-oriented representation of a fact, it is a necessity if Mozilla wants to be a viable alternative
Web page, which can be modified by any scripting language. to the other browsers. The user interface of Mozilla (also
The W3C DOM standard forms the basis of the DOM Firefox and Thunderbird) is built using XUL and the DOM to
implementation in most modern browsers. Many browsers manipulate its own user interface.
offer extensions beyond the W3C standard.
All the properties, methods and events available for How do I access the DOM?
manipulating and creating the Web pages are organised into You dont have to do anything special to begin using the

30 | September 2014 | OPEN SOURCE For You | www.OpenSourceForU.com


Insight Developers

Figure 1: DOM inspector


Figure 2: Inspecting content documents
DOM. Different browsers have different implementations of it, which exhibit varying degrees of conformity to the actual DOM standard, but every browser uses some DOM to make Web pages accessible to the script.
When you create a script, whether it's inline in a script element or included in the Web page by means of a script loading instruction, you can immediately begin using the API for the document or window elements. This is to manipulate the document itself or to get at the children of that document, which are the various elements in the Web page.
Your DOM programming may be something as simple as the following, which displays an alert message by using the alert() function from a window object, or it may use more sophisticated DOM methods to actually create content, as in the longer examples that follow:

<body onload="window.alert('welcome to my home page!');">

Aside from the script element in which the JavaScript is defined, this JavaScript sets a function to run when the document is loaded. This function creates a new element, H1, adds text to that element, and then adds the H1 to the tree for this document, as shown below:

<html>
<head>
<script>
// run this function when the document is loaded
window.onload = function() {
    // create a couple of elements
    // in an otherwise empty HTML page
    heading = document.createElement("h1");
    heading_text = document.createTextNode("Big Head!");
    heading.appendChild(heading_text);
    document.body.appendChild(heading);
}
</script>
</head>
<body>
</body>
</html>

DOM interfaces
These interfaces just give you an idea about the actual things that you can use to manipulate the DOM hierarchy. The object representing the HTML form element gets its name property from the HTMLFormElement interface, but its className property from the HTMLElement interface. In both cases, the property you want is simply in the form object.

Interfaces and objects
Many objects borrow from several different interfaces. The table object, for example, implements a specialised HTML table element interface, which includes such methods as createCaption and insertRow. Since an HTML element is also, as far as the DOM is concerned, a node in the tree of nodes that makes up the object model for a Web page or an XML page, the table element also implements the more basic node interface, from which the element derives.
When you get a reference to a table object, as in the following example, you routinely use all three of these interfaces interchangeably on the object, perhaps unknowingly:

var table = document.getElementById("table");
var tableAttrs = table.attributes; // Node/Element interface
for (var i = 0; i < tableAttrs.length; i++) {
    // HTMLTableElement interface: border attribute
    if (tableAttrs[i].nodeName.toLowerCase() == "border")
        table.border = "1";
}
// HTMLTableElement interface: summary attribute
table.summary = "note: increased border";

Core interfaces in the DOM
These are some of the important and most commonly used interfaces in the DOM. These common APIs are used in the longer examples of DOM code, and you will often see them, in the form of methods and properties, when you use the DOM.
The interfaces of the document and window objects are generally used most often in DOM programming. In simple terms, the window object represents something like the browser, and the document object is the root of the document itself. The element inherits from the generic node interface and, together, these two interfaces provide many of the methods and properties you use on individual elements. These elements may also have specific interfaces for dealing with the kind of data those elements hold, as in the table object example.
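As a small illustrative page tying these interfaces together (the element id demo is invented for this sketch), the window, document, element and node methods can be seen in one place:

<div id="demo"></div>
<script>
// window interface: run after the document has loaded
window.onload = function () {
    // document interface: look up an element node
    var demo = document.getElementById("demo");
    // element interface: set an attribute
    demo.setAttribute("title", "demo block");
    // node interface: grow the tree
    demo.appendChild(document.createTextNode("Hello, DOM"));
};
</script>

Saved as an HTML file and opened in a browser, the text appears inside the div, with the attribute visible as a tooltip.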




Figure 3: Inspecting Chrome documents

Figure 4: Inspecting arbitrary URLs

The following are a few common APIs in XML and Web page scripting that show the use of the DOM:

document.getElementById(id)
element.getElementsByTagName(name)
document.createElement(name)
parentNode.appendChild(node)
element.innerHTML
element.style.left
element.setAttribute
element.getAttribute
element.addEventListener
window.content
window.onload
window.dump
window.scrollTo

Testing the DOM API
Here, you will be provided samples for every interface that you can use in Web development. In some cases, the samples are complete HTML pages, with the DOM access in a <script> element, the interface (e.g., buttons) necessary to fire up the script in a form, and the HTML elements upon which the DOM operates listed as well. When this is the case, you can cut and paste the example into a new HTML document, save it, and run the example from the browser.
There are some cases, however, when the examples are more concise. To run examples that only demonstrate the basic relationship of the interface to the HTML elements, you may want to set up a test page in which interfaces can be easily accessed from scripts.

An introduction to the DOM inspector
The DOM inspector is a Mozilla extension that you can access from the Tools -> Web Development menu in SeaMonkey, by selecting the DOM inspector menu item from the Tools menu in Firefox and Thunderbird, or by using Ctrl/Cmd+Shift+I in either application. The DOM inspector is a standalone extension; it supports all toolkit applications, and it's possible to embed it in your own XULRunner app. The DOM inspector can serve as a sanity check to verify the state of the DOM, or it can be used to manipulate the DOM manually, if desired.

Figure 5: Inspecting a Web page

When you first start the DOM inspector, you are presented with a two-pane application window that looks a little like the main Mozilla browser. Like the browser, the DOM inspector includes an address bar and some of the same menus. In SeaMonkey, additional global menus are available.

Using the DOM inspector
Once you've opened the document for the page you are interested in, you'll see that it loads the DOM nodes viewer in the document pane and the DOM node viewer in the object pane. In the DOM nodes viewer, there should be a structured, hierarchical view of the DOM.
By clicking around in the document pane, you'll see that the viewers are linked; whenever you select a new node from the DOM nodes viewer, the DOM node viewer is automatically updated to reflect the information for that node. Linked viewers are the first major aspect to understand when learning how to use the DOM inspector.

Inspecting a document
When the DOM inspector opens, it may or may not load an associated document, depending on the host application. If it doesn't automatically load a document, or loads a document other than the one you'd like to inspect, you can select the desired document in a few different ways.




Figure 6: Finding app content

Figure 7: Search on Click

There are three ways of inspecting any document, which are described below.
Inspecting content documents: The Inspect Content Document menu popup can be accessed from the File menu, and it will list the currently loaded content documents. In the Firefox and SeaMonkey browsers, these will be the Web pages you have opened in tabs. For Thunderbird and SeaMonkey Mail and News, any messages you're viewing will be listed here.
Inspecting Chrome documents: The Inspect Chrome Document menu popup can be accessed from the File menu, and it will contain the list of currently loaded Chrome windows and sub-documents. A browser window and the DOM inspector are likely to already be open and displayed in this list. The DOM inspector keeps track of all the windows that are open, so to inspect the DOM of a particular window in the DOM inspector, simply access that window as you would normally do and then choose its title from this dynamically updated menu list.
Inspecting arbitrary URLs: We can also inspect the DOM of arbitrary URLs by using the Inspect a URL menu item in the File menu, or by just entering a URL into the DOM inspector's address bar and clicking Inspect or pressing Enter. We should not use this approach to inspect Chrome documents, but instead ensure that the Chrome document loads normally, and use the Inspect Chrome Document menu popup to inspect the document.
When you inspect a Web page by this method, a browser pane at the bottom of the DOM inspector window will open up, displaying the Web page. This allows you to use the DOM inspector without having to use a separate browser window, or without embedding a browser in your application at all. If you find that the browser pane takes up too much space, you may close it, but you will not be able to visually observe any of the consequences of your actions.

DOM inspector viewers
You can use the DOM nodes viewer in the document pane of the DOM inspector to find and inspect the nodes you are interested in. One of the biggest and most immediate advantages that this brings to your Web and application development is that it makes it possible to find the mark-up and the nodes in which the interesting parts of a page or a piece of the user interface are defined.
One common use of the DOM inspector is to find the name and location of a particular icon being used in the




user interface, which is not an easy task otherwise. If you're inspecting a Chrome document, as you select nodes in the DOM nodes viewer, the rendered versions of those nodes are highlighted in the user interface itself. Note that there are bugs that prevent the flasher from the DOM inspector APIs from working currently on certain platforms.
If you inspect the main browser window, for example, and select nodes in the DOM nodes viewer, you will see the various parts of the browser interface being highlighted with a blinking red border. You can traverse the structure and go from the topmost parts of the DOM tree to lower level nodes, such as the search-go-button icon that lets users perform a query using the selected search engine.
The list of viewers available from the viewer menu gives you some idea about how extensive the DOM inspector's capabilities are. The following descriptions provide an overview of these viewers' capabilities:
1. The DOM nodes viewer shows attributes of nodes that can take them, or the text content of text nodes, comments and processing instructions. The attributes and text contents may also be edited.
2. The Box Model viewer gives various metrics about XUL and HTML elements, including placement and size.
3. The XBL Bindings viewer lists the XBL bindings attached to elements. If a binding extends to another binding, the binding menu list will list them in descending order to the root binding.
4. The CSS Rules viewer shows the CSS rules that are applied to the node. Alternatively, when used in conjunction with the Style Sheets viewer, the CSS Rules viewer lists all recognised rules from that style sheet. Properties may also be edited. Rules applying to pseudo-elements do not appear.
5. The JavaScript Object viewer gives a hierarchical tree of the object pane's subject. It also allows JavaScript to be evaluated by selecting the appropriate menu item in the context menu.
Three basic actions of DOM node viewers are described below.
Selecting elements by clicking: A powerful interactive feature of the DOM inspector is that when you have it open and have enabled this functionality by choosing Edit > Select Element by Click (or by clicking the little magnifying glass icon in the upper left portion of the DOM inspector application), you can click anywhere in a loaded Web page or the inspected Chrome document. The element you click will be shown in the document pane in the DOM nodes viewer and the information will be displayed in the object pane.
Searching for nodes in the DOM: Another way to inspect the DOM is to search for particular elements you're interested in by ID, class or attribute. When you select Edit > Find Nodes... or press Ctrl + F, the DOM inspector displays a Find dialogue that lets you find elements in various ways, and that gives you incremental searching by way of the <F3> shortcut key.
Updating the DOM, dynamically: Another feature worth mentioning is the ability the DOM inspector gives you to dynamically update information reflected in the DOM about Web pages, the user interface and other elements. Note that when the DOM inspector displays information about a particular node or sub-tree, it presents individual nodes and their values in an active list. You can perform actions on the individual items in this list from the Context menu and the Edit menu, both of which contain menu items that allow you to edit the values of those attributes.
This interactivity allows you to shrink and grow the element size, change icons, and do other layout-tweaking updates, all without actually changing the DOM as it is defined in the file on disk.

References
[1] https://developer.mozilla.org/en-US/docs/Web/API/Document_Object_Model
[2] https://developer.mozilla.org/en/docs/Web/API/Document

By: Anup Allamsetty
The author is an active contributor to Mozilla and GNOME. He blogs at https://anup07.wordpress.com/ and you can email him at allamsetty.anup@gmail.com.
described below.



Let's Try Developers

Experimenting with
More Functions in Haskell

We continue our exploration of the open source, advanced and purely functional
programming language, Haskell. In the third article in the series, we will focus on more
Haskell functions, conditional constructs and their usage.

A function in Haskell has the function name followed by arguments. An infix operator function has operands on either side of it. A simple infix add operation is shown below:

*Main> 3 + 5
8

If you wish to convert an infix function to a prefix function, it must be enclosed within parentheses:

*Main> (+) 3 5
8

Similarly, if you wish to convert a prefix function into an infix function, you must enclose the function name within backquotes (`). The elem function takes an element and a list, and returns True if the element is a member of the list:

*Main> 3 `elem` [1, 2, 3]
True
*Main> 4 `elem` [1, 2, 3]
False

Functions can also be partially applied in Haskell. A function that subtracts a given number from ten can be defined as:

diffTen :: Integer -> Integer
diffTen = (10 -)

Loading the file in GHCi and passing three as an argument yields:

*Main> diffTen 3
7

Haskell exhibits polymorphism. A type variable in a function is said to be polymorphic if it can take any type. Consider the last function that returns the last element of a list. Its type signature is:




*Main> :t last
last :: [a] -> a

The a in the above snippet refers to a type variable and can represent any type. Thus, the last function can operate on a list of integers or characters (a string):

*Main> last [1, 2, 3, 4, 5]
5
*Main> last "Hello, World"
'd'

You can use a where clause for local definitions inside a function, as shown in the following example, to compute the area of a circle:

areaOfCircle :: Float -> Float
areaOfCircle radius = pi * radius * radius
    where pi = 3.1415

Loading it in GHCi and computing the area for radius 1 gives:

*Main> areaOfCircle 1
3.1415

You can also use the let expression with the in statement to compute the area of a circle:

areaOfCircle :: Float -> Float
areaOfCircle radius = let pi = 3.1415 in pi * radius * radius

Executing the above with input radius 1 gives:

*Main> areaOfCircle 1
3.1415

Indentation is very important in Haskell as it helps in code readability; the compiler will emit errors otherwise. You must make use of white spaces instead of tabs when aligning code. If the let and in constructs in a function span multiple lines, they must be aligned vertically as shown below:

compute :: Integer -> Integer -> Integer
compute x y =
    let a = x + 1
        b = y + 2
    in
    a * b

Loading the example with GHCi, you get the following output:

*Main> compute 1 2
8

Similarly, the if and else constructs must be neatly aligned. The else statement is mandatory in Haskell. For example:

sign :: Integer -> String
sign x =
    if x > 0
    then "Positive"
    else
        if x < 0
        then "Negative"
        else "Zero"

Running the example with GHCi, you get:

*Main> sign 0
"Zero"
*Main> sign 1
"Positive"
*Main> sign (-1)
"Negative"

The case construct can be used for pattern matching against possible expression values. It needs to be combined with the of keyword. The different values need to be aligned, and the resulting action must be specified after the -> symbol for each case. For example:

sign :: Integer -> String
sign x =
    case compare x 0 of
        LT -> "Negative"
        GT -> "Positive"
        EQ -> "Zero"

The compare function compares two arguments and returns LT if the first argument is lesser than the second, GT if the first argument is greater than the second, and EQ if both are equal. Executing the above example, you get:

*Main> sign 2
"Positive"
*Main> sign 0
"Zero"
*Main> sign (-2)
"Negative"

The sign function can also be expressed using guards (|) for readability. The action for a matching case must be specified after the = sign. You can use a default guard with the otherwise keyword:

sign :: Integer -> String
sign x
    | x > 0 = "Positive"




    | x < 0 = "Negative"
    | otherwise = "Zero"

The guards have to be neatly aligned:

*Main> sign 0
"Zero"
*Main> sign 3
"Positive"
*Main> sign (-3)
"Negative"

There are three very important higher order functions in Haskell: map, filter and fold.
The map function takes a function and a list, and applies the function to each and every element of the list. Its type signature is:

*Main> :t map
map :: (a -> b) -> [a] -> [b]

The first function argument accepts an element of type a and returns an element of type b. An example of adding two to every element in a list can be implemented using map:

*Main> map (+ 2) [1, 2, 3, 4, 5]
[3,4,5,6,7]

The filter function accepts a predicate function for evaluation, and a list, and returns the list with those elements that satisfy the predicate. For example:

*Main> filter (> 0) [-2, -1, 0, 1, 2]
[1,2]

Its type signature is:

filter :: (a -> Bool) -> [a] -> [a]

The predicate function for filter takes as its first argument an element of type a and returns True or False.
The fold function performs a cumulative operation on a list. It takes as arguments a function, an accumulator (starting with an initial value) and a list. It cumulatively aggregates the computation of the function on the accumulator value as well as each member of the list. There are two types of folds: the left and the right fold.

*Main> foldl (+) 0 [1, 2, 3, 4, 5]
15
*Main> foldr (+) 0 [1, 2, 3, 4, 5]
15

Their type signatures are, respectively:

*Main> :t foldl
foldl :: (a -> b -> a) -> a -> [b] -> a
*Main> :t foldr
foldr :: (a -> b -> b) -> b -> [a] -> b

The way the fold is evaluated differs between the two types, as demonstrated below:

*Main> foldl (+) 0 [1, 2, 3]
6
*Main> foldl (+) 1 [2, 3]
6
*Main> foldl (+) 3 [3]
6

A left fold can be represented as f (f (f a b1) b2) b3, where f is the function, a is the accumulator value, and b1, b2 and b3 are the elements of the list. The parentheses accumulate on the left. The computation looks like this:

*Main> (+) 0 1
1
*Main> (+) ((+) 0 1) 2
3
*Main> (+) ((+) ((+) 0 1) 2) 3
6

With the recursion, the expression is constructed and evaluated only when it is finally formed. It can thus cause a stack overflow, or never complete, when working with infinite lists. The foldr evaluation looks like this:

*Main> foldr (+) 0 [1, 2, 3]
6
*Main> foldr (+) 0 [1, 2] + 3
6
*Main> foldr (+) 0 [1] + 2 + 3
6

A right fold can be represented as f b1 (f b2 (f b3 a)), where f is the function, a is the accumulator value, and b1, b2 and b3 are the elements of the list. The computation looks like this:

*Main> (+) 3 0
3
*Main> (+) 2 ((+) 3 0)
5
*Main> (+) 1 ((+) 2 ((+) 3 0))
6

To be continued on page 44




Introducing
AngularJS

AngularJS is an open source Web application framework maintained by Google and the
community, which helps to build Single Page Applications (SPA). Let's get to know it better.

AngularJS can be introduced as a front-end framework capable of incorporating the dynamicity of JavaScript with HTML. The self-proclaimed "super heroic JavaScript MVW (Model View Whatever)" framework is maintained by Google and many other developers at GitHub. This open source framework works its magic on Web applications of the Single Page Applications (SPA) category. The logic behind an SPA is that an initial page is loaded at the start of an application from the server. When an action is performed, the application fetches the required resources from the server and adds them to the initial page. The key point here is that an SPA just makes one server round trip, providing you with the initial page. This makes your applications very responsive.

Why AngularJS?
AngularJS brings out the beauty in Web development. It is extremely simple to understand and code. If you're familiar with HTML and JavaScript, you can write the Hello World program in minutes. With the help of Angular, the combined power of HTML and JavaScript can be put to maximum use. One of the prominent features of Angular is that it is extremely easy to test, and that makes it very suitable for creating large-scale applications. Also, the Angular community, comprising Google's developers primarily, is very active in the development process. Google Trends gives assuring proof of Angular's future in the field of Web development (Figure 1).

Core features
Before getting into the basics of AngularJS, you need to understand two key terms: templates and models. The HTML page that is rendered out to you is pretty much the template. So basically, your template has HTML, Angular entities (directives, filters, model variables, etc) and CSS (if necessary). The example code given below for data binding is a template.
In an SPA, the data and presentation of data is separated by a model layer that handles data and a view layer that reads




from models. This helps an SPA in redrawing any part of the UI without requiring a server round trip to retrieve HTML. When the data is updated, its view is notified and the altered data is produced in the view.

Figure 1: Report from Google Trends

Data binding
AngularJS provides you with two-way binding between the model variables and HTML elements. One-way binding would mean a one-way relation between the two: when the model variables are updated, so are the values in the HTML elements, but not the other way around. Let's understand two-way binding by looking at an example:

<html ng-app>
<head>
<script src="http://ajax.googleapis.com/ajax/libs/angularjs/1.0.7/angular.min.js">
</script>
</head>
<body ng-init="yourtext = 'Data binding is cool!'">
Enter your text: <input type="text" ng-model="yourtext" />
<strong>You entered :</strong> {{yourtext}}
</body>
</html>

The model variable yourtext is bound to the HTML input element. Whenever you change the value in the input box, yourtext gets updated. Also, the value of the HTML input box is initialised to that of the yourtext variable.

Directives
In the above example, many words like ng-app, ng-init and ng-model may have struck you as odd. Well, these are attributes that represent directives - ngApp, ngInit and ngModel, respectively. As described in the official AngularJS developer guide, "Directives are markers on a DOM element (such as an attribute, element name, comment or CSS class) that tell AngularJS's HTML compiler ($compile) to attach a specified behaviour to that DOM element." Let's look into the purpose of some common directives.
ngApp: This directive bootstraps your Angular application and considers the HTML element in which the attribute is specified to be the root element of Angular. In the above example, the entire HTML page becomes an Angular application, since the ng-app attribute is given to the <html> tag. If it was given to the <body> tag, the body alone becomes the root element. Or you could create your own Angular module and let that be the root of your application. An AngularJS module might consist of controllers, services, directives, etc. To create a new module, use the following commands:

var moduleName = angular.module('moduleName', [ ]);
// The array is a list of modules our module depends on

Also, remember to initialise your ng-app attribute to moduleName. For instance:

<html ng-app="moduleName">

ngModel: The purpose of this directive is to bind the view with the model. For instance:

<input type="text" ng-model="sometext" />
<p> Your text: {{ sometext }}</p>

Here, the model sometext is bound (two-way) to the view. The double curly braces will notify Angular to put the value of sometext in its place.
ngClick: How this directive functions is similar to that of the onclick event of JavaScript.

<button ng-click="mul = mul * 2" ng-init="mul = 1"> Multiply with 2 </button>
After multiplying : {{mul}}

Whenever the button is clicked, mul gets multiplied by two.

Filters
A filter helps you in modifying the output to your view. You can subject your expression to any kind of constraints to give out the desired output. The format is:

{{ expression | filter }}

You can filter the output of filter1 again with filter2, using the following format:

{{ expression | filter1 | filter2 }}

The following code filters the members of the people array using the name as the criteria:




<body ng-init="people=[{name:'Tony',branch:'CSE'},
               {name:'Santhosh', branch:'EEE'},
               {name:'Manisha', branch:'ECE'}];">
Name: <input type="text" ng-model="name"/>
<li ng-repeat="person in people | filter: name"> {{person.name}} - {{person.branch}}
</li>
</body>

Advanced features
Controllers: To bring some more action to our app, we need controllers. These are JavaScript functions that add behaviour to our app. Let's make use of the ngController directive to bind the controller to the DOM:

<body ng-controller="ContactController">
<input type="text" ng-model="name"/>
<button ng-click="disp()">Alert !</button>
<script type="text/javascript">
function ContactController($scope) {
    $scope.disp = function( ){
        alert("Hey " + $scope.name);
    };
}
</script>
</body>

One term to be explained here is $scope. To quote from the developer guide: "Scope is an object that refers to the application model." With the help of scope, the model variables can be initialised and accessed. In the above example, when the button is clicked, disp( ) comes into play, i.e., the scope is assigned with a behaviour. Inside disp( ), the model variable name is accessed using scope.
Views and routes: In any usual application, we navigate to different pages. In an SPA, instead of pages, we have views. So, you can use views to load different parts of your application. Switching to different views is done through routing. For routing, we make use of the ngRoute and ngView directives:

var miniApp = angular.module( 'miniApp', ['ngRoute'] );

miniApp.config(function( $routeProvider ){
    $routeProvider.when( '/home', { templateUrl: 'partials/home.html' } );
    $routeProvider.when( '/animal', { templateUrl: 'partials/animals.html' } );
    $routeProvider.otherwise( { redirectTo: '/home' } );
});

ngRoute enables routing in applications and $routeProvider is used to configure the routes. home.html and animals.html are examples of partials; these are files that will be loaded to your view, depending on the URL passed. For example, you could have an app that has icons and whenever an icon is clicked, a link is passed. Depending on the link, the corresponding partial is loaded to the view. This is how you pass links:

<a href='#/home'><img src='partials/home.jpg' /></a>
<a href='#/animal'><img src='partials/animals.jpg' /></a>

Don't forget to add the ng-view attribute to the HTML component of your choice. That component will act as a placeholder for your views.

<div ng-view=""></div>

Services: According to the official documentation of AngularJS, "Angular services are substitutable objects that are wired together using dependency injection (DI)." You can use services to organise and share code across your app. With DI, every component will receive a reference to the service. Angular provides useful services like $http, $window and $location. In order to use these services in controllers, you can add them as dependencies. As in:

var testapp = angular.module( 'testapp', [ ] );
testapp.controller ( 'testcont', function( $window ) {
    //body of controller
});

To define a custom service, write the following:

testapp.factory ('serviceName', function( ) {
    var obj;
    return obj; // returned object will be injected to the component
                // that has called the service
});

Testing
Testing is done to correct your code on-the-go and avoid ending up with a pile of errors on completing your app's development. Testing can get complicated when your app grows in size and APIs start to get tangled up, but Angular has got its own defined testing schemes. Usually, two kinds of testing are employed: unit and end-to-end testing (E2E). Unit testing is used to test individual API components, while in E2E testing, the working of a set of components is tested.
The usual components of unit testing are describe( ), beforeEach( ) and it( ). You have to load the angular module before testing and beforeEach( ) does this. Also, this function




makes use of the injector method to inject dependencies. The test to be conducted is given in it( ). The test suite is describe( ), and both beforeEach( ) and it( ) come inside it. E2E testing makes use of all the above functions. One other function used is expect( ). This creates expectations, which verify that a particular piece of the application's state (the value of a variable or a URL) is the same as the expected value.
Recommended frameworks for unit testing are Jasmine and Karma, and for E2E testing, Protractor is the one to go with.

Who uses AngularJS?
Some of the following corporate giants use AngularJS:
Google
Sony (YouTube on PS3)
Virgin America
Nike
msnbc (msnbc.com)
You can find a lot of interesting and innovative apps on the Built with AngularJS page.

Competing technologies

Features         Ember.js  AngularJS  Backbone.js
Routing          Yes       Yes        Yes
Views            Yes       Yes        Yes
Two-way binding  Yes       Yes        No

The chart above covers only the core features of the three frameworks. Angular is the oldest of the lot and has the biggest community.

References
[1] http://singlepageappbook.com/goal.html
[2] https://github.com/angular/angular.js
[3] https://docs.angularjs.org/guide/
[4] http://karma-runner.github.io/0.12/index.html
[5] http://viralpatel.net/blogs/angularjs-introduction-hello-world-tutorial/
[6] https://builtwith.angularjs.org/

By: Tina Johnson
The author is a FOSS enthusiast who has contributed to Mediawiki and Mozilla's Bugzilla. She is also working on a project to build a browser (using AngularJS) for autistic children.
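As a hedged sketch of how the unit-testing pieces described above fit together, the following Jasmine-style spec reuses the testapp module from the services section and assumes ContactController is registered on that module, with Jasmine, Karma and the angular-mocks module set up; none of this wiring is shown in the article itself:

describe('ContactController', function () {
    var scope;

    // load the application module before each spec
    beforeEach(module('testapp'));

    // create the controller with a fresh scope
    beforeEach(inject(function ($rootScope, $controller) {
        scope = $rootScope.$new();
        $controller('ContactController', { $scope: scope });
    }));

    it('exposes a disp() function on the scope', function () {
        expect(typeof scope.disp).toBe('function');
    });
});

Karma would then run such specs in a real browser as part of the development workflow.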

To be continued from page 37

There are some cases, like condition checking, where f b1 can be computed even without requiring the subsequent arguments, and hence the foldr function can work with infinite lists. There is also a strict version of foldl (foldl') that forces the computation before proceeding with the recursion.
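Both points can be seen in a quick GHCi sketch: (&&) never looks at its second argument once the first is False, so foldr can short-circuit even over an infinite list, while the strict foldl' comes from Data.List:

*Main> foldr (&&) True (repeat False)
False
*Main> import Data.List (foldl')
*Main> foldl' (+) 0 [1..1000000]
500000500000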
If you want a reference to a matched pattern, you can use the as pattern syntax. The tail function accepts an input list and returns everything except the head of the list. You can write a tailString function that accepts a string as input and returns the string with the first character removed:

tailString :: String -> String
tailString "" = ""
tailString input@(x:xs) = "Tail of " ++ input ++ " is " ++ xs

The entire matched pattern is represented by input in the above code snippet.
Functions can be chained to create other functions. This is called composing functions. The mathematical definition is as under:

(f o g)(x) = f(g(x))

This dot (.) operator has a very high precedence and is right-associative. If you want to force an evaluation, you can use the function application operator ($), which has the lowest precedence and is also right-associative. For example:

*Main> (reverse ((++) "yrruC " (unwords ["skoorB", "lleksaH"])))
"Haskell Brooks Curry"

You can rewrite the above using the function application operator, which is right-associative:

Prelude> reverse $ (++) "yrruC " $ unwords ["skoorB", "lleksaH"]
"Haskell Brooks Curry"

You can also use the dot notation to make it even more readable, but the final argument needs to be evaluated first; hence, you need to use the function application operator for it:

*Main> reverse . (++) "yrruC " . unwords $ ["skoorB", "lleksaH"]
"Haskell Brooks Curry"

By: Shakthi Kannan
The author is a free software enthusiast and blogs at shakthimaan.com.

44 | September 2014 | OPEN SOURCE For You | www.OpenSourceForU.com


Let's Try Developers

Use Bugzilla
to Manage Defects in Software
In the quest for excellence in software products, developers have to go through the process of
defect management. The tool of choice for defect containment is Mozilla's Bugzilla. Learn how to
install, configure and use it to file a bug report and act on it.

In any project, defect management and various types of testing play key roles in ensuring quality. Defects need to be logged, tracked and closed to ensure the project meets quality expectations. Generating defect trends also helps project managers to take informed decisions and make the appropriate course corrections while the project is being executed. Bugzilla is one of the most popular open source defect management tools and helps project managers to track the complete lifecycle of a defect.

Installation and configuration of Bugzilla

Step 1: Getting the source code
Bugzilla is part of the Mozilla Foundation. Its latest releases are available from the official website. This article covers the installation of Bugzilla version 4.4.2. The steps mentioned here should apply to later releases as well. However, for version-specific changes, check the appropriate release notes. Here is the URL for downloading Bugzilla version 4.4.2 on a Linux system: http://www.bugzilla.org/releases/4.4.2/
Pre-requisites for Bugzilla include a CGI-enabled Web server (an Apache HTTP server), a database engine (MySQL, PostgreSQL, etc) and the latest Perl modules. Ensure all of them are on your Linux system before proceeding with the installation. This specific installation covers MySQL as the backend database.

Step 2: User and database creation
Before proceeding with the installation, the user and database need to be created by following the steps mentioned below. The names used here for the database and the users are specific to this installation, and can change between installations.
Start the service by issuing the following command:

$ /etc/rc.d/init.d/mysql start

Trigger MySQL by issuing the following command (you will be asked for the root password, so ensure you keep it handy):

$ mysql -u root -p

Use the following statements, as shown at the MySQL prompt, for creating a user in the database for Bugzilla:

mysql> CREATE USER 'bugzilla'@'localhost' IDENTIFIED BY




'password';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'bugzilla'@'localhost';
mysql> FLUSH PRIVILEGES;
mysql> CREATE DATABASE bugzilla_db CHARACTER SET utf8;
mysql> GRANT SELECT,INSERT,UPDATE,DELETE,INDEX,ALTER,CREATE,DROP,REFERENCES ON bugzilla_db.* TO 'bugzilla'@'localhost' IDENTIFIED BY 'cspasswd';
mysql> FLUSH PRIVILEGES;
mysql> QUIT

Use the following command to connect the user with the database:

$ mysql -u bugzilla -p bugzilla_db

mysql> use bugzilla_db

Figure 1: Configuring Bugzilla by changing the localconfig file
Figure 2: Bugzilla main page
Figure 3: Defect lifecycle
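If you repeat this setup often, the statements above can be collected into one SQL file and fed to mysql in a single shot. A sketch, using the same 'bugzilla' and 'bugzilla_db' names as this walkthrough; note that the article's two listings use two different passwords ('password' and 'cspasswd'), so a single placeholder password is used here — change it for real deployments:

```shell
# Write the Bugzilla database bootstrap into a file.
cat > bugzilla_setup.sql <<'EOF'
CREATE USER 'bugzilla'@'localhost' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON *.* TO 'bugzilla'@'localhost';
FLUSH PRIVILEGES;
CREATE DATABASE bugzilla_db CHARACTER SET utf8;
GRANT SELECT,INSERT,UPDATE,DELETE,INDEX,ALTER,CREATE,DROP,REFERENCES
  ON bugzilla_db.* TO 'bugzilla'@'localhost' IDENTIFIED BY 'password';
FLUSH PRIVILEGES;
EOF

echo "Wrote $(wc -l < bugzilla_setup.sql) lines to bugzilla_setup.sql"
# Then run it as the MySQL root user (prompts for the root password):
#   mysql -u root -p < bugzilla_setup.sql
```

Keeping the bootstrap in a file makes it easy to recreate the database on another host or after a failed install attempt.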

Step 3: Bugzilla installation and configuration
After downloading the Bugzilla archive from the URL mentioned above, untar the package into the /var/www directory. All the configuration-related information can be modified via the localconfig file. To start with, set the variable $webservergroup to 'www' and set the other items as mentioned in Figure 1.
Following the configuration, installation can be completed by executing the following Perl script. Ensure this script is executed with root privileges:

$ ./checksetup.pl

Step 4: Integrating Bugzilla with Apache
Insert the following lines in the Apache server configuration file (httpd.conf) to integrate Bugzilla into it. Place the directory bugzilla inside www in our build folder:

<Directory /var/www/bugzilla>
AddHandler cgi-script .cgi
Options +ExecCGI
DirectoryIndex index.cgi index.html
AllowOverride Limit FileInfo Indexes Options
</Directory>

Our set-up is now ready. Let's hit the address in the browser to see the home page of our freshly deployed Web application (http://localhost/bugzilla).

Figure 4: New account creation

Defect lifecycle management
The main purpose of Bugzilla is to manage the defect lifecycle. Defects are created and logged in various phases of the project (e.g., functional testing), where they are created by the test engineer and assigned to development engineers for resolution. Along with that, managers or team members need to be aware of changes in the state of a defect to ensure that there is a good amount of traceability of the defects.
When the defect is created, it is given a 'new' state, after which it is assigned to a development engineer for resolution. Subsequently, it will get 'resolved' and eventually be moved to the 'closed' state.

Step 1: User account creation
To start using Bugzilla, various user accounts have to be created. In this example, Bugzilla is deployed on a server named 'hydrogen'. On the home page, click the 'New Account' link available in the header/footer of the pages (refer to Figure 4). You will be asked for your email address; enter it and click the 'Send' button. After registration is accepted, you should receive an email at the address you provided confirming your registration. Now all you need to do is to




click the 'Log in' link in the header/footer at the bottom of the page in your browser, enter your email address and the password you just chose into the login form, and click on the 'Log in' button. You will be redirected to the Bugzilla home page for defect interfacing.

Figure 5: New defect creation
Figure 6: Defect resolution
Figure 7: Simple search

Step 2: Reporting the new bug
1. Click the 'New' link available in the header/footer of the pages, or the 'File a bug' option displayed on the home page of the Bugzilla installation, as shown in Figure 5.
2. Select the product in which you found a bug. Please note that the administrator will be able to create an appropriate product and corresponding versions from his account, which is not demonstrated here.
3. You now see a form on which you can specify the component, the version of the program you were using, the operating system and platform your program is running on, and the severity of the bug, as shown in Figure 5.
4. If there is any attachment, like a screenshot of the bug, attach it using the 'Add an attachment' option shown at the bottom of the page; else, click on 'Submit Bug'.

Step 3: Defect resolution and closure
Once the bug is filed, the assignees (typically, developers) get an email. When the developers fix the bug successfully, adding details like a bug-fixing summary and marking the status as 'resolved', they can route the defect back to the tester or to the development team leader for further review. This can be easily done by changing the assignee field of the defect and filling it with an appropriate email ID. When the developers complete fixing the defect, it can be marked as shown in Figure 6. When the test engineers receive the resolved defect report, they can verify it and mark the status as 'closed'. At every step, notes from each individual are to be captured and logged along with the time-stamp. This helps in backtracking the defect in case any clarifications are required.

Figure 8: Simple dashboard of defects

Step 4: Reports and dashboards
Typically, in large-scale projects, there could be thousands of defects logged and fixed by hundreds of development and test engineers. To monitor the project at various phases, generation of reports and dashboards becomes very important. Bugzilla offers simple but very powerful search and reporting features with which all the necessary information can be obtained immediately. By exploring the 'Search' and 'Reports' options, one can easily figure out ways to generate reports. A couple of simple examples are provided in Figure 7 (search) and Figure 8 (reports). Outputs can be exported to formats like CSV for further analysis.
Bugzilla is a simple but powerful open source tool that helps in complete defect management in projects. Along with the information provided above, Bugzilla also exposes its source code, which can be explored for further scripting and programming. This helps to make Bugzilla a super-customised defect-tracking tool for effectively managing defects.

By: Satyanarayana Sampangi
Satyanarayana Sampangi is a Member - Embedded Software at Emertxe Information Technologies (http://www.emertxe.com). His area of interest lies in embedded C programming combined with data structures and micro-controllers. He likes to experiment with C programming and open source tools in his spare time to explore new horizons. He can be reached at satya@emertxe.com
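A closing aside on the CSV export mentioned in Step 4: once a search result has been downloaded as CSV, a few lines of Python can turn it into a quick status summary. This is a sketch; the column names ('Bug ID', 'Status', 'Assignee') and the sample rows are hypothetical and should be matched to the columns you selected while exporting:

```python
import csv
import io
from collections import Counter

# Stand-in for a Bugzilla CSV export; in practice you would use
# open("bugs.csv") on the file downloaded from the Search page.
sample = io.StringIO(
    "Bug ID,Status,Assignee\n"
    "101,RESOLVED,dev1@example.com\n"
    "102,NEW,dev2@example.com\n"
    "103,RESOLVED,dev1@example.com\n"
)

# Count defects per status for a one-line dashboard.
status_counts = Counter(row["Status"] for row in csv.DictReader(sample))
print(status_counts)  # Counter({'RESOLVED': 2, 'NEW': 1})
```

The same Counter idiom works per assignee or per component, which is often all a weekly status report needs.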



Developers How To

An Introduction to
Device Drivers in the Linux Kernel
In the article An Introduction to the Linux Kernel in the August 2014 issue of OSFY, we wrote and
compiled a kernel module. In the second article in this series, we move on to device drivers.

Have you ever wondered how a computer plays audio or shows video? The answer is: by using device drivers. A few years ago we would always install audio or video drivers after installing MS Windows XP. Only then were we able to listen to audio. Let us explore device drivers in this column.
A device driver (often referred to as a 'driver') is a piece of software that controls a particular type of device which is connected to the computer system. It provides a software interface to the hardware device, and enables access to the operating system and other applications. There are various types of drivers present in GNU/Linux, such as Character, Block, Network and USB drivers. In this column, we will explore only character drivers.
Character drivers are the most common drivers. They provide unbuffered, direct access to hardware devices. One can think of character drivers as a long sequence of bytes -- the same as regular files, but which can be accessed only in sequential order. Character drivers support at least the open(), close(), read() and write() operations. The text console, i.e., /dev/console, serial consoles /dev/stty*, and audio/video drivers fall under this category.
To make a device usable there must be a driver present for it. So let us understand how an application accesses data from a device with the help of a driver. We will discuss the following four major entities.
User-space application: This can be any simple utility like echo, or any complex application.
Device file: This is a special file that provides an interface for the driver. It is present in the file system as an ordinary file. The application can perform all supported operations on it, just like for an ordinary file. It can move, copy, delete, rename, read and write these device files.
Device driver: This is the software interface for the device and resides in the kernel space.
Device: This can be the actual device present at the hardware level, or a pseudo device.
Let us take an example where a user-space application sends data to a character device. Instead of using an actual device we are going to use a pseudo device. As the name suggests, this device is not a physical device. In GNU/Linux, /dev/null is the most commonly used pseudo device. This device accepts any kind of data (i.e., input) and simply discards it. And it doesn't produce any output.
Let us send some data to the /dev/null pseudo device:

[mickey]$ echo -n 'a' > /dev/null

In the above example, echo is a user-space application and null is a special file present in the /dev directory. There is a null driver present in the kernel to control the pseudo device.
To send or receive data to and from the device or application, use the corresponding device file that is connected to the driver through the Virtual File System (VFS) layer. Whenever an application wants to perform any operation on the actual device, it performs this on the device file. The VFS layer redirects those operations to the appropriate functions that are implemented inside the driver. This means that whenever an application performs the open() operation on a device file, in reality the open() function from the driver is invoked, and the same concept applies to the other functions. The implementation of these operations is device-specific.

Major and minor numbers
We have seen that the echo command directly sends data to the device file. Hence, it is clear that to send or receive data to and from the device, the application uses special device files. But how does communication between the device file and the driver take place? It happens via a pair of numbers referred to as major and minor numbers.
The command below lists the major and minor numbers associated with a character device file:




[bash]$ ls -l /dev/null
crw-rw-rw- 1 root root 1, 3 Jul 11 20:47 /dev/null

In the above output there are two numbers separated by a comma (1 and 3). Here, 1 is the major and 3 is the minor number. The major number identifies the driver associated with the device, i.e., which driver is to be used. The minor number is used by the kernel to determine exactly which device is being referred to. For instance, a hard disk may have three partitions. Each partition will have a separate minor number but only one major number, because the same storage driver is used for all the partitions.
Older kernels used to have a separate major number for each driver. But modern Linux kernels allow multiple drivers to share the same major number. For instance, /dev/full, /dev/null, /dev/random and /dev/zero use the same major number but different minor numbers. The output below illustrates this:

[bash]$ ls -l /dev/full /dev/null /dev/random /dev/zero
crw-rw-rw- 1 root root 1, 7 Jul 11 20:47 /dev/full
crw-rw-rw- 1 root root 1, 3 Jul 11 20:47 /dev/null
crw-rw-rw- 1 root root 1, 8 Jul 11 20:47 /dev/random
crw-rw-rw- 1 root root 1, 5 Jul 11 20:47 /dev/zero

The kernel uses the dev_t type to store major and minor numbers. The dev_t type is defined in the <linux/types.h> header file. Given below is the representation of the dev_t type from the header file:

#ifndef _LINUX_TYPES_H
#define _LINUX_TYPES_H

#define __EXPORTED_HEADERS__

#include <uapi/linux/types.h>

typedef __u32 __kernel_dev_t;

typedef __kernel_dev_t dev_t;

dev_t is an unsigned 32-bit integer, where 12 bits are used to store the major number and the remaining 20 bits are used to store the minor number. But don't try to extract the major and minor numbers directly. Instead, the kernel provides the MAJOR and MINOR macros that can be used to extract the major and minor numbers. The definition of the MAJOR and MINOR macros from the <linux/kdev_t.h> header file is given below:

#ifndef _LINUX_KDEV_T_H
#define _LINUX_KDEV_T_H

#include <uapi/linux/kdev_t.h>

#define MINORBITS 20
#define MINORMASK ((1U << MINORBITS) - 1)

#define MAJOR(dev) ((unsigned int) ((dev) >> MINORBITS))
#define MINOR(dev) ((unsigned int) ((dev) & MINORMASK))

If you have major and minor numbers and you want to convert them to the dev_t type, the MKDEV macro will do the needful. The definition of the MKDEV macro from the <linux/kdev_t.h> header file is given below:

#define MKDEV(ma,mi) (((ma) << MINORBITS) | (mi))

We now know what major and minor numbers are and the role they play. Let us see how we can allocate major numbers. Here is the prototype of register_chrdev():

int register_chrdev(unsigned int major, const char *name, struct file_operations *fops);

This function registers a major number for character devices. The arguments of this function are self-explanatory. The major argument implies the major number of interest, name is the name of the driver and appears in the /proc/devices area and, finally, fops is the pointer to the file_operations structure.
Certain major numbers are reserved for special drivers; hence, one should exclude those and use dynamically allocated major numbers. To allocate a major number dynamically, provide the value zero to the first argument, i.e., major == 0. This function will dynamically allocate and return a major number.
To deallocate an allocated major number, use the unregister_chrdev() function. The prototype is given below and the parameters of the function are self-explanatory:

void unregister_chrdev(unsigned int major, const char *name)

The values of the major and name parameters must be the same as those passed to the register_chrdev() function; otherwise, the call will fail.

File operations
So we know how to allocate/deallocate the major number, but we haven't yet connected any of our driver's operations to the major number. To set up a connection, we are going to use the file_operations structure. This structure is defined in the <linux/fs.h> header file.
Each field in the structure must point to the function in the driver that implements a specific operation, or be left NULL for unsupported operations. The example given below illustrates that.
Without discussing lengthy theory, let us write our first null driver, which mimics the functionality of the /dev/null pseudo device. Given below is the complete working code for the null driver.
Open a file using your favourite text editor and save the code given below as null_driver.c:




#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/fs.h>
#include <linux/kdev_t.h>

static int major;
static char *name = "null_driver";

static int null_open(struct inode *i, struct file *f)
{
	printk(KERN_INFO "Calling: %s\n", __func__);
	return 0;
}

static int null_release(struct inode *i, struct file *f)
{
	printk(KERN_INFO "Calling: %s\n", __func__);
	return 0;
}

static ssize_t null_read(struct file *f, char __user *buf, size_t len, loff_t *off)
{
	printk(KERN_INFO "Calling: %s\n", __func__);
	return 0;
}

static ssize_t null_write(struct file *f, const char __user *buf, size_t len, loff_t *off)
{
	printk(KERN_INFO "Calling: %s\n", __func__);
	return len;
}

static struct file_operations null_ops =
{
	.owner = THIS_MODULE,
	.open = null_open,
	.release = null_release,
	.read = null_read,
	.write = null_write
};

static int __init null_init(void)
{
	major = register_chrdev(0, name, &null_ops);
	if (major < 0) {
		printk(KERN_INFO "Failed to register driver.");
		return -1;
	}

	printk(KERN_INFO "Device registered successfully.\n");
	return 0;
}

static void __exit null_exit(void)
{
	unregister_chrdev(major, name);
	printk(KERN_INFO "Device unregistered successfully.\n");
}

module_init(null_init);
module_exit(null_exit);

MODULE_AUTHOR("Narendra Kangralkar.");
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Null driver");

Our driver code is ready. Let us compile and insert the module. In the article last month, we learnt how to write a Makefile for kernel modules:

[mickey]$ make
[root]# insmod ./null_driver.ko

We are now going to create a device file for our driver. But for this we need a major number, and we know that our driver's register_chrdev() function will allocate the major number dynamically. Let us find out this dynamically allocated major number from /proc/devices, which lists the registered character and block devices:

[root]# grep "null_driver" /proc/devices
248 null_driver

From the above output, we are going to use 248 as the major number for our driver. We are only interested in the major number, and the minor number can be anything within a valid range. I'll use 0 as the minor number. To create the character device file, use the mknod utility. Please note that to create the device file you must have superuser privileges:

[root]# mknod /dev/null_driver c 248 0

Now it's time for the action. Let us send some data to the pseudo device using the echo command and check the output of the dmesg command:

[root]# echo "Hello" > /dev/null_driver
[root]# dmesg
Device registered successfully.
Calling: null_open
Calling: null_write
Calling: null_release

Yes! We got the expected output. When open, write and close




operations are performed on a device file, the appropriate functions from our driver's code get called. Let us perform the read operation and check the output of the dmesg command:

[root]# cat /dev/null_driver
[root]# dmesg
Calling: null_open
Calling: null_read
Calling: null_release

To make things simple I have used printk() statements in every function. If we remove these statements, then /dev/null_driver will behave exactly the same as the /dev/null pseudo device. Our code is working as expected. Let us understand the details of our character driver.
First, take a look at the driver's functions. Given below are the prototypes of a few functions from the file_operations structure:

int (*open)(struct inode *i, struct file *f);
int (*release)(struct inode *i, struct file *f);
ssize_t (*read)(struct file *f, char __user *buf, size_t len, loff_t *off);
ssize_t (*write)(struct file *f, const char __user *buf, size_t len, loff_t *off);

The prototype of the open() and release() functions is exactly the same. These functions accept two parameters: the first is the pointer to the inode structure. All file-related information such as size, owner, access permissions of the file, file creation timestamps, number of hard-links, etc, is represented by the inode structure. And each open file is represented internally by the file structure. The open() function is responsible for opening the device and allocating the required resources. The release() function does exactly the reverse job: it closes the device and deallocates the resources.
As the name suggests, the read() function reads data from the device and sends it to the application. The first parameter of this function is the pointer to the file structure. The second parameter is the user-space buffer. The third parameter is the size, which implies the number of bytes to be transferred to the user-space buffer. And, finally, the fourth parameter is the file offset, which updates the current file position. Whenever the read() operation is performed on a device file, the driver should copy len bytes of data from the device to the user-space buffer buf and update the file offset off accordingly. This function returns the number of bytes read successfully. Our null driver doesn't read anything; that is why the return value is always zero, i.e., EOF.
The driver's write() function accepts the data from the user-space application. The first parameter of this function is the pointer to the file structure. The second parameter is the user-space buffer, which holds the data received from the application. The third parameter is len, which is the size of the data. The fourth parameter is the file offset. Whenever the write() operation is performed on a device file, the driver should transfer len bytes of data to the device and update the file offset off accordingly. Our null driver accepts input of any length; hence, the return value is always len, i.e., all bytes are written successfully.
In the next step we have initialised the file_operations structure with the appropriate driver's functions. In the initialisation function we have done the registration-related job, and we are deregistering the character device in the cleanup function.

Implementation of the full pseudo driver
Let us implement one more pseudo device, namely, full. Any write operation on this device fails and gives the ENOSPC error. This can be used to test how a program handles disk-full errors. Given below is the complete working code of the full driver:

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/fs.h>
#include <linux/kdev_t.h>

static int major;
static char *name = "full_driver";

static int full_open(struct inode *i, struct file *f)
{
	return 0;
}

static int full_release(struct inode *i, struct file *f)
{
	return 0;
}

static ssize_t full_read(struct file *f, char __user *buf, size_t len, loff_t *off)
{
	return 0;
}

static ssize_t full_write(struct file *f, const char __user *buf, size_t len, loff_t *off)
{
	return -ENOSPC;
}

static struct file_operations full_ops =
{
	.owner = THIS_MODULE,
	.open = full_open,
	.release = full_release,
	.read = full_read,
	.write = full_write
};

Continued on page 55



Developers Insight

Creating Dynamic Web Portals Using Joomla and WordPress
Joomla and WordPress are popular
Web content management
systems, which provide authoring,
collaboration and administration
tools designed to allow amateurs
to create and manage websites
with ease.

Nowadays, every organisation wishes to have an online presence for maximum visibility as well as reach. Industries from across different sectors have their own websites with detailed portfolios so that marketing as well as broadcasting can be integrated very effectively. Web 2.0 applications are quite popular in the global market. With Web 2.0, the applications developed are fully dynamic so that the website can provide customised results or output to the client. Traditionally, long-term core coding, using different programming or scripting languages like CGI Perl, Python, Java, PHP, ASP and many others, has been in vogue. But today excellent applications can be developed within very little time. The major factor behind the implementation of RAD frameworks is re-usability. By making changes to the existing code or by merely reusing the applications, development has now become very fast and easy.

Software frameworks
Software frameworks and content management systems (CMSs) are entirely different concepts. In the case of CMSs, the reusable modules, plugins and related components are provided with the source code and all that is required is to only plug in or plug out. The frameworks need to be installed and imported on the host machine and then the functions are called. This means that the framework with different classes and functions needs to be called by the programmer depending upon the module and feature required in the application. As far as user-friendliness is concerned, the CMSs are very easy to use. CMS products can be used and deployed even by those who do not have very good programming skills.
A framework can be considered as a model, a structure or simply a programming template that provides classes, events and methods to develop an application. Generally, a software framework is a real or conceptual structure of software intended to serve as a support or guide to build something that expands the structure into something useful. The software framework can be seen as a layered structure, indicating which kind of programs can or should be built and the way they interrelate.

Content Management Systems (CMSs)
Digital repositories and CMSs have a lot of feature-overlap, but both systems are unique in terms of their underlying purposes and the functions they fulfil.
A CMS for developing Web applications is an integrated application that is used to create, deploy, manage and store content on Web pages. The Web content includes plain or formatted text, embedded graphics in multiple formats, photos, video and audio, as well as code that can be third-party APIs for interaction with the user.




PHP-based open source frameworks
Laravel, Phalcon, Symfony, CodeIgniter, Prado, Seagull, Yii, CakePHP

Digital repositories
An institutional repository refers to the online archive or library for collecting, preserving and disseminating digital copies of the intellectual output of the institution, particularly in the field of research.
For any academic institution like a university, it also includes digital content such as academic journal articles. It covers both pre-prints and post-prints, articles undergoing peer review, as well as digital versions of theses and dissertations. It even includes other digital assets generated in an institution, such as administrative documents, course notes or learning objectives. Depositing material in an institutional repository is sometimes mandated by some institutions.

PHP-based open source CMSs
Joomla, Drupal, WordPress, Typo3, Mambo

Joomla CMS
Joomla is an award-winning open source CMS written in PHP. It enables the building of websites and powerful online applications. Many aspects, including its user-friendliness and extensible nature, make Joomla the most popular Web-based software development CMS. Joomla is built on the model-view-controller (MVC) Web application framework, which can be used independent of the CMS.
Joomla CMS can store data in a MySQL, MS SQL or PostgreSQL database, and includes features like page caching, RSS feeds, printable versions of pages, news flashes, blogs, polls, search and support for language internationalisation.
According to reports by Market Wire, New York, as of February 2014, Joomla has been downloaded over 50 million times. Over 7,700 free and commercial extensions are available from the official Joomla Extension Directory and more are available from other sources. It is supposedly the second most used CMS on the Internet after WordPress. Many websites provide information on installing and maintaining Joomla sites.
Joomla is used across the globe to power websites of all types and sizes:
Corporate websites or portals
Corporate intranets and extranets
Online magazines, newspapers and publications
E-commerce and online reservation sites
Sites offering government applications
Websites of small businesses and NGOs
Community-based portals
School and church websites
Personal or family home pages

Joomla's user base includes:
The military - http://www.militaryadvice.org/
US Army Corps of Engineers - http://www.spl.usace.army.mil/cms/index.php
MTV Networks Quizilla (social networking) - http://www.quizilla.com
New Hampshire National Guard - https://www.nh.ngb.army.mil/
United Nations Regional Information Centre - http://www.unric.org
IHOP (a restaurant chain) - http://www.ihop.com
Harvard University - http://gsas.harvard.edu
and many others.

The essential features of Joomla are:
User management
Media manager
Language manager
Banner management
Contact management
Polls
Search
Web link management
Content management
Syndication and newsfeed management
Menu manager
Template management
Integrated help system
System features
Web services
Powerful extensibility

Figure 1: Joomla extensions

Joomla extensions
Joomla extensions are used to extend the functionality of Joomla-based Web applications. The Joomla extensions for multiple categories and services can be downloaded from http://extensions.joomla.org.
www.OpenSourceForU.com | OPEN SOURCE For You | September 2014 | 53


Developers Insight

Figure 3: Database configuration panel for setting up Joomla

Installing and working with Joomla
For Joomla installation on a Web server, whether local or hosted, we need to download the Joomla installation package, which ought to be done from the official website, Joomla.org. If Joomla is downloaded from websites other than the official one, there are risks of viruses or malicious code in the set-up files.
Once you click the Download button for the latest stable Joomla version, the installation package will be saved to the local hard disk. Extract it so that it can be made ready for deployment.
Now, at this instant, upload the extracted files and folders to the Web server. The easiest and safest method to upload the Joomla installation files is via FTP.
If Joomla is required to be installed live on a specific domain, upload the extracted files to the public_html folder on the online file manager of the domain. If access to Joomla is needed on a sub-folder of any domain (www.mydomain.com/myjoomla), it should be uploaded to the appropriate sub-directory (public_html/myjoomla/).
After this step, create a blank MySQL database and assign a user to it with full permissions. A blank database is created because Joomla will automatically create the tables inside that database. Once you have created your MySQL database and user, save the database name, database user name and password just created because, during Joomla installation, you will be asked for these credentials.

Figure 2: Creating a MySQL user in a Web hosting panel

After uploading the installation files, open the Web browser and navigate to the main domain (http://www.mysite.com), or to the appropriate sub-domain (http://www.mysite.com/joomla), depending upon the location the Joomla installation package is uploaded to. Once done, the first screen of the Joomla Web Installer will open up.
Once you fill in all the required fields, press the Next button to proceed with the installation. On the next screen, you will have to enter the necessary information for your MySQL database.
After all the necessary information has been filled in at all stages, press the Next button to proceed. You will be forwarded to the last page of the installation process. On this page, specify if you want any sample data installed on your server.
The second part of the page will show the pre-installation checks. The Web hosting server will check that all Joomla requirements and prerequisites have been met, and you will see a green check after each line.
Finally, click the Install button to start the actual Joomla installation. In a few moments, you will be redirected to the last screen of the Joomla Web Installer. On the last screen of the installation process, press the Remove installation folder button. This is required for security reasons; otherwise, the installation will restart every time. Joomla is now ready to be used.

Creating articles and linking them with the menu
After installation, the administrator panel to control the Joomla website is displayed. Here, different modules, plugins and components, along with the HTML contents, can be added or modified.

WordPress CMS
WordPress is another free and open source blogging CMS tool based on PHP and MySQL. The features of WordPress include a specialised plugin architecture with a template system. WordPress is the most popular blogging system in use on the Web, used by more than 60 million websites. It was initially released in 2003 with the objective of providing an easy-to-use CMS for multiple domains.
The installation steps for all CMSs are almost the same. The compressed file is extracted and deployed on the public_html folder of the Web server. In the same way, a blank database is created and the credentials are placed during the installation steps.
According to the official declaration of WordPress, this CMS powers more than 17 per cent of the Web and the figure is rising every day.
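The blank database and full-permission user described for the Joomla installation can also be created from the MySQL command line rather than a hosting panel. The sketch below is not from the article: the database name, user name and password are placeholders to substitute with your own. The statements are written to a file so they can be reviewed before being loaded with the mysql client.

```shell
# Placeholder names throughout -- substitute your own, then load the
# file on the database host with:  mysql -u root -p < joomla.sql
cat > joomla.sql <<'SQL'
CREATE DATABASE joomla_db CHARACTER SET utf8;
CREATE USER 'joomla_user'@'localhost' IDENTIFIED BY 'ChangeMe123';
GRANT ALL PRIVILEGES ON joomla_db.* TO 'joomla_user'@'localhost';
FLUSH PRIVILEGES;
SQL
grep -c 'CREATE' joomla.sql    # prints 2 (database + user statements)
```

These are the same credentials the installer later asks for in its database configuration step.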


The salient features of WordPress are:
Simplicity
Flexibility
Ease of publishing
Publishing tools
User management
Media management
Full standards compliance
Easy theme system
Can be extended with plugins
Built-in comments
Search engine optimised
Multi-lingual
Easy installation and upgrades
Importers
Strong community of troubleshooters
Worldwide users of WordPress include:
FIU College of Engineering and Computing
MTV Newsroom
Sony Music
Nicholls State University
Milwaukee School of Engineering
...and many others

Figure 4: Administrator login for Joomla
Figure 5: WYSIWYG editor for creating articles

By: Dr Gaurav Kumar
The author is the MD of Magma Research & Consultancy Pvt Ltd, Ambala. He is associated with a number of academic institutes, where he delivers lectures and conducts technical workshops on the latest technologies and tools. He can be contacted at kumargaurav.in@gmail.com.

Continued from page 51

static int __init full_init(void)
{
    major = register_chrdev(0, name, &full_ops);
    if (major < 0) {
        printk(KERN_INFO "Failed to register driver.");
        return -1;
    }

    return 0;
}

static void __exit full_exit(void)
{
    unregister_chrdev(major, name);
}

module_init(full_init);
module_exit(full_exit);

MODULE_AUTHOR("Narendra Kangralkar.");
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Full driver");

Let us compile and insert the module.

[mickey]$ make
[root]# insmod ./full_driver.ko

[root]# grep "full_driver" /proc/devices
248 full_driver

[root]# mknod /dev/full_driver c 248 0

[root]# echo "Hello" > /dev/full_driver
-bash: echo: write error: No space left on device

If you want to learn more about GNU/Linux device drivers, the Linux kernel's source code is the best place to do so. You can browse the kernel's source code from http://lxr.free-electrons.com/. You can also download the latest source code from https://www.kernel.org/. Additionally, there are a few good books available in the market, like 'Linux Kernel Development' (3rd Edition) by Robert Love, and 'Linux Device Drivers' (3rd Edition), which is a free book. You can download it from http://lwn.net/Kernel/LDD3/. These books also explain kernel debugging tools and techniques.

By: Narendra Kangralkar
The author is a FOSS enthusiast and loves exploring anything related to open source. He can be reached at narendrakangralkar@gmail.com



Developers Let's Try

Compile a GPIO Control Application and Test It On the Raspberry Pi

GPIO is the acronym for General Purpose Input/Output. The role played by these drivers is to handle I/O requests to read or write to groups of GPIO pins. Let's try and compile a GPIO driver.

This article goes deep into what really goes on inside an OS while managing and controlling the hardware. The OS hides all the complexities, carries out all the operations and gives end users their requirements through the UI (user interface). GPIO can be considered as the simplest of all the peripherals to work on any board. A small GPIO driver would be the best medium to explain what goes on under the hood.
A good embedded systems engineer should, at the very least, be well versed in the C language. Even if the following demonstration can't be replicated (due to the unavailability of hardware or software resources), a careful read through this article will give readers an idea of the underlying processes.

Prerequisites to perform this experiment
C language (high priority)
Raspberry Pi board (any model)
BCM2835-ARM-Peripherals datasheet (just Google for it!)
Jumper (female-to-female)
SD card (with bootable Raspbian image)

Here's a quick overview of what device drivers are. As the name suggests, they are pieces of code that drive your device. One can even consider them a part of the OS (in this case, Linux) or a mediator between your hardware and the UI.
A basic understanding of how device drivers actually work is required; so do learn more about that in case you need to. Let's move forward to the GPIO driver, assuming that one knows the basics of device drivers (like inserting/removing the driver from the kernel, probe functionality, etc).
When you insert (insmod) this driver, it will register itself as a platform driver with the OS. The platform device is also registered in the same driver. Contrary to this, registering the platform device in the board file is a good practice. A peripheral can be termed a platform device if it is a part of the SoC (system on chip). Once the driver is inserted, the registration (platform device and platform driver) takes place,


after which the probe function gets called.

Figure 1: System layout

Generic information
Probe in the driver gets called whenever a device's (already registered) name matches the name of your platform driver (here, it is bcm-gpio). The second major functionality is ioctl, which acts as a bridge between the application space and your driver. In technical terms, whenever your application invokes this (ioctl) system call, the call will be routed to this function of your driver. Once the call from the application is in your driver, you can process or provide data inside the driver and can respond to the application.
The SoC datasheet, i.e., BCM2835-ARM-Peripherals, plays a pivotal role in building up this driver. It consists of all the information pertaining to the peripherals supported by your SoC. It exposes all the registers relevant to a particular peripheral, which is where the key is. Once you know what registers of a peripheral are to be configured, half the job is done. Be cautious about which address has to be used to access these peripherals.

Types of addressing modes
There are three kinds of addressing modes - virtual addressing, physical addressing and system bus addressing. To learn the details, turn to Page 6 of the datasheet.
The macro __io_address implemented in the probe function of the driver returns the virtual address of the physical address passed as an argument. For GPIO, the physical address is 0x20200000 (0x20000000 + 0x200000), where 0x20000000 is the base address and 0x200000 is the peripheral offset. Turn to Page 5 of the datasheet for more details. Any guesses on which address the macro __io_address would return? The address returned by this macro can then be used for accessing (reading or writing) the concerned peripheral registers.
The GPIO control application is analogous to a simple C program with an additional ioctl call. This call is capable of passing data from the application layer to the driver layer with an appropriate command. I have restricted the use of other GPIOs as they are not exposed to headers like the others, so modify the application as per your requirements. More information is available on this peripheral from Page 89 of the datasheet. In this code, I have just added functionality for setting or clearing a GPIO. Another interesting feature is that, by configuring the appropriate registers, you can configure GPIOs as interrupt pins. So whenever a pulse is routed to that pin, the processor, i.e., the ARM, is interrupted and the corresponding handler registered for that interrupt is invoked to handle and process it. This interesting aspect will be taken up in later articles.

Compilation of the GPIO device driver
There are two ways in which you can compile your driver:
Cross compilation on the host PC
Local compilation on the target board
In the first method, one needs to have certain packages downloaded. These are:
ARM cross-compiler
Raspbian kernel source (the kernel version must match the one running on your Pi; otherwise, the driver will not load onto the OS due to the version mismatch)
In the second method, one needs to install certain packages on the Pi. Go to the following link and follow the steps indicated: http://stackoverflow.com/questions/20167411/how-to-compile-a-kernel-module-for-raspberry-pi
Or, follow the third answer at this link, the starting line of which says, "Here are the steps I used to build the Hello World kernel module on Raspbian."
I went ahead with the second method as it was more straightforward.

Figure 2: Console

Testing on your Raspberry Pi
Boot up your Raspberry Pi using minicom and you will see a console that resembles mine (Figure 2).
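As a quick check of the arithmetic in the addressing discussion above, the GPIO block's physical address can be recomputed with shell arithmetic; the virtual address, by contrast, is only known at runtime, when __io_address maps it.

```shell
# BCM2835: GPIO physical address = peripheral base + GPIO offset
base=0x20000000
offset=0x200000
printf '0x%08X\n' $(( base + offset ))    # prints 0x20200000
```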


Figure 3: dmesg output

Run sudo dmesg -C. (This command cleans up all the kernel boot print logs.)
Run sudo make. (This command compiles the GPIO driver. Do this only for the second method.)
Run sudo insmod gpio_driver.ko. (This command inserts the driver into the OS.)
Run dmesg. You can see the prints from the GPIO driver and the major number allocated to it, as shown in Figure 3. (The major number plays a unique role in identifying the specific driver with which a process from the application space wants to communicate, whereas the minor number is used to recognise the hardware.)
Run sudo mknod /dev/bcm-gpio c major-num 0. (The mknod command creates a node in the /dev directory; c stands for character device and 0 is the minor number.)
Run sudo gcc gpio_app.c -o gpio_app. (Compile the GPIO control application.)

Figure 4: R-pi GPIO

Now let's test our GPIO driver and application. To verify whether our driver is indeed communicating with GPIO, short pins 25 and 24 (one can use other available pins like 17, 22 and 23 as well, but make sure that they aren't mixed up with any other peripheral) using the female-to-female jumper (Figure 4). The default values of both the pins will be 0. To confirm the default values, run the following commands:

sudo ./app -n 25 -g 1

This will be the output: the output value of GPIO 25 = 0. Now run the following command:

sudo ./app -n 24 -g 1

This will again be the output: the output value of GPIO 24 = 0. That's it; it's verified (see Figure 5). Now, as the GPIO pins are shorted, if we output 1 to 24 then it would be the input value of 25, and vice versa. To test this, run:

sudo ./app -n 24 -d 1 -v 1 -s 1

Figure 5: Output showing GPIO 24=0

This command will drive the value of GPIO 24 to 1, which in turn will be routed to GPIO 25. To verify the value of GPIO 25, run:

sudo ./app -n 25 -g 1

This will give the output: the output value of GPIO 25 = 1 (see Figure 6).

Figure 6: Output showing GPIO 25=1

One can also connect any external device or a simple LED (through a resistor) to the GPIO pin and test its output.
Arguments passed to the application through the command line are:
-n : GPIO number
-d : GPIO direction (0 - IN or 1 - OUT)
-v : GPIO value (0 or 1)
-s/g : set/get GPIO
The files are:
gpio_driver.c : GPIO driver file
gpio_app.c : GPIO control application
gpio.h : GPIO header file
Makefile : File to compile the GPIO driver
After conducting this experiment, some curious folk may have questions like:
Why does one have to use virtual addresses to access GPIO?
How does one determine the virtual address from the physical address?
We will discuss the answers to these in later articles.

By: Sumeet Jain
The author works at eInfochips as an embedded systems engineer. You can reach him at sumeet.jain@einfochips.com



Admin How To

Figure 1: Load balancing using the Pound server (HTTP traffic from users reaches the Pound server at 192.168.10.30, which performs the required load balancing across Web Server 1 at 192.168.10.31 and Web Server 2 at 192.168.10.32)

[root@apachewebsever1 Packages]#

Start the service:

[root@apachewebsever1 ~]# service httpd start
Starting httpd: [ OK ]
[root@apachewebsever1 ~]#

Start the service at boot time:

[root@apachewebsever1 ~]# chkconfig httpd on
[root@apachewebsever1 ~]#
[root@apachewebsever1 ~]# chkconfig --list httpd
httpd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
[root@apachewebsever1 ~]#

The directory location of the Apache HTTP service is /etc/httpd/. Figure 2 gives the default test page for Apache Web Server on Red Hat Enterprise Linux.

Figure 2: Default page

Now, let's create a Web page, index.html, at /var/www/html. Restart the Apache Web service to bring the changes into effect. The index.html Web page will be displayed (Figure 3).
Repeat the above steps for Web Server2, or ApacheWebServer2.linuxrocks.org, except for the following:
Set the IP address to 192.168.10.32
The contents of the custom Web page index.html should be ApacheWebServer2, as shown in Figure 4.

Figure 3: Custom web page of Apache Web Server1
Figure 4: Custom web page of Apache Web Server2

Installation and configuration of the Pound gateway server
First, ensure YUM is up and running:

[root@poundgateway ~]# ps -ef | grep yum
root 2050 1998 0 13:30 pts/1 00:00:00 grep yum
[root@poundgateway ~]#

[root@poundgateway ~]# yum clean all
Loaded plugins: product-id, refresh-packagekit, subscription-manager
Updating Red Hat repositories.
Cleaning repos:
Cleaning up Everything
[root@poundgateway ~]#

[root@poundgateway ~]# yum update all
Loaded plugins: product-id, refresh-packagekit, subscription-manager
Updating Red Hat repositories.
Setting up Update Process
No Match for argument: all
No package all available.
No Packages marked for Update
[root@poundgateway ~]#

Then, check the default directory of YUM:

[root@poundgateway ~]# cd /etc/yum.repos.d/
[root@poundgateway yum.repos.d]# ll
total 8
-rw-r--r-- 1 root root 67 Jul 27 13:30 redhat.repo
-rw-r--r--. 1 root root 529 Apr 27 2011 rhel-source.repo
[root@poundgateway yum.repos.d]#

By default, the repo file rhel-source.repo is disabled. To enable it, edit the file rhel-source.repo and change the value enabled = 0 to enabled = 1. For now, you can leave this repository disabled.
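The enabled flag can also be flipped with sed instead of an editor. The sketch below runs against a throwaway copy of the file so it is safe to try anywhere; on the real system the target would be /etc/yum.repos.d/rhel-source.repo (run sed there with sudo).

```shell
# Work on a local stand-in for /etc/yum.repos.d/rhel-source.repo
printf '[rhel-source]\nenabled = 0\n' > rhel-source.repo
sed -i 's/^enabled = 0/enabled = 1/' rhel-source.repo
grep '^enabled' rhel-source.repo    # prints: enabled = 1
```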


Now, download the epel-release-6-8.noarch.rpm package and install it.

Important notes on EPEL
1. EPEL stands for Extra Packages for Enterprise Linux.
2. EPEL is not a part of RHEL but provides a lot of open source packages for major Linux distributions.
3. EPEL packages are maintained by the Fedora team and are fully open source, with no core duplicate packages and no compatibility issues. They are to be installed using the YUM utility.
The link to download the EPEL release for RHEL 6 (32-bit) is: http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
And for 64-bit, it is: http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
Here, epel-release-6-8.noarch.rpm is kept at /opt. Go to the /opt directory and change the permission of the files:

[root@poundgateway opt]# chmod -R 755 epel-release-6-8.noarch.rpm
[root@poundgateway opt]#

Now, install epel-release-6-8.noarch.rpm:

[root@poundgateway opt]# rpm -ivh --aid --force epel-release-6-8.noarch.rpm
warning: epel-release-6-8.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID 0608b895: NOKEY
Preparing...                ########################### [100%]
   1:epel-release           ########################### [100%]
[root@poundgateway opt]#

epel-release-6-8.noarch.rpm installs the repo files necessary to download the Pound package:

[root@poundgateway ~]# cd /etc/yum.repos.d/
[root@poundgateway yum.repos.d]# ll
total 16
-rw-r--r-- 1 root root 957 Nov 4 2012 epel.repo
-rw-r--r-- 1 root root 1056 Nov 4 2012 epel-testing.repo
-rw-r--r-- 1 root root 67 Jul 27 13:30 redhat.repo
-rw-r--r--. 1 root root 529 Apr 27 2011 rhel-source.repo
[root@poundgateway yum.repos.d]#

As observed, epel.repo and epel-testing.repo are the newly added repo files. No changes are made in epel.repo and epel-testing.repo. Move the default redhat.repo and rhel-source.repo to the backup location. Now, connect the server to the Internet and, using the yum utility, install Pound:

[root@PoundGateway ~]# yum install Pound*

This will install Pound and Pound-debuginfo, and will also install the required dependencies along with them.
To verify Pound's installation, type:

[root@PoundGateway ~]# rpm -qa Pound
Pound-2.6-2.el6.i686
[root@PoundGateway ~]#

The location of the Pound configuration file is /etc/pound.cfg. You can view the default Pound configuration file by using the command given below:

[root@PoundGateway ~]# cat /etc/pound.cfg

Make the changes to the Pound configuration file as shown in the code snippet given below:
We will comment out the section related to ListenHTTPS, as we do not need HTTPS for now.
Add the IP address 192.168.10.30 under the ListenHTTP section.
Add the IP addresses 192.168.10.31 and 192.168.10.32 with Port 80 under the Service BackEnd section, where 192.168.10.30 is the Pound server, 192.168.10.31 is Web Server1 and 192.168.10.32 is Web Server2.
The edited Pound configuration file is:

[root@PoundGateway ~]# cat /etc/pound.cfg
#
# Default pound.cfg
#
# Pound listens on port 80 for HTTP and port 443 for HTTPS
# and distributes requests to 2 backends running on localhost.
# see pound(8) for configuration directives.
# You can enable/disable backends with poundctl(8).
#

User "pound"
Group "pound"
Control "/var/lib/pound/pound.cfg"

ListenHTTP
    Address 192.168.10.30
    Port 80
End


#ListenHTTPS
#    Address 0.0.0.0
#    Port 443
#    Cert "/etc/pki/tls/certs/pound.pem"
#End

Service
    BackEnd
        Address 192.168.10.31
        Port 80
    End
    BackEnd
        Address 192.168.10.32
        Port 80
    End
End
[root@PoundGateway ~]#

Now, start the Pound service:

[root@PoundGateway ~]# service pound start
Starting Pound: starting... [OK]
[root@PoundGateway ~]#

To configure the service to be started at boot time, type:

[root@PoundGateway ~]# chkconfig pound on
[root@PoundGateway ~]# chkconfig --list pound
pound 0:off 1:off 2:on 3:on 4:on 5:on 6:off
[root@PoundGateway ~]#

Observation
Now open a Web browser and access the URL http://192.168.10.30. It displays the Web page from Web Server1, ApacheWebServer1.linuxrocks.org.
Refresh the page, and it will display the Web page from Web Server2, ApacheWebServer2.linuxrocks.org.
Keep refreshing the Web page; it will flip from Web Server1 to Web Server2, back and forth. We have now configured a system where the load on the Web server is being balanced between two physical servers.

By: Arindam Mitra
The author can be reached at mail2arindam2003@yahoo.com or arindam0310018@gmail.com
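The flipping seen in the browser is round-robin scheduling: with no weights configured, consecutive requests alternate between the back-ends. A toy shell sketch (not Pound itself) of that rotation over the article's two back-end addresses:

```shell
# Round-robin: consecutive requests alternate between the back-ends
backends=(192.168.10.31 192.168.10.32)
for req in 1 2 3 4; do
    echo "request $req -> ${backends[(req - 1) % 2]}"
done
```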


Why We Need to Handle Bounced Emails

Bounced emails are the bane of marketing campaigns and mailing lists. In this article, the author explains the nature of bounce messages and describes how to handle them.

Wikipedia defines a bounce email as a system-generated failed delivery status notification (DSN) or a non-delivery report (NDR), which informs the original sender about a delivery problem. When that happens, the original email is said to have bounced.
Broadly, bounces are categorised into two types:
A hard/permanent bounce: This indicates that there exists a permanent reason for the email not to get delivered. These are valid bounces, and can be due to the non-existence of the email address, an invalid domain name (DNS lookup failure), or the email provider blacklisting the sender/recipient email address.
A soft/temporary bounce: This can occur due to various reasons at the sender or recipient level. It can evolve due to a network failure, the recipient mailbox being full (quota exceeded), the recipient having turned on a vacation reply, the local Message Transfer Agent (MTA) not responding or being badly configured, and a whole lot of other reasons. Such bounces cannot be used to determine the status of a failing recipient, and therefore need to be sorted out effectively from our bounce processing.
To understand this better, consider a sender alice@example.com, sending an email to bob@somewhere.com. She mistyped the recipient's address as bub@somewhere.com. The email message will have a default envelope sender, set by the local MTA running there (mta.example.com), or by the PHP script, to alice@example.com. Now, mta.example.com looks up the DNS MX records for somewhere.com, chooses a host from that list, gets its IP address and tries to connect to the MTA running on somewhere.com, port 25, via an SMTP connection. Now, the MTA of somewhere.com is in trouble, as it can't find the user bub in its local user table. mta.somewhere.com responds to mta.example.com with an SMTP failure code, stating that the user lookup failed (Code: 550). It's time for mta.example.com to generate a bounce email to the address in the Return-Path email header (the envelope sender), with a message that the email to bub@somewhere.com failed. That's a bounce email. Properly maintained mailing lists will have every email passing through them branded with a generic email ID, say mails@example.com, as the envelope sender, and bounces to that will be wasted if left unhandled.


VERP (Variable Envelope Return-Path)
In the above example, you will have noticed that the delivery failure message was sent back to the address in the Return-Path header of the original email. If there is a key to handling bounced emails, it comes from the Return-Path header.
The idea of VERP is to safely encode the recipient details, too, somehow in the return-path, so that we can parse the received bounce effectively and extract the failing recipient from it. We specifically use the Return-Path header, as that's the only header that is not going to get tampered with by the intervention of a number of MTAs.
Typically, an email from Alice to Bob in the above example will have headers like the following:

From: alice@example.com
To: bob@somewhere.com
Return-Path: mails@example.com

Now, we create a custom return-path header by encoding the To address as a combination of prefix-delim-hash. The hash can be generated by the PHP hmac functions, so that the new email headers become something like what follows:

From: alice@example.com
To: bob@somewhere.com
Return-Path: bounces-bob.somewhere.com-{ encode ( bob@somewhere.com ) }@example.com

Now, the bounces will get directed to our new return-path and can be handled to extract the failing recipient.

Generating a VERP address
The task now is to generate a secure return-path, which is not bulky, and cannot be mimicked by an attacker. A very simple VERP address for a mail to bob@somewhere.com will be:

bounces-bob=somewhere.com@example.com

Since it can be easily exploited by an attacker, we need to also include a hash generated with a secret key, along with the address. Please note that the secret key is only visible to the sender and in no way to the receiver or an attacker. Therefore, a standard VERP address will be of the form:

bounces-{ prefix }-{ hash(prefix, secretkey) }@sender_domain

PHP has its own hash-generating functions that can make things easier. Since PHP's hmacs cannot be decoded, but only compared, the idea will be to adjust the recipient email ID in the prefix part of the VERP address along with its hash. On receipt, the prefix and the hash can be compared to validate the integrity of the bounce. We will string-replace the @ in the recipient email ID and attach it along with the hash.
You need to edit your email headers to generate the custom return-path, and make sure you pass it as the fifth argument to the php::mail() function to tell your exim MTA to set it as the default envelope sender.

$to      = 'bob@somewhere.com';
$from    = 'alice@example.com';
$subject = 'This is the message subject';
$body    = 'This is the message body';

/** Altering the return path */
$alteredReturnPath = self::generateVERPAddress( $to );
$headers[ 'Return-Path' ] = $alteredReturnPath;
$envelopeSender = '-f' . $alteredReturnPath;

mail( $to, $subject, $body, $headers, $envelopeSender );

/** We need to produce a return address of the form
 * bounces-{ prefix }-{ hash(prefix) }@sender_domain, where prefix is
 * the string-replaced to_address.
 */
public static function generateVERPAddress( $to ) {
    $hashAlgorithm = 'md5';
    $hashSecretKey = 'myKey';
    $emailDomain   = 'example.com';
    $addressPrefix = str_replace( '@', '.', $to );
    $verpAddress   = hash_hmac( $hashAlgorithm, $to, $hashSecretKey );
    $returnPath    = 'bounces-' . $addressPrefix . '-' . $verpAddress . '@' . $emailDomain;
    return $returnPath;
}

Including security features is yet another concern, and can be done effectively by adding the current timestamp value (in UNIX time) to the VERP prefix. This will make it easy for the bounce processor to decode the email delivery time, and adds additional protection against brute-forcing the hash. Decoding and comparing the value of the timestamp with the current timestamp will also help to understand how old the bounce is.
Therefore, a more secure VERP address will look like what follows:

bounces-{ to_address }-{ delivery_timestamp }-{ encode ( to_address-delivery_timestamp, secretKey ) }@example.com

The current timestamp can be generated in PHP by:

$current_timestamp = time();

There's still work to do before the email is sent, as the local MTA at example.com may try to set its own custom
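The same prefix-plus-HMAC construction, timestamp included, can be tried from the shell. HMAC-MD5 via openssl matches PHP's hash_hmac('md5', ...); the address, key and domain below are the article's examples, and this is an illustrative sketch rather than production code.

```shell
to='bob@somewhere.com'
key='myKey'                    # the article's example secret key
ts=$(date +%s)                 # delivery timestamp, UNIX time
prefix="${to/@/.}-${ts}"       # bob.somewhere.com-<timestamp>
# HMAC-MD5 over the prefix, keyed with the secret
hash=$(printf '%s' "$prefix" | openssl dgst -md5 -hmac "$key" | awk '{print $NF}')
echo "bounces-${prefix}-${hash}@example.com"
# On receipt: recompute the hash from the prefix and compare; the
# embedded timestamp then tells you how old the bounce is
age=$(( $(date +%s) - ts ))
echo "bounce age: ${age}s"
```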


return-path for messages it transmits. In the example below, we adjust the exim configuration on the MTA to override this behaviour.

$ sudo nano /etc/exim4/exim4.conf

# Do not remove Return-Path header
return_path_remove = false

# Remove the field errors_to from the current router configuration.
# This will enable exim to use the fifth param of php::mail(),
# prefixed by -f, to be set as the default envelope sender

Every email ID will correspond to a user_id field in a standard user database, and this can be used instead of an email ID to generate a tidy and easy-to-look-up VERP hash.

Redirect your bounces to a PHP bounce-handling script
We now have a VERP address being generated on every sent email, and it will have all the necessary information we need securely embedded in it. The remaining part of our task is to capture and validate the bounces, which would require redirecting the bounces to a processing PHP script.
By default, every bounce message will reach all the way back till the MTA that sent it, say mx.example.com, as its return-path gets set to mails@example.com, with or without VERP. The advantage of using VERP is that we will have the encoded failing address, too, somewhere in the bounce. To get that out from the bounce, we can HTTP POST the email via curl to the bounce processing script, say localhost/handleBounce.php, using an exim pipe transport, as follows:

$ sudo nano /etc/exim4/exim4.conf

# suppose you have a receive_all router that will accept all the emails to your domain.
# this can be the system_alias router too
receive_all:
  driver = accept
  transport = pipe_transport

# Edit the pipe_transport
pipe_transport:
  driver = pipe
  command = /usr/bin/curl http://localhost/handleBounce.php --data-urlencode "email@-"
  group = nogroup
  return_path_add    # adds Return-Path header for incoming mail
  delivery_date_add  # adds the bounce timestamp
  envelope_to_add    # copies the return path to the To: header of the bounce

The email can be made use of in handleBounce.php by using a simple POST request:

$email = $_POST[ 'email' ];

Decoding the failing recipient from the bounce email
Now that the mail is successfully in the PHP script, our task will be to extract the failing recipient from the encoded email headers. Thanks to exim configurations like envelope_to_add in the pipe transport (above), the VERP address gets pasted to the To header of the bounce email, and that's the place to look for the failing recipient.
Some common regex functions to extract the headers are:

function extractHeaders( $email ) {
    $bounceHeaders = array();
    $lineBreaks = explode( "\n", $email );
    foreach ( $lineBreaks as $lineBreak ) {
        if ( preg_match( "/^To: (.*)/", $lineBreak, $toMatch ) ) {
            $bounceHeaders[ 'to' ] = $toMatch[1];
        }
        if ( preg_match( "/^Subject: (.*)/", $lineBreak, $subjectMatch ) ) {
            $bounceHeaders[ 'subject' ] = $subjectMatch[1];
        }
        if ( preg_match( "/^Date: (.*)/", $lineBreak, $dateMatch ) ) {
            $bounceHeaders[ 'date' ] = $dateMatch[1];
        }
        if ( trim( $lineBreak ) == "" ) {
            // Empty line denotes that the header part is finished
            break;
        }
    }
    return $bounceHeaders;
}

After extracting the headers, we need to decode the original failed-recipient email ID from the VERP-hashed $bounceHeaders[ 'to' ], which involves more or less the reverse of what we did earlier. This would help us validate the bounced email too.

/**
 * Considering the received $headers[ 'to' ] is of the form
 * bounces-{ to_address }-{ delivery_timestamp }-{ hash( to_address-delivery_timestamp, secretKey ) }@somewhere.com
 */
$hashedTo = $headers[ 'to' ];
$to = self::extractToAddress( $hashedTo );

function extractToAddress( $hashedTo ) {
    $timeNow = time();

    // This will help us get the address part of address@domain
    preg_match( '~(.*?)@~', $hashedTo, $hashedSlice );

    // This will help us cut the address part at the symbol -
    $hashedAddressPart = explode( '-', $hashedSlice[1] );

    // Now we have the prefix in $hashedAddressPart[0] to
    // $hashedAddressPart[2] and the hash in $hashedAddressPart[3]
    $verpPrefix = $hashedAddressPart[0] . '-' . $hashedAddressPart[1] . '-' . $hashedAddressPart[2];

    // Extracting the bounce time.
    $bounceTime = $hashedAddressPart[2];

    // Valid time for a bounce to happen. The values can be subtracted
    // to find out the time in between, and even used to set an accept
    // time, say 3 days.
    if ( $bounceTime < $timeNow ) {
        if ( hash_hmac( $hashAlgorithm, $verpPrefix, $hashSecretKey ) === $hashedAddressPart[3] ) {
            // Bounce is valid, as the comparison returns true.
            $to = str_replace( '.', '@', $hashedAddressPart[1] );
            return $to;
        }
    }
}

Taking action on the failing recipient
Now that you have got the failing recipient, the task would be to record his bounce history and take relevant action. A recommended approach would be to maintain a bounce records table in the database, which would store the failed recipient, bounce timestamp and failure reason. This can be inserted into the database on every bounce processed, and can be as simple as:

/** extractHeaders is defined above */
$bounceHeaders = self::extractHeaders( $email );
$failureReason = $bounceHeaders[ 'subject' ];
$bounceTimestamp = $bounceHeaders[ 'date' ];
$hashedTo = $bounceHeaders[ 'to' ]; // This will hold the VERP address
$failedRecipient = self::extractToAddress( $hashedTo );

$con = mysqli_connect( "database_server", "dbuser", "dbpass", "databaseName" );
mysqli_query( $con, "INSERT INTO bounceRecords( failedRecipient, bounceTimestamp, failureReason ) VALUES ( '$failedRecipient', '$bounceTimestamp', '$failureReason' )" );
mysqli_close( $con );

Simple tests to differentiate between a permanent and a temporary bounce
One of the greatest challenges while writing a bounce processor is to make sure it handles only the right bounces, or the permanent ones. A bounce processing script that reacts to every single bounce can lead to mass unsubscription of users from the mailing list and a lot of havoc. Exim helps us here in a great way by including an additional X-Failed-Recipients: header in a permanent bounce email. This key can be checked for in the regex function we wrote earlier, and action can be taken only if it exists.

/**
 * Check if the bounce corresponds to a permanent failure;
 * can be added to the extractHeaders() function above
 */
function isPermanentFailure( $email ) {
    $lineBreaks = explode( "\n", $email );
    foreach ( $lineBreaks as $lineBreak ) {
        if ( preg_match( "/^X-Failed-Recipients: (.*)/", $lineBreak, $permanentFailMatch ) ) {
            $bounceHeaders[ 'x-failed-recipients' ] = $permanentFailMatch;
            return true;
        }
    }
    return false;
}

Even today, we have a number of large organisations that send more than 100 emails every minute and still have all bounces directed to /dev/null. This results in far too many emails being sent to undeliverable addresses, and eventually leads to frequent blacklisting of the organisation's mail server by popular providers like Gmail, Hotmail, etc.
If bounces are directed to an IMAP maildir, the regex functions won't be necessary, as the PHP IMAP library can parse the headers readily for you.

By: Tony Thomas
The author is currently doing his Google SoC project for Wikimedia on handling email bounces effectively. You can contact the author at 01tonythomas@gmail.com. Github: github.com/tonythomas01
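To tie the encoding side of the article above together, here is a minimal PHP sketch of how the secure VERP address could be built before sending. This is an illustration, not code from the article itself: the function name, the choice of 'md5' as the HMAC algorithm and the $secretKey parameter are assumptions.

```php
<?php
// Illustrative sketch: build a VERP return-path of the form
// bounces-{ to_address }-{ delivery_timestamp }-{ hmac }@somewhere.com
// $secretKey is a hypothetical application-wide secret.
function buildVerpAddress( $toAddress, $secretKey ) {
    // Hide the '@' so the whole recipient address fits in the local part
    $encodedTo = str_replace( '@', '.', $toAddress );
    $prefix = 'bounces-' . $encodedTo . '-' . time();
    // HMACs cannot be decoded; the bounce processor re-computes and compares
    $hash = hash_hmac( 'md5', $prefix, $secretKey );
    return $prefix . '-' . $hash . '@somewhere.com';
}
```

On receipt, the bounce processor recomputes the HMAC over the prefix and compares it, along the lines of the extractToAddress() function shown above.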
Boost the Performance of CloudStack with Varnish
In this article, the author demonstrates how the performance of CloudStack can be dramatically improved by using Varnish. He does so by drawing upon his practical experience with administering SaaS servers at his own firm.

The current cloud inventory for one of the SaaS applications at our firm is as follows:
Web server: Centos 6.4 + NGINX + MySql + PHP + Drupal
Mail server: Centos 6.4 + Postfix + Dovecot + Squirrelmail
A quick test on Pingdom showed a load time of 3.92 seconds for a page size of 2.9MB with 105 requests. Tests using Apache Bench (ab -c1 -n500 http://www.bookingwire.co.uk/) yielded almost the same figures: a mean response time of 2.52 seconds.
We wanted to improve the page load times by caching the content upstream, scaling the site to handle much greater http workloads, and implementing a failsafe mechanism.
The first step was to handle all incoming http requests from anonymous users that were loading our Web server. Since anonymous users are served content that seldom changes, we wanted to prevent these requests from reaching the Web server so that its resources would be available to handle the requests from authenticated users. Varnish was our first choice to handle this.
Our next concern was to find a mechanism to handle the SSL requests, mainly on the sign-up pages, where we had interfaces to Paypal. Our aim was to include a second Web server that handled a portion of the load, and we wanted to configure Varnish to distribute http traffic using a round-robin mechanism between these two servers. Subsequently, we planned on configuring Varnish in such a way that even if the Web servers were down, the system would continue to serve pages. During the course of this exercise we documented our experiences, and that's what you're reading about here.

A word about Varnish
Varnish is a Web application accelerator or reverse proxy. It's installed in front of the Web server to handle HTTP requests. This way, it speeds up the site and improves the performance significantly. In some cases, it can improve the performance of a site by 300 to 1000 times.
It does this by caching the Web pages; when visitors come to the site, Varnish serves the cached pages rather than requesting the Web server for them. Thus the load on the Web server reduces. This method improves the site's performance and scalability. It can also act as a failsafe method if the Web server goes down, because Varnish will continue to serve the cached pages in the absence of the Web server.
With that said, let's begin by installing Varnish on a VPS, and then connect it to a single NGINX Web server. Then let's add another NGINX Web server so that we can implement a failsafe mechanism. This will accomplish the performance goals that we stated. So let's get started. For the rest of the article, let's assume that you are using the Centos 6.4 OS. However, we have provided information for Ubuntu users wherever we felt it was necessary.

Enable the required repositories
First enable the appropriate repositories. For Centos, Varnish is available from the EPEL repository. Add this repository to your repos list, but before you do so, you'll need to import the GPG keys. So open a terminal and enter the following commands:

[root@bookingwire sridhar]# wget https://fedoraproject.org/static/0608B895.txt
[root@bookingwire sridhar]# mv 0608B895.txt /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
[root@bookingwire sridhar]# rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
[root@bookingwire sridhar]# rpm -qa gpg*
gpg-pubkey-c105b9de-4e0fd3a3

Figure 1: Pingdom result
Figure 2: Apache Bench result

After importing the GPG keys you can enable the repository.

[root@bookingwire sridhar]# wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
[root@bookingwire sridhar]# rpm -Uhv epel-release-6*.rpm

To verify if the new repositories have been added to the repo list, run the following command and check the output to see if the repository has been added:

[root@bookingwire sridhar]# yum repolist

If you happen to use an Ubuntu VPS, then you should use the following commands to enable the repositories:

[root@bookingwire sridhar]# wget http://repo.varnish-cache.org/debian/GPG-key.txt
[root@bookingwire sridhar]# apt-key add GPG-key.txt
[root@bookingwire sridhar]# echo "deb http://repo.varnish-cache.org/ubuntu/ precise varnish-3.0" | sudo tee -a /etc/apt/sources.list
[root@bookingwire sridhar]# sudo apt-get update

Installing Varnish
Once the repositories are enabled, we can install Varnish:

[root@bookingwire sridhar]# yum -y install varnish

On Ubuntu, you should run the following command:

[root@bookingwire sridhar]# sudo apt-get install varnish

After a few seconds, Varnish will be installed. Let's verify the installation before we go further. In the terminal, enter the following command; the output should contain the lines that follow the input command (we have reproduced only a few lines for the sake of clarity).

[root@bookingwire sridhar]# yum info varnish
Installed Packages
Name    : varnish
Arch    : i686
Version : 3.0.5
Release : 1.el6
Size    : 1.1 M
Repo    : installed

That looks good; so we can be sure that Varnish is installed. Now, let's configure Varnish to start up on boot. In case you have to restart your VPS, Varnish will be started automatically.

[root@bookingwire sridhar]# chkconfig --level 345 varnish on

Having done that, let's now start Varnish:

[root@bookingwire sridhar]# /etc/init.d/varnish start

We have now installed Varnish and it's up and running. Let's configure it to cache the pages from our NGINX server.

Basic Varnish configuration
The Varnish configuration file is located in /etc/sysconfig/varnish for Centos and /etc/default/varnish for Ubuntu. Open the file in your terminal using the nano or vim text editors. Varnish provides us three ways of configuring it. We prefer Option 3. So for our 2GB server, the configuration steps are as shown below (the lines with comments have been stripped off for the sake of clarity):

NFILES=131072
MEMLOCK=82000
RELOAD_VCL=1
VARNISH_VCL_CONF=/etc/varnish/default.vcl
VARNISH_LISTEN_PORT=80,:443
VARNISH_ADMIN_LISTEN_ADDRESS=127.0.0.1
VARNISH_ADMIN_LISTEN_PORT=6082
VARNISH_SECRET_FILE=/etc/varnish/secret
VARNISH_MIN_THREADS=50
VARNISH_MAX_THREADS=1000
VARNISH_STORAGE_FILE=/var/lib/varnish/varnish_storage.bin
VARNISH_STORAGE_SIZE=1G
VARNISH_STORAGE=malloc,${VARNISH_STORAGE_SIZE}
VARNISH_TTL=120
DAEMON_OPTS=-a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_
PORT} \
-f ${VARNISH_VCL_CONF} \
-T ${VARNISH_ADMIN_LISTEN_ADDRESS}:${VARNISH_ADMIN_LISTEN_PORT} \
-t ${VARNISH_TTL} \
-w ${VARNISH_MIN_THREADS},${VARNISH_MAX_THREADS},${VARNISH_THREAD_TIMEOUT} \
-u varnish -g varnish \
-p thread_pool_add_delay=2 \
-p thread_pools=2 \
-p thread_pool_min=400 \
-p thread_pool_max=4000 \
-p session_linger=50 \
-p sess_workspace=262144 \
-S ${VARNISH_SECRET_FILE} \
-s ${VARNISH_STORAGE}

The first line, when substituted with the variables, will read -a :80,:443 and instruct Varnish to serve all requests made on ports 80 and 443. We want Varnish to serve all http and https requests.
To set the thread pools, first determine the number of CPU cores that your VPS uses and then update the directives.

[root@bookingwire sridhar]# grep processor /proc/cpuinfo
processor : 0
processor : 1

This means you have two cores. The formula to use is:

-p thread_pools=<Number of CPU cores> \
-p thread_pool_min=<800 / Number of CPU cores> \

The -s ${VARNISH_STORAGE} translates to -s malloc,1G after variable substitution and is the most important directive. This allocates 1GB of RAM for exclusive use by Varnish. You could also specify -s file,/var/lib/varnish/varnish_storage.bin,10G, which tells Varnish to use the file caching mechanism on the disk and that 10GB has been allocated to it. Our suggestion is that you should use the RAM.

Configure the default.vcl file
The default.vcl file is where you will have to make most of the configuration changes in order to tell Varnish about your Web servers, assets that shouldn't be cached, etc. Open the default.vcl file in your favourite editor:

[root@bookingwire sridhar]# nano /etc/varnish/default.vcl

Since we expect to have two NGINX servers running our application, we want Varnish to distribute the http requests between these two servers. If, for any reason, one of the servers fails, then all requests should be routed to the healthy server. To do this, add the following to your default.vcl file:

backend bw1 {
    .host = "146.185.129.131";
    .probe = {
        .url = "/google0ccdbf1e9571f6ef.html";
        .interval = 5s;
        .timeout = 1s;
        .window = 5;
        .threshold = 3;
    }
}
backend bw2 {
    .host = "37.139.24.12";
    .probe = {
        .url = "/google0ccdbf1e9571f6ef.html";
        .interval = 5s;
        .timeout = 1s;
        .window = 5;
        .threshold = 3;
    }
}
backend bw1ssl {
    .host = "146.185.129.131";
    .port = "443";
    .probe = {
        .url = "/google0ccdbf1e9571f6ef.html";
        .interval = 5s;
        .timeout = 1s;
        .window = 5;
        .threshold = 3;
    }
}
backend bw2ssl {
    .host = "37.139.24.12";
    .port = "443";
    .probe = {
        .url = "/google0ccdbf1e9571f6ef.html";
        .interval = 5s;
        .timeout = 1s;
        .window = 5;
        .threshold = 3;
    }
}

director default_director round-robin {
    { .backend = bw1; }
    { .backend = bw2; }
}

director ssl_director round-robin {
    { .backend = bw1ssl; }
    { .backend = bw2ssl; }
}

sub vcl_recv {
    if (server.port == 443) {
        set req.backend = ssl_director;
    }
    else {
        set req.backend = default_director;
    }
}

You might have noticed that we have used public IP

addresses, since we had not enabled private networking within our servers. You should define the backends, one each for the type of traffic you want to handle. Hence, we have one set to handle http requests and another to handle the https requests.
It's a good practice to perform a health check to see if the NGINX Web servers are up. In our case, we kept it simple by checking if the Google webmaster file was present in the document root. If it isn't present, then Varnish will not include the Web server in the round robin league and won't redirect traffic to it.

.probe = { .url = "/google0ccdbf1e9571f6ef.html";

The above command checks the existence of this file at each backend. You can use this to take an NGINX server out intentionally, either to update the version of the application or to run scheduled maintenance checks. All you have to do is to rename this file so that the check fails!
In spite of our best efforts to keep our servers sterile, there are a number of reasons that can cause a server to go down. Two weeks back, we had one of our servers go down, taking more than a dozen sites with it, because the master boot record of Centos was corrupted. In such cases, Varnish can handle the incoming requests even if your Web server is down. The NGINX Web server sets an expires header (HTTP 1.0) and the max-age (HTTP 1.1) for each page that it serves. If set, the max-age takes precedence over the expires header. Varnish is designed to request the backend Web servers for new content every time the content in its cache goes stale. However, in a scenario like the one we faced, it's impossible for Varnish to obtain fresh content. In this case, setting the 'grace' in the configuration file allows Varnish to serve (stale) content even if the Web server is down. To have Varnish serve the (stale) content, add the following lines to your default.vcl:

sub vcl_recv {
    set req.grace = 6h;
}

sub vcl_fetch {
    set beresp.grace = 6h;
}

if (!req.backend.healthy) {
    unset req.http.Cookie;
}

The last segment tells Varnish to strip all cookies for an authenticated user and serve an anonymous version of the page if all the NGINX backends are down.
Most browsers support encoding but report it differently. NGINX sets the encoding as Vary: Cookie, Accept-Encoding. If you don't handle this, Varnish will cache the same page once for each type of encoding, thus wasting server resources. In our case, it would gobble up memory. So add the following commands to vcl_recv to have Varnish cache the content only once:

if (req.http.Accept-Encoding) {
    if (req.http.Accept-Encoding ~ "gzip") {
        # If the browser supports it, we'll use gzip.
        set req.http.Accept-Encoding = "gzip";
    }
    else if (req.http.Accept-Encoding ~ "deflate") {
        # Next, try deflate if it is supported.
        set req.http.Accept-Encoding = "deflate";
    }
    else {
        # Unknown algorithm. Remove it and send unencoded.
        unset req.http.Accept-Encoding;
    }
}

Now, restart Varnish.

[root@bookingwire sridhar]# service varnish restart

Additional configuration for content management systems, especially Drupal
A CMS like Drupal throws up additional challenges when configuring the VCL file. We'll need to include additional directives to handle the various quirks. You can modify the directives below to suit the CMS that you are using. When using a CMS like Drupal, if there are files that you don't want cached for some reason, add the following commands to your default.vcl file in the vcl_recv section:

if (req.url ~ "^/status\.php$" ||
    req.url ~ "^/update\.php$" ||
    req.url ~ "^/ooyala/ping$" ||
    req.url ~ "^/admin/build/features" ||
    req.url ~ "^/info/.*$" ||
    req.url ~ "^/flag/.*$" ||
    req.url ~ "^.*/ajax/.*$" ||
    req.url ~ "^.*/ahah/.*$") {
    return (pass);
}

Varnish sends the length of the content (see the varnishlog output later in this article) so that browsers can display the progress bar. However, in some cases, when Varnish is unable to tell the browser the specified content-length (like streaming audio), you will have to pass the request directly to the Web server. To do this, add the following command to your default.vcl:

if (req.url ~ "^/content/music/$") {
    return (pipe);
}

Drupal has certain files that shouldn't be accessible to the outside world, e.g., cron.php or install.php. However, you should be able to access these files from a set of IPs that your development team uses. At the top of default.vcl, include the following, replacing the IP address block with that of your own:

acl internal {
    "192.168.1.38"/46;
}

Now, to prevent the outside world from accessing these pages, we'll throw an error. So inside the vcl_recv function include the following:

if (req.url ~ "^/(cron|install)\.php$" && !client.ip ~ internal) {
    error 404 "Page not found.";
}

If you prefer to redirect to an error page, then use this instead:

if (req.url ~ "^/(cron|install)\.php$" && !client.ip ~ internal) {
    set req.url = "/404";
}

Our approach is to cache all assets like images, JavaScript and CSS for both anonymous and authenticated users. So include this snippet inside vcl_recv to unset the cookie set by Drupal for these assets:

if (req.url ~ "(?i)\.(png|gif|jpeg|jpg|ico|swf|css|js|html|htm)(\?[a-z0-9]+)?$") {
    unset req.http.Cookie;
}

Drupal throws up a challenge especially when you have enabled several contributed modules. These modules set cookies, thus preventing Varnish from caching assets. Google Analytics, a very popular module, sets a cookie. To remove this, include the following in your default.vcl:

set req.http.Cookie = regsuball(req.http.Cookie, "(^|;\s*)(__[a-z]+|has_js)=[^;]*", "");

If there are other modules that set JavaScript cookies, then Varnish will cease to cache those pages; in which case, you should track down the cookie and update the regex above to strip it.
Once you have done that, head to /admin/config/development/performance, enable the Page Cache setting and set a non-zero time for Expiration of cached pages.
Then update settings.php with the following snippet, replacing the IP address with that of your machine running Varnish.

$conf['reverse_proxy'] = TRUE;
$conf['reverse_proxy_addresses'] = array('37.139.8.42');
$conf['page_cache_invoke_hooks'] = FALSE;
$conf['cache'] = 1;
$conf['cache_lifetime'] = 0;
$conf['page_cache_maximum_age'] = 21600;

You can install the Drupal varnish module (http://www.drupal.org/project/varnish), which provides better integration with Varnish, and include the following lines in your settings.php:

$conf['cache_backends'] = array('sites/all/modules/varnish/varnish.cache.inc');
$conf['cache_class_cache_page'] = 'VarnishCache';

Checking if Varnish is running and serving requests
Instead of logging to a normal log file, Varnish logs to a shared memory segment. Run varnishlog from the command line, access your IP address/URL from the browser and view the Varnish messages. It is not uncommon to see a 503 'service unavailable' message. This means that Varnish is unable to connect to NGINX. In that case, you will see an error line in the log (only the relevant portion of the log is reproduced for clarity).

[root@bookingwire sridhar]# varnishlog

12 StatSess     c 122.164.232.107 34869 0 1 0 0 0 0 0 0
12 SessionOpen  c 122.164.232.107 34870 :80
12 ReqStart     c 122.164.232.107 34870 1343640981
12 RxRequest    c GET
12 RxURL        c /
12 RxProtocol   c HTTP/1.1
12 RxHeader     c Host: 37.139.8.42
12 RxHeader     c User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:27.0) Gecko/20100101 Firefox/27.0
12 RxHeader     c Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
12 RxHeader     c Accept-Language: en-US,en;q=0.5
12 RxHeader     c Accept-Encoding: gzip, deflate
12 RxHeader     c Referer: http://37.139.8.42/
12 RxHeader     c Cookie: __zlcmid=OAdeVVXMB32GuW
12 RxHeader     c Connection: keep-alive
12 FetchError   c no backend connection

12 VCL_call     c error
12 TxProtocol   c HTTP/1.1
12 TxStatus     c 503
12 TxResponse   c Service Unavailable
12 TxHeader     c Server: Varnish
12 TxHeader     c Retry-After: 0
12 TxHeader     c Content-Type: text/html; charset=utf-8
12 TxHeader     c Content-Length: 686
12 TxHeader     c Date: Thu, 03 Apr 2014 09:08:16 GMT
12 TxHeader     c X-Varnish: 1343640981
12 TxHeader     c Age: 0
12 TxHeader     c Via: 1.1 varnish
12 TxHeader     c Connection: close
12 Length       c 686

Resolve the error and you should have Varnish running. But that isn't enough; we should check if it's caching the pages. Fortunately, the folks at the following URL have made it simple for us.

Check if Varnish is serving pages
Visit http://www.isvarnishworking.com/, provide your URL/IP address and you should see your Gold Star! (See Figure 3.) If you don't, but instead see other messages, it means that Varnish is running but not caching.
Then you should look at your code and ensure that it sends the appropriate headers. If you are using a content management system, particularly Drupal, you can check the additional parameters in the VCL file and set them correctly. You have to enable caching in the performance page.

Figure 3: Varnish status result

Running the tests
Running Pingdom tests showed improved response times of 2.14 seconds. If you noticed, there was an improvement in the response time in spite of the payload of the page increasing from 2.9MB to 4.1MB. If you are wondering why it increased, remember, we switched the site to a new theme.
Apache Bench reports better figures, at 744.722 ms.

Figure 4: Pingdom test result after configuring Varnish

Configuring client IP forwarding
Check the IP address for each request in the access logs of your Web servers. For NGINX, the access logs are available at /var/log/nginx, and for Apache, they are available at /var/log/httpd or /var/log/apache2, depending on whether you are running Centos or Ubuntu.
It's not surprising to see the same IP address (of the Varnish machine) for each request. Such a configuration will throw all Web analytics out of gear. However, there is a way out. If you run NGINX, try out the following procedure. Determine the NGINX configuration that you currently run by executing the command below in your command line:

[root@bookingwire sridhar]# nginx -V

Look for the --with-http_realip_module. If this is available, add the following to your NGINX configuration file in the http section. Remember to replace the IP address with that of your Varnish machine. If Varnish and NGINX run on the same machine, do not make any changes.

set_real_ip_from 127.0.0.1;
real_ip_header X-Forwarded-For;

Restart NGINX and check the logs once again. You will see the client IP addresses.
If you are using Drupal, then include the following line in settings.php:

$conf['reverse_proxy_header'] = 'HTTP_X_FORWARDED_FOR';

Other Varnish tools
Varnish includes several tools to help you as an administrator.
varnishstat -1 -f n_lru_nuked: This shows the number of

If you have the Remi repo enabled and the Varnish


cache repo enabled, install them by specifying the defined
repository.

Yum install varnish enablerepo=epel


Yum install varnish enablerepo=varnish-3.0

Our experience has been that Varnish reduces the number


of requests sent to the NGINX server by caching assets, thus
improving page response times. It also acts as a failover
mechanism if the Web server fails.
Figure 5: Apache Bench result after configuring Varnish We had over 55 JavaScript files (two as part of the theme
and the others as part of the modules) in Drupal and we
objects nuked from the cache. aggregated JavaScript by setting the flag in the Performance
Varnishtop: This reads the logs and displays the most page. We found a 50 per cent drop in the number of requests;
frequently accessed URLs. With a number of optional flags, it however, we found that some of the JavaScript files were not
can display a lot more information. loaded on a few pages and had to disable the aggregation.
Varnishhist: Reads the shared memory logs, and displays This is something we are investigating. Our recommendation
a histogram showing the distribution of the last N requests on is not to choose the aggregate JavaScript files in your Drupal
the basis of their processing. CMS. Instead, use the Varnish module (https://drupal.org/
Varnishadm: A command line utility for Varnish. project/varnish).
Varnishstat: Displays the statistics. The module allows you to set long object lifetimes
(Drupal doesnt set it beyond 24 hours), and use Drupals
Dealing with SSL: SSL-offloader, SSL- existing cache expiration logic to dynamically purge
accelerator and SSL-terminator Varnish when things change.
SSL termination is probably the most misunderstood term You can scale this architecture to handle higher loads
in the whole mix. The mechanism of SSL termination is either vertically or horizontally. For vertical scaling, resize
employed in situations where the Web traffic is heavy.
Administrators usually have a proxy to handle SSL requests
before they hit Varnish. The SSL requests are decrypted and
the unencrypted requests are passed to the Web servers. This
is employed to reduce the load on the Web servers by moving
the decryption and other cryptographic processing upstream.
Since Varnish by itself does not process or understand
SSL, administrators employ additional mechanisms to
terminate SSL requests before they reach Varnish. Pound
(http://www.apsis.ch/pound) and Stud (https://github.com/bumptech/stud)
are reverse proxies that handle SSL termination. Stunnel
(https://www.stunnel.org/) is a program that acts as a wrapper
that can be deployed in front of Varnish. Alternatively, you
could also use another NGINX in front of Varnish to terminate SSL.
However, in our case, since only the sign-in pages
required SSL connections, we let Varnish pass all SSL
requests to our backend Web server.
To scale vertically, you could upgrade your VPS to include
additional memory and make that available to Varnish using
the -s directive.
To scale horizontally, i.e., to distribute the requests between
several machines, you could add additional Web servers and
update the round robin directives in the VCL file.
You can take it a bit further by including HAProxy right
upstream and have HAProxy route requests to Varnish, which
then serves the content or passes it downstream to NGINX.
To remove a Web server from the round robin league,
you can improve upon the example that we have mentioned
by writing a small PHP snippet to automatically shut down
or exit() if some checks fail.

Additional repositories
There are other repositories from where you can get the
latest release of Varnish:

wget repo.varnish-cache.org/redhat/varnish-3.0/el6/noarch/varnish-release/varnish-release-3.0-1.el6.noarch.rpm
rpm --nosignature -i varnish-release-3.0-1.el6.noarch.rpm

References
[1] https://www.varnish-cache.org/
[2] https://www.varnish-software.com/static/book/index.html
[3] http://www.lullabot.com/blog/article/configuring-varnish-high-availability-multiple-web-servers

By: Sridhar Pandurangiah
The author is the co-founder and director of Sastra Technologies,
a start-up engaged in providing EDI solutions on the cloud. He can
be contacted at sridhar@sastratechnologies.in / sridharpandu@gmail.com.
He maintains a technical blog at sridharpandu.wordpress.com
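The shut-down check mentioned in the scaling discussion above can be sketched as a tiny health probe. This is a minimal, hypothetical sketch (the function name and threshold are ours; a real deployment would wrap a check like this in the PHP or CGI endpoint that the load balancer polls):

```shell
#!/bin/sh
# Hypothetical health probe: report OK only while the 1-minute load
# average stays under a threshold. A backend whose probe endpoint
# stops answering OK gets dropped from the round robin by the balancer.
health_check() {
    load=$1       # 1-minute load average, e.g., taken from /proc/loadavg
    max_load=$2   # threshold above which this server opts out
    # sh arithmetic is integer-only, so delegate the float comparison to awk
    if awk -v l="$load" -v m="$max_load" 'BEGIN { exit !(l < m) }'; then
        echo "OK"       # healthy: keep serving requests
    else
        echo "BUSY"     # overloaded: balancer should skip this backend
        return 1
    fi
}

# Probe the live machine (guarded, since the box may genuinely be busy)
health_check "$(cut -d' ' -f1 /proc/loadavg)" 4.0 || true
```

The same pattern extends to disk-space or process checks: any failed check makes the probe fall through to the BUSY branch.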

www.OpenSourceForU.com | OPEN SOURCE For You | September 2014 | 73


Admin How To

Use Wireshark to Detect ARP Spoofing


The first two articles in the series on Wireshark, which appeared in the July and August 2014
issues of OSFY, covered a few simple protocols and various methods to capture traffic in a
switched environment. This article describes an attack called ARP spoofing and explains
how you could use Wireshark to capture it.

Imagine an old Hindi movie where the villain and his
subordinate are conversing over the telephone, and the
hero intercepts this call to listen in on their conversation -
a perfect man in the middle (MITM) scenario. Now
extend this to the network, where an attacker intercepts
communication between two computers.
Here are two possibilities with respect to what an attacker
can do to intercepted traffic:
1. Passive attacks (also called eavesdropping or only
listening to the traffic): These can reveal sensitive
information such as clear text (unencrypted) login IDs and
passwords.
2. Active attacks: These modify the traffic and can be used
for various types of attacks such as replay, spoofing, etc.
An MITM attack can be launched against cryptographic
systems, networks, etc. In this article, we will limit our
discussions to MITM attacks that use ARP spoofing.

ARP spoofing
Joseph Goebbels, Nazi Germany's minister for propaganda,
famously said, "If you tell a lie big enough and keep repeating
it, people will eventually come to believe it. The lie can
be maintained only for such time as the state can shield
the people from the political, economic and/or military
consequences of the lie. It thus becomes vitally important
for the state to use all of its powers to repress dissent, for the
truth is the mortal enemy of the lie, and thus by extension, the
truth is the greatest enemy of the state."
So let us interpret this quote by a leader of the infamous
Nazi regime from the perspective of the ARP protocol: If you
repeatedly tell a device who a particular MAC address belongs
to, the device will eventually believe you, even if this is not
true. Further, the device will remember this MAC address only
as long as you keep telling the device about it. Thus, not
securing an ARP cache is dangerous to network security.

Note: From the network security professional's view, it
becomes absolutely necessary to monitor ARP traffic
continuously and limit it to below a threshold. Many managed
switches and routers can be configured to monitor and control
ARP traffic below a threshold.

An MITM attack is easy to understand using this context.
Attackers trying to listen to traffic between any two devices,
say a victim's computer system and a router, will launch an
ARP spoofing attack by sending unsolicited (what this means
is an ARP reply packet sent out without receiving


an ARP request) ARP reply packets with the following
source addresses:
- Towards the victim's computer system: Router IP address
and attacker's PC MAC address;
- Towards the router: Victim's computer IP address and
attacker's PC MAC address.
After receiving such packets continuously, due to ARP
protocol characteristics, the ARP cache of the router and the
victim's PC will be poisoned as follows:
- Router: The MAC address of the attacker's PC registered
against the IP address of the victim;
- Victim's PC: The MAC address of the attacker's PC
registered against the IP address of the router.

The Ettercap tool
ARP spoofing is the most common type of MITM attack, and
can be launched using the Ettercap tool available under Linux
(http://ettercap.github.io/ettercap/downloads.html). A few
sites claim to have Windows executables. I have never tested
these, though. You may install the tool on any Linux distro, or
use distros such as Kali Linux, which has it bundled.
The tool has command line options, but its GUI is easier
and can be started by using:

ettercap -G

Launch the MITM ARP spoofing attack by using Ettercap
menus (Figure 1) in the following sequence (words in italics
indicate Ettercap menus):
- Sniff - Unified sniffing selects the interface to be
sniffed (for example, eth0 for a wired network).
- Hosts - Scan for hosts scans for all active IP addresses
in the eth0 network.
- Hosts - Hosts list displays the list of scanned hosts.
The required hosts are added to Target1 and Target2. An
ARP spoofing attack will be performed so as to read traffic
between all hosts selected under Target1 and Target2.
- Targets - Current targets verifies selection of the
correct targets.
- MITM - ARP poisoning: Sniff remote connections will
start the attack.
The success of the attack can be confirmed as follows:
- In the router, check the ARP cache (for a CISCO router,
the command is show ip arp).
- In the victim PC, use the arp -a command. Figure 2
gives the output of the command before and after a
successful ARP spoofing attack.
- The attacker PC captures traffic using Wireshark to
check unsolicited ARP replies. Once the attack is successful,
the traffic between the two targets will also be captured. Be
careful - if traffic from the victim's PC contains clear text
authentication packets, the credentials could be revealed.

Figure 1: Ettercap menus
Figure 2: Successful ARP poisoning
Figure 3: Wireshark capture on the attacker's PC - ARP packets
Figure 4: Wireshark capture on the attacker's PC - packets sniffed from the victim's PC and router
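The "arp -a before and after" check described above can be automated: a poisoned cache shows one MAC address registered against more than one IP. A small sketch (the function name is ours; it scans any `arp -a`-style text for colon-separated MAC tokens):

```shell
#!/bin/sh
# Flag ARP-cache entries that share one MAC across several IPs -
# the classic symptom of ARP poisoning. Feed it `arp -a` output.
find_duplicate_macs() {
    awk '{
        for (i = 1; i <= NF; i++) {
            n = split($i, parts, ":")
            # a MAC token has six colon-separated parts, 17 chars,
            # and contains only hex digits and colons
            if (n == 6 && length($i) == 17 && $i ~ /^[0-9a-fA-F:]*$/)
                print tolower($i)      # one MAC per line, normalised
        }
    }' | sort | uniq -d                # print MACs seen more than once
}

# Example: pipe the live cache through the filter (quiet if arp is absent)
arp -a 2>/dev/null | find_duplicate_macs
```

Any MAC the function prints deserves a closer look with Wireshark.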


Note that Wireshark gives the information 'Duplicate use of
IP detected' under the Info column once the attack is successful.
Here is how the actual packet travels and is captured after
a successful ARP poisoning attack:
- When the packet from the victim PC starts for the router,
at Layer 2, the poisoned MAC address of the attacker (instead
of the original router MAC) is inserted as the target MAC;
thus the packet reaches the attacker's PC.
- The attacker sees this packet and forwards the same to
the router with the correct MAC address.
- The reply from the router is logically sent towards the
spoofed destination MAC address of the attacker's system
(rather than the victim's PC). It is captured and forwarded by
the attacker to the victim's PC.
- In between, the sniffer software, Wireshark, which is
running on the attacker's PC, reads this traffic.
Here are various ways to prevent ARP spoof attacks:
- Monitor arpwatch logs on Linux
- Use static ARP commands on Windows and Ubuntu
as follows:
Windows: arp -s DeviceIP DeviceMAC
Ubuntu: arp -i eth0 -s DeviceIP DeviceMAC
- Control ARP packets on managed switches

Can MITM ARP spoofing be put to fruitful use?
Definitely! Consider capturing packets from a system
suspected of malware (virus) infection in a switched
environment. There are two ways to do this - use a wiretap
or MITM ARP spoofing. Sometimes, you may not have a
wiretap handy or may not want the system to go offline even
for the time required to connect the wiretap. Here, MITM
ARP spoofing will definitely serve the purpose.

Note: This attack is specifically targeted towards OSI
Layer 2 - the data link layer; thus, it can be executed only
from within your network. Be assured, this attack cannot be
used sitting outside the local network to sniff packets between
your computer and your bank's Web server - the attacker must
be within the local network.

Before we conclude, let us understand an important
Wireshark feature called capture filters.
We did go through the basics of display filters in
the previous article. But, in a busy network, capturing
all traffic and using display filters to see only the desired
traffic may require a lot of effort. Wireshark's capture
filters provide a way out.
In the beginning, before selecting the interface, you can
click on Capture Options and use capture filters to capture
only the desired traffic. Click on the Capture filter button to
see various filters, such as ARP, No ARP, TCP only, UDP
only, traffic from specific IP addresses, and so on (Figure 5).
Select the desired filter and Wireshark will capture only the
defined traffic.
For example, MITM ARP spoofing can be captured
using the ARP filter from Capture filters instead of Display
filtering the entire captured traffic.

Figure 5: Wireshark's capture filters

Note: Packets captured using the test scenarios described
in this series of articles are capable of revealing sensitive
information such as login names and passwords. Using ARP
spoofing, in particular, will disturb the network temporarily.
Make sure to use these techniques only in a test environment.
If at all you wish to use them in a live environment, do not
forget to avail explicit written permission before doing so.

Keep a watch on this column for exciting Wireshark features!

By: Rajesh Deodhar
The author has been an IS auditor and network security
consultant-trainer for the last two decades. He is a BE in Industrial
Electronics, and holds CISA, CISSP, CCNA and DCL certifications.
Please feel free to contact him at rajesh@omegasystems.co.in
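For reference, Wireshark's capture filters use the libpcap/BPF filter syntax (display filters use a separate syntax of their own), so the entries in the Capture filter list correspond to expressions like the following (the IP address is only an example):

```
not broadcast and not multicast
not arp
ip
host 192.168.0.1
tcp
udp
arp          # handy for watching an ARP spoofing attack in isolation
```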



Insight Admin

Make Your Own PBX with Asterisk

This article, the first of a multi-part series, familiarises readers with Asterisk, which is a
software implementation of a private branch exchange (PBX).

Asterisk is a revolutionary open source platform
started by Mark Spencer, and has shaken up the
telecom world. This series is meant to familiarise
you with it, and educate you enough to be a part of it in
order to enjoy its many benefits.
If you are a technology freak, you will be able to make
your own PBX for your office or home after going through
this series. As a middle level manager, you will be able to
guide a techie to do the job, while senior level managers with
a good appreciation of the technology and minimal costs
involved would be in a position to direct somebody to set up
an Asterisk PBX. If you are an entrepreneur, you can adopt
one of the many business models with Asterisk. As you will
see, it is worthwhile to at least evaluate the option.

History
In 1999, Mark Spencer of Digium fame started a Linux
technical support company with US$ 4000. Initially, he had
to be very frugal; so buying one of those expensive PBXs
was unthinkable. Instead, he started programming a PBX for
his requirements. Later, he published the software as open
source and a lot of others joined the community to further
develop the software. The rest is history.

The statistics
Today, Asterisk claims to have 2 million downloads every
year, and is running on over 1 million servers, with 1.3
million new endpoints created annually. A 2012 statistic by
Eastern Management claims that 18 per cent of all PBX lines
in North America are open source-based and the majority of
them are on Asterisk. Indian companies have also started
adopting Asterisk over the past few years. The initial thrust
was for international call centres. A large majority of the
smaller call centres (50-100 seater) use 'Vicidial', another
open source application based on Asterisk. IP PBX
penetration in the Indian market is not very high due to
certain regulatory misinterpretations. Anyhow, this unclear
environment is gradually gaining clarity, and very soon, we
will see astronomic growth of Asterisk in the Indian market.


The call centre boom also led to the development of the
Asterisk ecosystem comprising Asterisk-based product
companies, software supporters, hardware resellers, etc,
across India. This presents a huge opportunity for entrepreneurs.

Photo: Mark Spencer, the founder of Asterisk

Some terminology
Before starting, I would like to introduce some basic terms
for the benefit of readers who are novices in this field. Let us
start with the PBX or private branch exchange, which is the
heart of all corporate communication. All the telephones
seen in an office environment are connected to the PBX,
which in turn connects you to the outside world. The
internal telephones are called subscribers and the external
lines are called trunk lines.
The trunk lines connect the PBX to the outside world
or the PSTN (Public Switched Telephony Network).
Analogue trunks (FXO - Foreign eXchange Office) are
based on very old analogue technology, which is still
in use in our homes and in some companies. Digital
trunk technology or ISDN (Integrated Services Digital
Network) evolved in the '80s with mainly two types of
connections - BRI (Basic Rate Interface) for SOHO
(small office/home office) use, and PRI (Primary Rate
Interface) for corporate use. In India, analogue trunks
are used for SOHO trunking, but BRI is no longer used
at all. Anyhow, PRI is quite popular among companies.
IP/SIP (Internet Protocol/Session Initiation Protocol)
trunking has been used by international call centres for
quite some time. Now, many private providers like Tata
Telecom have started offering SIP trunking for domestic
calls also. The option of GSM trunking through a GSM
gateway using SIM cards is also quite popular, due to the
flexibility offered in costs, prepaid options and network
availability.
The users connected to the PBX are called subscribers.
Analogue telephones (FXS - Foreign eXchange Subscriber)
are still very commonly used and are the cheapest. As
Asterisk is an IP PBX, we need a VoIP FXS gateway to
convert the IP signals to analogue signals. Asterisk supports
IP telephones, mainly using SIP.
Nowadays, Wi-Fi clients are available even for
smartphones, which enable the latter to work like extensions.
These clients bring in a revolutionary transformation to the
telephony landscape - analogous to paperless offices and
telephone-less desks. The same smartphone used to make
calls over GSM networks becomes a dual-purpose phone,
also working like a desk extension. Just for a minute, consider
the limitless possibilities enabled by this new transformed
extension phone.
Extension roaming: Employees can roam about
anywhere in the office - participate in a conference, visit
a colleague, doctors can visit their in-patients - and yet
receive calls as if they were seated at their desks.
External extensions: The employees could be at home,
at a friend's house, or even out making a purchase, and
still receive the same calls, as if at their desks.
Increased call accountability: Calls can be recorded and
monitored for quality or security purposes at the PBX.
Lower telephone costs: The volume of calls passing
through the PBX makes it possible to negotiate with the
service provider for better rates.
The advantages that a roaming extension brings are many,
which we will explore in more detail in subsequent editions.
Let us look into the basics of Asterisk. "Asterisk is
like a box of Lego blocks for people who want to create
communications applications. It includes all the building
blocks needed to create a PBX, an IVR system, a conference
bridge and virtually any other communications app you can
imagine," says an excerpt from asterisk.org.
Asterisk is actually a piece of software. In very simple
and generic terms, the following are the steps required to
create an application based on it:
1. Procure standard hardware.
2. Install Linux.
3. Download Asterisk software.
4. Install Asterisk.
5. Configure it.
6. Procure hardware interfaces for the trunk line and
configure them.
7. Procure hardware for subscribers and configure them.
8. You're then ready to make your calls.
Procure standard desktop or server hardware, based on
Pentium, Xeon, i3, etc. RAM is an important factor, and
could be 2GB, 4GB or 8GB. These two factors decide the
number of concurrent calls. Hard disk capacity of 500GB or
1TB is mainly for space to store voice files for VoiceMail
or VoiceLogger. The hard disk's speed also influences the
concurrent calls.
The next step is to choose a suitable OS - Fedora,
Debian, CentOS or Ubuntu are well suited for this
purpose. After this, Asterisk software may be downloaded
from www.asterisk.org/downloads/. Either the newest LTS
(Long Term Support) release or the latest standard version
can be downloaded. LTS versions are released once in
four years. They are more stable, but have fewer features
than the standard version, which is released once a year.
Once the software is downloaded, the installation may be
carried out as per the instructions provided. We'll go into
the details of the installation in later sessions.
The download page also offers the option to download
AsteriskNow, which is an ISO image of Linux, Asterisk and
the FreePBX GUI. If you prefer a very quick and simple
installation without much flexibility, you may choose this variant.
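To give a feel for step 5 ('Configure it'), here is a minimal, hypothetical sketch of what two SIP extensions look like in Asterisk's plain-text configuration files. The names, secrets and extension numbers are ours, and real deployments need more options:

```
; sip.conf - define two SIP subscribers (illustrative values only)
[6001]
type=friend
secret=ChangeMe6001
host=dynamic
context=internal

[6002]
type=friend
secret=ChangeMe6002
host=dynamic
context=internal

; extensions.conf - the dial plan: ring each phone for 20 seconds
[internal]
exten => 6001,1,Dial(SIP/6001,20)
exten => 6002,1,Dial(SIP/6002,20)
```

With just this much, the two phones can register and call each other; GUIs like FreePBX generate equivalent files behind the scenes.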


After the installation, one needs to create the trunks and
users, and set up some more features to be able to start using
the system. The administrators can make these configurations
directly in the dial plan, or there are GUIs like FreePBX,
which enable easy administration.
Depending on the type of trunk chosen, we need to
procure hardware. If we are connecting a normal analogue
line, an FXO card with one port needs to be procured, in PCI
or PCIe format, depending on the slots available on the server.
After inserting the card, it has to be configured. Similarly, if
you have to connect analogue phones, you need to procure
FXS gateways. IP phones can be directly connected to the
system over the LAN.
Exploring the PBX further, you will be astonished
by the power of Asterisk. It comes with a built-in voice
logger, which can be customised to record either all calls
or those from selective people. In most proprietary PBXs,
this would have been an additional component. Asterisk not
only provides a voice mail box, but also has the option to
convert the voice mail to an attachment that can be sent to
you as an email. The Asterisk IVR is very powerful; it has
multiple levels, digit collection, database and Web-service
integration, and speech recognition.
There are also lots of applications based on Asterisk,
like Vicidial, which is a call-centre suite for inbound
and outbound dialling. For the latter, one can configure
campaigns with lists of numbers, dial these numbers
in predictive dialling mode and connect to the agents.
Similarly, inbound dialling can also be configured with
multiple agents, and the calls routed based on multiple
criteria like the region, skills, etc.
Asterisk also easily integrates with multiple enterprise
applications (like CRM and ERP) over CTI (computer
telephony interfaces) like TAPI (Telephony API) or by using
simple URL integration.
O'Reilly has a book titled 'Asterisk: The Future of
Telephony', which can be downloaded. I would like to take
you through the power of Asterisk in subsequent issues, so
that you and your network can benefit from this remarkable
product, which is expected to change the telephony landscape
of the future.

By: Devasia Kurian
The author is the founder and CEO of *astTECS.

Please share your feedback/thoughts/views via email at osfyedit@efy.in



Open Gurus How To

How to Make Your USB Boot


with Multiple ISOs

This DIY article is for systems admins and software hobbyists, and teaches them
how to create a bootable USB that is loaded with multiple ISOs.

Systems administrators and other Linux enthusiasts use
multiple CDs or DVDs to boot and install operating
systems on their PCs. But it is somewhat difficult and
costly to maintain one CD or DVD for each OS (ISO image
file) and to carry around all these optical disks; so, let's look
at the alternative - a multi-boot USB.
The Internet provides many ways (in Windows and
in Linux) to convert a USB drive into a bootable USB.
Typically, one can create a bootable USB that contains a
single OS. So, if you want to change the OS (ISO image),
you have to format the USB. To avoid formatting the USB
each time the ISO is changed, use Easy2Boot. In my case,
the RMPrepUSB website saved me from unnecessarily
formatting the USB drive by introducing the Easy2Boot
option. Easy2Boot is open source - it consists of plain text
batch files and open source grub4dos utilities. It has no
proprietary software.

Making the USB drive bootable
To make your USB bootable, just connect it to your Linux
system. Open the disk utility or gparted tool and format it as
Fat32 (0x0c). You can choose the ext2/ext3 file systems also,
but they will not load some OSs. So, Fat32 is the best choice
for most of the ISOs.
Now download grub4dos-0.4.5c (not grub4dos-0.4.6a)
from https://code.google.com/p/grub4dos-chenall/downloads/list
and extract it on the desktop.
Next, install grub4dos on the MBR with a zero second
time-out on your USB stick, by typing the following
command at the terminal:

sudo ~/Desktop/grub4dos-0.4.5c/bootlace.com --time-out=0 /dev/sdb

Note: You can change the path to your grub4dos folder.
sdb is your USB and can be checked by the df command in a
terminal or by using the gparted or disk utility tools.

Copying Easy2Boot files to the USB
Your pen drive is ready to boot, but we need menu files,
which are necessary to detect the .ISO files in your USB.

Figure 1: Folders for different OSs
Figure 2: Easy2Boot OS selection menu
Figure 3: Ubuntu boot menu

The menu (.mnu) files and other boot-related files can
be downloaded from the Easy2Boot website. Extract
the Easy2Boot file to your USB drive and you can
observe the different folders that are related to different
operating systems and applications. Now, just place the
corresponding .ISO file in the corresponding folder. For
example, all the Linux-related .ISO files should be placed
in the Linux folder, all the backup-Linux related files
should be placed in the corresponding folder, utilities
should be placed in the utilities folder, and so on.
Your USB drive is now ready to be loaded with any
(almost all) Linux image files, backup utilities and some
other Windows related .ISOs without formatting it. After
placing your required image files, either installation
ISOs or live ISOs, you need to defragment the folders in
the USB drive. To defrag your USB drive, download the
defragfs-1.1.1.gz file from http://defragfs.sourceforge.net/download.html
and extract it to the desktop. Now run the following
commands at the terminal:

sudo umount /dev/sdb1

sdb1 is the partition on my USB which has the E2B files.

sudo mkdir ~/Desktop/usb && sudo mount /dev/sdb1 ~/Desktop/usb
sudo perl ~/Desktop/defragfs ~/Desktop/usb -f

That's it. Your USB drive is ready with a number of ISO
files to boot on any system. Just run the defragfs command
every time you modify (add or remove) the ISO files in the
USB to make all the files in the drive contiguous.

Using the QEMU emulator for testing
After completing the final stage, test how well your USB
boots with lots of .ISOs loaded on it, using the QEMU tool.
Alternatively, you can choose any of the virtualisation tools
like VirtualBox or VMware. We used QEMU (it is easy
but somewhat slow) on our Linux machine by typing the
following command at the terminal:

sudo qemu -m 512M /dev/sdb

Note: The loading of every .ISO file in the corresponding
folder is based only on the .mnu file for that .ISO. So, by
creating your own .mnu file you can add your own flavour to
the USB menu list. For further details and help regarding .mnu
file creation, just visit http://www.rmprepusb.com/tutorials.

Your USB will boot and the Easy2Boot OS selection menu
will appear. Choose the OS you want, which is placed under
the corresponding folder. You can use your USB in real
time, and can add or remove the .ISOs in the corresponding
folders simply by copy-pasting. You can use the same USB
for copying documents and other files by making all the files
that belong to Easy2Boot contiguous.

References
[1] http://www.rmprepusb.com/tutorials
[2] https://code.google.com/p/grub4dos-chenall/downloads/list
[3] http://www.easy2boot.com/download/
[4] http://defragfs.sourceforge.net/download.html

By: Gaali Mahesh and Nagaram Suresh Kumar
The authors are assistant professors at VNITSW (Vignan's Nirula
Institute of Technology and Science for Women, Andhra Pradesh).
They blog at surkur.blogspot.in, where they share some tech tricks
and their practical experiences with open source. You can reach
them at mahe1729@gmail.com and nagaramsuresh@gmail.com.



Open Gurus Let's Try

How to Cross Compile the Linux Kernel


with Device Tree Support
This article is intended for those who would like to experiment with the many embedded
boards in the market but do not have access to them for one reason or the other. With the
QEMU emulator, DIY enthusiasts can experiment to their heart's content.

You may have heard of the many embedded target
boards available today, like the BeagleBoard,
Raspberry Pi, BeagleBone, PandaBoard, Cubieboard,
Wandboard, etc. But once you decide to start development for
them, the right hardware with all the peripherals may not be
available. The solution to starting development on embedded
Linux for ARM is by emulating hardware with QEMU, which
can be done easily without the need for any hardware. There
are no risks involved, too.
QEMU is an open source emulator that can emulate
the execution of a whole machine with a full-fledged OS
running. QEMU supports various architectures, CPUs and
target boards. To start with, let's emulate the Versatile Express
board as a reference, since it is simple and well supported by
recent kernel versions. This board comes with the Cortex-A9
(ARMv7) based CPU.
In this article, I would like to cover the process of cross
compiling the Linux kernel for ARM architecture with device
tree support. It is focused on covering the entire process of
working - from boot loader to file system with SD card
support. As this process is almost similar to working with
most target boards, you can apply these techniques on other
boards too.

Device tree
Flattened Device Tree (FDT) is a data structure that
describes hardware; it is an initiative that comes from Open
Firmware. From the device tree perspective, the kernel no
longer contains the hardware description, which is located in
a separate binary called the device tree blob (dtb) file. So, one
compiled kernel can support various hardware configurations
within a wider architecture family. For example, the same
kernel built for the OMAP family can work with various
targets like the BeagleBoard, BeagleBone, PandaBoard, etc,
with dtb files. The boot loader should be customised to
support this, as two binaries - the kernel image and the dtb
file - are to be loaded in memory. The boot loader passes
hardware descriptions to the kernel in the form of dtb files.
Recent kernel versions come with a built-in device tree
compiler, which can generate all dtb files related to the
selected architecture family from device tree source (dts)
files. Using the device tree for ARM has become mandatory
for all new SOCs, with support from recent kernel versions.


Building QEMU from sources
You may obtain pre-built QEMU binaries from your distro
repositories or build QEMU from sources, as follows.
Download the recent stable version of QEMU, say
qemu-2.0.tar.bz2, extract and build it:

tar -jxvf qemu-2.0.tar.bz2
cd qemu-2.0
./configure --target-list=arm-softmmu,arm-linux-user --prefix=/opt/qemu-arm
make
make install

You will observe commands like qemu-arm, qemu-system-arm
and qemu-img under /opt/qemu-arm/bin. Among these,
qemu-system-arm is useful to emulate the whole system
with OS support.

Preparing an image for the SD card
QEMU can emulate an image file as storage media in the
form of the SD card, flash memory, hard disk or CD drive.
Let's create an image file using qemu-img in raw format and
create a FAT file system in that, as follows. This image file
acts like a physical SD card for the actual target board:

qemu-img create -f raw sdcard.img 128M
#optionally you may create a partition table in this image
#using tools like sfdisk, parted
mkfs.vfat sdcard.img
#mount this image under some directory and copy required files
mkdir /mnt/sdcard
mount -o loop,rw,sync sdcard.img /mnt/sdcard

Setting up the toolchain
We need a toolchain, which is a collection of various cross
development tools to build components for the target
platform. Getting a toolchain for your Linux kernel is
always tricky, so until you are comfortable with the process
please use tested versions only. I have tested with pre-built
toolchains from the Linaro organisation, which can be got
from the following link http://releases.linaro.org/14.0.4/components/toolchain/binaries/gcc-linaro-arm-linux-gnueabihf-4.8-2014.04_linux.tar.xz
or any latest stable version. Next, set the path for cross tools
under this toolchain, as follows:

tar -xvf gcc-linaro-arm-linux-gnueabihf-4.8-2014.04_linux.tar.xz -C /opt
export PATH=/opt/gcc-linaro-arm-linux-gnueabihf-4.8-2014.04_linux/bin:$PATH

You will notice various tools like gcc, ld, etc, under
/opt/gcc-linaro-arm-linux-gnueabihf-4.8-2014.04_linux/bin
with the prefix arm-linux-gnueabihf-

Building mkimage
The mkimage command is used to create images for use with
the u-boot boot loader. Here, we'll use this tool to transform
the kernel image to be used with u-boot. Since this tool is
available only through u-boot, we need to go for a quick build
of this boot loader to generate mkimage. Download a recent
stable version of u-boot (tested on u-boot-2014.04.tar.bz2)
from ftp.denx.de/pub/u-boot:

tar -jxvf u-boot-2014.04.tar.bz2
cd u-boot-2014.04
make tools-only

Now, copy mkimage from the tools directory to any
directory under the standard path (like /usr/local/bin) as a
super user, or set the path to the tools directory each time,
before the kernel build.

Building the Linux kernel
Download the most recent stable version of the kernel source
from kernel.org (tested with linux-3.14.10.tar.xz):

tar -xvf linux-3.14.10.tar.xz
cd linux-3.14.10
make mrproper  #clean all built files and configuration files
make ARCH=arm vexpress_defconfig  #default configuration for given board
make ARCH=arm menuconfig  #customize the configuration

Then, to customise the kernel configuration (Figure 1),
follow the steps listed below:
1) Set a personalised string, say -osfy-fdt, as the local
version of the kernel under general setup.
2) Ensure that ARM EABI and old ABI compatibility are
enabled under kernel features.
3) Under device drivers, block devices, enable RAM disk
support for initrd usage as a static module, and increase the
default size to 65536 (64MB).
You can use the arrow keys to navigate between the various
options and the space bar to select among the various states
(blank, m or *).

Figure 1: Kernel configuration - main menu


and the space bar to select among the various states (blank, m or *).
4) Make sure devtmpfs is enabled under the Device Drivers and Generic Driver options.

Figure 2: Kernel configuration - RAM disk support

Now, let's go ahead with building the kernel, as follows:

#generate kernel image as zImage and necessary dtb files
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- zImage dtbs
#transform zImage to use with u-boot
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- uImage \
LOADADDR=0x60008000
#copy necessary files to sdcard
cp arch/arm/boot/zImage /mnt/sdcard
cp arch/arm/boot/uImage /mnt/sdcard
cp arch/arm/boot/dts/*.dtb /mnt/sdcard
#Build dynamic modules and copy to a suitable destination
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- modules
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- modules_install \
INSTALL_MOD_PATH=<mount point of rootfs>

You may skip the last two steps for the moment, as the given configuration avoids dynamic modules; all the necessary modules are configured as static.

Getting rootfs
We require a file system to work with the kernel we have built. Download the pre-built rootfs image to test with QEMU from http://downloads.yoctoproject.org/releases/yocto/yocto-1.5.2/machines/qemu/qemuarm/core-image-minimal-qemuarm.ext3 and copy it to the SD card (/mnt/image), renaming it rootfs.img for easy usage. You may obtain the rootfs image from some other repository, or build it from sources using Busybox.

Your first try
Let's boot this kernel image (zImage) directly, without u-boot, as follows:

export PATH=/opt/qemu-arm/bin:$PATH
qemu-system-arm -M vexpress-a9 -m 1024 -serial stdio \
-kernel /mnt/sdcard/zImage \
-dtb /mnt/sdcard/vexpress-v2p-ca9.dtb \
-initrd /mnt/sdcard/rootfs.img \
-append "root=/dev/ram0 console=ttyAMA0"

In the above command, we are treating rootfs as an initrd image, which is fine when rootfs is of a small size. You can connect larger file systems in the form of a hard disk or an SD card. Let's try out rootfs through an SD card:

qemu-system-arm -M vexpress-a9 -m 1024 -serial stdio \
-kernel /mnt/sdcard/zImage \
-dtb /mnt/sdcard/vexpress-v2p-ca9.dtb \
-sd /mnt/sdcard/rootfs.img \
-append "root=/dev/mmcblk0 console=ttyAMA0"

In case the SD card image file holds a valid partition table, we need to refer to the individual partitions, like /dev/mmcblk0p1, /dev/mmcblk0p2, etc. Since the current image file is not partitioned, we can refer to it by the device file name /dev/mmcblk0.

Building u-boot
Switch back to the u-boot directory (u-boot-2014.04), build u-boot as follows and copy it to the SD card:

make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- vexpress_ca9x4_config
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf-
cp u-boot /mnt/image
#you can go for a quick test of the generated u-boot as follows
qemu-system-arm -M vexpress-a9 -kernel /mnt/sdcard/u-boot -serial stdio

Let's ignore errors here, such as u-boot not being able to locate the kernel image or any other suitable files.

Figure 3: U-boot loading

The final steps
Let's boot the system with u-boot, using an image file such as an SD card, and make sure the QEMU PATH is not disturbed. Unmount the SD card image and then boot using QEMU.

umount /mnt/sdcard
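The commands above assume an SD-card image file already exists. If one is not at hand, a blank image can be made from scratch; the following is only a sketch, with the 64MB size and the sdcard.img name being illustrative choices:

```shell
# Create an empty 64MB file to act as the emulated SD card.
dd if=/dev/zero of=sdcard.img bs=1M count=64 2>/dev/null
# Put an ext3 file system straight onto the file (no partition table),
# so the guest kernel sees it as the whole device, /dev/mmcblk0.
# -F is needed because sdcard.img is a regular file, not a block device.
mkfs.ext3 -q -F sdcard.img 2>/dev/null || echo "mkfs.ext3 not available"
ls -l sdcard.img
```

The image can then be loopback-mounted on the host to copy files in before handing it to QEMU.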




fatload mmc 0:0 0x80200000 uImage
fatload mmc 0:0 0x80100000 vexpress-v2p-ca9.dtb
setenv bootargs 'console=ttyAMA0 root=/dev/ram0 rw initrd=0x82000000,8388608'
bootm 0x80200000 - 0x80100000

Ensure a space before and after the '-' symbol in the bootm command above.
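The load addresses used at the u-boot prompt are chosen so that the images do not overlap in RAM; a quick bit of shell arithmetic confirms the spacing (8388608 is the rootfs size passed via initrd= above):

```shell
# Sanity-check the memory map used at the u-boot prompt: the dtb,
# kernel and rootfs images must not overlap once loaded.
dtb=$((0x80100000)); kernel=$((0x80200000)); rootfs=$((0x82000000))
echo "room for dtb   : $((kernel - dtb)) bytes"    # 1MB before the kernel
echo "room for kernel: $((rootfs - kernel)) bytes" # ~30MB before the rootfs
printf 'rootfs ends at : 0x%x\n' $((rootfs + 8388608)) # 8MB rootfs image
```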
Log in using root as the username and a blank password to play around with the system.
I hope this article proves useful for bootstrapping with embedded Linux, and for teaching the concepts when no hardware is available.

Figure 4: Loading of kernel with FDT support
qemu-system-arm -M vexpress-a9 -sd sdcard.img -m 1024 -serial stdio -kernel u-boot

You can stop autoboot by hitting any key within the time limit, and enter the following commands at the u-boot prompt to load rootfs.img, uImage and the dtb files from the SD card to suitable memory locations, without overlapping. Also, set the kernel boot parameters using setenv, as shown below (here, 0x82000000 stands for the location of the loaded rootfs image and 8388608 is the size of the rootfs image).

Acknowledgements
I thank Babu Krishnamurthy, a freelance trainer, for his valuable inputs on embedded Linux and OMAP hardware during the course of my embedded journey. I am also grateful to C-DAC for the good support I've received.

References
[1] elinux.org/Qemu
[2] 'Device Tree for Dummies' by Thomas Petazzoni (free-electrons.com)
[3] Few inputs taken from en.wikipedia.org/wiki/Device_tree
[4] mkimage man page from u-boot documentation
Note: The following commands are internal to u-boot
and must be entered within the u-boot prompt.
fatls mmc 0:0 #list out partition contents
fatload mmc 0:0 0x82000000 rootfs.img #note down the size of the image being loaded

By: Rajesh Sola
The author is a faculty member of C-DAC's Advanced Computing Training School, Pune, in the embedded systems domain. You can reach him at rajeshsola@gmail.com.



Open Gurus How To

Contiki OS: Connecting Microcontrollers to the Internet of Things

As the Internet of Things becomes more of a reality, Contiki, an open source OS, allows DIY enthusiasts to experiment with connecting tiny, low-cost, low-power microcontrollers to the Internet.

Contiki is an open source operating system for connecting tiny, low-cost, low-power microcontrollers to the Internet. It is preferred because it supports various Internet standards and rapid development, offers a selection of hardware, has an active community to help, and has commercial support bundled with an open source licence. Contiki is designed for tiny devices, and thus its memory footprint is far smaller than that of other systems. It supports full TCP with IPv6, and the devices' power management is handled by the OS. All the modules of Contiki are loaded and unloaded at run time; it implements protothreads, uses a lightweight file system, and supports various hardware platforms with sleepy routers (routers which sleep between message relays).
One important feature of Contiki is its use of the Cooja simulator for emulation in case the hardware devices are not available.

Installation of Contiki
Contiki can be downloaded as Instant Contiki, which is available as a single download that contains an entire Contiki development environment. It is an Ubuntu Linux virtual machine that runs in VMware Player, and has Contiki and all the development tools, compilers and simulators used in Contiki development already installed. Most users prefer Instant Contiki over the source code binaries. The current version of Contiki (at the time of writing) is 2.7.
Step 1: Install VMware Player (which is free for academic and personal use).
Step 2: Download the Instant Contiki virtual image, which is approximately 2.5 GB in size (http://sourceforge.net/projects/contiki/files/Instant%20Contiki/), and unzip it.
Step 3: Open the virtual machine and boot the Contiki OS; then wait till the login screen appears.
Step 4: Input the password as 'user'; this brings up the Ubuntu (Contiki) desktop.

Running the simulation
To run a simulation, Contiki comes with many prebuilt modules that can be readily run on the Cooja simulator or on the real hardware platform. There are two methods of opening the Cooja simulator window.
Method 1: On the desktop, as shown in Figure 1, double click the Cooja icon. It will compile the binaries the first time, and open the simulation windows.
Method 2: Open the terminal and go to the Cooja directory:

pradeep@localhost$] cd contiki/tools/cooja
pradeep@localhost$] ant run

You can see the simulation window as shown in Figure 2.

Creating a new simulation
To create a simulation in Contiki, go to the File menu -> New Simulation and name it as shown in Figure 3. Select any one radio medium, in this case Unit Disk Graph Medium (UDGM): Distance Loss, and click Create. Figure 4 shows the simulation window, which contains the following sub-windows.
Network window: This shows all the motes in the simulated network.
Timeline window: This shows all the events over time.
Mote output window: All serial port outputs will be shown here.




Figure 1: Contiki OS desktop
Figure 2: Cooja compilation
Figure 3: New simulation

Notes window: User notes information can be put here.
Simulation control window: Users can start, stop and pause the simulation from here.

Figure 4: Simulation window

Adding the sensor motes
Once the simulation window is opened, motes can be added to the simulation using Menu: Motes -> Add Motes. Since we are adding the motes for the first time, the type of mote has to be specified. There are more than 10 types of motes supported by Contiki. Here are some of them:
MicaZ
Sky
Trxeb1120
Trxeb2520
cc430
ESB
eth11
Exp2420
Exp1101
Exp1120
WisMote
Z1
Contiki will generate object code for these motes to run on the real hardware, and also to run on the simulator if the hardware platform is not available.
Step 1: To add a mote, go to Add Motes -> Select any of the motes given above -> MicaZ mote. You will get the screen shown in Figure 5.
Step 2: Cooja opens the Create Mote Type dialogue box, which gives the name of the mote type as well as the Contiki application that the mote type will run. For this example, click the button on the right hand side to choose the Contiki application, and select /home/user/contiki/examples/hello-world/hello-world.c. Then, click Compile.
Step 3: Once it has compiled without errors, click Create (Figure 5).
Step 4: Now the screen asks you to enter the number of motes to be created and their positions (random, ellipse, linear or manual positions).
In this example, 10 motes are created. Click the Start button in the Simulation Control window and enable the motes' Log Output: printf() statements in the View menu of the Network window. The Network window shows the output Hello World in the sensors. Figure 6 illustrates this.
This is a simple output in the Network window. If real MicaZ motes are connected, Hello World will be displayed on the LCD panel of the sensor motes. The overall output is shown in Figure 7.
The output of the above Hello World application can also be obtained using the terminal. To compile and test the program, go into the hello-world directory:

pradeep@localhost $] cd /home/user/contiki/examples/hello-world
pradeep@localhost $] make

This will compile the Hello World program for the native target, which causes the entire Contiki operating system and the Hello World application to be compiled into a single program that can be run by typing the following command (depicted in Figure 8):




Figure 5: Mote creation and compilation in Contiki
Figure 7: Simulation window of Contiki
Figure 8: Compilation using the terminal

pradeep@localhost$] ./hello-world.native

This will print out the following text:

Contiki initiated, now starting process scheduling
Hello, world

Here is the C source code of the above Hello World application:

#include "contiki.h"
#include <stdio.h> /* For printf() */
/*---------------------------------------------------------------------------*/
PROCESS(hello_world_process, "Hello world process");
AUTOSTART_PROCESSES(&hello_world_process);
/*---------------------------------------------------------------------------*/
PROCESS_THREAD(hello_world_process, ev, data)
{
  PROCESS_BEGIN();

  printf("Hello, world\n");

  PROCESS_END();
}

Figure 6: Log output in motes

The program will then appear to hang, and must be stopped by pressing Control + C.

Developing new modules
Contiki comes with numerous pre-built modules, like IPv6, IPv6 UDP, hello world, sensor nets, EEPROM, IRC, Ping, Ping-IPv6, etc. These modules can run with all the sensors, irrespective of their make. Also, there are modules that run only on specific sensors. For example, the energy of a Sky mote can be used only on Sky motes, and gives errors if run with other motes like Z1 or MicaZ.
Developers can build new modules for various sensor motes that can be used with different sensor BSPs using conventional C programming, and then deploy them in the corresponding sensors.

The Internet of Things is an emerging technology that leads to concepts like smart cities, smart homes, etc. Implementing the IoT is a real challenge, but the Contiki OS can be of great help here. It can be very useful for deploying applications like automatic lighting systems in buildings, smart refrigerators, wearable computing systems, and domestic power management for homes and offices.

References
[1] http://www.contiki-os.org/

By: T S Pradeep Kumar
The author is a professor at VIT University, Chennai. He has two websites: http://www.nsnam.com and http://www.pradeepkumar.org. He can be contacted at pradeepkumarts@gmail.com.



Lets Try For U & Me

This article introduces the reader to Nix, a reliable, multi-user, multi-version, portable,
reproducible and purely functional package manager. Software enthusiasts will find it a
powerful package manager for Linux and UNIX systems.

Linux is versatile and full of choices. Every other day you wake up to hear about a new distro. Most of these are based on a more famous distro and use its package manager. There are many package managers, like Zypper and Yum for Red Hat-based systems; Aptitude and apt-get for Debian-based systems; and others like Pacman and Emerge. No matter how many package managers you have, you may still run into dependency hell, or you may not be able to install multiple versions of the same package, especially for tinkering and testing. If you frequently mess up your system, you should try out Nix, which is more than just another package manager.
Nix is a purely functional package manager. According to its site, "Nix is a powerful package manager for Linux and other UNIX systems that makes package management reliable and reproducible. It provides atomic upgrades and roll-backs, side-by-side installation of multiple versions of a package, multi-user package management and easy set-up of build environments." Here are some reasons for which the site recommends you ought to try Nix.
Reliable: Nix's purely functional approach ensures that installing or upgrading one package cannot break other packages.
Reproducible: Nix builds packages in isolation from each other. This ensures that they are reproducible and do not have undeclared dependencies. So if a package works on one machine, it will also work on another.
It's great for developers: Nix makes it simple to set up and share build environments for your projects, regardless of what programming languages and tools you're using.
Multi-user, multi-version: Nix supports multi-user package management. Multiple users can share a common Nix store securely without the need to have root privileges to install software, and can install and use different versions of a package.
Source/binary model: Conceptually, Nix builds packages from source, but can transparently use binaries from a binary cache, if available.
Portable: Nix runs on Linux, Mac OS X, FreeBSD and other systems. Nixpkgs, the Nix packages collection, contains thousands of packages, many pre-compiled.

Installation
Installation is pretty straightforward on Linux and Macs; everything is handled magically for you by a script, but there are some pre-requisites like sudo, curl and bash, so make sure you have them installed before moving on. Type the following command at a terminal:

bash <(curl https://nixos.org/nix/install)

It will ask for sudo access to create a directory named Nix. You may see something similar to what's shown in Figure 1.
There are binary packages available for Nix, but we are looking at a new package manager, so using another package manager to install it is bad form (though you can, if you want to). If you are running a distro with no binary packages, or are running Darwin or OpenBSD, you have the option of installing it from source. To set the environment variables right, use the following command:

. ~/.nix-profile/etc/profile.d/nix.sh

Usage
Now that we have Nix installed, let's use it for further testing. To see a list of installable packages, run the following:

nix-env -qa

This will list the installable packages. To search for a specific package, pipe the output of the previous command to grep with the name of the target package as the argument. Let's search for Ruby, with the following command:

nix-env -qa | grep ruby

It informs us that there are three versions of Ruby available.
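The search pipeline above is ordinary shell plumbing; if nix-env is not installed, the same filtering idea can be tried against a canned list. In this sketch, all the package names except ruby-2.0.0-p353 are made up for illustration:

```shell
# Simulate 'nix-env -qa | grep ruby' against a static package list.
cat > pkgs.txt <<'EOF'
python-2.7.8
ruby-1.8.7-p374
ruby-1.9.3-p484
ruby-2.0.0-p353
vim-7.4
EOF
grep ruby pkgs.txt   # three matches, mirroring the three Ruby versions
```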



Let's install Ruby 2.0. There are two ways to install a package, since packages can be referred to by two identifiers. The first one is the name of the package, which might not be unique, and the second is the attribute set value. As our search for the various Ruby versions showed that the name of the package for Ruby 2.0 is ruby-2.0.0-p353, let's try to install it, as follows:

nix-env -i ruby-2.0.0-p353

It gives the following error as the output:

error: unable to fork: Cannot allocate memory
nix-env: src/libutil/util.cc:766: int nix::Pid::wait(bool): Assertion `pid != -1' failed.
Aborted (core dumped)

Figure 1: Nix installation

Figure 2: Nix search result
Figure 3: Package and attribute usage

As per the Nix wiki, the name of the package might not be unique and may yield an error with some packages. So we could try things out with the attribute set value. For Ruby 2.0, the attribute set value is nixpkgs.ruby2, and it can be used with the following command:
nix-env -iA nixpkgs.ruby2
This worked. Notice the use of the -iA flag when using the attribute set value. I talked to Nix developer Domen Kožar about this and he said, "Multiple packages may share the same name and version; that's why using attribute sets is a better idea, since it guarantees uniqueness. This is some kind of a downside of Nix, but this is how it functions :)"
To see the attribute name and the package name, use the following command:

nix-env -qaP | grep package_name

In the case of Ruby, I replaced package_name with ruby2 and it yielded:

nixpkgs.ruby2        ruby-2.0.0-p353

To update a specific package and all its dependencies, use:

nix-env -uA nixpkgs.package_attribute_name

To update all the installed packages, use:

nix-env -u

To uninstall a package, use:

nix-env -e package_name

In my case, while using Ruby 2.0, I replaced it with ruby-2.0.0-p353, which was the package name and not the attribute name.
Well, that's just the tip of the iceberg. To learn more, refer to the Nix manual at http://nixos.org/nix/manual. There is also a distro named NixOS, which uses Nix for both configuration and package management.

References
[1] https://www.domenkozar.com/2014/01/02/getting-started-with-nix-package-manager/
[2] http://nixos.org/nix/manual/
[3] http://nixer.ghost.io/why/ - To convince yourself to use Nix

By: Jatin Dhankhar
The author is a C++ lover and a Rubyist. His areas of interest include robotics, programming and Web development. He can be reached at jatin@jatindhankhar.in.




Solve Engineering Problems with Laplace Transforms
Laplace transforms are integral mathematical transforms widely used in physics and
engineering. In this 21st article in the series on mathematics in open source, the author
demonstrates Laplace transforms through Maxima.

In higher mathematics, transforms play an important role. A transform is mathematical logic to transform or convert a mathematical expression into another mathematical expression, typically from one domain to another. Laplace and Fourier are two very common examples, transforming from the time domain to the frequency domain. In general, such transforms have their corresponding inverse transforms, and this combination of direct and inverse transforms is very powerful in solving many real-life engineering problems. The focus of this article is the Laplace transform and its inverse, along with some problem-solving insights.

The Laplace transform
Mathematically, the Laplace transform F(s) of a function f(t) is defined as follows:

F(s) = integrate(f(t) * %e^(-s*t), t, 0, inf)

where t represents time and s represents complex angular frequency.
To demonstrate it, let's take a simple example of f(t) = 1. Substituting and integrating, we get F(s) = 1/s. Maxima has the function laplace() to do the same. In fact, with that, we can choose to let our variables t and s be anything else as well. But, as per our mathematical notations, preserving them as t and s would be the most appropriate. Let's start with some basic Laplace transforms. (Note that string() has been used to just flatten the expression.)

$ maxima -q
(%i1) string(laplace(1, t, s));
(%o1) 1/s
(%i2) string(laplace(t, t, s));
(%o2) 1/s^2
(%i3) string(laplace(t^2, t, s));
(%o3) 2/s^3
(%i4) string(laplace(t+1, t, s));
(%o4) 1/s+1/s^2
(%i5) string(laplace(t^n, t, s));
Is n + 1 positive, negative, or zero?

p; /* Our input */
(%o5) gamma(n+1)*s^(-n-1)
(%i6) string(laplace(t^n, t, s));
Is n + 1 positive, negative, or zero?

n; /* Our input */
(%o6) gamma_incomplete(n+1,0)*s^(-n-1)
(%i7) string(laplace(t^n, t, s));
Is n + 1 positive, negative, or zero?

z; /* Our input, making it non-solvable */
(%o7) laplace(t^n,t,s)
(%i8) string(laplace(1/t, t, s)); /* Non-solvable */
(%o8) laplace(1/t,t,s)
(%i9) string(laplace(1/t^2, t, s)); /* Non-solvable */
(%o9) laplace(1/t^2,t,s)
(%i10) quit();

In the above examples, the expression is preserved as is, in case of non-solvability.
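The very first result above, (%o1) 1/s, can also be verified by hand from the definition:

```latex
F(s) = \int_0^{\infty} e^{-st}\cdot 1 \, dt
     = \left[ -\frac{e^{-st}}{s} \right]_0^{\infty}
     = \frac{1}{s}, \qquad \Re(s) > 0 .
```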



laplace() is designed to understand various symbolic functions, such as sin(), cos(), sinh(), cosh(), log(), exp(), delta() and erf(). delta() is the Dirac delta function, and erf() is the error function; the others are the usual mathematical functions.

$ maxima -q
(%i1) string(laplace(sin(t), t, s));
(%o1) 1/(s^2+1)
(%i2) string(laplace(sin(w*t), t, s));
(%o2) w/(w^2+s^2)
(%i3) string(laplace(cos(t), t, s));
(%o3) s/(s^2+1)
(%i4) string(laplace(cos(w*t), t, s));
(%o4) s/(w^2+s^2)
(%i5) string(laplace(sinh(t), t, s));
(%o5) 1/(s^2-1)
(%i6) string(laplace(sinh(w*t), t, s));
(%o6) -w/(w^2-s^2)
(%i7) string(laplace(cosh(t), t, s));
(%o7) s/(s^2-1)
(%i8) string(laplace(cosh(w*t), t, s));
(%o8) -s/(w^2-s^2)
(%i9) string(laplace(log(t), t, s));
(%o9) (-log(s)-%gamma)/s
(%i10) string(laplace(exp(t), t, s));
(%o10) 1/(s-1)
(%i11) string(laplace(delta(t), t, s));
(%o11) 1
(%i12) string(laplace(erf(t), t, s));
(%o12) %e^(s^2/4)*(1-erf(s/2))/s
(%i13) quit();

Interpreting the transform
A Laplace transform is typically a fractional expression consisting of a numerator and a denominator. Solving the denominator, by equating it to zero, gives the various complex frequencies associated with the original function. These are called the poles of the function. For example, the Laplace transform of sin(w * t) is w/(s^2 + w^2), where the denominator is s^2 + w^2. Equating that to zero and solving it gives the complex frequencies s = +iw, -iw, thus indicating that the frequency of the original expression sin(w * t) is w, which indeed it is. Here are a few demonstrations of the same:

$ maxima -q
(%i1) string(laplace(sin(w*t), t, s));
(%o1) w/(w^2+s^2)
(%i2) string(denom(laplace(sin(w*t), t, s))); /* The Denominator */
(%o2) w^2+s^2
(%i3) string(solve(denom(laplace(sin(w*t), t, s)), s)); /* The Poles */
(%o3) [s = -%i*w,s = %i*w]
(%i4) string(solve(denom(laplace(sinh(w*t), t, s)), s));
(%o4) [s = -w,s = w]
(%i5) string(solve(denom(laplace(cos(w*t), t, s)), s));
(%o5) [s = -%i*w,s = %i*w]
(%i6) string(solve(denom(laplace(cosh(w*t), t, s)), s));
(%o6) [s = -w,s = w]
(%i7) string(solve(denom(laplace(exp(w*t), t, s)), s));
(%o7) [s = w]
(%i8) string(solve(denom(laplace(log(w*t), t, s)), s));
(%o8) [s = 0]
(%i9) string(solve(denom(laplace(delta(w*t), t, s)), s));
(%o9) []
(%i10) string(solve(denom(laplace(erf(w*t), t, s)), s));
(%o10) [s = 0]
(%i11) quit();

Involved Laplace transforms
laplace() also understands derivative() / diff(), integrate(), sum(), and ilt() - the inverse Laplace transform. Here are some interesting transforms showing the same:

$ maxima -q
(%i1) laplace(f(t), t, s);
(%o1) laplace(f(t), t, s)
(%i2) string(laplace(derivative(f(t), t), t, s));
(%o2) s*laplace(f(t),t,s)-f(0)
(%i3) string(laplace(integrate(f(x), x, 0, t), t, s));
(%o3) laplace(f(t),t,s)/s
(%i4) string(laplace(derivative(sin(t), t), t, s));
(%o4) s/(s^2+1)
(%i5) string(laplace(integrate(sin(t), t), t, s));
(%o5) -s/(s^2+1)
(%i6) string(sum(t^i, i, 0, 5));
(%o6) t^5+t^4+t^3+t^2+t+1
(%i7) string(laplace(sum(t^i, i, 0, 5), t, s));
(%o7) 1/s+1/s^2+2/s^3+6/s^4+24/s^5+120/s^6
(%i8) string(laplace(ilt(1/s, s, t), t, s));
(%o8) 1/s
(%i9) quit();

Note the usage of ilt() - the inverse Laplace transform - in %i8 of the above example. Calling laplace() and ilt() one after the other cancels their effect; that is what is meant by inverse. Let's look into some common inverse Laplace transforms.

Inverse Laplace transforms

$ maxima -q
(%i1) string(ilt(1/s, s, t));
(%o1) 1
(%i2) string(ilt(1/s^2, s, t));
(%o2) t
(%i3) string(ilt(1/s^3, s, t));
(%o3) t^2/2
(%i4) string(ilt(1/s^4, s, t));


(%o4) t^3/6
(%i5) string(ilt(1/s^5, s, t));
(%o5) t^4/24
(%i6) string(ilt(1/s^10, s, t));
(%o6) t^9/362880
(%i7) string(ilt(1/s^100, s, t));
(%o7) t^99/933262154439441526816992388562667004907159682643816214685929638952175999932299156089414639761565182862536979208272237582511852109168640000000000000000000000
(%i8) string(ilt(1/(s-a), s, t));
(%o8) %e^(a*t)
(%i9) string(ilt(1/(s^2-a^2), s, t));
(%o9) %e^(a*t)/(2*a)-%e^-(a*t)/(2*a)
(%i10) string(ilt(s/(s^2-a^2), s, t));
(%o10) %e^(a*t)/2+%e^-(a*t)/2
(%i11) string(ilt(1/(s^2+a^2), s, t));
Is a zero or nonzero?

n; /* Our input */
(%o11) sin(a*t)/a
(%i12) string(ilt(s/(s^2+a^2), s, t));
Is a zero or nonzero?

n; /* Our input */
(%o12) cos(a*t)
(%i13) assume(a < 0) or assume(a > 0)$
(%i14) string(ilt(1/(s^2+a^2), s, t));
(%o14) sin(a*t)/a
(%i15) string(ilt(s/(s^2+a^2), s, t));
(%o15) cos(a*t)
(%i16) string(ilt((s^2+s+1)/(s^3+s^2+s+1), s, t));
(%o16) sin(t)/2+cos(t)/2+%e^-t/2
(%i17) string(laplace(sin(t)/2+cos(t)/2+%e^-t/2, t, s));
(%o17) s/(2*(s^2+1))+1/(2*(s^2+1))+1/(2*(s+1))
(%i18) string(rat(laplace(sin(t)/2+cos(t)/2+%e^-t/2, t, s)));
(%o18) (s^2+s+1)/(s^3+s^2+s+1)
(%i19) quit();

Observe that if we take the Laplace transform of the above %o outputs, they give back the expressions that were input to ilt() at the corresponding %i's. %i18 specifically shows one such example: it does laplace() of the output at %o16, giving back the expression that was input to ilt() at %i16.

Solving differential and integral equations
Now, with these insights, we can easily solve many interesting and otherwise complex problems. One of them is solving differential equations. Let's explore a simple example of solving f'(t) + f(t) = e^t, where f(0) = 0. First, let's take the Laplace transform of the equation. Then substitute the value for f(0), and simplify to obtain the Laplace transform of f(t), i.e., F(s). Finally, compute the inverse Laplace transform of F(s) to get the solution for f(t).

$ maxima -q
(%i1) string(laplace(diff(f(t), t) + f(t) = exp(t), t, s));
(%o1) s*laplace(f(t),t,s)+laplace(f(t),t,s)-f(0) = 1/(s-1)

Substituting f(0) as 0, and then simplifying, we get laplace(f(t),t,s) = 1/((s-1)*(s+1)), for which we do an inverse Laplace transform:

(%i2) string(ilt(1/((s-1)*(s+1)), s, t));
(%o2) %e^t/2-%e^-t/2
(%i3) quit();

That gives us f(t) = (e^t - e^-t)/2, i.e., sinh(t), which definitely satisfies the given differential equation.
Similarly, we can solve equations with integrals; and not just integrals, but also equations with both differentials and integrals. Such equations come up very often when solving problems linked to electrical circuits with resistors, capacitors and inductors. Let's look at a simple example that demonstrates this. Assume we have a 1 ohm resistor, a 1 farad capacitor and a 1 henry inductor in series, being powered by a sinusoidal voltage source of frequency w. What would be the current in the circuit, assuming it to be zero at t = 0? It yields the following equation: R * i(t) + (1/C) * integrate(i(t), t) + L * di(t)/dt = sin(w*t), where R = 1, C = 1, L = 1.
So, the equation can be simplified to i(t) + integrate(i(t), t) + di(t)/dt = sin(w*t). Now, following the procedure described above, let's carry out the following steps:

$ maxima -q
(%i1) string(laplace(i(t) + integrate(i(x), x, 0, t) + diff(i(t), t) = sin(w*t), t, s));
(%o1) s*laplace(i(t),t,s)+laplace(i(t),t,s)/s+laplace(i(t),t,s)-i(0) = w/(w^2+s^2)

Substituting i(0) as 0, and simplifying, we get laplace(i(t), t, s) = w/((w^2+s^2)*(s+1/s+1)). Solving that by an inverse Laplace transform, we very easily get the complex expression for i(t), as follows:

(%i2) string(ilt(w/((w^2+s^2)*(s+1/s+1)), s, t));
Is w zero or nonzero?

n; /* Our input: Non-zero frequency */
(%o2) w^2*sin(t*w)/(w^4-w^2+1)-(w^3-w)*cos(t*w)/(w^4-w^2+1)+%e^-(t/2)*(sin(sqrt(3)*t/2)*(-(w^3-w)/(w^4-w^2+1)-2*w/(w^4-w^2+1))/sqrt(3)+cos(sqrt(3)*t/2)*(w^3-w)/(w^4-w^2+1))
(%i3) quit();

By: Anil Kumar Pugalia
The author is a gold medallist from NIT Warangal and IISc Bangalore, and is also a hobbyist in open source hardware and software, with a passion for mathematics. Learn more about him and his experiments at http://sysplay.in. He can be reached at email@sarika-pugs.com.




Your Shell with Zsh and Oh-My-Zsh

Discover the Z shell, a powerful scripting language which is designed for interactive use.

Z shell (zsh) is a powerful interactive login shell
and command interpreter for shell scripting. A big
improvement over older shells, it has a lot of new
features and the support of the Oh-My-Zsh framework that
makes using the terminal fun.
Released in 1990, the zsh shell is fairly new compared
to its older counterpart, the bash shell. Although more than
a decade has passed since its release, it is still very popular
among programmers and developers who use the command-
line interface on a daily basis.

Why zsh is better than the rest


Most of what is mentioned below can probably be
implemented or configured in the bash shell as well; however,
it is much more powerful in the zsh shell.

Advanced tab completion


Tab completion in zsh supports the command line options for the auto completion of commands. Pressing the tab key twice enables the auto complete mode, and you can cycle through the options using the tab key.
You can also move through the files in a directory with the tab key.
Zsh has tab completion for the path of directories or files in the command line too.
Another great feature is that you can switch paths by using 1 to switch to the previous path, 2 to switch to the path before that, and so on.

Real time highlighting and themeable prompts
To include real time highlighting, clone the zsh-syntax-highlighting repository from GitHub (https://github.com/zsh-users/zsh-syntax-highlighting). This makes the command line look stunning. In some terminals, existing commands are highlighted in green and those typed incorrectly are highlighted in red. Also, quoted text is highlighted in yellow. All this can be configured further according to your needs.
Prompts in zsh can be customised to be right-aligned, left-aligned or multi-lined.

Globbing
Wikipedia defines globbing as follows: "In computer programming, in particular in a UNIX-like environment, the term globbing is sometimes used to refer to pattern matching based on wildcard characters." Shells before zsh also offered globbing; however, zsh offers extended globbing. Extra features can be enabled if the EXTENDEDGLOB option is set.



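Options such as EXTENDEDGLOB are usually enabled permanently from ~/.zshrc. The sketch below writes an example fragment to a scratch file rather than to your real configuration; both option names are genuine zsh options, though pairing AUTO_PUSHD with the path-switching feature described earlier is an assumption:

```shell
# Write an example ~/.zshrc fragment to a scratch file for inspection.
cat > zshrc.example <<'EOF'
setopt EXTENDEDGLOB   # enable the extended glob operators (^, ~, <x-y>)
setopt AUTO_PUSHD     # keep a directory stack so old paths can be revisited
EOF
cat zshrc.example
```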

Figure 1: Tab completion for command options
Figure 2: Tab completion for files

Here are some examples of the extended globbing offered by zsh. The ^ character is used to negate any pattern following it.

setopt EXTENDEDGLOB   # Enables extended globbing in zsh.
ls *(.)               # Displays all regular files.
ls -d ^*.c            # Displays all directories and files that are not .c files.
ls -d ^*.*            # Displays directories and files that have no extension.
ls -d ^file           # Displays everything in the directory except the file called file.
ls -d *.^c            # Displays files with extensions, except .c files.

An expression of the form <x-y> matches a range of integers. Also, files can be grouped in the search pattern:

% ls (foo|bar).*
bar.o foo.c foo.o
% ls *.(c|o|pro)
bar.o file.pro foo.c foo.o main.o q.c

To exclude a certain file from the search, the ~ character can be used:

% ls *.c
foo.c foob.c bar.c
% ls *.c~bar.c
foo.c foob.c
% ls *.c~f*
bar.c

These and several more extended globbing features can help immensely while working through large directories.

Case insensitive matching
Zsh supports pattern matching that is independent of whether the letters of the alphabet are upper or lower case. Zsh first surfs through the directory to find a match and, if one does not exist, it carries out a case insensitive search for the file or directory.

Sharing of command history among running shells
Running shells share command history, thereby eradicating the difficulty of having to remember the commands you typed earlier in another shell.

Aliases
Aliases are used to abbreviate commands and command options that are used very often, or a combination of commands. Most other shells have aliases, but zsh supports global aliases. These are aliases that are substituted anywhere in the line. Global aliases can be used to abbreviate frequently-typed usernames, hostnames, etc. Here are some examples of aliases:

alias -g mr='rm'
alias -g TL='| tail -10'
alias -g NUL='> /dev/null 2>&1'

Installing zsh
To install zsh in Ubuntu or Debian-based distros, type the following:

sudo apt-get update && sudo apt-get install zsh   # install zsh
chsh -s /bin/zsh                                  # to make zsh your default shell
finger yoda | grep zsh                            # check a user's login shell (here, yoda's)

To install it on SUSE-based distros, type:

sudo zypper install zsh

Configuring zsh
The .zshrc file looks something like what is shown in Figure 4. Add your own aliases for commands you use frequently.

Customising zsh with Oh-My-Zsh
Oh-My-Zsh is an open source, community-driven framework for managing the zsh configuration. Although zsh is powerful in comparison to other shells, its main attraction is the themes, plugins and other features that come with it.

To install Oh-My-Zsh, you need to clone the Oh-My-Zsh repository from GitHub (https://github.com/robbyrussell/oh-my-zsh). A wide range of themes is available, so there is something for everybody. To clone the repository from GitHub, use the following command. This installs Oh-My-Zsh in ~/.oh-my-zsh (a hidden directory in your home directory). The default path can be changed by setting the environment variable for zsh using export ZSH=/your/path

git clone https://github.com/robbyrussell/oh-my-zsh.git

To install Oh-My-Zsh via curl, type:

curl -L http://install.ohmyz.sh | sh
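After installing and running chsh, you can confirm that the default shell really changed by reading the account database; the sketch below uses getent (an alternative to the finger check, and not from the article) and inspects whichever user runs it:

```shell
# Print the login shell recorded for the current user.
# getent works whether accounts live in /etc/passwd or elsewhere (NIS, LDAP).
user=$(id -un)
login_shell=$(getent passwd "$user" | cut -d: -f7)
echo "$login_shell"
```

If chsh succeeded, the printed path ends in zsh after your next login.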



Figure 3: View previous paths
Figure 4: ~/.zshrc file
Figure 5: Setting aliases in ~/.zshrc file

To install it via wget, type:

wget --no-check-certificate http://install.ohmyz.sh -O - | sh

To customise zsh, create a new zsh configuration, i.e., a ~/.zshrc file, by copying any of the existing templates provided:

cp ~/.oh-my-zsh/templates/zshrc.zsh-template ~/.zshrc

Restart your zsh terminal to view the changes.

Plugins
To check out the numerous plugins offered in Oh-My-Zsh, you can go to the plugins directory in ~/.oh-my-zsh. To enable these plugins, add them to the ~/.zshrc file and then source it:

cd ~/.oh-my-zsh
vim ~/.zshrc
source ~/.zshrc

If you want to install some plugin that is not present in the plugins directory, you can clone the plugin from GitHub or install it using wget or curl, and then source the plugin.

Themes
To view the themes in zsh, go to the themes/ directory. To change your theme, set ZSH_THEME in ~/.zshrc to the theme you desire and then source Oh-My-Zsh. If you do not want any theme enabled, set ZSH_THEME="". If you can't decide on a theme, you can set ZSH_THEME="random". This will change the theme every time you open a shell, and you can decide upon the one that you find most suitable for your needs.

To make your own theme, copy any one of the existing themes from the themes/ directory to a new file with a .zsh-theme extension and make your changes to that. A customised theme is shown in Figure 6. Here, the user name, represented by %n, has been set to the colour green and the computer name, represented by %m, has been set to the colour cyan. This is followed by the path, represented by %d. The prompt variable then looks like this:

PROMPT='$fg[green]%n $fg[red]at $fg[cyan]%m--->$fg[yellow]%d:'

The prompt can be changed to incorporate spacing, Git states, battery charge, etc, by declaring functions that do the same. For example, instead of printing the entire path including /home/darshana, we can define a function such that if PWD detects $HOME, it replaces it with ~:

function get_pwd() {
  echo ${PWD/$HOME/~}
}

To view the status of the current Git repository, the following code can be used:

function git_prompt_info() {



For U & Me Open Strategy

Panasonic Looks to Engage with Developers in India!

Panasonic entered the Indian smartphone market last year. In just one year, the company has assessed the potential of the market and has found that it could make India the headquarters for its smartphone division. But this cannot happen without that bit of extra effort from the company. While Panasonic is banking big on India's favourite operating system, Android, it is also leaving no stone unturned to provide a unique user experience on its devices. Diksha P Gupta from Open Source For You spoke to Pankaj Rana, head, smartphones and tablets, Panasonic India Pvt Ltd, to get a clearer picture of the company's growth plans. Excerpts:

Pankaj Rana, head, smartphones and tablets, Panasonic India Pvt Ltd

Panasonic is all set to launch 15 smartphones and eight feature phones in India this year. While the company will keep its focus on the smartphone segment, it has no plans of losing its feature phone lovers, as Panasonic believes that there is still scope for the latter in the Indian market. That said, Panasonic will invest more energy in grabbing what it hopes will be a 5 per cent share in the Indian smartphone market. And that will happen with the help of Android. Speaking about the strategy, Pankaj Rana, head, smartphones and tablets, Panasonic India Pvt Ltd, says, "We are banking on Android purely because it provides the choice of customisation. Based on this ability of Android, we have created a very different user experience for Panasonic smartphones." What Rana is referring to here is the single fit-home UI launched by Panasonic. He explains, "While we have provided the standard Android UI in the feature phones, the highly-efficient fit-home UI is available on Panasonic smartphones. When working on the standard Android UI, users need to use both hands to perform any task. However, the fit-home UI allows single-hand operations, making it easy for the user to function."

Yet another feature of the UI is that it can be operated in the landscape mode. Rana claims that many phones do not allow the use of various functions like settings, et al, in the landscape mode. He says, "We have kept the comfort of the users as our top priority and, hence, designed the UI in such a way that it offers a tablet-like experience as well. The Panasonic Eluga is a 12.7cm (5-inch) phone. This kind of a UI will be a great advantage on big screen devices." For users of feature phones who are migrating to smartphones now, this kind of UI makes the transition easier.

Coming soon: An exclusive Panasonic app store
Well, if you thought the unique user experience was the end of the show, hold on. There's more coming. The company plans to leave no stone unturned when it comes to making its Android experience complete for the Indian region. Rana reveals, "We are planning to come up with a Panasonic exclusive app store, which should come into existence in the next 3-4 months."




When it comes to the development for this app store, Panasonic will look at hiring in-house developers, as well as associate with third party developers. Rana says, "We will look at all possible ways to make our app ecosystem an enriched one. Just for the record, this UI has been built within the company, with engineers from various facilities including India, Japan and Vietnam. For the exclusive app store that we are planning to build, we will have some third-party developers. But besides that, we plan to develop our in-house team as well. Right now, we have about 25 software engineers working with us in India, who are from Japan. We also have some Vietnamese resources working for us."

The company plans to do the hiring for the in-house team within the next six months. The team may comprise about 100 people. Rana clarifies that the developers hired in India are going to be based in Gurgaon, Bengaluru and Hyderabad. He says, "We already have about 20 developers in Bengaluru, who are on third party rolls. We are in the process of switching them to the company's rolls over the next couple of months. Similarly, we have about 10 developers in Gurgaon. In addition, our R&D team in Vietnam has 70 members. We are also planning to shift the Vietnam operations to India, making the country our smartphone headquarters."

To take the idea of the Panasonic-exclusive app store further, the company is planning some developer engagement activities this November and December.

The consumer is the king!
While Rana asserts that Panasonic can make one of the best offerings in the smartphone world, he recognises that consumers are looking for something different every time, when it comes to these fancy devices. He says, "Right now, companies are working on the UI level to offer that newness in the experience. But six months down the line, things will not remain the same. The situation is bound to change and, to survive in this business, developers need to predict the tastes of the consumers. But for now, it is about providing an easy experience, so that the feature phone users who are looking to migrate to smartphones find it convenient enough."



TIPS & TRICKS

Convert images to PDF
Often, your scanned copies will be in an image format that you would like to convert to the PDF format. In Linux, there is an easy-to-use tool called convert that can convert any image to PDF. The following example shows you how:

$ convert scan1.jpg scan1.pdf

To convert multiple images into one PDF file, use the following command:

$ convert scan*.jpg scanned_docs.pdf

The convert tool comes with the imagemagick package. If you do not find the convert command on your system, you will need to install imagemagick.

Madhusudana Y N, madhusudanayn@gmail.com

Your own notepad
Here is a simple and fast method to create a notepad-like application that works in your Web browser. All you need is a browser that supports HTML 5 and the commands mentioned below. Open your HTML 5 supported Web browser and paste the following code in the address bar:

data:text/html, <html contenteditable>

Then use the following code:

data:text/html, <title>Text Editor</title><body contenteditable style="font-size:2rem;line-height:1.4;max-width:60rem;margin:0 auto;padding:4rem;">

And finally:

data:text/html, <style>html,body{margin: 0; padding: 0;}</style><textarea style="font-size: 1.5em; line-height: 1.5em; background: %23000; color: %233a3; width: 100%; height: 100%; border: none; outline: none; margin: 0; padding: 90px;" autofocus placeholder="wake up Neo..." />

Your Web browser-based notepad is ready.

Chintan Umarani, chintan4u@in.com

How to find a swap partition or file in Linux
Swap space can be a dedicated swap partition, a swap file, or a combination of swap partitions and swap files. To find a swap partition or file in Linux, use the following command:

swapon -s

Or:

cat /proc/swaps

The output will be something like what's shown below:

Filename     Type        Size      Used   Priority
/dev/sda5    partition   2110460   0      -1

Here, the swap is a partition and not a file.

Sharad Chhetri, admin@sharadchhetri.com

Monitoring a process in Linux
To monitor the system's performance on a per-process basis, use the following command:

pidstat -d -h -r -u -p <PID> <Delay in seconds>

This command will show the CPU utilisation, memory utilisation and IO utilisation of the process, along with the PID. Example:



pidstat -d -h -r -u -p 1234 5

where 1234 is the PID of the process to be monitored and 5 is the delay in seconds.

Prasanna Mohanasundaram, prasanna.mohanasundaram@gmail.com

Finding the number of threads a process has
By using the ps command and reading the /proc file entries, we can find the number of threads in a process. Here, the nlwp option will report the number of light weight processes (i.e., threads).

[bash]$ ps -o nlwp <PID Of Process>

[bash]$ ps -o nlwp 3415
NLWP
34

We can also obtain the same information by reading the /proc file entries:

[bash]$ cat /proc/<PID>/status | grep Threads

[bash]$ cat /proc/3415/status | grep Threads
Threads: 34

Narendra Kangralkar, narendrakangralkar@gmail.com

Find out the number of times a user has executed a command
In Linux, the command known as hash will display the number of hits, or the number of times that a particular shell command has been executed.

test@linux-erv3:~/Desktop> hash
hits    command
1       /bin/ps
2       /usr/bin/pidstat
1       /usr/bin/man
2       /usr/bin/top
test@linux-erv3:~/Desktop>

In the above output of hash, we can see that the commands pidstat and top have been used twice and the rest only once. If hash is the first command in the terminal, it will return the following as output:

test@linux-erv3:~/Desktop> hash
hash: hash table empty
test@linux-erv3:~/Desktop>

Bharanidharan Ambalavanan, abharanidharan@srec.ac.in

Searching for a specific string entirely within a particular location
If you're working on a big project with multiple files organised in various hierarchies and need to know where a specific string/word is located, the following grep command option would prove to be priceless.

grep -HRnF "foo test" <dir_loc> --include *.c

The above command will search for foo test (without the quotes) under <dir_loc> recursively, for all the files ending with the .c extension, and will print the output with the line number. Remember that foo test is case sensitive, so a search for Foo Test would return different results. Here is a sample output:

<dir_loc>/tip/src/WWW/cgi-bin/admin/file1.c:880: line beginning of foo test in sample;
<dir_loc>/tip/src/WWW/cgi-bin/admin/file1.c:1034: foo test line ending;
<dir_loc>/tip/src/WWW/cgi-bin/user/file2.c:166: partial foo test;
<dir_loc>/file3.c:176: somewhere foo testing of sample;

You can always replace <dir_loc> with . (dot) to indicate a recursive search from the current directory, and also leave out the --include parameter to search all files, or provide it as --include *.*

grep -HRnF "foo test" . --include *.*

Runcy Oommen, runcy.oommen@gmail.com

Share Your Linux Recipes!
The joy of using Linux is in finding ways to get around problems: take them head on, defeat them! We invite you to share your tips and tricks with us for publication in OSFY so that they can reach a wider audience. Your tips could be related to administration, programming, troubleshooting or general tweaking. Submit them at www.opensourceforu.com. The sender of each published tip will get a T-shirt.
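The recursive fixed-string search from the grep tip can be tried end to end in a throwaway directory; the little tree below is invented purely for the demonstration:

```shell
# Build a tiny hypothetical source tree and search it with grep.
demo_dir=$(mktemp -d)
mkdir -p "$demo_dir/src"
printf 'line with foo test here\n' > "$demo_dir/src/a.c"
printf 'no match\nfoo test again\n' > "$demo_dir/b.c"
printf 'foo test in a header\n' > "$demo_dir/c.h"

# -H file name, -R recurse, -n line number, -F fixed string (no regex);
# --include '*.c' keeps the search to C source files, so c.h is skipped.
matches=$(grep -HRnF "foo test" "$demo_dir" --include '*.c')
echo "$matches"
```

Each reported line carries the file name and line number, which is what makes this flag combination so handy in large trees.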



For U & Me Interview

"We are looking to hire people with core Android experience"

Before Xiaomi was to enter the Indian market, many assumed that this was just another Chinese smartphone coming their way. But perceptions changed after the brand entered the sub-continent. Flipkart got bombarded with orders, and Xiaomi eventually could not meet the Indian demand. There are quite a few reasons for this explosive demand, but one of the most important factors is the unique user experience that the device offers. It runs on Android, but on a different version, one that originates from the brain of Hugo Barra, vice president, Xiaomi. When he was at Google, he was pretty much instrumental in making the Android OS what it is. Currently, he is focused on offering a different taste of it at a unique price point. He has launched MIUI, an Android-based OS that he wants to see ported to devices other than Xiaomi's. For this, he needs a lot of help from the developer community. Diksha P Gupta from Open Source For You caught up with him to discuss his plans for India and how he wants to contribute to and leverage the developer ecosystem in the country. Read on...

Hugo Barra, vice president, Xiaomi



Q: What are the top features of Xiaomi MIUI that you think are lacking in other devices?
First of all, we have a dramatically simplified UI for the average person; so it feels simpler than anything else in the market right now.

Second, it is very customisable and is really appealing to our customers. We have thousands of themes that can completely change the experience, not just the wallpaper or the lock screen. From very detailed to very minimalistic designs, from cartoonist styles to cultural statements and on to other things, there is a huge list of options to choose from.

Third, I would say that there's customisation for power users as well. You can change a lot of things in the system. You can control permissions on apps, and you can decide which apps are allowed to run in the background. There is a lot you can do to fine tune the performance of your device if you choose to. For example, you can decide which apps are allowed to access the 3G network. So I can say that out of the 45 apps that I have running on my phone, the only ones that are allowed to use 3G are WhatsApp, Hike, my email and my browser. I don't want any of the other apps that are running on this phone to be allowed to access 3G at all, which I won't know about and which may use bandwidth that I am paying for. It is a very simple menu. Like checkboxes, it lets you choose the apps that you want to allow 3G access to. So if you care about how much bandwidth you are consuming and presently control that by turning 3G on and off (which people do all the time), now you can allow 3G access only to messaging apps like WhatsApp or Hike that use almost no bandwidth at all. Those are the apps that you're all the time connected to, because if someone sends you a message, you want to get it as soon as possible.

Fourth, we have added a number of features to the core apps that make them more interesting. These include in-call features that allow users to take notes during a phone call, the ability to record a phone call and a bunch of other things. So it is not the dialler app alone, but also dozens of features all around the OS, like turning on the flash light from the lock screen, having a private messaging inbox and a whole lot of other features.

Fifth, on your text inbox, you can pin a person to the top, if there is really someone who matters to you and you always want to have their messages on the top. You can decide at what time an SMS needs to be sent out. You can compose a message saying, "I want this message to go out at 7 p.m.", because maybe you're going to be asleep, for example, but still want that message to go out.

Then there are little things like, if you fire up the camera app and point the camera towards a QR code, it just automatically recognises it. You don't have to download a special app just to recognise QR codes. If you are connected to a Wi-Fi network with your Mi phone, you may want to share this Wi-Fi network with someone else. So, you just go into the Wi-Fi wing and say, "Share this connection", and the phone will then share a QR code. The person you want to share your Wi-Fi connection with can just scan this QR code and immediately get connected to the network without having to enter a password. So lots and lots of little things like that add up to a pretty delightful experience.

Q: After MIUI from Xiaomi, Panasonic has launched its own UI and Xolo has launched HIVE. So do you think the war has now shifted to the UI level?
I think it would be an injustice to say that our operating system MIUI is just another UI like Android, because it is so much more than that. We have had a team of 500 engineers working on our operating systems for the last four years, so it is not just a re-skinning of Android. It is a much, much more significant effort. I can spend five hours with you just explaining the features of MIUI. I don't think there are many companies out there that have as significant a software effort that has been on for as long a time as we have. So while I haven't looked at these operating systems that you are talking about closely, my instinct is that they are not as profoundly different and well founded as MIUI.

Q: What are your plans to reach out to the developers?
From a development perspective, first and foremost, we are very Android compliant. All of our builds, before OTA, go to Google for approval, like every other OEM out there. We are focused on Android APIs. We are not building new APIs. We believe that doing so would create fragmentation. It's kind of not ideal and goes against the ecosystem. So, from our point of view, we see developers as our early adopters. They are the people who we think are best equipped to try our products. We see developers as the first people that we take our devices to try out. That's primarily how we view the developer community.

There are some interesting twists out there as well. For instance, we are the first company to make a Tegra K1 tablet. So, already, MiPad is being used by game developers as the reference development platform for K1 game tabs. This is one of the few ways in which we get in touch with the developers and work with them.

Q: How do you want to involve the Indian developer community, considering the fact that it is one of the largest in the world?
First of all, we are looking to hire developers here. We are looking to build a software engineering team in India, and in Bengaluru, to be precise, where we are headquartered. So that is the first and the most important step for us. The second angle is MIUI. It's not an open source operating system, but it is an open operating system that is based on Android. A lot of the actual code is closed, but it's open



and is configurable. A really interesting initiative with the developer community, where we would love to get some help, is porting MIUI to all of the local devices. If someone with a Micromax or a Karbonn device wants to run MIUI on it, let them do it. So, we would like the help of the developer community for things like that. We do have some developers in the country porting our builds to different devices. So, that's something we would love the developer community to help us with.

Q: What would be the size of the engineering team and by when can we expect that team to come up?
We will start hiring people pretty much right away. As for the size, I don't know yet. I suspect that we would only be limited by our ability to hire quickly, as we need a really high count of engineers. The Bengaluru tech talent scene is incredibly competitive, and there is a shortage of talented people. So we will be working hard to recruit as fast as we can, but I don't think we will be limited by any particular quota or budget; it's how quickly we can take this up.

Q: What kind of developers are you looking to hire in India?
Mainly, Android developers. We are looking to hire people with core Android experience: software engineers who have written code below the application level. Java developers and Android developers are totally fine.

Q: You are originally from Android, but you chose to do something that is not open source, but closed-source. Any reasons for this choice?
This is something the company has thought about a lot. Managing an open source project is a very different thing from what we do today. So we thought that what we would rather do is contribute back. So, while our core UI is not open, we actually contribute a lot back, not only to Android, but to other initiatives as well. HBase is one database initiative for which our team is the No. 1 contributor.

Q: What are your thoughts on the Android One platform?
I think it's a phenomenal effort from Google. It's a very clever program. It is designed to empower members of the ecosystem, who you might label as challengers, and to help them reap real incentives. So I think it is a very, very cleverly designed program. I am very excited to see it taking off in India.

Q: Is it true that Xiaomi provides weekly updates for its devices?
We provide weekly updates for our Mi device family. We have two families: the Redmi family and the Mi family. We provide weekly updates for the devices of the Mi family on what we call the beta channel. So when you buy a Mi 3, for example, it's on what we call the stable channel, which gets updates every one or two months. But if you want to be on the beta channel and get updates every week, to be the one to try out features first, all you have to do is go to the MIUI forum and download the beta channel ROM to flash it to your device. It is a very simple thing to do, and then you are on the beta channel and will get weekly updates.

Q: So what is the idea behind such weekly updates? Are they not too frequent for even power users to adopt?
The power users take them all the time. Their take rate is incredibly high. These are people who have chosen to take these over-the-air updates. They have actively said: "I want to be on the beta channel because I want weekly updates." And the reason they do it is because they want to get access to features early, they want to provide feedback, and they like having the latest and greatest software on their devices. They want to participate, and that is why people want to join in.

Q: Google promotes stock Android usage. You also did the same when you were with Google, so forking Android is not happily accepted by Google. Given that, where do you see Android reaching eventually in this game?
From Google's perspective, and from my perspective as well, the most important thing in the Android ecosystem is for the devices to remain compatible. To me, the two most important things are: first, every Android device must be 100 per cent compatible, and second, all of Google's innovations must reach every Android user. In the approach that we have taken as an OS developer, we are checking both these boxes one hundred per cent.

Our devices, even those selling in China without any of Google's apps, are 100 per cent compatible and have always been so, which means that all of the Android APIs work perfectly. They pass all the compatibility tests, which means that you will never have apps crashing. Second, because we are pre-loading all of Google's apps (outside of China, of course, where we are allowed to preload Google's apps), we are enabling all of the innovation that Google has to reach every MIUI user. So we are checking all the boxes.

I think you need to be careful when using the word "forking", because you may be confusing yourself and your users with forks that are not compatible, forks that break away from the Android ecosystem, if you will. I'm not going to mention names, but that is unhealthy for the ecosystem. That is almost a sin, because it is going to lead to user frustration at some point, sooner or later. Apps are going to start crashing or not working. That is a bad thing to do. It's just not a good idea and obviously we will not do that. We are 100 per cent Android compatible, we are 100 per cent behind Android, and we love what the Android teams do.


