
50 UNIX / Linux Sysadmin Tutorials

by Ramesh Natarajan on December 25, 2010



Merry Christmas and Happy Holidays to all TGS Readers. To wrap up this year, I've collected 50 UNIX / Linux sysadmin related tutorials that we've posted so far. This is a lot of reading. Bookmark this article for future reference and read it whenever you get free time.

1. Disk to disk backup using dd command: dd is a powerful UNIX utility, which is used by the Linux kernel makefiles to make boot images. It can also be used to copy data. This article explains how to back up an entire hard disk and create an image of a hard disk using the dd command.

6 Examples to Backup Linux Using dd Command (Including Disk to Disk)


by Sasikala on October 11, 2010

Data loss can be costly. At the very least, critical data loss will have a financial impact on companies of all sizes. In some cases, it can cost your job. I've seen cases where sysadmins learned this the hard way. There are several ways to back up a Linux system, including rsync and rsnapshot, which we discussed a while back. This article provides 6 practical examples on using the dd command to back up a Linux system. dd is a powerful UNIX utility, which is used by the Linux kernel makefiles to make boot images. It can also be used to copy data. Only the superuser can execute the dd command.

Warning: While using the dd command, if you are not careful, and if you don't know what you are doing, you will lose your data!

Example 1. Backup Entire Harddisk


To back up an entire copy of a hard disk to another hard disk connected to the same system, execute the dd command as shown below. In this dd command example, the UNIX device name of the source hard disk is /dev/sda, and the device name of the target hard disk is /dev/sdb.
# dd if=/dev/sda of=/dev/sdb

if represents the input file, and of represents the output file. So an exact copy of /dev/sda will be available in /dev/sdb. If there are any read errors, the above command will fail. If you give the parameter conv=noerror, it will continue to copy even when there are read errors. Mention the input file and output file very carefully; if you swap the source and target devices, you might lose all your data. In the hard-drive-to-hard-drive copy using the dd command given below, the sync option pads any short or unreadable input blocks with NULs to the full block size, which keeps the copy aligned when noerror skips bad reads.
# dd if=/dev/sda of=/dev/sdb conv=noerror,sync
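The copy can be sanity-checked afterwards by comparing the source and target byte for byte. A minimal sketch of the idea using ordinary files, so it can be tried without root (source.img and target.img are hypothetical stand-ins for real devices such as /dev/sda and /dev/sdb):

```shell
# Create a small stand-in "disk" (10 KB of zeros), copy it with dd,
# then verify that the copy is byte-identical using cmp.
dd if=/dev/zero of=source.img bs=1024 count=10 2>/dev/null
dd if=source.img of=target.img conv=noerror,sync 2>/dev/null
cmp source.img target.img && echo "copy verified"
```

On real devices you would compare /dev/sda against /dev/sdb the same way, or compare their md5sum output.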

Example 2. Create an Image of a Hard Disk


Instead of taking a backup of the hard disk directly to another disk, you can create an image file of the hard disk and save it on other storage devices. There are many advantages to backing up your data to a disk image, one being ease of use. This method is typically faster than other types of backups, enabling you to quickly restore data following an unexpected catastrophe.
# dd if=/dev/hda of=~/hdadisk.img

The above command creates an image of the hard disk /dev/hda. Refer to our earlier article How to view initrd.image for more details.
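Disk images can get large; dd output can also be piped through gzip to store a compressed image, and decompressed on the way back. A sketch with an ordinary file standing in for /dev/hda (disk.img and restored.img are hypothetical names):

```shell
# Make a small stand-in "disk", store a compressed image of it,
# then restore from the compressed image and verify the roundtrip.
dd if=/dev/zero of=disk.img bs=1024 count=100 2>/dev/null
dd if=disk.img 2>/dev/null | gzip > hdadisk.img.gz
gunzip -c hdadisk.img.gz | dd of=restored.img 2>/dev/null
cmp disk.img restored.img && echo "restore matches"
```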

Example 3. Restore using Hard Disk Image


To restore a hard disk from the image file of another hard disk, use the following dd command example.
# dd if=hdadisk.img of=/dev/hdb

The file hdadisk.img is the image of /dev/hda, so the above command will restore the image of /dev/hda to /dev/hdb.

Example 4. Creating a Floppy Image


Using the dd command, you can create a copy of a floppy image very quickly. In the input file, give the floppy device location, and in the output file, give the name of your floppy image file, as shown below.
# dd if=/dev/fd0 of=myfloppy.img

Example 5. Backup a Partition


You can use the device name of a partition in the input file, and in the output, specify either your target path or an image file, as shown in the dd command example below.
# dd if=/dev/hda1 of=~/partition1.img

Example 6. CDROM Backup


The dd command also allows you to create an ISO file from a source device. So we can insert a CD and run the dd command to create an ISO file of the CD contents.
# dd if=/dev/cdrom of=tgsservice.iso bs=2048

The dd command reads one block of input, processes it, and writes it into the output file. You can specify the block size for the input and output file. In the above dd command example, the parameter bs specifies the block size for both the input and output file, so dd uses 2048 bytes as the block size in the above command.

Note: If the CD is auto-mounted, it is always good to unmount the CD device before creating an ISO image, to avoid any unnecessary access to the CD-ROM.

2. 15 rsync command examples: Every sysadmin should master the usage of rsync. The rsync utility is used to synchronize files and directories from one location to another. The first time, rsync replicates the whole content between the source and destination directories. From then on, rsync transfers only the changed blocks or bytes to the destination location, which makes the transfer really fast.

How to Backup Linux? 15 rsync Command Examples


by Sasikala on September 9, 2010

rsync stands for remote sync. rsync is used to perform backup operations in UNIX / Linux. The rsync utility synchronizes files and directories from one location to another in an efficient way. The backup location could be on a local server or on a remote server.

Important features of rsync


Speed: The first time, rsync replicates the whole content between the source and destination directories. From then on, rsync transfers only the changed blocks or bytes to the destination location, which makes the transfer really fast.

Security: rsync allows encryption of data using the ssh protocol during transfer.

Less bandwidth: rsync compresses and decompresses data block by block at the sending and receiving ends respectively, so the bandwidth used by rsync will always be less than that of other file transfer protocols.

Privileges: No special privileges are required to install and execute rsync.

Syntax
$ rsync options source destination

Source and destination could be either local or remote. In case of remote, specify the login name, remote server name and location.

Example 1. Synchronize Two Directories in a Local Server


To sync two directories in a local computer, use the following rsync -zvr command.
$ rsync -zvr /var/opt/installation/inventory/ /root/temp
building file list ... done
sva.xml
svB.xml
.
sent 26385 bytes  received 1098 bytes  54966.00 bytes/sec
total size is 44867  speedup is 1.63
$

In the above rsync example:
-z enables compression
-v verbose
-r indicates recursive

Now let us see the timestamp on one of the files that was copied from source to destination. As you see below, rsync didn't preserve timestamps during the sync.
$ ls -l /var/opt/installation/inventory/sva.xml /root/temp/sva.xml
-r--r--r-- 1 bin  bin 949 Jun 18  2009 /var/opt/installation/inventory/sva.xml
-r--r--r-- 1 root bin 949 Sep  2  2009 /root/temp/sva.xml

Example 2. Preserve timestamps during Sync using rsync -a


The rsync option -a indicates archive mode. The -a option does the following:
Recursive mode
Preserves symbolic links
Preserves permissions
Preserves timestamps
Preserves owner and group

Now, executing the same command provided in example 1 (But with the rsync option -a) as shown below:
$ rsync -azv /var/opt/installation/inventory/ /root/temp/

building file list ... done
./
sva.xml
svB.xml
.
sent 26499 bytes  received 1104 bytes  55206.00 bytes/sec
total size is 44867  speedup is 1.63
$

As you see below, rsync preserved timestamps during sync.


$ ls -l /var/opt/installation/inventory/sva.xml /root/temp/sva.xml
-r--r--r-- 1 root bin 949 Jun 18  2009 /var/opt/installation/inventory/sva.xml
-r--r--r-- 1 root bin 949 Jun 18  2009 /root/temp/sva.xml

Example 3. Synchronize Only One File


To copy only one file, specify the file name to rsync command, as shown below.
$ rsync -v /var/lib/rpm/Pubkeys /root/temp/
Pubkeys
sent 42 bytes  received 12380 bytes  3549.14 bytes/sec
total size is 12288  speedup is 0.99

Example 4. Synchronize Files From Local to Remote


rsync allows you to synchronize files/directories between the local and remote system.
$ rsync -avz /root/temp/ thegeekstuff@192.168.200.10:/home/thegeekstuff/temp/
Password:
building file list ... done
./
rpm/
rpm/Basenames
rpm/Conflictname
sent 15810261 bytes  received 412 bytes  2432411.23 bytes/sec
total size is 45305958  speedup is 2.87

While doing synchronization with a remote server, you need to specify the username and IP address of the remote server. You should also specify the destination directory on the remote server. The format is username@machinename:path. As you see above, it asks for the password while doing rsync from the local to the remote server. Sometimes you don't want to enter the password while backing up files from the local to the remote server. For example, if you have a backup shell script that copies files from local to remote using rsync, you need the ability to rsync without having to enter the password. To do that, set up ssh passwordless login as we explained earlier.

Example 5. Synchronize Files From Remote to Local


When you want to synchronize files from remote to local, specify remote path in source and local path in target as shown below.
$ rsync -avz thegeekstuff@192.168.200.10:/var/lib/rpm /root/temp
Password:
receiving file list ... done
rpm/
rpm/Basenames
.
sent 406 bytes  received 15810230 bytes  2432405.54 bytes/sec
total size is 45305958  speedup is 2.87

Example 6. Remote shell for Synchronization


rsync allows you to specify the remote shell you want to use. You can use rsync over ssh for a secure remote connection. Use rsync -e ssh to specify which remote shell to use; in this case, rsync will use ssh.
$ rsync -avz -e ssh thegeekstuff@192.168.200.10:/var/lib/rpm /root/temp
Password:
receiving file list ... done
rpm/
rpm/Basenames
sent 406 bytes  received 15810230 bytes  2432405.54 bytes/sec
total size is 45305958  speedup is 2.87

Example 7. Do Not Overwrite the Modified Files at the Destination


In a typical sync situation, if a file is modified at the destination, we might not want to overwrite it with the old file from the source. Use the rsync -u option to do exactly that (i.e., do not overwrite a file at the destination if it is modified). In the following example, the file called Basenames is already modified at the destination, so it will not be overwritten with rsync -u.
$ ls -l /root/temp/Basenames
total 39088
-rwxr-xr-x 1 root root 4096 Sep  2 11:35 Basenames

$ rsync -avzu thegeekstuff@192.168.200.10:/var/lib/rpm /root/temp
Password:
receiving file list ... done
rpm/
sent 122 bytes  received 505 bytes  114.00 bytes/sec
total size is 45305958  speedup is 72258.31

$ ls -lrt
total 39088
-rwxr-xr-x 1 root root 4096 Sep  2 11:35 Basenames

Example 8. Synchronize only the Directory Tree Structure (not the files)
Use the rsync -d option to synchronize only the directory tree from the source to the destination. The example below synchronizes only the directory tree, in a recursive manner, not the files in the directories.
$ rsync -v -d thegeekstuff@192.168.200.10:/var/lib/ .
Password:
receiving file list ... done
logrotate.status
CAM/
YaST2/
acpi/
sent 240 bytes  received 1830 bytes  318.46 bytes/sec
total size is 956  speedup is 0.46

Example 9. View the rsync Progress during Transfer


When you use rsync for backup, you might want to know the progress of the backup, i.e., how many files have been copied, at what rate it is copying each file, etc. The rsync --progress option displays detailed progress of the rsync execution as shown below.
$ rsync -avz --progress thegeekstuff@192.168.200.10:/var/lib/rpm/ /root/temp/
Password:
receiving file list ...
19 files to consider
./
Basenames
     5357568 100%   14.98MB/s    0:00:00 (xfer#1, to-check=17/19)
Conflictname
       12288 100%   35.09kB/s    0:00:00 (xfer#2, to-check=16/19)
.
.
.
sent 406 bytes  received 15810211 bytes  2108082.27 bytes/sec
total size is 45305958  speedup is 2.87

You can also use the rsnapshot utility (which uses rsync) to back up a local Linux server, or to back up a remote Linux server.

Example 10. Delete the Files Created at the Target


If a file is not present at the source, but present at the target, you might want to delete it at the target during rsync. In that case, use the --delete option as shown below. The rsync --delete option deletes files that are not present in the source directory.
# Source and target are in sync. Now creating new file at the target.
$ > new-file.txt

$ rsync -avz --delete thegeekstuff@192.168.200.10:/var/lib/rpm/ .
Password:
receiving file list ... done
deleting new-file.txt
./
sent 26 bytes  received 390 bytes  48.94 bytes/sec
total size is 45305958  speedup is 108908.55

The target had the new file new-file.txt; when synchronized with the source using the --delete option, that file was removed.

Example 11. Do not Create New File at the Target


If you like, you can update (sync) only the existing files at the target. If the source has new files that are not yet at the target, you can avoid creating them at the target. If you want this behavior, use the --existing option with the rsync command. First, add a new-file.txt at the source.
[/var/lib/rpm ]$ > new-file.txt

Next, execute the rsync from the target.


$ rsync -avz --existing root@192.168.1.2:/var/lib/rpm/ .
root@192.168.1.2's password:
receiving file list ... done
./
sent 26 bytes  received 419 bytes  46.84 bytes/sec
total size is 88551424  speedup is 198991.96

As you see in the above output, it didn't receive the new file new-file.txt.

Example 12. View the Changes Between Source and Destination


This option is useful to view the differences in files or directories between the source and destination. At the source:
$ ls -l /var/lib/rpm
-rw-r--r-- 1 root root 5357568 2010-06-24 08:57 Basenames
-rw-r--r-- 1 root root   12288 2008-05-28 22:03 Conflictname
-rw-r--r-- 1 root root 1179648 2010-06-24 08:57 Dirnames

At the destination:
$ ls -l /root/temp
-rw-r--r-- 1 root root   12288 May 28  2008 Conflictname
-rw-r--r-- 1 bin  bin  1179648 Jun 24 05:27 Dirnames
-rw-r--r-- 1 root root       0 Sep  3 06:39 Basenames

In the above example, there are two differences between the source and destination. First, the owner and group of the file Dirnames differ. Next, the size differs for the file Basenames. Now let us see how rsync displays these differences. The -i option displays the item changes.
$ rsync -avzi thegeekstuff@192.168.200.10:/var/lib/rpm/ /root/temp/
Password:
receiving file list ... done
>f.st.... Basenames
.f....og. Dirnames
sent 48 bytes  received 2182544 bytes  291012.27 bytes/sec
total size is 45305958  speedup is 20.76

In the output, rsync displays 9 characters in front of the file or directory name, indicating the changes. In our example, the characters in front of Basenames (and Dirnames) mean the following:
> specifies that a file is being transferred to the local host.
f represents that it is a file.
s represents that there are size changes.
t represents that there are timestamp changes.
o represents that the owner changed.
g represents that the group changed.

Example 13. Include and Exclude Pattern during File Transfer


rsync allows you to specify patterns for the files or directories you want to include and exclude while doing the synchronization.

$ rsync -avz --include 'P*' --exclude '*' thegeekstuff@192.168.200.10:/var/lib/rpm/ /root/temp/
Password:
receiving file list ... done
./
Packages
Providename
Provideversion
Pubkeys
sent 129 bytes  received 10286798 bytes  2285983.78 bytes/sec
total size is 32768000  speedup is 3.19

In the above example, it includes only the files or directories starting with P (using --include) and excludes all other files (using --exclude '*').

Example 14. Do Not Transfer Large Files


You can tell rsync not to transfer files that are larger than a specific size using the rsync --max-size option.
$ rsync -avz --max-size='100K' thegeekstuff@192.168.200.10:/var/lib/rpm/ /root/temp/
Password:
receiving file list ... done
./
Conflictname
Group
Installtid
Name
Sha1header
Sigmd5
Triggername
sent 252 bytes  received 123081 bytes  18974.31 bytes/sec
total size is 45305958  speedup is 367.35

--max-size=100K makes rsync transfer only the files that are less than or equal to 100K. You can use M for megabytes and G for gigabytes.
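Note that rsync interprets a bare K suffix as 1024-based, so 100K means 102400 bytes. GNU numfmt (part of coreutils) can do the same conversion as a quick check:

```shell
# iec means 1024-based suffixes (K = 1024, M = 1024^2, ...)
numfmt --from=iec 100K   # 102400
```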

Example 15. Transfer the Whole File


One of the main features of rsync is that it transfers only the changed blocks to the destination, instead of sending the whole file. If network bandwidth is not an issue for you (but CPU is), you can transfer whole files using the rsync -W option. This will speed up the rsync process, as it doesn't have to perform the checksum at the source and destination.
# rsync -avzW thegeekstuff@192.168.200.10:/var/lib/rpm/ /root/temp
Password:
receiving file list ... done
./
Basenames
Conflictname
Dirnames
Filemd5s
Group
Installtid
Name
sent 406 bytes  received 15810211 bytes  2874657.64 bytes/sec
total size is 45305958  speedup is 2.87

3. Three sysadmin rules: If you are a sysadmin, you can't (and shouldn't) break these three sysadmin rules.

Three Sysadmin Rules You Can't (And Shouldn't) Break


by Ramesh Natarajan on July 27, 2010

When I drafted this article, I actually came up with 7 sysadmin habits. But out of those 7 habits, three really stood out for me. While habits are good, sometimes rules might be even better, especially in the sysadmin world, when handling a production environment.

Rule #1: Backup Everything ( and validate the backup regularly )


Experienced sysadmins know that a production system will crash someday, no matter how proactive we are. The best way to be prepared for that situation is to have a valid backup. If you don't have a backup of your critical systems, you should start planning for it immediately. While planning for a backup, keep the following factors in mind:
What software (or custom script) would you use to take the backup?
Do you have enough disk space to keep the backups?
How often would you rotate the backups?

Apart from full backups, do you also need regular incremental backups?
How would you execute your backup, i.e., using crontab or some other scheduler?

If you don't have a backup of your critical systems, stop reading this article and get back to work. Start planning for your backup immediately.

A while back, in a research study conducted by some group (I don't remember who did it), I remember they mentioned that only 70% of production applications are getting backed up. Out of those, 30% of the backups are invalid or corrupted. Assume that Sam takes backups of his critical applications regularly, but doesn't validate his backups. However, Jack doesn't even bother to take any backup of his critical applications. It might sound like Sam, who has a backup, is in much better shape than Jack, who doesn't. In my opinion, both Sam and Jack are in the same situation, as Sam never validated his backup to make sure it can be restored when there is a disaster. If you are a sysadmin and don't want to follow this golden rule #1 (or like to break this rule), you should seriously consider quitting your sysadmin job and becoming a developer.

Rule #2: Master the Command Line ( and avoid the UI if possible )
There is not a single task on a Unix / Linux server that you cannot perform from the command line. While there are some user interfaces available to make some sysadmin tasks easy, you really don't need them and should be using the command line all the time. So, if you are a Linux sysadmin, you should master the command line. On any system, if you want to be fluent and productive, you should master the command line.

The main difference between a Windows sysadmin and a Linux sysadmin is GUI vs. command line. Windows sysadmins are not very comfortable with the command line. Linux sysadmins should be very comfortable with it. Even when you have a UI to do a certain task, you should still prefer the command line, as you will understand how a particular service works if you do it from the command line. In a lot of production server environments, sysadmins typically uninstall all GUI related services and tools. If you are a Unix / Linux sysadmin and don't want to follow this rule, probably there is a deep desire inside you to become a Windows sysadmin.

Rule #3: Automate Everything ( and become lazy )


A lazy sysadmin is the best sysadmin. There is not even a single sysadmin that I know of who likes to break this rule. That might have something to do with the lazy part. Take a few minutes to think and list out all the routine tasks that you do daily, weekly or monthly. Once you have that list, figure out how you can automate those. The best sysadmin typically doesn't like to be busy. He would rather be relaxed and let the system do the job for him.

4. User and group disk quota: This article explains how to set up user and group quotas with soft limit, hard limit and grace period. For example, if you specify 2GB as the hard limit, the user will not be able to create new files after 2GB.

5 Steps to Setup User and Group Disk Quota on UNIX / Linux


by Ramesh Natarajan on July 21, 2010

On Linux, you can set up disk quota using one of the following methods:
File system based disk quota allocation
User or group based disk quota allocation

On the user or group based quota, the following are three important factors to consider:
Hard limit: For example, if you specify 2GB as the hard limit, the user will not be able to create new files after 2GB.
Soft limit: For example, if you specify 1GB as the soft limit, the user will get a warning message "disk quota exceeded" once they reach the 1GB limit. But they'll still be able to create new files until they reach the hard limit.
Grace period: For example, if you specify 10 days as the grace period, after users exceed their soft limit, they will be allowed an additional 10 days to create new files. In that time period, they should try to get back under the quota limit.

1. Enable quota check on filesystem


First, you should specify which filesystems are enabled for quota check. Modify /etc/fstab and add the keywords usrquota and grpquota to the corresponding filesystem that you would like to monitor. The following example indicates that both user and group quota checks are enabled on the /home filesystem.
# cat /etc/fstab
LABEL=/home  /home  ext2  defaults,usrquota,grpquota  1 2

Reboot the server after the above change.

2. Initial quota check on Linux filesystem using quotacheck


Once youve enabled disk quota check on the filesystem, collect all quota information initially as shown below.

# quotacheck -avug
quotacheck: Scanning /dev/sda3 [/home] done
quotacheck: Checked 5182 directories and 31566 files
quotacheck: Old file not found.
quotacheck: Old file not found.

In the above command:
-a : Check all quota-enabled filesystems
-v : Verbose mode
-u : Check for user disk quota
-g : Check for group disk quota

The above command will create aquota files for user and group under the filesystem directory, as shown below.
# ls -l /home/
-rw------- 1 root root 11264 Jun 21 14:49 aquota.user
-rw------- 1 root root 11264 Jun 21 14:49 aquota.group

3. Assign disk quota to a user using edquota command


Use the edquota command as shown below, to edit the quota information for a specific user. For example, to change the disk quota for user ramesh, use edquota command, which will open the soft, hard limit values in an editor as shown below.
# edquota ramesh
Disk quotas for user ramesh (uid 500):
  Filesystem   blocks   soft   hard   inodes   soft   hard
  /dev/sda3   1419352      0      0     1686      0      0

Once the edquota command opens the quota settings for the specific user in an editor, you can set the following limits:
Soft and hard limits for the disk quota size for the particular user.
Soft and hard limits for the total number of inodes that are allowed for the particular user.

4. Report the disk quota usage for users and group using repquota
Use the repquota command as shown below to report the disk quota usage for the users and groups.
# repquota /home
*** Report for user quotas on device /dev/sda3
Block grace time: 7days; Inode grace time: 7days
                     Block limits                File limits
User          used    soft    hard  grace    used  soft  hard  grace
----------------------------------------------------------------------
root   --   566488       0       0           5401     0     0
nobody --     1448       0       0             30     0     0
ramesh --  1419352       0       0           1686     0     0
john   --    26604       0       0            172     0     0

5. Add quotacheck to daily cron job


Add quotacheck to the daily cron jobs. Create a quotacheck file as shown below under the /etc/cron.daily directory, which will run the quotacheck command every day. This will send the output of the quotacheck command to the root email address.


# cat /etc/cron.daily/quotacheck
quotacheck -avug

5. Troubleshoot using dmesg: Using dmesg, you can view the boot-up messages that display information about the hardware devices the kernel detects during the boot process. This can be helpful during troubleshooting.

Troubleshooting Using dmesg Command in Unix and Linux


by Balakrishnan Mariyappan on October 26, 2010

During the system bootup process, the kernel is loaded into memory and it controls the entire system. When the system boots up, it prints a number of messages on the screen that display information about the hardware devices the kernel detects during the boot process. These messages are available in the kernel ring buffer, and whenever a new message comes in, the oldest message gets overwritten. You can see all those messages after the system boots up using the dmesg command.

1. View the Boot Messages


By executing the dmesg command, you can view the hardware detected during the bootup process and its configuration details. There is a lot of useful information displayed in dmesg. Just browse through it line by line and try to understand what it means. Once you have an idea of the kind of messages it displays, you might find it helpful for troubleshooting when you encounter an issue.
# dmesg | more
Bluetooth: L2CAP ver 2.8
eth0: no IPv6 routers present
bnx2: eth0 NIC Copper Link is Down
usb 1-5.2: USB disconnect, address 5
bnx2: eth0 NIC Copper Link is Up, 100 Mbps full duplex

As we discussed earlier, you can also view hardware information using dmidecode.

2. View Available System Memory


You can also view the available memory from the dmesg messages as shown below.
# dmesg | grep Memory
Memory: 57703772k/60817408k available (2011k kernel code, 1004928k reserved, 915k data, 208k init)

3. View Ethernet Link Status (UP/DOWN)


In the example below, dmesg indicates that the eth0 link is in active state during the boot itself.

# dmesg | grep eth
eth0: Broadcom NetXtreme II BCM5709 1000Base-T (C0) PCI Express found at mem 96000000, IRQ 169, node addr e4:1f:13:62:ff:58
eth1: Broadcom NetXtreme II BCM5709 1000Base-T (C0) PCI Express found at mem 98000000, IRQ 114, node addr e4:1f:13:62:ff:5a
eth0: Link up

4. Change the dmesg Buffer Size in /boot/config- file


Linux allows you to change the default size of the dmesg buffer. The CONFIG_LOG_BUF_SHIFT parameter in the /boot/config-2.6.18-194.el5 file (or a similar file on your system) can be changed to modify the dmesg buffer size. The value is a power of 2, so the buffer size in this example would be 262144 bytes. You can modify the buffer size based on your needs (SUSE / REDHAT).
# grep CONFIG_LOG_BUF_SHIFT /boot/config-`uname -r`
CONFIG_LOG_BUF_SHIFT=18
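The buffer size follows directly from the shift value: 2^18 = 262144. The shell can confirm the arithmetic:

```shell
# CONFIG_LOG_BUF_SHIFT=18 means a ring buffer of 1 << 18 bytes
echo $(( 1 << 18 ))   # 262144
```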

5. Clear Messages in dmesg Buffer


Sometimes you might want to clear the dmesg messages before your next reboot. You can clear the dmesg buffer as shown below.
# dmesg -c
# dmesg

6. dmesg timestamp: Date and Time of Each Boot Message in dmesg


By default, dmesg messages don't have timestamps associated with them. However, Linux provides a way to see the date and time of each boot message from dmesg in the /var/log/kern.log file, as shown below. The klogd service should be enabled and configured properly to log the messages in the /var/log/kern.log file.
# dmesg | grep "L2 cache"
[    0.014681] CPU: L2 cache: 2048K

# grep "L2 cache" kern.log.1
Oct 18 23:55:40 ubuntu kernel: [    0.014681] CPU: L2 cache: 2048K

6. RPM package management examples: The 15 examples provided in this article explain everything you need to know about managing RPM packages on Red Hat based systems (including CentOS).

RPM Command: 15 Examples to Install, Uninstall, Upgrade, Query RPM Packages


by Sasikala on July 7, 2010

The rpm command is used for installing, uninstalling, upgrading, querying, listing, and checking RPM packages on your Linux system. RPM stands for Red Hat Package Manager. With root privileges, you can use the rpm command with the appropriate options to manage RPM software packages. In this article, let us review 15 practical examples of the rpm command. Let us take the RPM of the MySQL client and run through all our examples.

1. Installing a RPM package Using rpm -ivh


An RPM filename contains the package name, version, release and architecture name. For example, in the MySQL-client-3.23.57-1.i386.rpm file:
MySQL-client : Package Name
3.23.57 : Version
1 : Release
i386 : Architecture

When you install an RPM, it checks whether your system is suitable for the software the RPM package contains, figures out where to install the files located inside the RPM package, installs them on your system, and adds that piece of software to its database of installed RPM packages. The following rpm command installs the MySQL client package.
# rpm -ivh MySQL-client-3.23.57-1.i386.rpm
Preparing...                ########################################### [100%]
   1:MySQL-client           ########################################### [100%]

rpm command options:
-i : install a package
-v : verbose
-h : print hash marks as the package archive is unpacked

You can also use dpkg on Debian, pkgadd on Solaris, or depot on HP-UX to install packages.

2. Query all the RPM Packages using rpm -qa


You can use rpm command to query all the packages installed in your system.
# rpm -qa

cdrecord-2.01-10.7.el5
bluez-libs-3.7-1.1
setarch-2.0-1.1
.
.

-q : query operation
-a : queries all installed packages

To identify whether a particular rpm package is installed on your system, combine the rpm and grep commands as shown below. The following command checks whether the cdrecord package is installed on your system.
# rpm -qa | grep 'cdrecord'

3. Query a Particular RPM Package using rpm -q


The above example lists all currently installed packages. After installing a package, you can query a particular package to verify the installation, as shown below.
# rpm -q MySQL-client
MySQL-client-3.23.57-1

# rpm -q MySQL
package MySQL is not installed

Note: To query a package, you should specify the exact package name. If the package name is incorrect, then rpm command will report that the package is not installed.

4. Query RPM Packages in Various Formats using rpm --queryformat


The rpm command provides the --queryformat option, which allows you to give header tag names to list the packages. Enclose each header tag within {}.
# rpm -qa --queryformat '%{name}-%{version}-%{release} %{size}\n'
cdrecord-2.01-10.7 12324
bluez-libs-3.7-1.1 5634
setarch-2.0-1.1 235563
.
.
#

5. Which RPM package does a file belong to? Use rpm -qf
Let us say you have a list of files and you want to know which package owns all these files. The rpm command has an option to achieve this. The following example shows that the /usr/bin/mysqlaccess file is part of the MySQL-client-3.23.57-1 RPM.
# rpm -qf /usr/bin/mysqlaccess
MySQL-client-3.23.57-1

-f : file name

6. Locate documentation of a package that owns file using rpm -qdf


Use the following to list the documentation for the package that owns a file. The following command gives the location of all the manual pages related to the mysql package.
# rpm -qdf /usr/bin/mysqlaccess
/usr/share/man/man1/mysql.1.gz
/usr/share/man/man1/mysqlaccess.1.gz
/usr/share/man/man1/mysqladmin.1.gz
/usr/share/man/man1/mysqldump.1.gz
/usr/share/man/man1/mysqlshow.1.gz

-d : refers to documentation.

7. Information about Installed RPM Package using rpm -qi


rpm command provides a lot of information about an installed package using rpm -qi as shown below:
# rpm -qi MySQL-client
Name        : MySQL-client                 Relocations: (not relocatable)
Version     : 3.23.57                      Vendor: MySQL AB
Release     : 1                            Build Date: Mon 09 Jun 2003 11:08:28 PM CEST
Install Date: Mon 06 Feb 2010 03:19:16 AM PST        Build Host: build.mysql.com
Group       : Applications/Databases       Source RPM: MySQL-3.23.57-1.src.rpm
Size        : 5305109                      License: GPL / LGPL
Signature   : (none)
Packager    : Lenz Grimmer
URL         : http://www.mysql.com/
Summary     : MySQL - Client
Description :
This package contains the standard MySQL clients.

If you have an RPM file that you would like to install, but want to know more information about it before installing, you can do the following:
# rpm -qip MySQL-client-3.23.57-1.i386.rpm
Name        : MySQL-client                 Relocations: (not relocatable)
Version     : 3.23.57                      Vendor: MySQL AB
Release     : 1                            Build Date: Mon 09 Jun 2003 11:08:28 PM CEST
Install Date: (not installed)              Build Host: build.mysql.com
Group       : Applications/Databases       Source RPM: MySQL-3.23.57-1.src.rpm
Size        : 5305109                      License: GPL / LGPL
Signature   : (none)
Packager    : Lenz Grimmer
URL         : http://www.mysql.com/
Summary     : MySQL - Client
Description :
This package contains the standard MySQL clients.

-i : view information about an rpm
-p : specify a package name

8. List all the Files in a Package using rpm -qlp


To list the contents of an RPM package, use the following command, which lists the files without extracting them into the local directory.
$ rpm -qlp ovpc-2.1.10.rpm
/usr/bin/mysqlaccess
/usr/bin/mysqldata
/usr/bin/mysqlperm
.
.
/usr/bin/mysqladmin

-q : query the rpm file
-l : list the files in the package
-p : specify the package name

You can also extract files from an RPM package using rpm2cpio as we discussed earlier.

9. List the Dependency Packages using rpm -qRp


To view the list of packages on which a package depends, use rpm -qRp as shown below.
# rpm -qRp MySQL-client-3.23.57-1.i386.rpm
/bin/sh
/usr/bin/perl

10. Find out the state of files in a package using rpm -qsp
The following command finds the state (installed, replaced, or normal) of all the files in an RPM package.
# rpm -qsp MySQL-client-3.23.57-1.i386.rpm
normal /usr/bin/msql2mysql
normal /usr/bin/mysql
normal /usr/bin/mysql_find_rows
normal /usr/bin/mysqlaccess
normal /usr/bin/mysqladmin
normal /usr/bin/mysqlbinlog
normal /usr/bin/mysqlcheck
normal /usr/bin/mysqldump
normal /usr/bin/mysqlimport
normal /usr/bin/mysqlshow
normal /usr/share/man/man1/mysql.1.gz
normal /usr/share/man/man1/mysqlaccess.1.gz
normal /usr/share/man/man1/mysqladmin.1.gz
normal /usr/share/man/man1/mysqldump.1.gz
normal /usr/share/man/man1/mysqlshow.1.gz

11. Verify a Particular RPM Package using rpm -Vp


Verifying a package compares information about the installed files in the package with information about the files taken from the package metadata stored in the rpm database. In the following command, -V is for verification and -p option is used to specify a package name to verify.
# rpm -Vp MySQL-client-3.23.57-1.i386.rpm
S.5....T c /usr/bin/msql2mysql
S.5....T c /usr/bin/mysql
S.5....T c /usr/bin/mysql_find_rows
S.5....T c /usr/bin/mysqlaccess

The characters in the above output denote the following:

S file Size differs
M Mode differs (includes permissions and file type)
5 MD5 sum differs
D Device major/minor number mismatch
L readlink(2) path mismatch
U User ownership differs
G Group ownership differs
T mTime differs
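As a sketch of how this flag string can be post-processed, the following runs awk over hypothetical rpm -V style lines (sample data, not live output) to list only the config files (marked c) whose MD5 sum differs:

```shell
# Hypothetical sample of `rpm -V` output: field 1 is the flag string,
# field 2 the attribute marker (c = config file), field 3 the file path.
sample='S.5....T c /usr/bin/msql2mysql
S.......   /usr/bin/mysql
S.5....T c /etc/my.cnf'

# Keep lines where the flag string contains "5" (MD5 differs)
# and the attribute marker is "c" (config file).
changed=$(echo "$sample" | awk '$1 ~ /5/ && $2 == "c" {print $3}')
echo "$changed"
```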

12. Verify a Package Owning file using rpm -Vf


The following command verifies the package that owns the given filename.
# rpm -Vf /usr/bin/mysqlaccess
S.5....T c /usr/bin/mysql
#

13. Upgrading a RPM Package using rpm -Uvh


Upgrading a package is similar to installing one, but RPM automatically un-installs existing versions of the package before installing the new one. If an old version of the package is not found, the upgrade option will still install it.
# rpm -Uvh MySQL-client-3.23.57-1.i386.rpm
Preparing...                ########################################### [100%]
   1:MySQL-client           ###########################################

14. Uninstalling a RPM Package using rpm -e


To remove an installed rpm package, use -e as shown below. After uninstallation, you can query using rpm -qa and verify the uninstallation.
# rpm -ev MySQL-client

15. Verifying all the RPM Packages using rpm -Va


The following command verifies all the installed packages.
# rpm -Va
S.5....T c /etc/issue
S.5....T c /etc/issue.net
S.5....T c /var/service/imap/ssl/seed
S.5....T c /home/httpd/html/horde/ingo/config/backends.php
.
.
S.5....T c /home/httpd/html/horde/ingo/config/prefs.php
S.5....T c /etc/printcap

7. 10 netstat examples: Netstat command displays various network related information such as network connections, routing tables, interface statistics, masquerade connections, multicast memberships etc.,

UNIX / Linux: 10 Netstat Command Examples


by SathiyaMoorthy on March 29, 2010


Netstat command displays various network related information such as network connections, routing tables, interface statistics, masquerade connections, multicast memberships etc., In this article, let us review 10 practical unix netstat command examples.

1. List All Ports (both listening and non listening ports)


List all ports using netstat -a
# netstat -a | more
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 localhost:30037         *:*                     LISTEN
udp        0      0 *:bootpc                *:*

Active UNIX domain sockets (servers and established)
Proto RefCnt Flags       Type       State         I-Node   Path
unix  2      [ ACC ]     STREAM     LISTENING     6135     /tmp/.X11-unix/X0
unix  2      [ ACC ]     STREAM     LISTENING     5140     /var/run/acpid.socket

List all tcp ports using netstat -at


# netstat -at
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 localhost:30037         *:*                     LISTEN
tcp        0      0 localhost:ipp           *:*                     LISTEN
tcp        0      0 *:smtp                  *:*                     LISTEN
tcp6       0      0 localhost:ipp           [::]:*                  LISTEN

List all udp ports using netstat -au


# netstat -au
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
udp        0      0 *:bootpc                *:*
udp        0      0 *:49119                 *:*
udp        0      0 *:mdns                  *:*

2. List Sockets which are in Listening State


List only listening ports using netstat -l
# netstat -l
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 localhost:ipp           *:*                     LISTEN
tcp6       0      0 localhost:ipp           [::]:*                  LISTEN
udp        0      0 *:49119                 *:*

List only listening TCP Ports using netstat -lt


# netstat -lt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 localhost:30037         *:*                     LISTEN
tcp        0      0 *:smtp                  *:*                     LISTEN
tcp6       0      0 localhost:ipp           [::]:*                  LISTEN

List only listening UDP Ports using netstat -lu


# netstat -lu
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
udp        0      0 *:49119                 *:*
udp        0      0 *:mdns                  *:*

List only the listening UNIX Ports using netstat -lx


# netstat -lx
Active UNIX domain sockets (only servers)
Proto RefCnt Flags       Type       State         I-Node   Path
unix  2      [ ACC ]     STREAM     LISTENING     6294     private/maildrop
unix  2      [ ACC ]     STREAM     LISTENING     6203     public/cleanup
unix  2      [ ACC ]     STREAM     LISTENING     6302     private/ifmail
unix  2      [ ACC ]     STREAM     LISTENING     6306     private/bsmtp

3. Show the statistics for each protocol


Show statistics for all ports using netstat -s
# netstat -s
Ip:
    11150 total packets received
    1 with invalid addresses
    0 forwarded
    0 incoming packets discarded
    11149 incoming packets delivered
    11635 requests sent out
Icmp:
    0 ICMP messages received
    0 input ICMP message failed.
Tcp:
    582 active connections openings
    2 failed connection attempts
    25 connection resets received
Udp:
    1183 packets received
    4 packets to unknown port received.
.....

Show statistics for TCP (or) UDP ports using netstat -st (or) -su
# netstat -st
# netstat -su

4. Display PID and program names in netstat output using netstat -p


netstat -p option can be combined with any other netstat option. This will add the PID/Program name column to the netstat output, which is very useful while debugging to identify which program is running on a particular port.
# netstat -pt
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address             Foreign Address      State        PID/Program name
tcp        1      0 ramesh-laptop.loc:47212   192.168.185.75:www   CLOSE_WAIT   2109/firefox
tcp        0      0 ramesh-laptop.loc:52750   lax:www              ESTABLISHED  2109/firefox
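The PID/Program name column lends itself to quick summaries. As a sketch (over hypothetical sample lines rather than live netstat output), awk can pull out the unique program names that hold connections:

```shell
# Hypothetical sample of `netstat -pt` data lines; field 7 is PID/Program.
sample='tcp 1 0 laptop:47212 192.168.185.75:www CLOSE_WAIT 2109/firefox
tcp 0 0 laptop:52750 lax:www ESTABLISHED 2109/firefox'

# Split "PID/name" on the slash and keep only the program name, de-duplicated.
programs=$(echo "$sample" | awk '{split($7, a, "/"); print a[2]}' | sort -u)
echo "$programs"
```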

5. Don't resolve host, port and user name in netstat output


When you don't want the name of the host, port or user to be displayed, use the netstat -n option. This displays numbers instead of resolving the host name, port name and user name. It also speeds up the output, as netstat is not performing any look-ups.
# netstat -an

If you want only one of those three items (ports, hosts, or users) to remain unresolved, use the following commands.
# netstat -a --numeric-ports
# netstat -a --numeric-hosts
# netstat -a --numeric-users

6. Print netstat information continuously


With the -c option, netstat will print information continuously, refreshing every few seconds.
# netstat -c
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address             Foreign Address            State
tcp        0      0 ramesh-laptop.loc:36130   101-101-181-225.ama:www    ESTABLISHED
tcp        1      1 ramesh-laptop.loc:52564   101.11.169.230:www         CLOSING
tcp        0      0 ramesh-laptop.loc:43758   server-101-101-43-2:www    ESTABLISHED
tcp        1      1 ramesh-laptop.loc:42367   101.101.34.101:www         CLOSING
^C

7. Find the unsupported address families on your system


# netstat --verbose

At the end, you will have something like this.


netstat: no support for `AF IPX' on this system.
netstat: no support for `AF AX25' on this system.
netstat: no support for `AF X25' on this system.
netstat: no support for `AF NETROM' on this system.

8. Display the kernel routing information using netstat -r


# netstat -r
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
192.168.1.0     *               255.255.255.0   U         0 0          0 eth2
link-local      *               255.255.0.0     U         0 0          0 eth2
default         192.168.1.1     0.0.0.0         UG        0 0          0 eth2

Note: Use netstat -rn to display routes in numeric format without resolving host-names.

9. Find out on which port a program is running


# netstat -ap | grep ssh
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
tcp        1      0 dev-db:ssh           101.174.100.22:39213        CLOSE_WAIT  -
tcp        1      0 dev-db:ssh           101.174.100.22:57643        CLOSE_WAIT  -

Find out which process is using a particular port:


# netstat -an | grep ':80'
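A common variation is summarizing connections by state. The sketch below runs awk over hypothetical sample lines (on a real system, pipe `netstat -ant` output in instead of the variable):

```shell
# Hypothetical sample of `netstat -ant` data lines; field 6 is the TCP state.
sample='tcp 0 0 10.0.0.1:80 10.0.0.5:40001 ESTABLISHED
tcp 0 0 10.0.0.1:80 10.0.0.6:40002 ESTABLISHED
tcp 1 0 10.0.0.1:80 10.0.0.7:40003 CLOSE_WAIT'

# Count connections per state.
states=$(echo "$sample" | awk '{print $6}' | sort | uniq -c)
echo "$states"
```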

10. Show the list of network interfaces


# netstat -i
Kernel Interface table
Iface   MTU Met   RX-OK RX-ERR RX-DRP RX-OVR   TX-OK TX-ERR TX-DRP TX-OVR Flg
eth0   1500   0       0      0      0      0       0      0      0      0 BMU
eth2   1500   0   26196      0      0      0   26883      6      0      0 BMRU
lo    16436   0       4      0      0      0       4      0      0      0 LRU

Display extended information on the interfaces (similar to ifconfig) using netstat -ie:
# netstat -ie
Kernel Interface table
eth0    Link encap:Ethernet  HWaddr 00:10:40:11:11:11
        UP BROADCAST MULTICAST  MTU:1500  Metric:1
        RX packets:0 errors:0 dropped:0 overruns:0 frame:0
        TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:1000
        RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
        Memory:f6ae0000-f6b00000

8. Manage packages using apt-* commands: These 13 practical examples explain how to manage packages using apt-get, apt-cache, apt-file and dpkg commands.

How To Manage Packages Using apt-get, apt-cache, apt-file and dpkg Commands ( With 13 Practical Examples )
by Ramesh Natarajan on October 14, 2009

Debian based systems (including Ubuntu) use apt-* commands for managing packages from the command line. In this article, using the Apache 2 installation as an example, let us review how to use apt-* commands to view, install, remove, or upgrade packages.

1. apt-cache search: Search Repository Using Package Name


If you are installing Apache 2, you may guess that the package name is apache2. To verify whether it is a valid package name, search the repository for that particular package name as shown below.
$ apt-cache search ^apache2$
apache2 - Apache HTTP Server metapackage
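The ^ and $ anchors are what restrict the match to the exact name; without them, every package whose name merely contains apache2 would also match. A quick sketch of the difference, using grep over a hypothetical package list:

```shell
# Hypothetical package names, standing in for repository search results.
sample='apache2
apache2-doc
libapache2-mod-php5'

exact=$(echo "$sample" | grep '^apache2$')   # anchored: exact name only
loose=$(echo "$sample" | grep -c 'apache2')  # unanchored: every line matches
echo "$exact ($loose loose matches)"
```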

2. apt-cache search: Search Repository Using Package Description


If you don't know the exact name of the package, you can still search using the package description as shown below.
$ apt-cache search "Apache HTTP Server"
apache2 - Apache HTTP Server metapackage
apache2-doc - Apache HTTP Server documentation
apache2-mpm-event - Apache HTTP Server - event driven model
apache2-mpm-prefork - Apache HTTP Server - traditional non-threaded model
apache2-mpm-worker - Apache HTTP Server - high speed threaded model
apache2.2-common - Apache HTTP Server common files

3. apt-file search: Search Repository Using a Filename from the Package


Sometimes you may know the configuration file name (or) the executable name from the package that you would like to install. The following example shows that apache2.conf file is part of the apache2.2-common package. Search the repository with a configuration file name using apt-file command as shown below.
$ apt-file search apache2.conf
apache2.2-common: /etc/apache2/apache2.conf
apache2.2-common: /usr/share/doc/apache2.2-common/examples/apache2/apache2.conf.gz

4. apt-cache show: Basic Information About a Package


Following example displays basic information about apache2 package.
$ apt-cache show apache2
Package: apache2
Priority: optional
Maintainer: Ubuntu Core Developers
Original-Maintainer: Debian Apache Maintainers
Version: 2.2.11-2ubuntu2.3
Depends: apache2-mpm-worker (>= 2.2.11-2ubuntu2.3) | apache2-mpm-prefork (>= 2.2.11-2ubuntu2.3) | apache2-mpm-event (>= 2.2.11-2ubuntu2.3)
Filename: pool/main/a/apache2/apache2_2.2.11-2ubuntu2.3_all.deb
Size: 46350
Description: Apache HTTP Server metapackage
 The Apache Software Foundation's goal is to build a secure, efficient and
 extensible HTTP server as standards-compliant open source software.
Homepage: http://httpd.apache.org/

5. apt-cache showpkg: Detailed Information About a Package


apt-cache show displays basic information about a package. Use apt-cache showpkg to display detailed information about a package as shown below.
$ apt-cache showpkg apache2
Package: apache2
Versions:
2.2.11-2ubuntu2.3
(/var/lib/apt/lists/us.archive.ubuntu.com_ubuntu_dists_jaunty-updates_main_binary-i386_Packages)
(/var/lib/apt/lists/security.ubuntu.com_ubuntu_dists_jaunty-security_main_binary-i386_Packages)

Description Language:
                 File: /var/lib/apt/lists/us.archive.ubuntu.com_ubuntu_dists_jaunty-updates_main_binary-i386_Packages
                  MD5: d24f049cd70ccfc178dd8974e4b1ed01

Reverse Depends:
  squirrelmail,apache2
  squid3-cgi,apache2
  phpmyadmin,apache2
  mahara-apache2,apache2
  ipplan,apache2
Dependencies:
2.2.11-2ubuntu2.3 - apache2-mpm-worker (18 2.2.11-2ubuntu2.3) apache2-mpm-prefork (18 2.2.11-2ubuntu2.3) apache2-mpm-event (2 2.2.11-2ubuntu2.3)
2.2.11-2ubuntu2 - apache2-mpm-worker (18 2.2.11-2ubuntu2) apache2-mpm-prefork (18 2.2.11-2ubuntu2) apache2-mpm-event (2 2.2.11-2ubuntu2)
Provides:
2.2.11-2ubuntu2.3 -
2.2.11-2ubuntu2 -
Reverse Provides:
apache2-mpm-itk 2.2.6-02-1build4.3
apache2-mpm-worker 2.2.11-2ubuntu2.3
apache2-mpm-prefork 2.2.11-2ubuntu2.3
apache2-mpm-prefork 2.2.11-2ubuntu2
apache2-mpm-event 2.2.11-2ubuntu2

6. apt-file list: List all the Files Located Inside a Package


Use apt-file list to display all the files located inside the apache2 package as shown below.

$ apt-file list apache2 | more
apache2: /usr/share/bug/apache2/control
apache2: /usr/share/bug/apache2/script
apache2: /usr/share/doc/apache2/NEWS.Debian.gz
apache2: /usr/share/doc/apache2/README.Debian.gz
apache2: /usr/share/doc/apache2/changelog.Debian.gz
...

7. apt-cache depends: List all Dependent Packages


Before installation, if you like to view all the dependent packages, use apt-cache depends as shown below.
$ apt-cache depends apache2
apache2
 |Depends: apache2-mpm-worker
 |Depends: apache2-mpm-prefork
  Depends: apache2-mpm-event

8. dpkg -l: Is the Package Already Installed?


Before installing a package, you may want to make sure it is not already installed as shown below using dpkg -l command.
$ dpkg -l | grep -i apache

9. apt-get install: Install a Package


Finally, install the package using apt-get install as shown below.
$ sudo apt-get install apache2
[sudo] password for ramesh:
The following NEW packages will be installed:
  apache2 apache2-mpm-worker apache2-utils apache2.2-common libapr1
  libaprutil1 libpq5
0 upgraded, 7 newly installed, 0 to remove and 26 not upgraded.

10. dpkg -l : Verify Whether the Package got Successfully Installed


After installing the package, use dpkg -l to make sure it got installed successfully.
$ dpkg -l | grep apache
ii  apache2             2.2.11-2ubuntu2.3   Apache HTTP Server metapackage
ii  apache2-mpm-worker  2.2.11-2ubuntu2.3   Apache HTTP Server - high speed threaded model
ii  apache2-utils       2.2.11-2ubuntu2.3   utility programs for webservers
ii  apache2.2-common    2.2.11-2ubuntu2.3   Apache HTTP Server common files

11. apt-get remove: Delete a Package


Use apt-get purge or apt-get remove to delete a package as shown below.
$ sudo apt-get purge apache2

(or)

$ sudo apt-get remove apache2
The following packages were automatically installed and are no longer required:
  apache2-utils linux-headers-2.6.28-11 libapr1 apache2.2-common
  linux-headers-2.6.28-11-generic apache2-mpm-worker libpq5 libaprutil1
Use 'apt-get autoremove' to remove them.
The following packages will be REMOVED:
  apache2
0 upgraded, 0 newly installed, 1 to remove and 26 not upgraded.
Removing apache2 ...

apt-get remove will not delete the configuration files of the package.
apt-get purge will delete the configuration files of the package.
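The difference shows up in dpkg's status column: a package removed with apt-get remove is listed in state rc (removed, config files remain), while a purged package disappears entirely. A sketch over hypothetical dpkg -l style lines:

```shell
# Hypothetical `dpkg -l` data lines: field 1 is the state, field 2 the package.
sample='ii apache2 2.2.11-2ubuntu2.3 Apache HTTP Server metapackage
rc oldpkg 1.0 removed but configuration files remain'

# List packages that were removed but still have config files on disk.
leftover=$(echo "$sample" | awk '$1 == "rc" {print $2}')
echo "$leftover"
```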

12. apt-get -u install: Upgrade a Specific Package


The following example shows how to upgrade one specific package.
$ sudo apt-get -u install apache2
Reading package lists... Done
Building dependency tree
Reading state information... Done
apache2 is already the newest version.
The following packages were automatically installed and are no longer required:
  linux-headers-2.6.28-11 linux-headers-2.6.28-11-generic
Use 'apt-get autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 26 not upgraded.

13. apt-get -u upgrade: Upgrade all Packages


To upgrade all the packages to their latest versions, use apt-get -u upgrade as shown below.
$ sudo apt-get -u upgrade
The following packages will be upgraded:
  libglib2.0-0 libglib2.0-data libicu38 libsmbclient libwbclient0
  openoffice.org-base-core openoffice.org-calc openoffice.org-common
  openoffice.org-core openoffice.org-draw openoffice.org-emailmerge
  openoffice.org-gnome openoffice.org-gtk openoffice.org-impress
  openoffice.org-math openoffice.org-style-human openoffice.org-writer
  python-uno samba-common smbclient ttf-opensymbol tzdata
26 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

9. Modprobe command examples: modprobe utility is used to add loadable modules to the Linux kernel. You can also view and remove modules using modprobe command.

Linux modprobe Command Examples to View, Install, Remove Modules


by Balakrishnan Mariyappan on November 9, 2010

modprobe utility is used to add loadable modules to the Linux kernel. You can also view and remove modules using modprobe command. Linux maintains the /lib/modules/$(uname -r) directory for modules and their configuration files (except /etc/modprobe.conf and /etc/modprobe.d).

In Linux kernel 2.6, .ko modules are used instead of .o files, since they contain additional information that the kernel uses to load the modules. The examples in this article were done using modprobe on Ubuntu.

1. List Available Kernel Modules


modprobe -l will display all available modules as shown below.
$ modprobe -l | less
kernel/arch/x86/kernel/cpu/mcheck/mce-inject.ko
kernel/arch/x86/kernel/cpu/cpufreq/e_powersaver.ko
kernel/arch/x86/kernel/cpu/cpufreq/p4-clockmod.ko
kernel/arch/x86/kernel/msr.ko
kernel/arch/x86/kernel/cpuid.ko
kernel/arch/x86/kernel/apm.ko
kernel/arch/x86/kernel/scx200.ko
kernel/arch/x86/kernel/microcode.ko
kernel/arch/x86/crypto/aes-i586.ko
kernel/arch/x86/crypto/twofish-i586.ko

2. List Currently Loaded Modules


While the above modprobe command shows all available modules, lsmod command will display all modules that are currently loaded in the Linux kernel.
$ lsmod | less
soundcore       7264  1 snd
ppdev           6688  0
snd_page_alloc  9156  1 snd_pcm
psmouse        56180  0
lp              8964  0

3. Install New modules into Linux Kernel


In order to insert a new module into the kernel, execute the modprobe command with the module name. Following example loads vmhgfs module to Linux kernel on Ubuntu.
$ sudo modprobe vmhgfs

Once a module is loaded, verify it using lsmod command as shown below.


$ lsmod | grep vmhgfs vmhgfs 50772 0

The module files are with .ko extension. If you like to know the full file location of a specific Linux kernel module, use modprobe command and do a grep of the module name as shown below.
$ modprobe -l | grep vmhgfs
misc/vmhgfs.ko

$ cd /lib/modules/2.6.31-14-generic/misc
$ ls vmhgfs*
vmhgfs.ko

Note: You can also use insmod for installing new modules into the Linux kernel.

4. Load New Modules with the Different Name to Avoid Conflicts


In some cases, you may need to load a new module while another module is already loaded under the same name for a different purpose. If the module name you are trying to load is already in use by a different module, you can load the new module under a different name using the modprobe option -o as shown below.
$ sudo modprobe vmhgfs -o vm_hgfs

$ lsmod | grep vm_hgfs
vm_hgfs 50772 0

5. Remove the Currently Loaded Module


If you've loaded a module into the Linux kernel for some testing purpose, you might want to unload (remove) it from the kernel. Use the modprobe -r option to unload a module from the kernel as shown below.
modprobe -r vmhgfs
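modprobe -r refuses to unload a module that is still in use. The third column of lsmod output is the use count, so you can check it first; the sketch below filters hypothetical lsmod-style lines for modules whose use count is 0 and which are therefore safe to remove:

```shell
# Hypothetical `lsmod` data lines: module name, size, use count.
sample='vmhgfs 50772 0
snd_pcm 56180 3'

# Modules with a zero use count can be unloaded with `modprobe -r`.
removable=$(echo "$sample" | awk '$3 == 0 {print $1}')
echo "$removable"
```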

10. Ethtool examples: Ethtool utility is used to view and change the ethernet device parameters. These examples will explain how you can manipulate your ethernet NIC card using ethtool.

9 Linux ethtool Examples to Manipulate Ethernet Card (NIC Card)


by Balakrishnan Mariyappan on October 28, 2010

Ethtool utility is used to view and change the ethernet device parameters.

1. List Ethernet Device Properties


When you execute ethtool command with a device name, it displays the following information about the ethernet device.
# ethtool eth0
Settings for eth0:
        Supported ports: [ TP ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Supports auto-negotiation: Yes
        Advertised link modes:  10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Advertised auto-negotiation: Yes
        Speed: 100Mb/s
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 1
        Transceiver: internal
        Auto-negotiation: on
        Supports Wake-on: d
        Wake-on: d
        Link detected: yes

The above ethtool output displays ethernet card properties such as speed, wake-on, duplex and the link detection status. Following are the three types of duplex modes:

Full duplex : Enables sending and receiving of packets at the same time. This mode is used when the ethernet device is connected to a switch.
Half duplex : Enables either sending or receiving of packets at a single point of time. This mode is used when the ethernet device is connected to a hub.
Auto-negotiation : If enabled, the ethernet device itself decides whether to use full or half duplex, based on the network the device is attached to.

2. Change NIC Parameter Using ethtool Option -s autoneg


The above ethtool eth0 output shows that the Auto-negotiation parameter is enabled. You can disable it using the autoneg option of ethtool as shown below.
# ifdown eth0
eth0 device: Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
eth0 configuration: eth-bus-pci-0000:0b:00.0

# ethtool -s eth0 autoneg off

# ethtool eth0
Settings for eth0:
        Supported ports: [ TP ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Supports auto-negotiation: Yes
        Advertised link modes:  Not reported
        Advertised auto-negotiation: No
        Speed: Unknown! (65535)
        Duplex: Unknown! (255)
        Port: Twisted Pair
        PHYAD: 1
        Transceiver: internal
        Auto-negotiation: off
        Supports Wake-on: g
        Wake-on: g
        Link detected: no

# ifup eth0

After the above change, you can see that the link detection value changed to no and auto-negotiation is in the off state.

3. Change the Speed of Ethernet Device


Using ethtool you can change the speed of the ethernet device to work with certain network devices; the newly assigned speed value should be within the adapter's capacity.
# ethtool -s eth0 speed 100 autoneg off

# ethtool eth0
Settings for eth0:
        Supported ports: [ TP ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Supports auto-negotiation: Yes
        Advertised link modes:  Not reported
        Advertised auto-negotiation: No
        Speed: Unknown! (65535)
        Duplex: Unknown! (255)
        Port: Twisted Pair
        PHYAD: 1
        Transceiver: internal
        Auto-negotiation: off
        Supports Wake-on: g
        Wake-on: g
        Link detected: no

Once you change the speed when the adapter is online, it automatically goes offline, and you need to bring it back online using ifup command.
# ifup eth0
eth0 device: Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
eth0 configuration: eth-bus-pci-0000:0b:00.0
Checking for network time protocol daemon (NTPD): running

# ethtool eth0
Settings for eth0:
        Supported ports: [ TP ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Supports auto-negotiation: Yes
        Advertised link modes:  Not reported
        Advertised auto-negotiation: No
        Speed: 100Mb/s
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 1
        Transceiver: internal
        Auto-negotiation: off
        Supports Wake-on: g
        Wake-on: g
        Link detected: yes

As shown in the above output, the speed changed from 1000Mb/s to 100Mb/s and auto-negotiation parameter is unset. To change the Maximum Transmission Unit (MTU), refer to our ifconfig examples article.

4. Display Ethernet Driver Settings


ethtool -i option displays driver version, firmware version and bus details as shown below.
# ethtool -i eth0
driver: bnx2
version: 2.0.1-suse
firmware-version: 1.9.3
bus-info: 0000:04:00.0

5. Display Auto-negotiation, RX and TX of eth0


View the autonegotiation details about the specific ethernet device as shown below.
# ethtool -a eth0
Pause parameters for eth0:
Autonegotiate:  on
RX:             on
TX:             on

6. Display Network Statistics of Specific Ethernet Device


Use the ethtool -S option to display the bytes transferred, received, errors, etc., as shown below.
# ethtool -S eth0
NIC statistics:
     rx_bytes: 74356477841
     rx_error_bytes: 0
     tx_bytes: 110725861146
     tx_error_bytes: 0
     rx_ucast_packets: 104169941
     rx_mcast_packets: 138831
     rx_bcast_packets: 59543904
     tx_ucast_packets: 118118510
     tx_mcast_packets: 10137453
     tx_bcast_packets: 2221841
     tx_mac_errors: 0
     tx_carrier_errors: 0
     rx_crc_errors: 0
     rx_align_errors: 0
     tx_single_collisions: 0
     tx_multi_collisions: 0
     tx_deferred: 0
     tx_excess_collisions: 0
     tx_late_collisions: 0
     tx_total_collisions: 0
     rx_fragments: 0
     rx_jabbers: 0
     rx_undersize_packets: 0
     rx_oversize_packets: 0
     rx_64_byte_packets: 61154057
     rx_65_to_127_byte_packets: 55038726
     rx_128_to_255_byte_packets: 426962
     rx_256_to_511_byte_packets: 3573763
     rx_512_to_1023_byte_packets: 893173
     rx_1024_to_1522_byte_packets: 42765995
     rx_1523_to_9022_byte_packets: 0
     tx_64_byte_packets: 3633165
     tx_65_to_127_byte_packets: 51169838
     tx_128_to_255_byte_packets: 3812067
     tx_256_to_511_byte_packets: 113766
     tx_512_to_1023_byte_packets: 104081
     tx_1024_to_1522_byte_packets: 71644887
     tx_1523_to_9022_byte_packets: 0
     rx_xon_frames: 0
     rx_xoff_frames: 0
     tx_xon_frames: 0
     tx_xoff_frames: 0
     rx_mac_ctrl_frames: 0
     rx_filtered_packets: 14596600
     rx_discards: 0
     rx_fw_discards: 0

7. Troubleshoot the Ethernet Connection Issues


When there is a problem with the network connection, you might want to check (or change) the ethernet device parameters explained in the above examples, especially when you see the following in the ethtool output:

Speed and Duplex values are shown as Unknown
Link detected value is shown as no

Upon successful connection, these three parameters get appropriate values: Speed is assigned a known value, Duplex becomes either Full or Half, and Link detected becomes yes. After the above changes, if Link detected still says no, check whether there are any issues in the cables running between the switch and the system; you might want to dig into that aspect further. To capture and analyze packets from a specific network interface, use the tcpdump utility.
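When scripting these checks across many interfaces, the relevant fields can be pulled straight out of the ethtool output. A sketch using a hypothetical captured snippet (on a live system, substitute the output of `ethtool eth0` for the variable):

```shell
# Hypothetical snippet of `ethtool eth0` output.
sample='Speed: 100Mb/s
Duplex: Full
Link detected: yes'

# Pull the value after "Link detected:".
link=$(echo "$sample" | awk -F': ' '/Link detected/ {print $2}')
if [ "$link" = "yes" ]; then
    echo "link up"
else
    echo "check cabling and speed/duplex settings"
fi
```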

8. Identify Specific Device From Multiple Devices (Blink LED Port of NIC Card)
Let us assume that you have a machine with four ethernet adapters, and you want to identify the physical port of a particular ethernet card. (For example, eth0). Use ethtool option -p, which will make the corresponding LED of physical port to blink.
# ethtool -p eth0

9. Make Changes Permanent After Reboot


If you've changed any ethernet device parameters using ethtool, they will all disappear after the next reboot, unless you do the following. On Ubuntu, modify the /etc/network/interfaces file and add all your changes as shown below.
# vim /etc/network/interfaces post-up ethtool -s eth2 speed 1000 duplex full autoneg off

The above line should be the last line of the file. This will change speed, duplex and autoneg of the eth2 device permanently. On SUSE, modify the /etc/sysconfig/network/ifcfg-eth-id file and include a new script using the POST_UP_SCRIPT variable as shown below. Include the below line as the last line in the corresponding eth1 adapter config file.
# vim /etc/sysconfig/network/ifcfg-eth-id POST_UP_SCRIPT='eth1'

Then, create a new file scripts/eth1 as shown below under the /etc/sysconfig/network directory. Make sure that the script has execute permission and ensure that the ethtool utility is present under the /sbin directory.
# cd /etc/sysconfig/network/
# vim scripts/eth1
#!/bin/bash
/sbin/ethtool -s eth1 speed 100 duplex full autoneg off

11. NFS mount using exportfs: This is a Linux beginner's guide to NFS mount using exportfs. It explains how to export a file system to a remote machine and mount it both temporarily and permanently.

Linux Beginners Guide to NFS Mount Using Exportfs


by SathiyaMoorthy on October 13, 2010

Using NFS (Network File System), you can mount a disk partition of a remote machine as if it is a local disk. This article explains how to export a file system to a remote machine and mount it both temporarily and permanently.

1. Export File System to Remote Server using exportfs


To export a directory to a remote machine, do the following.
exportfs REMOTEIP:PATH

REMOTEIP - IP of the remote server to which you want to export.
PATH - Path of the directory that you want to export.
The colon (:) is the delimiter between the two.

2. Mount Remote Server File System as a Local Storage


To mount the remote file system on the local server, do the following.
mount REMOTEIP:PATH PATH

Explanation:
REMOTEIP - IP of the remote server which exported the file system.
PATH - The first PATH is the exported directory on the remote server; the second PATH is the local mount point.
The colon (:) is the delimiter between REMOTEIP and the remote path.

3. Unmount Remote File System


Unmount the remote file system mounted on the local server using the normal umount PATH. For more options, refer to umount command examples.

4. Unexport the File System


You can check the exported file system as shown below.
# exportfs
/publicdata webserver.pq.net

To unexport the file system, use the -u option as shown below.


# exportfs -u REMOTEIP:PATH

After unexporting, check to make sure it is not available for NFS mount as shown below.
# exportfs

5. Make NFS Export Permanent Across System Reboot


The export can be made permanent by adding that entry to the /etc/exports file.
# cat /etc/exports
/publicdata webserver.pq.net

6. Make the Mount Permanent Across Reboot


The mount can be made permanent by adding that entry to the /etc/fstab file. Note that the file system type for an NFS mount is nfs.
# cat /etc/fstab
webserver.pq.net:/publicdata /mydata nfs defaults 0 0

12.Change timezone: Depending on your Linux distribution, use one of the methods explained in this article to change the timezone on your system.

How To: 2 Methods To Change TimeZone in Linux


by Ramesh Natarajan on September 29, 2010

Question: When I installed the Linux OS, I forgot to set the proper timezone. How do I change the timezone on my Linux distribution? I use CentOS (Red Hat Linux), but can you please explain how to do this on all Linux distributions with some clear examples? Answer: Use one of the following methods to change the timezone on your Linux system. One of these methods should work for you depending on the Linux distribution you are using.

Method 1: Change TimeZone Using /etc/localtime File


For this example, assume that your current timezone is UTC as shown below. You would like to change this to Pacific Time.
# date
Mon Sep 17 22:59:24 UTC 2010

On some distributions (for example, CentOS), the timezone is controlled by the /etc/localtime file. Delete the current localtime file under the /etc/ directory.
# cd /etc
# rm localtime

All US timezones are located under the /usr/share/zoneinfo/US directory as shown below.
# ls /usr/share/zoneinfo/US/
Alaska  Aleutian  Arizona  Central  Eastern  East-Indiana  Hawaii  Indiana-Starke  Michigan  Mountain  Pacific  Samoa

Note: For other country timezones, browse the /usr/share/zoneinfo directory. Link the Pacific file from the above US directory to /etc/localtime as shown below.
# cd /etc
# ln -s /usr/share/zoneinfo/US/Pacific localtime

Now the timezone on your Linux system is changed to US Pacific time as shown below.
# date
Mon Sep 17 23:10:14 PDT 2010

Method 2: Change TimeZone Using /etc/timezone File


On some distributions (for example, Ubuntu), the timezone is controlled by /etc/timezone file. For example, your current timezone might be US Eastern time (New York) as shown below.
# cat /etc/timezone
America/New_York

To change this to US Pacific time (Los Angeles), modify the /etc/timezone file as shown below.
# vim /etc/timezone
America/Los_Angeles

You can also set the timezone for the current shell session from the command line using the TZ environment variable.
# export TZ=America/Los_Angeles
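Since the TZ variable affects only the current shell session, it is an easy way to spot-check zone names before editing /etc/timezone. A small sketch, assuming the standard tzdata zone names are installed:

```shell
# Print the timezone abbreviation that date would use under each TZ value
TZ=America/New_York date +%Z
TZ=America/Los_Angeles date +%Z
```

The first command prints EST or EDT, the second PST or PDT, depending on whether daylight saving time is in effect.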

13.Install phpMyAdmin: phpMyAdmin is a web-based tool written in PHP to manage the MySQL database. Apart from viewing the tables (and other db objects), you can perform a lot of DBA functions through the web-based interface. You can also execute any SQL query from the UI.

How To: 5 Steps to Install phpMyAdmin on Linux


by Ramesh Natarajan on September 16, 2010

Do you have a MySQL database in your environment? Did you know that the easy (and most effective) way to manage a MySQL database is using phpMyAdmin? phpMyAdmin is a web-based tool written in PHP to manage the MySQL database. Apart from viewing the tables (and other db objects), you can perform a lot of DBA functions through the web-based interface. You can also execute any SQL query from the UI. This article provides step-by-step instructions on how to install and configure phpMyAdmin on Linux distributions.

1. phpMyAdmin Prerequisites


Make sure you have PHP 5 (or above) installed.
# php -v
PHP 5.3.2 (cli) (built: May 19 2010 03:43:49)

Make sure you have MySQL 5 (or above) installed.


# mysql -V
mysql Ver 14.14 Distrib 5.1.47, for pc-linux-gnu (i686) using readline 5.1

Make sure Apache is installed and running.

PHP5 Modules: If you don't have PHP, I recommend that you install PHP from source. Following is the configure command I executed while installing PHP from source; it includes all the required PHP modules for phpMyAdmin.

14.Setup squid to control internet access: Squid is a proxy caching server. You can use squid to control internet access at work. This guide will give a jump-start on how to setup squid on Linux to restrict internet access in a network.

How To Use Squid Proxy Cache Server To Control Internet Access


by Balakrishnan Mariyappan on September 1, 2010

Squid is a proxy caching server. If you are a Linux sysadmin, you can use squid to control internet access in your work environment. This beginner's guide will give a jump-start on how to setup squid on Linux to restrict internet access in a network.

Install Squid
You should install the following three squid related packages on your system:
- squid
- squid-common
- squid-langpack

On Debian and Ubuntu, use aptitude to install squid as shown below. On CentOS, use yum to install the squid package.
$ sudo aptitude install squid

Check Configuration and Startup scripts


Apart from installing the squid related packages, the installation also creates the /etc/squid/squid.conf file and the /etc/init.d/squid startup script. By default squid runs on port 3128. You can verify this from the squid.conf file. You can also set the visible_hostname parameter in your squid.conf, which will be used in the error log. If you don't define it, squid gets the hostname value using the gethostname() function.
# vim /etc/squid/squid.conf
visible_hostname ubuntuserver
http_port 3128

Note: The http port number (3128) specified in the squid.conf should be entered in the proxy setting section in the client browser. If squid is built with SSL, you can use https_port option inside squid.conf to define https squid.

Start Squid and View Logs


Start the Squid proxy caching server as shown below.
# service squid start
squid start/running, process 11743

Squid maintains three log files (access.log, cache.log and store.log) under /var/log/squid directory. From the /var/log/squid/access.log, you can view who accessed which website at what time. Following is the format of the squid access.log record.
time elapsed remotehost code/status bytes method URL rfc931 peerstatus/peerhost

To disable logging in squid, update the squid.conf with the following information.
# to disable access.log
cache_access_log /dev/null
# to disable store.log
cache_store_log none
# to disable cache.log
cache_log /dev/null

Squid Usage 1: Restrict Access to Specific Websites


This is how you can restrict folks from browsing certain websites when they are connected to your network through your proxy server. Create a file called restricted_sites and list all the sites to which you want to restrict access.
# vim /etc/squid/restricted_sites
www.yahoo.com
mail.yahoo.com

Modify the squid.conf to add the following.


# vim /etc/squid/squid.conf
acl RestrictedSites dstdomain "/etc/squid/restricted_sites"
http_access deny RestrictedSites

Note: You can also configure squid as a transparent proxy server, which well discuss in a separate article. Also, refer to our earlier article on how to block ip-address using fail2ban and iptables.

Squid Usage 2: Allow Access to Websites Only During Specific Time


Some organizations might want to allow employees to surf or download from the internet only during specific time periods. The squid.conf configuration shown below allows internet access for employees only between 09:00 and 18:00 on weekdays. Note that squid evaluates http_access rules in order and applies the first match, so the allow rule must appear before the deny all rule.
# vim /etc/squid/squid.conf
acl official_hours time M T W H F 09:00-18:00
http_access allow official_hours
http_access deny all

Squid Usage 3 : Restrict Access to Particular Network


Instead of restricting specific sites, you can also provide access only to a certain network and block everything else. The example below allows access only from the 192.168.1.* internal network (again, the allow rule must come before the deny all rule).
# vim /etc/squid/squid.conf
acl branch_offices src 192.168.1.0/24
http_access allow branch_offices
http_access deny all
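Multiple ACLs can also be combined on a single http_access line, in which case all of them must match. A squid.conf sketch that merges the network and office-hours examples (the network and time values are the same illustrative ones used in this article):

```
acl branch_offices src 192.168.1.0/24
acl official_hours time M T W H F 09:00-18:00
http_access allow branch_offices official_hours
http_access deny all
```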

For a Linux based intrusion detection system, refer to our tripwire article.

Squid Usage 4 : Use Regular Expression to Match URLs


You can also use regular expressions to allow or deny websites. First create a blocked_sites file with a list of keywords.
# cat /etc/squid/blocked_sites
soccer
movie
www.example.com

Modify the squid.conf to block any site that has any of these keywords in its URL.
# vim /etc/squid/squid.conf
acl blocked_sites url_regex -i "/etc/squid/blocked_sites"
http_access deny blocked_sites
http_access allow all

In the above example, the -i option makes the matching case-insensitive. So, while a website is being accessed, squid tries to match its URL against the patterns mentioned in the above blocked_sites file and denies access when there is a match.

SARG Squid Analysis Report Generator


Download and install SARG to generate squid usage reports.

Use the sarg-reports command to generate reports as shown below.


# to generate the report for today
sarg-reports today
# on a daily basis
sarg-reports daily
# on a weekly basis
sarg-reports weekly
# on a monthly basis
sarg-reports monthly

Note: Add sarg-reports to the crontab. The reports generated by sarg are stored under /var/www/squid-reports. These are HTML reports that you can view from a browser.
$ ls /var/www/squid-reports
Daily  index.html
$ ls /var/www/squid-reports/Daily
2010Aug28-2010Aug28  images  index.html
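For the crontab, a sketch of an /etc/crontab entry is shown below (the /usr/sbin path to sarg-reports is an assumption; it may differ on your distribution):

```
# Generate the daily squid usage report every night at 23:59
59 23 * * * root /usr/sbin/sarg-reports daily
```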

15.Add new swap space: Use dd, mkswap and swapon commands to add swap space. You can either use a dedicated hard drive partition to add new swap space, or create a swap file on an existing filesystem and use it as swap space.

UNIX / Linux: 2 Ways to Add Swap Space Using dd, mkswap and swapon
by Ramesh Natarajan on August 18, 2010

Question: I would like to add more swap space to my Linux system. Can you explain with clear examples how to increase the swap space? Answer: You can either use a dedicated hard drive partition to add new swap space, or create a swap file on an existing filesystem and use it as swap space.

How much swap space is currently used by the system?


The free command displays the swap space; free -k shows the output in KB.
# free -k
             total       used       free     shared    buffers     cached
Mem:       3082356    2043700    1038656          0      50976    1646268
-/+ buffers/cache:     346456    2735900
Swap:      4192956          0    4192956

The swapon command with the -s option displays the current swap space in KB.
# swapon -s
Filename     Type        Size      Used   Priority
/dev/sda2    partition   4192956   0      -1

swapon -s is the same as the following.


# cat /proc/swaps
Filename     Type        Size      Used   Priority
/dev/sda2    partition   4192956   0      -1
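The same totals can also be read directly from /proc/meminfo, which is where free gets its numbers; a quick check:

```shell
# SwapTotal and SwapFree report the configured and unused swap, in kB
grep -E '^Swap(Total|Free)' /proc/meminfo
```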

Method 1: Use a Hard Drive Partition for Additional Swap Space


If you have an additional hard disk (or space available in an existing disk), create a partition using the fdisk command. Let us assume that this partition is called /dev/sdc1. Now set up this newly created partition as a swap area using the mkswap command as shown below.
# mkswap /dev/sdc1

Enable the swap partition for usage using swapon command as shown below.
# swapon /dev/sdc1

To make this swap space partition available even after the reboot, add the following line to the /etc/fstab file.
# cat /etc/fstab
/dev/sdc1 swap swap defaults 0 0

Verify whether the newly created swap area is available for your use.
# swapon -s
Filename     Type        Size      Used   Priority
/dev/sda2    partition   4192956   0      -1
/dev/sdc1    partition   1048568   0      -2

# free -k
             total       used       free     shared    buffers     cached
Mem:       3082356    3022364      59992          0      52056    2646472
-/+ buffers/cache:     323836    2758520
Swap:      5241524          0    5241524

Note: In the output of swapon -s command, the Type column will say partition if the swap space is created from a disk partition.

Method 2: Use a File for Additional Swap Space


If you don't have any additional disks, you can create a file somewhere on your filesystem, and use that file for swap space. The following dd command example creates a swap file with the name myswapfile under the /root directory with a size of 1024MB (1GB).
# dd if=/dev/zero of=/root/myswapfile bs=1M count=1024
1024+0 records in
1024+0 records out
# ls -l /root/myswapfile
-rw-r--r-- 1 root root 1073741824 Aug 14 23:47 /root/myswapfile
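The dd arithmetic here is simply bs times count (1M times 1024 = 1073741824 bytes). You can sanity-check that with a small throwaway file; the path and size below are illustrative:

```shell
# bs=1M count=4 should produce a 4 * 1048576 = 4194304 byte file
dd if=/dev/zero of=/tmp/dd_size_check bs=1M count=4 2>/dev/null
stat -c %s /tmp/dd_size_check
```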

Change the permission of the swap file so that only root can access it.
# chmod 600 /root/myswapfile

Make this file a swap file using the mkswap command.

# mkswap /root/myswapfile
Setting up swapspace version 1, size = 1073737 kB

Enable the newly created swapfile.


# swapon /root/myswapfile

To make this swap file available as a swap area even after the reboot, add the following line to the /etc/fstab file.
# cat /etc/fstab
/root/myswapfile swap swap defaults 0 0

Verify whether the newly created swap area is available for your use.
# swapon -s
Filename           Type        Size      Used   Priority
/dev/sda2          partition   4192956   0      -1
/root/myswapfile   file        1048568   0      -2

# free -k
             total       used       free     shared    buffers     cached
Mem:       3082356    3022364      59992          0      52056    2646472
-/+ buffers/cache:     323836    2758520
Swap:      5241524          0    5241524

Note: In the output of the swapon -s command, the Type column will say file if the swap space is created from a swap file. If you don't want to reboot to verify whether the system picks up all the swap space mentioned in /etc/fstab, you can do the following, which disables and re-enables all the swap areas mentioned in /etc/fstab.
# swapoff -a # swapon -a

16.Install and configure snort: Snort is a free lightweight network intrusion detection system for both UNIX and Windows. This article explains how to install snort from source, write rules, and perform basic testing.

Snort: 5 Steps to Install and Configure Snort on Linux


by SathiyaMoorthy on August 6, 2010

Snort is a free lightweight network intrusion detection system for both UNIX and Windows. In this article, let us review how to install snort from source, write rules, and perform basic testing.

1. Download and Extract Snort


Download the latest snort free version from snort website. Extract the snort source code to the /usr/src directory as shown below.
# cd /usr/src
# wget -O snort-2.8.6.1.tar.gz http://www.snort.org/downloads/116
# tar xvzf snort-2.8.6.1.tar.gz

Note: We also discussed earlier about Tripwire (Linux host based intrusion detection system) and Fail2ban (Intrusion prevention framework)

2. Install Snort
Before installing snort, make sure you have dev packages of libpcap and libpcre.
# apt-cache policy libpcap0.8-dev
libpcap0.8-dev:
  Installed: 1.0.0-2ubuntu1
  Candidate: 1.0.0-2ubuntu1
# apt-cache policy libpcre3-dev
libpcre3-dev:
  Installed: 7.8-3
  Candidate: 7.8-3

Follow the steps below to install snort.


# cd snort-2.8.6.1
# ./configure
# make
# make install

3. Verify the Snort Installation


Verify the installation as shown below.
# snort --version

   ,,_     -*> Snort! <*-
  o"  )~   Version 2.8.6.1 (Build 39)
   ''''    By Martin Roesch & The Snort Team: http://www.snort.org/snort/snort-team
           Copyright (C) 1998-2010 Sourcefire, Inc., et al.
           Using PCRE version: 7.8 2008-09-05

4. Create the required files and directory


You have to create the configuration file, rule file and the log directory. Create the following directories:

# mkdir /etc/snort
# mkdir /etc/snort/rules
# mkdir /var/log/snort

Create the following snort.conf and icmp.rules files:


# cat /etc/snort/snort.conf
include /etc/snort/rules/icmp.rules
# cat /etc/snort/rules/icmp.rules
alert icmp any any -> any any (msg:"ICMP Packet"; sid:477; rev:3;)

The above basic rule raises an alert whenever there is an ICMP packet (ping). Following is the structure of the alert:
<Rule Actions> <Protocol> <Source IP Address> <Source Port> <Direction Operator> <Destination IP Address> <Destination Port> (rule options)

Table: Rule structure and example
Structure                 Example
Rule Actions              alert
Protocol                  icmp
Source IP Address         any
Source Port               any
Direction Operator        ->
Destination IP Address    any
Destination Port          any
(rule options)            (msg:"ICMP Packet"; sid:477; rev:3;)
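Following the same structure, here is another illustrative rule (the port, message, and sid are made up for this sketch; locally written rules conventionally use sid values of 1000000 and above to avoid clashing with official rule sets):

```
alert tcp any any -> any 22 (msg:"SSH connection attempt"; sid:1000001; rev:1;)
```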

5. Execute snort
Execute snort from command line, as mentioned below.
# snort -c /etc/snort/snort.conf -l /var/log/snort/

Try pinging some IP from your machine, to check our ping rule. Following is the example of a snort alert for this ICMP rule.
# head /var/log/snort/alert
[**] [1:477:3] ICMP Packet [**]
[Priority: 0]
07/27-20:41:57.230345 l/l len: 0 l/l type: 0x200 0:0:0:0:0:0
pkt type:0x4 proto: 0x800 len:0x64
209.85.231.102 -> 209.85.231.104 ICMP TTL:64 TOS:0x0 ID:0 IpLen:20 DgmLen:84 DF
Type:8 Code:0 ID:24905 Seq:1 ECHO

Alert Explanation: A couple of lines are added for each alert, which include the following:
- Message (printed in the first line)
- Source IP
- Destination IP
- Type of packet, and header information

If you have a different interface for the network connection, then use the -i option along with -dev. In this example, my network interface is ppp0.
# snort -dev -i ppp0 -c /etc/snort/snort.conf -l /var/log/snort/

Execute snort as Daemon


Add -D option to run snort as a daemon.
# snort -D -c /etc/snort/snort.conf -l /var/log/snort/

Additional Snort information


The default config file will be available at snort-2.8.6.1/etc/snort.conf. Default rules can be downloaded from: http://www.snort.org/snort-rules

17.Register RHEL/OEL Linux to support: If you have purchased support from Oracle for your Linux, you can register to the Oracle support network (ULN) using up2date as explained here.

How to Register RHEL/OEL Linux to Oracle Support (ULN) using up2date


by Ramesh Natarajan on August 6, 2010

Question: I have purchased Linux support for RHEL and OEL from Oracle corporation. How do I register my Linux system with the Oracle support network to download and update packages? Can you explain with step-by-step instructions? Answer: After purchasing Linux support from Oracle, you should register your Linux system with Oracle's Unbreakable Linux Network using the up2date utility as explained in this article.

1. Launch up2date register Wizard


Type the following from the command line, which will invoke the Unbreakable Linux Network Registration wizard as shown below.
# up2date --register

2. Register to Oracle ULN using Oracle CSI Number


If you already have a uid/pwd for the ULN network, enter it here. If you don't have an existing account on ULN, the uid/pwd information you enter in this step will be used to create a new account for you. Make sure to enter a valid CSI number. When you purchased the Linux support from Oracle, you would've received a CSI number.

3. Register a System Profile Hardware Info


The up2date will automatically collect the following information about your system and use this to create a system profile. Hostname IP-address Memory Size CPU Model and Speed RHEL or OEL Version

4. Register a System Profile Packages Info


The up2date will automatically collect information about all the installed packages and associate it with the corresponding system profile. Later, this info is used to determine whether a package needs to be updated or not.

5. Send Profile Information to Oracle Network ( ULN )


On the confirmation screen, click on Next to send the profile information (including hardware and packages info) to Oracle's ULN. Make sure your system can talk to linux.oracle.com. If not, this step will fail.

6. RHEL / OEL Registration Successful with ULN


Once the registration is completed, you'll get the following confirmation screen.

18.tftpboot setup: You can install Linux from network using PXE by installing and configuring tftpboot server as explained here.

HowTo: 10 Steps to Configure tftpboot Server in UNIX / Linux (For installing Linux from Network using PXE)
by Balakrishnan Mariyappan on July 22, 2010

In this article, let us discuss how to set up tftpboot, including installation of necessary packages and tftpboot configurations. The TFTP boot service is primarily used to perform OS installation on a remote machine to which you don't have physical access. In order to perform the OS installation successfully, there should be a way to reboot the remote server, either using wakeonlan, someone manually rebooting it, or some other way. In those scenarios, you can set up the tftpboot services accordingly and the OS installation can be done remotely (you need the autoyast configuration file to automate the OS installation steps). A step-by-step procedure is presented in this article for SLES10-SP3 on the 64-bit architecture. However, these steps are pretty much similar for any other Linux distribution.

Required Packages
The following packages need to be installed for the tftpboot setup:
- dhcp services packages: dhcp-3.0.7-7.5.20.x86_64.rpm and dhcp-server-3.0.7-7.5.20.x86_64.rpm
- tftpboot package: tftp-0.48-1.6.x86_64.rpm
- pxeboot package: syslinux-3.11-20.14.26.x86_64.rpm

Package Installation
Install the packages for the dhcp server services:
$ rpm -ivh dhcp-3.0.7-7.5.20.x86_64.rpm
Preparing...    ########################################### [100%]
   1:dhcp       ########################################### [100%]
$ rpm -ivh dhcp-server-3.0.7-7.5.20.x86_64.rpm
Preparing...    ########################################### [100%]
   1:dhcp       ########################################### [100%]
$ rpm -ivh tftp-0.48-1.6.x86_64.rpm
$ rpm -ivh syslinux-3.11-20.14.26.x86_64.rpm

After installing the syslinux package, the pxelinux.0 file will be created under the /usr/share/syslinux/ directory. This is required to load the install kernel and initrd images on the client machine. Verify that the packages are successfully installed.
$ rpm -qa | grep dhcp $ rpm -qa | grep tftp

Download the appropriate tftpserver from the repository of your respective Linux distribution.

Steps to setup tftpboot

Step 1: Create /tftpboot directory


Create the tftpboot directory under root directory ( / ) as shown below.
# mkdir /tftpboot/

Step 2: Copy the pxelinux image


PXE Linux image will be available once you installed the syslinux package. Copy this to /tftpboot path as shown below.
# cp /usr/share/syslinux/pxelinux.0 /tftpboot

Step 3: Create the mount point for ISO and mount the ISO image
Let us assume that we are going to install the SLES10 SP3 Linux distribution on a remote server. If you have the SLES10-SP3 DVD, insert it in the drive, or mount the ISO image that you have. Here, the iso image has been mounted as follows:
# mkdir /tftpboot/sles10_sp3
# mount -o loop SLES-10-SP3-DVD-x86_64.iso /tftpboot/sles10_sp3

Refer to our earlier article on How to mount and view ISO files.

Step 4: Copy the vmlinuz and initrd images into /tftpboot


Copy the initrd and linux (kernel) images to the tftpboot directory as shown below.
# cd /tftpboot/sles10_sp3/boot/x86_64/loader
# cp initrd linux /tftpboot/

Step 5: Create pxelinux.cfg Directory


Create the directory pxelinux.cfg under /tftpboot and define the pxe boot definitions for the client.
# mkdir /tftpboot/pxelinux.cfg
# cat > /tftpboot/pxelinux.cfg/default
default linux
label linux
  kernel linux
  append initrd=initrd showopts instmode=nfs install=nfs://192.168.1.101/tftpboot/sles10_sp3/

The following options are used:
- kernel specifies where to find the Linux install kernel on the TFTP server.
- install specifies boot arguments to pass to the install kernel.
As per the entries above, the nfs install mode is used for serving install RPMs and configuration files. So, set up NFS on this machine with the /tftpboot directory in the exported list. You can add the autoyast option with the autoyast configuration file to automate the OS installation steps; otherwise you need to run through the installation steps manually.
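For instance, with the autoyast option added, the pxelinux.cfg/default file might look like the sketch below (the autoinst.xml name and location are illustrative, not from the original setup):

```
default linux
label linux
  kernel linux
  append initrd=initrd showopts instmode=nfs install=nfs://192.168.1.101/tftpboot/sles10_sp3/ autoyast=nfs://192.168.1.101/tftpboot/autoinst.xml
```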

Step 6: Change the owner and permission for /tftpboot directory


Assign nobody:nobody to /tftpboot directory.
# chown nobody:nobody /tftpboot
# chmod 777 /tftpboot

Step 7: Modify /etc/dhcpd.conf


Modify the /etc/dhcpd.conf as shown below.
# cat /etc/dhcpd.conf
ddns-update-style none;
default-lease-time 14400;
filename "pxelinux.0";
# IP address of the dhcp server, nothing but this machine.
next-server 192.168.1.101;
subnet 192.168.1.0 netmask 255.255.255.0 {
  # ip distribution range between 192.168.1.1 to 192.168.1.100
  range 192.168.1.1 192.168.1.100;
  default-lease-time 10;
  max-lease-time 10;
}

Specify the interface in /etc/sysconfig/dhcpd to listen for dhcp requests coming from clients.
# cat /etc/sysconfig/dhcpd | grep DHCPD_INTERFACE
DHCPD_INTERFACE="eth1"

Here, this machine has the ip address of 192.168.1.101 on the eth1 device. So, specify eth1 for the DHCPD_INTERFACE as shown above.

On a related note, refer to our earlier article about 7 examples to configure network interface using ifconfig.

Step 8: Modify /etc/xinetd.d/tftp


Modify the /etc/xinetd.d/tftp file to reflect the following. By default the value of the disable parameter is yes; make sure you modify it to no. You also need to change the server_args entry to -s /tftpboot.
# cat /etc/xinetd.d/tftp
service tftp
{
        socket_type     = dgram
        protocol        = udp
        wait            = yes
        user            = root
        server          = /usr/sbin/in.tftpd
        server_args     = -s /tftpboot
        disable         = no
}

Step 9: No changes in /etc/xinetd.conf


There is no need to modify the /etc/xinetd.conf file. Use the default values specified in the xinetd.conf file.

Step 10: Restart xinetd, dhcpd and nfs services


Restart these services as shown below.
# /etc/init.d/xinetd restart # /etc/init.d/dhcpd restart # /etc/init.d/nfsserver restart

After restarting the nfs services, you can view the exported directory list (/tftpboot) using the following command.
# showmount -e

Finally, the tftpboot setup is ready, and now the client machine can be booted after changing the first boot device to network in the BIOS settings. If you encounter any tftp error, you can troubleshoot by retrieving a file from the tftpserver with the tftp client, to make sure the tftp service is working properly. Let us assume that a sample.txt file is present under the /tftpboot directory.
$ tftp -v 192.168.1.101 -c get sample.txt

19.Delete all iptables rules: When you are starting to setup iptables, you might want to delete (flush) all the existing iptables as shown here.

How to View and Delete Iptables Rules List and Flush


by SathiyaMoorthy on July 16, 2010

Question: How do I view all the current iptables rules? Once I view it, is there a way to delete all the current rules and start from scratch? Answer: Use the iptables list option to view, and iptables flush option to delete all the rules as shown below. You should have root permission to perform this operation.

1. View / List All iptables Rules


When you want to check what rules are in iptables, use list option as shown below.
# iptables --list

Example 1: Iptables list output showing no rules


# iptables --list
Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

The above output shows chain headers. As you see, there are no rules in it.

Example 2: Iptables list output showing some rules


When there is a rule to disable ping reply, the iptables list output looks like the following. You can see the rule in the OUTPUT chain.
# iptables --list
Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
DROP       icmp --  anywhere             anywhere            icmp echo-request

2. Delete iptables Rules using flush option


When you want to delete all the rules, use the flush option as shown below.
# iptables --flush

After doing this, your iptables will become empty, and the iptables list output will look like what is shown in example 1. You can also delete (flush) a particular iptables chain by giving the chain name as an argument as shown below.
# iptables --flush OUTPUT
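Keep in mind that --flush without a -t argument only operates on the default filter table; rules in the other tables are untouched. A sketch of flushing them explicitly:

```
# iptables -t nat --flush
# iptables -t mangle --flush
```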

20.Disable ping replies: Someone can flood the network with ping -f. If ping reply is disabled as explained here, we can avoid this flooding.

How To Disable Ping Replies in Linux using icmp_echo_ignore_all


by SathiyaMoorthy on July 9, 2010

You may want to disable ping replies for many reasons, maybe for a security reason, or to avoid network congestion. Someone can flood the network with ping -f as shown in Ping Example 5 in our earlier Ping Tutorial article. If ping reply is disabled, we can avoid this flooding.

Disable ping reply Temporarily


You can temporarily disable the ping reply using the following method.
# echo "1" > /proc/sys/net/ipv4/icmp_echo_ignore_all

Please note that this setting will be erased after the reboot. To disable ping reply permanently (even after the reboot), follow the step mentioned below. Also, to enable the ping reply back, set the value to 0 as shown below.
# echo "0" > /proc/sys/net/ipv4/icmp_echo_ignore_all

Disable ping reply Permanently


You can permanently disable the ping reply using the following method. Step 1: Edit the sysctl.conf file and add the following line.
net.ipv4.icmp_echo_ignore_all = 1

Step 2: Execute sysctl -p to enforce this setting immediately.


# sysctl -p

The above command loads the sysctl settings from the sysctl.conf file. After the ping reply is disabled using one of the above methods, when somebody tries to ping your machine, they will end up waiting without getting a ping reply packet even when the machine is up and running.
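You can confirm which state the setting is in at any time by reading it back from /proc:

```shell
# 0 means ping replies are enabled; 1 means they are ignored
cat /proc/sys/net/ipv4/icmp_echo_ignore_all
```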

21.Block ip address using fail2ban: Fail2ban is an intrusion prevention framework that scans log files for various services (SSH, FTP, SMTP, Apache, etc.) and bans the IP that makes too many password failures. It also updates iptables firewall rules to reject these ip addresses.

Fail2Ban Howto: Block IP Address Using Fail2ban and IPTables


by SelvaGaneshan S on July 2, 2010

Fail2ban scans log files for various services ( SSH, FTP, SMTP, Apache, etc., ) and bans the IP that makes too many password failures. It also updates the firewall rules to reject these ip addresses. Fail2ban is an intrusion prevention framework written in the Python programming language. Main purpose of Fail2ban is to prevent brute force login attacks. Also, refer to our earlier article on Tripwire (Linux host based intrusion detection system).

Install Fail2ban
To install fail2ban from source, download it from sourceforge. Use apt-get to install Fail2ban on a Debian based system as shown below.
# apt-get install fail2ban

You can also install Fail2ban manually by downloading the fail2ban deb package.
# dpkg -i fail2ban_0.8.1-1_all.deb

How to configure fail2ban


All Fail2ban configuration files are located under the /etc/fail2ban directory.

/etc/fail2ban/fail2ban.conf
The main purpose of this file is to configure fail2ban log related directives:
- loglevel: set the log level output.
- logtarget: specify the log file path.
Actions taken by Fail2ban are logged in the /var/log/fail2ban.log file. You can change the verbosity in the conf file to one of: 1 - ERROR, 2 - WARN, 3 - INFO or 4 - DEBUG.

/etc/fail2ban/jail.conf
jail.conf file contains the declaration of the service configurations. This configuration file is broken up into different contexts. The DEFAULT settings apply to all sections. The following DEFAULT section of jail.conf says that after five failed access attempts from a single IP address within 600 seconds or 10 minutes (findtime), that address will be automatically blocked for 600 seconds (bantime).
[DEFAULT]
ignoreip = 127.0.0.1
maxretry = 5
findtime = 600
bantime = 600

- ignoreip: a space-separated list of IP addresses that cannot be blocked by fail2ban.
- maxretry: maximum number of failed login attempts before a host is blocked by fail2ban.
- bantime: time in seconds that a host is blocked if it was caught by fail2ban (600 seconds = 10 minutes).

Service Configurations
By default, some services are inserted as templates. Following is an example of the ssh services section.
[ssh]
enabled = true
port = ssh
filter = sshd
logpath = /var/log/auth.log
action = iptables

enabled: Enable fail2ban checking for the ssh service.
port: Service port (as referred to in the /etc/services file).
filter: Name of the filter to be used by the service to detect matches. This name corresponds to a file name in /etc/fail2ban/filter.d, without the .conf extension. For example, filter = sshd refers to /etc/fail2ban/filter.d/sshd.conf.
logpath: The log file that fail2ban checks for failed login attempts.
action: This option tells fail2ban which action to take once a filter matches. The name corresponds to a file name in /etc/fail2ban/action.d, without the .conf extension. For example, action = iptables refers to /etc/fail2ban/action.d/iptables.conf.

With this configuration, fail2ban monitors the /var/log/auth.log file for failed access attempts, and if it finds repeated failed ssh login attempts from the same IP address or host, it stops further login attempts from that IP address/host by blocking it with a fail2ban iptables firewall rule.

Fail2ban Filters
The directory /etc/fail2ban/filter.d contains regular expressions that are used to detect break-in attempts, password failures, etc., for various services. For example:
sshd.conf - Fail2ban ssh related filters
apache-auth.conf - Fail2ban apache service filters
You can also add your own regular expressions to catch unwanted actions.
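To get a feel for what these filter regexes do, you can test a simplified pattern against a sample log line by hand. The log line and the sed pattern below are illustrative only; they are not fail2ban's actual sshd.conf regex.

```shell
# A made-up auth.log line of the kind the sshd filter matches:
line='Oct 11 12:00:01 host sshd[1234]: Failed password for invalid user bob from 203.0.113.7 port 4321 ssh2'

# Extract the offending IP address, roughly what the filter's
# <HOST> capture group does inside fail2ban:
ip=$(printf '%s\n' "$line" | sed -n 's/.*Failed password for .* from \([0-9.]*\) port.*/\1/p')
echo "$ip"
```

fail2ban applies its patterns to every new line of the configured logpath and feeds each captured host into the ban logic.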

Fail2ban Actions
The directory /etc/fail2ban/action.d contains different scripts defining actions that will execute once a filter matches. Only one filter is allowed per service, but it is possible to specify several actions, on separate lines. For example:
iptables.conf - block and unblock an IP address
mail.conf - send mail to the configured user

Start/Stop Fail2ban Service


After making configuration changes, stop and start the Fail2ban daemon as shown below.
# /etc/init.d/fail2ban stop
# /etc/init.d/fail2ban start

22. Package management using dpkg: On Debian, you can install or remove deb packages using the dpkg utility.

Debian: How to Install or Remove DEB Packages Using dpkg


by Sasikala on June 18, 2010

Question: I would like to know how to install, uninstall, and verify deb packages on Debian. Can you explain this with an example?

Answer: Use dpkg to install and remove a deb package as explained below.

On Debian, dpkg (the Debian package system) allows you to install and remove software packages. dpkg is the simplest way to install and uninstall a package. Debian also supplies the tools apt (Advanced Package Tool) and aptitude to help administrators add or remove software more easily. Refer to our earlier article Manage packages using apt-get for more details.

Installing a Deb Using dpkg -i


Syntax: dpkg -i package-file-name

The -i option installs a package.

The following example installs the Debian package for tcl tool.
$ dpkg -i tcl8.4_8.4.19-2_amd64.deb
Selecting previously deselected package tcl8.4.
(Reading database ... 94692 files and directories currently installed.)
Unpacking tcl8.4 (from tcl8.4_8.4.19-2_amd64.deb) ...
Setting up tcl8.4 (8.4.19-2) ...
Processing triggers for menu ...
Processing triggers for man-db ...

You can verify the installation of the package using dpkg -l as shown below.
$ dpkg -l | grep 'tcl'
ii  tcl8.4    8.4.19-2    Tcl (the Tool Command Language) v8.4 - run-t

The above command shows that the tcl package is installed properly. The ii status means "installed ok installed".
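The first column of dpkg -l output can be split out with standard tools. The sketch below parses the sample line captured above (re-used here as a literal string, so it works without dpkg installed):

```shell
# Parse a "dpkg -l" output line: field 1 is the status pair
# (desired state + actual state), field 2 is the package name.
line='ii  tcl8.4  8.4.19-2  Tcl (the Tool Command Language) v8.4 - run-t'

status=$(printf '%s\n' "$line" | awk '{print $1}')
pkg=$(printf '%s\n' "$line" | awk '{print $2}')

# ii = desired state "install", actual state "installed"
echo "$pkg has status $status"
```

On a live system you would feed `dpkg -l | grep 'tcl'` into the same awk fields instead of a hard-coded line.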

Uninstalling a Deb using dpkg -r


dpkg with -r option removes the installed package.
$ dpkg -r tcl8.4
(Reading database ... 94812 files and directories currently installed.)
Removing tcl8.4 ...
Processing triggers for man-db ...
Processing triggers for menu ...

Now list the package and check the status.


# dpkg -l | grep 'tcl'
rc  tcl8.4    8.4.19-2    Tcl (the Tool Command Language) v8.4 - run-t

rc stands for "removed ok config-files", i.e. the remove action didn't purge the configuration files. The status of each installed package is available in /var/lib/dpkg/status. The status of the tcl8.4 package looks like this:
Package: tcl8.4
Status: deinstall ok config-files
Priority: optional
Section: interpreters
Installed-Size: 3308

The following command is used to purge the package completely.


$ dpkg -P tcl8.4
(Reading database ... 94691 files and directories currently installed.)
Removing tcl8.4 ...
Purging configuration files for tcl8.4 ...
Processing triggers for menu ...

$ dpkg -l | grep 'tcl'
$

So the package is completely removed, and its status in /var/lib/dpkg/status is shown below.
Package: tcl8.4
Status: purge ok not-installed
Priority: optional
Section: interpreters
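The status letters seen in this section can be decoded mechanically. This small helper covers just the states encountered above; it is not dpkg's full state table.

```shell
# Decode the dpkg -l status pair for the states seen in this section.
explain_status() {
  case "$1" in
    ii) echo "installed" ;;
    rc) echo "removed, but config files remain" ;;
    *)  echo "unknown state: $1" ;;
  esac
}

explain_status ii
explain_status rc
```

A purged package simply disappears from dpkg -l output, which is why the grep after `dpkg -P` above returned nothing.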

23.Alfresco content management system: Alfresco is the best open source content management system. Everything you need to know to install and configure Alfresco is explained here.

12 Steps to Install and Configure Alfresco on UNIX / Linux


by Ramesh Natarajan on May 24, 2010

Alfresco is the best open source content management system. It has a rock solid document management foundation, with several features built on top of it. Alfresco provides web based content management, a collaboration platform, Content Management Interoperability Services (CMIS), records management and image management. Alfresco has an enterprise edition and a free community edition. See the difference between them here. If you have an in-house IT team, just go with the Alfresco community edition. It is straightforward to install and configure Alfresco. In this article, let us review how to install and configure the Alfresco community edition on a UNIX / Linux platform in 12 easy steps.

1. Install Alfresco Community Tomcat Bundle


Download Alfresco from the community edition download page.
# cd ~
# wget -O alfresco-community-tomcat-3.3.tar.gz http://dl.alfresco.com/release/community/build-2765/alfresco-community-tomcat-3.3.tar.gz?dl_file=release/community/build-2765/alfresco-community-tomcat-3.3.tar.gz
# mkdir /opt/alfresco/
# cd /opt/alfresco/
# tar xvfz ~/alfresco-community-tomcat-3.3.tar.gz

2. Modify Alfresco Global Properties


The alf_data directory (set via the dir.root parameter) is the location of the Alfresco data store, where all the documents will be stored. Make sure this points to an absolute path as shown below. Initially this directory will not be present; the alf_data directory is created when you start Alfresco for the first time.
# vi /opt/alfresco/tomcat/shared/classes/alfresco-global.properties
dir.root=/opt/alfresco/alf_data

# ls -l /opt/alfresco/alf_data
ls: /opt/alfresco/alf_data: No such file or directory

3. Verify MySQL connector is installed


Just double-check to make sure the mysql connector is installed in the proper location, as shown below.
# ls -l /opt/alfresco/tomcat/lib/mysql-connector-java-5.1.7-bin.jar
-rwxr-xr-x 1 root root 709922 Jan 12 11:59 /opt/alfresco/tomcat/lib/mysql-connector-java-5.1.7-bin.jar

4. Create the Alfresco MySQL databases


If you don't have MySQL, install it using yum groupinstall, or follow our LAMP install article, or our mysql rpm article. After installing MySQL, create the alfresco database using the db_setup.sql script as shown below.
# cd /opt/alfresco/extras/databases/mysql
# mysql -u root -p < db_setup.sql
Enter password:

# ls -l /var/lib/mysql/alfresco/
total 4
-rw-rw---- 1 mysql mysql 54 May 7 11:25 db.opt

5. Verify that Alfresco MySQL databases got created


# mysql -u root -p
Enter password:
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| alfresco           |
| mysql              |
| test               |
+--------------------+
4 rows in set (0.00 sec)

6. Update the db.url in the global property files


Update the db.url parameter in the alfresco-global.properties file to point to localhost:3306 as shown below.
# vi /opt/alfresco/tomcat/shared/classes/alfresco-global.properties
db.url=jdbc:mysql://localhost:3306/alfresco

7. Start Alfresco Server


Start the alfresco server. This also starts the tomcat application server that is bundled with Alfresco.
# cd /opt/alfresco
# ./alfresco.sh start
Using CATALINA_BASE:   /opt/alfresco/tomcat
Using CATALINA_HOME:   /opt/alfresco/tomcat
Using CATALINA_TMPDIR: /opt/alfresco/tomcat/temp
Using JRE_HOME:        /usr/java/jdk1.6.0_18

While the alfresco tomcat server is starting up, check the /opt/alfresco/alfresco.log for any possible issues.

When alfresco.sh is executed for the first time, it performs some database setup, and you'll see messages like the following in alfresco.log (only the first time):

Executing database script /opt/alfresco/tomcat/temp/Alfresco/*.sql
All executed statements: /opt/alfresco/tomcat/temp/Alfresco/*.sql
Applied patch [org.alfresco.repo.admin.patch.PatchExecuter]

Look for the line in the log file that says "Alfresco started", which indicates that Alfresco started successfully. Following are a few sample lines from alfresco.log.
# tail -f /opt/alfresco/alfresco.log
21:29:25,431 INFO [org.alfresco.repo.domain.schema.SchemaBootstrap] Executing database script /opt/alfresco/tomcat/temp/Alfresco/AlfrescoSchema-MySQLInnoDBDialect-Update-3892772511531851057.sql (Copied from classpath:alfresco/dbscripts/create/3.3/org.hibernate.dialect.MySQLInnoDBDialect/AlfrescoCreate-3.3-RepoTables.sql).
21:29:27,245 INFO [org.alfresco.repo.domain.schema.SchemaBootstrap] All executed statements: /opt/alfresco/tomcat/temp/Alfresco/AlfrescoSchema-MySQLInnoDBDialect-All_Statements-4724137490855924607.sql.
=== Applied patch ===
ID: patch.db-V3.0-0-CreateActivitiesExtras
RESULT: Script completed
=====================================
21:30:03,756 INFO [org.alfresco.service.descriptor.DescriptorService] Alfresco JVM - v1.6.0_21-b06; maximum heap size 910.250MB
21:30:03,756 INFO [org.alfresco.service.descriptor.DescriptorService] Alfresco started (Community): Current version 3.3.0 (2765) schema 4009 - Originally installed version 3.3.0 (2765) schema 4009

8. Verify the alf_data directory creation


When you start Alfresco for the first time, it creates the Alfresco data repository as shown below.
# ls -l /opt/alfresco/alf_data
total 32
drwxr-xr-x 2 root root 4096 Mar 25 16:26 audit.contentstore
drwxr-xr-x 2 root root 4096 Mar 25 16:26 contentstore
drwxr-xr-x 2 root root 4096 Mar 25 16:26 contentstore.deleted
drwxr-xr-x 3 root root 4096 Mar 25 16:26 lucene-indexes

9. Verify that Alfresco Server is Running


Make sure alfresco server is running successfully. View the alfresco.log file to make sure there are no errors.
# ps -ef | grep -i alf
root 9280 1 51 16:25 pts/0 00:00:30 /usr/java/jdk1.6.0_18/bin/java -Xms128m -Xmx512m -XX:MaxPermSize=160m -server -Dalfresco.home=. -Dcom.sun.management.jmxremote -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Djava.util.logging.config.file=/opt/alfresco/tomcat/conf/logging.properties -Djava.endorsed.dirs=/opt/alfresco/tomcat/endorsed -classpath :/opt/alfresco/tomcat/bin/bootstrap.jar -Dcatalina.base=/opt/alfresco/tomcat -Dcatalina.home=/opt/alfresco/tomcat -Djava.io.tmpdir=/opt/alfresco/tomcat/temp org.apache.catalina.startup.Bootstrap start

# tail -f /opt/alfresco/alfresco.log

10. Login to Alfresco Explorer or Alfresco Share


Alfresco has two ways to access the application: Alfresco Explorer and Alfresco Share.
Go to http://localhost:8080/alfresco to launch Alfresco Explorer.
Go to http://localhost:8080/share to launch Alfresco Share.
The default alfresco administrator uid/pwd is admin/admin. Change it immediately after you login.

11. Change the default password for the alfresco database


Use the mysql update command to change the password for the alfresco user as shown below.
# mysql -u root -p mysql
Enter password:
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 51
Server version: 5.0.77 Source distribution

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> UPDATE user SET password=PASSWORD('donttellanybody') WHERE user='alfresco';
Query OK, 2 rows affected (0.00 sec)
Rows matched: 2  Changed: 2  Warnings: 0

12. Modify the configuration file to reflect the new alfresco password.
Update the db.password parameter in the alfresco-global.properties file as shown below.
# vi /opt/alfresco/tomcat/shared/classes/alfresco-global.properties
db.name=alfresco
db.username=alfresco
db.password=donttellanybody

After this, restart the MySQL database and the Alfresco Tomcat server. As a final step, make sure to take a backup of the alfresco mysql database (using mysqldump or mysqlhotcopy) and of the /opt/alfresco directory.
# service mysqld restart
# /opt/alfresco/alfresco.sh stop
# /opt/alfresco/alfresco.sh start
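The backup advice above can be scripted. This is a minimal sketch, assuming the database name, credentials, and paths used earlier in this article; the /backup/alfresco destination is an arbitrary example, so adjust everything for your setup before use.

```shell
# Minimal Alfresco backup sketch, wrapped in a function so it can be
# called from cron or by hand. Paths and credentials assume the
# earlier steps of this article.
backup_alfresco() {
  stamp=$(date +%Y%m%d)          # datestamp used in the backup filenames
  backup_dir=/backup/alfresco    # example destination, adjust as needed
  mkdir -p "$backup_dir"

  # Dump the alfresco MySQL database (stop Alfresco first so the dump
  # is consistent with the content store).
  mysqldump -u alfresco -pdonttellanybody alfresco \
    > "$backup_dir/alfresco-db-$stamp.sql"

  # Archive the content store and configuration under /opt/alfresco.
  tar czf "$backup_dir/alfresco-files-$stamp.tar.gz" /opt/alfresco
}
```

Running it nightly from cron, after `alfresco.sh stop` and before `alfresco.sh start`, keeps the database dump and the file archive in sync.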

24. Bugzilla bug tracking system: Bugzilla is the best open source bug tracking system. Everything you need to know to install and configure Bugzilla is explained here.

Step-by-Step Bugzilla Installation Guide for Linux


by Ramesh Natarajan on May 17, 2010

Bugzilla is the best open source bug tracking system. It is very simple to use, with lots of features. Bugzilla allows you to track bugs and collaborate effectively with developers and other teams in your organization. This is a detailed step-by-step bugzilla installation guide for Linux.

1. Verify Perl Version


Make sure your perl version is >= 5.8.1 as shown below.
# perl -v
This is perl, v5.8.8 built for i386-linux-thread-multi

Most Linux distributions come with perl. If you don't have it on yours, download and install it from the corresponding distribution website.
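Rather than comparing version strings by eye, you can let perl check its own version: `use 5.008001` makes perl exit non-zero when it is older than 5.8.1. A small sketch:

```shell
# Script-friendly minimum-version check for perl >= 5.8.1.
# "use 5.008001" dies (non-zero exit) on an older interpreter.
if perl -e 'use 5.008001;' 2>/dev/null; then
  echo "perl is new enough for Bugzilla"
else
  echo "perl is older than 5.8.1 (or missing) - install/upgrade it first"
fi
```

The same pattern works for any minimum version, e.g. `use 5.010;` for 5.10.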

2. Install MySQL Database


Make sure your MySQL version is >= 4.1.2 as shown below.
# mysql -V mysql Ver 14.12 Distrib 5.0.77, for redhat-linux-gnu (i686) using readline 5.1

If you don't have mysql, install it using yum groupinstall, or follow our LAMP install article, or our mysql rpm article.

3. Install Apache
If you already have apache installed, make sure you are able to access it using http://{your-ip-address}. If you don't have apache, install it using yum based on the LAMP install article, or install apache from source.

4. Download latest Bugzilla tar ball


Download the latest stable release from bugzilla download page. Extract the bugzilla*.tar.gz file to the apache document root directory as shown below.

# cd ~
# wget http://ftp.mozilla.org/pub/mozilla.org/webtools/bugzilla-3.6.tar.gz
# cd /var/www/html
# tar xvfz /usr/save/bugzilla-3.4.6.tar.gz

5. Execute the bugzilla checksetup.pl


The bugzilla checksetup.pl program verifies whether all the required perl modules are installed, and displays a list of any missing bugzilla modules that need to be installed. You can run checksetup.pl as many times as you like until you've verified that all the required perl modules are installed. Following is the output of the first run of checksetup.pl, where it has listed all the missing optional and required modules.
# cd /var/www/html/bugzilla-3.4.6
# ./checksetup.pl --check-modules

COMMANDS TO INSTALL OPTIONAL MODULES:
           GD: /usr/bin/perl install-module.pl GD
        Chart: /usr/bin/perl install-module.pl Chart::Base
  Template-GD: /usr/bin/perl install-module.pl Template::Plugin::GD::Image
   GDTextUtil: /usr/bin/perl install-module.pl GD::Text
      GDGraph: /usr/bin/perl install-module.pl GD::Graph
     XML-Twig: /usr/bin/perl install-module.pl XML::Twig
   MIME-tools: /usr/bin/perl install-module.pl MIME::Parser
  libwww-perl: /usr/bin/perl install-module.pl LWP::UserAgent
  PatchReader: /usr/bin/perl install-module.pl PatchReader
   PerlMagick: /usr/bin/perl install-module.pl Image::Magick
    perl-ldap: /usr/bin/perl install-module.pl Net::LDAP
  Authen-SASL: /usr/bin/perl install-module.pl Authen::SASL
   RadiusPerl: /usr/bin/perl install-module.pl Authen::Radius
    SOAP-Lite: /usr/bin/perl install-module.pl SOAP::Lite
  HTML-Parser: /usr/bin/perl install-module.pl HTML::Parser
HTML-Scrubber: /usr/bin/perl install-module.pl HTML::Scrubber
Email-MIME-Attachment-Stripper: /usr/bin/perl install-module.pl Email::MIME::Attachment::Stripper
  Email-Reply: /usr/bin/perl install-module.pl Email::Reply
  TheSchwartz: /usr/bin/perl install-module.pl TheSchwartz
Daemon-Generic: /usr/bin/perl install-module.pl Daemon::Generic
     mod_perl: /usr/bin/perl install-module.pl mod_perl2

YOU MUST RUN ONE OF THE FOLLOWING COMMANDS (depending on which database you use):
PostgreSQL: /usr/bin/perl install-module.pl DBD::Pg
     MySQL: /usr/bin/perl install-module.pl DBD::mysql
    Oracle: /usr/bin/perl install-module.pl DBD::Oracle

COMMANDS TO INSTALL REQUIRED MODULES (You *must* run all these commands and then re-run checksetup.pl):
/usr/bin/perl install-module.pl CGI
/usr/bin/perl install-module.pl Digest::SHA
/usr/bin/perl install-module.pl Date::Format
/usr/bin/perl install-module.pl DateTime
/usr/bin/perl install-module.pl DateTime::TimeZone
/usr/bin/perl install-module.pl Template
/usr/bin/perl install-module.pl Email::Send
/usr/bin/perl install-module.pl Email::MIME
/usr/bin/perl install-module.pl Email::MIME::Encodings
/usr/bin/perl install-module.pl Email::MIME::Modifier
/usr/bin/perl install-module.pl URI

To attempt an automatic install of every required and optional module with one command, do:
/usr/bin/perl install-module.pl --all

6. Execute bugzilla install-module.pl


As suggested by the output of the checksetup.pl, you can execute the install-module.pl to install all bugzilla required and optional perl modules.
# /usr/bin/perl install-module.pl --all

Please review the output of the above install-module.pl run to make sure everything got installed properly. Some of the modules may fail to install (for example, because required OS packages are missing). Execute checksetup.pl again to verify whether all the modules got installed properly. Following is the output of the second run of checksetup.pl:
# ./checksetup.pl --check-modules

COMMANDS TO INSTALL OPTIONAL MODULES:
          GD: /usr/bin/perl install-module.pl GD
       Chart: /usr/bin/perl install-module.pl Chart::Base
 Template-GD: /usr/bin/perl install-module.pl Template::Plugin::GD::Image
  GDTextUtil: /usr/bin/perl install-module.pl GD::Text
     GDGraph: /usr/bin/perl install-module.pl GD::Graph
    XML-Twig: /usr/bin/perl install-module.pl XML::Twig
  PerlMagick: /usr/bin/perl install-module.pl Image::Magick
   SOAP-Lite: /usr/bin/perl install-module.pl SOAP::Lite
    mod_perl: /usr/bin/perl install-module.pl mod_perl2

YOU MUST RUN ONE OF THE FOLLOWING COMMANDS (depending on which database you use):
PostgreSQL: /usr/bin/perl install-module.pl DBD::Pg
     MySQL: /usr/bin/perl install-module.pl DBD::mysql
    Oracle: /usr/bin/perl install-module.pl DBD::Oracle

7. Install missing Perl Modules


As we see from the above checksetup.pl output, some of the optional and required modules did not get installed when we ran install-module.pl. So, we have to install the missing modules manually, one by one, to figure out and fix the underlying issues. Refer to the Troubleshooting section at the end for the list of all the issues I faced while installing the perl modules required for bugzilla (along with the solutions for those issues).

8. Final checksetup.pl check-modules verification


Execute checksetup.pl check-modules again as shown below as final verification to make sure all the modules got installed successfully.
# ./checksetup.pl --check-modules
* This is Bugzilla 3.4.6 on perl 5.8.8
* Running on Linux 2.6.18-164.el5PAE #1 SMP Thu Sep 3 04:10:44 EDT 2009

Checking perl modules...
Checking for               CGI.pm (v3.21)    ok: found v3.49
Checking for           Digest-SHA (any)      ok: found v5.48
Checking for             TimeDate (v2.21)    ok: found v2.24
Checking for             DateTime (v0.28)    ok: found v0.55
Checking for    DateTime-TimeZone (v0.71)    ok: found v1.17
Checking for                  DBI (v1.41)    ok: found v1.52
Checking for     Template-Toolkit (v2.22)    ok: found v2.22
Checking for           Email-Send (v2.00)    ok: found v2.198
Checking for           Email-MIME (v1.861)   ok: found v1.903
Checking for Email-MIME-Encodings (v1.313)   ok: found v1.313
Checking for  Email-MIME-Modifier (v1.442)   ok: found v1.903
Checking for                  URI (any)      ok: found v1.54
Checking available perl DBD modules...
Checking for               DBD-Pg (v1.45)    not found
Checking for            DBD-mysql (v4.00)    ok: found v4.013
Checking for           DBD-Oracle (v1.19)    not found

The following Perl modules are optional:
Checking for                   GD (v1.20)      ok: found v2.44
Checking for                Chart (v1.0)       ok: found v2.4.1
Checking for          Template-GD (any)        ok: found v1.56
Checking for           GDTextUtil (any)        ok: found v0.86
Checking for              GDGraph (any)        ok: found v1.44
Checking for             XML-Twig (any)        ok: found v3.34
Checking for           MIME-tools (v5.406)     ok: found v5.427
Checking for          libwww-perl (any)        ok: found v5.834
Checking for          PatchReader (v0.9.4)     ok: found v0.9.5
Checking for           PerlMagick (any)        ok: found v6.2.8
Checking for            perl-ldap (any)        ok: found v0.4001
Checking for          Authen-SASL (any)        ok: found v2.1401
Checking for           RadiusPerl (any)        ok: found v0.17
Checking for            SOAP-Lite (v0.710.06)  ok: found v0.711
Checking for          HTML-Parser (v3.40)      ok: found v3.65
Checking for        HTML-Scrubber (any)        ok: found v0.08
Checking for Email-MIME-Attachment-Stripper (any)  ok: found v1.316
Checking for          Email-Reply (any)        ok: found v1.202
Checking for          TheSchwartz (any)        ok: found v1.10
Checking for       Daemon-Generic (any)        ok: found v0.61
Checking for             mod_perl (v1.999022)  ok: found v2.000004

9. Create localconfig file using checksetup.pl


Execute checksetup.pl without any arguments; this will create a localconfig file in the current directory. The localconfig file contains the key configuration parameters used by bugzilla (for example, the mysql db username and password).
# ./checksetup.pl
Reading ./localconfig...
This version of Bugzilla contains some variables that you may want
to change and adapt to your local settings. Please edit the file
./localconfig and rerun checksetup.pl.

The following variables are new to ./localconfig since you last ran
checksetup.pl: create_htaccess, webservergroup, db_driver, db_host,
db_name, db_user, db_pass, db_port, db_sock, db_check, index_html,
cvsbin, interdiffbin, diffpath, site_wide_secret

10. Modify the localconfig file.


The only thing you need to modify in the localconfig file is the MySQL database password, by changing the $db_pass variable as shown below.
# vi ./localconfig
$db_pass = 'Bugs4All';

11. Modify /etc/my.cnf to increase bugzilla attachment size


Set the max_allowed_packet to 4M in the /etc/my.cnf to increase bugzilla attachment size.
# cat /etc/my.cnf
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
# Default to using old password format for compatibility with mysql 3.x
# clients (those using the mysqlclient10 compatibility package).
old_passwords=1
# Disabling symbolic-links is recommended to prevent assorted security risks;
# to do so, uncomment this line:
# symbolic-links=0
# Allow packets up to 4MB
max_allowed_packet=4M

[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

Restart the mysqld after this change.


# service mysqld restart
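The 4M value is shorthand for 4*1024*1024 bytes, and mysql reports max_allowed_packet in bytes, so you can sanity-check what the server should report after the restart:

```shell
# max_allowed_packet=4M in my.cnf means 4 MiB; compute the byte value
# that "SHOW VARIABLES LIKE 'max_allowed_packet'" should report.
bytes=$((4 * 1024 * 1024))
echo "$bytes"
# Compare with: mysql -u root -p -e "SHOW VARIABLES LIKE 'max_allowed_packet'"
```

If the reported value doesn't match, the setting likely landed in the wrong section of my.cnf or mysqld wasn't restarted.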

12. Create bugs mysql user


Add bugzilla user (bugs) to the mysql database as shown below.
# mysql -u root -p
mysql> GRANT SELECT, INSERT, UPDATE, DELETE, INDEX, ALTER, CREATE,
       LOCK TABLES, CREATE TEMPORARY TABLES, DROP, REFERENCES
       ON bugs.* TO bugs@localhost IDENTIFIED BY 'Bugs4All';
mysql> FLUSH PRIVILEGES;

13. Create the bugzilla database


Execute the checksetup.pl (without any arguments) again to create the mysql bugzilla database.

Since the localconfig file already exists, the second time you execute checksetup.pl it will create the mysql database based on the information from the localconfig file.
# ./checksetup.pl
Creating database bugs...
Building Schema object from database...
Adding new table bz_schema ...
Initializing the new Schema storage...
Adding new table attach_data ...
Adding new table attachments ...
Adding new table bug_group_map ...
Adding new table bug_see_also ...
Adding new table bug_severity ...
Adding new table bug_status ...
Inserting values into the 'priority' table:
Inserting values into the 'bug_status' table:
Inserting values into the 'rep_platform' table:
Creating ./data directory...
Creating ./data/attachments directory...
Creating ./data/duplicates directory...
Adding foreign key: attachments.bug_id -> bugs.bug_id...
Adding foreign key: attachments.submitter_id -> profiles.userid...
Adding foreign key: bug_group_map.bug_id -> bugs.bug_id...

14. Create bugzilla administrator account.


At the end of the ./checksetup.pl execution, it will detect that you don't have an administrator account yet and will ask you to enter the administrator login information as shown below.
Looks like we don't have an administrator set up yet.
Either this is your first time using Bugzilla, or your
administrator's privileges might have accidentally been deleted.
Enter the e-mail address of the administrator: ramesh@thegeekstuff.com
Enter the real name of the administrator: Ramesh Natarajan
Enter a password for the administrator account:
Please retype the password to verify:
ramesh@thegeekstuff.com is now set up as an administrator.
Creating default classification 'Unclassified'...
Creating initial dummy product 'TestProduct'...

Now that you have installed Bugzilla, you should visit the 'Parameters'
page (linked in the footer of the Administrator account) to ensure it
is set up as you wish - this includes setting the 'urlbase' option to
the correct URL.

15. Configure apache for mod_perl


Rename the bugzilla directory (i.e. remove the version number in it).
# cd /var/www/html
# mv bugzilla-3.4.6/ bugzilla

Add the following two lines to httpd.conf


# tail -2 /etc/httpd/conf/httpd.conf
PerlSwitches -I/var/www/html/bugzilla -I/var/www/html/bugzilla/lib -w -T
PerlConfigRequire /var/www/html/bugzilla/mod_perl.pl

Verify the Group in httpd.conf matches the webservergroup in localconfig


# cd /var/www/html/bugzilla/
# grep webservergroup localconfig
$webservergroup = 'apache';

# grep Group /etc/httpd/conf/httpd.conf
Group apache

16. Final checksetup.pl execution


Execute the checksetup.pl again.
# ./checksetup.pl
Reading ./localconfig...
Removing existing compiled templates...
Precompiling templates...done.
Fixing file permissions...

Now that you have installed Bugzilla, you should visit the 'Parameters'
page (linked in the footer of the Administrator account) to ensure it
is set up as you wish - this includes setting the 'urlbase' option to
the correct URL.

17. Login to bugzilla and complete one time setup.


Start apache, go to http://{your-ip-address}/bugzilla and login using the administrator account you created above. From the bugzilla UI footer, go to Administration -> Parameters -> Required Settings and fill out the following information:
maintainer: ramesh@thegeekstuff.com
urlbase: http://{your-ip-address}/
Note: Depending on your setup, go to User Authentication, where you might want to change the requiredlogin and emailregexp parameters.

Troubleshooting Bugzilla Install Issues

Issue1: DBD::mysql module failed


The DBD::mysql perl module failed with the mysql.h: No such file or directory error message as shown below.
# /usr/bin/perl install-module.pl DBD::mysql
dbdimp.h:22:49: error: mysql.h: No such file or directory
dbdimp.h:23:45: error: mysqld_error.h: No such file or directory
dbdimp.h:25:49: error: errmsg.h: No such file or directory
In file included from dbdimp.c:20:
dbdimp.h:144: error: expected specifier-qualifier-list before MYSQL
dbdimp.h:236: error: expected specifier-qualifier-list before MYSQL_RES

Solution1: install mysql-devel


The error message mysql.h: No such file or directory occurs because the mysql-devel package was missing, as shown below.
# rpm -qa | grep -i mysql
MySQL-python-1.2.1-1
mysql-5.0.77-4.el5_4.2
mysql-connector-odbc-3.51.26r1127-1.el5
mysql-server-5.0.77-4.el5_4.2
libdbi-dbd-mysql-0.8.1a-1.2.2
perl-DBD-MySQL-3.0007-2.el5

Install the mysql-devel package as shown below.


# yum install mysql-devel

# rpm -qa | grep -i "mysql-devel"
mysql-devel-5.0.77-4.el5_4.2

DBD::mysql installation will go through without any issues now.


# /usr/bin/perl install-module.pl DBD::mysql

Issue2: GD failed with missing gdlib-config / libgd


Installing GD module failed with the following error message.
# /usr/bin/perl install-module.pl GD
**UNRECOVERABLE ERROR**
Could not find gdlib-config in the search path. Please install libgd 2.0.28 or higher.
If you want to try to compile anyway, please rerun this script with the option --ignore_missing_gd.
Running make test
Make had some problems, maybe interrupted? Won't test
Running make install
Make had some problems, maybe interrupted? Won't install

Solution2: Install gd-devel package


Install libgd (i.e gd-devel package) as shown below to fix the GD module issue.
# yum install gd-devel

# rpm -qa | grep gd
gd-2.0.33-9.4.el5_4.2
gd-devel-2.0.33-9.4.el5_4.2

GD got installed without any issues after installing the gd-devel package.


# /usr/bin/perl install-module.pl GD

Issue3: Twig Failed with expat.h error


The XML::Twig module failed to install with the error message expat.h: No such file or directory as shown below.
# /usr/bin/perl install-module.pl XML::Twig
Expat.xs:12:19: error: expat.h: No such file or directory
Expat.xs:60: error: expected specifier-qualifier-list before XML_Parser

Solution3: Install expat and expat-devel for Twig


Install expat and expat-devel package as shown below.
# yum install expat
# yum install expat-devel

Now install Twig without any issues.


# /usr/bin/perl install-module.pl XML::Twig

Issue4: Image::Magick failed to install


Image::Magick installation failed with magick/MagickCore.h: No such file or directory error message as shown below.
# /usr/bin/perl install-module.pl Image::Magick
Note (probably harmless): No library found for -lMagickCore
Magick.xs:64:31: error: magick/MagickCore.h: No such file or directory
Magick.xs:171: error: expected specifier-qualifier-list before MagickRealType
Magick.xs:192: error: expected specifier-qualifier-list before ImageInfo
Magick.xs:214: error: MagickNoiseOptions undeclared here (not in a function)
Magick.xs:214: warning: missing initializer

Solution4: Install the ImageMagick-devel package


Make sure the following ImageMagick related packages are present.
# rpm -qa | grep -i Image
ImageMagick-6.2.8.0-4.el5_1.1
ImageMagick-c++-devel-6.2.8.0-4.el5_1.1
ImageMagick-devel-6.2.8.0-4.el5_1.1
ImageMagick-c++-6.2.8.0-4.el5_1.1
ImageMagick-perl-6.2.8.0-4.el5_1.1

In my case, ImageMagick-devel was missing, so I installed it as shown below. After that, the Image::Magick perl module got installed successfully.
# yum install ImageMagick-devel
# /usr/bin/perl install-module.pl Image::Magick

Issue5: SOAP::Lite failed to install


SOAP::Lite module failed to install with Cannot locate version.pm in @INC message as shown below.
# /usr/bin/perl install-module.pl SOAP::Lite
Failed test 'use SOAP::Lite;' at t/SOAP/Data.t line 5.
Tried to use 'SOAP::Lite'.
Error: Can't locate version.pm in @INC

Solution5: Install version.pm required for SOAP::Lite


Install version.pm as shown below. After this, SOAP::Lite got installed without any issue.
# perl -MCPAN -e 'install version'
# /usr/bin/perl install-module.pl SOAP::Lite

Issue6 (and Solution6): mod_perl was missing


Don't install mod_perl using /usr/bin/perl install-module.pl mod_perl2. Instead, use yum to install mod_perl as shown below.
# yum install mod_perl

Issue7: Apache start failed


Starting apache failed with Cannot locate Template/Config.pm in @INC error message.
# service httpd restart
Stopping httpd:                          [ OK ]
Starting httpd: Syntax error on line 994 of /etc/httpd/conf/httpd.conf:
Can't locate Template/Config.pm in @INC

Solution7: Install Template-Toolkit as shown below


Install Template-Toolkit to fix the above apache error message.
# cpan
cpan> i /Template-Toolkit/
Distribution A/AB/ABEL/Eidolon-Driver-Template-Toolkit-0.01.tar.gz
Distribution A/AB/ABW/Template-Toolkit-1.07.tar.gz
Distribution A/AB/ABW/Template-Toolkit-2.22.tar.gz
Distribution I/IN/INGY/Template-Toolkit-Simple-0.03.tar.gz
4 items found

cpan> install A/AB/ABW/Template-Toolkit-2.22.tar.gz

Issue8: Apache start failed again


Starting Apache failed with a "Can't locate DateTime/Locale.pm in @INC" error message.
# service httpd restart
Stopping httpd:                                 [  OK  ]
Starting httpd: Syntax error on line 994 of /etc/httpd/conf/httpd.conf:
Can't locate DateTime/Locale.pm in @INC

Solution8: Install DateTime/Locale.pm as shown below


Install DateTime/Locale.pm to fix the above Apache error message.
# cpan
cpan> install DateTime::Locale

Also, if you see a Digest/SHA.pm issue in your Apache error_log, install Digest::SHA as shown below.

# tail -f /etc/httpd/logs/error_log
Can't locate Digest/SHA.pm in @INC (@INC contains:

# cpan
cpan> install Digest::SHA

25. Rpm, deb, depot and msi packages: This article explains how to view and extract files from various package types used by different Linux / UNIX distributions.

How to View and Extract Files from rpm, deb, depot and msi Packages
by Sasikala on April 19, 2010

Question: How do I view or extract the files that are bundled inside the packages of various operating systems? For example, I would like to know how to view (and extract) the content of a rpm, deb, depot, or msi file.

Answer: You can use tools like rpm, rpm2cpio, ar, dpkg, tar, swlist, swcopy and lessmsi as explained below.
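The tool-to-format mapping above can be sketched as a small shell helper. The function name pkg_list_cmd is hypothetical (not part of any of these tools), and it only prints the listing command you would run for each package type:

```shell
# Print the listing command for a given package file, based on its extension.
# pkg_list_cmd is a hypothetical helper name used only for this sketch.
pkg_list_cmd() {
    case "$1" in
        *.rpm)   echo "rpm -qlp $1" ;;     # Red Hat / CentOS / Fedora
        *.deb)   echo "dpkg -c $1" ;;      # Debian / Ubuntu
        *.depot) echo "tar -tf $1" ;;      # HP-UX
        *.msi)   echo "lessmsi $1" ;;      # Windows (run lessmsi there)
        *)       echo "unknown package type: $1" >&2; return 1 ;;
    esac
}

pkg_list_cmd ovpc-2.1.10.rpm
# prints: rpm -qlp ovpc-2.1.10.rpm
```

The sections below walk through each of these commands in detail.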

1. RPM package in Redhat / CentOS / Fedora


Listing the files from a RPM package using rpm -qlp
RPM stands for Red Hat package manager. The following example shows how to view the files available in a RPM package without extracting or installing the rpm package.
$ rpm -qlp ovpc-2.1.10.rpm
/usr/src/ovpc/-5.10.0
/usr/src/ovpc/ovpc-2.1.10/examples
/usr/src/ovpc/ovpc-2.1.10/examples/bin
/usr/src/ovpc/ovpc-2.1.10/examples/lib
/usr/src/ovpc/ovpc-2.1.10/examples/test
.
.
.
/usr/src/ovpc/ovpc-2.1.10/pcs

Explanation of the command: rpm -qlp ovpc-2.1.10.rpm
  rpm : rpm command
  q   : query the rpm file
  l   : list the files in the package
  p   : specify the package name

Extracting the files from a RPM package using rpm2cpio and cpio
RPM is a sort of a cpio archive. First, convert the rpm to cpio archive using rpm2cpio command. Next, use cpio command to extract the files from the archive as shown below.
$ rpm2cpio ovpc-2.1.10.rpm | cpio -idmv
./usr/src/ovpc/-5.10.0
./usr/src/ovpc/ovpc-2.1.10/examples
./usr/src/ovpc/ovpc-2.1.10/examples/bin
./usr/src/ovpc/ovpc-2.1.10/examples/lib
./usr/src/ovpc/ovpc-2.1.10/examples/test
.
.
.
./usr/src/ovpc/ovpc-2.1.10/pcs

$ ls .
usr

2. Deb package in Debian


deb is the extension of Debian software package format. *.deb is also used in other distributions that are based on Debian. (for example: Ubuntu uses *.deb)

Listing the files from a debian package using dpkg -c


dpkg is the package manager for debian. So using dpkg command you can list and extract the packages, as shown below. To view the content of *.deb file:
$ dpkg -c ovpc_1.06.94-3_i386.deb
dr-xr-xr-x root/root        0 2010-02-25 10:54 ./
dr-xr-xr-x root/root        0 2010-02-25 10:54 ./ovpc/
dr-xr-xr-x root/root        0 2010-02-25 10:54 ./ovpc/pkg/
dr-xr-xr-x root/root        0 2010-02-25 10:54 ./ovpc/pkg/lib/
dr-xr-xr-x root/root        0 2010-02-25 10:48 ./ovpc/pkg/lib/header/
-r-xr-xr-x root/root      130 2009-10-29 17:06 ./ovpc/pkg/lib/header/libov.so
.
.
.
-r-xr-xr-x root/root      131 2009-10-29 17:06 ./ovpc/pkg/etc/conf
dr-xr-xr-x root/root        0 2010-02-25 10:54 ./ovpc/pkg/etc/conf/log.conf

Extracting the files from a debian package using dpkg -x


Use dpkg -x to extract the files from a deb package as shown below.
$ dpkg -x ovpc_1.06.94-3_i386.deb /tmp/ov

$ ls /tmp/ov
ovpc

DEB files are ar archives, which always contain three files: debian-binary, control.tar.gz, and data.tar.gz. We can use the ar command and tar command to extract and view the files from the deb package, as shown below. First, extract the content of the *.deb archive file using the ar command.
$ ar -vx ovpc_1.06.94-3_i386.deb
x - debian-binary
x - control.tar.gz
x - data.tar.gz

Next, extract the content of data.tar.gz file as shown below.


$ tar -xvzf data.tar.gz
./
./ovpc/
./ovpc/pkg/
./ovpc/pkg/lib/
./ovpc/pkg/lib/header/
./ovpc/pkg/lib/header/libov.so
.
.
./ovpc/pkg/etc/conf
./ovpc/pkg/etc/conf/log.con
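If you only need the payload, ar can also stream a single member to stdout ("ar p"), so you never write debian-binary or control.tar.gz to disk. The sketch below first builds a tiny synthetic .deb (a real package would work the same way) so the pipe can be demonstrated end to end:

```shell
# Build a tiny synthetic .deb -- an ar archive holding debian-binary,
# control.tar.gz and data.tar.gz -- purely to demonstrate the pipe.
tmp=$(mktemp -d) && cd "$tmp"
mkdir payload && echo hello > payload/file.txt
tar -czf data.tar.gz payload
echo '2.0' > debian-binary
tar -czf control.tar.gz debian-binary
ar rc demo.deb debian-binary control.tar.gz data.tar.gz
rm -rf payload

# Stream just the data.tar.gz member out of the deb and unpack it in one
# pipe, without extracting the other members to disk first.
ar p demo.deb data.tar.gz | tar -xz
cat payload/file.txt
# prints: hello
```

On a real Debian package you would simply run `ar p package.deb data.tar.gz | tar -xz` in the target directory.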

3. Depot package in HP-UX


Listing the files from a depot package using tar and swlist
A DEPOT file is an HP-UX Software Distributor catalog depot file. An HP-UX depot is just a tar file with some additional information, as shown below.
$ tar -tf ovcsw_3672.depot
OcswServer/MGR/etc/
OcswServer/MGR/etc/opt/
OcswServer/MGR/etc/opt/OV/
OcswServer/MGR/etc/opt/OV/share/
OcswServer/MGR/etc/opt/OV/share/conf/
OcswServer/MGR/etc/opt/OV/share/conf/OpC/
OcswServer/MGR/etc/opt/OV/share/conf/OpC/opcctrlovw/

swlist is an HP-UX command which is used to display information about software. View the content of the depot package using the swlist command as shown below.
$ swlist -l file -s /root/ovcsw_3672.depot
# Initializing...
# Contacting target "osgsw"...
# Target: osgsw:/root/ovcsw_3672.depot
#
# OcswServer        8.50.000  Ocsw Server product
# OcswServer.MGR    9.00.140  Ocs Server Ovw
  /etc
  /etc/opt
  /etc/opt/OV
  /etc/opt/OV/share
  /etc/opt/OV/share/conf
  /etc/opt/OV/share/conf/OpC

Extracting the files from a depot package using swcopy


The swcopy command copies or merges software_selections from a software source to one or more software depot target_selections. Using the uncompress_files option of swcopy, you can extract the files from a depot software package.
$ swcopy -x uncompress_files=true -x enforce_dependencies=false \
    -s /root/ovcsw_3672.depot \* @ /root/extracted/

$ ls /root/extracted
MGR  catalog  osmsw.log

Since depot files are tar files, you can also extract them using normal tar extraction as shown below.
$ tar -xvf filename

4. MSI in Windows
4. MSI in Windows
Microsoft Installer is an engine for the installation, maintenance, and removal of software on Windows systems.

Listing the files from a MSI package using lessmsi


The utility lessmsi.exe is used to view the files inside an msi package without installing it. The same utility is also used to extract the msi package. Select the msi file whose content you want to view; lessmsi will list the files available in the msi.

Extracting the files from a MSI package using msiexec


The Windows Installer tool (Msiexec.exe) is used to extract the files from an MSI package. It can open an MSI package in administrative installation mode, where it extracts the files without performing the install, as shown below.
C:\>msiexec /a ovcsw_3672.msi /qb TARGETDIR="C:\ovcsw"

26. Backup using rsnapshot: You can back up either a local host or a remote host using the rsnapshot rsync utility. rsnapshot uses a combination of rsync and hard links to maintain full and incremental backups. Once you've set up and configured rsnapshot, there is absolutely no maintenance involved; rsnapshot will automatically take care of deleting and rotating the old backups.

How To Backup Remote Linux Host Using rsnapshot rsync Utility


by Ramesh Natarajan on September 16, 2009

In the previous article, we reviewed how to back up a local unix host using the rsnapshot utility. In this article, let us review how to back up a remote Linux host using this utility.

1. Setup Key Based Authentication


As we've explained earlier, set up key based authentication as explained either in the ssh-keygen and ssh-copy-id article or the openSSH article.
[root@local-host]# ssh-keygen
[root@local-host]# ssh-copy-id -i ~/.ssh/id_rsa.pub remote-host

2. Verify the password-less login between servers


Login to the remote-host from local-host without entering the password.
[root@local-host]# ssh remote-host
Last login: Sun Mar 15 16:45:40 2009 from local-host
[root@remote-host]#

3. Configure rsnapshot and specify Remote Host Backup Directories


Define your remote-host backup directories in /etc/rsnapshot.conf as shown below. In this example:

root@remote-host:/etc/ : source directory on the remote-host that should be backed up (the remote backup source).
remote-host-backup/ : destination directory where the backup of the remote-host will be stored. Please note that this directory will be created under the local-host /.snapshots/{interval.n}/ directory as shown in the last step.
# vi /etc/rsnapshot.conf
backup	root@remote-host:/etc/	remote-host-backup/	exclude=mtab,exclude=core
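A common pitfall here is that rsnapshot.conf fields must be separated by literal tabs, not spaces. A quick awk sketch can flag offending backup lines; the sample file below is created just for the demonstration:

```shell
# A sample backup line using spaces between fields (wrong -- rsnapshot
# requires literal tab characters between configuration fields).
printf 'backup root@remote-host:/etc/ remote-host-backup/\n' > /tmp/rsnapshot-check.conf

# Print every "backup" line that contains no tab at all.
awk '/^backup/ && $0 !~ /\t/ { print NR ": fields must be tab-separated" }' /tmp/rsnapshot-check.conf
# prints: 1: fields must be tab-separated
```

Run the same awk program against your real /etc/rsnapshot.conf; no output means every backup line uses tabs.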

4. Test rsnapshot Configuration


Perform a configuration test to make sure rsnapshot is set up properly and ready to perform the Linux rsync backup.

# rsnapshot configtest
Syntax OK

5. Add Crontab Entry for rsnapshot


Once you've verified that the rsync hourly and daily backup configurations are set up properly in rsnapshot, it is time to set this puppy up in the crontab as shown below.
# crontab -e
0 */4 * * * /usr/local/bin/rsnapshot hourly
30 23 * * * /usr/local/bin/rsnapshot daily

Check out Linux crontab examples article to understand how to setup and configure crontab.

6. Manually test the remote-host backup once


[root@local-host]# /usr/local/bin/rsnapshot hourly

[root@local-host]# ls -l /.snapshots/hourly.0/
total 8
drwxr-xr-x 3 root root 4096 Jul 22 04:19 remote-host-backup
drwxr-xr-x 3 root root 4096 Jul 13 05:07 localhost

[root@local-host]# ls -l /.snapshots/hourly.0/remote-host-backup/
total 4
drwxr-xr-x 93 root root 4096 Jul 22 03:36 etc

Troubleshooting Tips
Problem: rsnapshot failed with ERROR: /usr/bin/rsync returned 20 as shown below.
[root@local-host]# /usr/local/bin/rsnapshot hourly
rsync error: received SIGINT, SIGTERM, or SIGHUP (code 20) at rsync.c(260) [receiver=2.6.8]
----------------------------------------------------------------------------
rsnapshot encountered an error! The program was invoked with these options:
/usr/local/bin/rsnapshot hourly
----------------------------------------------------------------------------
ERROR: /usr/bin/rsync returned 20 while processing copyman@192.168.2.2:/etc/

Solution: This typically happens when the user who is performing the rsnapshot (rsync) doesn't have access to the remote directory that you are trying to back up. Make sure the remote host backup directory has the appropriate permission for the user who is trying to execute rsnapshot.

27. Create Linux user: This article explains how to create users with default configuration, create users with custom configuration, create users interactively, and create users in bulk.

The Ultimate Guide to Create Users in Linux / Unix


by Ramesh Natarajan on June 24, 2009

Creating users in a Linux or Unix system is a routine task for system administrators. Sometimes you may create a single user with the default configuration, create a single user with a custom configuration, or create several users at the same time using some bulk user creation method. In this article, let us review how to create Linux users in 4 different methods using the useradd, adduser and newusers commands with practical examples.

Method 1: Linux useradd Command Create User With Default Configurations


This is a fundamental low level tool for user creation. To create user with default configurations use useradd as shown below.
Syntax: # useradd LOGIN-NAME

While creating users as mentioned above, all the default options will be taken except group id. To view the default options give the following command with the option -D.
$ useradd -D
GROUP=1001
HOME=/home
INACTIVE=-1
EXPIRE=
SHELL=/bin/sh
SKEL=/etc/skel
CREATE_MAIL_SPOOL=no

GROUP: This is the only option which will not be taken as default. If you don't specify the -n option, a group with the same name as the user will be created and the user will be added to that group. To avoid that, and to make the user a member of the default group, you need to give the option -n.
HOME: This is the default path prefix for the home directory. The home directory will be created as /home/USERNAME.
INACTIVE: -1 by default disables the feature of disabling the account once the user password has expired. To change this behavior, give a positive number, which means that if the password stays expired for the given number of days, the user account will be disabled.
EXPIRE: The date on which the user account will be disabled.
SHELL: User's login shell.
SKEL: Contents of the skel directory will be copied to the user's home directory.
CREATE_MAIL_SPOOL: According to the value, creates or does not create the mail spool.

Example 1: Creating a user with all the default options, and with his own group.
The following example creates user ramesh with group ramesh. Use the Linux passwd command to change the password for the user immediately after user creation.
# useradd ramesh

# passwd ramesh
Changing password for user ramesh.
New UNIX password:
Retype new UNIX password:
passwd: all authentication tokens updated successfully.

# grep ramesh /etc/passwd
ramesh:x:500:500::/home/ramesh:/bin/bash

# grep ramesh /etc/group
ramesh:x:500:
[Note: default useradd command created ramesh as username and group]

Example 2: Creating a user with all the default options, and with the default group.
# useradd -n sathiya

# grep sathiya /etc/passwd
sathiya:x:511:100::/home/sathiya:/bin/bash

# grep sathiya /etc/group
[Note: No rows returned, as group sathiya was not created]

# grep 100 /etc/group
users:x:100:
[Note: useradd -n command created user sathiya with default group id 100]

# passwd sathiya
Changing password for user sathiya.
New UNIX password:
Retype new UNIX password:
passwd: all authentication tokens updated successfully.
[Note: Always set the password immediately after user creation]

Example 3: Editing the default options used by useradd.


The following example shows how to change the default shell from /bin/bash to /bin/ksh during user creation.
Syntax: # useradd -D --shell=<SHELLNAME>

# useradd -D
GROUP=100
HOME=/home
INACTIVE=-1
EXPIRE=
SHELL=/bin/bash
SKEL=/etc/skel
[Note: The default shell is /bin/bash]

# useradd -D -s /bin/ksh

# useradd -D
GROUP=100
HOME=/home
INACTIVE=-1
EXPIRE=
SHELL=/bin/ksh
SKEL=/etc/skel
[Note: Now the default shell changed to /bin/ksh]

# adduser priya

# grep priya /etc/passwd
priya:x:512:512::/home/priya:/bin/ksh
[Note: New users are getting created with /bin/ksh]

# useradd -D -s /bin/bash
[Note: Set it back to /bin/bash, as the above is only for testing purpose]

Method 2: Linux useradd Command Create Users With Custom Configurations


Instead of accepting the default values (for example, group, shell etc.) that is given by the useradd command as shown in the above method, you can specify custom values in the command line as parameters to the useradd command.
Syntax: # useradd -s <SHELL> -m -d <HomeDir> -g <Group> UserName

-s SHELL   : Login shell for the user.
-m         : Create user's home directory if it does not exist.
-d HomeDir : Home directory of the user.
-g Group   : Group name or number of the user.
UserName   : Login id of the user.

Example 4: Create Linux User with Custom Configurations Using useradd Command
The following example creates an account (lebron) with home directory /home/king, default shell as /bin/csh and with comment LeBron James.
# useradd -s /bin/csh -m -d /home/king -c "LeBron James" -g root lebron

# grep lebron /etc/passwd
lebron:x:513:0:LeBron James:/home/king:/bin/csh

Note: You can give the password using the -p option, which expects an encrypted password. Or you can use the passwd command to change the password of the user.

Method 3: Linux adduser Command Create Users Interactively


adduser is a friendlier front-end to the low-level useradd. By default it chooses the Debian policy format for UID and GID. A very simple way of creating a user interactively on the command line is the adduser command.

Syntax: # adduser USERNAME

Example 5: Creating a User Interactively With adduser Command


# adduser spidey
Adding user `spidey' ...
Adding new group `spidey' (1007) ...
Adding new user `spidey' (1007) with group `spidey' ...
Creating home directory `/home/spidey' ...
Copying files from `/etc/skel' ...
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Changing the user information for spidey
Enter the new value, or press ENTER for the default
        Full Name []: Peter Parker
        Room Number []:
        Work Phone []:
        Home Phone []:
        Other []:
Is the information correct? [y/N] y

Method 4: Linux newusers Command Creating bulk users


Sometimes you may want to create multiple users at the same time. Using any one of the above 3 methods for bulk user creation can be very tedious and time consuming. Fortunately, Linux offers a way to create users in bulk using the newusers command. This can also be executed in batch mode, as it does not ask for any input.
# newusers FILENAME

The input file format is the same as that of the password file.


loginname:password:uid:gid:comment:home_dir:shell

Example 6: Creating Large Number of Users Using newusers Command


If Simpson family decides to join your organization and need access to your Linux server, you can create account for all of them together using newusers command as shown below.
# cat homer-family.txt
homer:HcZ600a9:1008:1000:Homer Simpson:/home/homer:/bin/bash
marge:1enz733N:1009:1000:Marge Simpson:/home/marge:/bin/csh
bart:1y5eJr8K:1010:1000:Bart Simpson:/home/bart:/bin/ksh
lisa:VGz638i9:1011:1000:Lisa Simpson:/home/lisa:/bin/sh
maggie:5lj3YGQo:1012:1000:Maggie Simpson:/home/maggie:/bin/bash

Note: While specifying passwords for users, please follow the password best practices including the 8-4 password rule that we discussed a while back. Now create accounts for Simpsons family together using the newusers command as shown below.
# newusers homer-family.txt
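For larger batches, the input file itself can be generated with a short loop. The names, starting UID, and placeholder password below are assumptions for illustration only (substitute real values and passwords that follow your password policy):

```shell
# Generate a newusers-format file (login:passwd:uid:gid:comment:home:shell)
# for a list of names. The starting UID (1008), GID (1000) and the
# placeholder password "ChangeMe99" are illustrative values.
uid=1008
for name in homer marge bart lisa maggie; do
    echo "${name}:ChangeMe99:${uid}:1000:${name} account:/home/${name}:/bin/bash"
    uid=$((uid + 1))
done > family.txt

cat family.txt
```

The resulting family.txt can then be fed straight to `newusers family.txt`.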

28. Mount and view ISO file: ISO files are typically used to distribute operating systems. Most of the Linux operating systems that you download will be in ISO format. This explains how to view and mount any ISO file both as a regular user and as the root user.

How To Mount and View ISO File as Root and Regular User in Linux
by Ramesh Natarajan on June 22, 2009

ISO stands for International Organization for Standardization, which has defined the format for a disk image. In simple terms, an iso file is a disk image. ISO files are typically used to distribute operating systems; most of the Linux operating systems that you download will be in ISO format. If you have downloaded a Linux ISO file, you typically burn it onto a CD or DVD as an ISO image. Once you've burned the ISO image to a CD or DVD, you can boot the system to install the Linux OS. But sometimes you may just want to mount the ISO file and view its content without burning it to CD or DVD. In this article, let us review how to mount and view an iso file as root and as a regular user in the Linux operating system.

1. How to mount iso files without writing them to a CD/DVD?


If you have downloaded a *.iso file from a website (for example, any Linux OS distribution), you can view the content of the iso file without writing it to a CD or DVD, as explained below using mount -o loop. Please note that a loop device is a pseudo-device which makes an iso file accessible to the user as a block device.
Syntax: # mount ISOFILE MOUNT-POINT -o loop

$ su
# mkdir /tmp/mnt
# mount -o loop /downloads/ubuntu-9.04-desktop-i386.iso /tmp/mnt
# cd /tmp/mnt
# ls -l

For mounting you need to be logged in as root or you should have sudo permission. Read below to find out how to mount iso file as regular non-root user.

2. How to mount or view an iso file as a non-root user?


A non-root user can also mount a file, even without sudo permission, using midnight commander. Actually, it is not really mounting the file, but you can view the iso file content just like viewing any other file. Refer to our previous article that explains the Linux mc midnight commander.

Steps to view iso file in midnight commander:


1. Open midnight commander (mc).
2. Navigate to the path where the ISO file exists.
3. Click on the iso file; it will enter into the iso file as if it were a normal directory, and you will see the content of the file.
4. To view a file inside the iso, press <F3> when your cursor is on the file.

3. How to solve the iso is not a block device error?


While mounting an iso file you may get the following error:
mount: file.iso is not a block device (maybe try `-o loop'?)

Problem:
# mount /downloads/Fedora-11-i386-DVD.iso /tmp/mnt
mount: /downloads/Fedora-11-i386-DVD.iso is not a block device (maybe try `-o loop'?)

Solution: As it is suggested by the mount command, use the -o loop as the option.
# mount /downloads/Fedora-11-i386-DVD.iso /tmp/mnt -o loop

4. How to update the content of an iso file?


The content of an ISO file cannot be updated once the ISO file is created. The only way to do it, as of now, is:

Steps to update the iso file.


1. Extract all the files from the iso.
2. Update the content, i.e. add or remove any individual files inside the iso file.
3. Create another iso with the updated files.

5. Extracting files from the iso file as root user


Mount the iso file as root user, and navigate to the directory to copy the required files from iso.

Steps to mount and extract the iso file as root user.


1. Mount the iso file as root user.
# mount /downloads/debian-501-i386-DVD-1.iso /tmp/mnt -o loop

2. Navigate to the mounted directory.


# cd /tmp/mnt

3. Copy the required files.

# cp some-file-inside-iso /home/test

6. Extracting files from the iso file as a normal user


View the content of the file as non root user in midnight commander, and then copy it using midnight commander commands or using shell commands.

Steps to extract the content from iso file as non root user.
Steps to extract the content from iso file as non root user.
1. Open mc.
2. Navigate to the directory where the iso file is located.
3. Select the iso file and press enter to view the content of the iso file.
4. When you are inside the iso file, you will be able to view its contents. To copy a particular file from the iso file, you can use shell commands at the shell prompt as:
$ cp some-file-inside-iso /tmp/mnt

5. You can also do this copy using the mc commands.

29. Manage password expiration and aging: The Linux chage command can be used to perform several practical password aging activities, including how to force users to change their password.

7 Examples to Manage Linux Password Expiration and Aging Using chage


by Dhineshkumar Manikannan on April 23, 2009

Photo Courtesy: mattblaze

Best practice recommends that users change their passwords at a regular interval. But typically developers and other users of a Linux system won't change their password unless they are forced to. It's the system administrator's responsibility to find a way to force developers to change their password. Forcing users to change their password with a gun to their head is not an option, though some security conscious sysadmins may even be tempted to do that. In this article, let us review how you can use the Linux chage command to perform several practical password aging activities, including how to force users to change their password. On Debian, you can install chage by executing the following command:
# apt-get install chage

Note: It is very easy to make a typo on this command. Instead of chage you may end up typing it as change. Please remember chage stands for "change age", i.e. the chage command abbreviation is similar to chmod, chown, etc.

1. List the password and its related details for a user


As shown below, any user can execute the chage command for himself to identify when his password is about to expire.
Syntax: chage --list username (or) chage -l username

$ chage --list dhinesh
Last password change                               : Apr 01, 2009
Password expires                                   : never
Password inactive                                  : never
Account expires                                    : never
Minimum number of days between password change     : 0
Maximum number of days between password change     : 99999
Number of days of warning before password expires  : 7

If user dhinesh tries to execute the same command for user ramesh, hell get the following permission denied message.
$ chage --list ramesh chage: permission denied

Note: However, the root user can execute the chage command for any user account. When user dhinesh changes his password on Apr 23, 2009, it will update the "Last password change" value as shown below. Please refer to our earlier article, Best Practices and Ultimate Guide For Creating Super Strong Password, which will help you to follow the best practices while changing the password for your account.
$ date
Thu Apr 23 00:15:20 PDT 2009

$ passwd dhinesh
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully

$ chage --list dhinesh
Last password change                               : Apr 23, 2009
Password expires                                   : never
Password inactive                                  : never
Account expires                                    : never
Minimum number of days between password change     : 0
Maximum number of days between password change     : 99999
Number of days of warning before password expires  : 7

2. Set Password Expiry Date for an user using chage option -M


The root user (system administrator) can set the password expiry date for any user. In the following example, user dhinesh's password is set to expire 10 days from the last password change. Please note that option -M will update both the "Password expires" and "Maximum number of days between password change" entries, as shown below.
Syntax: # chage -M number-of-days username

# chage -M 10 dhinesh

# chage --list dhinesh
Last password change                               : Apr 23, 2009
Password expires                                   : May 03, 2009
Password inactive                                  : never
Account expires                                    : never
Minimum number of days between password change     : 0
Maximum number of days between password change     : 10
Number of days of warning before password expires  : 7
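The new "Password expires" value is simply the last change date plus the -M value. With GNU date (an assumption here; BSD date uses a different -v syntax) you can verify the arithmetic:

```shell
# "Password expires" = last change (2009-04-23) plus the -M value (10 days).
LC_ALL=C date -d "2009-04-23 + 10 days" +"%b %d, %Y"
# prints: May 03, 2009
```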

3. Password Expiry Warning message during login


By default, the number of days of warning before password expiry is set to 7. So, in the above example, when the user dhinesh tries to login on Apr 30, 2009, he'll get the following message.
$ ssh dhinesh@testingserver
dhinesh@testingserver's password:
Warning: your password will expire in 3 days
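The "3 days" in that warning is just the difference between the login date and the expiry date. A quick sketch with GNU date (an assumption), using epoch seconds and the dates from the example above:

```shell
# Days between the login date (Apr 30, 2009) and the expiry (May 03, 2009).
expiry=$(date -u -d "2009-05-03" +%s)
login=$(date -u -d "2009-04-30" +%s)
echo $(( (expiry - login) / 86400 ))
# prints: 3
```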

4. User Forced to Change Password after Expiry Date


If the password expiry date is reached and the user doesn't change their password, the system will force the user to change the password during login as shown below.
$ ssh dhinesh@testingserver
dhinesh@testingserver's password:
You are required to change your password immediately (password aged)
WARNING: Your password has expired.
You must change your password now and login again!
Changing password for dhinesh
(current) UNIX password:
Enter new UNIX password:
Retype new UNIX password:

5. Set the Account Expiry Date for a User


You can also use the chage command to set the account expiry date, using option -E as shown below. The date given is in YYYY-MM-DD format. This will update the "Account expires" value as shown below.
# chage -E "2009-05-31" dhinesh

# chage -l dhinesh
Last password change                               : Apr 23, 2009
Password expires                                   : May 03, 2009
Password inactive                                  : never
Account expires                                    : May 31, 2009
Minimum number of days between password change     : 0
Maximum number of days between password change     : 10
Number of days of warning before password expires  : 7

6. Force the user account to be locked after X number of inactivity days


Typically, if the password is expired, users are forced to change it during their next login. You can also set an additional condition: after the password has expired, if the user never tried to login for 10 days, you can automatically lock the account using option -I as shown below. In this example, the "Password inactive" date is set to 10 days from the "Password expires" value. Once an account is locked, only system administrators will be able to unlock it.
# chage -I 10 dhinesh

# chage -l dhinesh
Last password change                               : Apr 23, 2009
Password expires                                   : May 03, 2009
Password inactive                                  : May 13, 2009
Account expires                                    : May 31, 2009
Minimum number of days between password change     : 0
Maximum number of days between password change     : 10
Number of days of warning before password expires  : 7

7. How to disable password aging for a user account


To turn off password expiration for a user account, set the following:

-m 0 will set the minimum number of days between password change to 0
-M 99999 will set the maximum number of days between password change to 99999
-I -1 (minus one) will set "Password inactive" to never
-E -1 (minus one) will set "Account expires" to never

# chage -m 0 -M 99999 -I -1 -E -1 dhinesh

# chage --list dhinesh
Last password change                               : Apr 23, 2009
Password expires                                   : never
Password inactive                                  : never
Account expires                                    : never
Minimum number of days between password change     : 0
Maximum number of days between password change     : 99999
Number of days of warning before password expires  : 7

This article was written by Dhineshkumar Manikannan. He is working at bk Systems (p) Ltd, and is interested in contributing to the open source. The Geek Stuff welcomes your tips and guest articles.

30. ifconfig examples: The interface configurator command ifconfig is used to initialize the network interface and to enable or disable interfaces, as shown in these 7 examples.

Ifconfig: 7 Examples To Configure Network Interface


by Ramesh Natarajan on March 9, 2009

Photo courtesy of new1mproved

This article is written by Lakshmanan G

The ifconfig command is used to configure network interfaces; ifconfig stands for "interface configurator". It is widely used to initialize the network interface and to enable or disable interfaces. In this article, let us review 7 common usages of the ifconfig command.

1. View Network Settings of an Ethernet Adapter


ifconfig, when invoked with no arguments, will display all the details of currently active interfaces. If you give an interface name as an argument, the details of that specific interface will be displayed.
# ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 00:2D:32:3E:39:3B
          inet addr:192.168.2.2  Bcast:192.168.2.255  Mask:255.255.255.0
          inet6 addr: fe80::21d:92ff:fede:499b/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:977839669 errors:0 dropped:1990 overruns:0 frame:0
          TX packets:1116825094 errors:8 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2694625909 (2.5 GiB)  TX bytes:4106931617 (3.8 GiB)
          Interrupt:185 Base address:0xdc00
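When scripting, you often want just the IPv4 address out of that output. A small awk sketch over the classic net-tools format; the sample line is copied from the example above so this runs without a live interface:

```shell
# Pull the IPv4 address out of an "inet addr:..." line (net-tools format).
# $2 of that line is "addr:192.168.2.2"; strip the "addr:" prefix.
sample='          inet addr:192.168.2.2  Bcast:192.168.2.255  Mask:255.255.255.0'
echo "$sample" | awk '/inet addr:/ { sub(/^addr:/, "", $2); print $2 }'
# prints: 192.168.2.2
```

Against a live system you would pipe `ifconfig eth0` into the same awk program.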

2. Display Details of All interfaces Including Disabled Interfaces


# ifconfig -a

3. Disable an Interface
# ifconfig eth0 down

4. Enable an Interface
# ifconfig eth0 up

5. Assign ip-address to an Interface


Assign 192.168.2.2 as the IP address for the interface eth0.
# ifconfig eth0 192.168.2.2

Change Subnet mask of the interface eth0.


# ifconfig eth0 netmask 255.255.255.0

Change Broadcast address of the interface eth0.


# ifconfig eth0 broadcast 192.168.2.255

Assign an ip-address, netmask and broadcast at the same time to interface eth0.
# ifconfig eth0 192.168.2.2 netmask 255.255.255.0 broadcast 192.168.2.255

6. Change MTU
This will change the Maximum transmission unit (MTU) to XX. MTU is the maximum number of octets the interface is able to handle in one transaction. For Ethernet the Maximum transmission unit by default is 1500.
# ifconfig eth0 mtu XX

7. Promiscuous mode
By default, when a network card receives a packet, it checks whether the packet belongs to itself; if not, the interface card normally drops the packet. But in promiscuous mode, the card doesn't drop the packet. Instead, it will accept all the packets that flow through the network card. Superuser privilege is required to set an interface in promiscuous mode. Most network monitoring tools use promiscuous mode to capture packets and analyze the network traffic. The following will put the interface in promiscuous mode.
# ifconfig eth0 promisc

The following will put the interface back in normal mode.


# ifconfig eth0 -promisc

This article was written by Lakshmanan G. He is working in bk Systems (p) Ltd, and interested in contributing to the open source. The Geek Stuff welcomes your tips and guest articles.

31. Oracle DB startup and shutdown: Every sysadmin should know some basic DBA operations. This explains how to shut down and start the Oracle database.

Oracle Database Startup and Shutdown Procedure


by Ramesh Natarajan on January 26, 2009
Object 153

Object 155

Object 154

Photo courtesy of Rob Shenk

For a DBA, starting up and shutting down an Oracle database is a routine and basic operation. Sometimes a Linux administrator or programmer may end up doing some basic DBA operations on a development database. So, it is important for non-DBAs to understand some basic database administration activities.

In this article, let us review how to start and stop an oracle database.

How To Startup Oracle Database


1. Login to the system with oracle username
A typical Oracle installation will have oracle as the username and dba as the group. On Linux, su to the oracle user as shown below.
$ su - oracle

2. Connect to oracle sysdba


Make sure ORACLE_SID and ORACLE_HOME are set properly as shown below.
$ env | grep ORA
ORACLE_SID=DEVDB
ORACLE_HOME=/u01/app/oracle/product/10.2.0

You can connect using either / as sysdba or an oracle account that has DBA privilege.
$ sqlplus '/ as sysdba'

SQL*Plus: Release 10.2.0.3.0 - Production on Sun Jan 18 11:11:28 2009

Copyright (c) 1982, 2006, Oracle. All Rights Reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
With the Partitioning and Data Mining options

SQL>

3. Start Oracle Database


The default SPFILE (server parameter file) is located under $ORACLE_HOME/dbs. Oracle uses this SPFILE during startup if you don't specify a PFILE. Oracle looks for the parameter file under $ORACLE_HOME/dbs in the following order, and uses the first one that exists:
1. spfile$ORACLE_SID.ora
2. spfile.ora
3. init$ORACLE_SID.ora
Type startup at the SQL command prompt to start up the database as shown below.
SQL> startup
ORACLE instance started.

Total System Global Area  812529152 bytes
Fixed Size                  2264280 bytes
Variable Size             960781800 bytes
Database Buffers           54654432 bytes
Redo Buffers                3498640 bytes
Database mounted.
Database opened.
SQL>

If you want to startup Oracle with PFILE, pass it as a parameter as shown below.
SQL> STARTUP PFILE=/u01/app/oracle/product/10.2.0/dbs/init.ora
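The parameter-file search order can be sketched in shell. This is an illustration of the lookup logic only; Oracle performs this search internally, and the temp directory here stands in for $ORACLE_HOME/dbs:

```shell
# Simulate Oracle's startup parameter-file search under $ORACLE_HOME/dbs:
# spfile$ORACLE_SID.ora first, then spfile.ora, then init$ORACLE_SID.ora.
ORACLE_SID=DEVDB
dbs=$(mktemp -d)              # stand-in for $ORACLE_HOME/dbs
touch "$dbs/initDEVDB.ora"    # simulate: only a PFILE exists

chosen=""
for f in "spfile$ORACLE_SID.ora" "spfile.ora" "init$ORACLE_SID.ora"; do
  if [ -f "$dbs/$f" ]; then
    chosen=$f                 # first match wins
    break
  fi
done
echo "startup would use: $chosen"

rm -rf "$dbs"
```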

How To Shutdown Oracle Database


The following three methods are available to shut down the Oracle database:
1. Normal Shutdown
2. Shutdown Immediate
3. Shutdown Abort

1. Normal Shutdown
During a normal shutdown, Oracle waits for all active users to disconnect their sessions before the database is shut down. As the parameter name (normal) suggests, use this option to shut down the database under normal conditions.
SQL> shutdown
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL>

2. Shutdown Immediate
During an immediate shutdown, Oracle rolls back active transactions and disconnects all active users before the database is shut down. Use this option when there is a problem with your database and you don't have enough time to ask users to log off.
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL>

3. Shutdown Abort
During a shutdown abort, all user sessions are terminated immediately before the database is shut down, and uncommitted transactions are not rolled back. Use this option only in emergency situations, when shutdown and shutdown immediate don't work.
$ sqlplus '/ as sysdba'

SQL*Plus: Release 10.2.0.3.0 - Production on Sun Jan 18 11:11:33 2009

Copyright (c) 1982, 2006, Oracle. All Rights Reserved.

Connected to an idle instance.

SQL> shutdown abort
ORACLE instance shut down.
SQL>

32.PostgreSQL install and configure: Like MySQL, PostgreSQL is a famous, feature-packed, free and open source database. This is a jumpstart guide to installing and configuring PostgreSQL from source on Linux.

9 Steps to Install and Configure PostgreSQL from Source on Linux


by Ramesh Natarajan on April 9, 2009
Object 156

Object 158

Object 157

Like MySQL, PostgreSQL is a famous, feature-packed, free and open source database. Earlier we've discussed several installations, including LAMP stack installation, Apache2 installation from source, PHP5 installation from source and MySQL installation. In this article, let us review how to install the PostgreSQL database on Linux from source code.

Step 1: Download postgreSQL source code


From the postgreSQL download site, choose the mirror site that is located in your country.
# wget http://wwwmaster.postgresql.org/redir/198/f/source/v8.3.7/postgresql-8.3.7.tar.gz

Step 2: Install postgreSQL


# tar xvfz postgresql-8.3.7.tar.gz
# cd postgresql-8.3.7
# ./configure
checking for sgmlspl... no
configure: creating ./config.status
config.status: creating GNUmakefile
config.status: creating src/Makefile.global
config.status: creating src/include/pg_config.h
config.status: creating src/interfaces/ecpg/include/ecpg_config.h
config.status: linking ./src/backend/port/tas/dummy.s to src/backend/port/tas.s
config.status: linking ./src/backend/port/dynloader/linux.c to src/backend/port/dynloader.c
config.status: linking ./src/backend/port/sysv_sema.c to src/backend/port/pg_sema.c
config.status: linking ./src/backend/port/sysv_shmem.c to src/backend/port/pg_shmem.c
config.status: linking ./src/backend/port/dynloader/linux.h to src/include/dynloader.h
config.status: linking ./src/include/port/linux.h to src/include/pg_config_os.h
config.status: linking ./src/makefiles/Makefile.linux to src/Makefile.port

# make
make[3]: Leaving directory `/usr/save/postgresql-8.3.7/contrib/spi'
rm -rf ./testtablespace
mkdir ./testtablespace
make[2]: Leaving directory `/usr/save/postgresql-8.3.7/src/test/regress'
make[1]: Leaving directory `/usr/save/postgresql-8.3.7/src'
make -C config all
make[1]: Entering directory `/usr/save/postgresql-8.3.7/config'
make[1]: Nothing to be done for `all'.
make[1]: Leaving directory `/usr/save/postgresql-8.3.7/config'
All of PostgreSQL successfully made. Ready to install.

# make install
make -C test/regress install
make[2]: Entering directory `/usr/save/postgresql-8.3.7/src/test/regress'
/bin/sh ../../../config/install-sh -c pg_regress '/usr/local/pgsql/lib/pgxs/src/test/regress/pg_regress'
make[2]: Leaving directory `/usr/save/postgresql-8.3.7/src/test/regress'
make[1]: Leaving directory `/usr/save/postgresql-8.3.7/src'
make -C config install
make[1]: Entering directory `/usr/save/postgresql-8.3.7/config'
mkdir -p -- /usr/local/pgsql/lib/pgxs/config
/bin/sh ../config/install-sh -c -m 755 ./install-sh '/usr/local/pgsql/lib/pgxs/config/install-sh'
/bin/sh ../config/install-sh -c -m 755 ./mkinstalldirs '/usr/local/pgsql/lib/pgxs/config/mkinstalldirs'
make[1]: Leaving directory `/usr/save/postgresql-8.3.7/config'
PostgreSQL installation complete.

PostgreSQL ./configure options. Following are various options that can be passed to ./configure:

  --prefix=PREFIX               install architecture-independent files in PREFIX.
                                Default installation location is /usr/local/pgsql
  --enable-integer-datetimes    enable 64-bit integer date/time support
  --enable-nls[=LANGUAGES]      enable Native Language Support
  --disable-shared              do not build shared libraries
  --disable-rpath               do not embed shared library search path in executables
  --disable-spinlocks           do not use spinlocks
  --enable-debug                build with debugging symbols (-g)
  --enable-profiling            build with profiling enabled
  --enable-dtrace               build with DTrace support
  --enable-depend               turn on automatic dependency tracking
  --enable-cassert              enable assertion checks (for debugging)
  --enable-thread-safety        make client libraries thread-safe
  --enable-thread-safety-force  force thread-safety despite thread test failure
  --disable-largefile           omit support for large files
  --with-docdir=DIR             install the documentation in DIR [PREFIX/doc]
  --without-docdir              do not install the documentation
  --with-includes=DIRS          look for additional header files in DIRS
  --with-libraries=DIRS         look for additional libraries in DIRS
  --with-libs=DIRS              alternative spelling of --with-libraries
  --with-pgport=PORTNUM         change default port number [5432]
  --with-tcl                    build Tcl modules (PL/Tcl)
  --with-tclconfig=DIR          tclConfig.sh is in DIR
  --with-perl                   build Perl modules (PL/Perl)
  --with-python                 build Python modules (PL/Python)
  --with-gssapi                 build with GSSAPI support
  --with-krb5                   build with Kerberos 5 support
  --with-krb-srvnam=NAME        default service principal name in Kerberos [postgres]
  --with-pam                    build with PAM support
  --with-ldap                   build with LDAP support
  --with-bonjour                build with Bonjour support
  --with-openssl                build with OpenSSL support
  --without-readline            do not use GNU Readline nor BSD Libedit for editing
  --with-libedit-preferred      prefer BSD Libedit over GNU Readline
  --with-ossp-uuid              use OSSP UUID library when building contrib/uuid-ossp
  --with-libxml                 build with XML support
  --with-libxslt                use XSLT support when building contrib/xml2
  --with-system-tzdata=DIR      use system time zone data in DIR
  --without-zlib                do not use Zlib
  --with-gnu-ld                 assume the C compiler uses GNU ld [default=no]

PostgreSQL Installation Issue1: You may encounter the following error message while performing ./configure during postgreSQL installation.
# ./configure
checking for -lreadline... no
checking for -ledit... no
configure: error: readline library not found
If you have readline already installed, see config.log for details on the
failure.  It is possible the compiler isn't looking in the proper directory.
Use --without-readline to disable readline support.

PostgreSQL Installation Solution1: Install the readline-devel and libtermcap-devel to solve the above issue.
# rpm -ivh libtermcap-devel-2.0.8-46.1.i386.rpm readline-devel-5.1-1.1.i386.rpm
warning: libtermcap-devel-2.0.8-46.1.i386.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
   1:libtermcap-devel       ########################################### [ 50%]
   2:readline-devel         ########################################### [100%]

Step 3: Verify the postgreSQL directory structure


After the installation, make sure bin, doc, include, lib, man and share directories are created under the default /usr/local/pgsql directory as shown below.
# ls -l /usr/local/pgsql/
total 24
drwxr-xr-x 2 root root 4096 Apr  8 23:25 bin
drwxr-xr-x 3 root root 4096 Apr  8 23:25 doc
drwxr-xr-x 6 root root 4096 Apr  8 23:25 include
drwxr-xr-x 3 root root 4096 Apr  8 23:25 lib
drwxr-xr-x 4 root root 4096 Apr  8 23:25 man
drwxr-xr-x 5 root root 4096 Apr  8 23:25 share

Step 4: Create postgreSQL user account


# adduser postgres
# passwd postgres
Changing password for user postgres.
New UNIX password:
Retype new UNIX password:
passwd: all authentication tokens updated successfully.

Step 5: Create postgreSQL data directory


Create the postgres data directory and make the postgres user its owner.
# mkdir /usr/local/pgsql/data
# chown postgres:postgres /usr/local/pgsql/data
# ls -ld /usr/local/pgsql/data
drwxr-xr-x 2 postgres postgres 4096 Apr  8 23:26 /usr/local/pgsql/data

Step 6: Initialize postgreSQL data directory


Before you can start creating any postgreSQL database, the empty data directory created in the above step should be initialized using the initdb command as shown below.
# su - postgres
$ /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data/
The files belonging to this database system will be owned by user postgres
This user must also own the server process.

The database cluster will be initialized with locale en_US.UTF-8.
The default database encoding has accordingly been set to UTF8.
The default text search configuration will be set to "english".

fixing permissions on existing directory /usr/local/pgsql/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers/max_fsm_pages ... 32MB/204800
creating configuration files ... ok
creating template1 database in /usr/local/pgsql/data/base/1 ... ok
initializing pg_authid ... ok
initializing dependencies ... ok
creating system views ... ok
loading system objects' descriptions ... ok
creating conversions ... ok
creating dictionaries ... ok
setting privileges on built-in objects ... ok
creating information schema ... ok
vacuuming database template1 ... ok
copying template1 to template0 ... ok
copying template1 to postgres ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the -A option the
next time you run initdb.

Success. You can now start the database server using:

    /usr/local/pgsql/bin/postgres -D /usr/local/pgsql/data
or
    /usr/local/pgsql/bin/pg_ctl -D /usr/local/pgsql/data -l logfile start

Step 7: Validate the postgreSQL data directory


Make sure all postgres DB configuration files (For example, postgresql.conf) are created under the data directory as shown below.
$ ls -l /usr/local/pgsql/data
total 64
drwx------ 5 postgres postgres  4096 Apr  8 23:29 base
drwx------ 2 postgres postgres  4096 Apr  8 23:29 global
drwx------ 2 postgres postgres  4096 Apr  8 23:29 pg_clog
-rw------- 1 postgres postgres  3429 Apr  8 23:29 pg_hba.conf
-rw------- 1 postgres postgres  1460 Apr  8 23:29 pg_ident.conf
drwx------ 4 postgres postgres  4096 Apr  8 23:29 pg_multixact
drwx------ 2 postgres postgres  4096 Apr  8 23:29 pg_subtrans
drwx------ 2 postgres postgres  4096 Apr  8 23:29 pg_tblspc
drwx------ 2 postgres postgres  4096 Apr  8 23:29 pg_twophase
-rw------- 1 postgres postgres     4 Apr  8 23:29 PG_VERSION
drwx------ 3 postgres postgres  4096 Apr  8 23:29 pg_xlog
-rw------- 1 postgres postgres 16592 Apr  8 23:29 postgresql.conf

Step 8: Start postgreSQL database


Use the postgres postmaster command to start the postgreSQL server in the background as shown below.
$ /usr/local/pgsql/bin/postmaster -D /usr/local/pgsql/data >logfile 2>&1 &
[1] 2222

$ cat logfile
LOG:  database system was shut down at 2009-04-08 23:29:50 PDT
LOG:  autovacuum launcher started
LOG:  database system is ready to accept connections
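A startup script can wait for the "ready to accept connections" line in that logfile before letting clients connect. A minimal sketch, run here against a canned logfile rather than a live server:

```shell
# Check a postmaster logfile for the "ready" message before using the DB.
logfile=$(mktemp)
cat > "$logfile" <<'EOF'
LOG:  database system was shut down at 2009-04-08 23:29:50 PDT
LOG:  autovacuum launcher started
LOG:  database system is ready to accept connections
EOF

if grep -q 'ready to accept connections' "$logfile"; then
  status=up
else
  status=down
fi
echo "postgres is $status"

rm -f "$logfile"
```

In a real script you would loop with a short sleep until the line appears or a timeout expires.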

Step 9: Create postgreSQL DB and test the installation


Create a test database and connect to it to make sure the installation was successful as shown below. Once you start using the database, take backups frequently as mentioned in how to backup and restore PostgreSQL article.
$ /usr/local/pgsql/bin/createdb test
$ /usr/local/pgsql/bin/psql test
Welcome to psql 8.3.7, the PostgreSQL interactive terminal.

Type:  \copyright for distribution terms
       \h for help with SQL commands
       \? for help with psql commands
       \g or terminate with semicolon to execute query
       \q to quit

test=#

33.Magic SysRq key: Have you ever wondered what the SysRq key on your keyboard does? Here is one use for it: you can safely reboot Linux using the magic SysRq key, as explained here.

Safe Reboot Of Linux Using Magic SysRq Key


by Ramesh Natarajan on December 11, 2008
Object 159 Object 161 Object 160

Photo courtesy of KCIvey

This is a guest post written by Lakshmanan G. If you are working on kernel development or device drivers, or running code that could cause a kernel panic, the SysRq key will be very valuable. The magic SysRq key is a key combination in the Linux kernel which allows the user to perform various low-level commands regardless of the system's state. It is often used to recover from freezes, or to reboot a computer without corrupting the filesystem. The key combination consists of Alt+SysRq+commandkey. On many systems the SysRq key is the PrintScreen key. First, you need to enable the SysRq key, as shown below.
echo "1" > /proc/sys/kernel/sysrq

List of SysRq Command Keys


Following are the command keys available for Alt+SysRq+commandkey.

k - Kills all processes running on the current virtual console.
s - Attempts to sync all mounted filesystems.
b - Immediately reboots the system, without unmounting partitions or syncing.
e - Sends SIGTERM to all processes except init.
m - Outputs current memory information to the console.
i - Sends SIGKILL to all processes except init.
r - Switches the keyboard from raw mode (the mode used by programs such as X11) to XLATE mode.
t - Outputs a list of current tasks and their information to the console.
u - Remounts all mounted filesystems read-only.
o - Shuts down the system immediately.

p - Prints the current registers and flags to the console.
0-9 - Sets the console log level, controlling which kernel messages will be printed to your console.
f - Calls oom_kill to kill the process that is taking the most memory.
h - Displays help; any key other than those listed above will also print help.

We can also do this by echoing the keys to the /proc/sysrq-trigger file. For example, to reboot a system you can perform the following.
echo "b" > /proc/sysrq-trigger

Perform a Safe reboot of Linux using Magic SysRq Key


To perform a safe reboot of a Linux computer which hangs, press Alt+SysRq+letter (highlighted below) in the following order. This will avoid the fsck during the next reboot.

unRaw (take control of the keyboard back from X11),
tErminate (send SIGTERM to all processes, allowing them to terminate gracefully),
kIll (send SIGKILL to all processes, forcing them to terminate immediately),
Sync (flush data to disk),
Unmount (remount all filesystems read-only),
reBoot.
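The same six-key sequence (r, e, i, s, u, b) can be driven from a script via /proc/sysrq-trigger, for example over a serial console. The sketch below defaults to a dry run that only prints each key; flip DRY_RUN to 0 (as root, with SysRq enabled) for the real thing, and use a delay of several seconds between keys so each step can finish:

```shell
# Walk the safe-reboot SysRq sequence: unRaw, tErminate, kIll, Sync,
# Unmount, reBoot. DRY_RUN=1 prints each key instead of triggering it.
DRY_RUN=1
DELAY=0        # use something like 5 (seconds) in a real run

sysrq() {
  if [ "$DRY_RUN" -eq 1 ]; then
    echo "sysrq: $1"
  else
    echo "$1" > /proc/sysrq-trigger   # requires root and kernel.sysrq=1
  fi
}

for key in r e i s u b; do
  sysrq "$key"
  sleep "$DELAY"
done
```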

34.Wakeonlan Tutorial: Using Wakeonlan (WOL), you can turn on remote servers where you don't have physical access to press the power button.

WOL Wakeonlan Guide: Turn On Servers Remotely Without Physical Access


by Ramesh Natarajan on November 27, 2008
Object 162 Object 164 Object 163

Photo courtesy of Jamison Judd

This is a guest post written by SathiyaMoorthy.

Wakeonlan (wol) enables you to switch on remote servers without physically accessing them. Wakeonlan sends magic packets to Wake-on-LAN enabled ethernet adapters and motherboards to switch on remote computers. When you shut down a system by mistake instead of rebooting it, you can use Wakeonlan to power on the server remotely. Also, if you have a server that doesn't need to be up and running 24x7, you can turn the server off and on remotely anytime you want. This article gives a brief overview of Wake-on-LAN and instructions to set up the Wakeonlan feature.

Overview of Wake-On-LAN
You can use Wakeonlan when a machine is connected to the LAN and you know the MAC address of that machine. Note the following:

Your NIC should support the Wake-on-LAN feature, and it should be enabled before the shutdown. In most cases, Wake-on-LAN is enabled on the NIC by default.
You need to send the magic packet from another machine connected to the same network (LAN).
You need root access to send the magic packet, and the wakeonlan package should be installed on that machine.
When the system goes down because of a power failure, you cannot switch on the machine using this facility the first time. But after the first boot, you can use wakeonlan to turn it on if the server gets shut down for some reason.
Wake-on-LAN is also referred to as wol.

Check whether wol is supported on the NIC


Execute the following ethtool command in the server which you want to switch ON from a remote place.
# ethtool eth0
Settings for eth0:
        Supported ports: [ TP MII ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
        Supports auto-negotiation: Yes
        Advertised link modes:  10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
        Advertised auto-negotiation: Yes
        Speed: 100Mb/s
        Duplex: Full
        Port: MII
        PHYAD: 1
        Transceiver: internal
        Auto-negotiation: on
        Supports Wake-on: pumbg  [ Note: check whether flag g is present ]
        Wake-on: g               [ Note: g means enabled, d means disabled ]
        Current message level: 0x00000001 (1)
        Link detected: yes

If the Supports Wake-on value contains g, the NIC card supports the wol feature; the Wake-on line shows whether it is currently enabled (g) or disabled (d).
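To check this flag from a script rather than by eye, you can parse the ethtool output. A sketch against a canned two-line snippet (on a real host, pipe in `ethtool eth0` instead; real output indents these lines with whitespace, so you may need to trim it first):

```shell
# Extract the "Wake-on:" value from ethtool-style output and test for g.
sample='Supports Wake-on: pumbg
Wake-on: g'

wol=$(printf '%s\n' "$sample" | awk -F': ' '$1 == "Wake-on" {print $2}')
case "$wol" in
  *g*) echo "wake-on-lan enabled (Wake-on: $wol)" ;;
  *)   echo "wake-on-lan disabled (Wake-on: $wol)" ;;
esac
```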

Enabling wol option on the Ethernet Card


By default the Wake-on will be set to g in most of the machines. If not, use ethtool to set the g flag to the wol option of the NIC card as shown below.
# ethtool -s eth0 wol g

Note: You should execute ethtool as root, else you may get following error message.
$ /sbin/ethtool eth0
Settings for eth0:
Cannot get device settings: Operation not permitted
Cannot get wake-on-lan settings: Operation not permitted
        Current message level: 0x000000ff (255)
Cannot get link status: Operation not permitted

Install wakeonlan package on a different machine


Install the wakeonlan package in the machine from where you need to send the magic packet to switch on your server.
# apt-get install wakeonlan

Note down the MAC address of the remote server


Note down the MAC address of the server that you wish to switch on remotely.
# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:16:k5:64:A9:68  [ Mac address ]
          inet addr:192.168.6.56  Bcast:192.168.6.255  Mask:255.255.255.0
          inet6 addr: fe80::216:17ff:fe6b:289/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:3179855 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2170162 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:3832534893 (3.5 GB)  TX bytes:390304845 (372.2 MB)
          Interrupt:17

Finally, Switch ON the machine remotely without physical access


When the server is not up, execute the following command from another machine which is connected to the same LAN. Once the magic packet is sent, the remote system will start to boot.
# wakeonlan 00:16:k5:64:A9:68
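For the curious, the magic packet itself is simple: 6 bytes of 0xFF followed by the target MAC address repeated 16 times, 102 bytes in total. The sketch below (with a made-up example MAC) only builds the payload as a hex string to show the layout; actually transmitting it, as wakeonlan does, means sending these bytes in a UDP broadcast datagram:

```shell
# Build a Wake-on-LAN magic packet payload as a hex string:
# 6 bytes of 0xFF, then the MAC address repeated 16 times.
mac="00:16:17:64:a9:68"          # hypothetical example MAC
machex=$(echo "$mac" | tr -d ':' | tr 'A-Z' 'a-z')

packet="ffffffffffff"            # 6 bytes of 0xFF
i=0
while [ $i -lt 16 ]; do
  packet="$packet$machex"
  i=$((i + 1))
done

# 6 + 16*6 = 102 bytes, i.e. 204 hex characters
echo "payload: ${#packet} hex chars ($(( ${#packet} / 2 )) bytes)"
```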

35.List hardware spec using lshw: ls+hw = lshw, which lists the hardware specs of your system.

How To Get Hardware Specs of Your System Using lshw Hardware Lister
by Ramesh Natarajan on December 22, 2008
Object 165 Object 167 Object 166

Photo courtesy of viagallery.com

This is a guest post written by SathiyaMoorthy. The lshw (Hardware Lister) command gives a comprehensive report about all hardware in your system. It displays detailed information about the manufacturer, serial number of the system, motherboard, CPU, RAM, PCI cards, disks, network card, etc. Using lshw, you can get information about the hardware without touching a screwdriver to open the server chassis. This is also very helpful when the server is located in a remote data center, where you don't have physical access to the server. In our previous article, we discussed how to display hardware information on Linux using the dmidecode command. In this article, let us review how to view the hardware specifications using the lshw command.

Download lshw
Download the latest version of lshw from Hardware Lister website. Extract the source code to the /usr/src as shown below.
# cd /usr/src
# wget http://ezix.org/software/files/lshw-B.02.13.tar.gz
# gzip -d lshw-B.02.13.tar.gz
# tar xvf lshw-B.02.13.tar

Note: To install the pre-compiled version, download it from Hardware Lister website.

Install lshw
Install lshw as shown below. This will install lshw in the /usr/sbin directory.
# make
# make install
make -C src install
make[1]: Entering directory `/usr/src/lshw-B.02.13/src'
make -C core all
make[2]: Entering directory `/usr/src/lshw-B.02.13/src/core'
make[2]: Nothing to be done for `all'.
make[2]: Leaving directory `/usr/src/lshw-B.02.13/src/core'
g++ -L./core/ -g -Wl,--as-needed -o lshw lshw.o -llshw -lresolv
install -p -d -m 0755 ///usr/sbin
install -p -m 0755 lshw ///usr/sbin
install -p -d -m 0755 ///usr/share/man/man1
install -p -m 0644 lshw.1 ///usr/share/man/man1
install -p -d -m 0755 ///usr/share/lshw
install -p -m 0644 pci.ids usb.ids oui.txt manuf.txt ///usr/share/lshw
make[1]: Leaving directory `/usr/src/lshw-B.02.13/src'

lshw Output Layout


When executing lshw without option, you will get detailed information on the hardware configuration of the machine in text format. Following is the structure of lshw output.
system information
  motherboard information
    cpu information
      cache, logical cpu
    memory capacity, total size, individual bank information
    pci slot information
    ide slot information
    disk information
      total size, partition
    usb slot information
    network

Following is the partial output of lshw command.


# lshw | head
local-host
    description: Rack Mount Chassis
    product: PowerEdge 2850
    vendor: Dell Computer Corporation
    serial: 1234567
    width: 32 bits
    capabilities: smbios-2.3 dmi-2.3 smp-1.4 smp
    configuration: boot=normal chassis=rackmount cpus=2 uuid=12345
  *-core
       description: Motherboard

Note: lshw must be run as root to get a full report. lshw will display partial report with a warning message as shown below when you execute it from a non-root user.
jsmith@local-host ~> /usr/sbin/lshw
WARNING: you should run this program as super-user.

lshw Classes
To get information about a specific hardware, you can use -class option. Following classes can be used with the -class option in the lshw command.
address bridge bus communication disk display generic input memory multimedia network power printer processor storage system tape volume

Get Information about the Disks using lshw


The example below displays all the information about the disks on the system. It indicates that /dev/sda is a SCSI disk in a RAID1 configuration with a total capacity of 68GiB.
# lshw -class disk
  *-disk
       description: SCSI Disk
       product: LD 0 RAID1 69G
       vendor: MegaRAID
       physical id: 2.0.0
       bus info: scsi@0:2.0.0
       logical name: /dev/sda
       version: 516A
       size: 68GiB (73GB)
       capabilities: partitioned partitioned:dos
       configuration: ansiversion=2 signature=000e1213

Get Information about Physical Memory (RAM) of the System


Please note that only partial output is shown below.
# lshw -class memory
  *-memory
       description: System Memory
       size: 512MB
       capacity: 2GB
     *-bank:8
          description: DIMM Synchronous [empty]
     *-bank:9
          description: DIMM Synchronous
          size: 512MB
          width: 32 bits

Generate Compact Hardware Report Using lshw


By default lshw command generates multi-page detailed report. To generate a compact report use -short option as shown below. Only partial output is shown below.
# lshw -short
H/W path            Device    Class       Description
=======================================================
                              system      PowerEdge 2850
/0                            bus         12345
/0/0                          memory      64KiB BIOS
/0/400                        processor   Intel(R) Xeon(TM) CPU 3.40GHz
/0/400/700                    memory      16KiB L1 cache
/0/400/701                    memory      1MiB L2 cache
/0/400/702                    memory      L3 cache
/0/400/1.1                    processor   Logical CPU
/0/1000                       memory      4GiB System Memory
/0/1000/0                     memory      1GiB DIMM Synchronous 400 MHz (2.5 ns)
/0/1000/1                     memory      1GiB DIMM Synchronous 400 MHz (2.5 ns)
/0/100/6/0/4        eth2      network     82546EB Gigabit Ethernet Controller (Copper)
/0/100/6/0/4.1      eth3      network     82546EB Gigabit Ethernet Controller (Copper)
/0/100/6/0.2                  bridge      6700PXH PCI Express-to-PCI Bridge B
/0/100/6/0.2/2                bus         Thor LightPulse Fibre Channel Host Adapter
/0/100/1e                     bridge      82801 PCI Bridge
/0/100/1e/d                   display     Radeon RV100 QY [Radeon 7000/VE]
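The -short report is also convenient for scripted inventory. A sketch that counts entries of a given class in a saved report (the sample lines are abbreviated from output like the above; on a live system you would pipe `lshw -short` in directly). Since the Device column is often empty, we scan each line's fields for the class keyword rather than assuming a fixed column number:

```shell
# Count "memory" entries in a saved `lshw -short` report.
report=$(mktemp)
cat > "$report" <<'EOF'
/0/0                          memory      64KiB BIOS
/0/400                        processor   Intel(R) Xeon(TM) CPU 3.40GHz
/0/1000                       memory      4GiB System Memory
/0/100/6/0/4        eth2      network     82546EB Gigabit Ethernet Controller
EOF

mem_count=$(awk '{ for (i = 1; i <= NF; i++) if ($i == "memory") { n++; break } }
                 END { print n + 0 }' "$report")
echo "memory entries: $mem_count"

rm -f "$report"
```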

Generate HTML or XML Hardware Report Using lshw


You can generate a HTML or XML output from the lshw command directly as shown below.

# lshw -html > hwinfo.html # lshw -xml > hwinfo.xml

36.View hardware spec using dmidecode: dmidecode command reads the system DMI table to display hardware and BIOS information of the server. Apart from getting current configuration of the system, you can also get information about maximum supported configuration of the system using dmidecode. For example, dmidecode gives both the current RAM on the system and the maximum RAM supported by the system.

How To Get Hardware Information On Linux Using dmidecode Command


by Ramesh Natarajan on November 10, 2008
Object 168 Object 170 Object 169

Photo courtesy of B Naveen Kumar

dmidecode command reads the system DMI table to display hardware and BIOS information of the server. Apart from getting current configuration of the system, you can also get information about maximum supported configuration of the system using dmidecode. For example, dmidecode gives both the current RAM on the system and the maximum RAM supported by the system. This article provides an overview of the dmidecode and few practical examples on how to use dmidecode command.

1. Overview of dmidecode
Distributed Management Task Force maintains the DMI specification and SMBIOS specification. The output of the dmidecode contains several records from the DMI (Desktop Management interface) table. Following is the record format of the dmidecode output of the DMI table.
Record Header: Handle {record id}, DMI type {dmi type id}, {record size} bytes
Record Value:  {multi line record value}

record id: Unique identifier for every record in the DMI table.
dmi type id: Type of the record, i.e. BIOS, Memory, etc.
record size: Size of the record in the DMI table.
multi line record values: Multi-line record value for that specific DMI type.

Sample output of dmidecode command:


# dmidecode | head -15
# dmidecode 2.9
SMBIOS 2.3 present.
56 structures occupying 1977 bytes.
Table at 0x000FB320.

Handle 0xDA00, DMI type 218, 11 bytes
OEM-specific Type
        Header and Data:
                DA 0B 00 DA B0 00 17 03 08 28 00

Handle 0x0000, DMI type 0, 20 bytes
BIOS Information
        Vendor: Dell Computer Corporation
        Version: A07
        Release Date: 01/13/2004

Get the total number of records in the DMI table as shown below:
# dmidecode | grep ^Handle | wc -l
56

(or)

# dmidecode | grep structures
56 structures occupying 1977 bytes.

2. DMI Types
DMI Type id will give information about a particular hardware component of your system. Following command with type id 4 will get the information about CPU of the system.
# dmidecode -t 4
# dmidecode 2.9
SMBIOS 2.3 present.

Handle 0x0400, DMI type 4, 35 bytes
Processor Information
        Socket Designation: Processor 1
        Type: Central Processor
        Family: Xeon
        Manufacturer: Intel
        ID: 29 0F 00 00 FF FB EB BF
        Signature: Type 0, Family 15, Model 2, Stepping 9
        Flags:
                FPU (Floating-point unit on-chip)
                VME (Virtual mode extension)
                DE (Debugging extension)
                PSE (Page size extension)
                TSC (Time stamp counter)
                MSR (Model specific registers)

Following are the different DMI types available.


Type   Information
----------------------------------------
 0     BIOS
 1     System
 2     Base Board
 3     Chassis
 4     Processor
 5     Memory Controller
 6     Memory Module
 7     Cache
 8     Port Connector
 9     System Slots
10     On Board Devices
11     OEM Strings
12     System Configuration Options
13     BIOS Language
14     Group Associations
15     System Event Log
16     Physical Memory Array
17     Memory Device
18     32-bit Memory Error
19     Memory Array Mapped Address
20     Memory Device Mapped Address
21     Built-in Pointing Device
22     Portable Battery
23     System Reset
24     Hardware Security
25     System Power Controls
26     Voltage Probe
27     Cooling Device
28     Temperature Probe
29     Electrical Current Probe
30     Out-of-band Remote Access
31     Boot Integrity Services
32     System Boot
33     64-bit Memory Error
34     Management Device
35     Management Device Component
36     Management Device Threshold Data
37     Memory Channel
38     IPMI Device
39     Power Supply

Instead of type_id, you can also pass the keyword to the -t option of the dmidecode command. Following are the available keywords.
Keyword      Types
------------------------------
bios         0, 13
system       1, 12, 15, 23, 32
baseboard    2, 10
chassis      3
processor    4
memory       5, 6, 16, 17
cache        7
connector    8
slot         9

For example, to get all the system baseboard related information execute the following command, which will display the type_id 2 and 10
# dmidecode -t baseboard
# dmidecode 2.9
SMBIOS 2.3 present.

Handle 0x0200, DMI type 2, 9 bytes
Base Board Information
        Manufacturer: Dell Computer Corporation
        Product Name: 123456
        Version: A05
        Serial Number: ..CN123456789098.

Handle 0x0A00, DMI type 10, 14 bytes
On Board Device 1 Information
        Type: SCSI Controller
        Status: Enabled
        Description: LSI Logic 53C1030 Ultra 320 SCSI
On Board Device 2 Information
        Type: SCSI Controller
        Status: Enabled
        Description: LSI Logic 53C1030 Ultra 320 SCSI
On Board Device 3 Information
        Type: Video
        Status: Enabled
        Description: ATI Rage XL PCI Video
On Board Device 4 Information
        Type: Ethernet
        Status: Enabled
        Description: Broadcom Gigabit Ethernet 1
On Board Device 5 Information
        Type: Ethernet
        Status: Enabled
        Description: Broadcom Gigabit Ethernet 2

3. Get Physical Memory (RAM) information using dmidecode


What is the maximum RAM supported by the system? In this example, this system can support maximum 8GB of RAM.
# dmidecode -t 16
# dmidecode 2.9
SMBIOS 2.3 present.

Handle 0x1000, DMI type 16, 15 bytes
Physical Memory Array
        Location: System Board Or Motherboard
        Use: System Memory
        Error Correction Type: Multi-bit ECC
        Maximum Capacity: 8 GB
        Error Information Handle: Not Provided
        Number Of Devices: 4

How much can the memory be expanded? From /proc/meminfo you can find the current total memory of your system, as shown below.
# grep MemTotal /proc/meminfo
MemTotal:      1034644 kB

In this example, the system has 1GB of RAM. Is this 1 x 1GB, 2 x 512MB, or 4 x 256MB? This can be figured out by passing type_id 17 to the dmidecode command as shown below. Note in the example below that, to expand to the 8GB maximum, you would need to remove the existing 512MB modules from slots 1 and 2 and install 2GB modules in all four memory slots.
# dmidecode -t 17
# dmidecode 2.9
SMBIOS 2.3 present.

Handle 0x1100, DMI type 17, 23 bytes
Memory Device
        Array Handle: 0x1000
        Error Information Handle: Not Provided
        Total Width: 72 bits
        Data Width: 64 bits
        Size: 512 MB  [Note: Slot1 has 512 MB RAM]
        Form Factor: DIMM
        Set: 1
        Locator: DIMM_1A
        Bank Locator: Not Specified
        Type: DDR
        Type Detail: Synchronous
        Speed: 266 MHz (3.8 ns)

Handle 0x1101, DMI type 17, 23 bytes
Memory Device
        Array Handle: 0x1000
        Error Information Handle: Not Provided
        Total Width: 72 bits
        Data Width: 64 bits
        Size: 512 MB  [Note: Slot2 has 512 MB RAM]
        Form Factor: DIMM
        Set: 1
        Locator: DIMM_1B
        Bank Locator: Not Specified
        Type: DDR
        Type Detail: Synchronous
        Speed: 266 MHz (3.8 ns)

Handle 0x1102, DMI type 17, 23 bytes
Memory Device
        Array Handle: 0x1000
        Error Information Handle: Not Provided
        Total Width: 72 bits
        Data Width: 64 bits
        Size: No Module Installed  [Note: Slot3 is empty]
        Form Factor: DIMM
        Set: 2
        Locator: DIMM_2A
        Bank Locator: Not Specified
        Type: DDR
        Type Detail: Synchronous
        Speed: 266 MHz (3.8 ns)

Handle 0x1103, DMI type 17, 23 bytes
Memory Device
        Array Handle: 0x1000
        Error Information Handle: Not Provided
        Total Width: 72 bits
        Data Width: 64 bits
        Size: No Module Installed  [Note: Slot4 is empty]
        Form Factor: DIMM
        Set: 2
        Locator: DIMM_2B
        Bank Locator: Not Specified
        Type: DDR
        Type Detail: Synchronous
        Speed: 266 MHz (3.8 ns)
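Rather than eyeballing the long type 17 output, the Size lines can be summarized with a short pipeline. A sketch; mem_slots is a hypothetical helper that reads `dmidecode -t 17` output on stdin and counts populated versus empty slots:

```shell
#!/bin/sh
# Count populated and empty memory slots from `dmidecode -t 17` output
# read on stdin. mem_slots is a hypothetical helper name.
mem_slots() {
    awk '/^[[:space:]]*Size:/ {
             if (/No Module Installed/) empty++   # slot exists but is empty
             else used++                          # slot has a module
         }
         END { printf "used=%d empty=%d\n", used+0, empty+0 }'
}
```

On the server above, `dmidecode -t 17 | mem_slots` reports `used=2 empty=2`, which answers the "how much can I expand?" question at a glance.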

4. Get BIOS information using dmidecode


# dmidecode -t bios
# dmidecode 2.9
SMBIOS 2.3 present.

Handle 0x0000, DMI type 0, 20 bytes
BIOS Information
        Vendor: Dell Computer Corporation
        Version: A07
        Release Date: 01/13/2004
        Address: 0xF0000
        Runtime Size: 64 kB
        ROM Size: 4096 kB
        Characteristics:
                ISA is supported
                PCI is supported
                PNP is supported
                BIOS is upgradeable
                BIOS shadowing is allowed
                ESCD support is available
                Boot from CD is supported
                Selectable boot is supported
                EDD is supported
                Japanese floppy for Toshiba 1.2 MB is supported (int 13h)
                5.25"/360 KB floppy services are supported (int 13h)
                5.25"/1.2 MB floppy services are supported (int 13h)
                3.5"/720 KB floppy services are supported (int 13h)
                8042 keyboard services are supported (int 9h)
                Serial services are supported (int 14h)
                CGA/mono video services are supported (int 10h)
                ACPI is supported
                USB legacy is supported
                LS-120 boot is supported
                BIOS boot specification is supported
                Function key-initiated network boot is supported

Handle 0x0D00, DMI type 13, 22 bytes
BIOS Language Information
        Installable Languages: 1
                en|US|iso8859-1
        Currently Installed Language: en|US|iso8859-1

5. View Manufacturer, Model and Serial number of the equipment using dmidecode
You can get information about the make, model and serial number of the equipment as shown below:
# dmidecode -t system
# dmidecode 2.9
SMBIOS 2.3 present.

Handle 0x0100, DMI type 1, 25 bytes
System Information
        Manufacturer: Dell Computer Corporation
        Product Name: PowerEdge 1750
        Version: Not Specified
        Serial Number: 1234567
        UUID: 4123454C-4123-1123-8123-12345603431
        Wake-up Type: Power Switch

Handle 0x0C00, DMI type 12, 5 bytes
System Configuration Options
        Option 1: NVRAM_CLR: Clear user settable NVRAM areas and set defaults
        Option 2: PASSWD: Close to enable password

Handle 0x2000, DMI type 32, 11 bytes
System Boot Information
        Status: No errors detected
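The make/model/serial fields can also be pulled out programmatically, e.g. to feed an inventory spreadsheet. A sketch; sysinfo is a hypothetical helper that parses `dmidecode -t system` output from stdin into one CSV row. (Recent dmidecode versions also support direct string queries such as `dmidecode -s system-serial-number`.)

```shell
#!/bin/sh
# Turn `dmidecode -t system` output (stdin) into a "make,model,serial" CSV
# row. sysinfo is a hypothetical helper name for illustration.
sysinfo() {
    awk -F': ' '
        /Manufacturer:/  { m=$2 }   # e.g. Dell Computer Corporation
        /Product Name:/  { p=$2 }   # e.g. PowerEdge 1750
        /Serial Number:/ { s=$2 }   # e.g. 1234567
        END { printf "%s,%s,%s\n", m, p, s }'
}
```

Usage: `dmidecode -t system | sysinfo` would print something like `Dell Computer Corporation,PowerEdge 1750,1234567`.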

37.Use the support effectively: Companies spend a lot of cash on support, mainly for two reasons: 1) to get help from vendors to fix critical production issues, and 2) to keep up to date with the latest software versions and security patches released by the vendors. In this article, I've given 10 practical tips for DBAs, sysadmins and developers to use their hardware and software support effectively.

10 Tips to Use Your Hardware and Software Vendor Support Effectively


by Ramesh Natarajan on September 29, 2008

Photo courtesy of wraithtdk

Companies purchase support for most of their enterprise hardware (servers, switches, routers, firewalls, etc.) and software (databases, operating systems, applications, frameworks, etc.). They spend a lot of cash on support, mainly for two reasons: 1) to get help from vendors to fix critical production issues, and 2) to keep up to date with the latest software versions and security patches released by the vendors. In this article, I've given 10 practical tips for DBAs, sysadmins and developers to use their hardware and software support effectively.

1. Use the Knowledge Base


Most vendors have a dedicated support website, including a separate knowledge base section with lots of white papers, best-practice documents, and troubleshooting tips and tricks. Use the knowledge base section of the support website to learn and expand your knowledge. Most of the time, the best solution to a specific problem can be found in the knowledge base or forum of your vendor's support website. For example, when you have an issue setting up Automatic Storage Management during an Oracle 11g installation, Oracle's support website, Metalink, will give you a more appropriate solution than searching Google.

2. Use the support website to create a ticket


Instead of calling support over the phone, use their website to create a ticket. It is not easy to explain a complex technical issue in detail to a support person over the phone. Even when you take the time to explain the issue in detail, they may still miss a lot of details or write the issue description a little differently. This causes unnecessary delay, as you have to explain the problem again to the support engineer who gets assigned to the ticket. If you create the ticket yourself from their website, you can upload all the supporting materials and copy/paste the error message. After you create a ticket from their website, call support to follow up and make sure an engineer is assigned to it immediately. If they don't have a support website, ask them whether you can create a ticket by sending an email.

3. Explain the issue in detail


Provide as much information as possible in the ticket description. Don't assume that the support engineer will understand the issue just by looking at the error message you've provided. Providing as much information upfront in the ticket will help you avoid a lot of wasted time going back and forth explaining the issue to support. Provide clear step-by-step instructions on how to reproduce the issue.

4. Do some research and debugging before submitting the ticket


Before creating a ticket, perform some basic debugging to eliminate the common issues. Attach related log files and debugging output to the ticket. If you've worked with your vendor before, you'll have a good idea of the basic log files and tests they usually ask for. Don't wait for them to ask for the same things again. Go ahead, do that basic testing yourself, and attach all the log files to the ticket.
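One way to make this tip routine is to script the log collection, so every ticket gets the same bundle attached. A sketch under the assumption that your vendor wants plain log files; collect_support_bundle and the file list in the usage line are examples, not a vendor requirement:

```shell
#!/bin/sh
# Bundle the given log files into one tarball to attach to a support ticket.
# collect_support_bundle is a hypothetical helper name for illustration.
collect_support_bundle() {
    out=$1; shift                  # first argument: output tarball path
    tmp=$(mktemp -d)               # staging directory for readable copies
    for f in "$@"; do
        if [ -r "$f" ]; then
            cp "$f" "$tmp/"
        else
            echo "skipping unreadable $f" >&2
        fi
    done
    tar -czf "$out" -C "$tmp" .    # pack everything that was collected
    rm -rf "$tmp"
}
```

For example: `collect_support_bundle "support-$(hostname)-$(date +%Y%m%d).tar.gz" /var/log/messages /var/log/dmesg`.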

5. Don't waste time with the first level of support


Dealing with the first level of support is a waste of time for complex issues. If you've done #2, #3 and #4 above properly, call support and ask them to escalate the ticket to the second level of support. If they don't respond properly, escalate the issue through the vendor's account manager assigned to your company.

6. Use support for your research project


Don't call support only for production issues. Call them even for your research projects. For example, if you are building a prototype with new software your vendor has released, call support to get their help when you get stuck. When you are testing their bleeding-edge software that was released recently, most vendors will even assign a dedicated resource to help you resolve issues, as they want to fix all the problems in their new software as soon as possible.

7. Set up your support profile


Anytime you create a ticket, you may have to repeatedly enter some basic information related to your account and environment. Most support sites let you set up a profile with all the basic information, which you can reuse when creating a ticket. This will speed up the ticket creation process.

8. Set up support access for admins


Make sure all your DBAs, sysadmins and senior developers have access to the support website. If you are the only person who has access, identify a backup resource and make sure they know how to access the support website and create a ticket when you are not available. Also, create a separate support-access document with the vendor's support telephone number, your account number and the support website URL, and put it in a shared area where all admins can access it.

9. Subscribe to security alerts


It is very important for DBAs, sysadmins and senior developers to subscribe to the security alerts from the support website. If there is a critical security update that affects your hardware or software, it should be tested immediately in a test environment and then moved to production. I have seen admins who receive the security alerts but don't read those emails consistently. It is very important to act on security alerts from your vendors immediately.

10. Get official documentation and diagnostics tools


Use support to get the official documentation for your hardware and software. Call your vendor support and ask for diagnostic tools and best-practice documents for maintaining your hardware and software. Most of us hate to read documentation. But experienced developers and admins understand that reading the official documentation of their hardware and software will give them an in-depth understanding of the product.

38.Install/Upgrade LAMP using Yum: Installing the LAMP stack using yum is a good option for beginners who don't feel comfortable installing from source. It is also a good choice if you want to keep things simple and just use the default configuration.

How To Install Or Upgrade LAMP: Linux, Apache, MySQL and PHP Stack Using Yum
by Ramesh Natarajan on September 15, 2008

Previously we discussed how to install Apache and PHP from source. Installing the LAMP stack from source gives you full control to configure different parameters. Installing the LAMP stack using yum is very easy and takes only minutes. This is a good option for beginners who don't feel comfortable installing from source. It is also a good choice if you want to keep things simple and just use the default configuration.

1. Install Apache using Yum

# rpm -qa | grep httpd
[Note: If the above command did not return anything, install apache as shown below]
# yum install httpd

Verify that Apache was installed successfully.


# rpm -qa | grep -i http
httpd-tools-2.2.9-1.fc9.i386
httpd-2.2.9-1.fc9.i386

Enable the httpd service to start automatically during system startup using chkconfig, then start Apache as shown below.
# chkconfig httpd on
# service httpd start
Starting httpd:                                            [  OK  ]

2. Upgrade Apache using Yum


If you selected the web server package during Linux installation, Apache is already installed. In that case, you can upgrade Apache to the latest version as shown below. First, check whether Apache is already installed.
# rpm -qa | grep -i http
httpd-tools-2.2.8-3.i386
httpd-2.2.8-3.i386
[Note: This indicates that Apache 2.2.8 is already installed]

Check whether a later version of Apache is available for installation using yum.
# yum check-update httpd
Loaded plugins: refresh-packagekit
httpd.i386    2.2.9-1.fc9    updates
[Note: This indicates that the latest Apache version 2.2.9 is available for upgrade]

Upgrade Apache to the latest version using yum.


# yum update httpd

Output of the yum update httpd command:


Loaded plugins: refresh-packagekit
Setting up Update Process
Resolving Dependencies
--> Running transaction check
---> Package httpd.i386 0:2.2.9-1.fc9 set to be updated
--> Processing Dependency: httpd-tools = 2.2.9-1.fc9 for package: httpd
--> Running transaction check
---> Package httpd-tools.i386 0:2.2.9-1.fc9 set to be updated
--> Finished Dependency Resolution

Dependencies Resolved

=============================================================================
 Package          Arch      Version          Repository     Size
=============================================================================
Updating:
 httpd            i386      2.2.9-1.fc9      updates        975 k
 httpd-tools      i386      2.2.9-1.fc9      updates         69 k

Transaction Summary
=============================================================================
Install   0 Package(s)
Update    2 Package(s)
Remove    0 Package(s)

Total download size: 1.0 M
Is this ok [y/N]: y
Downloading Packages:
(1/2): httpd-tools-2.2.9-1.fc9.i386.rpm    |  69 kB    00:00
(2/2): httpd-2.2.9-1.fc9.i386.rpm          | 975 kB    00:00
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
  Updating   : httpd-tools    [1/4]
  Updating   : httpd          [2/4]
  Cleanup    : httpd          [3/4]
  Cleanup    : httpd-tools    [4/4]

Updated: httpd.i386 0:2.2.9-1.fc9 httpd-tools.i386 0:2.2.9-1.fc9
Complete!

Verify that Apache was upgraded successfully.


# rpm -qa | grep -i http
httpd-tools-2.2.9-1.fc9.i386
httpd-2.2.9-1.fc9.i386
[Note: This indicates that Apache was upgraded to 2.2.9 successfully]
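The before/after `rpm -qa` checks can be scripted by extracting the version field from a name-version-release.arch package string. A sketch; pkg_version is a hypothetical helper (in practice, `rpm -q --qf '%{VERSION}\n' httpd` asks rpm directly):

```shell
#!/bin/sh
# Extract the version from an `rpm -qa`-style string such as
# httpd-2.2.9-1.fc9.i386 (name-version-release.arch).
# pkg_version is a hypothetical helper name for illustration.
pkg_version() {
    # strip ".arch", then "-release", then the leading "name-"
    echo "$1" | sed -e 's/\.[^.]*$//' -e 's/-[^-]*$//' -e 's/^.*-//'
}
```

For example, `pkg_version httpd-2.2.9-1.fc9.i386` prints `2.2.9`, so you can compare the value before and after `yum update httpd`.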

3. Install MySQL using Yum

Yum is smart enough to identify all the dependencies and install them automatically. For example, while installing mysql-server using yum, it also automatically installs the dependent mysql-libs, perl-DBI, mysql and perl-DBD-MySQL packages, as shown below.
# yum install mysql-server

Output of yum install mysql-server command:


Loaded plugins: refresh-packagekit
Setting up Install Process
Parsing package install arguments
Resolving Dependencies
--> Running transaction check
---> Package mysql-server.i386 0:5.0.51a-1.fc9 set to be updated
--> Processing Dependency: libmysqlclient_r.so.15 for package: mysql-server
--> Processing Dependency: libmysqlclient.so.15 for package: mysql-server
--> Processing Dependency: perl-DBI for package: mysql-server
--> Processing Dependency: mysql = 5.0.51a-1.fc9 for package: mysql-server
--> Processing Dependency: perl(DBI) for package: mysql-server
--> Processing Dependency: perl-DBD-MySQL for package: mysql-server
--> Running transaction check
---> Package mysql.i386 0:5.0.51a-1.fc9 set to be updated
---> Package mysql-libs.i386 0:5.0.51a-1.fc9 set to be updated
---> Package perl-DBD-MySQL.i386 0:4.005-8.fc9 set to be updated
---> Package perl-DBI.i386 0:1.607-1.fc9 set to be updated
--> Finished Dependency Resolution

Dependencies Resolved

=============================================================================
 Package            Arch     Version           Repository    Size
=============================================================================
Installing:
 mysql-server       i386     5.0.51a-1.fc9     fedora        9.8 M
Installing for dependencies:
 mysql              i386     5.0.51a-1.fc9     fedora        2.9 M
 mysql-libs         i386     5.0.51a-1.fc9     fedora        1.5 M
 perl-DBD-MySQL     i386     4.005-8.fc9       fedora        165 k
 perl-DBI           i386     1.607-1.fc9       updates       776 k

Transaction Summary
=============================================================================
Install   5 Package(s)
Update    0 Package(s)
Remove    0 Package(s)

Total download size: 15 M
Is this ok [y/N]: y
Downloading Packages:
(1/5): perl-DBD-MySQL-4.005-8.fc9.i386.rpm   | 165 kB    00:00
(2/5): perl-DBI-1.607-1.fc9.i386.rpm         | 776 kB    00:00
(3/5): mysql-libs-5.0.51a-1.fc9.i386.rpm     | 1.5 MB    00:00
(4/5): mysql-5.0.51a-1.fc9.i386.rpm          | 2.9 MB    00:00
(5/5): mysql-server-5.0.51a-1.fc9.i386.rpm   | 9.8 MB    00:01
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : mysql-libs        [1/5]
  Installing : perl-DBI          [2/5]
  Installing : mysql             [3/5]
  Installing : perl-DBD-MySQL    [4/5]
  Installing : mysql-server      [5/5]

Installed: mysql-server.i386 0:5.0.51a-1.fc9
Dependency Installed: mysql.i386 0:5.0.51a-1.fc9 mysql-libs.i386 0:5.0.51a-1.fc9
  perl-DBD-MySQL.i386 0:4.005-8.fc9 perl-DBI.i386 0:1.607-1.fc9
Complete!

Verify that MySQL was installed properly.


# rpm -qa | grep -i mysql
php-mysql-5.2.6-2.fc9.i386
mysql-libs-5.0.51a-1.fc9.i386
mysql-server-5.0.51a-1.fc9.i386
perl-DBD-MySQL-4.005-8.fc9.i386
mysql-5.0.51a-1.fc9.i386

# mysql -V
mysql  Ver 14.12 Distrib 5.0.51a, for redhat-linux-gnu (i386) using readline 5.0

Configure MySQL to start automatically during system startup.


# chkconfig mysqld on

Start MySQL service.


# service mysqld start

The first time you start mysqld, it prints an additional message asking you to perform the post-install configuration, as shown below.
Initializing MySQL database:  Installing MySQL system tables...
OK
Filling help tables...
OK

To start mysqld at boot time you have to copy
support-files/mysql.server to the right place for your system

PLEASE REMEMBER TO SET A PASSWORD FOR THE MySQL root USER !
To do so, start the server, then issue the following commands:
/usr/bin/mysqladmin -u root password 'new-password'
/usr/bin/mysqladmin -u root -h dev-db password 'new-password'

Alternatively you can run:
/usr/bin/mysql_secure_installation

which will also give you the option of removing the test
databases and anonymous user created by default.  This is
highly recommended for production servers.

See the manual for more instructions.

You can start the MySQL daemon with:
cd /usr ; /usr/bin/mysqld_safe &

You can test the MySQL daemon with mysql-test-run.pl
cd mysql-test ; perl mysql-test-run.pl

Please report any problems with the /usr/bin/mysqlbug script!

The latest information about MySQL is available on the web at
http://www.mysql.com
Support MySQL by buying support/licenses at http://shop.mysql.com

Starting MySQL:                                            [  OK  ]

4. Perform MySQL post-installation activities


After the MySQL installation, you can log in to the MySQL root account without providing any password, as shown below.
# mysql -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.0.51a Source distribution

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql>

To fix this problem, assign a password to the MySQL root account as shown below. Execute the mysql_secure_installation script, which performs the following activities:
Assigns the root password
Removes the anonymous user
Disallows root login from remote machines
Removes the default sample test database

# /usr/bin/mysql_secure_installation

Output of mysql_secure_installation script:


NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MySQL
      SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!

In order to log into MySQL to secure it, we'll need the current
password for the root user.  If you've just installed MySQL, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none):
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MySQL
root user without the proper authorisation.

Set root password? [Y/n] Y
New password:            [Note: Enter the mysql root password here]
Re-enter new password:
Password updated successfully!
Reloading privilege tables..
 ... Success!

By default, a MySQL installation has an anonymous user, allowing anyone
to log into MySQL without having to have a user account created for
them.  This is intended only for testing, and to make the installation
go a bit smoother.  You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] Y
 ... Success!

Normally, root should only be allowed to connect from 'localhost'.  This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] Y
 ... Success!

By default, MySQL comes with a database named 'test' that anyone can
access.  This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] Y
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] Y
 ... Success!

Cleaning up...

All done!  If you've completed all of the above steps, your MySQL
installation should now be secure.

Thanks for using MySQL!

Verify the MySQL post-install activities:


# mysql -u root
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: NO)
[Note: root access without password is denied]

# mysql -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 13
Server version: 5.0.51a Source distribution

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
+--------------------+
2 rows in set (0.00 sec)
[Note: test database is removed]
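mysql_secure_installation is interactive. For unattended setups, roughly the same hardening steps can be expressed as plain SQL, shown here for a MySQL 5.0-era server. This is a sketch; secure_sql is a hypothetical helper, and you should review the statements before piping them into mysql:

```shell
#!/bin/sh
# Print SQL that roughly mirrors the mysql_secure_installation steps:
# set the root password, drop anonymous users, restrict root to local
# logins, and remove the sample test database.
# secure_sql is a hypothetical helper name for illustration.
secure_sql() {
    pass=$1
    cat <<EOF
UPDATE mysql.user SET Password = PASSWORD('$pass') WHERE User = 'root';
DELETE FROM mysql.user WHERE User = '';
DELETE FROM mysql.user WHERE User = 'root' AND Host NOT IN ('localhost', '127.0.0.1');
DROP DATABASE IF EXISTS test;
FLUSH PRIVILEGES;
EOF
}
```

Usage: `secure_sql 'new-password' | mysql -u root` (run before a root password exists, or add `-p` afterwards).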

5. Upgrade MySQL using Yum


Check whether MySQL is already installed.
# rpm -qa | grep -i mysql

Check whether a later version of MySQL is available for installation using yum.
# yum check-update mysql-server

Upgrade MySQL to the latest version using yum.


# yum update mysql-server

6. Install PHP using Yum

# yum install php

Output of yum install php:


Loaded plugins: refresh-packagekit
Setting up Install Process
Parsing package install arguments
Resolving Dependencies
--> Running transaction check
---> Package php.i386 0:5.2.6-2.fc9 set to be updated
--> Processing Dependency: php-common = 5.2.6-2.fc9 for package: php
--> Processing Dependency: php-cli = 5.2.6-2.fc9 for package: php
--> Running transaction check
---> Package php-common.i386 0:5.2.6-2.fc9 set to be updated
---> Package php-cli.i386 0:5.2.6-2.fc9 set to be updated
--> Finished Dependency Resolution

Dependencies Resolved

=============================================================================
 Package         Arch      Version          Repository     Size
=============================================================================
Installing:
 php             i386      5.2.6-2.fc9      updates        1.2 M
Installing for dependencies:
 php-cli         i386      5.2.6-2.fc9      updates        2.3 M
 php-common      i386      5.2.6-2.fc9      updates        228 k

Transaction Summary
=============================================================================
Install   3 Package(s)
Update    0 Package(s)
Remove    0 Package(s)

Total download size: 3.8 M
Is this ok [y/N]: y
Downloading Packages:
(1/3): php-common-5.2.6-2.fc9.i386.rpm   | 228 kB    00:00
(2/3): php-5.2.6-2.fc9.i386.rpm          | 1.2 MB    00:00
(3/3): php-cli-5.2.6-2.fc9.i386.rpm      | 2.3 MB    00:00
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : php-common    [1/3]
  Installing : php-cli       [2/3]
  Installing : php           [3/3]

Installed: php.i386 0:5.2.6-2.fc9
Dependency Installed: php-cli.i386 0:5.2.6-2.fc9 php-common.i386 0:5.2.6-2.fc9
Complete!

Verify that PHP was installed successfully.


# rpm -qa | grep -i php
php-cli-5.2.6-2.fc9.i386
php-5.2.6-2.fc9.i386
php-common-5.2.6-2.fc9.i386

Install the MySQL module for PHP.


# yum search php-mysql
Loaded plugins: refresh-packagekit
=========== Matched: php-mysql =============
php-mysql.i386 : A module for PHP applications that use MySQL databases

# yum install php-mysql

Output of yum install php-mysql:


Loaded plugins: refresh-packagekit
Setting up Install Process
Parsing package install arguments
Resolving Dependencies
--> Running transaction check
---> Package php-mysql.i386 0:5.2.6-2.fc9 set to be updated
--> Processing Dependency: php-pdo for package: php-mysql
--> Running transaction check
---> Package php-pdo.i386 0:5.2.6-2.fc9 set to be updated
--> Finished Dependency Resolution

Dependencies Resolved

=============================================================================
 Package        Arch      Version          Repository     Size
=============================================================================
Installing:
 php-mysql      i386      5.2.6-2.fc9      updates        81 k
Installing for dependencies:
 php-pdo        i386      5.2.6-2.fc9      updates        62 k

Transaction Summary
=============================================================================
Install   2 Package(s)
Update    0 Package(s)
Remove    0 Package(s)

Total download size: 143 k
Is this ok [y/N]: y
Downloading Packages:
(1/2): php-pdo-5.2.6-2.fc9.i386.rpm     | 62 kB    00:00
(2/2): php-mysql-5.2.6-2.fc9.i386.rpm   | 81 kB    00:00
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : php-pdo      [1/2]
  Installing : php-mysql    [2/2]

Installed: php-mysql.i386 0:5.2.6-2.fc9
Dependency Installed: php-pdo.i386 0:5.2.6-2.fc9
Complete!

If you need additional PHP modules, install them using yum as shown below.
# yum install php-common php-mbstring php-mcrypt php-devel php-xml php-gd
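If you script your builds, you can check which required PHP modules are still missing before running yum. A sketch; missing_php_mods is a hypothetical helper that compares a required list against `php -m` output read on stdin:

```shell
#!/bin/sh
# Report which of the required PHP extensions (given as arguments) do not
# appear in `php -m` output read on stdin.
# missing_php_mods is a hypothetical helper name for illustration.
missing_php_mods() {
    loaded=$(cat)                             # capture the `php -m` listing
    for mod in "$@"; do
        # -i: case-insensitive, -x: whole-line match
        echo "$loaded" | grep -qix "$mod" || echo "missing: $mod"
    done
}
```

Usage: `php -m | missing_php_mods mysql mbstring gd`, then `yum install` whatever it reports (with the `php-` prefix, e.g. php-gd).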

7. Upgrade PHP using Yum


Check whether PHP is installed.
# rpm -qa | grep -i php

Check whether a later version of PHP is available for installation using yum.
# yum check-update php

Upgrade PHP to the latest version using yum.


# yum update php

Upgrade any additional PHP modules that you've installed using yum.
# yum check-update php-common php-mbstring php-mcrypt php-devel php-xml php-gd
# yum update php-common php-mbstring php-mcrypt php-devel php-xml php-gd

Verify the PHP installation by creating a test.php file as shown below.


# cat /var/www/html/test.php
<?php phpinfo(); ?>

Invoke test.php from the browser at http://{lamp-server-ip}/test.php , which will display all the PHP configuration information and installed modules.

39.Template to track your hardware assets: If you are managing more than one piece of equipment in your organization, it is very important to document and track ALL information about the servers effectively. In this article, I have listed 36 attributes that need to be tracked for your equipment, with an explanation of why each needs to be tracked. I have also provided a spreadsheet template with these fields that will give you a jumpstart.

36 Items To Capture For Practical Hardware Asset Tracking


by Ramesh Natarajan on August 18, 2008

If you are managing more than one piece of equipment in your organization, it is very important to document and track ALL information about the servers effectively. In this article, I have listed 36 attributes that need to be tracked for your equipment, with an explanation of why each needs to be tracked. I have also provided a spreadsheet template with these fields that will give you a jumpstart.

Before getting into the details of what needs to be tracked, let us look at a few reasons why you should document ALL your equipment. Identifying WHAT needs to be tracked is far more important than HOW you track it. Don't get trapped into researching the best available asset tracking software. Keep it simple and use a spreadsheet for tracking. Once you have documented everything, you can always find a software tool later and export this data into it.

Sysadmins hate to document anything. They would rather spend time exploring cool new technology than documenting their current hardware and environment. But a seasoned sysadmin knows that spending time documenting the details of the equipment is going to save a lot of time in the future when there is a problem. Never assume anything. When it comes to documentation, the more details you can add, the better.

Don't create the document because your boss insists on it. Instead, create it because you truly believe it will add value to you and your team. If you document without understanding or believing in the purpose, you will leave out a lot of critical details, which will eventually make the document worthless.

Once you've captured the attributes mentioned below for ALL your servers, switches, firewalls and other equipment, you can use this master list to track any future enterprise-wide implementation or changes. For example, if you are rolling out a new backup strategy throughout your enterprise, add a new column called "backup" and mark it Yes or No to track whether that specific action has been implemented on each particular piece of equipment.

I have arranged the 36 items into 9 different groups and provided a sample value next to the field name within parentheses. These fields and groupings are just guidelines. If required, modify them accordingly to track additional attributes specific to your environment.

Equipment Detail
(1) Description (Production CRM DB Server) - This field should explain the purpose of the equipment. Even a non-IT person should be able to identify the equipment based on this description.
(2) Host Name (prod-crm-db-srv) - The real host name of the equipment as defined at the OS level.
(3) Department (Sales) - Which department does this equipment belong to?
(4) Manufacturer (DELL) - Manufacturer of the equipment.
(5) Model (PowerEdge 2950) - Model of the equipment.
(6) Status (Active) - The current status of the equipment. Use this field to identify whether the equipment is in one of the following states:
    Active - Currently in use
    Retired - Old equipment, not used anymore
    Available - Old/new equipment, ready and available for use
(7) Category (Server) - I primarily use this to track the type of equipment. The value in this field could be one of the following, depending on the equipment: Server, Switch, Power Circuit, Router, Firewall, etc.

Tag/Serial#
For tracking purposes, different vendors use different names for serial numbers: Serial Number, Part Number, Asset Number, Service Tag, Express Code, etc. For example, DELL tracks its equipment using a Service Tag and an Express Code. So, if the majority of the equipment in your organization is from DELL, it makes sense to have separate columns for Service Tag and Express Code.
(8) Serial Number
(9) Part Number
(10) Service TAG
(11) Express Code
(12) Company Asset TAG - Every organization may have its own way of tracking systems using a bar code or a custom asset tracking number. Use this field to track the equipment using the code assigned by your company.

Location
(13) Physical Location (Los Angeles) - Use this field to specify the physical location of the server. If you have multiple data centers in different cities, use the city name to track it.
(14) Cage/Room # - The cage or room number where the equipment is located.
(15) Rack # - If there are multiple racks inside your data center, specify the rack # where the equipment is located. If your racks don't have numbers, create your own numbering scheme.
(16) Rack Position - This indicates the exact location of the server within the rack. For example, the server at the bottom of the rack has rack position #1 and the one above it is #2.

Network
(17) Private IP (192.168.100.1) - Specify the internal IP address of the equipment.
(18) Public IP - Specify the external IP address of the equipment.
(19) NIC (GB1, Slot1/Port1) - Tracking this information is very helpful when someone accidentally pulls a cable from the server (if this has never happened to you, it is only a matter of time). Using this field, you will know exactly where to plug the cable back in. If the server has more than one network connection, specify all the NICs as comma-separated values. In this example (GB1, Slot1/Port1), the server has two ethernet cables connected: the first to the on-board NIC marked GB1, and the second to Port #1 on the NIC card inserted in PCI Slot #1. Even when the server has only one ethernet cable connected, specify the port # to which it is connected. For example, most DELL servers come with two on-board NICs labeled GB1 and GB2, so you should know to which NIC you've connected your ethernet cable.
(20) Switch/Port (Switch1/Port10, Switch4/Port15) - Using the NIC field above, you've tracked the exact port where one end of the ethernet cable is connected on the server. Now, you should track where the other end of the cable is connected. In this example, the cable connected to the server on GB1 goes to Port 10 on Switch 1, and the cable connected to Port #1 of PCI Slot #1 goes to Port 15 on Switch 4.
(21) Nagios Monitored? (Yes) - Use this field to indicate whether the equipment is monitored through any monitoring software.
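Part of the NIC column can be filled in from the running system rather than by hand. A sketch; nic_rows is a hypothetical helper that turns `ip -o link` output (read on stdin) into one "interface,MAC" line per NIC:

```shell
#!/bin/sh
# Build "interface,mac" rows for the asset sheet from `ip -o link`
# output read on stdin. nic_rows is a hypothetical helper name.
nic_rows() {
    awk '/link\/ether/ {
        iface = $2
        sub(/:$/, "", iface)                     # "eth0:" -> "eth0"
        for (i = 1; i <= NF; i++)
            if ($i == "link/ether") {            # MAC follows this token
                print iface "," $(i+1)
                break
            }
    }'
}
```

Usage: `ip -o link | nic_rows`. To fill in the switch/port end, `ethtool -p eth0` blinks the port LED on most NICs, which helps a colleague in the datacenter spot the right switch port.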

Storage
(22) SAN/NAS Connected? (Yes) Use this field to track whether a particular server is connected to external storage.
(23) Total Drive Count (4) This indicates the total number of internal drives on the server, which can come in very handy for capacity management. For example, some DELL servers come with only 6 slots for internal hard drives. In this example, just by looking at the document, we know that there are 4 disk drives in the server and there is room to add 2 more.

OS Detail
(24) OS (Linux) Use this field to track the OS that is running on the equipment, e.g. Linux, Windows, Cisco IOS.
(25) OS Version (Red Hat Enterprise Linux AS release 4 (Nahant Update 5)) The exact version of the OS.

Warranty
(26) Warranty Start Date
(27) Warranty End Date

Purchase & Lease


(28) Date of Purchase If you have purchased the equipment, fill out the date of purchase and the price.
(29) Purchase Price
(30) Lease Begin Date If you have leased the equipment, fill out all the lease details.
(31) Lease Expiry Date
(32) Leasing Company The company that owns the lease on this equipment.
(33) Buy-Out Option ($1) Is this a dollar-one buy-out or a Fair Market Value purchase? This will give you an idea of whether to start planning for new equipment after the lease expiry date or to keep the existing equipment.
(34) Monthly Lease Payment

Additional Information
(35) URL If this is a web server, give the URL to access the web application running on the system. If this is a switch or router, specify the admin URL.
(36) Notes Enter additional notes about the equipment that don't fit under any of the above fields.
It may be very tempting to add username and password fields to this spreadsheet. For security reasons, never use this spreadsheet to store the root or administrator password of the equipment.
Asset Tracking Excel Template 1.0 This Excel template contains all the 36 fields mentioned above to give you a jumpstart on tracking equipment in your enterprise. If you convert this spreadsheet to other formats used by different tools, send it to me and I'll add it here and give credit to you.
I hope you find this article helpful. Forward this to the appropriate person in your organization who may benefit from tracking equipment effectively. Also, if you think I've missed any attribute to track in the above list, please let me know.
40. Disable SELinux: If you don't understand how SELinux works and the fundamental details of how to configure it, keeping it enabled will cause a lot of issues. Until you understand the implementation details of SELinux you may want to disable it to avoid some unnecessary issues as explained here.
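The spreadsheet can also be exported as a CSV and queried from the shell. Below is a minimal sketch, assuming a hypothetical four-field layout (hostname, location, rack, rack position) and made-up rows; adjust the field positions to match your own template:

```shell
#!/bin/sh
# Hypothetical 4-field inventory (hostname,location,rack,rack-position);
# the file name and sample rows are made up for this sketch.
cat > /tmp/assets.csv <<'EOF'
web01,Los Angeles,Rack3,1
db01,Los Angeles,Rack3,2
EOF

# Look up the rack and rack position for one hostname.
awk -F, '$1 == "db01" { print $3 " position " $4 }' /tmp/assets.csv
```

This prints `Rack3 position 2`, which answers the "where do I find this box?" question without opening the spreadsheet.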

4 Effective Methods to Disable SELinux Temporarily or Permanently


by Ramesh Natarajan on June 1, 2009
Object 180 Object 182 Object 181

On some Linux distributions SELinux is enabled by default, which may cause some unwanted issues if you don't understand how SELinux works and the fundamental details of how to configure it. I strongly recommend that you understand SELinux and implement it in your environment. But until you understand its implementation details, you may want to disable it to avoid some unnecessary issues. To disable SELinux you can use any one of the 4 different methods mentioned in this article.

SELinux enforces security policies, including the mandatory access controls defined by the US Department of Defense, using the Linux Security Module (LSM) built into the Linux kernel. Every file and process in the system is tagged with a specific label that is used by SELinux. You can use ls -Z to view those labels as shown below.
# ls -Z /etc/
-rw-r--r--  root root  system_u:object_r:etc_t:s0                 a2ps.cfg
-rw-r--r--  root root  system_u:object_r:adjtime_t:s0             adjtime
-rw-r--r--  root root  system_u:object_r:etc_aliases_t:s0         aliases
drwxr-x---  root root  system_u:object_r:auditd_etc_t:s0          audit
drwxr-xr-x  root root  system_u:object_r:etc_runtime_t:s0         blkid
drwxr-xr-x  root root  system_u:object_r:bluetooth_conf_t:s0      bluetooth
drwx------  root root  system_u:object_r:system_cron_spool_t:s0   cron.d
-rw-rw-r--  root disk  system_u:object_r:amanda_dumpdates_t:s0    dumpdates

Method 1: Disable SELinux Temporarily


To disable SELinux temporarily, modify the /selinux/enforce file as shown below. Please note that this setting will be gone after a reboot of the system.
# cat /selinux/enforce
1
# echo 0 > /selinux/enforce
# cat /selinux/enforce
0

You can also use the setenforce command as shown below to disable SELinux. Possible parameters to setenforce are: Enforcing, Permissive, 1 (enforcing) or 0 (permissive).
# setenforce 0

Method 2: Disable SELinux Permanently


To disable SELinux permanently, modify /etc/selinux/config and set SELINUX=disabled as shown below. Once you make any changes to /etc/selinux/config, reboot the server for the changes to take effect.
# cat /etc/selinux/config
SELINUX=disabled
SELINUXTYPE=targeted
SETLOCALDEFS=0
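This permanent edit can also be scripted with sed. The sketch below works on a mock copy of the config file so it is safe to run anywhere; on a real system you would point it at /etc/selinux/config itself and then reboot:

```shell
#!/bin/sh
# Mock copy of /etc/selinux/config for illustration only.
cat > /tmp/selinux-config <<'EOF'
SELINUX=enforcing
SELINUXTYPE=targeted
EOF

# Rewrite the SELINUX line to disabled, whatever its current value.
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /tmp/selinux-config
grep '^SELINUX=' /tmp/selinux-config
```

The anchored `^SELINUX=` pattern avoids accidentally rewriting the SELINUXTYPE line as well.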

Following are the possible values for the SELINUX variable in the /etc/selinux/config file:
enforcing - The security policy is always enforced.
permissive - This just simulates the enforcing policy by only printing warning messages and not really enforcing SELinux. This is good to first see how SELinux works and later figure out what policies should be enforced.
disabled - Completely disables SELinux.

Following are the possible values for the SELINUXTYPE variable in the /etc/selinux/config file. This indicates the type of policy used by SELinux:
targeted - This policy protects only specific targeted network daemons.
strict - This is for maximum SELinux protection.

Method 3: Disable SELinux from the Grub Boot Loader


If you can't locate the /etc/selinux/config file on your system, you can disable SELinux by passing selinux=0 as a kernel parameter to the Grub boot loader, as shown below.
# cat /boot/grub/grub.conf
default=0
timeout=5
splashimage=(hd0,0)/boot/grub/splash.xpm.gz
hiddenmenu
title Enterprise Linux Enterprise Linux Server (2.6.18-92.el5PAE)
        root (hd0,0)
        kernel /boot/vmlinuz-2.6.18-92.el5PAE ro root=LABEL=/ rhgb quiet selinux=0
        initrd /boot/initrd-2.6.18-92.el5PAE.img
title Enterprise Linux Enterprise Linux Server (2.6.18-92.el5)
        root (hd0,0)
        kernel /boot/vmlinuz-2.6.18-92.el5 ro root=LABEL=/ rhgb quiet selinux=0
        initrd /boot/initrd-2.6.18-92.el5.img
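Appending selinux=0 to the kernel lines can be scripted too. This is a sketch against a one-line mock grub.conf fragment (the real file is /boot/grub/grub.conf); the guard makes the edit idempotent, so running it twice does not append the parameter twice:

```shell
#!/bin/sh
# Mock grub.conf fragment for illustration only.
cat > /tmp/grub.conf <<'EOF'
kernel /boot/vmlinuz-2.6.18-92.el5 ro root=LABEL=/ rhgb quiet
EOF

# Append selinux=0 to every kernel line that does not already carry it.
sed -i '/^kernel /{/selinux=0/!s/$/ selinux=0/}' /tmp/grub.conf
cat /tmp/grub.conf
```

Always keep a backup of the real grub.conf before making scripted edits to it, since a broken boot loader entry is painful to recover from.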

Method 4: Disable Only a Specific Service in SELinux HTTP/Apache


If you are not interested in disabling the whole of SELinux, you can also disable it only for a specific service. For example, to disable SELinux for the HTTP/Apache service, modify the httpd_disable_trans variable in the /etc/selinux/targeted/booleans file. Set the httpd_disable_trans variable to 1 as shown below.
# grep httpd /etc/selinux/targeted/booleans
httpd_builtin_scripting=1
httpd_disable_trans=1
httpd_enable_cgi=1
httpd_enable_homedirs=1
httpd_ssi_exec=1
httpd_tty_comm=0
httpd_unified=1
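Before flipping a boolean it can be useful to read its current value in a script-friendly way. A sketch using a mock copy of the booleans file (point the awk at /etc/selinux/targeted/booleans on a real system):

```shell
#!/bin/sh
# Mock of /etc/selinux/targeted/booleans for illustration only.
cat > /tmp/booleans <<'EOF'
httpd_builtin_scripting=1
httpd_disable_trans=0
httpd_enable_cgi=1
EOF

# Read the current value of one boolean before changing it.
awk -F= '$1 == "httpd_disable_trans" { print $2 }' /tmp/booleans
```

Here the mock value is 0, so the awk prints `0`; on a real system you would then use setsebool to change it as described below.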

Set SELinux boolean value using setsebool command as shown below. Make sure to restart the HTTP service after this change.
# setsebool httpd_disable_trans 1
# service httpd restart

41. Install PHP5 from source: This is a step-by-step guide to installing PHP5 from source in a UNIX environment.

Instruction Guide to Install PHP5 from Source on Linux


by Ramesh Natarajan on July 31, 2008

Object 183

Object 185

Object 184

All Linux distributions come with PHP. However, it is recommended to download the latest PHP source code, then compile and install it on Linux yourself. This will make it easier to upgrade PHP on an ongoing basis, immediately after a new patch or release is available for download from PHP. This article explains how to install PHP5 from source on Linux.

1. Prerequisites
Apache web server should already be installed. Refer to my previous post on How to install Apache 2 on Linux. If you are planning to use PHP with MySQL, you should have MySQL already installed. I wrote about How to install MySQL on Linux.

2. Download PHP
Download the latest source code from the PHP Download page. The current stable release is 5.2.6. Move the source to /usr/local/src and extract it as shown below.
# bzip2 -d php-5.2.6.tar.bz2
# tar xvf php-5.2.6.tar
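The decompress and extract steps can be collapsed into one command, since GNU tar handles bzip2 archives directly with the j flag (`tar xjf php-5.2.6.tar.bz2`). The same one-step pattern is sketched below with a throwaway gzip archive (z flag) so it only depends on gzip being installed:

```shell
#!/bin/sh
cd /tmp
mkdir -p demo-src && echo 'hello' > demo-src/README
tar czf demo-src.tar.gz demo-src   # compress and archive in one step
rm -rf demo-src

tar xzf demo-src.tar.gz            # extract and decompress in one step
cat demo-src/README
```

The single-step form also leaves the compressed archive intact, whereas `bzip2 -d` replaces the .tar.bz2 with a plain .tar.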

3. Install PHP
View all configuration options available for PHP using ./configure --help (two hyphens in front of help). The most commonly used option is --prefix={install-dir-name} to install PHP in a user defined directory.
# cd php-5.2.6
# ./configure --help

In the following example, PHP will be compiled and installed under the default location /usr/local/lib with Apache configuration and MySQL support.
# ./configure --with-apxs2=/usr/local/apache2/bin/apxs --with-mysql
# make
# make install
# cp php.ini-dist /usr/local/lib/php.ini

4. Configure httpd.conf for PHP


Modify the /usr/local/apache2/conf/httpd.conf to add the following:
<FilesMatch "\.ph(p[2-6]?|tml)$">
    SetHandler application/x-httpd-php
</FilesMatch>

Make sure the httpd.conf has the following line that will get automatically inserted during the PHP installation process.
LoadModule php5_module modules/libphp5.so
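A quick grep confirms whether that LoadModule line actually made it into the config. The sketch below uses a mock httpd.conf fragment so it is runnable anywhere; point the grep at /usr/local/apache2/conf/httpd.conf on a real install:

```shell
#!/bin/sh
# Mock httpd.conf fragment for illustration only.
cat > /tmp/httpd.conf <<'EOF'
LoadModule php5_module modules/libphp5.so
EOF

# Check that the PHP module line is present before restarting Apache.
if grep -q '^LoadModule php5_module' /tmp/httpd.conf; then
    echo "PHP module configured"
else
    echo "PHP module missing"
fi
```

Running this kind of check before the restart saves you from bouncing Apache only to discover PHP was never loaded.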

Restart the apache as shown below:

# /usr/local/apache2/bin/apachectl restart

5. Verify PHP Installation


Create a test.php under /usr/local/apache2/htdocs with the following content
# vi test.php
<?php
    phpinfo();
?>

Go to http://local-host/test.php , which will show detailed information about all the PHP configuration options and PHP modules installed on the system.

6. Troubleshooting during installation


Error 1: configure: error: xml2-config not found: While performing the ./configure during PHP installation, you may get the following error:
# ./configure --with-apxs2=/usr/local/apache2/bin/apxs --with-mysql
Configuring extensions
checking whether to enable LIBXML support... yes
checking libxml2 install dir... no
checking for xml2-config path...
configure: error: xml2-config not found. Please check your libxml2 installation.

Install the libxml2-devel and zlib-devel packages as shown below to fix this issue.
# rpm -ivh /home/downloads/linux-iso/libxml2-devel-2.6.26-2.1.2.0.1.i386.rpm /home/downloads/linux-iso/zlib-devel-1.2.3-3.i386.rpm
Preparing...                ########################################### [100%]
   1:zlib-devel             ########################################### [ 50%]
   2:libxml2-devel          ########################################### [100%]

Error 2: configure: error: Cannot find MySQL header files. While performing the ./configure during PHP installation, you may get the following error:
# ./configure --with-apxs2=/usr/local/apache2/bin/apxs --with-mysql
checking for MySQL UNIX socket location... /var/lib/mysql/mysql.sock
configure: error: Cannot find MySQL header files under yes.
Note that the MySQL client library is not bundled anymore!

Install the MySQL-devel-community package as shown below to fix this issue.


# rpm -ivh /home/downloads/MySQL-devel-community-5.1.25-0.rhel5.i386.rpm
Preparing...                ########################################### [100%]
   1:MySQL-devel-community  ########################################### [100%]

42. Install MySQL from source: This is a step-by-step guide to installing MySQL in a UNIX environment.

Howto Install MySQL on Linux


by Ramesh Natarajan on July 6, 2008
Object 186

Object 188

Object 187

Most Linux distros come with MySQL. If you want to use MySQL, my recommendation is that you download the latest version of MySQL and install it yourself. Later you can upgrade it to the latest version when it becomes available. In this article, I will explain how to install the latest free community edition of MySQL on the Linux platform.

1. Download the latest stable release of MySQL


Download MySQL from mysql.com . Please download the community edition of MySQL for your appropriate Linux platform. I downloaded the Red Hat Enterprise Linux 5 RPM (x86). Make sure to download the MySQL Server, Client, and "Headers and libraries" packages from the download page:
MySQL-client-community-5.1.25-0.rhel5.i386.rpm
MySQL-server-community-5.1.25-0.rhel5.i386.rpm
MySQL-devel-community-5.1.25-0.rhel5.i386.rpm

2. Remove the existing default MySQL that came with the Linux distro
Do not perform this on a system where the MySQL database is being used by some application.
[local-host]# rpm -qa | grep -i mysql
mysql-5.0.22-2.1.0.1
mysqlclient10-3.23.58-4.RHEL4.1
[local-host]# rpm -e mysql --nodeps
warning: /etc/my.cnf saved as /etc/my.cnf.rpmsave
[local-host]# rpm -e mysqlclient10

3. Install the downloaded MySQL package


Install the MySQL Server and Client packages as shown below.
[local-host]# rpm -ivh MySQL-server-community-5.1.25-0.rhel5.i386.rpm MySQL-client-community-5.1.25-0.rhel5.i386.rpm
Preparing...                ########################################### [100%]
   1:MySQL-client-community ########################################### [ 50%]
   2:MySQL-server-community ########################################### [100%]

This will also display the following output and start the MySQL daemon automatically.
PLEASE REMEMBER TO SET A PASSWORD FOR THE MySQL root USER !
To do so, start the server, then issue the following commands:
/usr/bin/mysqladmin -u root password 'new-password'
/usr/bin/mysqladmin -u root -h medica2 password 'new-password'

Alternatively you can run:
/usr/bin/mysql_secure_installation

which will also give you the option of removing the test
databases and anonymous user created by default.  This is
strongly recommended for production servers.

See the manual for more instructions.

Please report any problems with the /usr/bin/mysqlbug script!
The latest information about MySQL is available at http://www.mysql.com/
Support MySQL by buying support/licenses from http://shop.mysql.com/

Starting MySQL.                                            [  OK  ]
Giving mysqld 2 seconds to start

Install the Header and Libraries that are part of the MySQL-devel packages.
[local-host]# rpm -ivh MySQL-devel-community-5.1.25-0.rhel5.i386.rpm
Preparing...                ########################################### [100%]
   1:MySQL-devel-community  ########################################### [100%]

Note: When I was compiling PHP with MySQL option from source on the Linux system, it failed with the following error. Installing the MySQL-devel-community package fixed this problem in installing PHP from source.
configure: error: Cannot find MySQL header files under yes. Note that the MySQL client library is not bundled anymore!

4. Perform post-install security activities on MySQL.


At a bare minimum you should set a password for the root user as shown below:
[local-user]# /usr/bin/mysqladmin -u root password 'My2Secure$Password'

The best option is to run the mysql_secure_installation script, which will take care of all the typical security related items on MySQL as shown below. At a high level it does the following:
Change the root password
Remove the anonymous user
Disallow root login from remote machines
Remove the default sample test database

[local-host]# /usr/bin/mysql_secure_installation

NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MySQL
      SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!

In order to log into MySQL to secure it, we'll need the current
password for the root user.  If you've just installed MySQL, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none):
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MySQL
root user without the proper authorisation.

You already have a root password set, so you can safely answer 'n'.

Change the root password? [Y/n] Y
New password:
Re-enter new password:
Password updated successfully!
Reloading privilege tables..
 ... Success!

By default, a MySQL installation has an anonymous user, allowing anyone
to log into MySQL without having to have a user account created for
them.  This is intended only for testing, and to make the installation
go a bit smoother.  You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] Y
 ... Success!

Normally, root should only be allowed to connect from 'localhost'.  This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] Y
 ... Success!

By default, MySQL comes with a database named 'test' that anyone can
access.  This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] Y
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] Y
 ... Success!

Cleaning up...

All done!  If you've completed all of the above steps, your MySQL
installation should now be secure.

Thanks for using MySQL!

5. Verify the MySQL installation:


You can check the MySQL installed version by performing mysql -V as shown below:
[local-host]# mysql -V
mysql  Ver 14.14 Distrib 5.1.25-rc, for redhat-linux-gnu (i686) using readline 5.1

Connect to the MySQL database using the root user and make sure the connection is successful.
[local-host]# mysql -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 13
Server version: 5.1.25-rc-community MySQL Community Server (GPL)

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql>

Follow the steps below to stop and start MySQL


[local-host]# service mysql status
MySQL running (12588)                                      [  OK  ]
[local-host]# service mysql stop
Shutting down MySQL.                                       [  OK  ]
[local-host]# service mysql start
Starting MySQL.                                            [  OK  ]

43. Launch Linux clients on Windows: If you are using an SSH client to connect to a Linux server from your Windows laptop, sometimes it may be necessary to launch a UI application on the remote Linux server, but display the UI on the Windows laptop. Cygwin can be used to install software on Linux from Windows and launch Linux X client software on Windows.

Launch software installers on Linux from Windows using Cygwin


by Ramesh Natarajan on June 18, 2008
Object 189

Object 191

Object 190

If you are using an SSH client to connect to a Linux server from your Windows laptop, sometimes it may be necessary to launch a UI application on the remote Linux server, but display the UI on the Windows laptop. Following are two typical reasons to perform this activity:
1. Install software on Linux from Windows: To launch a UI based installer to install software on a remote Linux server from a Windows laptop. For example, a DBA might want to install Oracle on a Linux server where only the SSH connection to the remote server is available and not the console.
2. Launch Linux X client software on Windows: To launch X client software (for example, xclock) located on your remote Linux server and display it on the Windows laptop.
Cygwin can be used to perform the above activities. The following 15 steps explain how to install Cygwin and launch software installers on Linux from Windows. Go to Cygwin and download setup.exe. Launch setup.exe on Windows and follow the steps mentioned below.
1. Welcome Screen. Click next on the Cygwin installation welcome screen.

2. Choose a download source. Select the Install from internet option

3. Choose Installation directory. I selected C:\cygwin as shown below. This is the location where the Cygwin software will be installed on the Windows.

4. Select Local Package Install directory. This is the directory where the installation files will be downloaded and stored.

5. Select Connection Type. If you are connected to internet via proxy, enter the information. If not, select Direct Connection.

6. Choose a download site. You can either choose a download site that is closer to you or leave the default selection.

7. Download Progress. This screen will display the progress of the download.

8. Select Packages to install. I recommend that you leave the default selection here.

9. Installation Progress. This screen will display the progress of the installation.

10. Installation Completion.

11. Start the Cygwin Bash Shell on Windows. Click on cygwin icon on the desktop (or) Click on Start -> All Programs -> Cygwin -> Cygwin Bash shell, which will display the Cygwin Bash Shell window. 12. Start the X Server on Windows. From the Cygwin Bash Shell, type startx to start the X Server as shown below. Once the X Server is started, leave this window open and do not close it.

13. Xterm window: startx from the above step will open a new xterm window automatically as shown below.

14. SSH to the remote Linux host from the Xterm window as shown below. Please note that you should pass the -Y parameter to ssh. -Y parameter enables trusted X11 forwarding.

jsmith@windows-laptop ~
$ ssh -Y -l jsmith remote-host          <This is from the xterm on windows laptop>
jsmith@remotehost's password:
Warning: No xauth data; using fake authentication data for X11 forwarding.
Last login: Thu Jun 12 22:36:04 2008 from 192.168.1.102
/usr/bin/xauth:  creating new authority file /home/jsmith/.Xauthority

[remote-host]$ xclock &                 <Note that you are starting xclock on remote linux server>
[1] 12593
[remote-host]$

15. xclock on windows laptop. From the Linux host, launch the xclock software as shown above, which will display the xclock on the windows laptop as shown below.

Use the same method explained above to launch any software installer on Linux (for e.g. Oracle database installer) and get it displayed on the Windows laptop.


44. IPCS: IPC allows processes to communicate with each other. Processes can also communicate by having a file accessible to both of them; they can open and read/write the file, but this requires a lot of I/O operations that consume time. This article explains the different types of IPC and provides 10 ipcs command examples.

10 IPCS Command Examples (With IPC Introduction)


by Sasikala on August 12, 2010
Object 192

Object 194

Object 193

IPC stands for Inter-process Communication. This technique allows processes to communicate with each other. Since each process has its own address space and unique user space, how do processes communicate with each other? The answer is the kernel, the heart of the Linux operating system, which has access to the whole of memory. So we can request the kernel to allocate space which can be used to communicate between processes. Processes can also communicate by having a file accessible to both of them; they can open and read/write the file, but this requires a lot of I/O operations that consume time.

Different Types of IPCS


There are various IPC facilities which allow a process to communicate with other processes, either on the same computer or on a different computer in the same network.
Pipes Provide a way for processes to communicate with each other by exchanging messages. Named pipes provide a way for processes running on different computer systems to communicate over the network.
Shared Memory Processes can exchange values in the shared memory. One process creates a portion of memory which another process can access.
Message Queue A structured and ordered list of memory segments where processes store or retrieve data.
Semaphores Provide a synchronizing mechanism for processes that are accessing the same resource. No data is passed with a semaphore; it simply coordinates access to shared resources.
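Of these, the named pipe is the easiest to demonstrate from the shell, because mkfifo exposes it as a filesystem object. The sketch below passes one line from a background writer process to a reader through the kernel, with no regular file I/O involved:

```shell
#!/bin/sh
fifo=/tmp/demo.fifo
rm -f "$fifo"
mkfifo "$fifo"                      # create the named pipe as a filesystem object

echo "hello via fifo" > "$fifo" &   # writer: blocks until a reader opens the pipe
cat "$fifo"                         # reader: receives the message through the kernel
wait                                # reap the background writer
rm -f "$fifo"
```

This prints `hello via fifo`. Note that the writer blocks until a reader opens the other end, which is exactly the synchronization behavior that distinguishes a pipe from a plain file.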

10 IPCS Command Example


ipcs is a UNIX / Linux command which is used to list information about the inter-process communication facilities. The ipcs command provides a report on System V IPC (message queues, semaphores, and shared memory).

IPCS Example 1: List all the IPC facility


ipcs command with -a option lists all the IPC facilities which has read access for the current process. It provides details about message queue, semaphore and shared memory.
# ipcs -a

------ Shared Memory Segments --------
key        shmid       owner    perms   bytes   nattch   status
0xc616cc44 1056800768  oracle   660     4096    0
0x0103f577 323158020   root     664     966     1
0x0000270f 325713925   root     666     1       2

------ Semaphore Arrays --------
key        semid       owner    perms   nsems
0x0103eefd 0           root     664     1
0x0103eefe 32769       root     664     1
0x4b0d4514 1094844418  oracle   660     204

------ Message Queues --------
key        msqid       owner    perms   used-bytes   messages
0x000005a4 32768       root     644     0            0

Every IPC facility has a unique key and identifier, which is used to identify it.

IPCS Example 2: List all the Message Queue


ipcs with option -q, lists only message queues for which the current process has read access.
$ ipcs -q

------ Message Queues --------
key        msqid       owner    perms   used-bytes   messages
0x000005a4 32768       root     644     0            0

IPCS Example 3. List all the Semaphores


ipcs -s option is used to list the accessible semaphores.
# ipcs -s

------ Semaphore Arrays --------
key        semid       owner    perms   nsems
0x0103eefd 0           root     664     1
0x0103eefe 32769       root     664     1
0x4b0d4514 1094844418  oracle   660     204

IPCS Example 4. List all the Shared Memory


ipcs -m option with ipcs command lists the shared memories.
# ipcs -m

------ Shared Memory Segments --------
key        shmid       owner    perms   bytes   nattch   status
0xc616cc44 1056800768  oracle   660     4096    0
0x0103f577 323158020   root     664     966     1
0x0000270f 325713925   root     666     1       2

IPCS Example 5. Detailed information about an IPC facility


ipcs -i option provides detailed information about an ipc facility.
# ipcs -q -i 32768

Message Queue msqid=32768
uid=0   gid=0   cuid=0  cgid=0  mode=0644
cbytes=0        qbytes=65536    qnum=0  lspid=0 lrpid=0
send_time=Not set
rcv_time=Not set
change_time=Thu Aug  5 13:30:22 2010

Option -i with -q provides information about a particular message queue. Option -i with -s provides semaphore details. Option -i with -m provides details about a shared memory.

IPCS Example 6. Lists the Limits for IPC facility


ipcs -l option gives the system limits for each ipc facility.

# ipcs -m -l

------ Shared Memory Limits --------
max number of segments = 4096
max seg size (kbytes) = 67108864
max total shared memory (kbytes) = 17179869184
min seg size (bytes) = 1

The above command gives the limits for shared memory. -l can be combined with -q and -s to view the limits for message queue and semaphores respectively. Single option -l gives the limits for all three IPC facilities.
# ipcs -l
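The printed limits can also be pulled out individually for use in scripts with awk. A sketch, assuming the "label = value" wording shown above (the exact field text may vary between ipcs versions):

```shell
#!/bin/sh
# Extract one limit (max shared memory segment count) from `ipcs -m -l`.
# Assumes the util-linux "label = value" output format shown above.
ipcs -m -l | awk -F'= ' '/max number of segments/ { print $2 }'
```

This kind of one-liner is handy in capacity checks, for example to warn when the number of allocated segments approaches the limit.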

IPCS Example 7. List Creator and Owner Details for IPC Facility
ipcs -c option lists creator userid and groupid and owner userid and group id. This option can be combined with -m, -s and -q to view the creator details for specific IPC facility.
# ipcs -m -c

------ Shared Memory Segment Creators/Owners --------
shmid       perms   cuid     cgid       uid      gid
1056800768  660     oracle   oinstall   oracle   oinstall
323158020   664     root     root       root     root
325713925   666     root     root       root     root

IPCS Example 8. Process ids that accessed IPC facility recently


ipcs -p option displays creator id, and process id which accessed the corresponding ipc facility very recently.
# ipcs -m -p

------ Shared Memory Creator/Last-op --------
shmid       owner    cpid    lpid
1056800768  oracle   16764   5389
323158020   root     2354    2354
325713925   root     20666   20668

-p also can be combined with -m,-s or -q.

IPCS Example 9. Last Accessed Time


ipcs -t option displays last operation time in each ipc facility. This option can also be combined with -m, -s or -q to print for specific type of ipc facility. For message queue, -t option displays last sent and receive time, for shared memory it displays last attached (portion of memory) and detached timestamp and for semaphore it displays last operation and changed time details.
# ipcs -s -t

------ Semaphore Operation/Change Times --------
semid       owner    last-op                    last-changed
0           root     Thu Aug  5 12:46:52 2010   Tue Jul 13 10:39:41 2010
32769       root     Thu Aug  5 11:59:10 2010   Tue Jul 13 10:39:41 2010
1094844418  oracle   Thu Aug  5 13:52:59 2010   Thu Aug  5 13:52:59 2010

IPCS Example 10. Status of current usage


ipcs with the -u option displays the current usage for all the IPC facilities. This option can be combined with a specific option to display the status for a particular IPC facility.
# ipcs -u

------ Shared Memory Status --------
segments allocated 30
pages allocated 102
pages resident  77
pages swapped   0
Swap performance: 0 attempts     0 successes

------ Semaphore Status --------
used arrays = 49
allocated semaphores = 252

------ Messages: Status --------
allocated queues = 1
used headers = 0
used space = 0 bytes

45. Logical Volume Manager: Using LVM we can create logical partitions that can span across one or more physical hard drives. You can create and manage LVM using the vgcreate, lvcreate, and lvextend lvm2 commands as shown here.

How To Create LVM Using vgcreate, lvcreate, and lvextend lvm2 Commands
by Balakrishnan Mariyappan on August 5, 2010
Object 195

Object 197

Object 196

LVM stands for Logical Volume Manager. With LVM, we can create logical partitions that span across one or more physical hard drives. First, the hard drives are divided into physical volumes, then those physical volumes are combined together to create the volume group, and finally the logical volumes are created from the volume group. The LVM commands listed in this article are used under the Ubuntu distribution, but they are the same for other Linux distributions. Before we start, install the lvm2 package as shown below.
$ sudo apt-get install lvm2

To create an LVM, we need to run through the following steps:
Select the physical storage devices for LVM
Create the volume group from the physical volumes
Create logical volumes from the volume group

Select the Physical Storage Devices for LVM Use pvcreate, pvscan, pvdisplay Commands
In this step, we need to choose the physical volumes that will be used to create the LVM. We can create the physical volumes using pvcreate command as shown below.
$ sudo pvcreate /dev/sda6 /dev/sda7
  Physical volume "/dev/sda6" successfully created
  Physical volume "/dev/sda7" successfully created

As shown above two physical volumes are created /dev/sda6 and /dev/sda7. If the physical volumes are already created, you can view them using the pvscan command as shown below.
$ sudo pvscan
  PV /dev/sda6    lvm2 [1.86 GB]
  PV /dev/sda7    lvm2 [1.86 GB]
  Total: 2 [3.72 GB] / in use: 0 [0   ] / in no VG: 2 [3.72 GB]

You can view the list of physical volumes with attributes like size, physical extent size, total physical extent size, the free space, etc., using pvdisplay command as shown below.
$ sudo pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda6
  VG Name
  PV Size               1.86 GB / not usable 2.12 MB
  Allocatable           yes
  PE Size (KByte)       4096
  Total PE              476
  Free PE               456
  Allocated PE          20
  PV UUID               m67TXf-EY6w-6LuX-NNB6-kU4L-wnk8-NjjZfv

  --- Physical volume ---
  PV Name               /dev/sda7
  VG Name
  PV Size               1.86 GB / not usable 2.12 MB
  Allocatable           yes
  PE Size (KByte)       4096
  Total PE              476
  Free PE               476
  Allocated PE          0
  PV UUID               b031x0-6rej-BcBu-bE2C-eCXG-jObu-0Boo0x

Note: PE (Physical Extents) are nothing but equal-sized chunks. The default size of an extent is 4MB.

Create the Volume Group Use vgcreate, vgdisplay Commands


Volume groups are nothing but a pool of storage that consists of one or more physical volumes. Once you create the physical volume, you can create the volume group (VG) from these physical volumes (PV). In this example, the volume group vol_grp1 is created from the two physical volumes as shown below.
$ sudo vgcreate vol_grp1 /dev/sda6 /dev/sda7
  Volume group "vol_grp1" successfully created

LVM processes the storage in terms of extents. We can also change the extent size from the default 4MB using the -s flag. The vgdisplay command lists the created volume groups.
$ sudo vgdisplay
  --- Volume group ---
  VG Name               vol_grp1
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               3.72 GB
  PE Size               4.00 MB
  Total PE              952
  Alloc PE / Size       0 / 0
  Free  PE / Size       952 / 3.72 GB
  VG UUID               Kk1ufB-rT15-bSWe-5270-KDfZ-shUX-FUYBvR

LVM Create: Create Logical Volumes Use lvcreate, lvdisplay command


Now, everything is ready to create the logical volumes from the volume group. The following lvcreate command creates a logical volume of 20 extents (-l 20), which comes to 80MB with the default 4MB extent size.
$ sudo lvcreate -l 20 -n logical_vol1 vol_grp1
  Logical volume "logical_vol1" created
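As a quick sanity check on the numbers, the -l flag counts physical extents rather than bytes. A minimal sketch of the arithmetic (the variable names are just for illustration):

```shell
# -l 20 allocates 20 physical extents; with the default 4MB extent size
# that yields an 80MB logical volume.
extents=20
pe_size_mb=4
lv_size_mb=$((extents * pe_size_mb))
echo "${lv_size_mb}MB"   # prints 80MB
```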

Use the lvdisplay command as shown below to view the available logical volumes with their attributes.
$ sudo lvdisplay
  --- Logical volume ---
  LV Name                /dev/vol_grp1/logical_vol1
  VG Name                vol_grp1
  LV UUID                ap8sZ2-WqE1-6401-Kupm-DbnO-2P7g-x1HwtQ
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                80.00 MB
  Current LE             20
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:0

After creating the appropriate filesystem on the logical volume, it is ready to use for storage.
$ sudo mkfs.ext3 /dev/vol_grp1/logical_vol1

LVM resize: Change the size of the logical volumes Use lvextend Command
We can extend the size of a logical volume after creating it by using the lvextend utility as shown below. The following changes the size of the logical volume from 80MB to 100MB.
$ sudo lvextend -L100 /dev/vol_grp1/logical_vol1
  Extending logical volume logical_vol1 to 100.00 MB
  Logical volume logical_vol1 successfully resized

We can also grow a logical volume by a relative amount, as shown below.
$ sudo lvextend -L+100 /dev/vol_grp1/logical_vol1
  Extending logical volume logical_vol1 to 200.00 MB
  Logical volume logical_vol1 successfully resized
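Put numerically, the two flags differ in absolute versus relative sizing. A minimal sketch (the variables are purely illustrative):

```shell
# -L100 sets the logical volume size TO 100MB (absolute);
# -L+100 ADDS 100MB to the current size (relative).
current_mb=100                     # size after the earlier lvextend
absolute_mb=100                    # result of -L100
relative_mb=$((current_mb + 100))  # result of -L+100
echo "absolute=${absolute_mb}MB relative=${relative_mb}MB"
```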

46. Tcpdump examples: tcpdump is a network packet analyzer. tcpdump allows us to save the packets that are captured, so that we can use them for future analysis. The saved file can be viewed with the same tcpdump command. We can also use open source software like Wireshark to read the tcpdump pcap files.

Packet Analyzer: 15 TCPDUMP Command Examples


by Sasikala on August 25, 2010

The tcpdump command is also called a packet analyzer. It works on most flavors of the UNIX operating system. tcpdump allows us to save the packets that are captured, so that we can use them for future analysis. The saved file can be viewed with the same tcpdump command. We can also use open source software like Wireshark to read the tcpdump pcap files. In this tcpdump tutorial, let us discuss some practical examples on how to use the tcpdump command.

1. Capture packets from a particular ethernet interface using tcpdump -i


When you execute the tcpdump command without any options, it captures all the packets flowing through all the interfaces. The -i option allows you to capture packets on a particular ethernet interface.
$ tcpdump -i eth1
14:59:26.608728 IP xx.domain.netbcp.net.52497 > valh4.lell.net.ssh: . ack 540 win 16554
14:59:26.610602 IP resolver.lell.net.domain > valh4.lell.net.24151: 4278 1/0/0 (73)
14:59:26.611262 IP valh4.lell.net.38527 > resolver.lell.net.domain: 26364+ PTR? 244.207.104.10.in-addr.arpa. (45)

In this example, tcpdump captured all the packets flowing through the interface eth1 and displayed them on standard output. Note: The editcap utility is used to select or remove specific packets from a dump file and translate them into a given format.

2. Capture only N number of packets using tcpdump -c


When you execute the tcpdump command, it keeps capturing packets until you cancel it. Using the -c option you can specify the number of packets to capture.
$ tcpdump -c 2 -i eth0
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes
14:38:38.184913 IP valh4.lell.net.ssh > yy.domain.innetbcp.net.11006: P 1457255642:1457255758(116) ack 1561463966 win 63652
14:38:38.690919 IP valh4.lell.net.ssh > yy.domain.innetbcp.net.11006: P 116:232(116) ack 1 win 63652
2 packets captured
13 packets received by filter
0 packets dropped by kernel

The above tcpdump command captured only 2 packets from interface eth0. Note: Mergecap and TShark: Mergecap is a packet dump combining tool, which can combine multiple dumps into a single dump file. TShark is a powerful tool to capture network packets, which can be used to analyze network traffic. It comes with the Wireshark network analyzer distribution.

3. Display Captured Packets in ASCII using tcpdump -A


The following tcpdump syntax prints the packet in ASCII.
$ tcpdump -A -i eth0
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes
14:34:50.913995 IP valh4.lell.net.ssh > yy.domain.innetbcp.net.11006: P 1457239478:1457239594(116) ack 1561461262 win 63652
E.....@.@..]..i...9...*.V...]...P....h....E...>{..U=...g.
......G..7\+KA....A...L.
14:34:51.423640 IP valh4.lell.net.ssh > yy.domain.innetbcp.net.11006: P 116:232(116) ack 1 win 63652
E.....@.@..\..i...9...*.V..*]...P....h....7......X..!....Im.S.g.u:*..O&....^#Ba.
..
E..(R.@.|.....9...i.*...]...V..*P..OWp........

Note: The ifconfig command is used to configure network interfaces.

4. Display Captured Packets in HEX and ASCII using tcpdump -XX


Some users might want to analyse the packets in hex values. tcpdump provides a way to print packets in both ASCII and HEX format.
$ tcpdump -XX -i eth0
18:52:54.859697 IP zz.domain.innetbcp.net.63897 > valh4.lell.net.ssh: . ack 232 win 16511
        0x0000:  0050 569c 35a3 0019 bb1c 0c00 0800 4500  .PV.5.........E.
        0x0010:  0028 042a 4000 7906 c89c 10b5 aaf6 0f9a  .(.*@.y.........
        0x0020:  69c4 f999 0016 57db 6e08 c712 ea2e 5010  i.....W.n.....P.
        0x0030:  407f c976 0000 0000 0000 0000            @..v........
18:52:54.877713 IP 10.0.0.0 > all-systems.mcast.net: igmp query v3 [max resp time 1s]
        0x0000:  0050 569c 35a3 0000 0000 0000 0800 4600  .PV.5.........F.
        0x0010:  0024 0000 0000 0102 3ad3 0a00 0000 e000  .$......:.......
        0x0020:  0001 9404 0000 1101 ebfe 0000 0000 0300  ................
        0x0030:  0000 0000 0000 0000 0000 0000            ............

5. Capture the packets and write into a file using tcpdump -w


tcpdump allows you to save the packets to a file, and later you can use the packet file for further analysis.
$ tcpdump -w 08232010.pcap -i eth0
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes
32 packets captured
32 packets received by filter
0 packets dropped by kernel

The -w option writes the packets into the given file. The file extension should be .pcap, which can be read by any network protocol analyzer.

6. Reading the packets from a saved file using tcpdump -r


You can read the captured pcap file and view the packets for analysis, as shown below.
$ tcpdump -tttt -r data.pcap
2010-08-22 21:35:26.571793 00:50:56:9c:69:38 (oui Unknown) > Broadcast, ethertype Unknown (0xcafe), length 74:
        0x0000:  0200 000a ffff 0000 ffff 0c00 3c00 0000  ............<...
        0x0010:  0000 0000 0100 0080 3e9e 2900 0000 0000  ........>.).....
        0x0020:  0000 0000 ffff ffff ad00 996b 0600 0050  ...........k...P
        0x0030:  569c 6938 0000 0000 8e07 0000            V.i8........
2010-08-22 21:35:26.571797 IP valh4.lell.net.ssh > zz.domain.innetbcp.net.50570: P 800464396:800464448(52) ack 203316566 win 71
2010-08-22 21:35:26.571800 IP valh4.lell.net.ssh > zz.domain.innetbcp.net.50570: P 52:168(116) ack 1 win 71
2010-08-22 21:35:26.584865 IP valh5.lell.net.ssh > 11.154.12.255.netbios-ns: NBT UDP PACKET(137): QUERY; REQUEST; BROADC

7. Capture packets with IP address using tcpdump -n


In all the above examples, tcpdump prints packets with the DNS hostnames, not the IP addresses. The following example captures the packets and displays the IP addresses of the machines involved.
$ tcpdump -n -i eth0
15:01:35.170763 IP 10.0.19.121.52497 > 11.154.12.121.ssh: P 105:157(52) ack 18060 win 16549
15:01:35.170776 IP 11.154.12.121.ssh > 10.0.19.121.52497: P 23988:24136(148) ack 157 win 113
15:01:35.170894 IP 11.154.12.121.ssh > 10.0.19.121.52497: P 24136:24380(244) ack 157 win 113

8. Capture packets with proper readable timestamp using tcpdump -tttt


$ tcpdump -n -tttt -i eth0
2010-08-22 15:10:39.162830 IP 10.0.19.121.52497 > 11.154.12.121.ssh: . ack 49800 win 16390
2010-08-22 15:10:39.162833 IP 10.0.19.121.52497 > 11.154.12.121.ssh: . ack 50288 win 16660
2010-08-22 15:10:39.162867 IP 10.0.19.121.52497 > 11.154.12.121.ssh: . ack 50584 win 16586

9. Read packets longer than N bytes


You can capture only packets larger than N bytes using the filter expression greater with the tcpdump command.
$ tcpdump -w g_1024.pcap greater 1024

10. Receive only the packets of a specific protocol type


You can capture packets based on the protocol type. You can specify one of these protocols: fddi, tr, wlan, ip, ip6, arp, rarp, decnet, tcp and udp. The following example captures only arp packets flowing through the eth0 interface.
$ tcpdump -i eth0 arp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes
19:41:52.809642 arp who-has valh5.lell.net tell valh9.lell.net
19:41:52.863689 arp who-has 11.154.12.1 tell valh6.lell.net
19:41:53.024769 arp who-has 11.154.12.1 tell valh7.lell.net

11. Read packets smaller than N bytes


You can capture only packets smaller than N bytes using the filter expression less with the tcpdump command.
$ tcpdump -w l_1024.pcap less 1024

12. Capture packets flowing on a particular port using tcpdump port


If you want to know all the packets received by a particular port on a machine, you can use tcpdump command as shown below.
$ tcpdump -i eth0 port 22
19:44:44.934459 IP valh4.lell.net.ssh > zz.domain.innetbcp.net.63897: P 18932:19096(164) ack 105 win 71
19:44:44.934533 IP valh4.lell.net.ssh > zz.domain.innetbcp.net.63897: P 19096:19260(164) ack 105 win 71
19:44:44.934612 IP valh4.lell.net.ssh > zz.domain.innetbcp.net.63897: P 19260:19424(164) ack 105 win 71

13. Capture packets for particular destination IP and Port


Packets carry source and destination IP addresses and port numbers. Using tcpdump we can apply filters on the source or destination IP and port number. The following command captures packets flowing through eth0, with a particular destination IP and port number 22.
$ tcpdump -w xpackets.pcap -i eth0 dst 10.181.140.216 and port 22

14. Capture TCP communication packets between two hosts


If two processes on two different machines are communicating through the tcp protocol, we can capture those packets using tcpdump as shown below.
$ tcpdump -w comm.pcap -i eth0 dst 16.181.170.246 and port 22

You can open the file comm.pcap using any network protocol analyzer tool to debug any potential issues.

15. tcpdump Filter Packets Capture all the packets other than arp and rarp
In the tcpdump command, you can give and, or and not conditions to filter the packets accordingly.
$ tcpdump -i eth0 not arp and not rarp
20:33:15.479278 IP resolver.lell.net.domain > valh4.lell.net.64639: 26929 1/0/0 (73)
20:33:15.479890 IP valh4.lell.net.16053 > resolver.lell.net.domain: 56556+ PTR? 255.107.154.15.in-addr.arpa. (45)
20:33:15.480197 IP valh4.lell.net.ssh > zz.domain.innetbcp.net.63897: P 540:1504(964) ack 1 win 96
20:33:15.487118 IP zz.domain.innetbcp.net.63897 > valh4.lell.net.ssh: . ack 540 win 16486
20:33:15.668599 IP 10.0.0.0 > all-systems.mcast.net: igmp query v3 [max resp time

47. Manage partitions using fdisk: Using fdisk you can create a new partition, delete an existing partition, or change an existing partition. fdisk allows you to create a maximum of four primary partitions, and any number of logical partitions, based on the size of the disk.

7 Linux fdisk Command Examples to Manage Hard Disk Partition


by Balakrishnan Mariyappan on September 14, 2010

On Linux distributions, fdisk is the best tool to manage disk partitions. fdisk is a text based utility. Using fdisk you can create a new partition, delete an existing partition, or change an existing partition. fdisk allows you to create a maximum of four primary partitions, and any number of logical partitions, based on the size of the disk. Keep in mind that any single partition requires a minimum size of 40MB. In this article, let us review how to use the fdisk command using practical examples. Warning: Don't delete, modify, or add a partition if you don't know what you are doing. You will lose your data!

1. View All Existing Disk Partitions Using fdisk -l


Before you create a new partition, or modify an existing partition, you might want to view all available partitions in the system. Use fdisk -l to view all available partitions as shown below.
# fdisk -l

Disk /dev/sda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xf6edf6ed

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1        1959    15735636    c  W95 FAT32 (LBA)
/dev/sda2            1960        5283    26700030    f  W95 Ext'd (LBA)
/dev/sda3            5284        6528    10000462+   7  HPFS/NTFS
/dev/sda4            6529        9729    25712032+   c  W95 FAT32 (LBA)
/dev/sda5   *        1960        2661     5638752   83  Linux
/dev/sda6            2662        2904     1951866   83  Linux
/dev/sda7            2905        3147     1951866   83  Linux
/dev/sda8            3148        3264      939771   82  Linux swap / Solaris
/dev/sda9            3265        5283    16217586    b  W95 FAT32

The above lists partitions from all the connected hard disks. When you have more than one disk on the system, the partition list is ordered by the devices' /dev names. For example, /dev/sda, /dev/sdb, /dev/sdc and so on.
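The Blocks column in the fdisk -l output is in 1KB units, so a partition's size can be estimated with simple shell arithmetic. A sketch using the block count from the /dev/sda1 row above:

```shell
# /dev/sda1 shows 15735636 blocks of 1KB each, i.e. roughly 15GB
blocks=15735636
gb=$((blocks / 1024 / 1024))
echo "${gb}GB"   # prints 15GB
```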

2. View Partitions of a Specific Hard Disk using fdisk -l /dev/sd{a}


To view all partitions of the /dev/sda hard disk, do the following.
# fdisk -l /dev/sda

View all fdisk Commands Using fdisk Command m


Use fdisk command m, to view all available fdisk commands as shown below.
# fdisk /dev/sda

The number of cylinders for this disk is set to 9729.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): m
Command action
   a   toggle a bootable flag
   b   edit bsd disklabel
   c   toggle the dos compatibility flag
   d   delete a partition
   l   list known partition types
   m   print this menu
   n   add a new partition
   o   create a new empty DOS partition table
   p   print the partition table
   q   quit without saving changes
   s   create a new empty Sun disklabel
   t   change a partition's system id
   u   change display/entry units
   v   verify the partition table
   w   write table to disk and exit
   x   extra functionality (experts only)

3. Delete a Hard Disk Partition Using fdisk Command d


Let us assume that you would like to combine several partitions (for example, /dev/sda6, /dev/sda7 and /dev/sda8) into a single disk partition. To do this, you should first delete all those individual partitions, as shown below.
# fdisk /dev/sda

The number of cylinders for this disk is set to 9729.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): p

Disk /dev/sda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xf6edf6ed

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1        1959    15735636    c  W95 FAT32 (LBA)
/dev/sda2            1960        5283    26700030    f  W95 Ext'd (LBA)
/dev/sda3            5284        6528    10000462+   7  HPFS/NTFS
/dev/sda4            6529        9729    25712032+   c  W95 FAT32 (LBA)
/dev/sda5   *        1960        2661     5638752   83  Linux
/dev/sda6            2662        2904     1951866   83  Linux
/dev/sda7            2905        3147     1951866   83  Linux
/dev/sda8            3148        3264      939771   82  Linux swap / Solaris
/dev/sda9            3265        5283    16217586    b  W95 FAT32

Command (m for help): d
Partition number (1-9): 8

Command (m for help): d
Partition number (1-8): 7

Command (m for help): d
Partition number (1-7): 6

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.

4. Create a New Disk Partition with Specific Size Using fdisk Command n
Once youve deleted all the existing partitions, you can create a new partition using all available space as shown below.
# fdisk /dev/sda

The number of cylinders for this disk is set to 9729.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): n
First cylinder (2662-5283, default 2662):
Using default value 2662
Last cylinder, +cylinders or +size{K,M,G} (2662-3264, default 3264):
Using default value 3264

In the above example, the fdisk n command is used to create a new partition with a specific size. While creating a new partition, it expects the following two inputs:

Starting cylinder number of the partition to be created (First cylinder).
Size of the partition, or the last cylinder number (Last cylinder, +cylinders or +size).

Please keep in mind that you should issue the fdisk write command (w) after any modifications.
Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.

After the partition is created, format it using the mkfs command as shown below.
# mkfs.ext3 /dev/sda7

5. View the Size of an existing Partition Using fdisk -s


As shown below, fdisk -s displays the size of the partition in blocks.
# fdisk -s /dev/sda7
4843566

The above output corresponds to about 4900MB.
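The conversion behind that figure can be sketched with shell arithmetic; fdisk -s reports the size in 1KB blocks, and decimal megabytes are assumed here:

```shell
# 4843566 blocks of 1KB = 4843566 * 1024 bytes, i.e. roughly 4900MB in decimal
blocks=4843566
mb=$((blocks * 1024 / 1000000))
echo "${mb}MB"   # prints 4959MB
```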

6. Toggle the Boot Flag of a Partition Using fdisk Command a


The fdisk command displays the boot flag of each partition. When you want to disable or enable the boot flag on the corresponding partition, do the following. If you don't know why you are doing this, you'll mess up your system.
# fdisk /dev/sda

The number of cylinders for this disk is set to 9729.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): p

Disk /dev/sda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xf6edf6ed

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1        1959    15735636    c  W95 FAT32 (LBA)
/dev/sda2            1960        5283    26700030    f  W95 Ext'd (LBA)
/dev/sda3            5284        6528    10000462+   7  HPFS/NTFS
/dev/sda4            6529        9729    25712032+   c  W95 FAT32 (LBA)
/dev/sda5   *        1960        2661     5638752   83  Linux
/dev/sda6            3265        5283    16217586    b  W95 FAT32
/dev/sda7            2662        3264     4843566   83  Linux

Partition table entries are not in disk order

Command (m for help): a
Partition number (1-7): 5

Command (m for help): p

Disk /dev/sda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xf6edf6ed

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1        1959    15735636    c  W95 FAT32 (LBA)
/dev/sda2            1960        5283    26700030    f  W95 Ext'd (LBA)
/dev/sda3            5284        6528    10000462+   7  HPFS/NTFS
/dev/sda4            6529        9729    25712032+   c  W95 FAT32 (LBA)
/dev/sda5            1960        2661     5638752   83  Linux
/dev/sda6            3265        5283    16217586    b  W95 FAT32
/dev/sda7            2662        3264     4843566   83  Linux

Partition table entries are not in disk order

Command (m for help):

As seen above, the boot flag is disabled on the partition /dev/sda5.

7. Fix Partition Table Order Using fdisk Expert Command f


When you delete a logical partition and recreate it, you might see the partition-out-of-order issue, i.e. the "Partition table entries are not in disk order" error message. For example, when you delete three logical partitions (sda6, sda7 and sda8) and create a new partition, you might expect the new partition name to be sda6. But the system might've created the new partition as sda7. This is because, after the partitions are deleted, the sda9 partition has been moved to sda6 and the free space is moved to the end. To fix this partition order issue, and assign sda6 to the newly created partition, execute the expert command f as shown below.
$ fdisk /dev/sda

The number of cylinders for this disk is set to 9729.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): p

Disk /dev/sda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xf6edf6ed

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1        1959    15735636    c  W95 FAT32 (LBA)
/dev/sda2            1960        5283    26700030    f  W95 Ext'd (LBA)
/dev/sda3            5284        6528    10000462+   7  HPFS/NTFS
/dev/sda4            6529        9729    25712032+   c  W95 FAT32 (LBA)
/dev/sda5   *        1960        2661     5638752   83  Linux
/dev/sda6            3265        5283    16217586    b  W95 FAT32
/dev/sda7            2662        3264     4843566   83  Linux

Partition table entries are not in disk order

Command (m for help): x

Expert command (m for help): f
Done.

Expert command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.

Once the partition table order is fixed, you'll no longer get the "Partition table entries are not in disk order" error message.
# fdisk -l

Disk /dev/sda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xf6edf6ed

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1        1959    15735636    c  W95 FAT32 (LBA)
/dev/sda2            1960        5283    26700030    f  W95 Ext'd (LBA)
/dev/sda3            5284        6528    10000462+   7  HPFS/NTFS
/dev/sda4            6529        9729    25712032+   c  W95 FAT32 (LBA)
/dev/sda5   *        1960        2661     5638752   83  Linux
/dev/sda6            2662        3264     4843566   83  Linux
/dev/sda7            3265        5283    16217586    b  W95 FAT32

48. VMware fundamentals: At some point every sysadmin should deal with virtualization. VMware is a very popular choice to virtualize your server environment. This article provides the fundamental information for you to get a jumpstart on VMware.

VMware Virtualization Fundamentals VMware Server and VMware ESXi


by Ramesh Natarajan on June 2, 2010

We are starting a new series of articles on VMware that will help you install, configure and maintain VMware environments. In this first part of the VMware series, let us discuss the fundamental concepts of virtualization and review the VMware virtualization implementation options. Following are a few reasons why you might want to think about virtualization for your environment.

Run multiple operating systems on one server. For example, instead of having a development-server and a QA-server, you can run both development and QA on a single server.

You can have multiple flavours of OS on one server. For example, you can run 2 Linux OS and 1 Windows OS on a single server.

Multiple OS running on the server share the hardware resources among them. For example, CPU, RAM and network devices are shared among the development-server and QA-server running on the same hardware.

Allocate hardware resources to different applications based on the utilization. For example, if you have 8GB of RAM on the server, you can assign less RAM to one virtual machine (2GB to development-server) and more RAM (6GB to QA-server) to another virtual machine running on that server.

High availability and business continuity. If VMware is implemented properly, you can migrate a virtual machine from one server to another quickly without any downtime.

Reduced operational cost and power consumption. For example, instead of buying and running two servers, you will be using only one server and running both development and QA on it.

On a high level, there are two ways for you to get started on virtualization using VMware products. Both of these are available for free from VMware.

1. VMware Server
VMware Server runs on top of an existing host operating system (either Linux or Windows). This is a good option to get started, as you can use any of the existing hardware along with its OS. VMware Server also supports 64-bit host and guest operating systems. You also get the VMware Infrastructure web access management interface and Virtual Machine console.

Fig: Virtual Machine running on top of VMware Server

2. VMware ESXi
VMware ESXi is based on the hypervisor architecture. VMware ESXi runs directly on the hardware without the need for any host operating system, which makes it extremely effective in terms of performance. This is the best option for implementing VMware for production usage.

Fig: Virtual Machine running on top of VMware ESXi

Following are some of the key features of VMware ESXi:

Memory compression, overcommitment and deduplication.
Built-in high availability with NIC teaming and HBA multipathing.
Intelligent CPU virtualization.
Highly compatible with various server hardware, storage and OS.
Advanced security with VMsafe, VMkernel protection and encryption.
Easy management using the vSphere client, vCenter server and command line interface.

49. Rotate the logs automatically: Managing log files is an important part of sysadmin life. logrotate makes it easy by allowing you to set up automatic log rotation based on several configurations. Using logrotate you can also configure it to execute custom shell scripts immediately after log rotation.

HowTo: The Ultimate Logrotate Command Tutorial with 10 Examples


by Balakrishnan Mariyappan on July 14, 2010

Managing log files effectively is an essential task for a Linux sysadmin. In this article, let us discuss how to perform the following log file operations using the UNIX logrotate utility:

Rotate the log file when the file size reaches a specific size
Continue to write the log information to the newly created file after rotating the old log file
Compress the rotated log files
Specify a compression option for the rotated log files
Rotate the old log files with the date in the filename
Execute custom shell scripts immediately after log rotation
Remove older rotated log files

1. Logrotate Configuration files


Following are the key files that you should be aware of for logrotate to work properly.

/usr/sbin/logrotate: The logrotate command itself.

/etc/cron.daily/logrotate: This shell script executes the logrotate command every day.
$ cat /etc/cron.daily/logrotate
#!/bin/sh

/usr/sbin/logrotate /etc/logrotate.conf
EXITVALUE=$?
if [ $EXITVALUE != 0 ]; then
    /usr/bin/logger -t logrotate "ALERT exited abnormally with [$EXITVALUE]"
fi
exit 0

/etc/logrotate.conf: The log rotation configuration for all the log files is specified in this file.
$ cat /etc/logrotate.conf
weekly
rotate 4
create
include /etc/logrotate.d

/var/log/wtmp {
    monthly
    minsize 1M
    create 0664 root utmp
    rotate 1
}

/etc/logrotate.d: When individual packages are installed on the system, they drop their log rotation configuration in this directory. For example, the yum log rotation configuration is shown below.
$ cat /etc/logrotate.d/yum
/var/log/yum.log {
    missingok
    notifempty
    size 30k
    yearly
    create 0600 root root
}
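As an illustration, a package (or you) could drop a similar file into /etc/logrotate.d for any application; the log path and option choices below are hypothetical, not from any real package:

```
/var/log/myapp/myapp.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}
```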

2. Logrotate size option: Rotate the log file when file size reaches a specific limit
If you want to rotate a log file (for example, /tmp/output.log) for every 1KB, create the logrotate.conf as shown below.
$ cat logrotate.conf
/tmp/output.log {
    size 1k
    create 700 bala bala
    rotate 4
}

This logrotate configuration has the following three options:

size 1k: logrotate runs only if the file size is equal to (or greater than) this size.
create: rotate the original file and create the new file with the specified permission, user and group.
rotate: limits the number of log file rotations. So, this would keep only the recent 4 rotated log files.

Before the log rotation, following is the size of the output.log:
$ ls -l /tmp/output.log
-rw-r--r-- 1 bala bala 25868 2010-06-09 21:19 /tmp/output.log

Now, run the logrotate command as shown below. Option -s specifies the filename to write the logrotate status.
$ logrotate -s /var/log/logstatus logrotate.conf

Note: Whenever you need log rotation for some files, prepare the logrotate configuration and run the logrotate command manually. After the log rotation, following is the size of the output.log:
$ ls -l /tmp/output*
-rw-r--r-- 1 bala bala 25868 2010-06-09 21:20 output.log.1
-rwx------ 1 bala bala     0 2010-06-09 21:20 output.log

Eventually this will keep the following set of rotated log files:

output.log.4
output.log.3
output.log.2
output.log.1
output.log

Please remember that after the log rotation, the log file corresponding to the service would still point to the rotated file (output.log.1) and keep on writing to it. You can use the above method if you want to rotate the apache access_log or error_log every 5MB. Ideally, you should modify /etc/logrotate.conf to specify the logrotate information for a specific log file. Also, if you are having huge log files, you can use: 10 Awesome Examples for Viewing Huge Log Files in Unix
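The mechanics of this default (non-copytruncate) scheme can be imitated by hand in a scratch directory. This is only a toy illustration of the rename-then-create behavior, not what logrotate literally executes:

```shell
cd "$(mktemp -d)"                 # work in a throwaway directory
echo "some log data" > output.log
mv output.log output.log.1        # "rotate": the original becomes output.log.1
touch output.log                  # "create": a fresh, empty log file appears
ls -l output.log output.log.1
```

Note that a process that already had output.log open keeps writing to the renamed output.log.1, which is exactly the behavior described above.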

3. Logrotate copytruncate option: Continue to write the log information in the newly created file after rotating the old log file.
$ cat logrotate.conf
/tmp/output.log {
    size 1k
    copytruncate
    rotate 4
}

copytruncate instructs logrotate to create a copy of the original file (i.e. rotate the original log file) and truncate the original file to zero bytes. This helps the respective service that writes to that log file continue writing to the proper file. While manipulating log files, you might find the sed substitute and sed delete tips helpful.
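The copy-then-truncate behavior can likewise be imitated in a scratch directory (a toy sketch, not logrotate's actual implementation):

```shell
cd "$(mktemp -d)"                 # work in a throwaway directory
echo "existing log data" > output.log
cp output.log output.log.1        # copy the log to the rotated name
: > output.log                    # truncate the original in place; a writer
                                  # holding the file open keeps its handle
ls -l output.log output.log.1
```

Because the original file is truncated in place rather than renamed, the writing process never has to reopen its log file.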

4. Logrotate compress option: Compress the rotated log files


If you use the compress option as shown below, the rotated files will be compressed with gzip utility.
$ cat logrotate.conf
/tmp/output.log {
    size 1k
    copytruncate
    create 700 bala bala
    rotate 4
    compress
}

Output of compressed log file:


$ ls /tmp/output*
output.log.1.gz  output.log

5. Logrotate dateext option: Rotate the old log file with date in the log filename
$ cat logrotate.conf
/tmp/output.log {
    size 1k
    copytruncate
    create 700 bala bala
    dateext
    rotate 4
    compress
}

After the above configuration, you'll notice the date in the rotated log file as shown below.
$ ls -lrt /tmp/output*
-rw-r--r-- 1 bala bala 8980 2010-06-09 22:10 output.log-20100609.gz
-rwxrwxrwx 1 bala bala    0 2010-06-09 22:11 output.log

This would work only once in a day, because when it tries to rotate the next time on the same day, the earlier rotated file will have the same filename. So, the logrotate won't be successful after the first run on the same day. Typically you might use tail -f to view the output of the log file in realtime. You can even combine multiple tail -f outputs and display them on a single terminal.
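The collision happens because dateext names the rotated file with the current day's date. A quick illustration of the naming (assuming GNU date for the -d option):

```shell
# dateext appends the date as YYYYMMDD, so a second rotation on
# 2010-06-09 would try to create the same output.log-20100609 name again.
date -d 2010-06-09 +%Y%m%d   # prints 20100609
```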

6. Logrotate monthly, daily, weekly option: Rotate the log file weekly/daily/monthly
For rotating the log once a month:
$ cat logrotate.conf
/tmp/output.log {
    monthly
    copytruncate
    rotate 4
    compress
}

Add the weekly keyword as shown below for weekly log rotation.
$ cat logrotate.conf
/tmp/output.log {
    weekly
    copytruncate
    rotate 4
    compress
}

Add the daily keyword as shown below for every day log rotation. You can also rotate logs hourly.
$ cat logrotate.conf
/tmp/output.log {
    daily
    copytruncate
    rotate 4
    compress
}

7. Logrotate postrotate endscript option: Run custom shell scripts immediately after log rotation
Logrotate allows you to run your own custom shell scripts after it completes the log file rotation. The following configuration indicates that it will execute myscript.sh after the log rotation.

$ cat logrotate.conf
/tmp/output.log {
        size 1k
        copytruncate
        rotate 4
        compress
        postrotate
                /home/bala/myscript.sh
        endscript
}

8. Logrotate maxage option: Remove older rotated log files


Logrotate automatically removes the rotated files after a specific number of days. The following example indicates that the rotated log files would be removed after 100 days.
$ cat logrotate.conf
/tmp/output.log {
        size 1k
        copytruncate
        rotate 4
        compress
        maxage 100
}

9. Logrotate missingok option: Don't return an error if the log file is missing
You can ignore the error message when the actual file is not available by using this option as shown below.
$ cat logrotate.conf
/tmp/output.log {
        size 1k
        copytruncate
        rotate 4
        compress
        missingok
}

10. Logrotate compresscmd and compressext option: Specify the compression command for the log file rotation
$ cat logrotate.conf
/tmp/output.log {
        size 1k
        copytruncate
        create
        compress
        compresscmd /bin/bzip2
        compressext .bz2
        rotate 4
}

Following compression options are specified above:

compress : Indicates that compression should be done.
compresscmd : Specifies which compression command should be used. For example: /bin/bzip2.
compressext : Specifies the extension on the rotated log file. Without this option, the rotated file would have the default extension .gz. So, if you use the bzip2 compresscmd, specify the extension as .bz2 as shown in the above example.

50. Passwordless SSH login setup: Using ssh-keygen and ssh-copy-id you can set up passwordless login to a remote Linux server. ssh-keygen creates the public and private keys. ssh-copy-id copies the local-host's public key to the remote-host's authorized_keys file.

3 Steps to Perform SSH Login Without Password Using ssh-keygen & ssh-copy-id
by Ramesh Natarajan on November 20, 2008

You can login to a remote Linux server without entering a password in 3 simple steps using ssh-keygen and ssh-copy-id as explained in this article. ssh-keygen creates the public and private keys. ssh-copy-id copies the local-host's public key to the remote-host's authorized_keys file. ssh-copy-id also assigns proper permissions to the remote-host's home, ~/.ssh, and ~/.ssh/authorized_keys. This article also explains 3 minor annoyances of using ssh-copy-id and how to use ssh-copy-id along with ssh-agent.

Step 1: Create public and private keys using ssh-keygen on local-host


jsmith@local-host$ [Note: You are on local-host here]

jsmith@local-host$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/jsmith/.ssh/id_rsa): [Press enter key]
Enter passphrase (empty for no passphrase): [Press enter key]
Enter same passphrase again: [Press enter key]
Your identification has been saved in /home/jsmith/.ssh/id_rsa.
Your public key has been saved in /home/jsmith/.ssh/id_rsa.pub.
The key fingerprint is:
33:b3:fe:af:95:95:18:11:31:d5:de:96:2f:f2:35:f9 jsmith@local-host

Step 2: Copy the public key to remote-host using ssh-copy-id


jsmith@local-host$ ssh-copy-id -i ~/.ssh/id_rsa.pub remote-host
jsmith@remote-host's password:
Now try logging into the machine, with "ssh 'remote-host'", and check in:

  .ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.

Note: ssh-copy-id appends the keys to the remote-host's .ssh/authorized_keys.
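If ssh-copy-id is not available on your machine, the same result can be had by piping the public key over a plain ssh session. The following is a sketch of the usual fallback (remote-host is a placeholder hostname; adjust the key filename to match yours):

```shell
# Append the local public key to the remote authorized_keys file.
# mkdir -p ensures ~/.ssh exists on the remote side first, and the
# chmod calls set the permissions sshd expects.
cat ~/.ssh/id_rsa.pub | ssh remote-host \
  'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys'
```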

Step 3: Login to remote-host without entering the password


jsmith@local-host$ ssh remote-host
Last login: Sun Nov 16 17:22:33 2008 from 192.168.1.2
[Note: SSH did not ask for password.]

jsmith@remote-host$ [Note: You are on remote-host here]

The above 3 simple steps should get the job done in most cases. We also discussed earlier in detail about performing SSH and SCP from OpenSSH to OpenSSH without entering password. If you are using SSH2, we discussed earlier about performing SSH and SCP without password from SSH2 to SSH2, from OpenSSH to SSH2, and from SSH2 to OpenSSH.

Using ssh-copy-id along with the ssh-add/ssh-agent


When no value is passed for the option -i and ~/.ssh/identity.pub is not available, ssh-copy-id will display the following error message.
jsmith@local-host$ ssh-copy-id -i remote-host
/usr/bin/ssh-copy-id: ERROR: No identities found

If you have loaded keys into the ssh-agent using ssh-add, then ssh-copy-id will get the keys from the ssh-agent to copy to the remote-host. i.e. it copies the keys listed by the ssh-add -L command to the remote-host, when you don't pass option -i to ssh-copy-id.
jsmith@local-host$ ssh-agent $SHELL

jsmith@local-host$ ssh-add -L
The agent has no identities.

jsmith@local-host$ ssh-add
Identity added: /home/jsmith/.ssh/id_rsa (/home/jsmith/.ssh/id_rsa)

jsmith@local-host$ ssh-add -L
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAsJIEILxftj8aSxMa3d8t6JvM79DyBVaHrtPhTYpq7kIEMUNzApnyxsHpH1tQ/Ow== /home/jsmith/.ssh/id_rsa

jsmith@local-host$ ssh-copy-id -i remote-host
jsmith@remote-host's password:
Now try logging into the machine, with "ssh 'remote-host'", and check in:

  .ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.
[Note: This has added the key displayed by ssh-add -L]

Three Minor Annoyances of ssh-copy-id


Following are a few minor annoyances of ssh-copy-id.

1. Default public key: ssh-copy-id uses ~/.ssh/identity.pub as the default public key file (i.e. when no value is passed to option -i). Instead, I wish it used id_dsa.pub, or id_rsa.pub, or identity.pub as default keys. i.e. if any one of them exists, it should copy that to the remote-host. If two or three of them exist, it should copy identity.pub as default.

2. The agent has no identities: When the ssh-agent is running and ssh-add -L returns "The agent has no identities" (i.e. no keys are added to the ssh-agent), ssh-copy-id will still copy the message "The agent has no identities" to the remote-host's authorized_keys entry.

3. Duplicate entry in authorized_keys: I wish ssh-copy-id validated duplicate entries on the remote-host's authorized_keys. If you execute ssh-copy-id multiple times on the local-host, it will keep appending the same key to the remote-host's authorized_keys file without checking for duplicates. Even with duplicate entries everything works as expected. But, I would like to have my authorized_keys file clutter free.
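Until ssh-copy-id learns to check for duplicates, the remote file can be cleaned up by hand. Here is a small sketch (the dedupe_keys helper name is mine, not a standard tool) that keeps the first occurrence of each line and preserves order; back the file up before trying it:

```shell
# Keep only the first occurrence of every line in the given file.
# awk's seen[] array records lines already printed, so later
# duplicates are skipped while the original order is preserved.
dedupe_keys() {
    awk '!seen[$0]++' "$1" > "$1.dedup" && mv "$1.dedup" "$1"
}

# Usage (back up first): dedupe_keys ~/.ssh/authorized_keys
```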

15 Examples To Master Linux Command Line History


by Ramesh Natarajan on August 11, 2008

When you are using the Linux command line frequently, using the history effectively can be a major productivity boost. In fact, once you have mastered the 15 examples that I've provided here, you'll find using the command line more enjoyable and fun.

1. Display timestamp using HISTTIMEFORMAT


Typically when you type history from the command line, it displays the command# and the command. For auditing purposes, it may be beneficial to display the timestamp along with the command as shown below.
# export HISTTIMEFORMAT='%F %T '
# history | more
1 2008-08-05 19:02:39 service network restart
2 2008-08-05 19:02:39 exit
3 2008-08-05 19:02:39 id
4 2008-08-05 19:02:39 cat /etc/redhat-release

2. Search the history using Control+R


I strongly believe this may be your most frequently used feature of history. When you've already executed a very long command, you can simply search history using a keyword and re-execute the same command without having to type it fully. Press Ctrl+R and type the keyword. In the following example, I searched for red, which displayed the previous command cat /etc/redhat-release in the history that contained the word red.
# [Press Ctrl+R from the command prompt, which will display the reverse-i-search prompt]
(reverse-i-search)`red': cat /etc/redhat-release
[Note: Press enter when you see your command, which will execute the command from the history]

# cat /etc/redhat-release
Fedora release 9 (Sulphur)

Sometimes you want to edit a command from history before executing it. For example, you can search for httpd, which will display service httpd stop from the command history; select this command, change stop to start, and re-execute it as shown below.
# [Press Ctrl+R from the command prompt, which will display the reverse-i-search prompt]
(reverse-i-search)`httpd': service httpd stop
[Note: Press either left arrow or right arrow key when you see your command, which will display the command for you to edit, before executing it]

# service httpd start

3. Repeat previous command quickly using 4 different methods


Sometimes you may end up repeating the previous commands for various reasons. Following are the 4 different ways to repeat the last executed command.

1. Use the up arrow to view the previous command and press enter to execute it.
2. Type !! and press enter from the command line.
3. Type !-1 and press enter from the command line.
4. Press Ctrl+P, which will display the previous command; press enter to execute it.

4. Execute a specific command from history


In the following example, if you want to repeat the command #4, you can do !4 as shown below.
# history | more
1 service network restart
2 exit
3 id
4 cat /etc/redhat-release

# !4
cat /etc/redhat-release
Fedora release 9 (Sulphur)

5. Execute previous command that starts with a specific word


Type ! followed by the first few letters of the command that you would like to re-execute. In the following example, typing !ps and pressing enter executed the previous command starting with ps, which is ps aux | grep yp.
# !ps
ps aux | grep yp
root 16947 0.0 0.1 36516 1264 ?     Sl 13:10 0:00 ypbind
root 17503 0.0 0.0  4124  740 pts/0 S+ 19:19 0:00 grep yp

6. Control the total number of lines in the history using HISTSIZE


Append the following two lines to the .bash_profile and relogin to the bash shell again to see the change. In this example, only 450 commands will be stored in the bash history.
# vi ~/.bash_profile
HISTSIZE=450
HISTFILESIZE=450

7. Change the history file name using HISTFILE


By default, history is stored in the ~/.bash_history file. Add the following line to the .bash_profile and relogin to the bash shell, to store the history commands in the .commandline_warrior file instead of the .bash_history file. I'm yet to figure out a practical use for this. I can see this getting used when you want to track commands executed from different terminals using different history file names.
# vi ~/.bash_profile
HISTFILE=/root/.commandline_warrior

If you have a good reason to change the name of the history file, please share it with me, as I'm interested in finding out how you are using this feature.

8. Eliminate the continuous repeated entry from history using HISTCONTROL


In the following example, pwd was typed three times. When you do history, you can see all 3 continuous occurrences of it. To eliminate duplicates, set HISTCONTROL to ignoredups as shown below.
# pwd
# pwd
# pwd
# history | tail -4
44 pwd
45 pwd
46 pwd
47 history | tail -4
[Note that there are three pwd commands in history, after executing pwd 3 times as shown above]

# export HISTCONTROL=ignoredups
# pwd
# pwd
# pwd
# history | tail -3
56 export HISTCONTROL=ignoredups
57 pwd
58 history | tail -4
[Note that there is only one pwd command in the history, even after executing pwd 3 times as shown above]

9. Erase duplicates across the whole history using HISTCONTROL


The ignoredups shown above removes duplicates only if they are consecutive commands. To eliminate duplicates across the whole history, set the HISTCONTROL to erasedups as shown below.
# export HISTCONTROL=erasedups
# pwd
# service httpd stop
# history | tail -3
38 pwd
39 service httpd stop
40 history | tail -3

# ls -ltr
# service httpd stop
# history | tail -6
35 export HISTCONTROL=erasedups
36 pwd
37 history | tail -3
38 ls -ltr
39 service httpd stop
40 history | tail -6
[Note that the previous service httpd stop after pwd got erased]

10. Force history not to remember a particular command using HISTCONTROL


When you execute a command, you can instruct history to ignore the command by setting HISTCONTROL to ignorespace AND typing a space in front of the command as shown below. I can see a lot of junior sysadmins getting excited about this, as they can hide a command from the history. It is good to understand how ignorespace works. But, as a best practice, don't purposefully hide anything from history.
# export HISTCONTROL=ignorespace
# ls -ltr
# pwd
#  service httpd stop
[Note that there is a space at the beginning of service, to ignore this command from history]
# history | tail -3
67 ls -ltr
68 pwd
69 history | tail -3
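The HISTCONTROL values from examples 8, 9 and 10 are not mutually exclusive: the variable takes a colon-separated list, and bash also accepts the shorthand ignoreboth. A small sketch:

```shell
# ignoreboth is shorthand for ignorespace:ignoredups;
# erasedups can be added on top to dedupe across the whole history
export HISTCONTROL=ignoreboth:erasedups
```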

11. Clear all the previous history using option -c


Sometimes you may want to clear all the previous history, but keep the history going forward.
# history -c

12. Substitute words from history commands


When you are searching through history, you may want to execute a different command but use the same parameter from the command that you've just searched for. In the example below, the !!:$ next to the vi command passes the argument from the previous command to the current command.
# ls anaconda-ks.cfg
anaconda-ks.cfg

# vi !!:$
vi anaconda-ks.cfg

In the example below, the !^ next to the vi command passes the first argument from the previous command (i.e. the cp command) to the current command (i.e. the vi command).
# cp anaconda-ks.cfg anaconda-ks.cfg.bak

# vi !^
vi anaconda-ks.cfg

13. Substitute a specific argument for a specific command.


In the example below, !cp:2 searches for the previous command in history that starts with cp and takes the second argument of cp and substitutes it for the ls -l command as shown below.
# cp ~/longname.txt /really/a/very/long/path/long-filename.txt

# ls -l !cp:2
ls -l /really/a/very/long/path/long-filename.txt

In the example below, !cp:$ searches for the previous command in history that starts with cp and takes the last argument (in this case, which is also the second argument as shown above) of cp and substitutes it for the ls -l command as shown below.
# ls -l !cp:$
ls -l /really/a/very/long/path/long-filename.txt

14. Disable the usage of history using HISTSIZE


If you want to disable history altogether and don't want the bash shell to remember the commands you've typed, set HISTSIZE to 0 as shown below.
# export HISTSIZE=0
# history
# [Note that history did not display anything]

15. Ignore specific commands from the history using HISTIGNORE


Sometimes you may not want to clutter your history with basic commands such as pwd and ls. Use HISTIGNORE to specify all the commands that you want to ignore from the history. Please note that adding ls to HISTIGNORE ignores only ls and not ls -l. So, you have to provide the exact command that you would like to ignore from the history.
# export HISTIGNORE="pwd:ls:ls -ltr:"
# pwd
# ls
# ls -ltr
# service httpd stop

# history | tail -3
79 export HISTIGNORE="pwd:ls:ls -ltr:"
80 service httpd stop
81 history
[Note that history did not record pwd, ls and ls -ltr]
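HISTIGNORE entries are actually shell glob patterns, not just literal commands, so a pattern such as ls* covers ls, ls -l and ls -ltr in a single entry. A sketch (in bash, & is a special pattern that matches the previous history line):

```shell
# Glob patterns cover whole families of commands; "&" suppresses
# consecutive repeats of the previous history entry
export HISTIGNORE="pwd:ls*:history*:&"
```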

Recommended Reading

Bash 101 Hacks, by Ramesh Natarajan. I spend most of my time on a Linux environment. So, naturally I'm a huge fan of the Bash command line and shell scripting. 15 years back, when I was working on different flavors of *nix, I used to write a lot of code in C shell and Korn shell. In later years, when I started working on Linux as a system administrator, I pretty much automated every possible task using Bash shell scripting. Based on my Bash experience, I've written the Bash 101 Hacks eBook, which contains 101 practical examples on both Bash command line and shell scripting. If you've been thinking about mastering Bash, do yourself a favor and read this book, which will help you take control of your Bash command line and shell scripting.

Unix LS Command: 15 Practical Examples


by Ramesh Natarajan on July 13, 2009

ls: Unix users and sysadmins cannot live without this two-letter command. Whether you use it 10 times a day or 100 times a day, knowing the power of the ls command can make your command line journey enjoyable. In this article, let us review 15 practical examples of the mighty ls command.

1. Open Last Edited File Using ls -t


To open the last edited file in the current directory, use the combination of ls, head and vi commands as shown below. ls -t sorts the files by modification time, showing the last edited file first. head -1 picks up this first file.
$ vi first-long-file.txt
$ vi second-long-file.txt

$ vi `ls -t | head -1`
[Note: This will open the last file you edited (i.e second-long-file.txt)]
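One caveat: with backquotes and no quoting, a filename containing spaces gets split into several arguments. A slightly safer sketch wraps the substitution in double quotes (filenames containing newlines can still break it); the vilast function name is just an illustration:

```shell
# vilast: open the most recently modified file in the current directory;
# the double quotes keep a name with spaces together as one argument to vi
vilast() {
    vi "$(ls -t | head -n 1)"
}
```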

2. Display One File Per Line Using ls -1


To show single entry per line, use -1 option as shown below.
$ ls -1
bin
boot
cdrom
dev
etc
home
initrd
initrd.img
lib

3. Display All Information About Files/Directories Using ls -l


To show long listing information about the file/directory, use the -l option.
$ ls -l
-rw-r----- 1 ramesh team-dev 9275204 Jun 13 15:27 mthesaur.txt.gz

1st Character - File Type: The first character specifies the type of the file. In the example above, the hyphen (-) in the 1st character indicates that this is a normal file.

Following are the possible file type options in the 1st character of the ls -l output.

  -  normal file
  d  directory
  s  socket file
  l  link file

Field 1 - File Permissions: The next 9 characters specify the file's permissions. Each group of 3 characters refers to the read, write and execute permissions for user, group and world. In this example, rw-r----- indicates read-write permission for user, read permission for group, and no permission for others.

Field 2 - Number of links: The second field specifies the number of links for that file. In this example, 1 indicates only one link to this file.

Field 3 - Owner: The third field specifies the owner of the file. In this example, this file is owned by username ramesh.

Field 4 - Group: The fourth field specifies the group of the file. In this example, this file belongs to the team-dev group.

Field 5 - Size: The fifth field specifies the size of the file in bytes. In this example, 9275204 indicates the file size.

Field 6 - Last modified date & time: The sixth field specifies the date and time of the last modification of the file. In this example, Jun 13 15:27 specifies the last modification time of the file.

Field 7 - File name: The last field is the name of the file. In this example, the file name is mthesaur.txt.gz.
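Because these fields are whitespace-separated, they are easy to pick apart with standard tools. A small sketch with awk that prints just the size (field 5) and name (field 9) of each entry, skipping the "total" line (note that for symlinks, field 9 is the link name, not the target):

```shell
# Print size and name from a long listing; NR > 1 skips the "total" line
ls -l /etc | awk 'NR > 1 { print $5, $9 }'
```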

4. Display File Size in Human Readable Format Using ls -lh


Use ls -lh (h stands for human readable form) to display file size in an easy-to-read format, i.e. M for MB, K for KB, G for GB.
$ ls -l
-rw-r----- 1 ramesh team-dev 9275204 Jun 12 15:27 arch-linux.txt.gz

$ ls -lh
-rw-r----- 1 ramesh team-dev 8.9M Jun 12 15:27 arch-linux.txt.gz

5. Display Directory Information Using ls -ld


When you use ls -l, you will get the details of the directory's contents. But if you want the details of the directory itself, use the -d option. For example, ls -l /etc will display all the files under the /etc directory. But, if you want to display the information about the /etc/ directory itself, use the -ld option as shown below.
$ ls -l /etc
total 3344
-rw-r--r--  1 root root 15276 Oct  5  2004 a2ps.cfg
-rw-r--r--  1 root root  2562 Oct  5  2004 a2ps-site.cfg
drwxr-xr-x  4 root root  4096 Feb  2  2007 acpi
-rw-r--r--  1 root root    48 Feb  8  2008 adjtime
drwxr-xr-x  4 root root  4096 Feb  2  2007 alchemist

$ ls -ld /etc
drwxr-xr-x 21 root root 4096 Jun 15 07:02 /etc

6. Order Files Based on Last Modified Time Using ls -lt


To sort the file names displayed in the order of last modification time, use the -t option. You will find it handy to use in combination with the -l option.
$ ls -lt
total 76
drwxrwxrwt  14 root root  4096 Jun 22 07:36 tmp
drwxr-xr-x 121 root root  4096 Jun 22 07:05 etc
drwxr-xr-x  13 root root 13780 Jun 22 07:04 dev
drwxr-xr-x  13 root root  4096 Jun 20 23:12 root
drwxr-xr-x  12 root root  4096 Jun 18 08:31 home
drwxr-xr-x   2 root root  4096 May 17 21:21 sbin
lrwxrwxrwx   1 root root    11 May 17 20:29 cdrom -> media/cdrom
drwx------   2 root root 16384 May 17 20:29 lost+found
drwxr-xr-x  15 root root  4096 Jul  2  2008 var

7. Order Files Based on Last Modified Time (In Reverse Order) Using ls -ltr
To sort the file names by last modification time in reverse order, use the -ltr options. This shows the last edited file in the last line, which is handy when the listing goes beyond a page. This is my default ls usage. Anytime I do ls, I always use ls -ltr, as I find this very convenient.
$ ls -ltr
total 76
drwxr-xr-x  15 root root  4096 Jul  2  2008 var
drwx------   2 root root 16384 May 17 20:29 lost+found
lrwxrwxrwx   1 root root    11 May 17 20:29 cdrom -> media/cdrom
drwxr-xr-x   2 root root  4096 May 17 21:21 sbin
drwxr-xr-x  12 root root  4096 Jun 18 08:31 home
drwxr-xr-x  13 root root  4096 Jun 20 23:12 root
drwxr-xr-x  13 root root 13780 Jun 22 07:04 dev
drwxr-xr-x 121 root root  4096 Jun 22 07:05 etc
drwxrwxrwt  14 root root  4096 Jun 22 07:36 tmp

8. Display Hidden Files Using ls -a (or) ls -A


To show all the hidden files in the directory, use the -a option. Hidden files in Unix start with . in their file name.
$ ls -a
.                .bash_logout     Debian-Info.txt
..               .bash_profile    Fedora-Info.txt
.bash_history    .bashrc          libiconv-1.11.tar.tar
.lftp            CentOS-Info.txt  libssh2-0.12-1.2.el4.rf.i386.rpm

It will show all the files including the . (current directory) and .. (parent directory). To show the hidden files, but not the . (current directory) and .. (parent directory), use option -A.
$ ls -A
.bash_history    CentOS-Info.txt   SUSE-Info.txt
.bash_logout     Debian-Info.txt   libiconv-1.11.tar.tar
.bash_profile    Fedora-Info.txt   libssh2-0.12-1.2.el4.rf.i386.rpm
.bashrc          Red-Hat-Info.txt  .lftp
[Note: . and .. are not displayed here]

9. Display Files Recursively Using ls -R


To show all the files recursively, use the -R option. When you do this from /, it shows all the unhidden files in the whole file system recursively.

$ ls /etc/sysconfig/networking
devices  profiles

$ ls -R /etc/sysconfig/networking
/etc/sysconfig/networking:
devices  profiles

/etc/sysconfig/networking/devices:

/etc/sysconfig/networking/profiles:
default

/etc/sysconfig/networking/profiles/default:

10. Display File Inode Number Using ls -i


Sometimes you may want to know the inode number of a file for internal maintenance. Use the -i option as shown below to display the inode number. Using the inode number you can remove files that have special characters in their names, as explained in example #6 of the find command article.
$ ls -i /etc/xinetd.d/
279694 chargen       279724 cups-lpd
279695 chargen-udp   279696 daytime
279697 daytime-udp   279698 echo
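To act on one of these files by its inode number, for example to delete a file whose name contains unprintable characters, find's -inum test can be combined with -exec. A sketch using the inode number from the listing above (run the plain find first to confirm the match before adding rm):

```shell
# Locate a file by its inode number under the current directory,
# then remove it interactively once the match is confirmed
find . -inum 279694
# find . -inum 279694 -exec rm -i {} \;
```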

11. Hide Control Characters Using ls -q


To print a question mark instead of non-graphic control characters in file names, use the -q option.
ls -q

12. Display File UID and GID Using ls -n


This lists the output like -l, but shows the uid and gid in numeric format instead of names.
$ ls -l ~/.bash_profile
-rw-r--r-- 1 ramesh ramesh 909 Feb 8 11:48 /home/ramesh/.bash_profile

$ ls -n ~/.bash_profile
-rw-r--r-- 1 511 511 909 Feb 8 11:48 /home/ramesh/.bash_profile
[Note: This displays 511 for uid and 511 for gid]

13. Visual Classification of Files With Special Characters Using ls -F


Instead of doing ls -l and then checking the first character to determine the type of file, you can use -F, which classifies the file with a different special character for each kind of file.
$ ls -F
Desktop/  Documents/  Ubuntu-App@  firstfile  Music/  Public/  Templates/

Thus in the above output:

/  directory
@  link file
*  executable file
nothing  normal file

14. Visual Classification of Files With Colors Using ls --color=auto


Recognizing the file type by the color in which it gets displayed is another kind of file classification. In the output below, directories get displayed in blue, soft links get displayed in green, and ordinary files get displayed in the default color.
$ ls --color=auto
Desktop  Documents  Examples  firstfile  Music  Pictures  Public  Templates  Videos

15. Useful ls Command Aliases


You can take some of the frequently needed ls options from the above and make them into aliases. We suggest the following.

Long list the files with size in human understandable form:
alias ll="ls -lh"

Classify the file type by appending special characters.


alias lv="ls -F"

Classify the file type by both color and special character.


alias ls="ls -F --color=auto"
