Fun With Terminal Text in C

I have always thought it was cool when a program like wget shows you, visually, on the console how much longer a file will take to download: essentially an ASCII progress bar. The display is really just ASCII graphics with some movement to it, and I always wondered how you could achieve this using standard output and some simple fprintf calls to modify text in place. There are a few ways to achieve some graphics on the console.

  • You can utilize the ncurses library.
  • You can utilize fprintf and the \b (backspace) character to replace some text.

The advantage of ncurses is that it can redraw and clear the entire screen, and draw different regions of the screen independently. ncurses has a well-defined API and is far more powerful than simple fprintf tricks; even the common ‘make menuconfig’ target uses the ncurses library.

The second method is a little mickey mouse but fun nonetheless. I wrote a little program that visually shows progress by swapping textual values between two square brackets, in this case [w] and [e].

#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>
 
int main(void)
{
	fprintf(stdout, "Writing from a to b ....");
	fprintf(stdout, " [ ]");
	fflush(stdout);	/* stdout is buffered; force the text out now */
 
	for (;;) {
		fprintf(stdout, "\b\b\b   ");
		fprintf(stdout, "\b\b\b[e]");
		fflush(stdout);
		usleep(500000);
		fprintf(stdout, "\b\b\b[w]");
		fflush(stdout);
		usleep(500000);
	}
 
	return 0;
}

Initially the code writes out the status line, and the second fprintf outputs the [ ]. The first line in the ‘for’ loop then backspaces three characters, covering the opening and closing square brackets, and blanks them with spaces. The following lines pause for half a second, then backspace again and populate the brackets with ‘[e]’ or ‘[w]’. It is merely a trick of backspacing and rewriting at a rate your eyes do not notice.

What cool things have you seen done in a terminal?

Linux DF Command Examples

Ever wondered how much space you have left on your hard drive? Cannot remember what filesystem type your partitions use? You can use the Linux df command to get this data.

This article will give you the heads up on df.

1. Display Disk Usage Using df

erik@debian:~$ df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sda1              7850996   4116300   3335884  56% /
tmpfs                   258484         0    258484   0% /lib/init/rw
udev                     10240        52     10188   1% /dev
tmpfs                   258484         0    258484   0% /dev/shm

The example above shows the basic output of the df command, with sizes given in 1K blocks. However, the output isn’t very readable.

2. Readable Disk Usage Using df -h

The df -h command will display the partition sizes in human-readable form: G for gigabytes, M for megabytes, K for kilobytes.

erik@debian:~$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1             7.5G  4.0G  3.2G  56% /
tmpfs                 253M     0  253M   0% /lib/init/rw
udev                   10M   52K   10M   1% /dev
tmpfs                 253M     0  253M   0% /dev/shm

3. Display Partition Types Using df -Th

To display the partition type you can execute the command df -T (with the h again for friendlier file sizes).

erik@debian:~$ df -Th
Filesystem    Type    Size  Used Avail Use% Mounted on
/dev/sda1     ext3    7.5G  4.0G  3.2G  56% /
tmpfs        tmpfs    253M     0  253M   0% /lib/init/rw
udev         tmpfs     10M   52K   10M   1% /dev
tmpfs        tmpfs    253M     0  253M   0% /dev/shm

4. Display Dummy File Systems Using df -ah

erik@debian:~$ df -ah
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1             7.5G  4.0G  3.2G  56% /
tmpfs                 253M     0  253M   0% /lib/init/rw
proc                     0     0     0   -  /proc
sysfs                    0     0     0   -  /sys
procbususb               0     0     0   -  /proc/bus/usb
udev                   10M   52K   10M   1% /dev
tmpfs                 253M     0  253M   0% /dev/shm
devpts                   0     0     0   -  /dev/pts
none                  0.0K  0.0K  0.0K   -  /proc/fs/vmblock/mountPoint

The Linux operating system uses various virtual (pseudo) file systems, shown here mounted at locations such as /proc, /sys and /dev/pts. The -a flag includes these in the output; a plain df -h hides them.

Backing Up Your MySQL Database from Command Line

If you find yourself in the unfortunate position of losing all your data to a dead hard drive or a server that will not POST, you immediately realize… hey, I should have backed up my data! Well, here are a few steps to back up specific databases within MySQL, or your entire database. I am assuming you have your database password and the ability to log in.

There is a great tool called mysqldump, with the following basic syntax (note that there must be no space between -p and the password; with a space, mysqldump prompts for a password and treats the next argument as a database name):

mysqldump -u [username] -p[password] [databasename] > [backupfile.sql]

  • [username] – this is your database username
  • [password] – this is the password for your database
  • [databasename] – the name of your database
  • [backupfile.sql] – the file to which the backup should be written.

Let’s say you have a WordPress database and you want to back it up to a file. You simply enter the command:

$ mysqldump -u [uname] -p[pass] [dbname] | gzip -9 > [backupfile.sql.gz]

This will also compress your backup using gzip to save space on your hard drive. Voila!

How To Split Files For Easy Transferring

I recently ran into the problem of having a very large file and no ‘fast’ way of transferring it to another computer. Over wireless it was calculated to take upwards of four hours to transfer (due to a weak signal, etc.), but I had a 4 GB USB key that I could use. The problem was that my file was over 4 gigabytes and my key had only 3.9 GB of usable space. What to do?! My initial thought was to do some trickery using dd: make one file up to the 3.9 GB barrier, create a second file with the remainder, then piece them back together. That is a feasible option, but then I came across the split command in Linux.

Split allows you to, you guessed it, split a file into pieces, automatically incrementing the file name so you can keep track of the pieces. This was a perfect solution for my needs. I was transferring the file from a Linux machine to a Windows machine, so I also had to figure out how to piece the file back together in Windows. In Linux it’s just a matter of “catting” the files together, but in Windows you can do it using copy /b for binary.

Here I will outline the steps for you to split a file into chunks and reassemble it at the destination.

Split A File Into Pieces

The split program is quite handy and has various flags to play around with; the important ones are:

  • -a, --suffix-length=N – use suffixes of length N (default 2)
  • -b, --bytes=SIZE – put SIZE bytes per output file
  • -C, --line-bytes=SIZE – put at most SIZE bytes of lines per output file
  • -d, --numeric-suffixes – use numeric suffixes instead of alphabetic
  • -l, --lines=NUMBER – put NUMBER lines per output file
  • --verbose – print a diagnostic just before each output file is opened

SIZE may have a multiplier suffix: b 512, kB 1000, K 1024, MB 1000*1000, M 1024*1024, GB 1000*1000*1000, G 1024*1024*1024, and so on for T, P, E, Z, Y

For example, I have a debian 5 ISO that’s 3.2 GB that I decided to split into 500MB files.

$ ls
debian5.iso
$ split -b 500M -d debian5.iso 
$ ls -l
total 6304488
-rw-r--r-- 1 erik erik 3224731648 2011-03-16 11:45 debian5.iso
-rw-r--r-- 1 erik erik  524288000 2011-03-16 11:52 x00
-rw-r--r-- 1 erik erik  524288000 2011-03-16 11:52 x01
-rw-r--r-- 1 erik erik  524288000 2011-03-16 11:53 x02
-rw-r--r-- 1 erik erik  524288000 2011-03-16 11:53 x03
-rw-r--r-- 1 erik erik  524288000 2011-03-16 11:54 x04
-rw-r--r-- 1 erik erik  524288000 2011-03-16 11:54 x05
-rw-r--r-- 1 erik erik   79003648 2011-03-16 11:54 x06

So now I can just transfer chunks at a time.

Okay, now the files are on my second machine, but how do I put them all back together again? Poor Humpty Dumpty!

Reassemble Split File In Linux

To reassemble the file in Linux is quite easy:

$ cat x0* > debian5.iso
$ ls -l
total 6304488
-rw-r--r-- 1 erik erik 3224731648 2011-03-16 11:45 debian5.iso
-rw-r--r-- 1 erik erik  524288000 2011-03-16 11:52 x00
-rw-r--r-- 1 erik erik  524288000 2011-03-16 11:52 x01
-rw-r--r-- 1 erik erik  524288000 2011-03-16 11:53 x02
-rw-r--r-- 1 erik erik  524288000 2011-03-16 11:53 x03
-rw-r--r-- 1 erik erik  524288000 2011-03-16 11:54 x04
-rw-r--r-- 1 erik erik  524288000 2011-03-16 11:54 x05
-rw-r--r-- 1 erik erik   79003648 2011-03-16 11:54 x06

cat concatenates all of the pieces into one binary file, and we redirect the combined output into a new file using the Bash redirection symbol >.

Reassemble Split File In Windows

Great, but I am running Windows! In fact I am running Windows on my other machine, and unfortunately without Cygwin, so I had to find an alternate method. It turns out you can use the copy command with the /b flag to copy a set of files into one. The /b informs copy that we are dealing with binary files rather than ASCII.

Using the command prompt in Windows you can rebuild the files with:

E:\>copy /b x00 + x01 + x02 + x03 + x04 + x05 + x06 debian5.iso
x00
x01
x02
x03
x04
x05
x06

The computer chunked along, spitting out each x0* on its own line, until finally my file was pieced back together. The plus sign tells copy that these are the files to concatenate, with the last argument being the output file.

There you have it. This can be used to break down any file, and could be used to send multiple e-mails to yourself to get around an e-mail attachment limit.

Makefile Examples To Compile A Linux Kernel Module

A huge benefit of Linux is the design of the kernel. Kernel modules break out pieces of code that provide support and functionality for hardware and software on your system. These pieces can be loaded and unloaded on the fly, without requiring a reboot. From a programming standpoint this also makes it easier to isolate bugs in the kernel and to create fixes without building a new kernel image for every new test. Compiling a kernel module differs slightly from the user space processes you may have compiled before, and unlike a user space process, a kernel module with an invalid pointer can kill your system. Sounds like fun, doesn’t it!

If you want to build a kernel module for your running system, you need the header files of your running kernel. On Debian-based systems this can be achieved by:

# apt-get install linux-headers-`uname -r`

This will auto-magically download the headers for your running kernel. If you built your own kernel at some obscure version then you probably already know how to do all this.

Okay, now with the headers we need a working directory in which we will build our module.

$ mkdir do_work
$ cd do_work

This is the beauty of the kernel makefile: you can keep the kernel headers and source somewhere else and work in your own directory with just your source code, yet still build your module against the running kernel. If you want to modify a pre-existing kernel module, you can download the kernel source code and copy the module’s *.c files into your working directory. You may have to fiddle with the includes, but for the most part it can exist as its own entity.

So what does the kernel makefile for our own module look like?

Kernel Module Makefile

It is actually more basic than most of your user space C applications.

# Makefile for compiling Kernel
# modules on the fly.
obj-m = erik_calc.o
KVERSION = $(shell uname -r)
all:
        make -C /lib/modules/$(KVERSION)/build M=$(PWD) modules
clean:
        make -C /lib/modules/$(KVERSION)/build M=$(PWD) clean
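The Makefile above builds erik_calc.o from a source file erik_calc.c, which is not shown here. As a rough guide, a minimal module skeleton of the kind such a file would contain looks like this (the names and printk messages are placeholders, not the original module; it needs the kernel headers and will not compile as a normal user space program):

```c
/* erik_calc.c - minimal kernel module skeleton (placeholder names). */
#include <linux/init.h>
#include <linux/module.h>

MODULE_LICENSE("GPL");

static int __init erik_calc_init(void)
{
	printk(KERN_INFO "erik_calc: module loaded\n");
	return 0;	/* returning non-zero would abort the load */
}

static void __exit erik_calc_exit(void)
{
	printk(KERN_INFO "erik_calc: module unloaded\n");
}

module_init(erik_calc_init);
module_exit(erik_calc_exit);
```

The init function runs at insmod/modprobe time and the exit function at rmmod time; the printk output lands in the kernel log (dmesg).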

So from the command line you can execute the commands you are familiar with: make, and make clean to remove all the .o files and such. Let’s build our module and check out the output.

$ make
make -C /lib/modules/2.6.32-23-generic/build M=/***/**/do_work/ modules
make[1]: Entering directory `/usr/src/linux-headers-2.6.32-23-generic'
  Building modules, stage 2.
  MODPOST 1 modules
make[1]: Leaving directory `/usr/src/linux-headers-2.6.32-23-generic'
--------- Current Directory After Compilation ---------
$ ls -l
total 28
-rw-r--r-- 1 erik erik  438 2011-04-06 15:57 erik_calc.c
-rw-r--r-- 1 erik erik 3619 2011-04-06 16:01 erik_calc.ko
-rw-r--r-- 1 erik erik  690 2011-04-06 16:01 erik_calc.mod.c
-rw-r--r-- 1 erik erik 2568 2011-04-06 16:01 erik_calc.mod.o
-rw-r--r-- 1 erik erik 1864 2011-04-06 16:01 erik_calc.o
-rw-r--r-- 1 erik erik  173 2011-04-06 16:02 Makefile
-rw-r--r-- 1 erik erik   44 2011-04-06 16:03 modules.order
-rw-r--r-- 1 erik erik    0 2011-04-06 16:01 Module.symvers

The most important file we are looking for is the *.ko file, or kernel object file. This is the file that we can load into our running kernel using modprobe or insmod. So that basic Makefile will compile a kernel module for our running kernel…crazy! But what if you need to cross-compile your kernel module for a different architecture? Guess what, you can do that too!

Kernel Makefile For Cross-Compilation

To cross-compile our kernel module we just have to specify the compiler and architecture to use during the compilation phase. Let’s say we want to cross-compile this module for the ARM architecture. ARM is very popular these days thanks to its low power usage and its use in smart phones. This example also shows how to build against a different kernel than the running one; we just have to specify where that kernel source is. I have the armeb-linux compiler within my Bash $PATH variable; if your compiler lives somewhere else you can pass in the entire path.

# Cross compilation Makefile for ARM
KERN_SRC=/home/erik/linux-2.6.30.2
obj-m := erik_calc.o
 
all:
        make -C $(KERN_SRC) ARCH=arm CROSS_COMPILE=armeb-linux- M=`pwd` modules
clean:
        make -C $(KERN_SRC) ARCH=arm CROSS_COMPILE=armeb-linux- M=`pwd` clean

Pay special note to the ARCH and CROSS_COMPILE arguments that are passed in during the build phase. If you wanted to compile for MIPS you could just specify ‘mips’, as long as you have a MIPS cross-compiler. There you have it, now you can compile your own kernel module… you just have to code something now.

Clear The Linux Cache

If you have ever kept your Linux machine running for extended periods of time (months), you will begin to notice that your system increasingly uses the swap partition and that the cached area grows. Linux stores the page cache, dentries and inodes in memory to speed up reads and writes of the more commonly and recently used data segments. This increases speed and efficiency for the applications you happen to be running, but wouldn’t it be nice to be able to clear all of that cache and reclaim the memory? Well guess what, you can!

Here is an example of my current system resources using the top command:

Mem:   2960104k total,  2884464k used,    75640k free,   325464k buffers
Swap:  2931820k total,   281840k used,  2649980k free,   344532k cached

As you can see my system is utilizing a lot of my swap space, and it would be nice to clear that and the cache out. This can be done if your kernel version is >= 2.6.16. A proc variable gives us the ability to clear specific portions, for instance:

  1. To free pagecache: echo 1 > /proc/sys/vm/drop_caches
  2. To free dentries and inodes: echo 2 > /proc/sys/vm/drop_caches
  3. To free pagecache, dentries and inodes: echo 3 > /proc/sys/vm/drop_caches

To execute these commands you must be running as root; note that a plain sudo echo 3 > /proc/sys/vm/drop_caches will not work, because the shell performs the redirection as your unprivileged user (use sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches' or echo 3 | sudo tee /proc/sys/vm/drop_caches instead). It is also recommended to run sync first: drop_caches only discards clean objects, and sync flushes dirty pages out to disk so that they can be dropped.

Executing the command on my system produces the following results:

# sync; echo 3 > /proc/sys/vm/drop_caches
Mem:   2960104k total,  2220148k used,   739956k free,      576k buffers
Swap:  2931820k total,   281628k used,  2650192k free,   144664k cached

Wow! Look at the memory buffer difference, and the amount of now free memory.

A lot of this information came from Linux Insight, check it out for a thorough discussion on its uses.

Linux Interface Calls With ‘ioctl’ In C

There is a set of socket calls that can be performed on an interface using the ioctl function. I have written a few articles about interacting with an interface via socket and ioctl calls.

You will notice that each ioctl call uses defines such as SIOCGIFADDR and SIOCSIFHWADDR, to name a few. I suppose the questions are: how do you know which request is required for each action, and where can you find the entire list of options? The first time I attempted an interface modification I had to bring an interface up or down based on some circumstances, but had no idea how to do it. I found some arbitrary code that referenced these odd defines, so I went all out and did a:

$ find / -print | xargs grep "SIOCGIFADDR"

I waited about 30 minutes and came back to my computer. In hindsight I should have just looked at the includes in the code example to see whether they held more information about these ioctl calls.

The ioctl calls I am using operate on a socket file descriptor, but ioctl is actually a generic interface to any device type, as long as the interface is defined in the corresponding kernel code. This article gives a basic rundown of using ioctl; another place to look is the kernel driver code, which defines the available ioctl interfaces. But I digress.

I would like to include the list of the socket-level I/O control calls as defined in linux/sockios.h.

/* Linux-specific socket ioctls */
#define SIOCINQ		FIONREAD
#define SIOCOUTQ	TIOCOUTQ
 
/* Routing table calls. */
#define SIOCADDRT	0x890B		/* add routing table entry	*/
#define SIOCDELRT	0x890C		/* delete routing table entry	*/
#define SIOCRTMSG	0x890D		/* call to routing system	*/
 
/* Socket configuration controls. */
#define SIOCGIFNAME	0x8910		/* get iface name		*/
#define SIOCSIFLINK	0x8911		/* set iface channel		*/
#define SIOCGIFCONF	0x8912		/* get iface list		*/
#define SIOCGIFFLAGS	0x8913		/* get flags			*/
#define SIOCSIFFLAGS	0x8914		/* set flags			*/
#define SIOCGIFADDR	0x8915		/* get PA address		*/
#define SIOCSIFADDR	0x8916		/* set PA address		*/
#define SIOCGIFDSTADDR	0x8917		/* get remote PA address	*/
#define SIOCSIFDSTADDR	0x8918		/* set remote PA address	*/
#define SIOCGIFBRDADDR	0x8919		/* get broadcast PA address	*/
#define SIOCSIFBRDADDR	0x891a		/* set broadcast PA address	*/
#define SIOCGIFNETMASK	0x891b		/* get network PA mask		*/
#define SIOCSIFNETMASK	0x891c		/* set network PA mask		*/
#define SIOCGIFMETRIC	0x891d		/* get metric			*/
#define SIOCSIFMETRIC	0x891e		/* set metric			*/
#define SIOCGIFMEM	0x891f		/* get memory address (BSD)	*/
#define SIOCSIFMEM	0x8920		/* set memory address (BSD)	*/
#define SIOCGIFMTU	0x8921		/* get MTU size			*/
#define SIOCSIFMTU	0x8922		/* set MTU size			*/
#define SIOCSIFNAME	0x8923		/* set interface name */
#define	SIOCSIFHWADDR	0x8924		/* set hardware address 	*/
#define SIOCGIFENCAP	0x8925		/* get/set encapsulations       */
#define SIOCSIFENCAP	0x8926		
#define SIOCGIFHWADDR	0x8927		/* Get hardware address		*/
#define SIOCGIFSLAVE	0x8929		/* Driver slaving support	*/
#define SIOCSIFSLAVE	0x8930
#define SIOCADDMULTI	0x8931		/* Multicast address lists	*/
#define SIOCDELMULTI	0x8932
#define SIOCGIFINDEX	0x8933		/* name -> if_index mapping	*/
#define SIOGIFINDEX	SIOCGIFINDEX	/* misprint compatibility :-)	*/
#define SIOCSIFPFLAGS	0x8934		/* set/get extended flags set	*/
#define SIOCGIFPFLAGS	0x8935
#define SIOCDIFADDR	0x8936		/* delete PA address		*/
#define	SIOCSIFHWBROADCAST	0x8937	/* set hardware broadcast addr	*/
#define SIOCGIFCOUNT	0x8938		/* get number of devices */
 
#define SIOCGIFBR	0x8940		/* Bridging support		*/
#define SIOCSIFBR	0x8941		/* Set bridging options 	*/
 
#define SIOCGIFTXQLEN	0x8942		/* Get the tx queue length	*/
#define SIOCSIFTXQLEN	0x8943		/* Set the tx queue length 	*/
 
/* SIOCGIFDIVERT was:	0x8944		Frame diversion support */
/* SIOCSIFDIVERT was:	0x8945		Set frame diversion options */
 
#define SIOCETHTOOL	0x8946		/* Ethtool interface		*/
 
#define SIOCGMIIPHY	0x8947		/* Get address of MII PHY in use. */
#define SIOCGMIIREG	0x8948		/* Read MII PHY register.	*/
#define SIOCSMIIREG	0x8949		/* Write MII PHY register.	*/
 
#define SIOCWANDEV	0x894A		/* get/set netdev parameters	*/
 
/* ARP cache control calls. */
		    /*  0x8950 - 0x8952  * obsolete calls, don't re-use */
#define SIOCDARP	0x8953		/* delete ARP table entry	*/
#define SIOCGARP	0x8954		/* get ARP table entry		*/
#define SIOCSARP	0x8955		/* set ARP table entry		*/
 
/* RARP cache control calls. */
#define SIOCDRARP	0x8960		/* delete RARP table entry	*/
#define SIOCGRARP	0x8961		/* get RARP table entry		*/
#define SIOCSRARP	0x8962		/* set RARP table entry		*/
 
/* Driver configuration calls */
 
#define SIOCGIFMAP	0x8970		/* Get device parameters	*/
#define SIOCSIFMAP	0x8971		/* Set device parameters	*/
 
/* DLCI configuration calls */
 
#define SIOCADDDLCI	0x8980		/* Create new DLCI device	*/
#define SIOCDELDLCI	0x8981		/* Delete DLCI device		*/
 
#define SIOCGIFVLAN	0x8982		/* 802.1Q VLAN support		*/
#define SIOCSIFVLAN	0x8983		/* Set 802.1Q VLAN options 	*/
 
/* bonding calls */
 
#define SIOCBONDENSLAVE	0x8990		/* enslave a device to the bond */
#define SIOCBONDRELEASE 0x8991		/* release a slave from the bond*/
#define SIOCBONDSETHWADDR      0x8992	/* set the hw addr of the bond  */
#define SIOCBONDSLAVEINFOQUERY 0x8993   /* rtn info about slave state   */
#define SIOCBONDINFOQUERY      0x8994	/* rtn info about bond state    */
#define SIOCBONDCHANGEACTIVE   0x8995   /* update to a new active slave */
 
/* bridge calls */
#define SIOCBRADDBR     0x89a0		/* create new bridge device     */
#define SIOCBRDELBR     0x89a1		/* remove bridge device         */
#define SIOCBRADDIF	0x89a2		/* add interface to bridge      */
#define SIOCBRDELIF	0x89a3		/* remove interface from bridge */
 
/* hardware time stamping: parameters in linux/net_tstamp.h */
#define SIOCSHWTSTAMP   0x89b0
 
/* Device private ioctl calls */
 
/*
 *	These 16 ioctls are available to devices via the do_ioctl() device
 *	vector. Each device should include this file and redefine these names
 *	as their own. Because these are device dependent it is a good idea
 *	_NOT_ to issue them to random objects and hope.
 *
 *	THESE IOCTLS ARE _DEPRECATED_ AND WILL DISAPPEAR IN 2.5.X -DaveM
 */
 
#define SIOCDEVPRIVATE	0x89F0	/* to 89FF */

Now the table is here for easy perusal. But what do these hex values really mean? Each value identifies a request that the kernel device driver code services, returning information to (or accepting it from) the userspace application. In my earlier articles I use a few of these defines to retrieve interface information like the name and interface index. I would suggest trying out a few of these calls; try adding a route with SIOCADDRT.

Static Variables in C

The definition of static as an adjective is: lacking in movement, action, or change, especially in a way viewed as undesirable or uninteresting: “the whole ballet appeared too static”. Well guess what, that is essentially what it means for your C variables declared static as well. Static variables come in quite handy if you need to keep a value between function calls, rather than having an ephemeral variable that loses its value on every iteration.

A static variable’s lifetime spans the entire execution of the program, whereas local variables exist only within the scope of their function. To illustrate the use of a static variable, here is an example for your perusal:

#include <stdio.h>
 
void static_function_test() {
        static int a = 0; // a is initialized only once, on the initial call
        fprintf(stdout, "%d\n", a);
        a++;
}
 
int main(int argc, char **argv) {
       static_function_test(); // prints 0
       static_function_test(); // prints 1
       static_function_test(); // prints 2
       return 0;
}

If I had not declared ‘int a’ as static, then every time I called the ‘static_function_test()’ function, the output of ‘a’ would always be 0. The static declaration is useful for counters, or any instance where you need a variable to retain its previous value.

In C, the static declaration is actually known as a storage class. From Wikipedia:

1. Static global variables: variables declared as static at the top level of a source file (outside any function definitions) are only visible throughout that file (“file scope”, also known as “internal linkage”).

2. Static local variables: variables declared as static inside a function are statically allocated while having the same scope as automatic local variables. Hence whatever values the function puts into its static local variables during one call will still be present when the function is called again.

There is only one thing left to do, go try it out yourself!

How To: Install C Package From Source

This article outlines installing packages from source into your Linux system.

Sometimes the luxury of apt-get install or yum install is not an option, and a tar.gz or tar.bz2 of the source code is the only way to get the package you need. Basically you have to unpack the tar, compile the code, and finally install the newly compiled binary. Simple, right? I will outline the process here.

1. Unpack The Source Package Using tar -xvzf

Let’s say you have a tar.gz package; you must unpack the tarball first. For example:

erik@debian:~$ tar xvzf data.tar.gz (or tar xvjf data.tar.bz2 if it’s a bz2 file)

This will unpack the tarball into a data/ directory. Enter that directory. (cd data)

2. Run ./configure To Setup Makefile

When you download a source package, the designers usually take into account that each Linux distribution and architecture (32-bit or 64-bit, different endianness) has different compilers and may need to be configured differently. This is where ./configure comes into play. It sets up the Makefile so that the correct libraries and compilers are detected, or informs you if your system does not meet the basic requirements to build the package.

To execute the ./configure:

data@debian:~/data$ ./configure 
checking for g++... g++
checking for C++ compiler default output file name... a.out
checking whether the C++ compiler works... yes
checking whether we are cross compiling... no
checking for suffix of executables...
checking for suffix of object files... o
checking whether we are using the GNU C++ compiler... yes
checking whether g++ accepts -g... yes
checking for win32... no
checking for Qt >= 4.1.0... 4.2.1
...

This will output information about what your system provides for compilation. If everything works correctly we can move on to the compilation step.

3. Compile Source Using Makefile make

Now we can get down to business. After configure has finished, a Makefile will be present to compile your application from scratch. If everything went as planned, it is only one more step to installing the program. Enter the command make. This will begin compiling your program and may take quite a while, depending on the package you are trying to install.

data@debian:~/data$ make
g++ -c -pipe -O2 -Wall -W -D_REENTRANT  -DQT_NO_DEBUG -DQT_XML_LIB 
-DQT_GUI_LIB -DQT_NETWORK_LIB -DQT_CORE_LIB -I/usr/share/qt4/mkspecs/linux-g++ 
-I. -I/usr/include/qt4/QtCore
-I/usr/include/qt4/QtCore -I/usr/include/qt4/QtNetwork -I/usr/include/qt4/QtNetwork 
-I/usr/include/qt4/QtGui -I/usr/include/qt4/QtGui -I/usr/include/qt4/QtXml 
-I/usr/include/qt4/QtXml -I/usr/include/qt4 -Isrc/gui/common -Isrc/gui/help/browser 
-Isrc/lang -Isrc -Isrc/config -Isrc/control -Isrc/gen/moc -Isrc/gen/ui 
-o obj/mainwindow.o src/gui/mainwindow.cpp

Barring any errors (it will show you if there were any), we can continue on to the installation step.

4. Install Package To System Using make install

Finally, the package is compiled and ready for all users of the system, but there is just one problem… it is not yet installed to a directory where everyone can access it. In some cases, you have to run as root or use sudo make install to actually install the software.

data@debian:~/data$ sudo make install

or

data@debian:~/data$ su
 // enter root password
data@debian:~/data# make install

If all went as planned you have now unpacked, configured, compiled, and installed your new source package to your system. While it is not as elegant as apt-get install or yum install, sometimes this is the only way to install packages to your system.

Follow A Packet With ‘traceroute’

Ever wondered how a packet travels from your computer to a website? Or why, in some circumstances, it takes longer to get to google.com than it did the day before? Some of the slowdowns are due to the routers that make up the Internet, and sometimes you can thank your ISP. With traceroute (tracert in Windows), you can follow a packet’s round trip from your computer to a server and back. The traceroute tool displays how many hops the packet took (a hop being each router or intermediate device the packet traveled through on the way to its destination), the IP address, and hostname if resolvable, of each router along the way, and the time it took to reach each intermediate device.

The traceroute utility, like most utilities, has many flag options; here is a filtered list of the most useful pieces:

traceroute [ -46dFITnreAUV ] [ -f first_ttl ] [ -g gate,... ] [ -i device ] 
[ -m max_ttl ] [ -N squeries ] [ -p port ] [ -t tos ] [ -l flow_label ] 
[ -w waittime ] [ -q nqueries ] [ -s src_addr ] [ -z sendwait ] host [ packetlen ]
Options:
  -4                          Use IPv4
  -6                          Use IPv6
  -d  --debug                 Enable socket level debugging
  -F  --dont-fragment         Do not fragment packets
  -g gate,...  --gateway=gate,...
                              Route packets through the specified gateway
                              (maximum 8 for IPv4 and 127 for IPv6)
  -I  --icmp                  Use ICMP ECHO for tracerouting
  -T  --tcp                   Use TCP SYN for tracerouting
  -i device  --interface=device
                              Specify a network interface to operate with
  -m max_ttl  --max-hops=max_ttl
                              Set the max number of hops (max TTL to be
                              reached). Default is 30
  -n                          Do not resolve IP addresses to their domain names
  -p port  --port=port        Set the destination port to use. It is either
                              initial udp port value for "default" method
                              (incremented by each probe, default is 33434), or
                              initial seq for "icmp" (incremented as well,
                              default from 1), or some constant destination
                              port for other methods (with default of 80 for
                              "tcp", 53 for "udp", etc.)
  -q nqueries  --queries=nqueries
                              Set the number of probes per each hop. Default is
                              3
  -A  --as-path-lookups       Perform AS path lookups in routing registries and
                              print results directly after the corresponding
                              addresses
  -U  --udp                   Use UDP to particular port for tracerouting
                              (instead of increasing the port per each probe),
                              default port is 53
  -P prot  --protocol=prot    Use raw packet of protocol prot for tracerouting
  --mtu                       Discover MTU along the path being traced. Implies
                              `-F -N 1'
  --back                      Guess the number of hops in the backward path and
                              print if it differs

While it may look daunting, I will outline some basic uses of the tool. To run a basic traceroute:

$ traceroute google.com
traceroute to google.com (74.125.53.104), 30 hops max, 60 byte packets
 1  debian (192.168.3.1)  0.180 ms  0.142 ms  0.135 ms
 2  x.x.x.x (x.x.x.x)  7.162 ms  7.189 ms  7.448 ms
 3  rd2cv-ge7-1-0-2.gv.shawcable.net (64.59.162.242)  13.339 ms  13.364 ms  17.688 ms
 4  rc1wt-pos3-0-0.wa.shawcable.net (66.163.77.182)  19.773 ms  20.056 ms  20.039 ms
 5  rc6wt-tge0-15-0-0.wa.shawcable.net (66.163.68.46)  22.592 ms  22.636 ms  22.613 ms
 6  74.125.48.233 (74.125.48.233)  19.759 ms  19.267 ms  19.325 ms
 7  209.85.249.34 (209.85.249.34)  19.316 ms 209.85.249.32 (209.85.249.32)  17.528 ms 209.85.249.34 (209.85.249.34)  56.147 ms
 8  66.249.94.199 (66.249.94.199)  15.776 ms 66.249.94.201 (66.249.94.201)  15.752 ms 66.249.94.197 (66.249.94.197)  19.818 ms
 9  216.239.46.212 (216.239.46.212)  26.981 ms 216.239.46.200 (216.239.46.200)  26.855 ms 216.239.46.208 (216.239.46.208)  69.883 ms
10  64.233.174.123 (64.233.174.123)  26.841 ms 64.233.174.129 (64.233.174.129)  26.769 ms 216.239.48.167 (216.239.48.167)  26.754 ms
11  72.14.232.10 (72.14.232.10)  31.015 ms 72.14.232.70 (72.14.232.70)  27.408 ms  26.492 ms
12  pw-in-f104.1e100.net (74.125.53.104)  25.626 ms  25.652 ms  25.614 ms

Note: I have replaced my IP address with X’s; when you run your own traceroute you will see your routable IP.

So what do we make of this? As I mentioned above, each line gives us the hop count, the router’s hostname and IP address, and the round-trip time for each of the three probes sent to that router. Each hop in this list is a router our packet traversed on the way to its destination. The hostnames of these intermediate devices also give us some geographical information. For example:

3  rd2cv-ge7-1-0-2.gv.shawcable.net (64.59.162.242)  13.339 ms  13.364 ms  17.688 ms
4  rc1wt-pos3-0-0.wa.shawcable.net (66.163.77.182)  19.773 ms  20.056 ms  20.039 ms
5  rc6wt-tge0-15-0-0.wa.shawcable.net (66.163.68.46)  22.592 ms  22.636 ms  22.613 ms

Hop 3 has “gv.shawcable”, Greater Vancouver; at hop 4 the packet crosses the border into “wa.shawcable”, Washington state. Now that we know the routers in our path, we can ping one of them directly (hop 9 from the first trace):

$ ping 216.239.46.212
PING 216.239.46.212 (216.239.46.212) 56(84) bytes of data.
64 bytes from 216.239.46.212: icmp_seq=1 ttl=56 time=26.2 ms
64 bytes from 216.239.46.212: icmp_seq=2 ttl=56 time=24.2 ms

Notice the time is essentially the same as in the traceroute above. This is because our pings travel the same route; they simply stop at that router instead of continuing the entire path to google.com.

Use Flags For Hop Count, Query, and Wait Times in Traceroute

The Wikipedia article for traceroute has an interesting example.

$ traceroute -w 3 -q 1 -m 16 example.com
traceroute to example.com (192.0.32.10), 16 hops max, 60 byte packets
 1  spartan.localdomain (192.168.2.1)  0.185 ms
 2  70.67.156.1 (70.67.156.1)  7.446 ms
 3  rd2cv-ge7-1-0-2.gv.shawcable.net (64.59.162.242)  13.906 ms
 4  rc1wt-pos3-0-0.wa.shawcable.net (66.163.77.182)  44.708 ms
 5  xe-11-1-0.edge1.Seattle3.Level3.net (4.71.152.49)  67.470 ms
 6  ae-31-51.ebr1.Seattle1.Level3.net (4.68.105.30)  17.405 ms
 7  4.69.132.49 (4.69.132.49)  36.370 ms
 8  ae-34-34.ebr4.SanJose1.Level3.net (4.69.153.34)  43.512 ms
 9  ae-5-5.ebr2.SanJose5.Level3.net (4.69.148.141)  38.893 ms
10  ae-6-6.ebr2.LosAngeles1.Level3.net (4.69.148.201)  46.692 ms
11  ae-62-62.csw1.LosAngeles1.Level3.net (4.69.137.18)  45.929 ms
12  ae-31-80.car1.LosAngeles1.Level3.net (4.69.144.131)  46.317 ms
13  INTERNET-CO.car1.LosAngeles1.Level3.net (4.71.140.222)  103.076 ms
14  www.example.com (192.0.32.10)  102.329 ms

In this example, -w sets the wait time per probe (three seconds here), -q sets the number of queries per hop (one instead of three; note the single time column), and -m caps the hop count (if we were tracing an IP address in Australia from Canada, we might need more than 16 hops). Note also how much information traceroute has garnered: the packet travels Vancouver->Seattle->San Jose->Los Angeles, all from a single command. It also suggests that several of these routers sit in the same facility, since their times are nearly identical and their names almost match. Look at hops 10, 11, and 12 above:

ae-6-6.ebr2.LosAngeles1.Level3.net (4.69.148.201)  46.692 ms
ae-62-62.csw1.LosAngeles1.Level3.net (4.69.137.18)  45.929 ms
ae-31-80.car1.LosAngeles1.Level3.net (4.69.144.131)  46.317 ms

The main difference between these routers is the prefix: “ebr2”, “csw1”, and “car1” each carry meaning in Level3’s naming scheme. If you were to enter these into Google, most likely some interesting information would appear.

How Does Traceroute Function?

By default traceroute sends UDP packets with an increasing time-to-live (TTL), starting at 1. Each router that decrements the TTL to zero discards the packet and sends back an ICMP Time Exceeded message, revealing that hop’s address. When a probe finally reaches the destination, the unused UDP port elicits an ICMP Port Unreachable reply instead, which tells traceroute the trace is complete.

Traceroute may also append an annotation to a hop: !H, !N, or !P (host, network, or protocol unreachable), !S (source route failed), !F (fragmentation needed), !X (communication administratively prohibited), !V (host precedence violation), !C (precedence cutoff in effect), or !&lt;num&gt; (some other ICMP unreachable code).

Traceroute Protocol Options

Modern traceroute can also use ICMP echo (-I), TCP SYN (-T), a full TCP connection attempt, UDP to a fixed port (-U), and even raw IP packets of an arbitrary protocol (-P) if you are brave. This helps when a firewall is in the way: a firewall may permit port 80 (HTTP), so you can force traceroute to use TCP on port 80 for your trace.