Linux In The Shell aims to explore the use of many commands a user can run in the Bash shell. Tutorials include a write-up with examples, an audio component about the write-up, and a video component to demonstrate the usage of the command.
The website is http://www.linuxintheshell.com/
Episode 32 of Linux in the Shell talks about the use of the cat command. Learn the different switches to cat and how, through the use of redirection, cat becomes more than just a tool to view the contents of a file. For the full write-up of the command and the corresponding video examples check out http://www.linuxintheshell.com/2013/06/18/episode-032-cat/
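As a quick sketch of cat with redirection (the filenames here are hypothetical, not from the episode):

```shell
# Create two small sample files
printf 'first file\n' > part1.txt
printf 'second file\n' > part2.txt

# View a file's contents
cat part1.txt

# Concatenate both files into a new file via output redirection
cat part1.txt part2.txt > combined.txt

# Append another copy with >>
cat part2.txt >> combined.txt

cat combined.txt
```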
Episode 31 of Linux in the Shell discusses the use of the who command. The who command does more than just identify who is logged into a system. Who is coupled with init and will produce statistical information about the system since the last boot. Make sure you visit the entry on http://www.linuxintheshell.com/2013/06/04/episode-031-who/ to get the full write-up of the who command and for further information in the bibliography on topics discussed.
Episode 30 of Linux in the Shell talks about the use of the vmstat command. Learn about Linux virtual memory management and the files in /proc where vmstat gathers information. For the full write-up of the command and the corresponding video examples check out http://www.linuxintheshell.com/2013/05/22/episode-030-vmstat/
This episode of LITS talks about using the Apache Benchmark utility to test websites. Learn how to use Apache Benchmark and interpret its results.
Link to the full episode and video http://www.linuxintheshell.com/2013/05/10/episode-029-ab-apache-benchmark/
Episode 28 of Linux in the Shell talks about extended attributes and how to view them with lsattr and change them with chattr. Attributes are discussed in some detail and those that are mutable by chattr are noted.
Episode 27 of Linux in the Shell continues on with looking at some mathematical commands. Four programs are discussed:
- factor - gives you the prime factors of a number
- primes - lists all the prime numbers between a starting and an optional stopping number
- seq - sequence lists all the numbers given a stopping point, or a starting and stopping point. You can also specify an increment or decrement value.
- arithmetic - a game from the bsd-games package that will quiz you on arithmetic problems.
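The first and third of these can be sketched in a couple of lines (factor and seq ship with GNU coreutils; primes and arithmetic come from bsd-games and may not be installed):

```shell
# factor prints the prime factorization of each argument
factor 12        # -> 12: 2 2 3

# seq prints a sequence: stop; start stop; or start increment stop
seq 3            # -> 1 2 3
seq 1 2 9        # odd numbers: 1 3 5 7 9
seq 10 -2 0      # counts down: 10 8 6 4 2 0
```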
Last episode of Linux in the Shell discussed the use of the bc command to perform math on the command line. This episode follows suit with a mathematical theme, picking up from the last examples of converting between different number systems or units. While bc can help you convert between units if you know the formulas, there is another program that will do it all for you: units. Chances are units is not installed by default, but a quick check in your package manager should allow you to add units to your daily tool set.
For more on this post and to see the video please see the main article
Math from the Linux command line is one of those tasks that is not as straightforward as you may think. There are many tools available for performing mathematical functions, but simple arithmetic is not as simple as just entering an equation. You can use the echo command to perform basic mathematical problems, but it does not allow for decimals, making division in particular problematic.
For more on this post and to see the video please see the main article http://www.linuxintheshell.com/2013/03/12/episode-025-bc/
The time program is a handy tool to not only gauge how much time in seconds it takes a program to run, but also to display how much user CPU time and system CPU time were used to execute the process. To understand these values you must grasp how the kernel handles time reporting for the process.
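For example, invoking time on a short command shows all three figures (a sketch; the exact output format varies between the bash builtin and /usr/bin/time, and the numbers below are illustrative):

```shell
time sleep 1
# Typical bash-builtin output:
# real    0m1.003s
# user    0m0.001s
# sys     0m0.002s
```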
For the complete show including video and a complete write up go to http://www.linuxintheshell.com/2013/02/26/episode-024-time-and-usrbintime/
Spring is in the air, Valentine's Day is just around the corner, and Dann "Sexy" Washko tells us all we need to know about dates in his regular Linux In The Shell series.
The date command will not only display or let you change the current date and time but is the go-to utility for getting date and time information into scripts. Invoked by itself the date command will output the current system date based upon the rules of the LC_TIME format. The LC_TIME format defines the rules for formatting dates and times. LC_TIME is a subset of locale which defines the overall environment based upon the chosen language and cultural conventions. You can see the current LC value by issuing the locale command. You can see time specific information for your system by issuing:
locale -k LC_TIME
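Beyond displaying the current date, a + format string controls exactly what date outputs, and GNU date's -d flag formats an arbitrary date (the date value below is just an example):

```shell
# Format the current date and time with a + format string
date +"%Y-%m-%d %H:%M:%S"

# GNU date can format an arbitrary date given with -d
date -d '2013-02-14' +%A     # prints the weekday name: Thursday
```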
The sort command does just that, it sorts input. Input can be a list of files, standard in, or files with standard in. The first example presents this simple file, shopping.txt, containing a list of items:
Issuing the sort command on this file:
Would present the following output:
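The article's shopping.txt and its sorted output are not reproduced here; a minimal stand-in file shows the behavior:

```shell
# A stand-in shopping list (hypothetical contents)
printf 'milk\napples\nbread\ncheese\n' > shopping.txt

sort shopping.txt
# apples
# bread
# cheese
# milk
```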
For more information including a complete video please see http://www.linuxintheshell.com/2013/01/29/episode-022-sort/
The previous two shows have discussed different ways to kill a process using kill and pkill. This episode will cover a third command, killall. The killall command is used to send a signal to every process that is running the identified command. For instance:

killall xterm

will send the default SIGTERM signal (recall, signal number 15) to all instances of xterm, terminating any that are running. If there were no xterm processes running then killall would report the following:
xterm: no process found
For the rest of this episode please check out the shownotes and video at http://www.linuxintheshell.com/2013/01/01/episode-21-killall/
This episode the focus will be on two commands that go hand-in-hand: pgrep and pkill. Like the kill command, pkill is used to send a signal to a process usually with the intent to terminate or stop the process. Instead of passing the Process ID (PID) you can pass the process name:
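A sketch of the pair in action, using a dummy background sleep as the target process:

```shell
# Start a long-running dummy process in the background
sleep 300 &

# pgrep reports the PIDs of processes whose name matches exactly (-x)
pgrep -x sleep

# pkill sends the default SIGTERM to every matching process
pkill -x sleep
```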
For the rest of this episode please check out the shownotes and video at http://www.linuxintheshell.com/2012/12/18/episode-20-pgrep-and-pkill/
The kill command is used in the shell to terminate a process. Kill works by sending a signal to the process; typically this signal is either SIGTERM or SIGKILL, but there are others that can be used. To properly use the kill command you need to know the Process ID, or PID, of the process you want to kill. Also be aware that some processes can spawn child processes of the same or similar name. For instance, if you are running the Chromium browser you may find multiple instances of the chromium process running. Killing one of these processes may not terminate all the processes, because typically all but the first process are child processes. Killing any or all of the child processes will not terminate the parent process, but terminating the parent process will typically kill the child processes.
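A minimal sketch using a background sleep as the victim process:

```shell
# Start a process and capture its PID
sleep 300 &
pid=$!

# Send the default SIGTERM (signal 15)
kill "$pid"

# If a process ignores SIGTERM, SIGKILL (signal 9) cannot be caught:
# kill -9 "$pid"
```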
For more see:
Today it's the turn of the ln command. The rest of the shownotes and video can be found at http://www.linuxintheshell.com/2012/11/20/episode-018-ln-command/

The ln command is used to create a link between an existing file and a destination, typically a newly created file. Some operating systems may call this creating a shortcut. Recall that Linux treats everything like a file, so you can create links to files, directories, or even devices.
There are two types of links:
Hard Links: A hard link is a connection where two files share the same inode.
Symbolic Links: A symbolic link is a special file that refers to a different file.
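Both kinds of link can be sketched in a few lines (hypothetical filenames):

```shell
printf 'hello\n' > original.txt

# Hard link: both names now share the same inode
ln original.txt hard.txt

# Symbolic link: a special file that refers to the target by name
ln -s original.txt soft.txt

# -i shows the inode numbers; original.txt and hard.txt match
ls -li original.txt hard.txt soft.txt
```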
Dann makes a welcome return with his podcast, blog and video entry over at http://www.linuxintheshell.com/2012/11/06/episode-017-split/
The split command is used to split up a file into smaller files. For example, if you need to transfer a 3GB file but are restricted to 500MB per transfer, you can split the 3GB file into about seven smaller files, each 500MB or less in size:

split -b500M some3GBfile

Once the files are transferred, restoring them is done using the cat command and directing the output of each piece back into the master file.
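The whole round trip can be sketched at a smaller scale (a 3MB file split into 500KB pieces, then reassembled; filenames are illustrative):

```shell
# Create a 3 MB test file (a stand-in for the 3GB example)
dd if=/dev/zero of=bigfile bs=1M count=3 2>/dev/null

# Split it into 500 KB pieces named xaa, xab, xac, ...
split -b500K bigfile

# Reassemble with cat; the glob expands in the right order
cat x?? > restored

# Verify the restored file matches the original
cmp -s bigfile restored && echo "files match"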
Please visit his site for more splitty goodness
This final installment on the top command will discuss the alternate displays for top. When starting top with the defaults one is presented with a full-screen view containing the summary window at the top and the task area at the bottom. The task area usually takes up three quarters of the top window. This display is not the only informative view that top has. By pressing the "A" key the "Alternate Display" view is presented, where the task area becomes four separate task areas of equal size called "field groups". The summary area remains where it is. Each of the four field groups displays the task information in a different manner.
For complete shownotes, and video see http://www.linuxintheshell.com/2012/09/25/episode-016-top-pt-4-alternate-windows/
Others would have given up by now. Not our Dann! He continues his epic coverage of the top command and in this episode will detail how to control the output of top via shortcut keys and command line switches.
For full notes go to http://www.linuxintheshell.com/2012/09/11/episode-015-top-part-3-control-top/
Dann continues his systematic analysis of the top command and you absolutely need to check out the text, and video for this one.
The top command is a very complex and feature-rich application. When executed from the command line, top displays two sections of information: a summary window and a task area listing the running-application fields.

The focus of this entry will be on the summary window of top. The first line contains the following information, in this order by default:
- The current time
- Uptime
- How many users are logged in
- Load averages
For the rest of the show notes and the video please go to http://www.linuxintheshell.com/2012/08/14/episode-013-top-of-top/
The tail command is used to print out the last 10 lines of a file to standard out. This command is a staple in a system administrator's tool kit and is especially handy when monitoring log files. The basic syntax is:

tail some_file

which will output the last 10 lines of the file. You can alter the number of lines with the -n, or --lines=, flag:
tail -n20 some_file
tail --lines=20 some_file
In some versions of tail you can get away with specifying the number of lines from the end with just a “-” and number:
tail -30 some_file
Instead of working backwards with the -n flag, you can specify a "+" and a number to start from that line and list the contents to the end:
tail -n+30 some_file
This will display the contents of some_file from line 30 to the end of the file.
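Both forms can be verified with a generated file (using seq to create 50 numbered lines):

```shell
# A 50-line test file: one number per line
seq 1 50 > some_file

tail -n 3 some_file     # last three lines: 48 49 50
tail -n +48 some_file   # from line 48 to the end: 48 49 50
```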
For the complete write up including video please go to http://www.linuxintheshell.com/2012/07/31/episode-012-tail/
The du command provides a summary of disk usage for files and directories. The default behavior is to show the number of blocks used by the contents of a directory or directories the command is run on. Usage is calculated recursively for directories. When du encounters a directory it will recurse into subdirectories, show the disk utilization of the files and directories under that directory, and then present a total for the topmost directory. This cascades down through each subdirectory: each subdirectory becomes the parent in turn, each child directory is summarized, and the parent is then totalled.
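The recursion is easy to see on a small hypothetical directory tree:

```shell
mkdir -p project/src project/docs
printf 'data\n' > project/src/a.txt
printf 'data\n' > project/docs/b.txt

# One line per subdirectory, then a total for the topmost directory
du project

# -s gives only the summary line; -h prints human-readable sizes
du -sh project
```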
For complete show notes see http://www.linuxintheshell.com/2012/07/17/episode-011-du-disk-usage/
The df command is used to report file system usage. The df command will show you the amount of storage available, used, and free per partition for each filesystem currently mounted on the system. Values are shown in blocks. http://www.linuxintheshell.com
Today's show is brought to you by the letter "w" and the number "9"
To be more specific it's about the w command and Linux load averages
and it's brought to you by Dann from Linux In The Shell. http://www.linuxintheshell.com/
In today's show Dann explains to us what it means to be free.
The free command is a handy snapshot into your system's memory and how much of it is being used. In conjunction with other tools like top you can begin to understand where your system resources are being utilized and weed out potential bottlenecks and bugs. But before jumping into the deep end of system analysis, you need to have a decent grasp on how the Linux kernel utilizes memory, or your initial observations may send you tearing through the interwebs looking for a solution to a problem that does not exist.
As ever catch the complete shownotes and video at http://www.linuxintheshell.com
This is LITS 007
Pay attention everyone, this is serious stuff. This is CHMOD, a powerful and dangerous operator that has infiltrated the heart of every Unix and Linux system. We have been receiving reports that it has also been behind many strange incidents leading to computer compromise and in some cases complete lock down.
Our American colleague, Special Agent Washko, will show us how to, in his own words "turn this bad boy around" so we can get it working for us.
As ever the extremely detailed shownotes can be found on his site http://www.linuxintheshell.com/2012/05/22/episode-007-chmod-and-unix-permissions/.
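A quick sketch of chmod in both octal and symbolic form (the filename is hypothetical; stat -c is the GNU form):

```shell
touch secret.txt

# Octal form: owner read/write (6), group read (4), others nothing (0)
chmod 640 secret.txt
stat -c %a secret.txt    # prints 640

# Symbolic form: add execute permission for the owner
chmod u+x secret.txt
stat -c %a secret.txt    # prints 740
```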
In our continuing journey around the command line, Dann takes us to visit the outer edges and talks about the pmount command:

pmount - mount arbitrary hotpluggable devices as normal user
As ever the very very detailed shownotes can be found on his site http://www.linuxintheshell.com
Don't forget that he also has a video component, and as ever this one is worth a watch.
Fear not Dann has not decided to branch and do a plumbing show. Rather he sticks with the plan and brings us yet another excellent explanation of a common unix utility, namely wc
Ever want to know how many lines are in a file? How about how many words are in a file, or even how many characters? Well then the "wc" command is just for you. The "wc" command, short for word count, is a very simple command that will print "newline, word, and byte counts for each file specified, and a total count for all files combined if more than one file is included."
Consider the following little ditty:
the linux wc command
for those not in the know
stands for word count and
does a lot you should know
It counts lines and words and bytes
producing output on site
quickly giving you the numbers
without any blunders
Executing the following command:

wc poem.txt

results in the following output:
9 40 215 poem.txt
To break it down:
- 9 lines
- 40 words
- 215 characters
In the fourth in his series, Dann shows us the benefits of the paste command:
The paste command merges the lines of two or more files or a file and standard in if a second file is not specified or a "-" is used in place of the second file. Consider the following two files. The first file, test1.txt contains the following lines:
The second file, test2.txt contains the following lines:
The paste command can be used to paste these two files like so:
paste test1.txt test2.txt
producing the following output:
one blue finch
Each line in test1.txt has been “pasted” to the corresponding line in test2.txt.
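The article's two sample files are not reproduced here; the stand-ins below are hypothetical, with only the first pair ("one" / "blue finch") taken from the output line shown above:

```shell
printf 'one\ntwo\nthree\n' > test1.txt
printf 'blue finch\nred robin\nyellow canary\n' > test2.txt

# Lines are joined with a tab by default
paste test1.txt test2.txt
# one     blue finch
# two     red robin
# three   yellow canary

# -d changes the delimiter, e.g. a comma
paste -d, test1.txt test2.txt
```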
http://www.linuxintheshell.com/2012/04/10/episode-004-paste/ for the complete shownotes, including video.
In the third in his series, Dann shows us the benefits of the cut command:
The cut command, as the man page states, "removes sections from each line of a file." The cut command can also be used on a stream, and it can do more than just remove sections. If a file is not specified or "-" is used, the cut command takes input from standard in. The cut command can be used to extract sections from a file or stream based upon specific criteria. An example of this would be cutting specific fields from a csv (comma separated values) file. For instance, cut can be used to extract the name and email address from a csv file with the following content:
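The article's sample csv is not shown here; a hypothetical contacts.csv illustrates the idea:

```shell
# A hypothetical csv: name, phone, email
printf 'Alice,555-0100,alice@example.com\nBob,555-0101,bob@example.com\n' > contacts.csv

# -d sets the field delimiter; -f picks fields 1 and 3 (name and email)
cut -d, -f1,3 contacts.csv
# Alice,alice@example.com
# Bob,bob@example.com
```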
http://www.linuxintheshell.com/2012/03/28/episode-003-cut/ for the complete shownotes, including video.
In the third in the series, Dann introduces us to the tr command.
Here's a flavour:
The tr, or translate (aka transliterate), command substitutes one set of characters for another set of characters, or deletes a specified set of characters. The tr command takes input from standard in and writes to standard out. This simple example of the tr command translates some numbers into a word:

echo "12234" | tr '1234' 'aple'
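That example maps each digit to a letter (1 to a, 2 to p, 3 to l, 4 to e), producing "apple". The delete mode mentioned above, plus the related squeeze mode, look like this (example inputs are hypothetical):

```shell
# -d deletes every character in the set instead of translating
echo "abc123def" | tr -d '0-9'     # prints: abcdef

# -s squeezes runs of repeated characters down to one
echo "aaabbb" | tr -s 'ab'         # prints: ab
```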
The entire article, including links to the videos can be found on his site:
In the second in the series, Dann concentrates on producing an image from the command line, QR codes to be precise.
He says: "The qrencode application is a tool to rapidly produce qrcodes. Qrcodes are handy little images that embed information many cell-phone cameras can read to do a number of tasks like provide a link to install applications, provide links to web sites or videos, or to add contacts into the address book. With qrencode, in seconds you can generate these images."
Find the excellent write up and video at http://www.linuxintheshell.com/2012/03/01/entry-001-qrencode/
or if you prefer:
Welcome to the first entry of Linux in the Shell. Before delving into specific commands, redirection will be explored as redirection will be used frequently in the examples going forward. The Unix philosophy posits program simplicity and that a program should do one thing and do it well (Mike Gancarz, the Unix Philosophy). Eric Raymond adds the Rule of Composition: "Design programs to be connected to other programs." Redirection is the glue that achieves this design.
Redirection is applied to any of the following standard streams to achieve results beyond simply outputting some value from a single command:
Standard Input (stdin) – 0
Standard Output (stdout) – 1
Standard Error (stderr) – 2
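The three streams can be redirected independently; a minimal sketch with hypothetical filenames:

```shell
# Redirect stdout (1) to a file, overwriting any previous contents
echo "hello" > out.txt

# Append to the same file with >>
echo "world" >> out.txt

# Redirect stderr (2) separately from stdout: the listing of out.txt
# goes to ok.txt, while the error about the missing file goes to err.txt
ls out.txt missing-file > ok.txt 2> err.txt

cat out.txt
```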
For the rest of this article and accompanying video please go to http://www.linuxintheshell.com/2012/02/16/entry-000-redirection/
The video can be downloaded from http://www.archive.org/download/LinuxInTheShellEpisode000-Redirection/lits-000.ogv