Linux tips & techniques for developers and system administrators.
(Following taken from: http://tutorials.papamike.ca/pub/lftp.html):
Every sysadmin should have a decent command-line client for transferring files (beyond scp, of course). The lftp program, written by Alexander Lukyanov, can handle seven file access methods: FTP, FTPS, HTTP, HTTPS, HFTP, FISH, and SFTP. The openssl library is required at compile time for FTPS and HTTPS to function (the FreeBSD port attempts to include this library by default).
HFTP is an FTP-over-HTTP-proxy protocol, useful in a web proxy scenario. FISH is a protocol that works over an SSH connection to a Unix account. SFTP is a protocol implemented in SSH2 as the sftp subsystem.
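As a quick illustration of non-interactive use, a mirror job over SFTP might look like the following (the wrapper function, host, and paths are my own placeholders, not from the original article; lftp's -e option runs a command string on startup):

```shell
# Hypothetical wrapper: mirror a remote tree into a local directory over
# SFTP, using lftp's -e option to run commands non-interactively.
lftp_mirror() {
    user=$1 host=$2 remote=$3 ldir=$4
    lftp -u "$user" -e "mirror $remote $ldir; quit" "sftp://$host"
}

# Example (placeholder names):
# lftp_mirror admin server.example.com /var/www /backup/www
```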
On CentOS 5.5, the default version of lftp is quite old: version 3.7.11, released 2009-03-20, while the current version is 4.1.3, released 2011-01-17. I had a few problems with lftp segfaulting, so I’ve created two binary RPMs to replace it, one 32-bit and one 64-bit.
To install, just remove the old version of lftp and install the new one using RPM:
rpm -e lftp
for 32 bit:
lftp-4.1.3-1.i386.rpm (1.3 KiB, 561 hits)

rpm -ivh lftp-4.1.3-1.i386.rpm
for 64 bit:
lftp-4.1.3-1.x86_64.rpm (1.3 KiB, 674 hits)

rpm -ivh lftp-4.1.3-1.x86_64.rpm
If you want to roll your own, you can get the source at:
Sometimes when building an open-source package using the “configure” script, you will get errors about missing libraries, even though you know the libraries are actually installed. This usually happens on a 64 bit system.
Assuming that the libraries are actually installed, and that the pkg-config program is installed and working, what is happening is that the libraries live in a different location than the configure script expects.
An easy solution is to pass configure an option pointing it at the directory where the 64-bit libraries actually live.
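A common fix on 64-bit CentOS (my assumption; the exact command from the original post was not preserved) is to point pkg-config at the 64-bit metadata directory and configure at the 64-bit libdir:

```
# Assumed fix for 64-bit library-detection failures: tell pkg-config
# where the 64-bit .pc files live and tell configure to use lib64.
export PKG_CONFIG_PATH=/usr/lib64/pkgconfig
./configure --libdir=/usr/lib64
```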
When managing many systems in a server farm or a virtual environment, quite often the same script needs to be run on all the systems. However, if the script happens to put a severe load on a common resource such as a storage device, the possibility exists that the storage device can be overwhelmed and all the scripts will run slowly. One way to avoid this is to stagger the running of the scripts on all the systems.
If you are trying to keep the systems identical and avoid customizing scripts or configuration files for each server, a simple way to delay is to use something on the system which is unique, such as the IP address.
The attached script does just that. It has reasonable defaults, but you can specify which octet to use when calculating a delay period, a multiplier for that octet, a network to execute on, and which network interface to use. If the network interface is not specified, it will find the first Ethernet interface and use that. The usage is as follows:
usage: scriptdelay.sh [ options ] script [ script options ]

Options:
  -o #        Octet to use (default: 4)
  -m #        Multiplier to use (default: 1)
  -n network  Only execute on this network (default: 10.11)
  -e iface    Network interface to use
As an example, we use this to stagger the running of the updatedb script. We need updatedb to run hourly, but don’t want 10 systems hitting the storage unit at the same time. So the following entry is in /etc/crontab:
0 * * * * root /usr/local/sbin/scriptdelay.sh -o 4 -m 3 -n 10.11 /usr/bin/updatedb
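The core idea can be sketched in a few lines (this is my own illustration, not the attached script; the helper name is made up):

```shell
#!/bin/sh
# Sketch of the idea behind scriptdelay.sh: turn one octet of the host's
# IP address into a per-host delay, so identical cron jobs on many
# machines start at staggered times.

# Compute a delay in seconds from an IP address, an octet position,
# and a multiplier.
ip_delay() {
    ip=$1 pos=$2 mult=$3
    octet=$(echo "$ip" | cut -d. -f"$pos")
    echo $(( octet * mult ))
}

# Example: host 10.11.0.42, fourth octet, multiplier 3 -> 126 seconds.
# sleep "$(ip_delay 10.11.0.42 4 3)" && exec /usr/bin/updatedb
```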
scriptdelay.tar.gz (1.8 KiB, 297 hits)
With the explosion of inexpensive video cameras for computer use, (both USB and IP cameras), it has become fairly easy to set up a comprehensive video surveillance system. You can go out and buy complete kits, which include a set of cameras and a central control station. Or, you could roll your own, giving you the ability to totally customize your solution for your budget and needs.
ZoneMinder is a free, open source package designed to do exactly that. We’ve been using it for several years to monitor the outside of our building. It can control a large number of cameras, and has a large feature list.
I recently had to rebuild our installation, and in the process I developed a script to take a bare-bones CentOS 5.5 system and install everything needed to run ZoneMinder. I was inspired by a set of steps written by a user called butlerm1977 in the Zoneminder forums. However, the steps he published didn’t work as written.
The attached script will install all needed packages, and the ZoneMinder system. I use the checkinstall program to build an RPM of the ZoneMinder system so that it is installed as a package instead of from a build directory. This enables both the ability to install it on other systems from an RPM as well as removing it, if so desired.
While ZoneMinder has the ability to control X10 devices, X10 control requires installation of a Perl module, which this script does not include. If you want X10 control, use the following commands (untested by me) to install the Perl module, switch on X10 in the options, then restart:
perl -MCPAN -e shell
install X10::ActiveHome
quit
If you don’t want to use cpan, you can install it manually as follows:
wget http://search.cpan.org/CPAN/authors/id/R/RO/ROBF/X10-0.03.tar.gz
tar xzf X10-0.03.tar.gz
cd X10-0.03
perl Makefile.PL
make
make install
This script has been tested on CentOS, both 32 and 64 bit, minimal install. While not tested, it will probably work on a simple desktop install.
The download was updated on 3/2/2011 to fix a small bug in the install.
installZoneminder.tar.gz (8.9 KiB, 408 hits)
These days virtual machines are all the rage. They make sense in a lot of areas: they can reduce floor space, power requirements, and costs, while simplifying testing and improving stability. This article is not about the reasons to have virtual machines, but rather how to control them from a command line.
Most Linux distributions use libvirt to control the virtual machines. From a GUI desktop a very convenient way to control the VMs is the Virtual Machine manager, also known as the virt-manager program. This program provides a fairly clean interface to the libvirt library, allowing you to create, delete, start, stop and configure the VMs. I will go into this program in more detail in another article.
Sometimes you need to control the VMs from a command line; for example, if you need an automated process to restart a hung or crashed VM. There are a number of ways to do this; for a fairly complete list, see http://www.linux-kvm.org/page/Management_Tools.
I had a need to restart one of a number of VMs, and have written the attached script to allow me to do so. The script can be run interactively or as an automated process.
restartVM.tar.gz (1.6 KiB, 600 hits)
To use it, install it in your path. The usage instructions are fairly simple; you can print them out with the command: restartVM.sh -h
I’ve reproduced it below:
usage: restartVM.sh options [vm name]

Options:
  -q      Quiet. Errors will still be shown. Repeat to hide all output
  -d      Just destroy the specified instance if it is currently running
  -s      Just stop the specified instance, if it is currently running
  -w [#]  Maximum time to wait for a running instance to shut down, range 0-300
  -h      Show this message
  vmname  Optionally, specify the VM name on the command line
To use this in non-interactive mode, you will need to specify the name of the virtual machine as it exists in both /etc/libvirt/qemu and the image directory; the two must match. Do not specify the file suffix; if you do, the script will fail.
The -d option is very dangerous. Destroying an instance stops it immediately; it is equivalent to pulling the plug on a running server.
The -w option specifies how long to wait before assuming that a server is either hung or unable to stop itself. It defaults to 60 seconds.
If run interactively, you will be asked to confirm the operation. You need to confirm by typing the word “yes” in capital letters.
There is little configuration needed. Just make sure that the following two variables are set properly:
IMAGEDIR Directory where the disk images are stored, usually /var/lib/libvirt/images
XMLDIR Directory where the qemu files are stored, usually in /etc/libvirt/qemu.
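Under the hood, the restart logic can be sketched with plain virsh commands (this is my rough approximation, not the attached script itself; the helper name and VM name are made up):

```shell
#!/bin/sh
# Sketch of a libvirt-based restart: ask the guest to shut down, wait up
# to a limit, pull the plug if it hasn't stopped, then boot it again.
restart_vm() {
    vm=$1 wait=${2:-60}
    virsh shutdown "$vm" >/dev/null 2>&1
    i=0
    while [ "$i" -lt "$wait" ]; do
        [ "$(virsh domstate "$vm")" = "shut off" ] && break
        sleep 1
        i=$((i + 1))
    done
    # Still running after the timeout? Destroy it (hard stop).
    [ "$(virsh domstate "$vm")" = "shut off" ] || virsh destroy "$vm" >/dev/null 2>&1
    virsh start "$vm" >/dev/null 2>&1
}

# Example: restart_vm demo-vm 60
```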
One of the core tasks of a good system administrator is monitoring systems, and responding to problems as quickly as possible.
Zabbix is an open-source monitoring system offering advanced monitoring, alerting, and visualisation features that are missing in other monitoring systems, even some of the best commercial ones.
Installing Zabbix, while not difficult, has a number of steps and requirements. To simplify the installation process, there are a number of HowTos on the web, as well as a few scripts. One of the better scripts, which I have used, is a bit outdated, so I’ve taken it and updated it for CentOS 5.5 and Zabbix 1.8.3.
The attached script was originally written by Brendon Baumgartner, the URL for the original posting is:
This is an updated script to install Zabbix 1.8.3 on CentOS/Red Hat 5. I have tested it on CentOS 5.5. The script was made for Zabbix 1.8.3, but if you modify the ZBX_VER variable in the script, it should work on any version in the 1.8 series.
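For example, switching the target version comes down to editing one variable. ZBX_VER is the variable name in the script; the filename and helper below are my own placeholders:

```shell
# Hypothetical illustration: bump ZBX_VER in a copy of the install
# script to target a different 1.8.x release before running it.
set_zbx_ver() {
    script=$1 ver=$2
    sed -i "s/^ZBX_VER=.*/ZBX_VER=$ver/" "$script"
}

# Example: set_zbx_ver install-zabbix.sh 1.8.4
```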
Basically, the script tries to do a few things and assumes some things:
- Only run this for NEW installations; you will lose data if you run it on an existing installation
- Run at your own risk
- Installs Zabbix 1.8.3 on CentOS 5
- Do not corrupt an existing system
- Be able to run the script over and over in the event that it errors
- Be somewhat flexible
- The database server, web server, and zabbix server all run on one box
This is a reworked script. It assumes the server is a minimal install: no optional packages at all are selected during the OS install. All necessary packages will be installed by the script. I’ve made the following changes:
- Changed the 20-second sleep at the beginning to a query
- Installs the mysql-server package, if necessary
- Installs the checkinstall script
- Installs PHP 5.3
- Installs the yum-priorities plugin
- Adds webtatic to the package dir (for PHP 5.3)
- Installs php-mbstring
- Updates php.ini to recommended values
- Installs the latest version of libxml2, getting the code from xmlsoft.org
- Adds priority lines to the CentOS-Base.repo and CentOS-Media.repo files
- Installs Dag’s GPG key (needed for rpmforge)
- Calls the checkinstall script to build binary RPMs
- Installs the binary RPM
- Installs the selected configuration file for MySQL, only if the script installs mysql-server
- Moves all minor functionality into separate functions, useful for future expandability
When the script is done, there will be at least two RPMs in /usr/src/redhat/RPMS/$arch/zabbix…..
You can use these RPMs to install Zabbix on other identical systems, as well as use the rpm manager to remove Zabbix from the system if you so desire.
I’ll be updating this in a few days with the ability to build either a server, agent or both. Right now it automatically builds both the server and the agent.
If you are installing on a RHEL 6 system (or CentOS 6, SL 6), use this link instead for an updated version
CentOS is usually used as a server, and as such quite often needs to deal with large files and large data transfers. The default filesystem for CentOS is ext3, which, while very reliable and proven, is not well suited to a large server environment. Quite a number of other filesystems are available; among the more popular are JFS, ReiserFS, and XFS.
JFS and ReiserFS
In order to have CentOS use a JFS or ReiserFS filesystem, you will need to install the CentOSPlus kernel and the creation tools. Use the following steps to prepare for the installation:
1. Edit /etc/yum.repos.d/CentOS-Base.repo and modify the following in the [centosplus] section:
enabled=1
includepkgs=kernel* jfsutils reiserfs-utils
2. In the [base] and [updates] sections, add the following:
exclude=kernel kernel-devel kernel-PAE-*
3. Finally, run the following command:
yum install kernel
4. Reboot and you’ll have JFS and ReiserFS support
JFS and ReiserFS usage details are beyond the scope of this page.
XFS is a high-performance journaling file system created by Silicon Graphics, originally for their IRIX operating system and later ported to the Linux kernel. XFS is particularly proficient at handling large files and at offering smooth data transfers.
The 64 bit version of CentOS does support XFS.
Note that XFS is not available for i386, since it has problems with 4K kernel stacks (in some situations).
1. Issue the following command
yum list available kmod-xfs\*
The results would be something like this (example is for CentOS-5 x86_64):
Available Packages
kmod-xfs.x86_64       0.4-2   extras
kmod-xfs-xen.x86_64   0.4-2   extras
2. You would then pick the module that you need and install it with a command similar to:
yum install kmod-xfs xfsdump xfsprogs
Note: the kernel module also has dmapi support, so you can add dmapi to the install line above if you want to use it. XFS usage details are beyond the scope of this page.
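Once the tools are installed and a filesystem has been created with mkfs.xfs, a typical /etc/fstab entry might look like this (device and mount point are placeholders, not from the original article):

```
/dev/sdb1   /data   xfs   defaults,noatime   0 0
```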
There are several repositories provided by CentOS and other 3rd party developers that offer software packages that are not included in the default base and updates repositories. While no list can be 100% complete, as anyone may announce an archive, it represents some major efforts and provides a summary of what each repository offers. These repositories have varying levels of stability, support and cooperation within the CentOS community.
I have used the following repositories with generally good success:
The Webtatic Yum repository is a CentOS 5 repository containing updated web-related packages. Its main goals are:
- to provide CentOS administrators with the latest stable minor releases of web development/hosting software, which are not provided in CentOS distribution minor releases.
- to serve as an additional installation option for some of Webtatic’s projects.
The CentOSPlus repository contains packages that are upgrades to the packages in the CentOS base and updates repositories. These packages are not part of the upstream distribution and extend CentOS’s functionality at the expense of upstream compatibility. Enabling this repository makes CentOS different from upstream, so you should understand the implications before enabling CentOSPlus. See the CentOSPlus readme file for CentOS 4 and CentOS 5, and browse the CentOSPlus directory on the CentOS mirrors for the architecture you intend to use.
RPMforge is one of the participating repositories in the rpmrepo project. This repository is sometimes referred to as the DAG repository or similar.
Extra Packages for Enterprise Linux (EPEL) is a volunteer-based community effort from the Fedora project to create a repository of high-quality add-on packages for Red Hat Enterprise Linux (RHEL) and its compatible spinoffs such as CentOS or Scientific Linux. Fedora is the upstream of RHEL and add-on packages for EPEL are primarily sourced from the Fedora repository and built against RHEL.
Installing Google Chrome will add the Google repository so your system will automatically keep Google Chrome up to date. If you don’t want Google’s repository, do “sudo touch /etc/default/google-chrome” before installing the package.
The following repositories, while I have not personally used them, also seem to be good sites:
This site (by a CentOS team member) provides a rebuild of selected packages from the archive formerly known as Fedora Extras, patched as needed for CentOS, as well as a number of other packages. This repository has a reputation for being stable and safe.
This repository provides many bleeding-edge applications and media utilities such as myth-tv.
The ELRepo Project focuses on hardware related packages to enhance your experience with Enterprise Linux. This includes filesystem drivers, graphics drivers, network drivers, sound drivers, webcam and video drivers.
This list is by no means exhaustive. There are many other repositories available. Be careful about which ones you use, and always backup your system before using an unknown repository.
Screen is a full-screen window manager that multiplexes a physical terminal between several processes (typically interactive shells). Each virtual terminal provides the functions of a DEC VT100 terminal and, in addition, several control functions from the ISO 6429 (ECMA 48, ANSI X3.64) and ISO 2022 standards (e.g. insert/delete line and support for multiple character sets). There is a scrollback history buffer for each virtual terminal and a copy-and-paste mechanism that allows moving text regions between windows.
When screen is called, it creates a single window with a shell in it (or the specified command) and then gets out of your way so that you can use the program as you normally would. Then, at any time, you can create new (full-screen) windows with other programs in them (including more shells), kill existing windows, view a list of windows, turn output logging on and off, copy-and-paste text between windows, view the scrollback history, switch between windows in whatever manner you wish, etc. All windows run their programs completely independent of each other. Programs continue to run when their window is currently not visible and even when the whole screen session is detached from the user’s terminal. When a program terminates, screen (per default) kills the window that contained it. If this window was in the foreground, the display switches to the previous window; if none are left, screen exits.
A simple use of screen is as follows:
To start a screen session, use the following command:

screen
You may get a screen of text describing the program, the license etc. If you get this screen, just press the <Return> key to continue. The screen will be cleared and you will then get a normal command prompt. You are now inside of a window within the screen program. This functions just like a normal shell except for a few special characters. Screen uses the command “Ctrl-A” as a signal to send commands to screen instead of the shell. To get help, just use “Ctrl-A” then “?”. You should now have the screen help page.
The following are the most common commands used during a screen session:
<Ctrl-A> ? Display the help screen(s)
<Ctrl-A> w List the current windows
<Ctrl-A> c Create a new screen
<Ctrl-A> n Next screen
<Ctrl-A> p Previous screen
<Ctrl-A> H Create a running log of the session
<Ctrl-A> k Kill the current window. If you have other windows, you will drop into one of those. If this is the last window, then you will exit screen.
Another way to leave is to detach from a window. This method leaves the processes running and simply closes the window. If you have long-running processes and need to close your SSH program, you can detach from the window using:

<Ctrl-A> d
This will drop you into your shell. All screen windows are still there and you can re-attach to them later.
So you are using screen and compiling that program. It is taking forever, and suddenly your connection drops. Don’t worry: screen will keep the compilation going. Log in to your system and list the running sessions:

screen -ls
To reattach to a screen, you use the following command:
screen -r [screen session]
If there is only a single screen session running, you don’t need to specify the session name; otherwise you get the session name from the “screen -ls” command.
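Named sessions make reattaching easier. A typical workflow might look like this (the session name "build" is my own example; -S, -ls, and -r are standard screen options):

```
$ screen -S build      # start a session named "build", run your long job,
                       # then press <Ctrl-A> d to detach
$ screen -ls           # later: list running/detached sessions
$ screen -r build      # reattach by name
```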
The logging function is very useful. Once started, screen keeps appending data to the log file across multiple sessions. This is handy for capturing what you have done, especially if you are making a lot of changes; if something goes awry, you can look back through your logs.
This only covers the most common uses of screen. It has a lot more power; to learn more, see the man page.
There are a number of online guides on how to convert a running Linux system to a RAID-1 setup. Rather than repeat the same information, I’m providing you with a set of scripts which will do the conversion automatically. While these scripts have been tested with multiple filesystems and in multiple ways, there could always be a configuration which would have problems. So BACK UP before you use these.
raid-conversion-ubuntu.tar.gz (5.9 KiB, 365 hits)
These scripts are designed to convert a running Ubuntu system from a single
hard drive to a RAID 1 system with two identical drives. Before running
these scripts, the new hard drive needs to be installed in the system.
These scripts have been tested with Ubuntu 8.04, Grub 1, and without LVM.
These scripts assume that the original disk is either sda or hda, and that
the new drive is either sdb or hdb.
These scripts are able to work with the following types of filesystems:
ext2, ext3, reiserfs, jfs and xfs
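For reference, the general degraded-array approach such conversions follow looks roughly like this (my own sketch with placeholder device names; it omits the bootloader and fstab work the scripts handle, so do not run it verbatim):

```
sfdisk -d /dev/sda | sfdisk /dev/sdb     # clone the partition table to the new disk
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
mkfs.ext3 /dev/md0                       # filesystem on the degraded array
# ... copy the data, update /etc/fstab and grub, reboot onto /dev/md0 ...
mdadm --add /dev/md0 /dev/sda1           # finally add the old disk to the array
```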
It is STRONGLY recommended that you back up your system BEFORE running these scripts.
The scripts MUST be placed in /usr/local/bin.