Linux tips & techniques for developers and system administrators.


Upgrading Existing CentOS 5 with Percona Server

By jbayer - Last updated: Saturday, December 31, 2011

Percona has fixed a number of bugs in MySQL and made improvements to it.  If you want to upgrade your MySQL server to Percona Server, the following steps will upgrade an existing CentOS 5 server.  Be sure to back up your database BEFORE doing this.
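A minimal pre-upgrade backup might look like the following sketch.  This is my own illustration, not part of the original post: the dated filename pattern and the use of mysqldump with root credentials are assumptions.

```shell
# Hedged sketch of a pre-upgrade backup.  Assumes mysqldump is installed
# and root credentials are available; the filename pattern is illustrative.
backup_name() {
  # Pure helper: dated, compressed dump filename.
  printf 'mysql-backup-%s.sql.gz' "$(date +%Y%m%d)"
}

backup_all_databases() {
  # Dump every database in one consistent pass, then compress.
  mysqldump --all-databases --single-transaction -u root -p |
    gzip > "/root/$(backup_name)"
}
```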

Install Percona Yum repository:

     rpm -Uhv

Identify the installed MySQL RPMs:

     rpm -qa | grep "^mysql-5"

Remove the MySQL RPMs (use the actual RPM version found in step 2):

     rpm -e mysql-5.0.77-4.el5_6_6.i386 --nodeps
     rpm -e mysql --nodeps
     yum remove mysql-server


Install Percona Server.  The following command assumes Percona Server 5.5:

     yum install Percona-Server-client-55.x86_64 \
        Percona-Server-server-55.x86_64 \
        Percona-Server-shared-55.x86_64 \
        Percona-Server-shared-compat.x86_64


Filed in Database

Upstart on CentOS 6+ and redir

By jbayer - Last updated: Friday, December 30, 2011

This is a follow-up to the redir posting I made a while ago.

I had to install it on a CentOS 6.2 system.  Red Hat (and CentOS) now use Upstart instead of the old System V inittab to control jobs.  Without getting into the reasons: while Upstart gives greater flexibility, it is also more complicated.

To make redir start automatically, and restart it if it stops, I created the following small Upstart job.  To use it, just put it into the file /etc/init/redir.conf, and run the commands below to tell init to reload the configuration files.  Be sure to replace <localIPaddress> and smtp.<server> with your appropriate values.  This job assumes that redir is in /usr/local/sbin; change that to wherever you put the program.

description "SMTP redirection using redir"
start on runlevel [2345]
stop on runlevel [016]
respawn
exec /usr/local/sbin/redir --lport=587 --laddr=<localIPaddress> --caddr=smtp.<server> --cport=25


/sbin/initctl reload-configuration
/sbin/initctl start redir

Filed in Administration, email

Find large files

By jbayer - Last updated: Tuesday, December 27, 2011

There is no easy way to find large files on a system.  The attached file contains a command called lsbig, which does just that.  It also prints a row of stars for each entry; the length of the row shows the relative size of each folder or file, listed from smallest to largest.

  lsbig (611 bytes, 277 hits)

Usage is very simple.  Put the file in your path and make it executable.  It accepts two parameters:

lsbig [dir [size] ]
dir       Directory or file to work on
size      Minimum size of files to report.  Suffixes
          are allowed (k for kilo, m for mega, g for giga),
          in either lower or upper case

So for example:

lsbig       will do the current directory and report all files

lsbig /    will do the entire system and report all files

lsbig / 50g    will do the entire system and report all files greater than 50 gig in size.
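The attached script is not reproduced here, but the core idea can be approximated with standard GNU tools.  This is my own sketch of what lsbig does (without the suffix handling), not the script itself:

```shell
# Rough approximation of lsbig: list files under a directory that are at
# least a given size in bytes, sorted smallest to largest, with a bar of
# stars that grows with the (log of the) file size.
lsbig_sketch() {
  local dir="${1:-.}" min="${2:-0}"
  find "$dir" -type f -printf '%s %p\n' 2>/dev/null |
    awk -v min="$min" '$1 + 0 >= min + 0' |
    sort -n |
    awk '{ stars = ""
           for (i = 0; i < log($1 + 2); i++) stars = stars "*"
           printf "%12d %-20s %s\n", $1, stars, $2 }'
}
```

For example, `lsbig_sketch / 50000000` would report every file on the system larger than about 50 MB.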

Filed in Linux commands

Installing Zabbix on a CentOS/RedHat system (Updated)

By jbayer - Last updated: Friday, December 23, 2011

I needed to do several repeated installs of Zabbix, on both a CentOS 5.6 and a CentOS 6.1 system.  I also noticed that Zabbix had been updated, along with a couple of other version changes.  This version also allows you to do upgrades on an existing system.  However, the upgrade has only been tested on installations done by previous versions of this script.  Anything else and you’re on your own.

There were a bunch of other changes I made, the complete list is below:

1.4	If webtatic installed, don't reinstall it
        If the user doesn't want to update the OS in the beginning,
             ask if the OS should be updated after completion
        If checkinstall is installed, don't rebuild
        Ask for a builddir, create if necessary
        If the zabbix database already exists, ask if it should be reinitialized
        If upgrade or keeping install, don't install zabbix-config
        When doing upgrade at end, manually install libxml2-python due to
             dependency on older libxml2
        Updated libxml2 version to 2.7.8-1
        Updated Zabbix version to 1.8.9
        If a PHP accelerator is installed, don't ask to install another
        Removed install of rpm from libxlmsoft due to dependency problems
        Added exclude line to /etc/yum.conf when building on an x86_64 system
             to avoid loading i386/i686 packages
        Merged versions for the 5.* and 6.* into one script
        Fixed bug for 6.* script which prevented PHP from being installed
        Made entry of MySQL passwords hidden
        Added choice of accelerators:  apc, eaccelerator or xcache
        Added ability to do an upgrade of previous installs using this script
        For upgrade, added backup of database
        For upgrade, added backup of zabbix directories
        For x86_64 systems, added remove of all *86 rpms due to some dependency problems
        Detection of a 5.* vs a 6.* system is done two ways.  First, it looks
        for the number in /etc/issue.  After that, to confirm, it looks for
        the directory /etc/init, which only exists on the 6.* series.
        /etc/init is from the new upstart package.
        Added module to disable unneeded apache modules
        Added disable of iptables
        Added exclude line to the epel repo when on os version 6+, to exclude
             zabbix from being read
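For reference, the x86_64 exclude mentioned in the list is a single line added to the [main] section of /etc/yum.conf; the exact glob patterns below are my guess, not necessarily what the script writes:

```
[main]
exclude=*.i386 *.i586 *.i686
```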



Filed in Database, Open Source, Zabbix

Flush a memcached server

By jbayer - Last updated: Tuesday, December 13, 2011

We use memcached a lot here.  Occasionally we need to flush the entire cache, and would prefer not to do a restart.

The attached script does just that.  To use it: [server [port]]

server defaults to localhost, and port defaults to 11211

You will be prompted to confirm.  You must type YES in all capital letters to confirm. (864 bytes, 2,483 hits)
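The attached script itself is not shown in the post.  As an illustration, a minimal version of the same behavior might look like this; the use of nc(1) to speak the memcached text protocol is my assumption, as are the function and variable names:

```shell
# Minimal sketch of a memcached flush.  Server defaults to localhost and
# port to 11211, as described above.  Assumes nc(1) is installed.
flush_memcached() {
  local server="${1:-localhost}" port="${2:-11211}" answer
  read -r -p "Flush ALL keys on $server:$port?  Type YES to confirm: " answer
  if [ "$answer" != "YES" ]; then
      echo "Aborted."
      return 1
  fi
  # flush_all invalidates every item; quit closes the connection cleanly.
  printf 'flush_all\r\nquit\r\n' | nc "$server" "$port"
}
```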

Filed in Administration

Scripts and cron

By jbayer - Last updated: Tuesday, December 13, 2011

Today I’m going to talk about three things: two fairly common, and the third not so common.  Two are solved with the same script, while the third is a separate include.

The problems addressed are the following:

  1. Making sure that only one copy of a script can be active at a time
  2. Limiting the run time of a script
  3. Having a script run multiple times, usually within a minute

There are two files attached to this blog.  They are: (5.0 KiB, 486 hits)

  onlyRunOnce (794 bytes, 647 hits)

 Put both scripts into the same directory;  I usually put them in /usr/local/bin.


This contains a small bash function called (oddly enough) onlyRunOnce, which addresses the first problem of only allowing one instance of the script to be active at any time.  To use it, simply put the following 2 lines at the beginning of a bash script:

. ./onlyRunOnce
onlyRunOnce

This will check to make sure that the script isn’t currently running;  if it is, it just exits.  If you want a message displayed when it exits, just add a parameter to the function call as follows:

onlyRunOnce verbose

The value of the parameter doesn’t matter; the function only checks for its presence.  Currently it puts the lock files in /tmp; that can easily be changed by editing the line where LOCKDIR is defined.

While not necessary, it is nice to clean up when you are done.  So there is a second function called:


which simply removes the lock file.  Call this function at the end of your script.
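The attached file is not reproduced here, but the classic mechanism it describes can be sketched as follows.  The function bodies and the use of mkdir as the atomic lock operation are my own illustration, not the attached code:

```shell
# Sketch of a run-once lock pair.  mkdir is atomic: only one caller can
# create the lock directory, so everyone else bails out (optionally with
# a message, mirroring the "verbose" parameter described above).
LOCKDIR="${LOCKDIR:-/tmp/onlyRunOnce.$$.lock}"

onlyRunOnce() {
  if ! mkdir "$LOCKDIR" 2>/dev/null; then
      if [ -n "$1" ]; then echo "Already running, exiting."; fi
      return 1
  fi
}

runOnceCleanup() {
  # Remove the lock; call this at the end of your script.
  rmdir "$LOCKDIR" 2>/dev/null
}
```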

This is designed to run once a minute from cron.

It will run a specified program or script multiple times per minute, prevent a program or script from running too long, and will send an email if it either exits with a non-zero error or exceeds the timeout.

There are a number of options which can be set.  All of these options can be set either on the command line or by setting variables inside the script.

To use it, put it into a cronjob using the following syntax:

/usr/local/bin/   [-m maxWaitTime] [-t totalRunTime] [-c checksPerRun] [-l logfile] [-s script] [-e emailto] [-n notifyon] [-x maxEmails] [-1] [-d] [-?]


-m maxWaitTime
Maximum time to wait for a script to execute before killing it
-t totalRunTime
Maximum time for this script to run
-c checksPerRun
How many times to do the checks during the TOTALRUNTIME
-l logfile
File to log to
-s script
Program or script to run.  Be sure to specify the full path to be safe
-e emailto
Address(s) to send messages to.  If multiple addresses, put them inside quotes and comma separated.
-n notifyon
Email on either any non-zero error, or 137 (which results from a kill). Acceptable values are:  nonzero or 137
-x maxEmails
Maximum number of emails to send out during a run.
-1
Only allow a single instance to be active
-d
Debug mode.  Print all values and exit
-?
Print short usage and exit


The following is an example of how to use it inside a crontab file:

# This doesn't do anything special, it just runs the script using the default.
# The defaults, if not changed, will simply run the script:
* * * * * /usr/local/bin/ -s
# Run once every minute, waiting a max of 5 seconds, checking 3 times during the run, which
# effectively means that the testscript will be run once every 20 seconds:
* * * * * /usr/local/bin/ -s -m 5 -c 3 -t 60 -e "" -n 137 -1

# Run once every 5 minutes, waiting a max of 5 seconds, checking 10 times during the run,
# which effectively runs the testscript once every 30 seconds
*/5 * * * * /usr/local/bin/ -s -t 300 -c 10 -m 5
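The "multiple runs per minute" mechanic behind those crontab entries can be sketched as below; the function name is hypothetical and the real script has many more options:

```shell
# Cron fires at most once a minute; to run a job more often, a wrapper runs
# it several times with a sleep in between.  checks=3 and total=60 gives one
# run every 20 seconds, matching the second crontab example above.
runRepeatedly() {
  local target="$1" checks="$2" total="$3"
  local interval=$(( total / checks )) i
  for (( i = 0; i < checks; i++ )); do
      "$target"
      # Don't sleep after the final run; the next cron firing takes over.
      if (( i < checks - 1 )); then sleep "$interval"; fi
  done
}
```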

Filed in Administration, Bash

Monitor long MySql queries

By jbayer - Last updated: Tuesday, December 13, 2011

At work we have a very large MySQL database, over 250 gig in size, and it is extremely busy.  We need to know when a query is taking too long; the reason could be a code bug, or just a very long operation.  Regardless, these long queries can hang the server.

The attached script monitors the database, and if a query is found that is taking too long, it sends out an email.  The only change you need to make is to enter the user and password used to access the database, around lines 87-88.

Additionally, the script, while intended to run once a minute from cron, can do multiple checks of the database during a single run.  This gives the ability to monitor the database more often than once a minute.

 [ -t timelimit ] [-e emailAddress] [ -h host ] [ -p port ] [ -d ] [ -i interval ] [ -s sleep ]
      timelimit           Minimum time to consider a query as "long-running" for purposes of sending an email
      emailAddress        Email addresses to send messages to, can be repeated for multiple
      host                Mysql host
      port                Mysql port on the host
      interval            How long to sleep after sending a message; this
                          is in addition to the sleep value
      sleep               How long to sleep between checks

Here are some example crontab entries:

# Daily, every minute, 7am-11pm, mon-sat
* 7-23 * * 1-6  /usr/local/bin/ root@localhost.localdomain
* 7-23 * * 1-6  /usr/local/bin/ -t 600
# Evenings, every 20 minutes, 11pm-7am, mon-sat
*/20 23,0-7 * * 1-6     /usr/local/bin/ root@localhost.localdomain
*/20 23,0-7 * * 1-6     /usr/local/bin/ -t 600

  Long MySql query monitor (6.6 KiB, 542 hits)

Filed in Database

Monitoring ping times to a server with Zabbix

By jbayer - Last updated: Tuesday, November 15, 2011

We needed to monitor ping times from one server to another, neither being the Zabbix server.  Zabbix doesn’t have a way to do this; the only pings that Zabbix can do are from the Zabbix server to another server.

I wrote the attached script to solve this problem.  Install the script onto each client that you need to do this sort of monitoring, in the /etc/zabbix/externalscripts directory (or wherever you have configured them to be).  Make it executable, and add the following lines to the /etc/zabbix/zabbix_agentd.conf file:

       # For
       UserParameter=pingTimeToServer[*],/bin/bash /etc/zabbix/externalscripts/ $1 $2 $3 $4 $5

Restart the Zabbix client after you put this line in.

You can run the script by hand if you like; the options are:

server [option] [count] [maxage] [interval]

server Server to ping, either ip or dns
option blank for a single ping
“loss” to get the percentage of lost pings in a range of pings
“min” to get the minimum time in a range of pings
“avg” to get the average time
“max” to get the max time
count  How many times to ping when doing a range of pings
maxage Max age of tmpfile before doing pings again
interval Interval between pings during a range.  Must be
greater than 0.2 (only root can go less than 0.2)
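The attached script is not shown, but its heart, pulling a number out of ping output, can be sketched like this.  Parsing the Linux iputils summary lines with awk and sed is my own illustration:

```shell
# Extract the average round-trip time (ms) from Linux iputils ping output.
ping_avg() {
  # Summary line:  rtt min/avg/max/mdev = 0.035/0.042/0.048/0.005 ms
  awk -F'/' '/^rtt/ { print $5 }'
}

# Extract the packet-loss percentage for the "loss" option.
ping_loss() {
  # Summary line:  5 packets transmitted, 5 received, 0% packet loss, ...
  sed -n 's/.* \([0-9.]*\)% packet loss.*/\1/p'
}

# Typical use (not run here):  ping -c 5 -i 0.3 somehost | ping_avg
```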

To use as an item inside Zabbix, create an item (either in a template or a host) with the following options:

Type: Zabbix agent
Key: pingTimeToServer[server[,option[,count[,maxage[,interval]]]]]
Type of info: Numeric (float) (except for “loss”)
For loss option:  Numeric (unsigned)
Units: all options except loss:        ms
for loss option:                    %

  pingserver (1.1 KiB, 636 hits)

Filed in Administration, Bash, Zabbix

Easy remote virt-manager

By jbayer - Last updated: Monday, October 24, 2011

First, some background:

We have a number of large hosts which run KVM for virtual machines.  All of these machines are remote, as in more than 1000 miles away.  The host systems all run Red Hat 5.6.  We access our systems remotely using PuTTY.

The problem is in how to run virt-manager remotely, easily.



First, install PuTTY; it can be downloaded from the PuTTY site.

Second, install Xming locally; it can be downloaded from the Xming site.  You can either use the included plink.exe or install PuTTY.  plink.exe is a console-only program, so we install PuTTY on all our systems, and install Xming on those that need X forwarding.

Finally, when you start PuTTY, before you open the session, go into the Connection->SSH->X11 options and enable X11 forwarding.

In order for virt-manager to be able to access the libvirt management daemon, it has to have root permissions.  We do this using sudo, with the following alias to both set up the xauth permissions and run virt-manager:

alias vmanager='xauth list | while read line; do sudo -i xauth add $line; done; sudo -i virt-manager'

Once this alias is set, all you need to do is type:

vmanager

and virt-manager will start.  If necessary, sudo will ask you for your password.  It will then set up the xauth list and start virt-manager.

Filed in Administration, Virtualization

Easy Automated Snapshot-Style Backups with Linux and Rsync

By jbayer - Last updated: Saturday, October 22, 2011

The inspiration for this is the original post at:

The basic idea is to create a snapshot of a directory structure using rsync.  Multiple snapshots can be created without taking up extra space by using file links for files which haven’t changed.
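The hard-link trick can be sketched with rsync's --link-dest option.  This is my own minimal illustration of the idea, not the attached script; the directory layout and timestamped snapshot names are assumptions:

```shell
# Each new snapshot rsyncs against the source but hard-links any file
# unchanged since the previous snapshot, so identical files share disk
# space across snapshots.
snapshot() {
  local src="$1" dest_root="$2"
  local stamp new prev
  stamp=$(date +%Y%m%d-%H%M%S)
  new="$dest_root/$stamp"
  # The most recent existing snapshot, if any, becomes the link target.
  prev=$(ls -1d "$dest_root"/*/ 2>/dev/null | sort | tail -1)
  if [ -n "$prev" ]; then
      rsync -a --link-dest="$prev" "$src"/ "$new"/
  else
      rsync -a "$src"/ "$new"/
  fi
}
```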

I was dissatisfied with the scripts there.  The ideas were good, but I wanted more flexibility.  After examining all the contributions, I settled on the script written by Elio Pizzottelli.

I took his script and extended it with more options and abilities.  The attached script is the result. (14.0 KiB, 497 hits)

Basic documentation follows:

 source_dir [-dev device ] [-mount mount_point ] [ (-d | --dest) dest_dir ]
 [ --sourcedir source_dir ] [--nomount ] [ -y | --yes ] [ -q | --quiet ]
 [ ( -N | --name ) backup_name ] [ ( -l | --number ) backup_level ]
 [ ( -f | --excludefile ) exclude_file ] [ -h | --help ] [ -x | --onefilesystem ]


 device:      a partition device
 mount_point: a valid mount point
 source_dir:   the directory to backup

 dest_dir:     an optional destination dir, by default unset
                 (starting from the mount_point)
 backup_name:  a name for the backup, default BAKUP
 backup_level: the number of backups to preserve, default 5, min 2
 exclude_file: a file for rsync --exclude-from, by default unset


  /usr/local/bin/  -dev /dev/hdd15 -mount /root/backup /home/
  /usr/local/bin/  -dev /dev/hdd15 -mount /root/backup /home/ -d home
  /usr/local/bin/  -dev /dev/hdd15 -mount /root/backup /home/ -d home -N daily
  /usr/local/bin/  -dev /dev/hdd15 -mount /root/backup /home/ -d home -N daily -m 10;
  /usr/local/bin/  -dev /dev/hdd15 -mount /root/backup /home/ -d home -N daily -m 10 \
 -f /root/make_snapshot_exclude

Example Crontab-lines:

0 8-18/2 * * *  /usr/local/bin/  /dev/hdd15 /root/backup /home/ -d home -N hourly -m 10;
0 12 * * *  /usr/local/bin/  /dev/hdd15 /root/backup /home/ -d home -N daily -m 10;
0 14 * * *  /usr/local/bin/  0 /dev/hdd15 /root/backup /www/ -d /web/www -N daily -m 5;
0 0 * * *  /usr/local/bin/   --srcdir /home --dest /mnt/backup/home --name Daily --number 5 --nomount -y -q

Filed in Administration