Archive for the 'Uncategorized' Category

How to Change the WordPress Network Domain (Multisite)

Thursday, July 26th, 2012

Recently I spent hours banging my head against my desk while trying to change the WordPress network domain name of a site I was working on. What I observed was that I could change the domain of a site as a WordPress network super admin, but upon doing so I could no longer log in to the individual site's dashboard. After much trial and error I stumbled upon a solution that actually works quite well.

Surprisingly, the solution is not to change the domain name at all. Instead, simply enable the "WordPress MU Domain Mapping" plugin and map the desired domain name to the existing site.

How to Change the WordPress Network Domain Name For a Site:

1. As a WordPress "network administrator", make sure that the "WordPress MU Domain Mapping" plugin is installed and activated.
2. If you've already tried changing the domain and site name of any individual sites, first change them back and ensure that you've restored full functionality.
3. Log in to the dashboard of the individual site you wish to change the domain of.
4. Select Tools > Domain Mapping.
5. Input the domain you wish to use for this site, and select "Primary domain for this blog".
6. Verify that your web server and DNS are configured to reflect the new domain.

And that's it! Your site should now be responding on the new domain name.
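
If you want to double-check step 6, a couple of quick commands from any workstation will confirm that DNS and the web server agree on the new name. This is only a generic sanity check; substitute your real domain for newdomain.example.com:

# Confirm DNS resolves the new name to your web server
dig +short newdomain.example.com

# Confirm the web server answers for the new name (expect a 200 or a redirect into WordPress)
curl -I http://newdomain.example.com/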

WordPress multisite network documentation: http://codex.wordpress.org/Create_A_Network

Vyatta Config Sync Howto and Overview

Tuesday, July 24th, 2012

Here is a quick rundown of how to set up Vyatta config sync on VSE (Vyatta Subscription Edition).

Note: Vyatta subscription edition is required for this functionality.

Vyatta config sync allows you to select a subset of your router config, then set one of your routers as the master for that subset. In this example we'll be syncing our NAT config section from vyatta-01 (master) to vyatta-02 (slave).

Preparing the slave system:

First, we need to enable the web server on the slave system as this is the transport used by config-sync.

vyatta@vyatta-02# configure
vyatta@vyatta-02# set service https listen-address <IP_OF_VYATTA-02>
vyatta@vyatta-02# commit
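
Before moving on, it's worth confirming that the slave's web service is actually reachable from the master; otherwise config-sync will have nothing to talk to. A quick check with curl (run from the master's shell, or from any host that can reach the slave) should return an HTTP status code rather than a connection error:

curl -k -s -o /dev/null -w "%{http_code}\n" https://<IP_OF_VYATTA-02>/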

Then, I recommend creating a user account dedicated to the purpose of config-sync.

vyatta@vyatta-02# configure
vyatta@vyatta-02# set system login user config-sync authentication plaintext-password secret
vyatta@vyatta-02# commit

Configuring Vyatta config sync on the master system:

Now we tell vyatta-01 about vyatta-02:

vyatta@vyatta-01# configure
vyatta@vyatta-01# set system config-sync remote-router <IP_OF_VYATTA-02>
vyatta@vyatta-01# set system config-sync remote-router <IP_OF_VYATTA-02> username config-sync
vyatta@vyatta-01# set system config-sync remote-router <IP_OF_VYATTA-02> password secret
vyatta@vyatta-01# commit

Then, we specify what to sync from vyatta-01 to vyatta-02:

vyatta@vyatta-01# configure
vyatta@vyatta-01# set system config-sync sync-map default rule 0 action include
vyatta@vyatta-01# set system config-sync sync-map default rule 0 location nat
vyatta@vyatta-01# commit

And finally we go back and assign the sync-map to the remote-router entry.

vyatta@vyatta-01# configure
vyatta@vyatta-01# set system config-sync remote-router <IP_OF_VYATTA-02> sync-map default
vyatta@vyatta-01# commit

That's it. You should see vyatta-01 attempt to sync to vyatta-02 upon subsequent commits, and anything under nat should appear on vyatta-02.

For quick reference, here's a config dump of the relevant sections:

Vyatta-01:

vyatta@vyatta-01# show system config-sync 
 remote-router <IP_OF_VYATTA-02> {
     password secret
     sync-map default
     username config-sync
 }
 sync-map default {
     rule 0 {
         action include
         location nat
     }
 }

Vyatta-02:

vyatta@vyatta-02# show service https
 listen-address <IP_OF_VYATTA-02>

vyatta@vyatta-02# show system login user config-sync
 authentication {
     encrypted-password $19aboeuo/20u230b+8239bulkj8271J.
     plaintext-password ""
 }

Recursive Inotify Scripting With Lsyncd

Thursday, July 12th, 2012

Lsyncd is a tool which was built to keep two locations in sync with each other efficiently, by only sending updates when file changes are detected with inotify events. However, lsyncd is actually quite extensible in that it supports scripting for each of its various types of inotify events. This allows us to perform customized tasks when file changes are detected.

Here are a few examples:

Enforce File Permissions Recursively With Inotify

This lsyncd config will ensure that files changed, moved, or created under the defined directory have mode 777.

#/etc/lsyncd.d/chown.lsyncd

settings = {
	statusFile = "/tmp/chown.lsyncd.stat",
	statusIntervall = 1,
	logfacility = "daemon",
}

chown = {
    delay = 5,
    maxProcesses = 5,
    onCreate  = "chmod 777 ^sourcePathname",
    onModify  = "chmod 777 ^sourcePathname",
    onMove    = "chmod 777 ^d.targetPathname",
    onStartup = "sysctl fs.inotify.max_user_watches=1048576; sysctl fs.inotify.max_queued_events=2097152; chmod -R 777 ^source"
}

sync { chown,
         source="/path/to/files",
         target="/dev/null",
}

To start lsyncd run this:

lsyncd -pidfile /var/run/chown.lsyncd /etc/lsyncd.d/chown.lsyncd

This will result in the defined "source" directory being monitored for file changes, and additionally, when lsyncd is started it will recursively chmod the "source" directory to ensure that any potentially missed files have the correct permissions. You may notice the sysctl commands that are being run as the "onStartup" command. This is because my watched directory is quite large, and requires adjustments to the default inotify sysctl values.
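
A quick way to convince yourself the watcher is working is to drop a file with restrictive permissions into the watched tree and confirm the mode gets reset after the configured 5 second delay. The paths below assume the example config above:

touch /path/to/files/lsyncd-test
chmod 600 /path/to/files/lsyncd-test
sleep 10
stat -c '%a %n' /path/to/files/lsyncd-test   # should now report 777
cat /tmp/chown.lsyncd.stat                   # lsyncd status file, handy for debugging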

Inotify Backup - Backup Files When Changed

The config below will watch your home directory for file changes, and after detecting a changed file will immediately copy that file to the backup destination using rsync while appending a date-stamp to the backup file. To accomplish remote backups you could specify a remote rsync server, or use rsync+ssh with pre-shared keys.

#/etc/lsyncd.d/backup.lsyncd
settings = {
	statusFile = "/tmp/backup.lsyncd.stat",
	statusIntervall = 1,
	logfacility = "daemon",
}

backup = {
    delay = 5,
    maxProcesses = 5,
    onCreate = "rsync -a --backup --suffix=-`date +%F-%T` ^sourcePathname ^target",
    onModify = "rsync -a --backup --suffix=-`date +%F-%T` ^sourcePathname ^target",
}

sync { backup,
       source="/home",
       target="/var/backups/"
}

To start lsyncd run this:

lsyncd -pidfile /var/run/backup.lsyncd /etc/lsyncd.d/backup.lsyncd
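
For the remote variant mentioned above, the rsync command inside onCreate/onModify would simply grow an ssh transport and a remote target. As a rough sketch (the host name and key path here are hypothetical), the command you would ask lsyncd to run looks like this:

rsync -a --backup --suffix=-`date +%F-%T` -e "ssh -i /root/.ssh/backup_key" ^sourcePathname backup@backuphost.example.com:/var/backups/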

I chose lsyncd over the various alternatives out there because to me it made the most sense. I liked that it is built to run as a daemon, and does the vast majority of the heavy lifting for me. Watching a directory recursively was a must have for me, and it requires a minimal amount of scripting for most uses.

Logging IPMI To Syslog And Generating Alerts With OpenIPMI, Ipmievd, Syslog-ng and Swatch

Thursday, July 12th, 2012

Monitoring the health of numerous servers can be a challenging and time consuming task. Luckily, modern servers support a software suite which allows administrators to monitor the health of the hardware itself. This includes temperature monitoring, power supply status, memory and ECC status, fan RPM, and many other attributes of the server hardware. The toolkit that I'm talking about is OpenIPMI, and it's available in just about every Linux distribution. For the purposes of this article I'm going to focus on RHEL5, but it should be straightforward to adapt these instructions to your distro.

Installing OpenIPMI

OpenIPMI is available as an rpm, and can be installed with yum like so:

yum install OpenIPMI

Once installed you'll want to start the service, which in turn will load the necessary kernel modules.

/etc/init.d/ipmi start

And we'll also ensure that it starts up on boot.

chkconfig ipmi on

This allows us to use the ipmitool command on the local machine, among other things. Let's list the system event log to be sure that it's working.

ipmitool sel elist

Hopefully your SEL is clear, but you may see hardware issues logged here that you weren't aware of. On the bright side, now you can fix them before they crash your system!
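
While you have ipmitool handy, a couple of other standard subcommands are useful for a first look at the hardware (output varies by BMC):

# Current sensor readings and thresholds
ipmitool sensor list

# Clear the system event log once you've dealt with anything in it
ipmitool sel clear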

Logging IPMI Events To Syslog

Ipmievd is a utility which can run as a daemon and will monitor your SEL for events, sending them to syslog when they occur. On RHEL5 it is available in the OpenIPMI-tools package.

Ensure that OpenIPMI-tools is installed.

yum install OpenIPMI-tools

Before starting the daemon I needed to set the mode to SEL, as the default of "open" did not work on my servers. YMMV.

#/etc/sysconfig/ipmievd
# ipmievd configuration scripts

# Command line options of ipmievd, see man ipmievd for details
IPMIEVD_OPTIONS="sel"

Now we start the service, and ensure that it starts on boot. (note: ipmievd requires that the ipmi service be running)

/etc/init.d/ipmievd start

chkconfig ipmievd on

You should now see SEL events logged in syslog, by default with the local4 facility.
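
If you want to confirm where local4 messages land before a real hardware event occurs, you can fake one with logger and then look for it in your syslog output (the destination file depends on your syslog configuration; /var/log/messages is the usual default on RHEL):

logger -p local4.info -t ipmievd "test event from logger"
grep "test event from logger" /var/log/messages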

Generating Alerts When IPMI Events Happen

To generate an email alert when an IPMI event is logged I'm using swatch. I run the swatch process on my central log server so that I can monitor and alert on all my logs centrally, however this could be run on individual servers as well.

Swatch rpms are available for RHEL5 via EPEL (Extra Packages for Enterprise Linux).

First, we install swatch.

yum install swatch

Then we define the regular expressions we will generate alerts from when they are matched in the logs. In my case I'm using /etc/swatchrc, however you may use any file you wish. Swatch defaults to ~/.swatchrc.

Swatch swatchrc configuration example:

#/etc/swatchrc

# swatchrc - define regular expressions and generate alerts when matches are found in logs
#	     daemon is started from /etc/cron.d/swatch
#

### IPMI EVENTS ###
#

# Ignore common IPMI startup output
#
ignore /Reading\ sensors/
ignore /Waiting\ for\ events/

# Match ipmievd syslog entries like the following:
# Jul 12 09:36:39 server-01 ipmievd: foo bar baz
#
watchfor /(\S*)\ ([0-9]*)\ ([0-9]{2}:[0-9]{2}:[0-9]{2})\ (\S*)\ (ipmievd:)\ (.*)/
	 exec=echo $1 $2 $3 $4 $5 $6 | nail -r "alert@example.com" -s "IPMI Event on $4" sysadmin@example.com

Note: I am using the nail command in order to specify a from and subject header in the email itself. Nail is available from the RPMForge yum repositories, or could be substituted with your favorite mail command.

Now we're ready to start swatch:

swatch -c /etc/swatchrc -p 'tail -f -n 0 /var/log/*log'
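
Before trusting it with real events, you can feed swatch a synthetic line and make sure the alert email actually fires. logger with the ipmievd tag produces a syslog line in the same format the watchfor regex expects (adjust this if your syslog writes to a file not matched by /var/log/*log):

logger -t ipmievd "synthetic test event, please ignore"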

To ensure that the process is running I made use of the --pid-file and --daemon options, and wrote a cron job to test if the pid is running which will restart swatch if not.

#/etc/cron.d/swatch

# make sure that swatch is running every minute
*/1 * * * * root pgrep -F /var/run/swatch.pid > /dev/null 2>&1 || swatch -c /etc/swatchrc --pid-file=/var/run/swatch.pid --daemon -p 'tail -f -n 0 /var/log/*log'

# restart swatch every hour to ensure that new log files are monitored
0 */1 * * * root kill `cat /var/run/swatch.pid` > /dev/null 2>&1

Once this is complete you should begin seeing emails that look like this when IPMI events happen:

From: alert@example.com
To: sysadmin@example.com
Subject: IPMI Event on server-01

Jul 12 11:54:38 server-01 ipmievd: SEL overflow is cleared

Auto Mounting a Single Directory at the Root of the Filesystem with Autofs

Thursday, June 14th, 2012

Recently I wanted to ensure that /pub on my server automatically mounted an NFS export using autofs. As it turns out this is very easy to do, but it took some research to figure out how autofs handles this use case.

Autofs refers to a single mount point as a "direct map". Direct maps look something like this:

#/etc/auto.master

/-	/etc/auto.direct

#/etc/auto.direct

/pub        -fstype=nfs             nfsserver.example.com:/pub/

After an autofs restart, nfsserver.example.com:/pub/ is automatically mounted on /pub when accessed.
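
Because direct maps only mount on access, /pub will look empty until something touches it. A quick check, assuming the map above:

ls /pub                  # triggers the automount
mount | grep /pub        # should now show nfsserver.example.com:/pub mounted on /pub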

Puppet Foreach / For Loop Workaround

Tuesday, June 12th, 2012

Here is a quick workaround to effectively write a foreach loop in puppet. There isn't a native foreach function that I'm aware of, however it is possible to obtain similar functionality by using a define statement.

Let's create a simple example to loop over multiple users, and perform an action on each user. In this case we'll print "Found user " for each user.

$users = [ "user1", "user2" ]

define print_users {
        $user = $name
        notify { "Found user $user":; }
}

print_users { $users:; }

On the agent we should see output that looks like this:

notice: /Stage[main]/Test/Test::Print_users[user1]/Notify[Found user user1]/message: current_value absent, should be Found user user1 (noop)
notice: Test::Print_users[user1]: Would have triggered 'refresh' from 1 events
notice: /Stage[main]/Test/Test::Print_users[user2]/Notify[Found user user2]/message: current_value absent, should be Found user user2 (noop)
notice: Test::Print_users[user2]: Would have triggered 'refresh' from 1 events
notice: Class[Test]: Would have triggered 'refresh' from 2 events

So, as you can see, this effectively allows us to loop through the elements in an array and perform actions in a way that is very similar to a foreach loop.

Thanks to https://blog.kumina.nl/tag/puppet-tips-and-tricks/ for documenting this tip.

How to Hot Add/Remove VCPUs from a Xen Domain

Wednesday, June 6th, 2012

Overview

Xen supports vcpu hot add and remove, which allows you to add and remove CPUs from a running system without downtime.

How it works

Inside your domU config, you'll need to set the "maxvcpus" setting to the maximum number of VCPUs that this domU will be allowed to have. If you don't define this, it defaults to the value of "vcpus", so you'll always be able to hot remove, but wouldn't be able to hot add anything more than what the domain was booted with.

#vcpus - number of VCPUs to boot the system with.
vcpus = 2;

#maxvcpus - maximum number of VCPUs (total) that can be hot added later.
maxvcpus = 8;

VCPU Hot Add Example

Let's say I have a virtual machine named foo which I've given 1 VCPU. One day I notice that the system is struggling to keep up with a CPU heavy load. So, I want to add another VCPU to the VM, but I can't afford any downtime. No problem (as long as I configured the maxvcpus value above).

Here's the system with 1 VCPU:

# xm list
Name                                      ID Mem(MiB) VCPUs State   Time(s)
foo                                      11  4096     1     -b----     23.7

To resize we use the 'xm vcpu-set' command. For example, to resize our 1 VCPU domain to 2 VCPUs, execute the following.

# xm vcpu-set foo 2

# xm list
Name                                      ID Mem(MiB) VCPUs State   Time(s)
foo                                      11  4096     2     -b----     31.6

VCPU Hot Remove Example

Similarly, we can hot remove VCPUs from a domain using the 'xm vcpu-set' command.

Here's the system with 2 VCPUs:

# xm list
Name                                      ID Mem(MiB) VCPUs State   Time(s)
foo                                      11  4096     2     -b----     52.5

To hot remove VCPUs simply execute 'xm vcpu-set', specifying a lower number than what is currently assigned.

# xm vcpu-set foo 1

# xm list
Name                                      ID Mem(MiB) VCPUs State   Time(s)
foo                                      11  4096     1     -b----     56.7

And that's it. As you can see it's very straightforward, and as long as you've taken the time to set your 'maxvcpus' setting in the domU config before booting the machine you'll be able to adjust your VCPU assignments as load varies.
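
One related command worth knowing is 'xm vcpu-list', which shows each VCPU of a domain and the physical CPU it is currently running on. It's a handy way to confirm what a domain actually has after a resize:

# xm vcpu-list foo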

How to Fix Aide "lgetfilecon_raw failed for / : No data available" errors

Wednesday, June 6th, 2012

Recently I observed that aide was generating extremely large reports. Upon closer inspection I noticed that the logs were full of lgetfilecon_raw errors, much like the following:

lgetfilecon_raw failed for /opt:No data available
lgetfilecon_raw failed for /etc/exports:No data available
lgetfilecon_raw failed for /etc/crontab:No data available
lgetfilecon_raw failed for /etc/bashrc:No data available
lgetfilecon_raw failed for /etc/group:No data available
lgetfilecon_raw failed for /etc/sudoers:No data available
lgetfilecon_raw failed for /etc/gshadow:No data available
lgetfilecon_raw failed for /etc/aliases:No data available
lgetfilecon_raw failed for /etc/sysctl.conf:No data available

As it turns out, the stock aide config that was in place was configured to check SELinux contexts, and because we had SELinux disabled aide was unable to read them. The fix was to redefine the attribute groups without the selinux attribute, so that they no longer inherit it from the defaults. Redefining the following items in /etc/aide.conf was enough to fix the issue for me:

#/etc/aide.conf
ALLXTRAHASHES = sha1+rmd160+sha256+sha512+tiger
EVERYTHING = p+i+n+u+g+s+m+c+acl+xattrs+md5+ALLXTRAHASHES
NORMAL = p+i+n+u+g+s+m+c+acl+xattrs+md5+rmd160+sha256
DIR = p+i+n+u+g+acl+xattrs
PERMS = p+i+u+g+acl
LOG = p+u+g+i+n+S+acl+xattrs
LSPP = p+i+n+u+g+s+m+c+acl+xattrs+md5+sha256
DATAONLY = p+n+u+g+s+acl+xattrs+md5+sha256+rmd160+tiger

After setting that, I was able to re-initialize the aide database and subsequent checks ran without error.
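
For reference, re-initializing and re-checking looks like this on a stock RHEL5 install; the database paths are the defaults from aide.conf, so adjust them if yours differ:

aide --init
cp /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz
aide --check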

Hope that helps!

References: http://beginlinux.com/server/centos/using-advanced-intrusion-detection-environment

iSCSI Multi LUN Howto

Tuesday, May 15th, 2012

Recently I needed to connect a server to multiple iSCSI targets, with each target requiring a unique username and password. From what I can tell this isn't a well documented feature of open-iscsi, nor is it handled cleanly in the config file. All that I could find was an email thread that briefly outlined how to get it working.

Here's how I implemented this on a RHEL 5 host with an EqualLogic backend.

How it works

From what I gather, in order to make this work you first need to add multiple new discovery records, each time changing the contents of /etc/iscsi/iscsid.conf to reflect the different username/password combinations. This must be done in a particular way to avoid the default discovery behavior, which removes records that are no longer present (for your current user). The command used to perform discovery is as follows; the '-o new' argument prevents removal of old records.

  # edit /etc/iscsi/iscsid.conf to change user/pass for current target, then
  iscsiadm -m discovery -t st -p portal_ip_address -o new --discover

Once discovered, the username and password need to be set individually for that target.

  iscsiadm -m node --targetname TARGET -p PORTAL -o update -n node.session.auth.username -v USERNAME
  iscsiadm -m node --targetname TARGET -p PORTAL -o update -n node.session.auth.password -v PASSWORD

At this point you should be able to log in/out of that target.

  iscsiadm -m node --targetname TARGET -p PORTAL --login

Now rinse, lather, and repeat for the remainder of your target/portal/username/password combinations.
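
After each login it's worth verifying which sessions are actually established before moving on to the next portal. iscsiadm can list them:

iscsiadm -m session        # one line per established session
iscsiadm -m session -P 1   # more detail, including target and portal information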

Scripting the process

This script takes an array of portal IPs, usernames, and passwords, then loops through them attempting to log in to all available iSCSI targets. Once this has been performed, the connections should be remembered across iSCSI logouts and reboots.

This is currently working well for me on RHEL5, YMMV.

#!/bin/bash
# iscsi_multi_setup.sh - Set up connections for multiple iSCSI targets having 
#                        unique portal/user/password combinations.
#			
#			 This script will attempt to log in to all available targets
#			 on the supplied portal addresses.
#
#                        2012 Keith Herron <[email protected]>

  PORTAL[0]=portal_ip_1
USERNAME[0]=username1
PASSWORD[0]=password1

  PORTAL[1]=portal_ip_2
USERNAME[1]=username2
PASSWORD[1]=password2

  PORTAL[2]=portal_ip_3
USERNAME[2]=username3
PASSWORD[2]=password3

i=0
while [ $i -lt ${#PORTAL[@]} ]; do

  echo Discovering on ${PORTAL[$i]} as user ${USERNAME[$i]}:

  # Set discovery user/pass in /etc/iscsi/iscsid.conf
  sed -i "s/^#\?discovery.sendtargets.auth.username.*/discovery.sendtargets.auth.username = ${USERNAME[$i]}/g" /etc/iscsi/iscsid.conf
  sed -i "s/^#\?discovery.sendtargets.auth.password.*/discovery.sendtargets.auth.password = ${PASSWORD[$i]}/g" /etc/iscsi/iscsid.conf
  sleep 1

  # Perform discovery (note -o new argument, this is important)
  TARGET[$i]=`iscsiadm -m discovery -t st -p ${PORTAL[$i]} -o new --discover | awk '{ print $2}'`

  echo Found IQN: ${TARGET[$i]}

  # Set username/password individually for each target, and login
  iscsiadm -m node --targetname ${TARGET[$i]} -p ${PORTAL[$i]} -o update -n node.session.auth.username -v ${USERNAME[$i]}
  iscsiadm -m node --targetname ${TARGET[$i]} -p ${PORTAL[$i]} -o update -n node.session.auth.password -v ${PASSWORD[$i]}
  iscsiadm -m node --targetname ${TARGET[$i]} -p ${PORTAL[$i]} --login

  # Log out of target
  iscsiadm -m node --targetname ${TARGET[$i]} --logout

 i=$((i+1))
done

# Set discovery user/password to nothing in hopes of preventing manual discovery
sed -i "s/^discovery.sendtargets.auth.username.*/discovery.sendtargets.auth.username = /g" /etc/iscsi/iscsid.conf
sed -i "s/^discovery.sendtargets.auth.password.*/discovery.sendtargets.auth.password = /g" /etc/iscsi/iscsid.conf
Ā 
# Log in to all nodes now
echo Logging in to all nodes
iscsiadm -m node --login

Hope that helps, please leave a comment if you found this helpful. Or if you were able to adapt it to work on a different distribution.

OOM Killer - How To Create OOM Exclusions in Linux

Wednesday, October 19th, 2011

When a Linux machine runs low on memory the kernel will begin killing processes to free up RAM. This is called the OOM killer; OOM stands for out of memory. Unfortunately, the Linux kernel OOM killer often kills important processes. On numerous occasions my system has become completely hosed once the OOM killer rears its ugly head. Luckily, you can tell the kernel to never OOM kill certain processes by supplying a list of pid numbers. If you're running a system with high memory pressure, and want to ensure that important processes (sshd for instance) are never killed, these options may be of use to you.

Telling the OOM killer to ignore a process

Disabling the OOM killer is done on a process by process basis, so you'll need to know the PID of the running process that you want to protect. This is far from ideal, as process IDs can change frequently, but we can script around it.

As documented by http://linux-mm.org/OOM_Killer: "Any particular process leader may be immunized against the oom killer if the value of its /proc/$pid/oom_adj is set to the constant OOM_DISABLE (currently defined as -17)."

This means we can disable OOM killer on an individual process, if we know its PID, using the command below:

# OOM_DISABLE on $PID
echo -17 > /proc/$PID/oom_adj

Using pgrep we can run this knowing only the name of the process. For example, let's ensure that the ssh listener doesn't get OOM killed:

pgrep -f "/usr/sbin/sshd" | while read PID; do echo -17 > /proc/$PID/oom_adj; done

Here we used pgrep to search for the full command line (-f) matching "/usr/sbin/sshd" and then echo -17 into the procfs entry for each matching pid.
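
You can read the value back out of procfs to confirm it took effect; every matching PID should report -17:

pgrep -f "/usr/sbin/sshd" | while read PID; do echo "$PID: `cat /proc/$PID/oom_adj`"; done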

In order to automate this, you could run a cron regularly to update the oom_adj entry. This is a simple way to ensure that sshd is excluded from OOM killer after restarting the daemon or the server.

#/etc/cron.d/oom_disable
*/1 * * * * root pgrep -f "/usr/sbin/sshd" | while read PID; do echo -17 > /proc/$PID/oom_adj; done

The above job will run every minute, updating the oom_adj of the current process matching /usr/sbin/sshd. Of course this could be extended to include any other processes you wish to exclude from OOM killer.

I recommend disabling the OOM killer at the individual process level rather than turning it off system-wide. Disabling the OOM killer altogether will cause your system to kernel panic under heavy memory pressure. By excluding critical administrative processes you should at least be able to log in to troubleshoot high memory use.