PHP Frameworks

I've finally become sick of writing my own frameworks for websites. I keep reinventing the wheel, and while I think I do some novel stuff, I'm never going to match the breadth of functionality available from frameworks developed by dozens or hundreds of people. Time is money and it's time to save (make) some money.

I've been looking at two frameworks:

Zend Framework

Pros:

  • PHP5 only.
  • Modular enough to easily use within existing codebases.

Cons:

  • Terrible documentation. I could not find a single example of even how to get started: I had to work it out from first principles.
  • The documentation leaves you reading source code to try to work out how to actually pass the parameters mentioned in the docs to methods.

Code Igniter

Pros:

  • Easy to build complex sites from nothing and retain good practices.

Cons:

  • PHP4 - no excuses now, PHP5 is essential. While 5 is far from ideal, PHP4's object model is unacceptable. Of course, CI being written in PHP4 doesn't mean that I have to write my code in PHP4 - but the very mention of PHP4 put me off enough to spend a day fighting ZendFramework before coming back for a second look.

Okay, there's not much here, but as I work through some projects I'll find things to say.

That said, anyone want to hire me for a project? Don't let this Drupal site's no-effort-spared "default theme and logo" look fool you: I am a real-life web developer - I actually like writing business applications! Seriously.

Ignorance: pointing at aeroplanes

The issue in this news story is a structure being put up in a village without planning permission. A fairly straightforward legal issue.

But, it's a mobile phone mast. So naturally some ignorant fool gets quoted as saying:

"The mast is close to a pub and local residents live on the opposite side of the road. People have fears about radiation from the mast, and the effect particularly on children."

Somehow I picture the first few scenes of 2001: A Space Odyssey.

Well, what do I know? Maybe the mast is a gamma emitter. Maybe this village is 1km below ground, safe from many other forms of radiation.

Maybe I can start a petition to stop the radiation being emitted by the local commercial radio station? I find its crass commercialism and never-ending generic pop music nauseating. Think of the children!

Dell Poweredge 1950 with Intel Quad Port Ethernet

I've spent the last couple of days configuring a batch of new Dell servers, 1950 rackmounts. A couple of them have Intel four port cards fitted (PRO/1000 Quad Port) and need to run Linux (the rest, FreeBSD).

The servers were bought from Dell on the understanding that they'd be immediately compatible with a bog-standard Linux install.

Well, I've been bitten by that before. Do NOT let salespeople spec a computer. Do NOT trust Windows users to spec a Linux machine.

Before we get to the add-in card, let's look at the internal pair of ethernet ports. With FreeBSD these were fine - FreeBSD's driver support is surprisingly robust. Something for Linux developers to take a look at perhaps?

Under Linux I had a significant problem: they did not work.
To cut a long and arduous tale short (this was a WHOLE day wasted):

1. The Dell machines ship with TOE (TCP offload engine) enabled. No good. Firstly, it doesn't work with Linux. Secondly, when it does work, it's with binary drivers. Thirdly, when your machine is going to be a firewall, you don't want some feature-sparse hardware taking over from the highly tuned Linux firewall stack you're using.

To disable TOE is fairly simple: pop the top off the machine, and somewhere around where the power plugs into the motherboard is a little dongle, a little like an ethernet jack. Remove it. The hardware is built into the motherboard, but somehow we seem to have been sold the dongles - I assume they're like licenses.

2. The ports are reversed! eth0 attaches to port 2, eth1 to port 1. I can see this being a support issue in years to come.
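If the swapped ordering matters to your firewall scripts, one defensive option is to pin the interface names to the MAC addresses with a udev rule (a sketch - the rules file path varies by distribution, and the MACs below are placeholders for your own ports):

```
# /etc/udev/rules.d/70-persistent-net.rules (path varies by distribution)
SUBSYSTEM=="net", ATTR{address}=="00:15:17:xx:xx:01", NAME="eth0"
SUBSYSTEM=="net", ATTR{address}=="00:15:17:xx:xx:02", NAME="eth1"
```

With the names fixed to the hardware, a motherboard swap or driver reorder won't silently flip your inside and outside interfaces.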

Onwards to the add-in card: again, it did not work. Thank you Dell, I suppose zero out of two isn't bad really. It could have exploded when I plugged it in?

There is no support in the Linux kernel for this new Intel card. Intel's own site takes you to the e1000 driver modules. Well, I thought - maybe that's because they're an updated version compared to the e1000 in the latest 2.6.24 kernel patches? No. Still did not work.

Eventually I found my way to a newer version of Intel's own e1000 driver, .5 versus .4! But I also found that there is a new driver for the newest gigabit hardware: igb. Aha, maybe this is what I need? Yes - one rapid download and 'make install' later, the ports fired into life. igb is the driver to try if you have a new Intel quad port card.

Hopefully this will make its way into the kernel; otherwise, supporting the machine through kernel upgrades is bound to fail eventually when someone forgets to put the drivers back on. Thankfully this is part of a high-availability firewall using Heartbeat (and lots of bash/ifconfig/ip/tc/iptables scripting), so an identical second machine will come online within seconds when the first fails.

Zimbra ClamAV failure

I run Zimbra on a virtual machine. It's not ideal to run it like this; it does like to have a fair bit of CPU time available.

Sometimes it fails - usually when the VM gets shut down unexpectedly. Most recently I discovered I wasn't getting email because clamav wasn't working - this was some time after upgrading from 4.5 to 5.0, although 5.0 had been working. The Zimbra watchdog was attempting to start it every few minutes and the host machine's CPU time was disappearing rapidly.

I spent some time looking at the configurations, wasting time wondering why the zimbra/clamav/etc/ files were defaults (it doesn't matter, they're not used).

Eventually I deciphered the log message: ERROR: MD5 verification error

The fix: delete the contents of zimbra/clamav/db/, then run 'sudo -u zimbra zimbra/clamav/bin/freshclam'.

This is the first time I've ever seen ClamAV corrupt its database.

I consider myself an expert on the installation, deployment and management of Postfix, amavisd-new and ClamAV... but they're so well integrated with Zimbra that problems like this are often not as obvious as they should be.

AWStats - ignore visits to your own sites

Here's a handy tip for those viewing stats for their own sites and finding that their own IP addresses are skewing the results. It applies to AWStats and only if you have access to the configuration or can ask your hosting ISP to add this line - if you can't do that then consider a new hosting ISP; I can certainly recommend one or two!

If you have a fixed IP or range from your ISP, this is quite easy. Otherwise you could block the whole ISP ranges you're likely to use - but that runs the risk of then not recording many of the legitimate visits from other people on the same ISP as you.

The line to configure is SkipHosts. Here is an example:

SkipHosts="localhost REGEX[^192\.168\.10\.2[0-7]$]"

You can list as many IPs here as you like - single fixed addresses as well as some powerful regular expression matchers. The example above has two elements:

localhost: if you have a cronjob on the machine hosting the site which accesses it (such as by wget or curl), add this - it doesn't hurt to anyway. The REGEX element blocks a range of IPs, here 192.168.10.20 to 192.168.10.27. It isn't possible to use CIDR notation, so you have to get creative like this.
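If you're unsure exactly which addresses a pattern like that covers, it's easy to sanity-check with grep -E, which accepts the same syntax for a simple pattern like this (a quick sketch):

```shell
# Feed candidate addresses through the pattern; only .20-.27 survive.
for i in 18 19 20 27 28 29; do
  echo "192.168.10.$i"
done | grep -E '^192\.168\.10\.2[0-7]$'
# prints 192.168.10.20 and 192.168.10.27
```

Testing the pattern this way before editing the AWStats config saves waiting a day to discover your own visits are still being counted.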

If you get stuck, drop a comment in and I'll see if I can help!


vmware-cmd <machine> reset not working?

If your VMWare client crashes and becomes non-responsive regularly (e.g. you're running a Windows client), it's a good idea to have a script monitoring it and calling reset when it dies.

The first obstacle I encountered was when "vmware-cmd /mymachine.vmx reset" was run and I got the following:

VMControl error -8: Invalid operation for virtual machine's current state: Make sure the VMware Server Tools are running

This obscure error was because I didn't read the documentation. 'Reset' takes an argument, see [1]. By default, it will try to do a 'soft' reboot - i.e. it will only talk to vmware-tools inside the machine and attempt a clean reboot. If the machine is dead, that's not going to happen. The answer is to use 'hard' or 'trysoft'. I prefer trysoft: if there is no response from vmware-tools, it tells VMWare to do a 'hard' reset of the client.

Here's the bit of perl I'm using to monitor my VMWare machine:

#!/usr/bin/perl -w

use strict;
use Net::Ping;

# This is the host we check is up.
my $host = "";

# Explicit binding, not normally needed.
#my $local_addr = "";

my $p = Net::Ping->new();
if( !$p->ping($host) )
print "MachineX is down, forcing VMWare client reset...\n";
`/opt/vmware/server/bin/vmware-cmd "/var/lib/vmware/Virtual Machines/client directory/client configuration.vmx" reset trysoft`


MacOS X Leopard slow shutdown

I've had a problem recently where my new MacBook has been very slow to shut down. I was unable to point to any change I'd made which happened at the same time as the problem started. But I did notice that if I were to log in as another user on the machine then the problem disappeared - I was sure it was something I had done to my account but wasn't sure where to start looking...

Recently I installed Tunnelblick, a MacOS X GUI for OpenVPN. This ran fine on Tiger, and fine when I upgraded the Tiger machine to Leopard. But I did have a lot of issues (known issues) getting it to install and work on the fresh Leopard install on my MacBook. But work it did, eventually, and I thought no more of it.

It is now obvious that Tunnelblick is the cause of the problems. I can't get it to quit, it just hangs. The same will be happening on shutdown, but MacOS will eventually kill it to switch the machine off.

The solution is to wait for a fix for this very important bit of software. The website was last updated two months ago stating that there was a problem. Initially I thought it was 7 months out of date and that the project was dead, but the site is using a USA date format.

Until then, load the software only when needed and remove it using Activity Monitor immediately afterwards.

MacBook, review part 2

Battery life and Charger heat

The battery life of the MacBook is an improvement on the Powerbook but not an earth-shattering difference. I get a good four hours out of it, while the Powerbook was good for 2.5 hours (its battery was a few years old, though). Very light usage will see five to six hours. Charging seems to take an age, but that's an illusion: it charges rapidly to 80% then switches to a trickle charge. This is why the PSU light seems to remain orange for so long before going green.

The charger gave me a bit of a scare the first time I used it. Under full load, i.e. charging a battery from nothing, it becomes very hot - too hot to hold in your hand, in fact. In normal use, where you charge before losing power, it never gets beyond warm. This seems to be the normal behaviour.

The magnetic attachment on the power connector is genius - I've pulled it out accidentally a few times and cursed it, but the question is: would I have otherwise damaged it in those circumstances?


The MacBook does feel like the electronics run warmer than the G4 Powerbook's. That is, there's always a lot of heat around the back where the fan exhaust is. On the G4, I found that the hard disk was a major source of heat in normal use, but the fan did not usually kick in. This may have been because the HD was a larger replacement. I can't feel any heat from the MacBook's hard disk.

When putting the machine under a bit more stress, so that it needs the fans to keep cool, the MacBook is much better than the Powerbook. The Powerbook had a habit of running the fans for a long time after the CPU-intensive work had ended. The MacBook is, in comparison, much more responsive: it'll spin the fan up and down almost in sync with the CPU monitor in Activity Monitor! It also has a wider working range. The G4's fan may have remained off for a lot longer, but it had a minimum speed as soon as it was going. The MacBook will spin its fan almost silently when required, but it does seem to be turning more often.


I've not had any of the reported problems that earlier MacBook owners have reported. No 'mooing' fans. No click of death from the hard disk. No screen problems (well, there's a dead pixel or a bit of dust in the title bar area of the screen).

But I do find the plastic case poor. It flexes a lot - if I pick the machine up by one corner, it bends so much that if the fan is turning you can hear the blades start to rub on something inside. The keyboard itself I'm still happy with - it has a very nice feel and responsive keys - but there must be a tiny bit of flex in its plastic mountings; there's something there that doesn't quite feel as it should.

Perhaps this is because I'm so used to the aluminium Powerbook. I do find it better than the myriad cheap PC laptops out there, but not better than PC laptops of equal cost - remember, the Mac comes at a premium. In all areas but the case, the premium is worth it.

Please Apple, bring back the 12" pro notebook - I suppose I'm just asking for a 13" MacBook Pro really. Hmm, isn't that really what the Air is? No, the Air is a really good idea and certain to change the laptop market in the long run but I need those ports down the left side and that CD slot on the right.


The last little note for today relates to the keyboard layout. Apple have screwed up here, along with not being able to hold mute to stop the startup bong.

The older MacBooks shared the Fn keyboard layout with the Powerbooks: you had all the Expose features, screen brightness and sound along the F keys. Apple have changed this on the new models so that they now look like a cheap PC 'multimedia' laptop from long ago. The Expose buttons have gone, replaced with a single Expose button on F3. Their original functions are still there, but you need to hold the Fn button. The sound controls are now on F10-F12. On F7-F9 are three new keys: rewind, pause/play, fast forward.

This ranks as the most trivial, useless and annoying change to a computer platform that I've ever seen. I have a remote control with these functions on! Why duplicate them on the keyboard? I've effectively lost the most used key from my Powerbook, the 'show desktop' Expose function.

I will quickly grow used to it, I suppose. Until then, a curse on whoever approved that change.


Cool'n'Quiet, Mandriva Linux 2008

I have an AMD Athlon X2 5600.

I found that as soon as my Linux box used some CPU time and got a bit warm as a result it would increase the CPU fan speed to 3,000 RPM or more. As expected. However, it would then never drop back down to the 2,000 RPM idle speed.

The reason is that the kernel defaults to 'performance' mode for cool'n'quiet.

Take a look at:
# cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

To see what your kernel/CPU supports:
# cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors

To enable a near instant change of fan speed when the CPU usage drops, try:
# echo "ondemand" > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
# echo "ondemand" > /sys/devices/system/cpu/cpu1/cpufreq/scaling_governor
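Typing that echo for each core gets tedious on bigger machines; a small loop covers however many cores are present (a sketch - the SYSCPU variable is my own addition so the same loop can be exercised against a copy of the sysfs tree; normally you'd run it as root against /sys directly):

```shell
# Set the "ondemand" governor on every CPU found under the sysfs tree.
# SYSCPU defaults to the real path; override it to test against a copy.
SYSCPU="${SYSCPU:-/sys/devices/system/cpu}"
for g in "$SYSCPU"/cpu[0-9]*/cpufreq/scaling_governor; do
  if [ -w "$g" ]; then
    echo ondemand > "$g"
  fi
done
```

Note this doesn't survive a reboot; most distributions have a cpufreq/cpuspeed init script or similar to make the governor choice persistent.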

I'm not sure what performance impact this may have, but my machine is quiet again now. I'm researching cpufreqd and trying to understand how this all works, and will report back!


Backing up Data

I have a bit of experience in backing up unix and Windows platforms using open tools. I'll take five minutes to share a little of what I do, but I know that this is imperfect. I'm really looking for hints and tips on how to improve things!

Backing up Linux from one disk to another

The quickest way to retrieve a lost file is to have it sitting on the file system so that you can simply copy it back. If that file system is mounted remotely by NFS, you get protection from disk loss too.

What we're really doing here is replicating the data exactly as it sits on the source.

The way to do this is to use rsync. Backing up with cp or tar results in every file being copied every time; rsync reduces the load on the system by copying only changes - very important once you get beyond a few megabytes and into gigabytes. I initially did this over an NFS mount but found it inefficient: disk I/O and NFS traffic were much higher than I wanted, and this led to high CPU load too. The solution was to have the network I/O use rsyncd, which is specialised for network rather than disk I/O.

Configuring rsyncd:

The defaults in most installations are good; you need only add your 'module', which is where backups are placed. The module name must match the path in the client's rsync:// URL:

[backup]
path = /mnt/massstorage/backups
comment = Backup storage
read only = no
write only = yes
hosts allow =
hosts deny = *
auth users = newuser
secrets file = /etc/rsyncd.secrets
# required to preserve attributes
uid = root
gid = root

In rsyncd.secrets, make a user. Perhaps one per machine backed up.
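The secrets file format is just user:password pairs, one per line (the password below is obviously an example; the username matches the 'auth users' setting above). It should be owned by root and chmod 600, or rsyncd's default strict-modes check may refuse to use it:

```
# /etc/rsyncd.secrets - chmod 600, owned by root
newuser:examplepassword
```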


On the client machine, create yourself a script to be run from cron. Note that there are two ways to give the password to rsync: either by environment variable or by a secured file. The file is the correct way to go, as this reduces the chances of leaking the password to other programs. The example below uses a bash-style per-command environment variable.

The use of slashes on the end of the source and destination paths is significant. Please read the rsync man page, which explains it well.

RSYNC_PASSWORD="password" rsync -a --delete -x /var/ rsync://newuser@mybackupmachine/backup/var/
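The file-based variant might look like this (a sketch: the path is an arbitrary choice of mine, and the rsync invocation is shown as a comment since it mirrors the line above rather than being runnable on its own):

```shell
# Create the password file, readable only by its owner.
SECRET="$HOME/.rsync.backup.secret"
umask 077
echo "password" > "$SECRET"
ls -l "$SECRET"

# The cron job then passes the file instead of exporting RSYNC_PASSWORD:
#   rsync -a --delete -x --password-file="$SECRET" \
#       /var/ rsync://newuser@mybackupmachine/backup/var/
```

With --password-file, the password never appears in the process environment, which any program run by the same user could otherwise read.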

Backing up Windows to unix using rsync

This is much like the above, but I prefer to do things a little differently to ensure that the more involved tasks are done on unix, where they are more easily managed.

I install the best Windows port I've found - cwRsync. If this is to go over the public internet or anything untrusted, I install the cwRsync Server version together with OpenSSH.

I configure the Windows version to run as a server and then pull the files over from unix - specifying the files from the remote unix machine. This gives a great deal of flexibility and control if you are backing up a customer's machine. Configuring the SSH tunnel to run reliably when customers have very unreliable ADSL lines can be a challenge but I leave this up to the reader.

There are a couple of issues with backing up Windows:

Locked files

For locked files (and database-style files which are locked and need to be grabbed in a consistent fashion - think Exchange) I configure Windows Backup to create a large bundle of files in a .bkf file. This is what rsync grabs. Rsync does a good job of applying deltas to large files like this to speed up the transfer, but it can still make the situation difficult: it is much slower and more I/O intensive than grabbing smaller individual files.

Permissions

Permissions are more of a mess. Rsync runs in the Backup Operators group by default. Normally, a Windows backup utility would do this but also set a special Windows API bit to say "I'm a backup tool, let me at the files". Rsync can't do this. Therefore, any file not explicitly readable by the Administrators or Backup Operators groups is lost. The best solution I have here is to change the rsync service to run under the one user which does have the equivalent of 'root' access - Administrator. This isn't the cleanest or most secure solution. You may find that rsync then refuses to start - you need to delete the two special stdin and stdout log files in C:\Program Files\cwRsync.

This method still has problems, as you can block access to Administrator with Windows permissions. This is quite common in my experience. The only solution to this is to watch the rsync logs and ask the customer/admin to add Backup Operator access to any files you can't get to. Messy.

Backing up to tape

Use Amanda if you have a changer. In fact, I'd say use Amanda otherwise too. The reporting features are useful even if you only use a fraction of the software's capabilities. This works well under Windows too using the available Windows client package. The same permissions and locked files issues will occur under Windows with the same solutions as rsync.

I can't think of much to say here regarding Amanda and tapes - it seems to my memory to be more down to getting the configuration correct than of any special voodoo. The Amanda docs, mailing list and especially wiki are by far the best sources of information and more valuable than anything I could write here.

