CCTV as designed by a computer scientist

For several years now I've been slowly building up a computer-controlled CCTV system. Naturally, I'm not going to be satisfied with pointing a camera out of the window and saving plain old video to disk. Although I have yet to do any proper study of the subject, I am fascinated by computer vision, and this is an ideal platform for me to mess about with in my spare time. And, wondrously, the result of my hacking around is something worth using!

Here's a quick description of my system. I started building it in late 2005 and have added to it in fits and starts over the years. The computer vision part was implemented this weekend based on perl scripts I originally prototyped at the end of 2006.

CCTV cameras are wired to a standard DV card with composite inputs.
This card is in machine 1.
'motion' runs on machine 1 and detects movement within each camera's field of vision.

On detecting movement:
* frames from the camera start spooling to disk.
* additionally, a number of preceding frames are dumped to disk from memory.
* a perl script is fired off for each movement detection (script 1).

A cleanup script (script 2) is called each hour to maintain storage by removing old images.

Script 1 takes the meta-data of each event:
* if the movement/event that has been detected is not significant, the script exits.
* if an alert has been raised in the last n seconds, the script exits - this throttles a 12 FPS stream down to something manageable (the script initially sent an SMS); a rough sketch of this check follows the list.
* the data about the event is inserted into an Oracle 10g database running on machine 2.
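
Script 1 is perl, but the throttling check itself is trivial. As a rough illustration only (in PHP rather than perl, with an invented state file, threshold and raise_alert() helper), it boils down to this:

<?php
// Rough illustration of the alert throttle - not the real perl script.
// The state-file path, threshold and raise_alert() are invented for the example.
$threshold = 60;                                   // minimum seconds between alerts
$stateFile = '/var/run/cctv-last-alert';

$last = file_exists($stateFile) ? (int) file_get_contents($stateFile) : 0;
if (time() - $last < $threshold) {
    exit;                                          // an alert went out recently; stay quiet
}

file_put_contents($stateFile, time());             // record this alert...
raise_alert();                                     // ...then raise it (SMS, email, etc.)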

Machine 2 runs a script (script 3) which polls this database, looking for new events.
On finding an event:
* the corresponding image is fetched over NFS.
* an image from the seconds directly preceding the event is chosen and also fetched over NFS.
* using these two images, the object which caused the motion detection is extracted and handed over to an analyser (a rough sketch of this differencing step follows the list).
* the analyser picks out details regarding the object and hands these to a classifier.
* the classifier decides what the object is most likely to be:
- a raindrop, insect, cat, bird, person or vehicle.
* this extracted information is pushed back into the database.
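
The extraction step is essentially frame differencing. My scripts are perl and rather more involved, but the core idea, sketched here in PHP with GD purely for illustration (file names and the threshold are made up), is to diff the 'before' and 'after' frames and crop out the region that changed:

<?php
// Illustration only: find the bounding box of pixels that changed between two
// frames, then crop that region out for the analyser. Paths and threshold are
// placeholders, and the real analysis/classification is far more involved.
$before = imagecreatefromjpeg('before.jpg');
$after  = imagecreatefromjpeg('after.jpg');
$w = imagesx($after);
$h = imagesy($after);
$threshold = 40;                  // per-channel difference treated as movement

$minX = $w; $minY = $h; $maxX = -1; $maxY = -1;
for ($y = 0; $y < $h; $y++) {
    for ($x = 0; $x < $w; $x++) {
        $a = imagecolorat($before, $x, $y);
        $b = imagecolorat($after, $x, $y);
        $dr = abs((($a >> 16) & 0xFF) - (($b >> 16) & 0xFF));
        $dg = abs((($a >> 8) & 0xFF)  - (($b >> 8) & 0xFF));
        $db = abs(($a & 0xFF)         - ($b & 0xFF));
        if ($dr > $threshold || $dg > $threshold || $db > $threshold) {
            $minX = min($minX, $x); $minY = min($minY, $y);
            $maxX = max($maxX, $x); $maxY = max($maxY, $y);
        }
    }
}

if ($maxX >= 0) {
    // Crop the changed region and save it for the analyser/classifier.
    $ow = $maxX - $minX + 1;
    $oh = $maxY - $minY + 1;
    $object = imagecreatetruecolor($ow, $oh);
    imagecopy($object, $after, 0, 0, $minX, $minY, $ow, $oh);
    imagejpeg($object, 'object.jpg');
}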

From the classification of the object and the time of day, the software can decide whether to send an email, send an SMS, call a phone number using VoIP or do nothing. From here an output controller could be added fairly easily, allowing the CCTV to do things like switching on lights.

The computer vision scripts are written in perl and as such aren't overly fast. A frame takes 30 seconds or so to analyse, hence the two-part system. Machine 2 is also much faster than machine 1, hence the offloading of CV tasks. Of course, I know that my code is sub-optimal in many ways.

My intention is to rewrite the image processing in Java. My perl stuff tends to be for prototyping, and if I think I can't improve on something then I'll leave it in perl. In this case, though, a rewrite in a compiled language is certainly near the top of the list of things I need to do next.

svn: cannot rename file 'entries'

Using an SVN working directory on a Samba share from a Mac? With Subclipse or similar? You may have seen the following message as it bombs during checkout:

svn: cannot rename file 'entries'

I did, and Google says others have too, for at least the last three years.

Examining the files in the .svn directories, I find that the root 'entries' file is created with permissions of -r--r--r--, so naturally a file can't be renamed over it as that requires write permissions.

After a bit of hacking I came up with the following solution, which may or may not be suitable for others. Use with care.

Create a share in Samba specifically for your working directories if you haven't already.

Add these lines to the share definition:
force create mode = 0640
force security mode = 0640

or, if you want world readability (I don't, hence 640..):

force create mode = 0644
force security mode = 0644

Now everything you write to that share will be forced as writable (at least -rw-r-----) by your user account. This fixes the SVN bug.

I say 'SVN bug', but it could be Subclipse, the Java SVN library, Samba,... I don't know.

PHP Frameworks, Part 2

I wrote a while ago about how I wanted to give up trying to develop my own frameworks, because there were so many third-party kits available and they must surely be so much better than anything I have time to write. Well, I can't say that I've really pushed ahead with trying to learn any PHP frameworks. I've played with the two I mentioned, but there remained a general feeling of unease - that somehow I'd be wasting my time and that I should move on to bigger and better things. That is to say, Java.

I have a need to develop a couple of webapps for myself and I am absolutely convinced that these are best written in modern Java. There really is so much more you can do when you're in the JVM than in pretty much any other environment. Disclosure: my day job is mostly PHP and a bit of Windows. I haven't dealt with Java on a day-to-day basis for many years now and I am completely out of date with modern frameworks.

So, I've spent the last few months slowly picking at Java and working out how best to write my apps. I've come to the conclusion that Spring and Hibernate are perhaps the best building blocks (not that I'm yet fully aware of what dependency injection is) and found AppFuse2, which is fantastic. AppFuse takes a year of learning out of getting started with Java.

But it's slow. I read the books, pick at the examples, look at forums. I'm still stuck. The learning curve is defeating me, even as someone who was totally immersed in Java half a decade ago and is a full-time web programmer, albeit in PHP. Each obstacle I encounter doesn't just slow me down; it puts me off and makes it that much harder to get started again the next morning.

So on Saturday I made the decision to abandon the work I've done on my app and restart it in PHP. It's now Tuesday and I've spent ten hours working on it. I've spent much of that time building a PHP framework, but doing so for the needs dictated by my application (I find that unless something has a reason, it is difficult for me to concentrate on - this is true for me with maths too, where applications in physics and computing fascinate me but the pure form is an obscure cloud).

I already had a basic system in place: some objects, Smarty, controllers tying it together... it worked. But I've rewritten vast amounts of it. Pretty much the only part that's left is the database abstraction layer, and only because I don't see it: it hides under my ORM. (But I want to replace it too: a) it's currently MySQLi-only (give me Oracle, Postgres!) and b) I have new ideas even for this - see the PHP Optimisation post, which describes a memcached-aware database class.)

This happens to me each time I delve back into Java. I never quite get there, but I take many ideas back with me to PHP. The future is created in Java. It really is. Look at any Computer Science research programme - you're going to find Java in there somewhere. I don't like playing follow-the-leader. I want to be there, at the head - I was once, but life got in the way.

But it pays to watch what the leaders are doing. This is true of programming as with anything else. I might not become the best, but I can become better.

So back to my framework. I've written several in PHP, both at work and for myself. At work, I do not innovate. It's a case of getting something done in the shortest possible time (giving the client what they want within their budget) and ignoring the long term possibilities (would you build a CMS using your own money then resell, or write something specific each time using the client's money?). That's how most hourly paid consultancy / outsourcing businesses work. At home, however, I am free to do my own thing. I am often more efficient, having time to think things over before starting work. I can plan, and not worry about who'll be picking up the bill.

My last 'home' framework pushed forward with using PHP classes. But it wasn't complete - a partially done MVC system used normal PHP procedural scripts to interface the view (Smarty) to the model (the classes). It wasn't correct, lacking the use of interfaces and the like, and to be quite honest it was written in such a way that an interface never made good sense.

This has changed. The only bits of code not contained within classes are the PHP __autoload() function and a dispatcher. Interfaces define classes that can accept input from HTTP and generate output data (controllers). And then there's an interface for classes that can render that data - meaning that although I am still using Smarty, I don't need to. My renderer, which provides the 'View' of 'MVC', is defined by a strict interface.
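
To give a feel for the shape of it (these are illustrative names, not my actual classes), the contracts look something like this:

<?php
// Illustrative only - the real interface and class names differ.
interface Controller
{
    // Accept request input (e.g. $_GET/$_POST) and return data for rendering.
    public function handle(array $request);
}

interface Renderer
{
    // Turn a controller's data into output. Smarty today, anything tomorrow.
    public function render(array $data);
}

class SmartyRenderer implements Renderer
{
    private $smarty;
    private $template;

    public function __construct(Smarty $smarty, $template)
    {
        $this->smarty   = $smarty;
        $this->template = $template;
    }

    public function render(array $data)
    {
        foreach ($data as $key => $value) {
            $this->smarty->assign($key, $value);
        }
        return $this->smarty->fetch($this->template);
    }
}

The dispatcher only ever sees a Controller and a Renderer, so swapping Smarty out later means writing one new class rather than touching every page.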

Classes now extend classes which extend classes ... My stack traces are growing; unheard of in my past PHP work. I'm applying Don't Repeat Yourself (DRY) and using well established patterns. The amount of code required to add a page or module shrinks with every modification I make. I'm abstracting the HTML forms down to ever-simpler arrays of meta-data. Classes can define dependencies which are injected into them at run time - despite PHP's typelessness.
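
The form abstraction amounts to describing fields as data and generating the HTML from that. A stripped-down sketch (field definitions invented for the example):

<?php
// Illustrative sketch: a form described as meta-data rather than hand-written HTML.
$fields = array(
    'name'  => array('type' => 'text',     'label' => 'Name',  'required' => true),
    'email' => array('type' => 'text',     'label' => 'Email', 'required' => true),
    'notes' => array('type' => 'textarea', 'label' => 'Notes', 'required' => false),
);

function render_form(array $fields)
{
    $html = "<form method=\"post\">\n";
    foreach ($fields as $name => $def) {
        $html .= '<label>' . htmlspecialchars($def['label']) . '</label> ';
        if ($def['type'] === 'textarea') {
            $html .= '<textarea name="' . $name . '"></textarea>';
        } else {
            $html .= '<input type="text" name="' . $name . '" />';
        }
        $html .= "<br />\n";
    }
    return $html . "</form>\n";
}

Adding a field to a page becomes one line of meta-data; rendering (and, later, validation) lives in one place.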

There is a lot of the Spring framework in this, even though I perhaps don't understand Spring.

I really feel like I've accomplished something: indeed, in terms of personal improvement, I have. I've made more progress on my application in ten hours - including having written a generic framework - than I did in two months with Java, where the framework was provided. But I still love you, Java.

I've also beaten, a million times over, the efforts of the last two years of my day job: the time over which I've been snatching an hour here and an hour there of clients' time to adapt that framework to their needs and at the same time make something worth using for future projects. I worry now that I'll end up giving this code to my employer for nothing, simply to make my days more bearable, as I have done many times before.

I have written a lot but said little, so I'll wander off and do some more work, I think. I will complete my application, which is really the whole point of this, then decide what to do with my framework.

I nearly wrote 'going forward' at the end of that last paragraph. I'm glad I didn't and that I won't be using that stinking phrase, going forward.

PHP, MySQL Optimisation

One of my current projects is the scaling up and out of a fairly large website. It currently sits on a dedicated server. Quite a large machine, but now three years old and we're starting to hit some limits and see slowdown at peak times. The site has already outgrown one server, scaling up to the current box.

The site sees steady growth and is doubling every six months or so. A target has been set and so I must build a system to meet it. The new system is currently in the testing phase so I expect some changes to what I'll write here as we go on.

Some background before I begin detailing some of the experiences I've had working on the site recently:
The site was originally written in 2003 by an unknown third party as a PHP4 site. 'Register globals' and ext_mysql abound. Since then it's been in constant development, with me taking over in 2004, but the essential framework has not been significantly altered away from typical PHP4 ways. In the last year or so I've started to introduce some classes into the system in an attempt to compartmentalise functionality, which is otherwise so often spread over a dozen files. The size of the code base has increased tenfold since 2004; while I have made some bad decisions in some of the things I've implemented, the majority of the site is quite well written, given that it's procedural PHP4, and I thank the original author for the clean design.
The hosting platform is the highly capable FreeBSD. MySQL is version 4.1. The current PHP platform is 5.2.

The site itself is one code base with two front ends: the customer side and the administration side. The admin is very complex and contains a mass of functionality to run a multinational business. The client side is much simpler but data intensive (text and images) and very much the focus of traffic. The customer side also customises based on the URL - the software serves many hundreds of sites, each one potentially run by a physical office which could be anywhere around the world (we're on three continents now I believe).

Okay, so how do we go about scaling up what is essentially a PHP4 site running on FreeBSD?

The first job is to get some new servers in and separate out some of the tasks.
1. New database server. Masses of RAM and very fast disks. Priority goes to RAM and disk. We've got this in RAID5 which perhaps isn't best but should see us good for a few years to come.
2. Repurpose old server as a database slave.
3. An application server with as much CPU and RAM as funds can bear - running only PHP.
4. Another application server identical to the first.

The above scheme gives us two ways to scale horizontally: more application servers and more database slaves. Our limit then is database writes: once we start to stress that, I would seriously look at moving to Oracle, but MySQL continues to improve and we'll see what happens over the next three to five years.

The structure of the site is developing like this:

  • db1: hosting MySQL 5.1rc, PHP files, image files and Apache configurations. The PHP, images and Apache are exported using NFS.
  • db2: not implemented yet.
  • app1: running Apache 2.2 and PHP 5.2.6. /home is imported from db1, /usr/local/etc/apache/Includes is imported from db1. In this way each box has its own httpd.conf but gets global site configurations from a central point.
  • app2: a clone of app1, except for IPs and hostname.

Every service listed above (files, database, ...) is allocated a host name in DNS and is treated separately in software, so that if necessary we can break services out from a single server onto new dedicated boxes.

In terms of centralised server management, I have written a small perl daemon (actually reusing something I wrote for my home network years and years ago). It consists of a program running on each server which knows certain commands and can perform them on the local machine, and a control program which is available on each node and can connect to every appropriate daemon and execute these functions.

I have my doubts about hosting the PHP via NFS - even over gigabit. A caching NFS client would make me feel happier. If I detect significant performance problems I will have the entire file structure replicated to each box using rsync.

With this being a single purpose cluster, it also makes sense to centralise login and authentication. So I am using NIS from the central DB server. I have found that NIS under FreeBSD is far from as transparent as under Linux, which is my only complaint. I must manually copy the master.passwd file, trim out the system accounts and then remake the yp files. Under Linux (Mandriva and Gentoo at least) this is more automatic.

That's it for the hardware and system configuration of the cluster - the rest is in the load balancers and software.

The load balancers (there are two, in a failover cluster) are actually also the firewalls and intrusion detection boxes for this part of the ISP. The machines are so flexible that we can centralise much of our network management here. They are two Gentoo Linux machines, running iptables, tc, ipvs, snort and a number of other services. Each machine is identical, and configuration is managed by propagating changes from any one machine to all the others (today there are only two machines, but we can scale even into active/active for any number of boxes). Otherwise the machines are completely independent. A custom Heartbeat resource was written and is capable of failing over within 35 seconds of a problem being detected - with the ARP problem solved by migrating MAC addresses around the cluster. Even all the cabling is in pairs for redundancy. I'll perhaps write more on this system later, but suffice it to say that incoming traffic is shared between application servers in a controlled way and we can detect node failures at any point and work around them automatically within seconds.

That's the hardware pretty much out of the way. Onto the software changes.

Over the last few days I've made a raft of changes to improve performance and prepare for clustering. As part of this I've also spent time optimising to avoid DDoS attacks as we've suffered some recently from spambots working in geographically disparate botnets hitting forms at a rate of hundreds/second.

Naturally, I would like to rewrite the site starting pretty much from scratch (programmers are like architects, once something has been built they want to do it again but bigger and better). To do that would take a very long time and probably kill the site dead in terms of technological lead; rewrites are very often the worst thing you can do.

So, what have I started doing?

  • Sessions: standard PHP sessions don't cut it in a cluster if you send users to servers in a round-robin fashion. Keeping each user on one server is generally better - unless your machines are untaxed, or you can measure server load by current connections (which isn't appropriate for short-lived HTTP hits), there's no real gain from round-robin - but a true cluster needs sessions shared among the application nodes anyway. So a PHP session handler class was written and implemented, and this was mostly transparent to the site code. I have given the session class its own connection to the database - we'll see whether that's a good idea soon enough... (A sketch of the idea follows this list.)
  • Modernisation of database control. I moved the entire site from ext/mysql to ext/mysqli for performance, transactions and a class interface. I added a database class as a subclass of mysqli to centralise all SQL state and functionality, and I wrote wrappers for all the ext/mysql functions in use, to avoid large rewrites. Why was this important? Centralising SQL, and not calling mysql_*() functions directly, will let me add features to handle a replicated MySQL database (I haven't added this yet, though), and being in a class provides a way to add functionality more easily later on. (The shape of this is sketched after the list.)

    In essence all mysql_*() have been removed from the code and replaced with sql_*() functions. These work in exactly the same way as the old mysql_*() functions. Behind the scenes, they call methods in $db.

    The wrapper class (instantiated as $db) contains all the same functions again as methods. That is:

    $result = sql_query_value( "SELECT a FROM .." );

    can be written as:

    $db->query_value( "SELECT a FROM .." );

    This is exactly the same way that MySQLi itself provides both procedural and object-oriented interfaces.

    This also means we can implement the same class interface again to do other things with SQL without rewriting any calling code, which brings me on to:

  • memcached: this is marvellous, it really is. The increased memory on each application node makes it a realistic option now. memcached holds data in memory for a set period of time. Example: on the front page we make more than a hundred SQL calls, the majority of which return data that changes on a scale of days, some over months and some immediately. Why tax the database by requerying this data a hundred times a second? Okay, so the possibilities for performance gains are obvious. What about the implementation? Can we do it without a big rewrite? Yes.

    We write a class to manage memcached, extending the memcached PECL extension class. Importantly, this class implements the same interface as our new database class: we call it with normal SQL, and if the result is cached we get it back straight away. If it is not cached, it goes off and queries the database for us. (See the sketch after this list.)

    The database class was implemented like this: mysql_query_value() became sql_query_value(), which is a wrapper for $db->query_value(). The memcached class works the same way: $memcached->query_value(). (query_value is a helper function written in PHP which has been used since day one - much of the site is written like this and it makes development rapid.)

    As you can see, as long as you can identify time consuming or rapidly hit calls, you can cache them with ease.

    An additional feature we can then provide is a polymorphic constructor. Should memcached fail, we can return an instance of the mysqli wrapper to the caller. Since they implement the same interface, the same calls work and the site continues to run - just perhaps somewhat slower.

    All this is elementary to Java developers but it's quite exciting to see it working in PHP, particularly to drive such a PHP4 site!
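
For the curious, here's a bare-bones sketch of the shared session handler idea, in the PHP 5.2 style of callbacks. The class, table and column names here are invented for illustration, not lifted from the real code:

<?php
// Sketch of a MySQL-backed session handler so any application node can serve
// any user. Table/column names are invented; the real class has its own DB link.
class DbSessionHandler
{
    private $db;

    public function __construct(mysqli $db) { $this->db = $db; }

    public function open($savePath, $sessionName) { return true; }
    public function close() { return true; }

    public function read($id)
    {
        $id  = $this->db->real_escape_string($id);
        $res = $this->db->query("SELECT data FROM sessions WHERE id = '$id'");
        $row = $res ? $res->fetch_row() : null;
        return $row ? $row[0] : '';          // must return a string, even on a miss
    }

    public function write($id, $data)
    {
        $id   = $this->db->real_escape_string($id);
        $data = $this->db->real_escape_string($data);
        return (bool) $this->db->query(
            "REPLACE INTO sessions (id, data, updated) VALUES ('$id', '$data', NOW())"
        );
    }

    public function destroy($id)
    {
        $id = $this->db->real_escape_string($id);
        return (bool) $this->db->query("DELETE FROM sessions WHERE id = '$id'");
    }

    public function gc($maxLifetime)
    {
        return (bool) $this->db->query(
            "DELETE FROM sessions WHERE updated < NOW() - INTERVAL $maxLifetime SECOND"
        );
    }
}

$handler = new DbSessionHandler(new mysqli('dbhost', 'user', 'pass', 'site'));
session_set_save_handler(
    array($handler, 'open'), array($handler, 'close'),
    array($handler, 'read'), array($handler, 'write'),
    array($handler, 'destroy'), array($handler, 'gc')
);
session_start();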
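
And here's a sketch of the database wrapper and its memcached front-end. Again the names and connection details are invented, and where my real cache class extends the PECL class, this sketch uses composition just to keep it short - the point is only that both ends expose the same interface:

<?php
// A thin mysqli subclass exposing query_value(), plus a memcached front-end
// that implements the same method and falls back to the database on a miss.
class Db extends mysqli
{
    private static $instance;

    public static function instance()
    {
        if (!self::$instance) {
            self::$instance = new self('dbhost', 'user', 'pass', 'site');
        }
        return self::$instance;
    }

    // Run a query expected to return a single value.
    public function query_value($sql)
    {
        $res = $this->query($sql);
        $row = $res ? $res->fetch_row() : null;
        return $row ? $row[0] : null;
    }
}

class CachedDb
{
    private $cache;
    private $db;
    private $ttl;

    public function __construct(Memcache $cache, Db $db, $ttl = 300)
    {
        $this->cache = $cache;
        $this->db    = $db;
        $this->ttl   = $ttl;
    }

    // Same interface as Db: the caller neither knows nor cares which one it got.
    public function query_value($sql)
    {
        $key   = md5($sql);
        $value = $this->cache->get($key);
        if ($value === false) {                      // cache miss
            $value = $this->db->query_value($sql);
            $this->cache->set($key, $value, 0, $this->ttl);
        }
        return $value;
    }
}

// The 'polymorphic constructor': return the cached flavour when memcached is
// reachable, otherwise the plain wrapper - the same calls work either way.
function get_db()
{
    $cache = new Memcache();
    if (@$cache->connect('cachehost', 11211)) {
        return new CachedDb($cache, Db::instance());
    }
    return Db::instance();
}

// And the procedural shim the legacy code calls:
function sql_query_value($sql)
{
    return Db::instance()->query_value($sql);
}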

The memcached subclass and the mysqli wrapper class are implemented as singletons, so they can be instantiated anywhere and will reuse any connections already made on the page. Thus we begin to eliminate global variables.

Would you like some figures? Some of the more complex pages within the site now load in a third of the time, but they get perhaps half the number of hits that the main home page gets. So let's look at that:

Without caching:

DB Queries: 156
Execution Time: 0.1264

With:

DB Queries: 15
Execution Time: 0.0530

i.e. the page is generated in less than half the time and the database server is doing almost nothing (relatively speaking).
This is on top of the performance increases from physically separating the MySQL and PHP servers, and of course the speed increases of the new servers themselves.

Interestingly, the partial modernisation of the code framework into classes gives us the flexibility to actually collect these sorts of statistics now. Previously I did not have these numbers. A fair bit of data can be collected now, much more than I reveal above.

Whilst doing all this I also did some more basic optimisations. In some cases I had left unoptimised code in place (I swear, it must have been for testing!), such as a couple of SELECT * queries used to fetch one value from a row - with the row being around 64KB!

I also looked at the cachability of images and pages. Work had been done years ago to ensure processing effort was cached on the server, but much less on the more basic aspect of caching on the user/proxy side.

The best example here was the main image unit, which appears on every page of the site. It is one of many possible photos, automatically scaled and processed on the server and presented to the user. The photos simply do not change over time. Often, once set they are set for years.

These were marked by the code as no-cache, due to no-cache headers being sent by the framework for every page. Although we were not reprocessing the image on each view, we did lose bandwidth to the hit fetching the data from the file system. This was quickly solved, and then the next problem hit: the URL for the image contains a '?', which essentially kills caching dead. The solution was already implemented elsewhere on the site for the PHP pages: use mod_rewrite to provide a clean URL for the client to fetch the image.
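
The image-serving script then just has to send sensible caching headers itself, instead of inheriting the framework's blanket no-cache. A minimal sketch (the path and lifetime are invented for the example):

<?php
// Serve an already-processed photo with long-lived cache headers.
$path     = '/data/photos/processed/1234.jpg';    // hypothetical processed image
$lifetime = 60 * 60 * 24 * 30;                    // cache for a month

header('Content-Type: image/jpeg');
header('Content-Length: ' . filesize($path));
header('Cache-Control: public, max-age=' . $lifetime);
header('Expires: ' . gmdate('D, d M Y H:i:s', time() + $lifetime) . ' GMT');
header('Last-Modified: ' . gmdate('D, d M Y H:i:s', filemtime($path)) . ' GMT');
readfile($path);

The mod_rewrite rule simply maps the clean URL onto this script, so the browser never sees a '?'.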

The final result is that, with a primed cache, only two calls are made to the server for the home page, down from twenty-six. I may soon be able to reduce this to one.

As an aside, YSlow is reporting the wrong numbers for this and still showing masses of hits for the page; I can't think why, as Firebug explicitly shows what is really fetched.

That's all I can think to write about for now.
I will try to report back as development and testing continue, and on what happens when the cluster goes live - particularly whether NFS can handle serving up all those /home directories.

Recovering Linux RAID5 with mdadm

If a Linux box has hardware trouble and you temporarily lose a disk or two from a RAID5 array, you might get into a state where mdadm --assemble does not work. This can happen with a controller failure, or if you have faulty cabling.
You're seeing stuff like:

mdadm: failed to run array /dev/md7: Input/output error
md: pers->run() failed

Don't panic yet!
First step, ensure you have good backups and use dd or another tool to clone the hard disks.

What you need to do is recreate the RAID. This will work in most cases to get your data back, but it needs to be done carefully to ensure you don't destroy the data in the process.

This doesn't really matter for RAID1, as your data is always consistent on both disks - you can put one disk in and resync everything from that to any other disk.

For RAID5, the data is held across all disks. The thing to realise here is that the order of the disks in the array really matters. The trick to recreating a RAID5 and having it work is to get the order right.

Problems:
1. What if the order is not obvious?
2. Resyncing.

If you add the disks in the wrong order and start the array in a working state, it will perform an initial sync of the array. This will destroy your data, as RAID5 starts to write parity across it.
There may be a trick with mdadm to determine the correct order, but I do not know it (yet).

You must create the array in a degraded state, with a disk missing. This will allow you to mount the array, but will not cause a resync attempt.

So, here's the scenario. There are three disks in an array, hda, hdd, hdg. One failed completely (hdg) a while ago and you had to wait for new disks to be delivered. While waiting, there was an IDE failure and another disk was lost temporarily. Oh dear, we've got a broken array.

You bring the disk back online but the array won't auto-reassemble and mdadm --assemble isn't working. So we move on to recreating.

What you will do is attempt to create the array using two disks. The other will be marked as missing (even though we now have the replacement sitting on the workbench ready).
But we don't know what order the disks belong in the array - maybe we'll get lucky and they are alphabetical, maybe we won't.

This is what we'll do:

$ mdadm --create /dev/md7 --level=5 --raid-devices=3 -f /dev/hda1 /dev/hdd1 missing
$ cat /proc/mdstat
md7 : active raid5 hda1[2] hdd1[1]
240121472 blocks level 5, 64k chunk, algorithm 2 [3/2] [_UU]

So far so good: the RAID5 is running and has not touched the data with any resync attempt. So try to mount it read-only and see what happens.

If it worked, great. Back up your data and carry on with your life. If not, stop the array and try another order. Treat 'missing' like any other disk and move it around too. Perhaps get out a bit of paper and work out all the possible combinations to try.

$ mdadm -S /dev/md7
$ mdadm --create /dev/md7 --level=5 --raid-devices=3 -f /dev/hda1 missing /dev/hdd1

Keep repeating this until the array successfully mounts your file system.

When it has finally worked, and you've backed up, you can add your new disk back in. The array will resync your data across all three disks (or however many you have) and everything will be back to normal.

Email Disclaimers

Email disclaimers are rubbish.

  • If I paid attention to them, I would get nothing done. Whenever an email is forwarded to me, I'll find a message telling me that if the email is not addressed to me (it isn't) then I am to: destroy it, not read it, inform the police, stand on one leg and break the toes of the person 4 factorial to the left.
  • A chain of forwarded/replied emails will contain dozens of contradictory "legal" messages. Partly this is due to the Microsoft Outlook enforced 'top posting' culture that has destroyed emails by enforcing the lowest common denominator for message editing and comprehension.
  • Nobody reads them. How do I know? I insert stories and alter words in them when I get bored. As long as it looks like English at a quick glance, nobody will look twice. Of course, in my experience nobody in sales or management reads the emails I send them anyway.
  • Legally they have no standing. It's the utter wank of the disclaimer inside sealed packaging: I have to read the email to get to the message telling me not to read the email.

I hate the modern idea of email. It makes me sick. Even the government is getting in on the act to ruin it. There is a vague and unintelligible piece of legislation which seems to demand that all business correspondence include a company number, VAT number and registered address. Many people think this applies to emails as well (my employer is making good money selling the implementation of this as a service, "or you get sued!") - but the way the legal document reads, it either does or it doesn't; it really says nothing. The result is that every email has the same bloody footer on it, containing the same rubbish time and time again.

Do I have to give this information when on the phone, given that email and phone tend to be used for the same purposes? Maybe I should! "Hello, computer breakers limited. Company number 12341234, VAT number 12344321, registered in England and Wales, suite 772 widget offices, sometown, someplace, SW90 9AA. How can I help you?"

I have one customer who appears to have run his email template (which is actually an embedded webpage) by an SEO company. It contains some 20 to 30KB of links to various categories within his website.

iMac

A few weeks ago I bought myself an iMac.

My primary server developed faults which became too much for me to fix (it would lock up regularly). A second server also developed faults, which led to the loss of a 220GB RAID5. The first machine was a proper server, with IPMI, watchdogs, multiple SCSI ports and a huge, well-built chassis. But it was only a dual P3: capable, but very slow for a primary machine.

So the decision was made to repurpose an existing desktop machine as a server and virtualise as much as possible onto it with VMWare Server. It's a consumer-grade Socket 939 motherboard in a seven-year-old ATX case, with an Athlon X2 3800. It maxes out at 4GB of RAM, but that'll do for now. It's not bad in terms of speed, but I'm getting spoilt by some new dual-Xeon Dell 1950s at work, which are stunningly quick in all respects (we're talking 12GB/s disk buffer compared with 1GB/s on my machine, and Gentoo emerge --syncs which complete within seconds rather than minutes).

The replacement for this machine on the desktop was to be a Mac.
I wasted weeks deciding which to choose: a highly specced Mac Mini or an iMac. Both had stuff going for them:
* A highly specced Mini would cost as much as the entry level iMac, yet be less well specced and have a slower, smaller hard disk, and of course not include a 20" TFT.
* I have more than one machine on the desk in question, but really only want one monitor. You can't plug other machines into an iMac's screen.

Anyway, I obviously went for the iMac.
I should have pondered for another week. Exactly 8 days after I placed my order, Apple refreshed the iMac range. I could have had a 2.4GHz processor on a faster bus instead of the 2.0GHz, 667MHz-bus machine I have. But never mind.

The iMac is fast. Although the basic spec seems very much similar to my MacBook, it is much faster in every respect. I suspect this is mainly the hard disk (the display is quicker on the ATI chip, but I'm talking launching programs and the like). Naturally the 3.5" SATA2 disk in the iMac is going to be quicker than the 2.5" laptop SATA2 in the MacBook. But it really does pervade everything you do.

The build quality is excellent. Everything about the iMac says high quality; the metals and glass used in the external construction are solid and really give the impression that you've got yourself value for money. The aluminium keyboard looks initially as if it would be flimsy, but it's not - it feels like a slab of aluminium with good weight to hold it on the desk and good key feedback. Although essentially the same as the MacBook's keyboard, it's more solid. The MacBook keyboard suffers from being mounted in plastic, which can flex while typing.

On the Mighty Mouse:
* It's smooth, no problems with movement of the pointer on screen. The tracking is high quality.
* It feels very plasticky, because it is.
* The ball - the replacement for the wheel - is excellent! 2D scrolling is well implemented.
* Touch sensitive buttons: left clicks work fine. Middle clicks (on the ball) work fine. Right clicks can be very awkward - you must remove your fingers from the left side of the mouse completely. As a one button mouse, it's good, but the touch sensitive top needs work.
* Side squeeze buttons: do not work as advertised. Apparently, you should press both together to activate. Not on mine. If either are pressed a click is registered. They are very sensitive and right under my fingers, so I found myself sending spurious clicks constantly. If you really did have to squeeze both, it'd be fine. Maybe my mouse is broken? Anyway, the buttons are disabled in System Preferences.
* You can't hold both left and right buttons down together. This is a problem if you want to run (e.g.) an Amiga in emulation.

I virtualised the old desktop PC into a VMWare machine using the Converter software (as it was my mother's machine, it primarily ran WindowsXP). It was painless to do this. I bought VMWare Fusion with the iMac and WindowsXP runs really well in it - faster than on the AMD processor in the old machine! Unity mode, where Windows applications are brought onto the iMac desktop, is good but you can tell the windows don't belong as they slice and stutter when moved about. My mother mostly uses Windows - still :( - and tends to stick to the full screen mode.

I got 1GB of RAM in the machine, as Apple's prices for upgrades were very high at the time. I bought 4GB from work, using the newly spare Apple SODIMM to bring my MacBook up to 2GB. I obviously don't push my Macs much, as the only place I see a difference is in VMWare, where two operating systems fighting over the same 1GB of RAM will cause swapping and slowdown.

So, my opinion of the iMac? Given that I've always built my own PCs and work at a company which has a strict "no Macs" policy (I've even inadvertently had companies switch from Mac or Linux servers to Windows 2003 - I hate that I am now part of the problem)?

Buy one. They are flippin' great.

But a caveat: my servers still run Gentoo, or FreeBSD, CentOS, Mandriva, ... :-)

Debugging in Zend Studio for Eclipse

I've been playing with the Zend Studio trial and have found it to be really good. Eclipse is an excellent PHP editor, but Zend manages to bring some of Eclipse's real power to PHP - the kind you see when writing Java in it.

One useful tool is local debugging. Yet Zend Studio's built-in PHP doesn't include extensions such as MySQL, making it not overly useful once you start to use it properly. Now, at this point you should perhaps be running the code on the server and using remote debugging, but what about unit tests? Another excellent feature in Zend Studio is the PHPUnit support. But you can't test methods that touch MySQL, as there's no extension there to support them.

So, here's how to get PHP extensions working in Zend Studio 6 (on Windows, anyway - yes, this is what I use at work..)

Locate the PHP directory inside Eclipse's plugins. You can find it by looking at the preferences for the PHP executable.
Download the binary archive (not the installer) of the same PHP version from php.net.
Copy the extension DLLs from the ZIP into the PHP directory.
Edit the php.ini in the same directory and add the line:

extension_dir = .

(Note: PHP claims to be using c:\windows\php.ini - it is not.)
Open the default php.ini from the ZIP you downloaded, and copy all of the extension (.dll) lines into your live php.ini.
Also manually add php_mysql.dll and php_mysqli.dll, which are NOT compiled into Zend's PHP as the example php.ini file claims (Zend's php.exe is a special build, I believe).

Now try debugging some PHP which uses extensions, and it'll work. Unless you have bugs. Then it'll need debugging.

DLink DWL 2000AP+ rev B rebooting

I have one of these wireless access points, running the last beta firmware from 2005. It works well for a while, then starts to reboot whenever a non-trivial amount of data is sent through it. This means I lose connectivity, and that gets me upset.

The problem seems to be that the firmware corrupts its own configuration, and this eventually leads to the rebooting. Resetting the configuration to factory defaults, reflashing the firmware (to be sure) and then manually re-entering the config (NOT doing a save/restore) will fix it for another six to twelve months.

VMWare can't be run purely from the command line

VMControl error -16: Virtual machine requires user input to continue

If you see that when manipulating a VMWare machine, you'll need to connect the management console and click on the window VMWare will show you.

vmware-server-console or whatever you use.

I can't see that even a hard reset can get rid of this without starting the graphical interface. If there is a way to get a machine back from the command line only, I'd be thankful to hear of it.
