Technological Wanderings - raid http://www.technologicalwanderings.co.uk/taxonomy/term/13 Recovering Linux RAID5 with mdadm http://www.technologicalwanderings.co.uk/node/33 <div class="field field-name-taxonomy-vocabulary-1 field-type-taxonomy-term-reference field-label-above"><div class="field-label">Keywords:&nbsp;</div><div class="field-items"><div class="field-item even"><a href="/taxonomy/term/13">raid</a></div><div class="field-item odd"><a href="/taxonomy/term/15">mdadm</a></div><div class="field-item even"><a href="/taxonomy/term/46">linux</a></div><div class="field-item odd"><a href="/taxonomy/term/63">raid5</a></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>If a Linux box has hardware troubles and you temporarily lose a disk or two on a RAID5, you might get into a state where mdadm --assemble does not work. This might happen with a controller failure, or if you have faulty cabling.<br /> You're seeing stuff like:<br /><code><br /> mdadm: failed to run array /dev/md7: Input/output error<br /> md: pers-&gt;run() failed<br /></code><br /> Don't panic yet!<br /> First step: ensure you have good backups and use dd or another tool to clone the hard disks.</p> <p>What you need to do is recreate the RAID. This will work in most cases to get your data back, but it needs to be done carefully to ensure you don't destroy your data in the process.</p> <p>This doesn't really matter for RAID1, as your data is always consistent on both disks - you can put one disk in and resync everything from that to any other disk.</p> <p>For RAID5, the data is held across all disks. The thing to realise here is that the order of the disks in the array really matters. The trick to recreating a RAID5 and having it work is to get the order right.</p> <p>Problems:<br /> 1. What if the order is not obvious?<br /> 2. 
Resyncing.</p> <p>If you add the disks in the wrong order and start the array in a working state, it will perform an initial sync of the array. This will destroy your data as RAID5 starts to write parity data across it.<br /> There may be a trick with mdadm to determine the correct order, but I do not know it (yet).</p> <p>You must create the array in a degraded state with a disk missing. This will allow you to mount the array, but will not cause a resync attempt.</p> <p>So, here's the scenario. There are three disks in an array: hda, hdd, hdg. One failed completely (hdg) a while ago and you had to wait for new disks to be delivered. While waiting, there was an IDE failure and another disk was lost temporarily. Oh dear, we've got a broken array.</p> <p>You bring the disk back online but the array won't auto-reassemble and mdadm --assemble isn't working. So we move on to recreating.</p> <p>What you will do is attempt to create the array using two disks. The other will be marked as missing (even though we now have the replacement sitting on the workbench ready).<br /> But we don't know what order the disks belong in the array - maybe we'll get lucky and they are alphabetical, maybe we won't.</p> <p>This is what we'll do:<br /><code><br /> $ mdadm --create /dev/md7 --level=5 --raid-devices=3 -f /dev/hda1 /dev/hdd1 missing<br /> $ cat /proc/mdstat<br /> md7 : active raid5 hda1[2] hdd1[1]<br /> 240121472 blocks level 5, 64k chunk, algorithm 2 [3/2] [_UU]<br /></code></p> <p>So far so good: the RAID5 is running and no resync attempt has touched the data. So try to mount it read-only and see what happens.</p> <p>If it worked, great. Back up your data and carry on with your life. If not, stop the array and try another order. Treat 'missing' like any other disk and move it around too. 
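With three slots there are only six orderings, but with more disks it is easy to lose track. Here is a minimal sketch that just prints each mdadm --create command to try, reusing the device names and md7 target from the example above (it only prints the commands, it does not run them, so nothing on disk is touched):

```shell
#!/bin/sh
# Enumerate every slot ordering to try for a 3-member RAID5.
# 'missing' is treated as just another member, as described above.
DEVS="/dev/hda1 /dev/hdd1 missing"
N=0
for a in $DEVS; do
  for b in $DEVS; do
    for c in $DEVS; do
      # skip orderings that repeat a member
      [ "$a" = "$b" ] && continue
      [ "$a" = "$c" ] && continue
      [ "$b" = "$c" ] && continue
      N=$((N + 1))
      echo "mdadm --create /dev/md7 --level=5 --raid-devices=3 -f $a $b $c"
    done
  done
done
echo "$N orderings to try"
```

Run each printed command by hand, try a read-only mount, and stop the array with mdadm -S /dev/md7 before the next attempt. Three members give 3! = 6 orderings; four members would give 24.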
Perhaps get out a bit of paper and work out all the possible combinations to try.<br /><code><br /> $ mdadm -S /dev/md7<br /> $ mdadm --create /dev/md7 --level=5 --raid-devices=3 -f /dev/hda1 missing /dev/hdd1<br /></code></p> <p>Keep repeating this until you can successfully mount your filesystem.</p> <p>When it has finally worked, and you've backed up, you can add your new disk back in. The array will resync your data across all three disks (or whatever number you have) and everything will be back to normal.</p> </div></div></div> Sat, 07 Jun 2008 09:11:06 +0000 techuser 33 at http://www.technologicalwanderings.co.uk http://www.technologicalwanderings.co.uk/node/33#comments Linux soft RAID hanging on boot at Mounting Root http://www.technologicalwanderings.co.uk/node/8 <div class="field field-name-taxonomy-vocabulary-1 field-type-taxonomy-term-reference field-label-above"><div class="field-label">Keywords:&nbsp;</div><div class="field-items"><div class="field-item even"><a href="/taxonomy/term/13">raid</a></div><div class="field-item odd"><a href="/taxonomy/term/14">md</a></div><div class="field-item even"><a href="/taxonomy/term/15">mdadm</a></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>I have a Linux (Gentoo) server which has been somewhat unreliable and suffers from frequent lockups[1]. Today, it started to hang at boot on "Mounting Root Filesystem".</p> <p>I booted a recovery CD and took a look at the RAID arrays, all using Linux's MD software RAID1. They all assembled fine, and the ext3 and reiser3 filesystems mounted without trouble. So I started to look in more detail.</p> <p>On querying one of the components of the root RAID, I found:</p> <p><code># mdadm -Q /dev/hda2<br /> /dev/hda2: is not an md array<br /> /dev/hda2: device 0 in 2 device mismatch raid1 /dev/md3. Use mdadm --examine for more detail.</code></p> <p>"Mismatch!" 
All the others show "active" or "inactive". I look closer and note "md3" - my root is md1, /boot is md3!<br /> What is happening is that the RAID block device notes in its superblock which md device node it is assigned to. When booting, Linux is looking for /dev/md3 to mount the root. Knowing this to be an MD RAID, it examines devices and starts those that match.</p> <p>In this case, I've probably made a mistake during a previous recovery and mounted / as md3, which it has remembered. So on bootup, I have two arrays claiming to be for the root device, which is set as /dev/md3 in the LILO boot loader.</p> <p>To fix this, you need to update the superblock. This is done when assembling the device, so do it from a fresh boot off your recovery disk.</p> <p>This is what I did:</p> <p><code># mdadm --assemble /dev/md3 --update=super-minor /dev/hda2 /dev/hdd2</code></p> <p>Once done, a query shows:<br /><code># mdadm -Q /dev/hda2<br /> /dev/hda2: is not an md array<br /> /dev/hda2: device 0 in 2 device active raid1 /dev/md1. Use mdadm --examine for more detail.</code></p> <p>Rebooting, the root is mounted instantly and everything works. Huzzah!</p> <p>[1] Once every couple of days, and almost certainly temperature related, as the environment has been getting very hot and humid at the same time. It has a hardware-based watchdog which brings it back up - I do like real server hardware. I pulled the heatsinks off the CPUs and noticed a lot of thermal transfer compound (which would be my fault) - I've wiped these down and left just a very thin film and will see how well it works now.</p> <p>========</p> <p>Update:<br /> I noticed that the machine is running the disks on mdma2, rather than udma5.<br /> So I played with the kernel options (2.6.22-r9) to try to fix that, and on rebooting got the same problem again. Going back to kernel 2.6.21-r5 solved both the root-mounting and UDMA issues. 
So I suspect the real reason behind all this is a broken kernel revision, at least with the Broadcom CSB5 chipset (on an Intel SDS2 board).</p> </div></div></div> Sun, 25 Nov 2007 17:26:41 +0000 techuser 8 at http://www.technologicalwanderings.co.uk http://www.technologicalwanderings.co.uk/node/8#comments