Date: Sun, 30 Sep 2001 19:51:25 -0500 (CDT)
From: Evan Harris <>
Thanks for the fast reply!
I'm not sure I understand why drive 5 should be failed. It is one of the four disks with the most recently correct superblocks. The disk with the oldest superblock is #1. Can you point me to documentation which explains this better? I'm a little afraid of doing that without reading more on it, since it seems to mark yet another of the 4 remaining "good" drives as "bad".
Also, should the failed-disk directive be substituted for the raid-disk directive (as in your example), or should it be:
device /dev/sdd1
raid-disk 0
device /dev/sde1
raid-disk 1
device /dev/sdf1
raid-disk 2
device /dev/sdg1
raid-disk 3
device /dev/sdh1
raid-disk 4
device /dev/sdi1
raid-disk 5
failed-disk 5
or should it really be:
device /dev/sdd1
raid-disk 0
device /dev/sde1
raid-disk 1
failed-disk 1
device /dev/sdf1
raid-disk 2
device /dev/sdg1
raid-disk 3
device /dev/sdh1
raid-disk 4
device /dev/sdi1
raid-disk 5
Thanks!
Evan
--
| Evan Harris - Consultant, Harris Enterprises - eharris@puremagic.com |
| Custom Solutions for your Software, Networking, and Telephony Needs  |
On Mon, 1 Oct 2001, Jakob Østergaard wrote:
> On Sun, Sep 30, 2001 at 07:29:06PM -0500, Evan Harris wrote:
> >
> > And yes, I'm using the real --force option.  :)
>
> Good   (hush now, it's a secret  ;)
>
> >
> > I have a 6 disk RAID5 scsi array that had one disk go offline through a
> > dying power supply, taking the array into degraded mode, and then another
> > went offline a couple of hours later from what I think was a loose cable.
> >
> > The first drive to go offline was /dev/sde1.
> > The second to go offline was /dev/sdd1.
> >
> > Both drives are actually fine after fixing the connection problems and a
> > reboot, but since the superblocks are out of sync, it won't init.
>
> Ok.
>
> ...
> [huge snip]
> ...
>
> >
> > I set the first disk that went offline out with a failed-disk directive, and
> > tried to recover with a:
> >
> > mkraid --force /dev/md0
>
> Good !
>
> (to anyone reading this without having read the docs: don't pull this trick
> unless you absolutely positively understand the consequences of screwing up
> here)
>
> >
> > I'm _positive_ that the /etc/raidtab is correct, but it fails to force the
> > update with:
> >
> > DESTROYING the contents of /dev/md0 in 5 seconds, Ctrl-C if unsure!
> > handling MD device /dev/md0
> > analyzing super-block
> > raid_disk conflict on /dev/sde1 and /dev/sdi1 (1)
> > mkraid: aborted, see the syslog and /proc/mdstat for potential clues.
> ...
>
> Read on
>
> > [snip]
> > For info, here is my raidtab:
> >
> > raiddev /dev/md0
> > raid-level 5
> > nr-raid-disks 6
> > nr-spare-disks 0
> > chunk-size 256
> > persistent-superblock 1
> > device /dev/sdd1
> > raid-disk 0
> > device /dev/sde1
> > raid-disk 1
> > device /dev/sdf1
> > raid-disk 2
> > device /dev/sdg1
> > raid-disk 3
> > device /dev/sdh1
> > raid-disk 4
> > device /dev/sdi1
> > raid-disk 5
> > failed-disk 1
>
> Wrong !  device /dev/sdi1 is raid-disk 5, not failed-disk 1,
> that's why mkraid is confused.
>
> What you want is:
> device /dev/sdd1
> raid-disk 0
> device /dev/sde1
> raid-disk 1
> device /dev/sdf1
> raid-disk 2
> device /dev/sdg1
> raid-disk 3
> device /dev/sdh1
> raid-disk 4
> device /dev/sdi1
> failed-disk 5
>
> Good luck,
>
> --
> ................................................................
> :   jakob@unthought.net   : And I see the elder races,         :
> :.........................: putrid forms of man                :
> :   Jakob Østergaard      : See him rise and claim the earth,  :
> :   OZ9ABN                : his downfall is at hand.           :
> :.........................:............{Konkhra}...............:
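For reference, a complete raidtab written the way Jakob suggests, keeping the raid-level, chunk-size and superblock options from the raidtab quoted above, would read roughly as follows. This is only a sketch of where the failed-disk directive sits (it replaces the raid-disk line for the device being marked failed); whether /dev/sdi1 is really the drive that should be marked failed is exactly what the reply above asks about.

raiddev /dev/md0
raid-level 5
nr-raid-disks 6
nr-spare-disks 0
chunk-size 256
persistent-superblock 1
device /dev/sdd1
raid-disk 0
device /dev/sde1
raid-disk 1
device /dev/sdf1
raid-disk 2
device /dev/sdg1
raid-disk 3
device /dev/sdh1
raid-disk 4
device /dev/sdi1
failed-disk 5

With a raidtab of this shape, rerunning mkraid --force /dev/md0 should no longer report the raid_disk conflict between /dev/sde1 and /dev/sdi1, and the state of the resulting degraded array can then be checked in /proc/mdstat.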