Date: Mon, 1 Oct 2001 01:25:42 -0500 (CDT)
From: Evan Harris <>
Subject: Re: raid5 problem
Ok, thanks. I did that and it worked. But I have (unfortunately) one more question about how raid disks are ordered. I've now remade and restarted the raid, having left the oldest drive (/dev/sde1) as a failed-disk. I then ran raidhotadd /dev/md0 /dev/sde1, which started the raid parity rebuild and gives this status in /proc/mdstat:
md0 : active raid5 sde1[6] sdi1[5] sdh1[4] sdg1[3] sdf1[2] sdd1[0]
      179203840 blocks level 5, 256k chunk, algorithm 0 [6/5] [U_UUUU]
      [=>...................]  recovery =  8.4% (3023688/35840768) finish=88.9min speed=6148K/sec
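For reference, the raidtab I rebuilt from looked roughly like this (a reconstructed sketch, not the exact file; the slot numbers and chunk size match the mdstat output above, and sde1 is the oldest drive, left as failed-disk per your earlier advice):

    raiddev /dev/md0
            raid-level              5
            nr-raid-disks           6
            persistent-superblock   1
            chunk-size              256
            device                  /dev/sdd1
            raid-disk               0
            device                  /dev/sde1
            failed-disk             1
            device                  /dev/sdf1
            raid-disk               2
            device                  /dev/sdg1
            raid-disk               3
            device                  /dev/sdh1
            raid-disk               4
            device                  /dev/sdi1
            raid-disk               5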
Now, my question is: the hotadd seems to have reordered the disks, so when the rebuild is completed, do I need to reorder my raidtab to reflect this? Like this?
device /dev/sdd1 raid-disk 0 device /dev/sdf1 raid-disk 1 device /dev/sdg1 raid-disk 2 device /dev/sdh1 raid-disk 3 device /dev/sdi1 raid-disk 4 device /dev/sde1 raid-disk 5
Or does the kernel still keep the drives in the order given in the existing raidtab, even though they appear out of order in the syslog and /proc/mdstat? If I have to force the recreation of the superblocks at some later point, which ordering will keep the data from being lost?
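(If it matters, I gather newer raidtools ship an lsraid tool that can read the slot numbers straight out of the on-disk superblocks, rather than trusting either the raidtab or the mdstat ordering. A sketch, assuming lsraid is actually available here:

    lsraid -a /dev/md0          # list the array and each member's slot, per its superblock
    lsraid -R -a /dev/md0       # same information, emitted in raidtab format

Any raidtab used for a later forced mkraid would then need to match what the superblocks say.)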
Thanks.

Evan
--
| Evan Harris - Consultant, Harris Enterprises - eharris@puremagic.com |
| Custom Solutions for your Software, Networking, and Telephony Needs  |
On Mon, 1 Oct 2001, Jakob Østergaard wrote:
> On Sun, Sep 30, 2001 at 07:51:25PM -0500, Evan Harris wrote:
> >
> > Thanks for the fast reply!
> >
> > I'm not sure I understand why drive 5 should be failed.  It is one of the
> > four disks with the most recently correct superblocks.  The disk with the
> > oldest superblock is #1.  Can you point me to documentation which explains
> > this better?  I'm a little afraid of doing that without reading more on it,
> > since it seems to mark yet another of the 4 remaining "good" drives as
> > "bad".
>
> Oh, sorry, of course the oldest disk should be marked as failed.
>
> But the way you mark a disk failed is to replace "raid-disk" with "failed-disk".
>
> What you did in your configuration was to say that sde1 was disk 1, and sdi1
> was disk 5 *AND* disk 1 *AND* it was failed.
>
> Replace "raid-disk" with "failed-disk" for the device that you want to mark
> as failed.  Don't touch the numbers.
>
> Cheers,
>
> --
> ................................................................
> :   jakob@unthought.net   : And I see the elder races,         :
> :.........................: putrid forms of man                :
> :   Jakob Østergaard      : See him rise and claim the earth,  :
> :        OZ9ABN           : his downfall is at hand.           :
> :.........................:............{Konkhra}...............:
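To make the quoted advice concrete, here is a minimal before/after sketch of the relevant raidtab lines, using the device names from this thread (only the affected entries shown).

What the broken configuration said (sdi1 claimed slot 1, which sde1 already held, and was marked failed on top of that):

    device          /dev/sde1
    raid-disk       1
    ...
    device          /dev/sdi1
    failed-disk     1

What was meant (only the keyword changes; the slot numbers stay put):

    device          /dev/sde1
    failed-disk     1
    ...
    device          /dev/sdi1
    raid-disk       5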