Subject: Re: data corruption using raid0+lvm2+jfs with 2.6.0-test3
On Sun, Aug 17, 2003 at 10:12:27AM +1000, Neil Brown wrote:
> On Saturday August 16, mfedyk@matchmail.com wrote:
> > I have a raid5 with "4" 18gb drives, and one of the "drives" is two 9gb
> > drives in a linear md "array".
> >
> > I'm guessing this will hit this bug too?
>
> This should be safe. raid5 only ever submits 1-page (4K) requests
> that are page aligned, and linear arrays will have the boundary
> between drives 4K aligned (actually "chunksize" aligned, and chunksize
> is at least 4K).
>
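
To make that alignment argument concrete, here is a toy check (illustrative
userspace C, not kernel code; the constants assume 4K pages and 512-byte
sectors): a page-aligned, page-sized request can never straddle a boundary
that falls on a chunk multiple when chunksize is at least one page.

	#include <assert.h>

	int main(void)
	{
		unsigned long page_sectors  = 8;  /* 4K page / 512-byte sectors */
		unsigned long chunk_sectors = 8;  /* minimum chunksize = 4K */
		unsigned long sector;

		/* every page-aligned request starts and ends in the same chunk */
		for (sector = 0; sector < 1024; sector += page_sectors)
			assert(sector / chunk_sectors ==
			       (sector + page_sectors - 1) / chunk_sectors);
		return 0;
	}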

So why is this bug hitting with raid0? Is lvm2 on top of md the problem,
while md on top of lvm2 is OK?

> So raid5 should be safe over everything (unless dm allows striping
> with a chunk size less than pagesize).
>
> Thinks: as an interim solution for other raid levels - if the
> underlying device has a merge_bvec_function which is being ignored, we
> could set max_sectors to PAGE_SIZE/512. This should be safe, though
> possibly not optimal (but "safe" trumps "optimal" any day).

Assuming that sectors are always 512 bytes (true for any hard drive I've
seen), that works out to 512 * 8 = 4096 bytes, i.e. one 4K page.

Any chance a sector != 512 bytes?
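
For reference, a minimal sketch of that interim idea (my reading of it, not
actual md code; it assumes the 2.6-era block API, where request_queue_t,
q->merge_bvec_fn and blk_queue_max_sectors() exist, and the helper name is
made up):

	/* clamp a stacked queue so no request exceeds one page */
	static void clamp_stacked_queue(request_queue_t *q,
					request_queue_t *below)
	{
		/*
		 * The underlying device's merge_bvec_fn would be ignored
		 * by the stacking driver, so never build a request larger
		 * than PAGE_SIZE.
		 */
		if (below->merge_bvec_fn)
			blk_queue_max_sectors(q, PAGE_SIZE >> 9);
	}

PAGE_SIZE >> 9 is just PAGE_SIZE/512 expressed in sectors.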
