Subject: Software raid0 will crash the file-system, when each disk is 5TB
Hi everyone:

We are experiencing problems with software raid0 on very large disk arrays.
We are using two 3ware disk array controllers, each connected to eight
750GB hard drives, and we build a software raid0 on top of the two arrays.
The total capacity is 5.5TB + 5.5TB = 11TB.
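
For reference, we create the md device roughly like this (the device names
here are just examples, not our real ones):

  mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda /dev/sdb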

We use jfs as the file-system and have a test application that writes
data continuously to the disks. After writing 52 10GB files, jfs
crashed, and we were not able to recover it; fsck no longer recognises
the file-system.
We then tried xfs with the same application; it lasted a little longer
but eventually caused a kernel crash.

We then reconfigured the hardware arrays: this time we exported two disk
arrays from each controller, giving us 4 disk arrays, each with four
750GB hard drives, and built a new software raid0 on top of them. The
total capacity is still the same, but now it is 2.75TB + 2.75TB + 2.75TB + 2.75TB = 11TB.

This time we managed to fill the whole 11TB without problems; we are
still validating all 11TB of data written to the disks.

The problem occurred with both kernels 2.6.20 and 2.6.13.

So I think the problem is in the way software raid handles very large
member disks, maybe an integer overflow or something similar. I've
searched the web and only found one other person reporting the same
thing on the xfs mailing list.
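
Just to show the kind of thing I mean (this is only my guess, not code
taken from drivers/md): a 5.5TB member device has far more than 2^32
512-byte sectors, so any per-device sector or chunk arithmetic that is
still done in 32 bits would silently wrap around the 2TB mark. A quick
userspace demonstration, assuming 512-byte sectors:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* Hypothetical illustration only; the size matches our 5.5TB members. */
	uint64_t dev_bytes = 5500ULL * 1000 * 1000 * 1000;  /* ~5.5TB member */
	uint64_t sectors64 = dev_bytes / 512;               /* correct sector count */
	uint32_t sectors32 = (uint32_t)sectors64;           /* wraps past 2^32 sectors (~2TB) */

	printf("64-bit sector count: %llu\n", (unsigned long long)sectors64);
	printf("32-bit sector count: %u (wrapped)\n", sectors32);
	return 0;
}

If any of the raid0 mapping math is still done in 32 bits like the last
line, that could explain the corruption we are seeing, but I haven't
verified this in the source.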

Anybody have a clue?


Jeff
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/

\
 
 \ /
  Last update: 2007-05-16 01:27    [W:0.086 / U:0.136 seconds]
©2003-2020 Jasper Spaans|hosted at Digital Ocean and TransIP|Read the blog|Advertise on this site