Subject: Re: tuning for large files in xfs

On 5/22/06, fitzboy <fitzboy@iparadigms.com> wrote:
> I've got a very large (2TB) proprietary database that is kept on an XFS
> partition under a debian 2.6.8 kernel. It seemed to work well, until I
> recently did some of my own tests and found that the performance should
> be better than it is...
>
> basically, treat the database as just a bunch of random seeks. The XFS
> partition is sitting on top of a SCSI array (Dell PowerVault) which has
> 13+1 disks in a RAID5, stripe size=64k. I have done a number of tests
> that mimic my app's accesses and realized that I want the inode to be
> as large as possible (which on an Intel box is only 2k), played with su
> and sw and got those to 64k and 13... and performance got better.
>
> BUT... here is what I need to understand, the filesize has a drastic
> effect on performance. If I am doing random reads from a 20GB file
> (system only has 2GB RAM, so caching is not a factor), I get
> performance about where I want it to be: about 5.7 - 6ms per block. But
> if that file is 2TB then the time almost doubles, to 11ms. Why is this?
> No other factors changed, only the filesize.
>
> Another note: on this partition there is no other file than this one
> file.
>
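
For what it's worth, the access pattern described above is easy to reproduce
with a small C program along these lines. This is a rough sketch only: the
file name, block size, and read count are placeholders, and O_DIRECT is used
here purely to keep the page cache out of the measurement.

/* Sketch of the random-read test described above: read fixed-size,
 * aligned blocks at random offsets with O_DIRECT so the page cache
 * does not hide the disk latency.  File name, block size, and read
 * count are placeholders, not values from the original post. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
	const char *path = "/xfs/bigfile";   /* hypothetical test file */
	const size_t blk = 4096;             /* assumed read size; must be aligned for O_DIRECT */
	const int nreads = 1000;

	int fd = open(path, O_RDONLY | O_DIRECT);
	if (fd < 0) { perror("open"); return 1; }

	off_t fsize = lseek(fd, 0, SEEK_END);
	if (fsize <= 0) return 1;

	void *buf;
	if (posix_memalign(&buf, 4096, blk)) return 1;

	srandom((unsigned)time(NULL));
	struct timespec t0, t1;
	clock_gettime(CLOCK_MONOTONIC, &t0);

	for (int i = 0; i < nreads; i++) {
		/* pick a block-aligned offset somewhere in the file */
		off_t off = ((off_t)random() % (fsize / blk)) * blk;
		if (pread(fd, buf, blk, off) != (ssize_t)blk) { perror("pread"); break; }
	}

	clock_gettime(CLOCK_MONOTONIC, &t1);
	double ms = (t1.tv_sec - t0.tv_sec) * 1e3 + (t1.tv_nsec - t0.tv_nsec) / 1e6;
	printf("%.2f ms per read\n", ms / nreads);

	free(buf);
	close(fd);
	return 0;
}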

Why use a filesystem with just one file? Why not use the device node
of the partition directly?
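
A raw partition behaves like one big file as far as pread() is concerned,
so the database could address byte offsets on the device directly and skip
the filesystem's block mapping entirely. Sketch only: the device name,
offset, and block size below are made up, and O_DIRECT reads must be
sector-aligned.

/* Sketch of the suggestion above: a partition's device node can be read
 * like a file, so the application can address raw byte offsets directly.
 * The device name is hypothetical. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/dev/sdb1", O_RDONLY | O_DIRECT);  /* hypothetical device node */
	if (fd < 0) return 1;

	void *buf;
	if (posix_memalign(&buf, 4096, 4096)) return 1;

	/* read one 4 KiB block at an absolute byte offset on the partition */
	ssize_t n = pread(fd, buf, 4096, (off_t)1 << 30);

	free(buf);
	close(fd);
	return n == 4096 ? 0 : 1;
}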
