Subject: Re: Filesystem optimization.. - why not optimise squid?
"Russell Coker - mailing lists account" <bofh@snoopy.virtual.net.au> writes:

> >> MR> In practice, on a large server, it's rare to get a very high
> >> MR> level of cache hits (a 3 million file filesystem would need 384M
> >> MR> of ram just to hold the inode tables in the best case, ignoring
> >> MR> all the directories, the other meta-data, and the on-going disk
> >> MR> activity).
> >>
> >> Perhaps the directory cache is too small for your machine?
>
> >There are around 390,000 directories holding those files. Just how big
> >did you want the directory cache to get!?
>
> I think that the easiest solution is to re-write squid to use some sort
> of database instead of the file system.
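
(For scale, and assuming the classic 128-byte on-disk inode, the 384M
figure above is just straight multiplication:

    3,000,000 inodes * 128 bytes/inode = 384,000,000 bytes ~= 384MB

and that's before you count the 390,000 directories or any of the other
meta-data.)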

Laugh. You've never looked at the squid source, have you? Believe me,
modifying the kernel would be _far_ easier.

The other thing, of course, is that I'd like everything to benefit from
a faster filesystem, rather than just squid (admittedly squid is the
main push at the moment). Maximal benefit for minimum effort and all
that jazz.

[ ... ]
> What squid currently does is convert internal index numbers into
> dirname/dirname/filename combinations and then use those for accessing
> the data. If it could use the index numbers to look up a database table
> directly then it'd save a lot of stuffing around and should give great
> performance increases.

Yes, and no. Most dbases aren't too good at coping with multi-megabyte
items, and aren't too quick at updates either (all that logging
overhead, etc.).

It might well be faster, but it's by no means a given.
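
For the curious, the dirname/dirname/filename mapping described above
amounts to something like the sketch below. The L1/L2 constants and the
exact arithmetic are illustrative rather than lifted from the squid
source; the point is just that every object lookup has to walk two
directory levels before it reaches the file itself.

    #include <stdio.h>

    #define L1 16    /* first-level cache dirs (illustrative value) */
    #define L2 256   /* second-level cache dirs (illustrative value) */

    /* Map an internal file number onto a two-level cache path,
     * e.g. fn 0x0001A2B4 -> "cache/02/B4/0001A2B4". */
    static void swap_path(unsigned fn, char *buf, size_t len)
    {
        snprintf(buf, len, "cache/%02X/%02X/%08X",
                 (fn / L2) % L1, fn % L2, fn);
    }

    int main(void)
    {
        char path[64];
        swap_path(0x0001A2B4, path, sizeof path);
        puts(path);  /* three directory inodes touched before any data */
        return 0;
    }

A database keyed directly on the file number would skip those
intermediate lookups; whether that pays for the logging overhead is
exactly the open question.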

michael.
