Date: 30 Dec 1997
Subject: Re: Filesystem optimization.. - why not optimise squid?
>> >> MR> In practise, on large server, it's rare to get a very high level of
>> >> MR> cache hits (3 million file filesystem would need 384K of ram just to
>> >> MR> hold the inode tables in the best case, ignoring all the directories,
>> >> MR> the other meta-data, and the on-going disk activity).
>> >>
>> >> Perhaps the directory cache is too small for your machine?
>>
>> >There are around 390,000 directories holding those files. Just how big did
>> >you want the directory cache to get!?
>>
>> I think that the easiest solution is to re-write squid to use some sort
>> of database instead of the file system.

>Laugh. You've never looked at the squid source, have you? Believe me,
>modifying the kernel would be _far_ easier.

I have looked at the Squid source. It's not the best code I've ever
seen, but I've seen (and worked on) a lot worse. I've also worked on the
kernel source. Sure, the kernel code is generally fairly clean, but one
mistake in kernel code and the machine's locked solid, and there are lots
of hidden dependencies as well.

>The other thing of course is that I'd like everything to benefit from a
>faster filesystem, rather than just squid (admittedly squid is the main
>push at the moment). Maximal benefit for minimum effort and all that jazz.

Squid and INN are the only two applications I know of that do that sort
of thing (along with other less popular programs which try to emulate those
two). INN is moving towards a database, so if Squid does the same then
what do you gain from a new FS?

>> What squid currently does is convert internal index numbers into
>> dirname/dirname/filename combinations and then use these for accessing the
>> data. If it could use the index numbers to look up a database table
>> directly then it'll save a lot of stuffing around and should give great
>> performance increases.

>Yes, and no. Most dbases aren't too good at coping with multi-megabyte
>items, and aren't too quick at updates either (all that logging overhead
>etc).

There doesn't have to be any great logging overhead. Logging is only
needed on the meta-data, and that's time you'll get back by not having to
fsck if the machine crashes...

As for multi-megabyte items, I've just checked 3 Squid caches that I run
and found that the AVERAGE object size on the caches was 6.5K, 10.5K, and
13.8K. When the average object is 13.8K in size there must be an extremely
small number of multi-megabyte items. If your caches have similar object
sizes then you could almost ignore the case of multi-megabyte items in
terms of overall performance. Perhaps it would be best to have a separate
database server process to avoid blocking Squid on IO; that should be a
good win when it writes out a 10MB file, and it may solve the problem that
Squid usually runs without asynchronous disk IO.
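
To sketch what I mean by a separate process (this is not real Squid code;
the struct and function names are ones I've made up, and it's only one way
to do it), the main process would hand each finished object down a pipe to
a child that does the slow disk or database writes:

/*
 * Minimal sketch, not Squid's actual code: a child "store writer" process
 * takes completed objects over a pipe and does the slow disk/database
 * writes, so the main cache process never sleeps inside write().
 */
#include <stdlib.h>
#include <unistd.h>

struct store_req {
    unsigned long object_id;   /* Squid's internal index number  */
    size_t        length;      /* bytes of object data following */
};

static int writer_fd = -1;     /* parent's write end of the pipe */

/* Pipes can return short reads/writes, so loop until n bytes have moved. */
static int read_full(int fd, void *buf, size_t n)
{
    for (size_t done = 0; done < n; ) {
        ssize_t r = read(fd, (char *)buf + done, n - done);
        if (r <= 0) return -1;
        done += (size_t)r;
    }
    return 0;
}

static int write_full(int fd, const void *buf, size_t n)
{
    for (size_t done = 0; done < n; ) {
        ssize_t r = write(fd, (const char *)buf + done, n - done);
        if (r <= 0) return -1;
        done += (size_t)r;
    }
    return 0;
}

/* Fork the writer; the parent keeps writer_fd, the child loops forever. */
int start_store_writer(void)
{
    int fds[2];
    if (pipe(fds) < 0)
        return -1;

    switch (fork()) {
    case -1:
        return -1;
    case 0:                        /* child: do the slow work */
        close(fds[1]);
        for (;;) {
            struct store_req req;
            char *buf;
            if (read_full(fds[0], &req, sizeof(req)) < 0)
                _exit(0);
            if ((buf = malloc(req.length)) == NULL ||
                read_full(fds[0], buf, req.length) < 0)
                _exit(1);
            /* ...store buf under req.object_id in the database here... */
            free(buf);
        }
    default:                       /* parent: keep the write end */
        close(fds[0]);
        writer_fd = fds[1];
        return 0;
    }
}

/* Queue an object for the writer.  For a typical 10-15K object this is
 * just a copy into the pipe; the parent never sleeps in disk IO itself. */
int store_object(unsigned long id, const void *data, size_t len)
{
    struct store_req req = { id, len };
    if (write_full(writer_fd, &req, sizeof(req)) < 0)
        return -1;
    return write_full(writer_fd, data, len);
}

The same idea works with whatever database ends up doing the writes in the
child; the point is just that the main process only ever touches memory and
the pipe.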

>It's a possibility that it may be faster, but it's by no means a given.

The current method is about the least efficient way I can imagine of
storing the data for Squid. I really doubt that you could do worse.
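
For concreteness, the mapping I'm talking about looks roughly like this
(just an illustration of the style of path generation, not a copy of the
Squid source, and the directory counts are numbers I've picked for the
example):

#include <stdio.h>

#define L1_DIRS 16      /* first-level directories  (example value) */
#define L2_DIRS 256     /* second-level directories (example value) */

/* Build a path like "cache/01/B2/0000A1B2" for object number 0xA1B2. */
void object_path(unsigned long objnum, char *buf, size_t buflen)
{
    unsigned int d1 = (unsigned int)((objnum / L2_DIRS) % L1_DIRS);
    unsigned int d2 = (unsigned int)(objnum % L2_DIRS);

    snprintf(buf, buflen, "cache/%02X/%02X/%08lX", d1, d2, objnum);
}

Every open of a path like that has to resolve two directory levels plus the
file's own inode, and with around 390,000 directories those lookups won't
all stay cached. A database keyed directly on the index number goes
straight to the record and skips all of that.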

--
-----------------------------------------------------------
In return for "mailbag contention" errors from buggy Exchange
servers I'll set my mail server to refuse mail from your domain.
The same response applies when a message to a postmaster
account bounces.
"Russell Coker - mailing lists account" <bofh@snoopy.virtual.net.au>
-----------------------------------------------------------

