Date: Wed, 12 Jun 2002
From: Anton Altaparmakov
Subject: Re: [PATCH] 2.5.21 Nonlinear CPU support
At 20:32 12/06/02, Linus Torvalds wrote:

>Hmm.. Since the cpu_online_map thing can be used to fix this, this doesn't
>seem to be a big issue

Yes, we are all just nitpicking now. (-;

>, BUT
>
>On Wed, 12 Jun 2002, Anton Altaparmakov wrote:
> >
> > 1) Use a single buffer and lock it so once one file is under decompression
> > no other files can be and if multiple compressed files are being accessed
> > simultaneously on different CPUs only one CPU would be decompressing. The
> > others would be waiting for the lock. (Obviously scheduling and doing other
> > stuff.)
> >
> > 2) Use multiple buffers and allocate a buffer every time the decompression
> > engine is used. Note this means a vmalloc()+vfree() in EVERY ->readpage()
> > for a compressed file!
> >
> > 3) Use one buffer for each CPU and use a critical section during
> > decompression (disable preemption, don't sleep). Allocated at mount time of
> > first partition supporting compression. Freed at umount time of last
> > partition supporting compression.
> >
> > I think it is obvious why I went for 3)...
>
>I don't see that as being all that obvious. The _obvious_ choice is just
>(1), protected by a simple spinlock. 128kB/CPU seems rather wasteful,
>especially as the only thing it buys you is scalability on multiple CPUs
>for the case where you have multiple readers all at the same time touching
>a new compressed block.
>
>That scalability optimization seems dubious, especially since this will only
>happen when you just had to do IO anyway, so in order to actually take
>advantage of the scalability that IO would have had to happen on multiple
>separate controllers.
>
>Ehh?
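
(For concreteness, option (1) as suggested above would look roughly like the
following. This is a minimal sketch only; the identifiers ntfs_compression_buffer,
ntfs_cb_lock, NTFS_CB_SIZE and ntfs_decompress_cb are illustrative, not the
driver's actual symbols.)

/* Sketch of option (1): one shared buffer behind a spinlock.
 * Illustrative identifiers only. */
#include <linux/spinlock.h>
#include <linux/types.h>
#include <linux/vmalloc.h>

#define NTFS_CB_SIZE	0x20000	/* 128kB, the maximum compression block size */

/* One shared destination buffer, vmalloc()ed at mount time of the first
 * compression-capable volume and vfree()d at umount time of the last. */
static u8 *ntfs_compression_buffer;
static spinlock_t ntfs_cb_lock = SPIN_LOCK_UNLOCKED;

/* Called from ->readpage() of a compressed file. */
static void ntfs_decompress_cb(const u8 *cb, unsigned int cb_size)
{
	spin_lock(&ntfs_cb_lock);
	/* ... decompress cb into ntfs_compression_buffer and copy the
	 * result out to the page cache pages; must not sleep here ... */
	spin_unlock(&ntfs_cb_lock);
}

Simultaneous decompressions on different CPUs simply serialize on the lock.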

That is a fair point from a reality-check point of view; I freely admit to
being one of the people who count the bytes and cycles... But I do think it
is quite legitimate to have two different controllers, or at least two
different disks (/me ignorant: SCSI can operate multiple disks
simultaneously on the same controller, can it not?), or, as I do quite a
lot myself, to have one disk on an IDE controller and one via an NBD device
over 100Mbit ethernet. (Mind you, I have a single-CPU machine...)

Admittedly, in reality you would need some damn high load on the ntfs
driver for this optimization to make a difference. But let's take as an
example a company that is migrating from Windows to Linux but for whatever
reason is keeping its data on NTFS (yes, such companies exist (-:). I could
see this optimization making a real-world difference (albeit a small one!)
to a big web/file server.
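
(For comparison, a minimal sketch of option (3), the per-CPU variant, again
with illustrative names only: one buffer per CPU, with preemption disabled
during decompression so the current CPU's buffer cannot be stolen mid-use.)

/* Sketch of option (3): one buffer per CPU, critical section via
 * disabled preemption. Illustrative identifiers only. */
#include <linux/smp.h>
#include <linux/threads.h>
#include <linux/types.h>
#include <linux/vmalloc.h>

#define NTFS_CB_SIZE	0x20000	/* 128kB per CPU */

/* Allocated at mount time of the first volume supporting compression,
 * freed at umount time of the last one. */
static u8 *ntfs_compression_buffers[NR_CPUS];

static int ntfs_alloc_compression_buffers(void)
{
	int i;

	for (i = 0; i < NR_CPUS; i++) {
		ntfs_compression_buffers[i] = vmalloc(NTFS_CB_SIZE);
		if (!ntfs_compression_buffers[i])
			return -ENOMEM;	/* caller undoes partial allocation */
	}
	return 0;
}

static void ntfs_decompress_cb(const u8 *cb, unsigned int cb_size)
{
	/* get_cpu() disables preemption, so the buffer stays ours. */
	u8 *dest = ntfs_compression_buffers[get_cpu()];

	/* ... decompress cb into dest; must not sleep in here ... */
	put_cpu();	/* re-enables preemption */
}

The only difference from (1) is trading 128kB per CPU for not serializing
concurrent decompressions, which is exactly the scalability question above.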

I know ntfs is currently read-only, but it is not going to stay this way,
and I see the possibility of people using ntfs on Linux quite extensively,
so I am trying to make it as robust and as fast as possible. Quite a few
companies keep asking me when write support will be available so they can
install Linux shared with Windows, run their Linux-based app in Windows, do
antivirus checks/cleaning from Linux, do backup recovery of Windows from
Linux; the list goes on, I have lost track of all the things people want it
for. (-:

Best regards,

Anton


--
"I've not lost my mind. It's backed up on tape somewhere." - Unknown
--
Anton Altaparmakov <aia21 at cantab.net> (replace at with @)
Linux NTFS Maintainer / IRC: #ntfs on irc.openprojects.net
WWW: http://linux-ntfs.sf.net/ & http://www-stu.christs.cam.ac.uk/~aia21/

