Subject: Re: [PATCH RT 00/02] SLOB optimizations
On Tue, 2005-12-20 at 13:43 -0600, Matt Mackall wrote:
> >
> > I bet after a while of running, your performance will still suffer due
> > to fragmentation. The more fragmented it is, the more space you lose
> > and the more steps you need to walk.
> >
> > Remember, because of the small stack, kmalloc and kfree are used an
> > awful lot. And if you slow those down, you will start to take a big hit
> > in performance.
>
> True, with the exception that the improved packing may be the
> difference between fitting the working set in memory and
> thrashing/OOMing for some applications. Not running at all =
> infinitely bad performance.

Well, the best way to see is to try it out with real applications on
small machines. I guess I need to pull out my IBM Thinkpad 75c (32
megs; I'll need to allocate only half) and try out the two and see how
far I can push it. Unfortunately, this test may need to wait, since I
have a ton of other things to push out first.

If someone else (perhaps yourself) would like to give my patches a try,
I would really appreciate it. :)
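
To make the fragmentation cost above concrete, here's a minimal sketch
of the kind of free-list walk SLOB does on every allocation (my
simplification, not the actual slob.c code); every fragment the list
accumulates is one more step before a fit is found:

#include <stddef.h>

/* Illustration only: a simplified stand-in for SLOB's allocation scan. */
struct free_block {
        size_t size;               /* bytes available in this block */
        struct free_block *next;   /* singly linked free list */
};

static struct free_block *free_list;

/* Walk the list until a block is big enough.  The more the list
 * fragments into small blocks, the longer this loop runs on average. */
static struct free_block *find_fit(size_t size)
{
        struct free_block *b;

        for (b = free_list; b; b = b->next)
                if (b->size >= size)
                        return b;  /* caller would split off the remainder */
        return NULL;               /* no fit: fall back to a new page */
}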

>
> And the fragmentation is really not all that bad. Remember, Linux and
> other legacy systems used similar allocators for ages.

But the performance was greatly reduced, and that was when the system
had just booted up.

>
> > Ingo can answer this better himself, but I have a feeling he jumped to
> > your SLOB system just because of the simplicity.
>
> And only a config switch away..
>
> > > This I like a lot. I'd like to see a size/performance measurement of
> > > this by itself. I suspect it's an unambiguous win in both categories.
> >
> > Actually the performance gain was disappointingly small. It was a
> > separate patch, and I thought it would gain a lot. But IIRC, it only
> > increased the speed by a second or two (of the 1 minute 27 seconds).
> > That's why I spent so much time on the next approach.
>
> Still, if it's a size win, it definitely makes sense to merge.
> Removing the big block list lock is also a good thing and might make a
> bigger difference on SMP.

Well, I guess I can check out the -mm branch and at least port the first
patch over.

>
> > > > The next patch was the big improvement, with the largest changes. I
> > > > took advantage of the same kmem_cache usage patterns that SLAB takes
> > > > advantage of. I created a memory pool like the global one, but for
> > > > every cache with a size less than PAGE_SIZE >> 1.
> > >
> > > Hmm. By every size, I assume you mean powers of two. Which negates
> > > some of the fine-grained allocation savings that current SLOB provides.
> >
> > Yeah, it's the same as what the slabs use. But I would like to take
> > measurements of a running system between the two approaches. After a
> > day of heavy network traffic, see what the fragmentation is like and how
> > much is wasted. This would require me finishing my cache_chain work,
> > and adding something similar to your SLOB.
> >
> > But the powers of two are only for kmalloc, which is a known
> > behavior of the current system. So it <should> only be used for things
> > that are allocated and freed within a short time (like things you would
> > like to put on a stack but can't), or whose size is close to (less than
> > or equal to) a power of two. Otherwise a kmem_cache is made which is
> > the size of the expected object (off by UNIT_SIZE).
>
> There are a fair number of long-lived kmalloc objects. You might try
> playing with the kmalloc accounting patch in -tiny to see what's out
> there.
>
> http://www.selenic.com/repo/tiny?f=bbcd48f1d9c1;file=kmalloc-accounting.patch;style=raw

I'll have to try this out too. Thanks for the link.
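
Going back to the power-of-two point above, the bucketing I'm describing
amounts to something like this (an illustration with an assumed 32-byte
minimum bucket, not the real kmalloc):

#include <stddef.h>

/* Round a request up to its power-of-two bucket.  A 260-byte request
 * lands in the 512-byte pool, which is exactly the packing loss that
 * SLOB's fine-grained free list avoids for long-lived objects.
 * Requests of PAGE_SIZE >> 1 or more would take a different path. */
static size_t kmalloc_bucket(size_t size)
{
        size_t bucket = 32;   /* assumed minimum bucket size */

        while (bucket < size)
                bucket <<= 1;
        return bucket;        /* e.g. 33 -> 64, 260 -> 512 */
}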
>
> > Oh, this reminds me, I probably still need to add a shrink-cache
> > algorithm, which would be very hard to do in the current SLOB.
>
> Hmmm? It already has one.

The current version in Ingo's 2.6.15-rc5-rt2 didn't have one.
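
If I do add one, I'd expect to hook into the existing shrinker
interface; from memory it looks roughly like this (treat the details as
approximate -- this is a sketch, not a tested patch):

#include <linux/mm.h>

/* Sketch: let the VM ask SLOB to give memory back under pressure.
 * With nr_to_scan == 0 the VM is only querying how many objects we
 * could free; otherwise we try to free that many and report what's
 * left. */
static int slob_shrink(int nr_to_scan, gfp_t gfp_mask)
{
        if (nr_to_scan) {
                /* walk the free lists and return completely free
                 * pages to the page allocator */
        }
        return 0;  /* would return the count of still-freeable objects */
}

/* Registered once at init, e.g.: set_shrinker(DEFAULT_SEEKS, slob_shrink); */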

>
> > > For what it's worth, I think we really ought to consider a generalized
> > > allocator approach like Sun's VMEM, with various removable pieces.
> >
> > Interesting, I don't know how Sun's VMEM works. Do you have links to
> > some documentation?
>
> http://citeseer.ist.psu.edu/bonwick01magazines.html

Thanks, I'll read up on this.

>
> > That looks like quite an undertaking, but it may be well worth it. I
> > think Linux's memory management is starting to show its age. It's been
> > through a few transformations, and maybe it's time to go through
> > another. The work being done by the NUMA folks should be taken into
> > account, and maybe we can come up with a way to make things easier
> > and less complex without losing performance.
>
> Fortunately, it can be done completely piecemeal.

If you would like me to test any code, I'd be happy to when I have time.
And maybe even add a few patches myself.

-- Steve

