Subject: Re: Apache performance: Run queue proportional to number of connections?
Andrea Arcangeli wrote:
> Which patch exactly? Did you try my wake-one patch for 2.2.x alone? You
> may get zero improvement from it (for example if you have only one task
> sleeping on accept() all the time) but it's really weird that you got a
> performance drop from it.

(Because this conversation might be of interest to linux-kernel,
I'm cc'ing the rest of their conversation, lightly edited.
Zach is going to do a real writeup of the ZD tests eventually. - dan)

Zach:
> I used the patch you/dan pointed me to originally.. lessee..
>
> -rw-r--r-- 1 zab zab 40061 Jun 12 02:59 2.2.9_andrea-perf1.gz
>
> there was no way I had time to try and unwind just the wake one patch from
> that. and as I've been trying to say, the wake one is nice but certainly
> not a limiting factor here.
>
> > may get zero improvement from it (for example if you have only one task
> > sleeping on accept() all the time) but it's really weird that you got a
>
> yeah, in this work load we rarely have lots of apaches all waiting in
> accept(), and even when we do their coordinated wakeup is lost in the
> noise of the rest of the things the other apaches are doing.
>
> > performance drop from it.
>
> it wasn't terrible, but the slope of the hits/second graph after it
> reached its peak went down slightly more quickly than 2.2.10. I imagine
> that was subtle scheduler changes that were also in that patch?
>
> did you ever test this stuff on a quad cpu quad interface machine, all of
> them going full bore? :) It's a different universe entirely..
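
[An aside for context: the wake-one discussion is about the accept()
thundering herd. The sketch below is a hypothetical userspace demonstration,
not Andrea's kernel patch; it pre-forks a few Apache-style processes that all
block in accept() on one listening socket. Without wake-one semantics in the
kernel, every incoming connection wakes every sleeping process and all but
the winner immediately go back to sleep; with wake-one only a single process
is woken. Andrea's actual patch changes the wakeup in the kernel itself (the
files dan lists below).]

    /* Hypothetical demo, not the wake-one patch itself: pre-forked
     * processes all sleeping in accept() on the same listening socket. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    #define NPROCS 8

    int main(void)
    {
        int lsock = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;
        int one = 1, i;

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);
        setsockopt(lsock, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
        if (bind(lsock, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
            listen(lsock, 128) < 0) {
            perror("bind/listen");
            return 1;
        }

        /* Pre-fork the children; each loops forever in accept(). */
        for (i = 0; i < NPROCS; i++) {
            if (fork() == 0) {
                for (;;) {
                    int c = accept(lsock, NULL, NULL);
                    if (c < 0)
                        continue;
                    /* On a wake-all kernel, the other NPROCS-1 children
                     * were also scheduled just to fail and sleep again. */
                    printf("pid %d handled a connection\n", getpid());
                    close(c);
                }
            }
        }
        for (;;)
            pause();
    }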

Andrea:
>
> On Sat, 26 Jun 1999, Zach Brown wrote:
>
> >I used the patch you/dan pointed me to originally.. lessee..
> >
> >-rw-r--r-- 1 zab zab 40061 Jun 12 02:59 2.2.9_andrea-perf1.gz
>
> Ok, it has many other things included. Do you have all your working set in
> cache?
>
> Here is my wake-one patch alone for 2.2.x:
> [patch omitted; affects kernel/sched.c, kernel/signal.c,
> net/ipv4/tcp.c, include/linux/sched.h - dan]
>
> >yeah, in this work load we rarely have lots of apaches all waiting in
> >accept(), and even when we do their coordinated wakeup is lost in the
> >noise of the rest of the things the other apaches are doing.
>
> I would like it if you would give the above patch alone a try, though :).
>
> >> performance drop from it.
> >
> >it wasn't terrible, but the slope of the hits/second graph after it
> >reached its peak went down slightly more quickly than 2.2.10. I imagine
> >that was subtle scheduler changes that were also in that patch?
>
> It may be.
>
> >did you ever test this stuff on a quad cpu quad interface machine, all of
> >which going full bore? :) Its a different universe entirely..
>
> Never tried on a quad-SMP.
>
> If you have the time I would also appreciate feedback over:
>
> ftp://ftp.suse.com/pub/andrea/kernel/2.2.10_andrea-VM9.gz
>
> It's a kind of "stable" release of my pagemap-lru code. (if all your data
> fits in the working set you can only lose a bit of performance, due to
> the additional time spent in the lru handling)
>
> But since a disk is usually larger than memory, the pagemap-LRU makes a
> difference. (a very big one if you then also go low on memory and need to
> swap)
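
[Another aside: since the pagemap-LRU patch keeps coming up, here is a toy,
self-contained illustration of LRU-ordered page recycling. It is not Andrea's
VM9 code; it only shows the idea that reclaim takes pages from the
least-recently-used end of one list instead of hunting for a cold page.]

    /* Toy LRU page-recycling sketch (illustrative only, not the VM9 code). */
    #include <stdio.h>

    struct page {
        int id;
        struct page *prev, *next;   /* position on the global LRU list */
    };

    static struct page *lru_head, *lru_tail;   /* head = most recently used */

    static void lru_unlink(struct page *p)
    {
        if (p->prev) p->prev->next = p->next; else lru_head = p->next;
        if (p->next) p->next->prev = p->prev; else lru_tail = p->prev;
        p->prev = p->next = NULL;
    }

    /* A page was referenced: move it to the most-recently-used end. */
    static void lru_touch(struct page *p)
    {
        /* Take it off the list first if it is already there. */
        if (p == lru_head || p->prev || p->next)
            lru_unlink(p);
        p->prev = NULL;
        p->next = lru_head;
        if (lru_head)
            lru_head->prev = p;
        else
            lru_tail = p;
        lru_head = p;
    }

    /* Memory is tight: recycle the least-recently-used page. */
    static struct page *lru_reclaim(void)
    {
        struct page *victim = lru_tail;
        if (victim)
            lru_unlink(victim);
        return victim;
    }

    int main(void)
    {
        struct page pages[4];
        int i;

        for (i = 0; i < 4; i++) {
            pages[i].id = i;
            pages[i].prev = pages[i].next = NULL;
            lru_touch(&pages[i]);   /* simulate the first reference */
        }
        lru_touch(&pages[1]);       /* page 1 stays hot */
        printf("reclaiming page %d\n", lru_reclaim()->id);  /* page 0, the coldest */
        return 0;
    }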

Zach:
> > Do you have all your working set in cache?
>
> yeah. 6000 files in around 75megs.
>
> > Here it is my wake-one patch alone for 2.2.x:
>
> cool, thanks.
>
> > I would like it if you would give the above patch alone a try, though :).
>
> I can if I have time around here, but I guarantee you it won't make a
> meaningful difference under this workload.
>
> > Never tried on a quad-SMP.
>
> that's really the point of this 'test environment' that they set up. They
> created something that has massive concurrency in the networking path in
> SMP. 4 interfaces all constantly raising interrupts kill us dead. all
> the hacking of the scheduler and vm and such is just noise when compared
> to this bottleneck.
>
> > But since a disk is usually larger than memory, the pagemap-LRU makes a
> > difference. (a very big one if you then also go low on memory and need to
> > swap)
>
> it would be very interesting to run tests of us vs nt when the machine
> starts to page to disk :)

Andrea:
> On Sat, 26 Jun 1999, Zach Brown wrote:
>
> >yeah. 6000 files in around 75megs.
>
> If you don't do I/O and you never need to recycle the cache/buffers to
> make space, then my page-LRU can't help (it will instead decrease
> performance a bit due to the additional lru work).
>
> >I can if I have time around here, but I guarantee you it won't make a
> >meaningful difference under this workload.
>
> It may be, of course.
>
> >that's really the point of this 'test environment' that they set up. They
>
> They == microsoft? :)
>
> >created something that has massive concurrency in the networking path in
> >SMP. 4 interfaces all constantly raising interrupts kill us dead. all
> >the hacking of the scheduler and vm and such is just noise when compared
> >to this bottleneck.
>
> Let me understand better. They have 4 CPUs and 4 network cards. Each network card
> overloads a CPU with a flood of network IRQs, right?
>
> Why do you point the finger at interrupts? Is the problem the irq
> latency? It's true that we call irq handlers through two function
> pointers; we could remove those function pointers, but I am not convinced
> that that's the problem. I think NT may also have something similar to
> allow the code to be object oriented and cleaner.
>
> >it would be very interesting to run tests of us vs nt when the machine
> >starts to page to disk :)
>
> Yes. In such a case I bet my VM9 patch would make a _huge_ difference. But
> it will make a big difference even with a working set larger than memory,
> for example sending out around 500mbyte to clients but with only
> 200mbyte of memory. If you are going to run such a test then I really
> suggest you run my VM9 (or leather) patches. The stock kernel is slow
> like hell at recycling pages from the cache, especially if your cache is
> low and badly distributed over the VM.
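
[On the "two function pointers" remark: the sketch below is only a schematic
illustration of what indirect interrupt dispatch looks like, not the real 2.2
IRQ code. The low-level entry path finds a per-IRQ descriptor and calls
through one pointer, which in turn calls each registered handler through
another; the question in the thread is whether that indirection matters at
all next to the locking.]

    /* Schematic of IRQ dispatch via function pointers (illustrative only). */
    #include <stdio.h>

    #define NR_IRQS 16

    struct irqaction {
        void (*handler)(int irq, void *dev_id);  /* driver's handler */
        void *dev_id;
        struct irqaction *next;                  /* shared-IRQ chain */
    };

    struct irq_desc {
        void (*dispatch)(struct irq_desc *desc, int irq); /* first indirection */
        struct irqaction *action;
    };

    static struct irq_desc irq_desc[NR_IRQS];

    static void nic_interrupt(int irq, void *dev_id)
    {
        printf("NIC on IRQ %d: draining rx ring for %s\n", irq, (char *)dev_id);
    }

    static void generic_dispatch(struct irq_desc *desc, int irq)
    {
        struct irqaction *a;

        /* second indirection: call every registered handler for this line */
        for (a = desc->action; a; a = a->next)
            a->handler(irq, a->dev_id);
    }

    int main(void)
    {
        struct irqaction eth0 = { nic_interrupt, "eth0", NULL };

        irq_desc[9].dispatch = generic_dispatch;
        irq_desc[9].action = &eth0;

        /* what the low-level interrupt entry would do for hardware IRQ 9 */
        irq_desc[9].dispatch(&irq_desc[9], 9);
        return 0;
    }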

Zach:
> > They == microsoft? :)
>
> yeah. they specced the original test and the pcweek guys made us stick
> to that :(
>
> > Let me understand better. They have 4 CPUs and 4 network cards. Each network card
> > overloads a CPU with a flood of network IRQs, right?
>
> yeah. quad 400MHz cpus and 4 100Mbit network cards. there were a lot of
> clients hanging off each of the network cards, and all the clients made
> only very simple requests. this made the whole operation very
> sensitive to network op latency.
>
> > Why do you point the finger at interrupts? Is the problem the irq
> > latency? It's true that we call irq handlers through two function
> > pointers; we could remove those function pointers, but I am not convinced
> > that that's the problem. I think NT may also have something similar to
> > allow the code to be object oriented and cleaner.
>
> the problem was the single lock guarding entry into the tcp paths. we'd have N
> apaches running on the cpus trying to transmit the data they just figured
> out the client wants, and N interrupts arriving at the same time trying to
> pass more data in to the waiting apaches. sync_bh was WAY at the top of
> the profiling runs.
>
> they managed to engineer a test that stressed a point of our system that
> wasn't very well tuned. If we could have used a box with a single p3 and
> a gigabit card..
>
> > 200mbyte of memory. If you are going to run such a test then I really
>
> oh, the next run of tests will use 2.4 :) We will do very well there
> indeed :)
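
[Last aside: a minimal userspace sketch of the single-lock contention Zach
describes above, with a pthread mutex standing in for the one big networking
lock; all names here are made up for illustration. Every worker serializes on
the same lock, so adding CPUs mostly adds waiting rather than throughput.]

    /* Illustrative only: N workers serializing on one global lock,
     * the way N CPUs serialized on the single networking lock in 2.2. */
    #include <pthread.h>
    #include <stdio.h>

    #define NWORKERS   4
    #define ITERATIONS 1000000

    static pthread_mutex_t big_net_lock = PTHREAD_MUTEX_INITIALIZER;
    static long packets_handled;   /* shared state the lock protects */

    static void *worker(void *arg)
    {
        int i;

        for (i = 0; i < ITERATIONS; i++) {
            /* Every "packet" from every worker funnels through one lock,
             * so extra CPUs mostly wait here instead of doing work. */
            pthread_mutex_lock(&big_net_lock);
            packets_handled++;
            pthread_mutex_unlock(&big_net_lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t threads[NWORKERS];
        int i;

        for (i = 0; i < NWORKERS; i++)
            pthread_create(&threads[i], NULL, worker, NULL);
        for (i = 0; i < NWORKERS; i++)
            pthread_join(threads[i], NULL);

        printf("handled %ld packets\n", packets_handled);
        return 0;
    }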

-- fin --

- Dan

