Date: 29 Jan 2000
From: water modem
Subject: Re: Auto-Adaptive scheduler - Final chapter ( the numbers ) ...
Davide Libenzi wrote:

> On Fri, 28 Jan 2000, water modem wrote:
> > FYI:
> > In the telecom world we try to drive our processors at
> > about 75%. Cache hits are really important here.
> > In our Tandem-based call-processing machines
> > we typically run with 10 to 14 MB of L2 cache, and even
> > after tuning nearly everything we still get much better
> > improvements from just keeping things in cache than
> > from major rewrites on a functional basis.
> > It is kind of interesting that we see the following split repeated
> > on various platforms (including embedded) under various workloads:
> > 1/3 of resources (time & space) for the OS
> > 1/3 of resources for applications written by us
> > 1/3 of resources for 1 or 2 purchased drivers or applications
> > {^^^ always the best but most complicated target for improvement}
>
> I can agree with you about cache optimization, but the fact is that for
> RQ < limit the patch introduces only a memory load (4 bytes) and a memory
> write (4 bytes).
> It is the wrong way of dismissing the patch (benchmarks first, cache issues
> after, ...) that drove me to this kind of defence of it.
> As I've stated in the message opening this thread, I'm anything but sure
> that the run-queue lengths at which the patch gives a boost will be usual
> in the real world.
>
> As I've reported to Linus, Alan and Ingo, the better way to reject the patch
> is to argue that the code bloat (200 lines) is not justified by a real gain
> under normal loads.
>
> Cheers,
> Davide.
>
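
[Editorial aside: the cost Davide describes above -- one extra 4-byte load
and one extra 4-byte store while the run queue stays under the limit -- maps
to a fast path of roughly the following shape. This is a hypothetical sketch
with invented names (rq_limit, rq_counter, adaptive_hint); it is not the
actual auto-adaptive scheduler patch.]

/*
 * Hypothetical sketch only -- NOT the real patch.  It just illustrates
 * the overhead argument: while the run queue is shorter than the limit,
 * the extra bookkeeping is one 4-byte load (rq_limit) and one 4-byte
 * store (rq_counter); the adaptive logic would run only above the limit.
 */
static unsigned int rq_limit;      /* threshold, name invented here     */
static unsigned int rq_counter;    /* per-pass statistic, name invented */

static inline void adaptive_hint(unsigned int rq_len)
{
        if (rq_len < rq_limit) {        /* 4-byte load of rq_limit    */
                rq_counter = rq_len;    /* 4-byte store               */
                return;                 /* nothing else, common case  */
        }
        /* above the limit, the adaptive scheduling decision would go here */
}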

Davide,
This was an FYI. Perhaps I should have said some more...
The 1/3 of resources for applications written by us covers about
120 million lines of C++ code: lots of objects, lots of state machines,
lots of task switching, lots of threads. Is this the way to write
applications?
Obviously not, but with 120 million lines of legacy code and growing, it
will be around for a long, long time. We have had Compaq/Tandem optimize
the OS. We optimize our code and objects to keep as much of the important
information as possible in cache at any instant. The task-switching rate
is very high.
The third-party drivers for some SS7 software kill us, as we have no
leverage to get their vendors to optimize. That means that we keep adding
more "iron", and to be honest, as long as the cost/performance ratio is
good the customers don't care. Databases are turned into memory-based
object bases (RAM is cheap), and these are optimized for cache and task
attributes.
Prior to the MIPS R10K+ processors we would use HP in-circuit emulation
devices to watch a running processor and tie the activities causing cache
misses back to actual code accesses. With the newer processors we can get
the same information from the on-chip statistics (performance-counter)
registers. What we discovered is that theory can be debated all one wants,
but nothing equals a good real trace of a running device under a real
load. The results are usually very enlightening!
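
[Editorial aside: on a current Linux system, the same kind of cache-miss
accounting that the R10K statistics registers provide can be read from user
space through the perf_event_open(2) interface. The sketch below is a
minimal example of counting hardware cache misses around a region of code;
it is not part of the Tandem/MIPS tooling described above, and the generic
PERF_COUNT_HW_CACHE_MISSES event is only available if the CPU's PMU exposes
it.]

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

/* Thin wrapper: glibc provides no prototype for this system call. */
static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
                            int cpu, int group_fd, unsigned long flags)
{
        return syscall(SYS_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
        struct perf_event_attr attr;
        long long misses;
        int fd;

        memset(&attr, 0, sizeof(attr));
        attr.type = PERF_TYPE_HARDWARE;
        attr.size = sizeof(attr);
        attr.config = PERF_COUNT_HW_CACHE_MISSES;
        attr.disabled = 1;              /* start stopped, enable around the workload */
        attr.exclude_kernel = 1;        /* count user-space misses only */

        fd = perf_event_open(&attr, 0 /* this process */, -1 /* any CPU */, -1, 0);
        if (fd == -1) {
                perror("perf_event_open");
                return 1;
        }

        ioctl(fd, PERF_EVENT_IOC_RESET, 0);
        ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

        /* ... workload to be measured goes here ... */

        ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
        if (read(fd, &misses, sizeof(misses)) != sizeof(misses)) {
                perror("read");
                close(fd);
                return 1;
        }
        printf("hardware cache misses: %lld\n", misses);
        close(fd);
        return 0;
}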



>
> --
> All this stuff is IMVHO


