Date: Mon, 27 Apr 1998
From: Phil R. Jaenke <kernel@nls.net>
Subject: Re: faster strcpy()

On Mon, 27 Apr 1998, Khimenko Victor wrote:

#> > You're using a single buffer, which has vastly different L1
#> > characteristics than multiple buffers; and different characteristics from
#> > what real-world apps would see because they have other things polluting
#> > the L1. I bring up this point mostly because the strlen/memcpy version is
#> > probably better on the pentium because of the L1 cache design, this is
#> > less likely to be an issue on the pentium pro.
#> >
#> I am using two buffers, one a source and another a destination. They
#> are deliberately the same buffers for both tests. There is no way that
#> the execution of one string function could affect the other since
#> the strings are way too long to fit in a cache.
#>
#"Way to long" ? What you mean ? Pentium MMX (and if I am remeber right PII)
#has >=16K data cache ! The same with K6 ... Even standard Pentium has 8K data
#cache ...
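
For context, the comparison being timed is roughly the one below. The
buffer size and loop count are my own picks, chosen so the strings are
far bigger than any of the L1 sizes listed further down, and the
strcpy_via_memcpy() helper is just a name for the strlen+memcpy variant
under discussion:

#include <stdio.h>
#include <string.h>
#include <time.h>

/* Buffers deliberately much larger than any L1 cache on the parts below. */
#define LEN   (256 * 1024)
#define LOOPS 100

/* The "strlen + memcpy" strcpy variant being benchmarked. */
static char *strcpy_via_memcpy(char *dst, const char *src)
{
	return memcpy(dst, src, strlen(src) + 1);
}

int main(void)
{
	static char src[LEN + 1], dst[LEN + 1];
	clock_t t;
	int i;

	memset(src, 'x', LEN);
	src[LEN] = '\0';

	t = clock();
	for (i = 0; i < LOOPS; i++)
		strcpy(dst, src);
	printf("plain strcpy:    %ld ticks\n", (long)(clock() - t));

	t = clock();
	for (i = 0; i < LOOPS; i++)
		strcpy_via_memcpy(dst, src);
	printf("strlen + memcpy: %ld ticks\n", (long)(clock() - t));

	return 0;
}

Compile with gcc -O2 and compare the two numbers; the point above is that
with 256K strings, neither loop gets any help from L1 between iterations.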

Cache Clearup Time(tm).

Pentium has 16K (2x8K), IIRC
P5MMX has 32K (2x16K), IIRC
K5 has 24K (16K instruction + 8K data), IIRC
K6 has 64K (2x32K), IIRC
Cx6x86 has 16K (1x16K (Unified Cache IS Faster(tm)))
Cx6x86MX has 64K (1x64K)
PentiumII has 32K (2x16K). I believe the Celerons have the same L1 but no
L2 at all, so they lean entirely on the L1. Xeon I don't remember offhand.
I'm not a Wintel kinda guy. :)
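
Rather than trusting my memory, on chips that implement the extended CPUID
functions (the K6 does; I'm not certain about every older part, and the
classic Intel parts report cache info differently) you can ask the CPU for
its L1 geometry directly. A minimal sketch, assuming GCC inline asm on a
Pentium-class i386; the cpuid() helper is my own:

#include <stdio.h>

/* Assumes the CPU has the CPUID instruction (Pentium-class and up). */
static void cpuid(unsigned int op, unsigned int *eax, unsigned int *ebx,
		  unsigned int *ecx, unsigned int *edx)
{
	__asm__ __volatile__("cpuid"
			     : "=a"(*eax), "=b"(*ebx), "=c"(*ecx), "=d"(*edx)
			     : "a"(op));
}

int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	/* Extended function 0x80000005 (AMD-style): ECX describes the L1
	 * data cache, EDX the L1 instruction cache; bits 31..24 of each
	 * give the size in KB. */
	cpuid(0x80000000, &eax, &ebx, &ecx, &edx);
	if (eax < 0x80000005) {
		printf("extended L1 cache info not reported by this CPU\n");
		return 0;
	}
	cpuid(0x80000005, &eax, &ebx, &ecx, &edx);
	printf("L1 data cache:        %u KB\n", ecx >> 24);
	printf("L1 instruction cache: %u KB\n", edx >> 24);
	return 0;
}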

The L1 cache holds both code and data (most of these parts split it into
separate instruction and data halves; the Cyrix chips keep it unified).
The more you have, the more of your working set stays on-chip, and the
more you can get done without waiting on main memory.

However, you run into the issue of cache block sizes. If you have 64K
split into four 16K blocks, it will feel more like 32K, because latency
goes up from having to address each block separately to get the contents
out of it. With 64K in two 32K blocks you won't really notice any
slowdown, and a 64K unified cache is faster still. Block sizes also count
when you're running LONG stretches of code: with 8K blocks, a 24K routine
has to span three blocks, which is a MAJOR performance hit right there,
because cycles get wasted addressing and fetching from each block and
then combining the results.

So, in short: not only does cache size count, but so do cache addressing
and block sizes.
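
You can actually watch the size effect from user space with a toy like the
one below: do a fixed number of cache-line touches while growing the
working set, and the time jumps once the set stops fitting in L1 (and
again when it falls out of L2). The sizes, stride and touch count are
arbitrary choices of mine:

#include <stdio.h>
#include <time.h>

#define MAX_KB  512
#define STRIDE  32                    /* roughly one cache line */
#define TOUCHES (8L * 1024 * 1024)    /* same amount of work at every size */

static volatile char buf[MAX_KB * 1024];

int main(void)
{
	int kb;

	for (kb = 4; kb <= MAX_KB; kb *= 2) {
		long size = (long)kb * 1024, idx = 0, n;
		clock_t t = clock();

		/* Touch one byte per cache line, wrapping inside the
		 * current working set. */
		for (n = 0; n < TOUCHES; n++) {
			buf[idx]++;
			idx += STRIDE;
			if (idx >= size)
				idx = 0;
		}
		printf("%4d KB working set: %ld ticks\n",
		       kb, (long)(clock() - t));
	}
	return 0;
}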

Oh, BTW, has anybody gotten their hands on the new Cyrix MII-300 yet? I
don't want to order mine till I know how Linux likes 'em so far. I haven't
had time to check the datasheets, much less print them, but as far as I
can guess from the kind of FPU performance I've been seeing from my
6x86MXs, they're using a dual-line FPU, which makes me wonder whether
that's going to raise any issues in the kernel. (No, the MII-300 isn't
Cayenne core yet. They're talking December or 1st quarter '99 for the
Cayennes, with the quad-line FPU. Intel, eat your heart out. ;))

-Phil R. Jaenke (kernel@nls.net / prj@nls.net)
TheGuyInCharge(tm), Ketyra Designs - We get paid to break stuff :)
Linux pkrea.ketyra.INT 2.0.33 #15 Sat Apr 18 00:40:21 EDT 1998 i586
Linux eiterra.nls.net 2.0.33 #15 Fri Apr 17 00:22:13 EDT 1998 i586
- Linus says for 'brave people only.' I say 'keep a backup.' - :)


