Subject: Re: [PFC]: hash instrumentation
Hi,

On Tue, 13 Apr 1999 23:45:51 -0400 (EDT), Chuck Lever <cel@monkey.org>
said:

> On Wed, 14 Apr 1999, Stephen C. Tweedie wrote:
>> As an interesting data point, I applied Chuck's last pair of buffer and
>> page cache hash updates (the new buffer hash function, and the larger
>> PAGE_HASH_BITS) to a 2.2.5 kernel to measure the impact on a parallel
>> build of the current glibc-2.1 rpms on a 4-way Xeon. Each of those two
>> improvements gained about 30 seconds of wall-clock time on a complete
>> build, taking it down from 15 minutes to 14.

> how many hash bits did you try? 13? you might consider trying even more,
> say 15 or 16. benchmarking has shown that the page hash function is
> stable for any bit size between 11 and 16 (i didn't try others), so
> varying it, as Doug's patch does, won't degrade the hash.

13, but that was quite enough to eliminate __find_page as a significant
CPU cost in this instance, as reported by readprofile.
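
(For illustration, the kind of measurement behind "stable for any bit
size between 11 and 16" can be sketched in user space. The toy hash and
the synthetic key mix below are made up for the example and are not the
kernel's actual page hash; the point is only to show how bucket
occupancy is instrumented as the number of hash bits varies.)

#include <stdio.h>
#include <stdlib.h>

static unsigned long toy_page_hash(unsigned long inode, unsigned long index,
				   unsigned int bits)
{
	/* simple multiplicative mix, masked down to the table size */
	unsigned long h = (inode ^ index) * 2654435761UL;
	h ^= h >> bits;
	return h & ((1UL << bits) - 1);
}

int main(void)
{
	unsigned int bits;

	for (bits = 11; bits <= 16; bits++) {
		unsigned long size = 1UL << bits;
		unsigned long *chain = calloc(size, sizeof(*chain));
		unsigned long ino, idx, i, used = 0, longest = 0;

		/* pretend working set: 2048 files x 64 pages each */
		for (ino = 1; ino <= 2048; ino++)
			for (idx = 0; idx < 64; idx++)
				chain[toy_page_hash(ino * 4096, idx, bits)]++;

		for (i = 0; i < size; i++) {
			if (chain[i] > longest)
				longest = chain[i];
			if (chain[i])
				used++;
		}
		printf("%2u bits: %6lu buckets, %6lu used, longest chain %lu\n",
		       bits, size, used, longest);
		free(chain);
	}
	return 0;
}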

>> Adding 4 extra bits to the dcache hash was worth 2 full minutes; 12
>> minutes total build time.

> readprofile traces i've taken show mostly the same thing, although the
> page fault handler is the highest CPU user by an order of magnitude
> (find_vma() is still up there, too).

Yes; a glibc build does seem to be pretty hard on the dcache. However,
I'd expect things like squid to show a similar dependence on name cache
performance. Behind d_lookup, do_anonymous_page and do_wp_page were the
major costs (at about a quarter to half of the cycles of d_lookup).

Hmm. This looks like another place where dropping the kernel lock
during the copy would be beneficial: we already hold the mm semaphore at
the time, so we're not vulnerable to too many races. I'll look at this.
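
Roughly, the pattern would be the one below. This is only a sketch
against the 2.2-era interfaces as I remember them (lock_kernel(),
unlock_kernel(), copy_page()), not an actual patch; the helper name is
invented.

#include <linux/mm.h>
#include <linux/smp_lock.h>

/*
 * Sketch only: copy a COW page with the big kernel lock dropped.
 * The caller is assumed to hold the mm semaphore, which is what
 * keeps the source and destination pages stable here.
 */
static void copy_cow_page_unlocked(unsigned long from, unsigned long to)
{
	unlock_kernel();	/* let other CPUs into the kernel ...	*/
	copy_page(to, from);	/* ... while this CPU grinds through	*/
				/* the comparatively slow page copy	*/
	lock_kernel();		/* retake the BKL before touching the	*/
				/* page tables again			*/
}

Whether the remaining races really are harmless is exactly what has to
be checked before doing this in do_wp_page() and do_anonymous_page().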

> increasing the efficiency of the dcache hash will buy probably the
> biggest performance improvement of any hash modifications.

For some workloads, yes. Building the kernel spends very little time at
all in d_lookup().

>> Shrinking the dcache excessively in this case will simply massacre the
>> performance.

> actually, that's not strictly true. shrinking the dcache early will
> improve the lookup efficiency of the hash; i've found improvements of
> almost a factor of two.

Sure, but a glibc build is referencing a _lot_ of header files! My
concern is that the vmscan loop currently invokes a prune_dcache(0),
which is as aggressive as you can get. If we do that any more
frequently, getting a good balance of the dcache will be a lot harder.
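
A gentler vmscan hook might look something like the sketch below.
prune_dcache() and dentry_stat are the 2.2 names as I remember them;
the helper name and the 1/8 fraction are invented for the illustration.

#include <linux/dcache.h>

/*
 * Sketch of a less aggressive alternative to an unconditional
 * prune_dcache(0) from the vmscan loop.
 */
static void shrink_dcache_gently(void)
{
	int count = dentry_stat.nr_unused / 8;

	/*
	 * prune_dcache(0) empties the whole unused list; freeing only
	 * a fraction of it keeps some warm dentries around, so a burst
	 * of memory pressure does not flatten the dcache hit rate.
	 */
	if (count)
		prune_dcache(count);
}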

> my preliminary analysis of the inode hash table is that it is also too
> small. find_inode() seems to be used mostly for unsuccessful searches
> these days, and the hash table size is so small that the buckets contain
> long lists of inodes, making the unsuccessful searches very expensive.
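
The cost of those misses is easy to put numbers on. A successful
lookup stops about half way down a chain on average, but an
unsuccessful one walks the whole chain, so it pays the full
entries-per-bucket ratio. A user-space back-of-the-envelope (the
60000 in-core inodes are an assumed figure, not a measurement):

#include <stdio.h>

int main(void)
{
	unsigned long nr_inodes = 60000;	/* assumed, for illustration */
	unsigned int bits;

	for (bits = 8; bits <= 14; bits += 2) {
		unsigned long buckets = 1UL << bits;
		unsigned long chain = nr_inodes / buckets;

		/* a miss walks the whole chain; a hit about half of it */
		printf("%2u bits: %6lu buckets, ~%lu nodes per miss, "
		       "~%lu per hit\n",
		       bits, buckets, chain, chain / 2 + 1);
	}
	return 0;
}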

FWIW, the profile with the new hash functions but the small dcache hash
started like this (__find_page and find_buffer have been taken out of
line for profiling here; the columns are readprofile's tick count,
function name and ticks normalised by function size):

4893 d_lookup 23.5240
2741 do_anonymous_page 21.4141
1486 file_read_actor 18.5750
1475 do_wp_page 2.6721
1218 __get_free_pages 2.5805
1075 __find_page 15.8088
844 filemap_nopage 1.1405
684 brw_page 0.7403
600 lookup_dentry 1.2295
594 find_buffer 6.4565
567 page_fault 47.2500
564 handle_mm_fault 1.2261
523 __free_page 2.2543
439 free_pages 1.6140
420 do_con_write 0.2471
403 strlen_user 8.3958
391 zap_page_range 0.8806
382 do_page_fault 0.4799

and with the larger dcache hash,

2434 do_anonymous_page 19.0156
1451 do_wp_page 2.6286
1343 file_read_actor 16.7875
1328 __find_page 19.5294
1149 __get_free_pages 2.4343
1112 d_lookup 5.3462
847 find_buffer 9.2065
847 filemap_nopage 1.1446
628 brw_page 0.6797
580 page_fault 48.3333
577 lookup_dentry 1.1824
563 handle_mm_fault 1.2239
543 __free_page 2.3405
414 do_con_write 0.2435
397 free_pages 1.4596
377 system_call 6.7321
356 strlen_user 7.4167
354 zap_page_range 0.7973
319 do_page_fault 0.4008

Interestingly, do_anonymous_page, do_wp_page and file_read_actor are all
places where we can probably optimise things to drop the kernel lock.
That won't make them run faster but on SMP it will certainly let other
CPUs get more kernel work done. Film at 11.

--Stephen

