Subject: Re: [pagevec] resize pagevec to O(lg(NR_CPUS))
On Sun, Sep 12, 2004 at 12:19:48AM -0700, William Lee Irwin III wrote:
> Sorry, 4*lg(NR_CPUS) is 64 when lg(NR_CPUS) = 16, or 65536 cpus. 512x
> Altixen would have 4*lg(512) = 4*9 = 36. The 4*lg(NR_CPUS) sizing was
> rather conservative on behalf of users of stack-allocated pagevecs.

Doing the extra multiplications, that's a pagevec 296B in size (36 page
pointers plus the header), touching 36 page structure cachelines, i.e.
2304B with a 64B cacheline, 4608B with a 128B cacheline, etc., and that's
with a ridiculously large number of cpus. A more involved batching factor
may make sense, though, e.g. 2**(2.5*sqrt(lg(NR_CPUS)) - 1) or some such to
get 4 -> 6, 9 -> 11, 16 -> 16, 25 -> 21, 36 -> 26, 49 -> 31, 64 -> 35,
81 -> 40, 100 -> 44, 121 -> 48, 144 -> 52, 169 -> 56, 196 -> 60,
225 -> 64, 256 -> 68, 289 -> 71, 324 -> 75, 361 -> 79, 400 -> 82,
441 -> 86, 484 -> 89, 529 -> 92, 576 -> 96, 625 -> 99, 676 -> 102,
729 -> 105, 784 -> 108, 841 -> 111, 900 -> 114, 961 -> 117, 1024 -> 120
etc., which looks like a fairly good tradeoff between growth with
NR_CPUS and various limits. I can approximate this well enough in the
preprocessor (basically, by nesting expansions of sufficiently rapidly
convergent series and using functional relations to move arguments into
regions of rapid convergence), though it would be somewhat more obscure
than 4*lg(NR_CPUS). But if we must adapt this precisely, rather than
tuning a global PAGEVEC_SIZE to death, I suspect we should explore
differentiating between on-stack rapid-fire usage and longer-term
amortization.
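
For concreteness, a quick userspace sketch below evaluates that batching
factor and the resulting on-stack footprint. It is illustrative only, not
the preprocessor approximation: the pagevec_size() helper is a made-up
name, rounding is done with ceil() to match the table above, and the
footprint assumes an 8B pagevec header plus 8B per struct page pointer on
64-bit, as in the 296B figure above.

#include <math.h>
#include <stdio.h>

/* proposed factor 2^(2.5*sqrt(lg(ncpus)) - 1), rounded up */
static unsigned int pagevec_size(unsigned int ncpus)
{
	double lg = log2((double)ncpus);

	return (unsigned int)ceil(pow(2.0, 2.5 * sqrt(lg) - 1.0));
}

int main(void)
{
	/* a few NR_CPUS values from the table above */
	unsigned int cpus[] = { 4, 16, 100, 361, 1024 };
	unsigned int i;

	for (i = 0; i < sizeof(cpus) / sizeof(cpus[0]); i++) {
		unsigned int sz = pagevec_size(cpus[i]);

		/* 8B header plus sz page pointers at 8B each on 64-bit */
		printf("NR_CPUS=%5u PAGEVEC_SIZE=%3u ~%uB on stack\n",
		       cpus[i], sz, 8 + sz * 8);
	}
	return 0;
}

Compiled with -lm, the printed sizes reproduce the 4 -> 6, 16 -> 16,
100 -> 44, 361 -> 79 and 1024 -> 120 entries from the table above.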


-- wli
