From: Rusty Russell <>
Subject: Re: #tj-percpu has been rebased
Date: Wed, 18 Feb 2009 17:41:13 +1030
On Wednesday 18 February 2009 17:10:20 H. Peter Anvin wrote:
> Rusty Russell wrote:
> >>> num_possible_cpus() can be very large though, so in many cases the
> >>> likelihood of finding that many pages approaches zero. Furthermore,
> >>> num_possible_cpus() may be quite a bit larger than the actual number
> >>> of CPUs in the system.
> >>
> >> Sure, so we end up at vmalloc. No worse, but simpler and much better
> >> if we *can* do it.
>
> If the likelihood is near zero, then you're wasting opportunities to do
> it better. If we have compact per-cpu virtual areas then we can use
> large pages if we know we'll have large percpu areas.
You're right; we'd need that defrag wonderness people keep speculating about.
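To be concrete, the fallback under discussion is roughly this (an untested sketch, not the actual tj-percpu code; pcpu_alloc_chunk and chunk_size are made-up names):

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/vmalloc.h>

/*
 * Try physically contiguous pages first -- that's what would let us
 * map the chunk with large pages later -- and fall back to vmalloc()
 * when that many contiguous pages can't be found.
 */
static void *pcpu_alloc_chunk(size_t chunk_size)
{
	struct page *page;

	page = alloc_pages(GFP_KERNEL, get_order(chunk_size));
	if (page)
		return page_address(page);

	/* No worse than before: virtually contiguous only. */
	return vmalloc(chunk_size);
}

If alloc_pages() succeeds the chunk is one physically contiguous block, so it could be mapped with large pages; the vmalloc() path gives that up but always works.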
What finally convinced me is that the per-cpu chunks have to be at least the size of the .data.percpu section (24k here). 7*num_possible_cpus() is even worse.
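i.e. something like (a sketch only; __per_cpu_start/__per_cpu_end are the real linker symbols bounding .data.percpu, the other names are illustrative):

#include <linux/cpumask.h>
#include <linux/mm.h>

extern char __per_cpu_start[], __per_cpu_end[];

/* Each cpu's unit must cover at least the static percpu image,
 * rounded up to a page multiple; the first chunk then needs one
 * such unit per possible cpu. */
size_t static_size = __per_cpu_end - __per_cpu_start;	/* ~24k here */
size_t unit_size   = PAGE_ALIGN(static_size);
size_t first_chunk = unit_size * num_possible_cpus();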
Thanks,
Rusty.