Subject: Re: [PATCH] kthread: NUMA aware kthread_create_on_cpu()
From: Eric Dumazet <>
Date: Mon, 29 Nov 2010 00:37:04 +0100
On Monday, 29 November 2010 at 00:01 +0100, Andi Kleen wrote:
> On Sun, Nov 28, 2010 at 11:51:51PM +0100, Eric Dumazet wrote:
> > > Also this messes up the policy of the caller process. You really
> > > need to save/restore it.
> >
> > Well, caller process duty is to create kthreads in a loop.
>
> In this case any other allocations it may do are still on those
> nodes.
As I said, it only does create_kthread() calls, and no "other allocations":
	while (!list_empty(&kthread_create_list)) {
		struct kthread_create_info *create;

		create = list_entry(kthread_create_list.next,
				    struct kthread_create_info, list);
		list_del_init(&create->list);
		spin_unlock(&kthread_create_lock);

		create_kthread(create);

		spin_lock(&kthread_create_lock);
	}
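For reference, a rough sketch of the idea being discussed: install a
node-bound mempolicy on kthreadd just around create_kthread(), so the
new thread's setup allocations (its stack pages in particular) come
from the requested node. The create->node field and the
set_mempolicy_node() helper below are assumptions for illustration,
not the actual patch; MPOL_BIND, NUMA_NO_NODE and numa_default_policy()
are existing kernel symbols.

		/*
		 * Sketch only: set_mempolicy_node() is a hypothetical
		 * helper installing an MPOL_BIND policy for current
		 * (kthreadd) covering a single node, and create->node
		 * is assumed to carry the node the caller asked for.
		 */
		if (create->node != NUMA_NO_NODE)
			set_mempolicy_node(MPOL_BIND, create->node);

		create_kthread(create);

		/* back to the default policy before the next request */
		numa_default_policy();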
> > > Problem is that this ends up in architecture specific code
> > > for the stack, so may be a larger patch.
> >
> > I suggest arches that need slab to allocate kthread stacks do the
> > appropriate changes, because I am not able to make them myself.
> >
> > On x86, we use page allocator only, so NUMA mempolicy is used.
>
> task_struct is always allocated from slab.
Hmm, I meant the stack (the thing that might be thrashed a lot in ksoftirqd); it shares its allocation with struct thread_info.
And that allocation comes from alloc_thread_info(), which uses __get_free_pages(GFP_KERNEL, THREAD_SIZE_ORDER), i.e. the page allocator, so the caller's NUMA mempolicy applies to it.
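If we wanted to bypass mempolicy entirely and pass the node explicitly,
the x86 path could look something like the sketch below.
alloc_thread_info_node() is a name made up for the sketch;
alloc_pages_node() and page_address() are the existing allocator
interfaces.

	/* sketch: allocate the stack + thread_info on a given node */
	static struct thread_info *alloc_thread_info_node(struct task_struct *tsk,
							  int node)
	{
		struct page *page = alloc_pages_node(node, GFP_KERNEL,
						     THREAD_SIZE_ORDER);

		return page ? page_address(page) : NULL;
	}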
By the way, I re-tested my original patch (MPOL_BIND) on x86_32:

# cat /proc/buddyinfo
Node 0, zone      DMA      0      1      0      1      2      1      1      0      1      1      3
Node 0, zone   Normal     22     14     10      3      2      3      4      2      3      2    165
Node 0, zone  HighMem     41     35    346    223    124    140     40     19      2      0    143
Node 1, zone  HighMem     21      7      8      4    217     97     33     11      3      1    415
And got correct stacks. Are you sure we must use MPOL_PREFERRED?
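For what it's worth, the difference between the two policies is easy to
see from userspace with the set_mempolicy(2) wrapper from libnuma
(<numaif.h>): MPOL_BIND never allocates outside the mask, while
MPOL_PREFERRED merely tries the given node first and silently falls
back elsewhere when it runs short. A minimal sketch, assuming a
two-node box where node 0 is the target:

	#include <numaif.h>	/* set_mempolicy(), MPOL_* (libnuma) */
	#include <string.h>
	#include <stdio.h>
	#include <stdlib.h>

	int main(void)
	{
		unsigned long mask = 1UL << 0;	/* node 0 only */
		char *p;

		/* strict: never allocate outside node 0 */
		if (set_mempolicy(MPOL_BIND, &mask, 8 * sizeof(mask)))
			perror("set_mempolicy(MPOL_BIND)");

		p = malloc(1 << 20);
		if (p)
			memset(p, 0, 1 << 20);	/* fault pages in under MPOL_BIND */

		/* relaxed: try node 0 first, fall back if it is full */
		if (set_mempolicy(MPOL_PREFERRED, &mask, 8 * sizeof(mask)))
			perror("set_mempolicy(MPOL_PREFERRED)");

		free(p);
		return 0;
	}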