Subject: Re: [PATCH 0/2] workqueue: fix a bug when numa mapping is changed
Hi Kame-san,

On 03/27/2015 12:42 AM, Kamezawa Hiroyuki wrote:

> On 2015/03/27 0:18, Tejun Heo wrote:
>> Hello,
>>
>> On Thu, Mar 26, 2015 at 01:04:00PM +0800, Gu Zheng wrote:
>>> At init time, wq generates the NUMA affinity (pool->node) for the per-cpu
>>> workqueues of all possible CPUs, which means the affinity of currently
>>> not-present CPUs may be incorrect. So we need to update pool->node for a
>>> newly added CPU to the correct node when it is prepared for onlining;
>>> otherwise, if node hotplug has occurred, workers may be created on an
>>> invalid node.
>>
>> If the mapping is gonna be static once the cpus show up, any chance we
>> can initialize that for all possible cpus during boot?
>>
>
> I think the kernel can define the complete
>
> cpuid <-> lapicid <-> pxm <-> nodeid
>
> mapping for all possible CPUs at boot, using the firmware table information.

Could you explain more?
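
Just to check my understanding of step 1 below: do you mean recording the whole
cpuid <-> lapicid <-> pxm <-> nodeid chain at boot, roughly like the sketch
here? (This is only an illustration of my reading, not the actual patch, and
the function name is made up.)

/*
 * Sketch of my understanding of step 1.  The SRAT already describes
 * (apicid, pxm) pairs even for CPUs that are not present yet, so the
 * apicid <-> pxm <-> nodeid part of the chain could be recorded at boot
 * from the firmware tables (ignoring proximity_domain_hi here).
 */
static int __init record_possible_cpu_node(struct acpi_srat_cpu_affinity *pa)
{
	int pxm, node;

	if (!(pa->flags & ACPI_SRAT_CPU_ENABLED))
		return 0;

	pxm  = pa->proximity_domain_lo;
	node = acpi_map_pxm_to_node(pxm);	/* pxm    <-> nodeid */
	set_apicid_to_node(pa->apic_id, node);	/* apicid <-> nodeid */

	return 0;
}

And for the cpuid <-> apicid part of CPUs that are not present yet, is the
idea to settle that at boot as well, or to fix it up (e.g. via numa_set_node())
once the apicid becomes known?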

Regards,
Gu

>
> One concern is the current x86 logic for memory-less nodes vs. memory hotplug
> (as I explained before).
>
> My idea is:
> step 1. build all possible cpuid <-> apicid <-> pxm <-> nodeid mappings at boot.
>
> But this may be overwritten by x86's memory-less node logic. So,
> step 2. check whether the node is online before calling kmalloc(); if it is
> offline, use -1 rather than updating the workqueue's attributes.
>
> Thanks,
> -Kame
>
>
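
Also, regarding step 2 above: is the check you have in mind roughly like the
following? (Again only a sketch to confirm my understanding; wq_valid_node()
is a name I made up.)

/*
 * Step 2 as I read it: sanity-check the node right before the per-node
 * allocation instead of rewriting the workqueue attributes when a node
 * goes offline.
 */
static int wq_valid_node(int node)
{
	if (node == NUMA_NO_NODE || node_online(node))
		return node;
	return NUMA_NO_NODE;	/* -1, let the allocator fall back */
}

	/* e.g. at the worker allocation site */
	worker = kzalloc_node(sizeof(*worker), GFP_KERNEL,
			      wq_valid_node(pool->node));

If so, nothing in the workqueue attrs would need to change when a node goes
away; the allocation would simply fall back to a valid node.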



