Subject: Re: [PATCH 2/3] mm: Introduce subsection_dev_map
On 2019/11/15 9:46, David Hildenbrand wrote:
>
>
>> On 15.11.2019 at 00:42, Toshiki Fukasawa <t-fukasawa@vx.jp.nec.com> wrote:
>>
>> On 2019/11/14 6:26, Dan Williams wrote:
>>>> On Wed, Nov 13, 2019 at 1:22 PM David Hildenbrand <david@redhat.com> wrote:
>>>>
>>>>
>>>>
>>>>> On 13.11.2019 at 22:12, Dan Williams <dan.j.williams@intel.com> wrote:
>>>>>
>>>>> On Wed, Nov 13, 2019 at 12:40 PM David Hildenbrand <david@redhat.com> wrote:
>>>>> [..]
>>>>>>>>>> I'm still struggling to understand the motivation of distinguishing
>>>>>>>>>> "active" as something distinct from "online". As long as the "online"
>>>>>>>>>> granularity is improved from sections down to subsections then most
>>>>>>>>>> code paths are good to go. The others can use get_dev_pagemap() to
>>>>>>>>>> check for ZONE_DEVICE in a race-free manner as they currently do.
>>>>>>>>>
>>>>>>>>> I thought we wanted to unify access if we don't really care about the zone as in most pfn walkers - e.g., for zone shrinking.
>>>>>>>>
>>>>>>>> Agree, when the zone does not matter, which is most cases, then
>>>>>>>> pfn_online() and pfn_valid() are sufficient.
>>>>>>
>>>>>> Oh, and just to clarify why I proposed pfn_active(): The issue right now is that a PFN that is valid but not online could be offline memory (memmap not initialized) or ZONE_DEVICE. That's why I wanted to have a way to detect if a memmap was initialized, independent of the zone. That's important for generic PFN walkers.
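
For reference, a minimal sketch of the distinction described above, as a
generic pfn walker could implement it today. The helper name walker_get_page()
is made up for illustration; pfn_to_online_page(), pfn_valid(),
get_dev_pagemap() and put_dev_pagemap() are the existing interfaces under
discussion:

#include <linux/mm.h>
#include <linux/memory_hotplug.h>
#include <linux/memremap.h>

/* Sketch: classify a pfn as online, ZONE_DEVICE, or offline/uninitialized. */
static struct page *walker_get_page(unsigned long pfn)
{
        struct dev_pagemap *pgmap;
        struct page *page;

        /* Online memory: memmap is initialized, zone/node links are valid. */
        page = pfn_to_online_page(pfn);
        if (page)
                return page;

        /* No memmap at all for this pfn. */
        if (!pfn_valid(pfn))
                return NULL;

        /* Valid but not online: either ZONE_DEVICE or offline memory. */
        pgmap = get_dev_pagemap(pfn, NULL);
        if (pgmap) {
                /*
                 * ZONE_DEVICE: the pagemap reference keeps the memmap alive.
                 * A real caller would hold it while touching the page; it is
                 * dropped immediately here only to keep the sketch short.
                 */
                put_dev_pagemap(pgmap);
                return pfn_to_page(pfn);
        }

        /* Offline memory: the memmap may be stale or uninitialized, skip it. */
        return NULL;
}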
>>>>>
>>>>> That's what I was debating with Toshiki [1], whether there is a real
>>>>> example of needing to distinguish ZONE_DEVICE from offline memory in a
>>>>> pfn walker. The proposed use case in this patch set of being able to
>>>>> set hwpoison on ZONE_DEVICE pages does not seem like a good idea to
>>>>> me. My suspicion is that this is a common theme and others are looking
>>>>> to do these types of page manipulations that only make sense for online
>>>>> memory. If that is the case then treating ZONE_DEVICE as offline seems
>>>>> the right direction.
>>>>
>>>> Right. At least it would be nice to have for zone shrinking - not sure about the other walkers. We would have to special-case ZONE_DEVICE handling there.
>>>>
>>>
>>> I think that's ok... It's already zone aware code whereas pfn walkers
>>> are zone unaware and should stay that way if at all possible.
>>>
>>>> But as I said, a subsection map for online memory is a good start, especially to fix pfn_to_online_page(). Also, I think this might be a very good thing to have for Oscar's memmap-on-memory work (I have a plan in my head I can discuss with Oscar once he has time to work on that again).
>>>
>>> Ok, I'll keep an eye out.
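
As an aside on the subsection map for online memory mentioned above, here is a
purely hypothetical sketch of what a subsection-granular pfn_to_online_page()
could look like. pfn_to_online_page_subsection() is a made-up name, and
pfn_section_valid() (which tests the existing subsection presence map) only
stands in for a real per-subsection online map; this is not code from this
series or from -next:

/* Hypothetical: like pfn_to_online_page(), but at subsection granularity. */
static inline struct page *pfn_to_online_page_subsection(unsigned long pfn)
{
        unsigned long nr = pfn_to_section_nr(pfn);
        struct mem_section *ms;

        if (nr >= NR_MEM_SECTIONS)
                return NULL;

        ms = __nr_to_section(nr);

        /* Section granularity: SECTION_IS_ONLINE covers the whole section. */
        if (!online_section(ms))
                return NULL;

        /*
         * Subsection granularity: reject pfns in this section that have no
         * initialized memmap, e.g. a ZONE_DEVICE range sharing a boundary
         * section with System RAM.
         */
        if (!pfn_section_valid(ms, pfn))
                return NULL;

        return pfn_to_page(pfn);
}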
>>
>> I understand your point. Thanks!
>>
>> By the way, I found another problem with ZONE_DEVICE: a race between
>> memmap initialization and zone shrinking.
>>
>> Repeatedly creating and destroying a namespace causes the panic below:
>>
>> [ 41.207694] kernel BUG at mm/page_alloc.c:535!
>> [ 41.208109] invalid opcode: 0000 [#1] SMP PTI
>> [ 41.208508] CPU: 7 PID: 2766 Comm: ndctl Not tainted 5.4.0-rc4 #6
>> [ 41.209064] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.11.0-0-g63451fca13-prebuilt.qemu-project.org 04/01/2014
>> [ 41.210175] RIP: 0010:set_pfnblock_flags_mask+0x95/0xf0
>> [ 41.210643] Code: 04 41 83 e2 3c 48 8d 04 a8 48 c1 e0 07 48 03 04 dd e0 59 55 bb 48 8b 58 68 48 39 da 73 0e 48 c7 c6 70 ac 11 bb e8 1b b2 fd ff <0f> 0b 48 03 58 78 48 39 da 73 e9 49 01 ca b9 3f 00 00 00 4f 8d 0c
>> [ 41.212354] RSP: 0018:ffffac0d41557c80 EFLAGS: 00010246
>> [ 41.212821] RAX: 000000000000004a RBX: 0000000000244a00 RCX: 0000000000000000
>> [ 41.213459] RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffffffffbb1197dc
>> [ 41.214100] RBP: 000000000000000c R08: 0000000000000439 R09: 0000000000000059
>> [ 41.214736] R10: 0000000000000000 R11: ffffac0d41557b08 R12: ffff8be475ea72b0
>> [ 41.215376] R13: 000000000000fa00 R14: 0000000000250000 R15: 00000000fffc0bb5
>> [ 41.216008] FS: 00007f30862ab600(0000) GS:ffff8be57bc40000(0000) knlGS:0000000000000000
>> [ 41.216771] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
>> [ 41.217299] CR2: 000055e824d0d508 CR3: 0000000231dac000 CR4: 00000000000006e0
>> [ 41.217934] Call Trace:
>> [ 41.218225] memmap_init_zone_device+0x165/0x17c
>> [ 41.218642] memremap_pages+0x4c1/0x540
>> [ 41.218989] devm_memremap_pages+0x1d/0x60
>> [ 41.219367] pmem_attach_disk+0x16b/0x600 [nd_pmem]
>> [ 41.219804] ? devm_nsio_enable+0xb8/0xe0
>> [ 41.220172] nvdimm_bus_probe+0x69/0x1c0
>> [ 41.220526] really_probe+0x1c2/0x3e0
>> [ 41.220856] driver_probe_device+0xb4/0x100
>> [ 41.221238] device_driver_attach+0x4f/0x60
>> [ 41.221611] bind_store+0xc9/0x110
>> [ 41.221919] kernfs_fop_write+0x116/0x190
>> [ 41.222326] vfs_write+0xa5/0x1a0
>> [ 41.222626] ksys_write+0x59/0xd0
>> [ 41.222927] do_syscall_64+0x5b/0x180
>> [ 41.223264] entry_SYSCALL_64_after_hwframe+0x44/0xa9
>> [ 41.223714] RIP: 0033:0x7f30865d0ed8
>> [ 41.224037] Code: 89 02 48 c7 c0 ff ff ff ff eb b3 0f 1f 80 00 00 00 00 f3 0f 1e fa 48 8d 05 45 78 0d 00 8b 00 85 c0 75 17 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 58 c3 0f 1f 80 00 00 00 00 41 54 49 89 d4 55
>> [ 41.225920] RSP: 002b:00007fffe5d30a78 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
>> [ 41.226608] RAX: ffffffffffffffda RBX: 000055e824d07f40 RCX: 00007f30865d0ed8
>> [ 41.227242] RDX: 0000000000000007 RSI: 000055e824d07f40 RDI: 0000000000000004
>> [ 41.227870] RBP: 0000000000000007 R08: 0000000000000007 R09: 0000000000000006
>> [ 41.228753] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000004
>> [ 41.229419] R13: 00007f30862ab528 R14: 0000000000000001 R15: 000055e824d07f40
>>
>> If a namespace is destroyed and the zone shrunk while another namespace is
>> being created and its memmap initialized, the memmap ends up being
>> initialized outside the zone, which triggers
>> VM_BUG_ON_PAGE(!zone_spans_pfn(page_zone(page), pfn), page) in
>> set_pfnblock_flags_mask().
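
To make the interleaving concrete, a rough sketch of the race (call chains
abbreviated and paraphrased from the 5.4 sources, not quoted verbatim).
zone_spans_pfn() simply checks that the pfn lies within
[zone->zone_start_pfn, zone->zone_start_pfn + zone->spanned_pages), so once a
concurrent teardown has shrunk the zone span, the still-running memmap
initialization trips the VM_BUG_ON_PAGE():

        CPU A: namespace create             CPU B: concurrent namespace destroy
        devm_memremap_pages()
          memremap_pages()
            memmap_init_zone_device()
              ...                           memunmap_pages()
                                              arch_remove_memory()
                                                __remove_pages()
                                                  ...
                                                    shrink_zone_span()
                                                    (zone span no longer
                                                     covers CPU A's range)
              set_pfnblock_flags_mask()
                zone_spans_pfn() fails -> VM_BUG_ON_PAGE() -> panic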
>
> Does that happen with -next as well? There, we currently don't shrink the ZONE_DEVICE zone anymore.

I checked the patch below and confirmed that the panic doesn't occur with the linux-next kernel.
https://lore.kernel.org/linux-mm/20191006085646.5768-6-david@redhat.com/

Thank you for your information.

Thanks,
Toshiki Fukasawa