Subject: Re: [PATCH -next v4] mm/hotplug: silence a lockdep splat with printk()


> On Jan 17, 2020, at 2:15 PM, David Hildenbrand <david@redhat.com> wrote:
>
>
>
>> On 17.01.2020 at 19:49, Qian Cai <cai@lca.pw> wrote:
>>
>>
>>
>>> On Jan 17, 2020, at 10:46 AM, Michal Hocko <mhocko@kernel.org> wrote:
>>>
>>>> On Fri 17-01-20 10:05:12, Qian Cai wrote:
>>>>
>>>>
>>>>> On Jan 17, 2020, at 9:39 AM, Michal Hocko <mhocko@kernel.org> wrote:
>>>>>
>>>>> Thanks a lot. Having it in a separate patch would be great.
>>>>
>>>> I was thinking about removing that WARN in this v5 patch as well,
>>>> so there is less churn from touching the same function again. However, I
>>>> am fine either way, so just shout if you feel strongly about a
>>>> separate patch.
>>>
>>> I hope you meant moving rather than removing ;). The warning is useful
>>> because we shouldn't see unmovable pages in the movable zone. And a
>>> separate patch makes more sense because the justification is slightly
>>> different. We do not want to give userspace a way to trigger the
>>> warning - it shouldn't be possible, but still. Only the offlining path
>>> should complain.
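
For illustration, a minimal sketch of that idea: do the check at the
isolation call site and gate it on a dedicated offlining flag (MEMORY_OFFLINE
here), so a userspace-reachable scan can never fire it. The helper below is
hypothetical and only shows the intent, it is not part of any patch:

static void warn_movable_zone_unmovable(struct zone *zone, int isol_flags)
{
	/* only the memory offlining path is allowed to complain */
	if (isol_flags & MEMORY_OFFLINE)
		WARN_ON_ONCE(zone_idx(zone) == ZONE_MOVABLE);
}
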
>>
>> Something like this?
>>
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index 621716a25639..32c854851e1f 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -8307,7 +8307,6 @@ struct page *has_unmovable_pages(struct zone *zone, struct page *page,
>>  	}
>>  	return NULL;
>>  unmovable:
>> -	WARN_ON_ONCE(zone_idx(zone) == ZONE_MOVABLE);
>>  	return pfn_to_page(pfn + iter);
>>  }
>>
>> diff --git a/mm/page_isolation.c b/mm/page_isolation.c
>> index e70586523ca3..08571b515d9f 100644
>> --- a/mm/page_isolation.c
>> +++ b/mm/page_isolation.c
>> @@ -54,9 +54,11 @@ static int set_migratetype_isolate(struct page *page, int migratetype, int isol_
>>
>>  out:
>>  	spin_unlock_irqrestore(&zone->lock, flags);
>> +
>>  	if (!ret)
>>  		drain_all_pages(zone);
>>  	else if ((isol_flags & REPORT_FAILURE) && unmovable)
>
> We have a dedicated flag for the offlining part.

This should do the trick, then:

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 621716a25639..4bb3e503cb9e 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8231,7 +8231,7 @@ struct page *has_unmovable_pages(struct zone *zone, struct page *page,
 		if (is_migrate_cma(migratetype))
 			return NULL;
 
-		goto unmovable;
+		return page;
 	}
 
 	for (; iter < pageblock_nr_pages; iter++) {
@@ -8241,7 +8241,7 @@ struct page *has_unmovable_pages(struct zone *zone, struct page *page,
 		page = pfn_to_page(pfn + iter);
 
 		if (PageReserved(page))
-			goto unmovable;
+			return page;
 
 		/*
 		 * If the zone is movable and we have ruled out all reserved
@@ -8303,12 +8303,9 @@ struct page *has_unmovable_pages(struct zone *zone, struct page *page,
 		 * is set to both of a memory hole page and a _used_ kernel
 		 * page at boot.
 		 */
-		goto unmovable;
+		return pfn_to_page(pfn + iter);
 	}
 	return NULL;
-unmovable:
-	WARN_ON_ONCE(zone_idx(zone) == ZONE_MOVABLE);
-	return pfn_to_page(pfn + iter);
 }
 
 #ifdef CONFIG_CONTIG_ALLOC
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index e70586523ca3..e140eaa901b2 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -54,14 +54,20 @@ static int set_migratetype_isolate(struct page *page, int migratetype, int isol_
 
 out:
 	spin_unlock_irqrestore(&zone->lock, flags);
-	if (!ret)
+
+	if (!ret) {
 		drain_all_pages(zone);
-	else if ((isol_flags & REPORT_FAILURE) && unmovable)
-		/*
-		 * printk() with zone->lock held will guarantee to trigger a
-		 * lockdep splat, so defer it here.
-		 */
-		dump_page(unmovable, "unmovable page");
+	} else {
+		if (isol_flags & MEMORY_OFFLINE)
+			WARN_ON_ONCE(zone_idx(zone) == ZONE_MOVABLE);
+
+		if ((isol_flags & REPORT_FAILURE) && unmovable)
+			/*
+			 * printk() with zone->lock held will likely trigger a
+			 * lockdep splat, so defer it here.
+			 */
+			dump_page(unmovable, "unmovable page");
+	}
 
 	return ret;
 }
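
For context, the WARN_ON_ONCE() above can then only fire for offlining, since
MEMORY_OFFLINE is only passed in from the memory offlining path. Roughly (from
memory of the call sites around this time, simplified, so treat it as a sketch):

	/* __offline_pages() in mm/memory_hotplug.c isolates with the flag set,
	 * so an isolation failure there may warn and dump the page: */
	ret = start_isolate_page_range(start_pfn, end_pfn, MIGRATE_MOVABLE,
				       MEMORY_OFFLINE | REPORT_FAILURE);

	/* alloc_contig_range() in mm/page_alloc.c isolates without it, so
	 * those failures stay silent: */
	ret = start_isolate_page_range(pfn_max_align_down(start),
				       pfn_max_align_up(end), migratetype, 0);
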
>
>> +		WARN_ON_ONCE(zone_idx(zone) == ZONE_MOVABLE);
>>  		/*
>>  		 * printk() with zone->lock held will guarantee to trigger a
>>  		 * lockdep splat, so defer it here.
>>
>
> So, are we fine with unmovable data ending up in ZONE_MOVABLE as long as we can offline it?
>
> This might make my life in virtio-mem a little easier (I can unplug chunks falling into ZONE_MOVABLE).
