Subject: [PATCH 06/16] mm/pagemap: Cleanup PREEMPT_COUNT leftovers
From: Thomas Gleixner <tglx@linutronix.de>

CONFIG_PREEMPT_COUNT is now unconditionally enabled and the config option
will be removed. Clean up the leftovers before doing so.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
---
include/linux/pagemap.h | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
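
Note for reviewers: the old and new assertion conditions are equivalent once
the preempt count is always maintained. In the kernel, in_atomic() tests for a
non-zero preempt count and preemptible() tests for "preempt count zero and
interrupts enabled", so !in_atomic() && !irqs_disabled() collapses to
preemptible(); without CONFIG_PREEMPT_COUNT, preemptible() was hard-wired to 0,
which is why the old check needed the #ifdef. The sketch below is not kernel
code: the helpers are simplified stand-ins for the definitions in
<linux/preempt.h> and <linux/irqflags.h>, and it only exercises the equivalence
over a few states.

/*
 * Userspace sketch, not kernel code: preempt_count(), irqs_disabled(),
 * in_atomic() and preemptible() below are simplified stand-ins for the
 * kernel helpers, used only to show that the old assertion condition
 * equals the new one when the preempt count is always available.
 */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

static unsigned int preempt_count_val;	/* models the per-CPU preempt count */
static bool irqs_off;			/* models the local interrupt state */

static unsigned int preempt_count(void) { return preempt_count_val; }
static bool irqs_disabled(void) { return irqs_off; }

/* in_atomic(): preempt count is non-zero */
static bool in_atomic(void) { return preempt_count() != 0; }

/* preemptible(): preempt count is zero and interrupts are enabled */
static bool preemptible(void)
{
	return preempt_count() == 0 && !irqs_disabled();
}

int main(void)
{
	for (unsigned int pc = 0; pc <= 2; pc++) {
		for (int irq = 0; irq <= 1; irq++) {
			preempt_count_val = pc;
			irqs_off = irq;
			/* old check: !in_atomic() && !irqs_disabled() */
			assert((!in_atomic() && !irqs_disabled()) == preemptible());
		}
	}
	printf("old and new VM_BUG_ON() conditions agree\n");
	return 0;
}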

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index c77b7c31b2e4..cbfbe2bcca75 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -204,9 +204,7 @@ void release_pages(struct page **pages, int nr);
 static inline int __page_cache_add_speculative(struct page *page, int count)
 {
 #ifdef CONFIG_TINY_RCU
-# ifdef CONFIG_PREEMPT_COUNT
-	VM_BUG_ON(!in_atomic() && !irqs_disabled());
-# endif
+	VM_BUG_ON(preemptible());
 	/*
 	 * Preempt must be disabled here - we rely on rcu_read_lock doing
 	 * this for us.
--
2.20.1