From: Ira Weiny <ira.weiny@intel.com>
Subject: [PATCH RFC PKS/PMEM 05/58] kmap: Introduce k[un]map_thread
Date: Fri, 9 Oct 2020

To correctly support the semantics of kmap() with kernel protection keys
(PKS), kmap() may be required to set the protections on multiple
processors (globally). Enabling PKS globally can be very expensive
depending on the requested operation. Furthermore, enabling a domain
globally reduces the protection afforded by PKS.

Most kmap() callers (approximately 209 of 229) use the mapping within a
single thread and have no need for the protection domain to be enabled
globally. However, the remaining callers do not follow this pattern and,
as best I can tell, expect the mapping to be 'global' and available to
any thread which may access the mapping.[1]
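
For reference, the dominant pattern looks something like the sketch
below (a hypothetical caller, not taken from an actual call site in
this series): the mapping is created, used, and torn down entirely
within one thread, so enabling the protection domain on the local CPU
would be sufficient.

	/*
	 * Hypothetical example of the common thread-local kmap() pattern;
	 * the mapping is never handed to another thread.
	 */
	static void copy_out_page(struct page *page, void *dst, size_t len)
	{
		void *src = kmap(page);		/* candidate for kmap_thread() */

		memcpy(dst, src, len);
		kunmap(page);			/* candidate for kunmap_thread() */
	}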

We do not anticipate global mappings to pmem; however, in general there
is a danger in changing the semantics of kmap(). Effectively, such a
change would cause an unresolved page fault with little to no
information about why the failure occurred.

To resolve this, a number of options were considered:

1) Attempt to change all the thread-local kmap() calls to kmap_atomic()[2]
2) Introduce a flags parameter to kmap() to indicate if the mapping
   should be global or not
3) Change ~20 call sites to 'kmap_global()' to indicate that they require
   a global enablement of the pages.
4) Change ~209 call sites to 'kmap_thread()' to indicate that the mapping
   is to be used within that thread of execution only

Option 1 is simply not feasible. Option 2 would require all of the call
sites of kmap() to change. Option 3 seems like a good minimal change but
there is a danger that new code may miss the semantic change of kmap()
and not get the behavior the developer intended. Therefore, option 4 was
chosen.

Subsequent patches will convert most (~90%) of the kmap() callers to
this new call, leaving about 10% of the existing kmap() callers to
enable PKS globally.
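
As an illustration of what those conversions look like, a representative
(hypothetical) before/after is sketched below; the call-site changes in
the later patches are mechanical changes of this form.

	/* Before: the mapping is used only by this thread, yet kmap()
	 * must enable the protection domain globally.
	 */
	addr = kmap(page);
	ret = do_something(addr, len);		/* do_something() is made up */
	kunmap(page);

	/* After: kmap_thread() keeps the enablement local to the current
	 * thread of execution.
	 */
	addr = kmap_thread(page);
	ret = do_something(addr, len);
	kunmap_thread(page);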

Cc: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 include/linux/highmem.h | 34 ++++++++++++++++++++++++++--------
 1 file changed, 26 insertions(+), 8 deletions(-)

diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 2a9806e3b8d2..ef7813544719 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -60,7 +60,7 @@ static inline void kmap_flush_tlb(unsigned long addr) { }
 #endif
 
 void *kmap_high(struct page *page);
-static inline void *kmap(struct page *page)
+static inline void *__kmap(struct page *page, bool global)
 {
 	void *addr;
 
@@ -74,20 +74,20 @@ static inline void *kmap(struct page *page)
 	 * Even non-highmem pages may have additional access protections which
 	 * need to be checked and potentially enabled.
 	 */
-	dev_page_enable_access(page, true);
+	dev_page_enable_access(page, global);
 	return addr;
 }
 
 void kunmap_high(struct page *page);
 
-static inline void kunmap(struct page *page)
+static inline void __kunmap(struct page *page, bool global)
 {
 	might_sleep();
 	/*
 	 * Even non-highmem pages may have additional access protections which
 	 * need to be checked and potentially disabled.
 	 */
-	dev_page_disable_access(page, true);
+	dev_page_disable_access(page, global);
 	if (!PageHighMem(page))
 		return;
 	kunmap_high(page);
@@ -160,10 +160,10 @@ static inline struct page *kmap_to_page(void *addr)
 
 static inline unsigned long totalhigh_pages(void) { return 0UL; }
 
-static inline void *kmap(struct page *page)
+static inline void *__kmap(struct page *page, bool global)
 {
 	might_sleep();
-	dev_page_enable_access(page, true);
+	dev_page_enable_access(page, global);
 	return page_address(page);
 }
 
@@ -171,9 +171,9 @@ static inline void kunmap_high(struct page *page)
 {
 }
 
-static inline void kunmap(struct page *page)
+static inline void __kunmap(struct page *page, bool global)
 {
-	dev_page_disable_access(page, true);
+	dev_page_disable_access(page, global);
 #ifdef ARCH_HAS_FLUSH_ON_KUNMAP
 	kunmap_flush_on_unmap(page_address(page));
 #endif
@@ -238,6 +238,24 @@ static inline void kmap_atomic_idx_pop(void)
 
 #endif
 
+static inline void *kmap(struct page *page)
+{
+	return __kmap(page, true);
+}
+static inline void kunmap(struct page *page)
+{
+	__kunmap(page, true);
+}
+
+static inline void *kmap_thread(struct page *page)
+{
+	return __kmap(page, false);
+}
+static inline void kunmap_thread(struct page *page)
+{
+	__kunmap(page, false);
+}
+
 /*
  * Prevent people trying to call kunmap_atomic() as if it were kunmap()
  * kunmap_atomic() should get the return value of kmap_atomic, not the page.
-- 
2.28.0.rc0.12.gb6a658bd00c9