From: Stephane Eranian <>
Subject: [PATCH 3/8] perf,x86: add uvirt_to_phys_nmi helper function
Date: Fri, 21 Jun 2013 16:20:43 +0200
Add a function to convert a user-level virtual address to its physical address, to be used by the perf_events memory access sampling feature.
Signed-off-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Stephane Eranian <eranian@google.com>
---
 arch/x86/include/asm/uaccess.h |  1 +
 arch/x86/lib/usercopy.c        | 43 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 44 insertions(+)
diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
index 5ee2687..4c9a102 100644
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -513,6 +513,7 @@ struct __large_struct { unsigned long buf[100]; };
 
 extern unsigned long
 copy_from_user_nmi(void *to, const void __user *from, unsigned long n);
+extern phys_addr_t uvirt_to_phys_nmi(const void __user *address);
 extern __must_check long
 strncpy_from_user(char *dst, const char __user *src, long count);
 
diff --git a/arch/x86/lib/usercopy.c b/arch/x86/lib/usercopy.c
index 4f74d94..3f19023 100644
--- a/arch/x86/lib/usercopy.c
+++ b/arch/x86/lib/usercopy.c
@@ -47,3 +47,46 @@ copy_from_user_nmi(void *to, const void __user *from, unsigned long n)
 	return len;
 }
 EXPORT_SYMBOL_GPL(copy_from_user_nmi);
+
+/*
+ * Best effort, NMI-safe GUP-fast-based user-virtual to physical translation.
+ *
+ * Does not really belong in "usercopy.c", but kept here for comparison with
+ * copy_from_user_nmi() above.
+ *
+ * __get_user_pages_fast() may fail at awkward moments e.g. while transparent
+ * hugepage is being split.  And at present it happens to SetPageReferenced():
+ * not really a problem when this is used for profiling pages which are being
+ * referenced, but should be fixed if this were to be used any more widely.
+ *
+ * At time of writing, __get_user_pages_fast() is supported by mips, s390, sh
+ * and x86 (with a weak fallback returning 0 on other architectures): we have
+ * not established whether it is NMI-safe on any other architecture than x86.
+ */
+phys_addr_t uvirt_to_phys_nmi(const void __user *address)
+{
+	unsigned long vaddr = (unsigned long)address;
+	phys_addr_t paddr = vaddr & ~PAGE_MASK;
+	struct page *page;
+
+	if (!current->mm)
+		return -1;
+
+	if (__range_not_ok(address, 1, TASK_SIZE))
+		return -1;
+
+	if (!__get_user_pages_fast(vaddr, 1, 0, &page))
+		return -1;
+
+	paddr += (phys_addr_t)page_to_pfn(page) << PAGE_SHIFT;
+
+	/*
+	 * If called under NMI, this put_page(page) cannot be its final
+	 * put_page (which would indeed be problematic): a racing munmap
+	 * on another CPU cannot free the page until it has flushed TLB
+	 * on our CPU, and that must wait for us to leave NMI.
+	 */
+	put_page(page);
+
+	return paddr;
+}
-- 
1.8.1.2
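
[Editor's note, not part of the original posting] For illustration only: a minimal sketch of how a perf_events sampling path might consume the new helper. The caller name sample_phys_addr() and the choice of 0 as a "no physical address" value are assumptions made up for this example; the only interface taken from the patch is uvirt_to_phys_nmi() and its -1 ("all ones") failure return.

#include <linux/types.h>
#include <asm/uaccess.h>	/* declares uvirt_to_phys_nmi() after this patch */

/* Hypothetical caller, for illustration only -- not part of this patch. */
static u64 sample_phys_addr(const void __user *uaddr)
{
	phys_addr_t paddr = uvirt_to_phys_nmi(uaddr);

	/*
	 * uvirt_to_phys_nmi() returns -1 (all ones) when it cannot translate:
	 * no current->mm, address beyond TASK_SIZE, or a GUP-fast miss.
	 */
	if (paddr == (phys_addr_t)-1)
		return 0;	/* record "no physical address" in the sample */

	return (u64)paddr;
}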