Subject: [PATCH 10/19] x86, mpx: Trace the attempts to find bounds tables

From: Dave Hansen <dave.hansen@linux.intel.com>

There are two different events being traced here. They do
similar things, so they share a trace "EVENT_CLASS" and are
presented together.

1. Trace when MPX is zapping pages "mpx_unmap_zap":

When MPX cannot free an entire bounds table, it will
instead try to zap unused parts of a bounds table to free
the backing memory. This decreases RSS (resident set
size) without decreasing the virtual space allocated
for bounds tables.

2. Trace attempts to find bounds tables "mpx_unmap_search":

This event traces any time we go looking to unmap a
bounds table for a given virtual address range. It is
useful for telling the times the kernel merely "tried" to
free a bounds table apart from the times it actually
succeeded in finding one.

An attempt might fail if the kernel finds that the table
is shared with an adjacent VMA which is not being unmapped.
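
For completeness, a minimal sketch of how these events could
be enabled from userspace, assuming tracefs is mounted in the
usual location and that this header's TRACE_SYSTEM is "mpx":

	# cd /sys/kernel/debug/tracing
	# echo 1 > events/mpx/mpx_unmap_zap/enable
	# echo 1 > events/mpx/mpx_unmap_search/enable
	# cat trace_pipe

Each event prints the [start:end] virtual address range it
operated on, per the TP_printk() format below.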

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
---

b/arch/x86/include/asm/trace/mpx.h | 32 ++++++++++++++++++++++++++++++++
b/arch/x86/mm/mpx.c | 2 ++
2 files changed, 34 insertions(+)

diff -puN arch/x86/include/asm/trace/mpx.h~mpx-trace_unmap_zap arch/x86/include/asm/trace/mpx.h
--- a/arch/x86/include/asm/trace/mpx.h~mpx-trace_unmap_zap 2015-05-27 09:32:18.350617416 -0700
+++ b/arch/x86/include/asm/trace/mpx.h 2015-05-27 09:32:18.355617642 -0700
@@ -62,6 +62,38 @@ TRACE_EVENT(bounds_exception_mpx,
 		__entry->bndstatus)
 );

+DECLARE_EVENT_CLASS(mpx_range_trace,
+
+	TP_PROTO(unsigned long start,
+		 unsigned long end),
+	TP_ARGS(start, end),
+
+	TP_STRUCT__entry(
+		__field(unsigned long, start)
+		__field(unsigned long, end)
+	),
+
+	TP_fast_assign(
+		__entry->start = start;
+		__entry->end = end;
+	),
+
+	TP_printk("[0x%p:0x%p]",
+		(void *)__entry->start,
+		(void *)__entry->end
+	)
+);
+
+DEFINE_EVENT(mpx_range_trace, mpx_unmap_zap,
+	TP_PROTO(unsigned long start, unsigned long end),
+	TP_ARGS(start, end)
+);
+
+DEFINE_EVENT(mpx_range_trace, mpx_unmap_search,
+	TP_PROTO(unsigned long start, unsigned long end),
+	TP_ARGS(start, end)
+);
+
#else

/*
diff -puN arch/x86/mm/mpx.c~mpx-trace_unmap_zap arch/x86/mm/mpx.c
--- a/arch/x86/mm/mpx.c~mpx-trace_unmap_zap 2015-05-27 09:32:18.352617507 -0700
+++ b/arch/x86/mm/mpx.c 2015-05-27 09:32:18.356617687 -0700
@@ -668,6 +668,7 @@ static int zap_bt_entries(struct mm_stru

 		len = min(vma->vm_end, end) - addr;
 		zap_page_range(vma, addr, len, NULL);
+		trace_mpx_unmap_zap(addr, addr+len);

 		vma = vma->vm_next;
 		addr = vma->vm_start;
@@ -840,6 +841,7 @@ static int mpx_unmap_tables(struct mm_st
 	long __user *bd_entry, *bde_start, *bde_end;
 	unsigned long bt_addr;

+	trace_mpx_unmap_search(start, end);
 	/*
 	 * "Edge" bounds tables are those which are being used by the region
 	 * (start -> end), but that may be shared with adjacent areas. If they
_
