Subject: Re: [PATCH 2/2] mm/mprotect: do not flush on permission promotion

>>
>> Any numbers would be helpful.
>>
>>> If you want, I will write a micro-benchmark and give you numbers.
>>> If you are looking for further optimizations (although you did not
>>> indicate so), such as doing the TLB batching from do_mprotect_key()
>>> (i.e. batching across VMAs), we can discuss them and apply them on
>>> top of these patches.
>>
>> I think this patch itself is sufficient if we can show a benefit; I do wonder whether existing benchmarks could already show one, as I feel they should if this makes a difference. Excessive mprotect() usage (protect<>unprotect) isn't unusual.
>
> I do not know of a concrete benchmark (other than my workload, which I cannot share right now) that does excessive mprotect() in a way that would measurably affect overall performance. I would argue that many optimizations in the kernel would not show a measurable benefit by themselves on common macrobenchmarks either.
>
> Anyhow, per your request I created a small micro-benchmark that runs mprotect(PROT_READ) and mprotect(PROT_READ|PROT_WRITE) in a loop and measured the time the latter call took (the case in which no write-protection, and hence no TLB flush, is needed). I ran the benchmark on a VM (guest) on top of KVM.
>
> The cost (cycles) per mprotect(PROT_READ|PROT_WRITE) operation:
>
>              1 thread    2 threads
>              --------    ---------
> w/patch:         2496         2505
> w/o patch:       5342        10458
>

For my taste, the above numbers are sufficient, thanks!

> [ The results for 1 thread might seem strange, as one could expect the overhead in this case to be no more than ~250 cycles, which is the time a TLB invalidation of a single PTE takes. Yet this overhead is probably related to “page fracturing”, which happens when the VM memory is backed by 4KB pages. In such scenarios, a single PTE invalidation in the VM can, on Intel, cause a full TLB flush. The full flush is needed to ensure that if the invalidated address was mapped through a huge page in the VM, any relevant 4KB mapping that is cached in the TLB (after fracturing due to the 4KB GPA->HPA mapping) would be removed. ]

Very nice analysis :)

>
> Let me know if you want me to share the micro-benchmark with you. I am not going to mention the results in the commit log, because I think the overhead of unnecessary TLB invalidation is well established.
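
[ For illustration, a minimal sketch of what such a micro-benchmark might
look like follows. The mapping size (NPAGES), iteration count (ITERS), and
timing via clock_gettime() are assumptions of this sketch, not the actual
benchmark referenced above: ]

/*
 * Sketch of a micro-benchmark timing mprotect() permission promotion.
 * NPAGES, ITERS and the use of clock_gettime() are illustrative
 * assumptions, not the benchmark used in this thread.
 */
#include <stdio.h>
#include <time.h>
#include <sys/mman.h>

#define NPAGES	512
#define ITERS	100000L
#define PAGE	4096L

int main(void)
{
	size_t len = NPAGES * PAGE;
	struct timespec t0, t1;
	long i, total_ns = 0;
	char *buf;

	buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Touch every page so the PTEs are actually populated. */
	for (i = 0; i < (long)len; i += PAGE)
		buf[i] = 1;

	for (i = 0; i < ITERS; i++) {
		/* Demotion: removing write permission requires a flush. */
		mprotect(buf, len, PROT_READ);

		/*
		 * Promotion: the call being timed; with the patch it
		 * should not need a TLB flush.
		 */
		clock_gettime(CLOCK_MONOTONIC, &t0);
		mprotect(buf, len, PROT_READ | PROT_WRITE);
		clock_gettime(CLOCK_MONOTONIC, &t1);

		total_ns += (t1.tv_sec - t0.tv_sec) * 1000000000L +
			    (t1.tv_nsec - t0.tv_nsec);
	}

	printf("avg ns per mprotect(PROT_READ|PROT_WRITE): %ld\n",
	       total_ns / ITERS);
	munmap(buf, len);
	return 0;
}

[ Timing only the promotion call mirrors the per-operation numbers above;
a cycle counter would match the reported units more directly, but
wall-clock nanoseconds show the same relative effect. ]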

Just let me clarify why I am asking at all. It could be that:

a) The optimization is effective and applicable to many workloads
b) The optimization is effective and applicable to some workloads
("micro-benchmark")
c) The optimization is ineffective
d) The optimization is wrong

IMHO: We can rule out d) by review and tests. We can easily rule out c)
by simple benchmarks.

Maybe extend the patch description by something like:

"The benefit of this optimization can already be visible when doing
mprotect(PROT_READ) -> mprotect(PROT_READ|PROT_WRITE) on a single
thread, because we end up requiring basically no TLB flushes. The
optimization gets even more significant with more threads. See [1] for
simple micro benchmark results."

Something like that would be good enough for my taste.

--
Thanks,

David / dhildenb
