Date: 2020-12-01
From: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Subject: Re: [PATCH 1/3] x86/membarrier: Get rid of a dubious optimization
----- On Nov 30, 2020, at 12:50 PM, Andy Lutomirski luto@kernel.org wrote:
[...]
> diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
> index 11666ba19b62..dabe683ab076 100644
> --- a/arch/x86/mm/tlb.c
> +++ b/arch/x86/mm/tlb.c
> @@ -474,8 +474,10 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
>  	/*
>  	 * The membarrier system call requires a full memory barrier and
>  	 * core serialization before returning to user-space, after
> -	 * storing to rq->curr. Writing to CR3 provides that full
> -	 * memory barrier and core serializing instruction.
> +	 * storing to rq->curr, when changing mm. This is because membarrier()
> +	 * sends IPIs to all CPUs that are in the target mm, but another
> +	 * CPU switch to the target mm in the mean time.

The sentence "This is because membarrier() sends IPIs to all CPUs that are in
the target mm, but another CPU switch to the target mm in the mean time." seems
rather unclear. It could be clarified with, e.g.:

"This is because membarrier() sends IPIs to all CPUs that are in the target mm
to make them issue memory barriers. However, if another CPU switches to/from the
target mm concurrently with membarrier(), it can cause that CPU not to receive the
IPI when it really should issue a memory barrier."
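
To make the race concrete, here is a condensed sketch of the two sides
involved (paraphrased from memory, not verbatim kernel code: the first
half approximates the private-expedited path in kernel/sched/membarrier.c,
the second half the rq->curr update in __schedule()/context_switch();
cpu, tmpmask, mm, rq, prev and next are as in those functions):

	/* membarrier() side: sample rq->curr on each CPU to decide
	 * who gets an IPI. */
	for_each_online_cpu(cpu) {
		struct task_struct *p;

		p = rcu_dereference(cpu_rq(cpu)->curr);
		if (p && p->mm == mm)
			__cpumask_set_cpu(cpu, tmpmask);
	}
	smp_call_function_many(tmpmask, ipi_mb, NULL, 1);

	/* Scheduler side, on a CPU concurrently switching to @mm: */
	RCU_INIT_POINTER(rq->curr, next);	/* store to rq->curr */
	/*
	 * If membarrier() sampled rq->curr before this store, this CPU
	 * is not in tmpmask and receives no IPI, so switch_mm_irqs_off()
	 * itself must provide the full barrier and core serialization
	 * (the CR3 write) before the task returns to user-space.
	 */
	switch_mm_irqs_off(prev->active_mm, next->mm, next);

Whichever side wins the race, the barrier is covered: if membarrier()
samples rq->curr after the store, the CPU receives the IPI; if it samples
before, the CPU receives no IPI and the CR3 write has to provide the full
barrier and core serialization instead. That is why the CR3 write cannot
be treated as a mere optimization when changing mm.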

For the rest of this patch:

Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>

Thanks!

Mathieu


--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com