From: Mathieu Desnoyers
Date: Wed, 26 Jul 2017
Subject: Re: [PATCH tip/core/rcu 4/5] sys_membarrier: Add expedited option
----- On Jul 26, 2017, at 2:30 PM, Paul E. McKenney paulmck@linux.vnet.ibm.com wrote:

> On Wed, Jul 26, 2017 at 06:01:15PM +0000, Mathieu Desnoyers wrote:
>> ----- On Jul 26, 2017, at 11:42 AM, Paul E. McKenney paulmck@linux.vnet.ibm.com
>> wrote:
>>
>> > On Wed, Jul 26, 2017 at 09:46:56AM +0200, Peter Zijlstra wrote:
>> >> On Tue, Jul 25, 2017 at 10:50:13PM +0000, Mathieu Desnoyers wrote:
>> >> > This would implement a MEMBARRIER_CMD_PRIVATE_EXPEDITED (or such) flag
>> >> > for expedited process-local effect. This differs from the "SHARED" flag,
>> >> > since the SHARED flag affects threads accessing memory mappings shared
>> >> > across processes as well.
>> >> >
>> >> > I wonder if we could create a MEMBARRIER_CMD_SHARED_EXPEDITED behavior
>> >> > by iterating on all memory mappings mapped into the current process,
>> >> > building a cpumask based on the union of all mm masks encountered, and
>> >> > then sending the IPI to all CPUs belonging to that cpumask. Or am I
>> >> > missing something obvious?
>> >>
>> >> I would readily object to such a beast. You far too quickly end up
>> >> having to IPI everybody because of some stupid shared map or something
>> >> (yes I know, normal DSOs are mapped private).
>> >
>> > Agreed, we should keep things simple to start with. The user can always
>> > invoke sys_membarrier() from each process.
>>
>> Another alternative for a MEMBARRIER_CMD_SHARED_EXPEDITED would be rate-limiting
>> per thread. For instance, we could add a new "ulimit" that would bound the
>> number of expedited membarrier calls a thread can issue per millisecond, and
>> switch to synchronize_sched() for the rest of the time slot whenever a thread
>> goes beyond that limit.
>>
>> An RT system that really cares about not having userspace send IPIs
>> to all CPUs could set the ulimit value to 0, which would always use
>> synchronize_sched().
>>
>> Thoughts?
>
> The patch I posted reverts to synchronize_sched() in kernels booted with
> rcupdate.rcu_normal=1. ;-)
>
> But who is pushing for multiple-process sys_membarrier()? Everyone I
> have talked to is OK with it being local to the current process.

I guess I'm probably the guilty one intending to do weird stuff in userspace ;)

Here are my two use-cases:

* a new multi-process liburcu flavor, useful when e.g. one set of processes
is responsible for updating a shared memory data structure and a separate set
of processes reads that data structure. The readers can be killed without ill
effect on the other processes. The synchronization could be done by one
multi-process liburcu flavor per reader process "group" (a rough sketch of
the usage pattern follows after this list).

* lttng-ust user-space ring buffers (shared across processes).
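
For the multi-process liburcu use-case, here is a rough, untested sketch of
the usage pattern I have in mind, loosely modeled on liburcu's existing
sys_membarrier flavor. Everything in struct urcu_shared would live in the
shared mapping; MAX_READERS is a placeholder, and nesting of read-side
critical sections is not handled:

#include <linux/membarrier.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <stdatomic.h>

#define MAX_READERS	128			/* placeholder */

struct urcu_shared {
	_Atomic unsigned long gp_ctr;		/* grace-period counter */
	_Atomic unsigned long reader_ctr[MAX_READERS];	/* 0 == offline */
};

static int membarrier(int cmd, int flags)
{
	return syscall(__NR_membarrier, cmd, flags);
}

/* Read side: only compiler barriers, no barrier instruction. */
static void rcu_read_lock_shared(struct urcu_shared *s, int me)
{
	unsigned long gp = atomic_load_explicit(&s->gp_ctr,
						memory_order_relaxed);

	atomic_store_explicit(&s->reader_ctr[me], gp, memory_order_relaxed);
	atomic_signal_fence(memory_order_seq_cst);	/* barrier() */
}

static void rcu_read_unlock_shared(struct urcu_shared *s, int me)
{
	atomic_signal_fence(memory_order_seq_cst);	/* barrier() */
	atomic_store_explicit(&s->reader_ctr[me], 0, memory_order_relaxed);
}

/*
 * Write side: membarrier() promotes the readers' compiler barriers to
 * full memory barriers.  MEMBARRIER_CMD_SHARED is the only cross-process
 * command today, and it is as heavy as synchronize_sched(); the expedited
 * command discussed in this thread would go here instead.
 */
static void synchronize_rcu_shared(struct urcu_shared *s)
{
	unsigned long new_gp;
	int i;

	membarrier(MEMBARRIER_CMD_SHARED, 0);
	new_gp = atomic_fetch_add(&s->gp_ctr, 1) + 1;
	for (i = 0; i < MAX_READERS; i++)
		while (atomic_load(&s->reader_ctr[i]) != 0 &&
		       atomic_load(&s->reader_ctr[i]) != new_gp)
			;	/* real code would wait on a futex */
	membarrier(MEMBARRIER_CMD_SHARED, 0);
}

Note that a reader process dying inside a read-side critical section would
wedge the writer in this naive form; a real flavor would need to clean up
dead readers' slots, which is exactly why per-"group" instances make sense.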

Both rely on a shared memory mapping for communication between processes, and
I would like to be able to issue a sys_membarrier targeting all CPUs that may
currently touch the shared memory mapping.

I don't really need a system-wide effect, but I would like to be able to target
a shared memory mapping and efficiently do an expedited sys_membarrier on all
CPUs involved.
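
Concretely, I would like to be able to write something like the following,
where MEMBARRIER_CMD_SHARED_EXPEDITED is the command being discussed in this
thread; it does not exist in any released kernel, so the value below is a
placeholder, not ABI:

#include <sys/syscall.h>
#include <unistd.h>
#include <errno.h>
#include <stdio.h>

#define MEMBARRIER_CMD_SHARED_EXPEDITED	(1 << 2)	/* placeholder */

int main(void)
{
	/*
	 * Hypothetical: IPI-based barrier on every CPU that may be
	 * running a thread of a process sharing a mapping with us.
	 */
	if (syscall(__NR_membarrier, MEMBARRIER_CMD_SHARED_EXPEDITED, 0)) {
		if (errno == EINVAL)
			fprintf(stderr, "expedited membarrier unsupported\n");
		return 1;
	}
	return 0;
}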

With lttng-ust, the shared buffers can span 1000+ processes, so asking each
process to issue sys_membarrier itself would add significant overhead: it
would issue many needless memory barriers.

Thoughts?

Thanks,

Mathieu


--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com
