From: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Date: Tue, 25 Aug 2020
Subject: Re: [PATCH 1/2 v3] rseq/membarrier: add MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ
----- On Aug 20, 2020, at 1:42 PM, Peter Oskolkov <posk@posk.io> wrote:

> On Wed, Aug 12, 2020 at 12:44 PM Mathieu Desnoyers
> <mathieu.desnoyers@efficios.com> wrote:
>>
> [...]
>>
>> > One way of doing what you suggest is to allow some commands to be bitwise-ORed.
>> >
>> > So, for example, the user could call
>> >
>> > membarrier(CMD_PRIVATE_EXPEDITED_SYNC_CORE | CMD_PRIVATE_EXPEDITED_RSEQ, cpu_id)
>> >
>> > Is this what you have in mind?
>>
>> Not really. This would not take care of the fact that we would end up
>> multiplying the number of commands as we allow combinations. E.g. if we
>> ever want to have RSEQ work in private and global, and in non-expedited
>> and expedited, we end up needing:
>>
>> - CMD_REGISTER_PRIVATE_EXPEDITED_RSEQ
>> - CMD_PRIVATE_EXPEDITED_RSEQ
>> - CMD_PRIVATE_RSEQ
>> - CMD_REGISTER_GLOBAL_EXPEDITED_RSEQ
>> - CMD_GLOBAL_EXPEDITED_RSEQ
>> - CMD_GLOBAL_RSEQ
>>
>> The only thing we would save by OR'ing it with the SYNC_CORE command is
>> the additional list:
>>
>> - CMD_REGISTER_PRIVATE_EXPEDITED_RSEQ_SYNC_CORE
>> - CMD_PRIVATE_EXPEDITED_RSEQ_SYNC_CORE
>> - CMD_PRIVATE_RSEQ_SYNC_CORE
>> - CMD_REGISTER_GLOBAL_EXPEDITED_RSEQ_SYNC_CORE
>> - CMD_GLOBAL_EXPEDITED_RSEQ_SYNC_CORE
>> - CMD_GLOBAL_RSEQ_SYNC_CORE
>>
>> But unless we receive feedback that doing a membarrier with RSEQ+sync_core
>> all in one go is a significant use-case, I am tempted to leave out that
>> scenario for now. If we go for new commands, this means we could add (for
>> private-expedited-rseq):
>>
>> - MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_RSEQ,
>> - MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ,
>>
>> I do however have use-cases for using RSEQ across shared memory (between
>> processes). Not currently for a rseq-fence, but for rseq acting as per-cpu
>> atomic operations. If I ever end up needing rseq-fence across shared memory,
>> that would result in the following new commands:
>>
>> - MEMBARRIER_CMD_REGISTER_GLOBAL_EXPEDITED_RSEQ,
>> - MEMBARRIER_CMD_GLOBAL_EXPEDITED_RSEQ,
>>
>> The remaining open question is whether it would be OK to define a new
>> membarrier flag=MEMBARRIER_FLAG_CPU, which would expect an additional
>> @cpu parameter.
>
> Hi Mathieu,
>
> I do not think there is any reason to wait for additional feedback, so I believe
> we should finalize the API/ABI.
>
> I see two issues to resolve:
> 1: how to combine commands:
> - you do not like adding new commands that are combinations of existing ones;
> - you do not like ORing.
> => I'm not sure what other options we have here?

Concretely speaking, let's just add a new membarrier command for the use-case
at hand. All the other approaches we have discussed (using a flag, or OR'ing
many commands together) are tricky to expose in a way that is discoverable by
user-space through the QUERY command.
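
To make this concrete, here is a minimal user-space sketch of the discovery
path (assuming the new command ends up with its own bit in the QUERY mask like
the existing commands, and is defined in the patched <linux/membarrier.h>;
there is no glibc wrapper, so the raw syscall is used):

#include <linux/membarrier.h>	/* assumes the patched header defining the new command */
#include <sys/syscall.h>
#include <unistd.h>

/* Return non-zero if the kernel advertises the proposed RSEQ command. */
static int have_membarrier_rseq(void)
{
	long mask = syscall(__NR_membarrier, MEMBARRIER_CMD_QUERY, 0);

	if (mask < 0)
		return 0;
	return !!(mask & MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ);
}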

>
> 2: should @flags be repurposed for cpu_id, or MEMBARRIER_FLAG_CPU
> added with a new syscall parameter.
> => I'm still not sure a new parameter can be cleanly added, but I can try
> it in the next patchset if you prefer it this way.

Yes please, it's easy to implement and we'll quickly see if anyone yells. If
it turns out to be a bad idea, you can always blame me. ;-)
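
For reference, a rough sketch of what the kernel side could look like once the
3rd parameter is added (the helper name below is hypothetical, only to show
where the flag check would sit; the actual patch decides how cpu_id is plumbed
through):

/* Before: SYSCALL_DEFINE2(membarrier, int, cmd, int, flags) */
SYSCALL_DEFINE3(membarrier, int, cmd, unsigned int, flags, int, cpu_id)
{
	switch (cmd) {
	case MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ:
		/* Only look at cpu_id when the new flag is set, so existing
		 * callers (which pass flags == 0) are unaffected. */
		if (!(flags & MEMBARRIER_FLAG_CPU))
			cpu_id = -1;	/* e.g. -1 meaning "all CPUs" */
		return membarrier_private_expedited_rseq(cpu_id);	/* hypothetical helper */
	/* ... existing commands keep ignoring cpu_id ... */
	}
	return -EINVAL;
}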

In summary:

- We add 2 new membarrier commands:
  - MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_RSEQ
  - MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ

- We reserve a membarrier flag:

enum membarrier_flag {
	MEMBARRIER_FLAG_CPU = (1 << 0),
};

So for MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ, if flags & MEMBARRIER_FLAG_CPU
is set, then we expect the additional "int cpu" parameter (3rd syscall
parameter); otherwise the cpu parameter is ignored.
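
To illustrate the calling convention, a minimal user-space sketch using the
raw syscall (assuming the patched <linux/membarrier.h> that defines the
proposed commands and flag):

#include <linux/membarrier.h>	/* assumes the proposed commands and flag are defined */
#include <sys/syscall.h>
#include <unistd.h>

/* Once per process, before issuing rseq fences. */
static int register_rseq_membarrier(void)
{
	return syscall(__NR_membarrier,
		       MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_RSEQ, 0, 0);
}

/* Fence only the rseq critical sections running on @cpu. */
static int rseq_fence_cpu(int cpu)
{
	return syscall(__NR_membarrier, MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ,
		       MEMBARRIER_FLAG_CPU, cpu);
}

/* Fence all CPUs running threads of this process (cpu parameter unused). */
static int rseq_fence_all(void)
{
	return syscall(__NR_membarrier, MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ, 0, 0);
}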

Are you OK with this approach?

Thanks,

Mathieu

>
> Please let me know your thoughts.
>
> Thanks,
> Peter

--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com
