Subject: Re: [RFC PATCH V2 13/22] x86/intel_rdt: Support schemata write - pseudo-locking core
Hi Thomas,

On 2/28/2018 10:39 AM, Thomas Gleixner wrote:
> On Tue, 27 Feb 2018, Reinette Chatre wrote:
>> On 2/27/2018 2:36 AM, Thomas Gleixner wrote:
>>> On Mon, 26 Feb 2018, Reinette Chatre wrote:
>>>> Moving to "exclusive" mode it appears that, when enabled for a resource
>>>> group, all domains of all resources are forced to have an "exclusive"
>>>> region associated with this resource group (closid). This is because the
>>>> schemata reflects the hardware settings of all resources and their
>>>> domains and the hardware does not accept a "zero" bitmask. A user thus
>>>> cannot just specify a single region of a particular cache instance as
>>>> "exclusive". Does this match your intention wrt "exclusive"?
>>>
>>> Interesting question. I really did not think about that yet.
>
> Second thoughts on that: I think for a start we can go the simple route and
> just say: exclusive covers all cache levels.

Will do ... (will refer back to this later)

>
>>> You could make it:
>>>
>>> echo locksetup > mode
>>> echo $CONF > schemata
>>> echo locked > mode
>>>
>>> Or something like that.
>>
>> Indeed ... the final command may perhaps not be needed? Since the user
>> expressed intent to create pseudo-locked region by writing "locksetup"
>> the pseudo-locking can be done when the schemata is written. I think it
>> would be simpler to act when the schemata is written since we know
>> exactly at that point which regions should be pseudo-locked. After the
>> schemata is stored the user's choice is just merged with the larger
>> schemata representing all resources/domains. We could set mode to
>> "locked" on success, it can remain as "locksetup" on failure of creating
>> the pseudo-locked region. We could perhaps also consider a name change
>> "locksetup" -> "lockrsv" since after the first pseudo-locked region is
>> created on a domain then all the other domains associated with this
>> class of service need to have some special state since no task will ever
>> run on them with that class of service so we would not want their bits
>> (which will not be zero) to be taken into account when checking for
>> "shareable" or "exclusive".
>
> Works for me.

A big change from the above is that, since the closid will now be
released, there is no longer a need for "lockrsv": the closid used for
the pseudo-locking can simply be re-used. The flow would thus change to:

# echo locksetup > mode
# echo $CONF > schemata
# echo $?
0
# cat mode
locked

As before, writing the schemata would trigger the pseudo-locking, and
there is no lockarea/restrict file.
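
For completeness, the failure case would look something like the
following; the exact error reporting is just my assumption, the key
point being that the mode remains "locksetup" on failure:

# echo locksetup > mode
# echo $CONF > schemata     <- pseudo-locking attempt fails
# echo $?
1
# cat mode
locksetup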

> I think dropping the closid makes sense. Once the thing is locked it's done
> and nothing can be changed anymore, except removal of course. That also
> gives you a 1:1 mapping between resource group and lockdevice.

Thanks. One pseudo-locked region per resource group does make things
simpler and removes the need for some strange removal tricks: a
pseudo-locked region can be removed with a directory removal. The
pseudo-locked region's character device will be named the same as the
resource group.
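
A rough sketch of that life cycle (the group name "p0" and the
/dev/pseudo_lock location are only illustrative, nothing about the
device path is settled here):

# mkdir /sys/fs/resctrl/p0
# echo locksetup > /sys/fs/resctrl/p0/mode
# echo 'L2:0=0xf' > /sys/fs/resctrl/p0/schemata
# ls /dev/pseudo_lock       <- character device named after the group
p0
# rmdir /sys/fs/resctrl/p0  <- removes the group and its pseudo-locked region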

>
>> This is a real issue. The pros and cons of using a global CLOSID across
>> all resources are documented in the comments preceding:
>> arch/x86/kernel/cpu/intel_rdt_rdtgroup.c:closid_init()
>>
>> The issue I mention was foreseen, to quote from there "Our choices on
>> how to configure each resource become progressively more limited as the
>> number of resources grows".
>>
>>> Let's assume its real,
>>> so you could do the following:
>>>
>>> mkdir group <- acquires closid
>>> echo locksetup > mode <- Creates 'lockarea' file
>>> echo L2:0 > lockarea
>>> echo 'L2:0=0xf' > schemata
>>> echo locked > mode <- locks down all files, does the lock setup
>>> and drops closid
>>>
>>> That would solve quite some of the other issues as well. Hmm?
>>
>> At this time the resource group, represented by a resctrl directory, is
>> tightly associated with the closid. I'll take a closer look at what it
>> will take to separate them.
>
> Shouldn't be that hard.
>
>> Could you please elaborate on the purpose of the "lockarea" file? It
>> does seem to duplicate the information in the schemata written in the
>> subsequent line.
>
> No. The lockarea or restrict file (as I named it later, but feel free to
> come up with something more intuitive) is there to tell which part of the
> resource zoo should be made exclusive/locked. That makes the whole write to
> schemata file and validate whether this is really exclusive way simpler.

This file (lockarea/restrict) would surely support a flexible exclusive
mode, but if we start with the simple case as you note above then it
does not seem to be needed, since exclusive mode would cover all
domains of all resources.

# mkdir group
# echo exclusive > group/mode        <- will attempt to mark all bits in the
                                        current schemata as exclusive; will
                                        fail if any are in use by another
                                        closid
# echo $newschemata > group/schemata <- will require that no bits are in use
                                        by any other closid
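
To illustrate the failure case mentioned above, something like the
following (the exact return value is an assumption, the point is that
the mode write is rejected when bits overlap):

# mkdir group2
# echo exclusive > group2/mode  <- fails if any bits in group2's current
                                   schemata are already marked exclusive
                                   by "group"
# echo $?
1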

This file (lockarea/restrict) also does not seem to be needed for the
locked mode since only one pseudo-locked region is supported per
resource group and that is communicated at the time the schemata is
written. The locking flow above did not use this file.

>> If we do go this route then it seems that there would be one
>> pseudo-locked region per resource group, not multiple ones as I had in
>> my examples above.
>
> Correct.
>
>> An alternative to the hardware programming on creation of resource group
>> could also be to reset the bitmasks of the closid to be shareable/unused
>> bits at the time the closid is released.
>
> That does not help because the default/shareable/unused bits can change
> between release of a CLOSID and reallocation.

Good point, thanks.

>>> Actually we could solve that problem similar to the locked one and share
>>> most of the functionality:
>>>
>>> mkdir group
>>> echo exclusive > mode
>>> echo L3:0 > restrict
>>>
>>> and for locked:
>>>
>>> mkdir group
>>> echo locksetup > mode
>>> echo L2:0 > restrict
>>> echo 'L2:0=0xf' > schemata
>>> echo locked > mode
>>>
>>> The 'restrict' file (feel free to come up with a better name) is only
>>> available/writeable in exclusive and locksetup mode. In case of exclusive
>>> mode it can contain several domains/resources, but in locked mode its only
>>> allowed to contain a single domain/resource.
>>>
>>> A write to schemata for exclusive or locksetup mode will apply the
>>> exclusiveness restrictions only to the resources/domains selected in the
>>> 'restrict' file.
>>
>> I think I understand for the exclusive case. Here the introduction of
>> the restrict file helps. I will run through a few examples to ensure I
>> understand it. For the pseudo-locking cases I do have the questions and
>> comments above. Here I likely may be missing something but I'll keep
>> dissecting how this would work to clear up my understanding.
>
> I came up with this under the assumptions:
>
> 1) One locked region per resource group
> 2) Drop closid after locking

I am also now working under these assumptions ...

> Then the restrict file makes a lot of sense because it would give a clear
> selection of the possible resource to lock.

... but I am still stuck on why this restrict file is needed at this
time. Surely it would be needed if later we add the more flexible
exclusive mode, but I do not understand how it helps the locked mode.

Reinette
