Subject: Re: [RFC] capabilities: add capability cgroup controller
On 06/23/16 05:59, Kees Cook wrote:
> On Wed, Jun 22, 2016 at 5:01 PM, Serge E. Hallyn <serge@hallyn.com> wrote:
>> Quoting Kees Cook (keescook@chromium.org):
>>> On Wed, Jun 22, 2016 at 11:17 AM, Serge E. Hallyn <serge@hallyn.com> wrote:
>>>> Quoting Topi Miettinen (toiwoton@gmail.com):
>>>>> On 06/22/16 17:14, Serge E. Hallyn wrote:
>>>>>> Quoting Topi Miettinen (toiwoton@gmail.com):
>>>>>>> On 06/21/16 15:45, Serge E. Hallyn wrote:
>>>>>>>> Quoting Topi Miettinen (toiwoton@gmail.com):
>>>>>>>>> On 06/19/16 20:01, serge@hallyn.com wrote:
>>>>>>>>>> (apologies for top posting, this phone doesn't support inline)
>>>>>>>>>>
>>>>>>>>>> Where are you preventing less privileged tasks from limiting the caps of a more privileged task? It looks like you are relying on the cgroupfs for that?
>>>>>>>>>
>>>>>>>>> I didn't think about that aspect. Some of it could be dealt with by
>>>>>>>>> preventing tasks which don't have CAP_SETPCAP from making other tasks
>>>>>>>>> join or from setting the bounding set. One problem is that the
>>>>>>>>> privileges would not be checked at cgroup.procs open(2) time but only
>>>>>>>>> when writing. In general, less privileged tasks should not be able to
>>>>>>>>> gain new capabilities even if they were somehow able to join the
>>>>>>>>> cgroup, and your case must also be addressed in full.
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Overall I'm not a fan of this for several reasons. Can you tell us precisely what your use case is?
>>>>>>>>>
>>>>>>>>> There are two.
>>>>>>>>>
>>>>>>>>> 1. Capability use tracking at the cgroup level. There is no way to
>>>>>>>>> know which capabilities have been used and which could be trimmed.
>>>>>>>>> With the cgroup approach, we can also keep track of how subprocesses
>>>>>>>>> use capabilities. Thus the administrator can quickly get a reasonable
>>>>>>>>> estimate of a bounding set just by reading the capability.used file.
>>>>>>>>
>>>>>>>> So to estimate the privileges needed by an application? Note this
>>>>>>>> could also be done with something like systemtap, but that's not as
>>>>>>>> friendly of course.
>>>>>>>>
>>>>>>>
>>>>>>> I've used systemtap to track how a single process uses capabilities, but
>>>>>>> I can imagine that without the cgroup, using it to track several
>>>>>>> subprocesses could be difficult.
>>>>>>>
>>>>>>>> Keeping the tracking part separate from enforcement might be worthwhile.
>>>>>>>> If you wanted to push that part of the patchset, we could keep
>>>>>>>> discussing the enforcement aspect separately.
>>>>>>>>
>>>>>>>
>>>>>>> OK, I'll prepare the tracking part first.
>>>>>>
>>>>>> So this does still have some security concerns, namely leaking information
>>>>>> to a less privileged process about what privs a root-owned process used.
>>>>>> That's not on the same level as giving away details about memory mappings,
>>>>>> but could be an issue. Kees (cc'd), do you see that as an issue?
>>>>>>
>>>>>> thanks,
>>>>>> -serge
>>>>>>
>>>>>
>>>>> Anyone can see the full set of capabilities available to each process
>>>>
>>>> But not the capabilities used. That's much more invasive.
>>>>
>>>>> via /proc/pid/status. But should I, for example, add a new flag
>>>>> CFTYPE_OWNER_ONLY to limit reading the capability.used file to the
>>>>> owner (root) only, and use it here?
>>>>
>>>> Not sure that it's needed; let's see what Kees says. However, if it is,
>>>> then using the owner would not suffice, since that's tangential to the
>>>> privilege level of the task.
>>>
>>> I don't see a problem exposing the history of used capabilities to
>>
>> Thanks, Kees.
>>
>>> less privileged processes. The only thing I could see it being used
>>> for would be to improve some kind of race against a buggy process
>>> where you know caps get used at a certain time in the code, so
>>> spinning on reading /proc/pid/status might give you a better chance of
>>
>> It would actually be a cgroup file; I think someone else was suggesting
>> a /proc/pid/status enhancement to the same effect a few weeks ago.

That was also me. I think cgroup-level monitoring is much more flexible
than per-process monitoring. Checking how individual processes use
capabilities could be complementary, but I suppose the cgroup level is enough.

The per-process information could be delayed to avoid races. For
example, store a timestamp (jiffies) with the capability set and keep
two of these sets as a LIFO. When we are about to update the capability
set with new usage, check whether the last value is old enough; if so,
move it to the first slot. Then update the last slot. The reading
process would read the first slot and thus get somewhat delayed
information. This could also be done for cgroup-level tracking.
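
A minimal userspace sketch of that two-slot idea could look like the
following (the 64-bit mask, the monotonic clock and the 100 ms threshold
are only placeholders for kernel_cap_t, jiffies and whatever delay we
would actually pick):

#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define DELAY_NS (100 * 1000 * 1000ull)	/* arbitrary 100 ms threshold */

struct cap_slot {
	uint64_t used;		/* capability bits seen so far */
	uint64_t stamp_ns;	/* when this slot was last touched */
};

/* slots[0] is what readers see, slots[1] collects the latest usage */
static struct cap_slot slots[2];

static uint64_t now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

/* called whenever a capability is exercised */
static void record_cap_use(unsigned int cap)
{
	uint64_t now = now_ns();

	/* if the latest slot is old enough, promote it to the readable slot */
	if (now - slots[1].stamp_ns >= DELAY_NS)
		slots[0] = slots[1];

	slots[1].used |= 1ull << cap;
	slots[1].stamp_ns = now;
}

/* readers only ever see the delayed first slot */
static uint64_t read_used_caps(void)
{
	return slots[0].used;
}

int main(void)
{
	record_cap_use(12);	/* CAP_NET_ADMIN */
	/* fresh usage stays hidden until a later update past the delay */
	printf("delayed view: %#llx\n",
	       (unsigned long long)read_used_caps());
	return 0;
}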

>
> Oh! Sorry, I misunderstood. What does the interface look like for the
> new cgroup file? (I assume my evaluation remains the same, though.)

I'll send a patch shortly which includes this addition to
Documentation/cgroup-v2.txt:
+5-4. Capabilities
+
+The "capability" controller is used to monitor capability use in the
+cgroup. This can be used to discover a starting point for capability
+bounding sets, even when running a shell script under ambient
+capabilities, with only short-lived helper processes exercising the
+capabilities.
+
+
+5-4-1. Capability Interface Files
+
+ capability.used
+
+ A read-only file which exists on all cgroups.
+
+ This reports the combined value of capability use in the
+ current cgroup and all its children.
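
For anyone wanting to poke at it, a reader could be as simple as the
sketch below; the mount point and group name are made up, and the patch
does not pin down the output format here, so the contents are just
echoed verbatim:

#include <stdio.h>

int main(void)
{
	char buf[256];
	/* hypothetical path; depends on where cgroup2 is mounted */
	FILE *f = fopen("/sys/fs/cgroup/mygroup/capability.used", "r");

	if (!f) {
		perror("capability.used");
		return 1;
	}
	while (fgets(buf, sizeof(buf), f))
		fputs(buf, stdout);
	fclose(f);
	return 0;
}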

-Topi

>
> -Kees
>
>>
>>> timing the race. That seems like a pretty far-out exposure, though. I
>>> imagine instruction counters would give a way finer grained timing
>>> too, so I wouldn't object to this being visible.
>>>
>>> -Kees
>>>
>>> --
>>> Kees Cook
>>> Chrome OS & Brillo Security
>
>
>
