lkml.org 
[lkml]   [2020]   [May]   [26]   [last100]   RSS Feed
Views: [wrap][no wrap]   [headers]  [forward] 
 
Messages in this thread
/
Date
From
SubjectRe: [PATCH v3 07/18] nitro_enclaves: Init misc device providing the ioctl interface
On Tue, May 26, 2020 at 03:44:30PM +0200, Alexander Graf wrote:
>
>
> On 26.05.20 15:17, Greg KH wrote:
> >
> > On Tue, May 26, 2020 at 02:44:18PM +0200, Alexander Graf wrote:
> > >
> > >
> > > On 26.05.20 14:33, Greg KH wrote:
> > > >
> > > > On Tue, May 26, 2020 at 01:42:41PM +0200, Alexander Graf wrote:
> > > > >
> > > > >
> > > > > On 26.05.20 08:51, Greg KH wrote:
> > > > > >
> > > > > > On Tue, May 26, 2020 at 01:13:23AM +0300, Andra Paraschiv wrote:
> > > > > > > +#define NE "nitro_enclaves: "
> > > > > >
> > > > > > Again, no need for this.
> > > > > >
> > > > > > > +#define NE_DEV_NAME "nitro_enclaves"
> > > > > >
> > > > > > KBUILD_MODNAME?
> > > > > >
> > > > > > > +#define NE_IMAGE_LOAD_OFFSET (8 * 1024UL * 1024UL)
> > > > > > > +
> > > > > > > +static char *ne_cpus;
> > > > > > > +module_param(ne_cpus, charp, 0644);
> > > > > > > +MODULE_PARM_DESC(ne_cpus, "<cpu-list> - CPU pool used for Nitro Enclaves");
> > > > > >
> > > > > > Again, please do not do this.
> > > > >
> > > > > I actually asked her to put this one in specifically.
> > > > >
> > > > > The concept of this parameter is very similar to isolcpus= and maxcpus= in
> > > > > that it takes CPUs away from Linux and instead donates them to the
> > > > > underlying hypervisor, so that it can spawn enclaves using them.
> > > > >
> > > > > From an admin's point of view, this is a setting I would like to keep
> > > > > persisted across reboots. How would this work with sysfs?
> > > >
> > > > How about just as the "initial" ioctl command to set things up? Don't
> > > > grab any cpu pools until asked to. Otherwise, what happens when you
> > > > load this module on a system that can't support it?
> > >
> > > That would give any user with access to the enclave device the ability to
> > > remove CPUs from the system. That's clearly a CAP_ADMIN task in my book.
> >
> > Ok, what's wrong with that?
>
> Would you want random users to get the ability to hot unplug CPUs from your
> system? At unlimited quantity? I don't :).

A random user, no, but one with admin rights, why not? They can do that
already today on your system, this isn't new.

> > > Hence this whole split: The admin defines the CPU Pool, users can safely
> > > consume this pool to spawn enclaves from it.
> >
> > But having the admin define that at module load / boot time, is a major
> > pain. What tools do they have that allow them to do that easily?
>
> The normal toolbox: editing /etc/default/grub, adding an /etc/modprobe.d/
> file.

Editing grub files is horrid, come on...

> When but at module load / boot time would you define it? I really don't want
> to have a device node that in theory "the world" can use which then allows
> any user on the system to hot unplug every CPU but 0 from my system.

But you have that already when the PCI device is found, right? What is
the initial interface to the driver? What's wrong with using that?

Or am I really missing something as to how this all fits together with
the different pieces? Seeing the patches as-is doesn't really provide a
good overview, sorry.

> > > So I really don't think an ioctl would be a great user experience. Same for
> > > a sysfs file - although that's probably slightly better than the ioctl.
> >
> > You already are using ioctls to control this thing, right? What's wrong
> > with "one more"? :)
>
> So what we *could* do is add an ioctl to set the pool size which then does a
> CAP_ADMIN check. That however means you now are in priority hell:
>
> A user that wants to spawn an enclave as part of an nginx service would need
> to create another service to set the pool size and indicate the dependency
> in systemd control files.
>
> Is that really better than a module parameter?

module parameters are hard to change, and manage control over who really
is changing them.

thanks,

greg k-h

\
 
 \ /
  Last update: 2020-05-27 00:24    [W:0.124 / U:0.036 seconds]
©2003-2020 Jasper Spaans|hosted at Digital Ocean and TransIP|Read the blog|Advertise on this site