Subject: Re: [RFC 15/18] limits: track RLIMIT_MSGQUEUE actual max
From: Doug Ledford <dledford@redhat.com>
Date: 2016-06-17

On 6/13/2016 3:44 PM, Topi Miettinen wrote:
> Track maximum size of message queues, presented in /proc/self/limits.
>
> Signed-off-by: Topi Miettinen <toiwoton@gmail.com>
> ---
> ipc/mqueue.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/ipc/mqueue.c b/ipc/mqueue.c
> index ade739f..edccf55 100644
> --- a/ipc/mqueue.c
> +++ b/ipc/mqueue.c
> @@ -287,6 +287,8 @@ static struct inode *mqueue_get_inode(struct super_block *sb,
>
> /* all is ok */
> info->user = get_uid(u);
> + /* XXX resource limits apply per task, not per user */
> + bump_rlimit(RLIMIT_MSGQUEUE, u->mq_bytes);
> } else if (S_ISDIR(mode)) {
> inc_nlink(inode);
> /* Some things misbehave if size == 0 on a directory */
>

This patch looks all sorts of wrong to me.

In a current Linus tree I can't find a single instance of bump_rlimit.
Where is this magical function coming from?

Second, u->mq_bytes is the current size of all message queues for a
given user. It is not per-task. So your comment about limits being
per-task is at least partially wrong: the actual byte count is kept
per-user, not per-task, but the limit we check when we create a new
queue is per-task and not per-user. So your comment is wrong, the one
functional line you added calls a non-existent function, and even if
those two things were resolved, why in the world would the fact that
we created a new message queue mean we should bump our rlimit? That
makes no sense, because we would *never* have a working rlimit any
more; we would simply increase our rlimit by the size of our existing
queues every time we made a queue.
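
For reference, the existing accounting in mqueue_get_inode() already
shows that split (condensed from ipc/mqueue.c of that era; treat this
as a sketch rather than an exact quote). u->mq_bytes is the per-user
running total, while rlimit(RLIMIT_MSGQUEUE) reads the *calling
task's* limit:

        spin_lock(&mq_lock);
        if (u->mq_bytes + mq_bytes < u->mq_bytes ||     /* overflow */
            u->mq_bytes + mq_bytes > rlimit(RLIMIT_MSGQUEUE)) {
                spin_unlock(&mq_lock);
                ret = -EMFILE;
                goto out_inode;
        }
        u->mq_bytes += mq_bytes;        /* charge the owning user */
        spin_unlock(&mq_lock);

        /* all is ok */
        info->user = get_uid(u);

If a successful create then raised the task's limit by u->mq_bytes, as
your bump_rlimit() call reads, the comparison above could never bind
again: every new queue would inflate the limit by the size of
everything already allocated.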

This is just a totally broken patch. Major NAK.

--
Doug Ledford <dledford@redhat.com>
GPG Key ID: 0E572FDD
