From:	Yafang Shao <laoar.shao@gmail.com>
Date:	Thu, 30 Sep 2021
Subject:	Re: [PATCH 2/5] kernel/fork: allocate task->comm dynamicly
On Thu, Sep 30, 2021 at 2:11 AM Kees Cook <keescook@chromium.org> wrote:
>
> On Wed, Sep 29, 2021 at 11:50:33AM +0000, Yafang Shao wrote:
> > task->comm was previously defined as a fixed-size array embedded in
> > struct task_struct. This patch changes it to a char pointer, which is
> > allocated at fork time and freed when the task is freed.
> >
> > Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
> > ---
> > include/linux/sched.h | 2 +-
> > kernel/fork.c | 19 +++++++++++++++++++
> > 2 files changed, 20 insertions(+), 1 deletion(-)
> >
> > diff --git a/include/linux/sched.h b/include/linux/sched.h
> > index e12b524426b0..b387b5943db4 100644
> > --- a/include/linux/sched.h
> > +++ b/include/linux/sched.h
> > @@ -1051,7 +1051,7 @@ struct task_struct {
> > * - access it with [gs]et_task_comm()
> > * - lock it with task_lock()
> > */
> > - char comm[TASK_COMM_LEN];
> > + char *comm;
>
> This, I think, is basically a non-starter. It adds another kmalloc to
> the fork path without a well-justified reason. TASK_COMM_LEN is small,
> yes, but why is growing it valuable enough to slow things down?
>
> (Or, can you prove that this does NOT slow things down? It seems like
> it would.)
>

Right, the extra kmalloc would add some latency to the fork path.
It is not easy to measure whether the flexibility of a dynamically
allocated comm is worth that cost.
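
A first-order comparison could come from something like the userspace
sketch below, run on kernels built with and without this change. It is
only illustrative (the iteration count and CLOCK_MONOTONIC timing are
arbitrary choices of mine, not part of this series); perf bench sched
or will-it-scale would be the more usual tools for a serious number.

/*
 * Minimal fork-latency microbenchmark (illustrative sketch only).
 * Forks N children that exit immediately and reports the mean
 * fork+wait time, so two kernels (with and without the extra
 * kmalloc in copy_process()) can be compared side by side.
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>
#include <sys/wait.h>

int main(int argc, char **argv)
{
	long iters = (argc > 1) ? atol(argv[1]) : 100000;
	struct timespec start, end;
	long i;

	clock_gettime(CLOCK_MONOTONIC, &start);
	for (i = 0; i < iters; i++) {
		pid_t pid = fork();

		if (pid < 0) {
			perror("fork");
			return 1;
		}
		if (pid == 0)
			_exit(0);	/* child: exit immediately */
		waitpid(pid, NULL, 0);
	}
	clock_gettime(CLOCK_MONOTONIC, &end);

	double ns = (end.tv_sec - start.tv_sec) * 1e9 +
		    (end.tv_nsec - start.tv_nsec);
	printf("%ld forks, %.0f ns/fork (incl. wait)\n", iters, ns / iters);
	return 0;
}

Pinning it to one CPU and averaging several runs would reduce noise,
but even then the result only tells us the cost side, not how to weigh
it against the benefit of a longer comm.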

>
> >
> > struct nameidata *nameidata;
> >
> > diff --git a/kernel/fork.c b/kernel/fork.c
> > index 38681ad44c76..227aec240501 100644
> > --- a/kernel/fork.c
> > +++ b/kernel/fork.c
> > @@ -721,6 +721,20 @@ static void mmdrop_async(struct mm_struct *mm)
> > }
> > }
> >
> > +static int task_comm_alloc(struct task_struct *p)
> > +{
> > + p->comm = kzalloc(TASK_COMM_LEN, GFP_KERNEL);
> > + if (!p->comm)
> > + return -ENOMEM;
> > +
> > + return 0;
> > +}
> > +
> > +static void task_comm_free(struct task_struct *p)
> > +{
> > + kfree(p->comm);
> > +}
> > +
> > static inline void free_signal_struct(struct signal_struct *sig)
> > {
> > taskstats_tgid_free(sig);
> > @@ -753,6 +767,7 @@ void __put_task_struct(struct task_struct *tsk)
> > bpf_task_storage_free(tsk);
> > exit_creds(tsk);
> > delayacct_tsk_free(tsk);
> > + task_comm_free(tsk);
> > put_signal_struct(tsk->signal);
> > sched_core_free(tsk);
> >
> > @@ -2076,6 +2091,10 @@ static __latent_entropy struct task_struct *copy_process(
> > if (data_race(nr_threads >= max_threads))
> > goto bad_fork_cleanup_count;
> >
> > + retval = task_comm_alloc(p);
> > + if (retval)
> > + goto bad_fork_cleanup_count;
> > +
> > delayacct_tsk_init(p); /* Must remain after dup_task_struct() */
> > p->flags &= ~(PF_SUPERPRIV | PF_WQ_WORKER | PF_IDLE | PF_NO_SETAFFINITY);
> > p->flags |= PF_FORKNOEXEC;
> > --
> > 2.17.1
> >
>
> --
> Kees Cook



--
Thanks
Yafang
