Subject: RE: Avoid high order memory allocating with kmalloc, when read large seq file

Hi Morton,
Thank you very much for your kind info. On an Android system, reading /sys/kernel/debug/binder/proc/xxx (where xxx is the process id) triggers a high-order kmalloc.
But we can't limit the size of the binder info, because we need it to debug binder-related issues.
I have re-sent the patch. What do you think about using vmalloc instead of kmalloc for these high-order allocations? Memory fragmentation should not be an issue, because such memory is freed again very quickly.

Br
Xiaobing


-----Original Message-----
From: Andrew Morton [mailto:akpm@linux-foundation.org]
Sent: Wednesday, January 30, 2013 8:25 AM
To: Tu, Xiaobing
Cc: linux-kernel@vger.kernel.org; Tang, Guifang; Chen, LinX Z; Arve Hjønnevåg
Subject: Re: Avoid high order memory allocating with kmalloc, when read large seq file

On Tue, 29 Jan 2013 14:14:14 +0800
xtu4 <xiaobing.tu@intel.com> wrote:

> @@ -209,8 +209,17 @@ ssize_t seq_read(struct file *file, char __user *buf, size_t size, loff_t *ppos)
>  		if (m->count < m->size)
>  			goto Fill;
>  		m->op->stop(m, p);
> -		kfree(m->buf);
> -		m->buf = kmalloc(m->size <<= 1, GFP_KERNEL);
> +		if (m->size > 2 * PAGE_SIZE) {
> +			vfree(m->buf);
> +		} else
> +			kfree(m->buf);
> +		m->size <<= 1;
> +		if (m->size > 2 * PAGE_SIZE) {
> +			m->buf = vmalloc(m->size);
> +		} else
> +			m->buf = kmalloc(m->size, GFP_KERNEL);
> +
> +
>  		if (!m->buf)
>  			goto Enomem;
>  		m->count = 0;
> @@ -325,7 +334,10 @@ EXPORT_SYMBOL(seq_lseek);

The conventional way of doing this is to attempt the kmalloc with __GFP_NOWARN and, if that fails, fall back to vmalloc().

Using vmalloc is generally not a good thing, mainly because of fragmentation issues, but for short-lived allocations like this, that shouldn't be too bad.
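
For illustration, a minimal sketch of that fallback pattern (seq_buf_alloc/seq_buf_free are hypothetical helper names used only for this example, not existing seq_file functions):

#include <linux/mm.h>		/* is_vmalloc_addr() */
#include <linux/slab.h>		/* kmalloc(), kfree() */
#include <linux/vmalloc.h>	/* vmalloc(), vfree() */

/*
 * Hypothetical helper: try a physically contiguous buffer first, with
 * __GFP_NOWARN so a failed high-order request stays quiet, then fall
 * back to vmalloc().
 */
static void *seq_buf_alloc(unsigned long size)
{
	void *buf;

	buf = kmalloc(size, GFP_KERNEL | __GFP_NOWARN);
	if (!buf)
		buf = vmalloc(size);
	return buf;
}

/* Hypothetical helper: free with whichever allocator provided the buffer. */
static void seq_buf_free(const void *buf)
{
	if (is_vmalloc_addr(buf))
		vfree(buf);
	else
		kfree(buf);
}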

But really, the binder code is being obnoxious here and it would be best to fix it up. Please identify with some care which part of the binder code is causing this problem. binder_stats_show(), at a guess? It looks like that function's output size is proportional to the number of processes on binder_procs? If so, there is no upper bound, is there? Problem!

btw, binder_debug_no_lock should just go away. That list needs locking.
