Subject: Re: [PATCH] binder: replace kzalloc with kmem_cache
On Wed, Dec 14, 2016 at 10:39:16AM +0800, Ganesh Mahendran wrote:
> Hi, Greg:
>
> Sorry for the late response.
>
> On Tue, Nov 22, 2016 at 02:53:02PM +0100, Greg KH wrote:
> > On Tue, Nov 22, 2016 at 07:17:30PM +0800, Ganesh Mahendran wrote:
> > > This patch uses kmem_cache to allocate/free binder objects.
> >
> > Why do this?
>
> I am not very familiar with kmem_cache. I think if we have thousands of
> active binder objects in the system, kmem_cache would be better.

It shouldn't matter if the memory allocator you are using is good.
That is what you have proven with your benchmarks :)

And, if you are not familiar with kmem_cache, I would not recommend
changing a core system API to use it just because you think it might be
nicer for some reason; that's not acceptable.

> Below is binder object number in my android system:
> -----
> $ cat /d/binder/stats
> ...
> proc: active 100 total 6735
> thread: active 1456 total 180807
> node: active 5668 total 1027387
> ref: active 7141 total 1214877
> death: active 844 total 468056
> transaction: active 0 total 54736890
> transaction_complete: active 0 total 54736890
> -----
>
> binder objects are allocated/freed frequently.

Yes they are, and the memory allocator on your system handles that just
fine, as you see :)

> > > It will have better memory efficiency.
> >
> > Really? How? It should be the same, if not a bit worse. Have you
> > tested this? What are the results?
>
> kzalloc() will use an object of size 2^n to store the user data.
> Take "struct binder_thread" as an example; its size is 296 bytes.
> With kzalloc(), the slab system will use a 512-byte object to store those
> 296 bytes. But if we use kmem_cache to create a separate cache (which may
> be merged with another slab cache), it will use a 304-byte object to store
> the 296 bytes. Below is information taken from /proc/slabinfo:
> ------
> name <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab>
> binder_thread 858 858 304 26 2
>
> memory efficiency is: (296 * 26) / (2 * 4096) = 93.9%

But, as you say, things will get merged by your slab allocator anyway,
so there's no real benefit in changing the code we have today.
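
As a reference point, here is a minimal sketch of the two allocation paths
being compared above. It assumes the struct binder_thread definition from
drivers/android/binder.c; the alloc_thread_* helper names are illustrative
and not part of the patch.

#include <linux/slab.h>

static struct kmem_cache *binder_thread_cachep;

/* generic path: a 296-byte allocation is served from the kmalloc-512 bucket */
static struct binder_thread *alloc_thread_generic(void)
{
        return kzalloc(sizeof(struct binder_thread), GFP_KERNEL);
}

/* dedicated-cache path: objects are packed at roughly 304 bytes each */
static struct binder_thread *alloc_thread_cached(void)
{
        return kmem_cache_zalloc(binder_thread_cachep, GFP_KERNEL);
}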

> > > And we can also get object usage details in /sys/kernel/slab/* for
> > > further analysis.
> >
> > Why do we need this? Who needs this information and what are you going
> > to do with it?
>
> This is only for debug purposes, to see how much memory is used by binder.

But who really cares about that? What can you do with that information?
The normal allocator debug statistics should already provide you with all
of this information, without anything needing to be added.

> > > Signed-off-by: Ganesh Mahendran <opensource.ganesh@gmail.com>
> > > ---
> > > drivers/android/binder.c | 127 ++++++++++++++++++++++++++++++++++++++---------
> > > 1 file changed, 104 insertions(+), 23 deletions(-)
> > >
> > > diff --git a/drivers/android/binder.c b/drivers/android/binder.c
> > > index 3c71b98..f1f8362 100644
> > > --- a/drivers/android/binder.c
> > > +++ b/drivers/android/binder.c
> > > @@ -54,6 +54,14 @@
> > > static HLIST_HEAD(binder_deferred_list);
> > > static HLIST_HEAD(binder_dead_nodes);
> > >
> > > +static struct kmem_cache *binder_proc_cachep;
> > > +static struct kmem_cache *binder_thread_cachep;
> > > +static struct kmem_cache *binder_node_cachep;
> > > +static struct kmem_cache *binder_ref_cachep;
> > > +static struct kmem_cache *binder_transaction_cachep;
> > > +static struct kmem_cache *binder_work_cachep;
> > > +static struct kmem_cache *binder_ref_death_cachep;
> >
> > That's a lot of different caches, are you sure they don't just all get
> > merged together anyway for most allocators?
>
> If the binder kmem_cache has the same flags as another cache, it may be
> merged with that cache. But I think it would still be better than using
> kzalloc().

Nope, it should work the same. And you showed that with your
benchmarks.
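
For context, a sketch (not the actual patch hunk) of how one of these caches
would typically be set up and torn down; it again assumes struct
binder_thread from binder.c, and the binder_caches_init/exit names are
illustrative. With no special flags, SLUB is free to merge such a cache with
any other cache of compatible object size and flags, which is the point
being made here.

#include <linux/init.h>
#include <linux/slab.h>

static int __init binder_caches_init(void)
{
        /*
         * SLUB may merge this cache with another cache of compatible
         * size and flags, so it does not necessarily get its own slabs.
         */
        binder_thread_cachep = kmem_cache_create("binder_thread",
                                                 sizeof(struct binder_thread),
                                                 0, SLAB_HWCACHE_ALIGN, NULL);
        if (!binder_thread_cachep)
                return -ENOMEM;
        return 0;
}

static void binder_caches_exit(void)
{
        kmem_cache_destroy(binder_thread_cachep);
}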

> > Don't create lots of little caches for no good reason, and without any
> > benchmark numbers, I'd prefer to leave this alone. You are going to
> > have to prove this is a win to allow this type of churn.
>
> I tested binder with this patch. There is no performance regression.
> ---
> I ran the test 10 times with the command below:
> $binderThroughputTest -w 100
>
>         Before    After (with patch)
> avg:    9848.4    9878.8

Great, you made a change to the code that didn't actually do anything :)

Sorry, for that reason alone, I can't take this patch, nor should you
want me to.

thanks,

greg k-h
