From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
Subject: Re: [PATCH -V8 14/16] hugetlb/cgroup: add charge/uncharge calls for HugeTLB alloc/free
Date: Sat, 9 Jun 2012

Johannes Weiner <hannes@cmpxchg.org> writes:

> On Sat, Jun 09, 2012 at 06:39:06PM +0530, Aneesh Kumar K.V wrote:
>> Johannes Weiner <hannes@cmpxchg.org> writes:
>>
>> > On Sat, Jun 09, 2012 at 02:29:59PM +0530, Aneesh Kumar K.V wrote:
>> >> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
>> >>
>> >> This adds necessary charge/uncharge calls in the HugeTLB code. We do
>> >> hugetlb cgroup charge in page alloc and uncharge in compound page destructor.
>> >>
>> >> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
>> >> ---
>> >> mm/hugetlb.c | 16 +++++++++++++++-
>> >> mm/hugetlb_cgroup.c | 7 +------
>> >> 2 files changed, 16 insertions(+), 7 deletions(-)
>> >>
>> >> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>> >> index bf79131..4ca92a9 100644
>> >> --- a/mm/hugetlb.c
>> >> +++ b/mm/hugetlb.c
>> >> @@ -628,6 +628,8 @@ static void free_huge_page(struct page *page)
>> >> BUG_ON(page_mapcount(page));
>> >>
>> >> spin_lock(&hugetlb_lock);
>> >> + hugetlb_cgroup_uncharge_page(hstate_index(h),
>> >> + pages_per_huge_page(h), page);
>> >
>> > hugetlb_cgroup_uncharge_page() takes the hugetlb_lock, no?
>>
>> Yes, but this patch also modifies it to not take the lock, because the
>> call site just below already holds the spin_lock. I didn't want to drop
>> the lock and take it again.
>
> Sorry, I missed that.
>
>> > It's quite hard to review code that is split up like this. Please
>> > always keep the introduction of new functions in the same patch that
>> > adds the callsite(s).
>>
>> One of the reasons I split the charge/uncharge routines and their
>> callers into separate patches was to make review easier. Irrespective
>> of the call site, the charge/uncharge routines should be correct with
>> respect to locking and other details. What I did in this patch is a
>> small optimization to avoid dropping and retaking the lock. Maybe the
>> right approach would have been to name it __hugetlb_cgroup_uncharge_page
>> and keep a hugetlb_cgroup_uncharge_page that still takes the spin_lock,
>> but then we would have no callers for it.
>
> I think this makes it needlessly complicated and there is no correct
> or incorrect locking in (initially) dead code :-)
>
> The callsites are just a few lines. It's harder to review if you
> introduce an API and then change it again mid-patchset.
>

I will fold the patches.

> If there are no callers for a function that grabs the lock itself,
> don't add it. Just add a note to the kerneldoc that explains the
> requirement or put VM_BUG_ON(!spin_is_locked(&hugetlb_lock)); in
> there or so.

That is excellent. I will add a kerneldoc note and the VM_BUG_ON.
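
Roughly along these lines, with the actual uncharge logic unchanged from
the earlier patches in the series (the kerneldoc wording and the
hugetlb_cgroup_disabled() early return below are only a sketch, not the
final code):

/**
 * hugetlb_cgroup_uncharge_page - uncharge a huge page from its hugetlb cgroup
 * @idx:      hstate index of the huge page
 * @nr_pages: number of base pages in the huge page
 * @page:     the huge page being freed
 *
 * The caller must hold hugetlb_lock; the charge is released without
 * dropping and retaking the lock.
 */
void hugetlb_cgroup_uncharge_page(int idx, unsigned long nr_pages,
				  struct page *page)
{
	VM_BUG_ON(!spin_is_locked(&hugetlb_lock));

	if (hugetlb_cgroup_disabled())
		return;
	/* ... clear the page's cgroup pointer and uncharge its counter ... */
}

free_huge_page() already holds hugetlb_lock at the call site, as in the
hunk above, so the assertion documents exactly that requirement.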

-aneesh


