Subject: Re: [PATCH -mm -v3] mm, swap: Sort swap entries before free
Andrew Morton <akpm@linux-foundation.org> writes:

> On Fri, 7 Apr 2017 14:49:01 +0800 "Huang, Ying" <ying.huang@intel.com> wrote:
>
>> To reduce the lock contention on swap_info_struct->lock when freeing
>> swap entries, the freed swap entries are first collected in a per-CPU
>> buffer and only actually freed later, in batches. During the batch
>> freeing, if consecutive swap entries in the per-CPU buffer belong to
>> the same swap device, swap_info_struct->lock needs to be
>> acquired/released only once, so the lock contention can be reduced
>> greatly. But if there are multiple swap devices, the lock may be
>> released/acquired unnecessarily, because swap entries belonging to
>> the same swap device may be non-consecutive in the per-CPU buffer.
>>
>> To solve the issue, the per-CPU buffer is sorted by swap device
>> before the swap entries are freed. Tests show that the time spent
>> in swapcache_free_entries() is reduced after the patch.
>>
>> The patch was tested by measuring the run time of
>> swapcache_free_entries() during the exit phase of applications that
>> use much swap space. The results show that the average run time of
>> swapcache_free_entries() is reduced by about 20% after applying the
>> patch.
>
> "20%" is useful info, but it is much better to present the absolute
> numbers, please. If it's "20% of one nanosecond" then the patch isn't
> very interesting. If it's "20% of 35 seconds" then we know we have
> more work to do.

The average run time of swapcache_free_entries() is reduced from about
137us to about 111us. There are about 200000 samples of
swapcache_free_entries() in total (200000 * ~137us is roughly 27s of
CPU time), run on 16 CPUs, so the wall time is about 1.7s. I will
revise the tests to measure the total run-time reduction.
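
For reference, the sorting idea is roughly as below (only a minimal
sketch, not the exact patch; swp_entry_cmp() and sort_swap_entries()
are just illustrative names here):

#include <linux/swap.h>
#include <linux/sort.h>

/* Order entries by swap device, so same-device entries become adjacent. */
static int swp_entry_cmp(const void *ent1, const void *ent2)
{
	const swp_entry_t *e1 = ent1, *e2 = ent2;

	return (int)swp_type(*e1) - (int)swp_type(*e2);
}

/* Sort the per-CPU buffer before the batch free, one lock round per device. */
static void sort_swap_entries(swp_entry_t *entries, int n)
{
	sort(entries, n, sizeof(entries[0]), swp_entry_cmp, NULL);
}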

> If there is indeed still a significant problem here then perhaps it
> would be better to move the percpu swp_entry_t buffer into the
> per-device structure swap_info_struct, so it becomes "per cpu, per
> device". That way we should be able to reduce contention further.
>
> Or maybe we do something else - it all depends upon the significance of
> this problem, which is why a full description of your measurements is
> useful.

Yes. I will provide more and better measurements first.
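
Just to confirm I understand the direction, something roughly like the
following (only a sketch of the idea; swap_free_cache and
SWAP_FREE_CACHE_SIZE are made-up names, not existing kernel
structures)?

#include <linux/percpu.h>
#include <linux/swap.h>

#define SWAP_FREE_CACHE_SIZE	64	/* illustrative batch size */

/* Per-CPU cache of to-be-freed entries, embedded per swap device. */
struct swap_free_cache {
	int		nr;
	swp_entry_t	entries[SWAP_FREE_CACHE_SIZE];
};

struct swap_info_struct {
	/* ... existing fields ... */
	struct swap_free_cache __percpu *free_cache;	/* per cpu, per device */
};

With a "per cpu, per device" cache a batch would never mix devices, so
no sorting would be needed and each flush would take the device's lock
exactly once.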

Best Regards,
Huang, Ying
