Subject: Re: [RFC 0/4] ZRAM: make it just store the high compression rate page
On Mon, Sep 5, 2016 at 1:51 PM, Minchan Kim <minchan@kernel.org> wrote:
> On Mon, Sep 05, 2016 at 01:12:05PM +0800, Hui Zhu wrote:
>> On Mon, Sep 5, 2016 at 10:18 AM, Minchan Kim <minchan@kernel.org> wrote:
>> > On Thu, Aug 25, 2016 at 04:25:30PM +0800, Hui Zhu wrote:
>> >> On Thu, Aug 25, 2016 at 2:09 PM, Sergey Senozhatsky
>> >> <sergey.senozhatsky.work@gmail.com> wrote:
>> >> > Hello,
>> >> >
>> >> > On (08/22/16 16:25), Hui Zhu wrote:
>> >> >>
>> >> >> Currently, ZRAM stores all pages even if the compression ratio
>> >> >> of a page is really low, so the overall compression ratio of ZRAM
>> >> >> is out of control while it is running.
>> >> >> On my side, I ran some tests and recorded the results with ZRAM. The
>> >> >> compression ratio is about 40%.
>> >> >>
>> >> >> This series of patches makes ZRAM store only pages whose compressed
>> >> >> size is smaller than a threshold.
>> >> >> With these patches, I set the threshold to 2048 and ran the same test
>> >> >> as before. The compression ratio is about 20%. The number of
>> >> >> lowmemorykiller invocations also decreased.
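
(For illustration, a minimal sketch of the idea; the names below are
hypothetical and not taken from the actual patches:)

	/*
	 * Sketch only: accept a page into zram only if it compresses
	 * below a threshold (e.g. 2048 bytes); otherwise reject it so
	 * the caller can mark the page as non-swappable.
	 */
	#define NON_SWAP_LIMIT	2048	/* bytes; hypothetical tunable */

	static int zram_store_if_compressible(struct page *page,
					      unsigned int comp_len)
	{
		if (comp_len > NON_SWAP_LIMIT)
			return -ENOSPC;	/* kicked back to vmscan */
		/* ... store the compressed data in zsmalloc as usual ... */
		return 0;
	}
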
>> >> >
>> >> > I haven't looked at the patches in detail yet. can you educate me a bit?
>> >> > is your test stable? why has the number of lowmemorykill-s decreased?
>> >> > ... or am I reading "the number of lowmemorykiller invocations also
>> >> > decreased" wrong?
>> >> >
>> >> > suppose you have X pages that result in a bad compressed size (from
>> >> > zram's point of view). zram stores such pages uncompressed, IOW we have
>> >> > no memory savings - the swapped-out page lands in the zsmalloc PAGE_SIZE
>> >> > class. now you don't try to store those pages in zsmalloc, but keep them
>> >> > as unevictable. so the page still occupies PAGE_SIZE; no memory saving
>> >> > again. why did it improve LMK?
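
(For context, mainline zram of that era stores a badly compressed page
whole, so it lands in the zsmalloc PAGE_SIZE class; roughly, paraphrased
from the write path in drivers/block/zram/zram_drv.c:)

	/* paraphrased: max_zpage_size is about 3/4 of PAGE_SIZE */
	if (unlikely(clen > max_zpage_size))
		clen = PAGE_SIZE;	/* stored uncompressed */
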
>> >>
>> >> No, zram will not store this page uncompressed with these patches. It
>> >> will mark it as non-swap and kick it back to shrink_page_list.
>> >> shrink_page_list will then remove this page from the swap cache and
>> >> move it to the unevictable list.
>> >> After that, this page will not be swapped out again until it is
>> >> written to.
>> >> That is why most of the code is around vmscan.c.
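
(A rough sketch of that flow; the page flag and its hookup are
hypothetical, not the actual patches:)

	/*
	 * Sketched shrink_page_list() handling: when zram rejects a
	 * page as poorly compressible, drop its swap-cache copy and
	 * put it back on the unevictable LRU instead of retrying the
	 * pageout.
	 */
	if (PageNonSwap(page)) {	/* hypothetical page flag */
		try_to_free_swap(page);	/* drop the swap-cache copy */
		unlock_page(page);
		/* assumes page_evictable() is taught to respect the
		 * new flag, so the page lands on the unevictable list
		 */
		putback_lru_page(page);
		continue;		/* on to the next page */
	}
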
>> >
>> > If I understand Sergey's point right, he means there is no memory
>> > saving compared to before.
>> >
>> > With your approach, you can prevent unnecessary pageout (i.e.,
>> > swapping out incompressible pages), but that doesn't mean you save
>> > memory compared to the old behavior, so why does your patch decrease
>> > the number of lowmemory killings?
>> >
>> > A thing I can imagine is that, without this feature, zram could fill
>> > up with incompressible pages so that well-compressible pages cannot
>> > be swapped out. Hui, is this scenario right for your case?
>> >
>>
>> That is one reason, but it is not the principal one.
>>
>> Another reason is that when swap is putting pages into zram, what the
>> system wants is to get memory back.
>> The deal is that the system spends CPU time and memory to get memory.
>> If zram accepts only high-compression-ratio pages, the system can get
>> more memory back for the same amount of memory spent. That pulls the
>> system out of the low-memory state earlier. (Maybe more CPU time,
>> because of the compression-ratio checks; but maybe less, because fewer
>> pages need to be processed. That is the interesting part. :)
>> I think that is why the lmk count decreases.
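
(A back-of-the-envelope illustration of that trade, with made-up
numbers:)

	/*
	 * Illustration only: net pages freed when reclaim compresses n
	 * pages at a given compression ratio (compressed size as a
	 * percentage of PAGE_SIZE).
	 */
	static unsigned long net_pages_freed(unsigned long n, unsigned int pct)
	{
		return n - n * pct / 100;
	}

	/*
	 * net_pages_freed(100, 40) == 60 at the ~40% ratio above;
	 * net_pages_freed(100, 20) == 80 once only well-compressible
	 * pages are accepted - more memory back per page compressed.
	 */
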
>>
>> And yes, all of this depends on the number of high-compression-ratio
>> pages. So you cannot just set a non_swap limit on the system and get
>> everything for free; you need to do a lot of testing around it to make
>> sure the non_swap limit is good for your system.
>>
>> And I think using AOP_WRITEPAGE_ACTIVATE without kicking the page to a
>> special list would sometimes make the CPU too busy.
>
> Yes, and it would be the same with your patch if the newly arriving
> write on a CoWed page is incompressible data.
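
(For context: AOP_WRITEPAGE_ACTIVATE tells reclaim to put the page back
on the active list instead of writing it out, so the page will be
scanned again later; roughly, paraphrased from pageout() in
mm/vmscan.c:)

	res = mapping->a_ops->writepage(page, &wbc);
	if (res == AOP_WRITEPAGE_ACTIVATE) {
		ClearPageReclaim(page);
		return PAGE_ACTIVATE;	/* shrink_page_list() will
					 * re-activate the page */
	}
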
>
>> I did some tests before I kicked pages to a special list. The shrink task
>
> What kind of tests? Could you elaborate a bit more?
> And "shrink task" - what does it mean?
>

Sorry for this part. It should be the function shrink_page_list.

I will do more tests on that and post the patch later.

Thanks,
Hui


>> will be moved around, around, and around, because the low-compression-
>> ratio pages just move from one list to another a lot of times, again,
>> again and again.
>> And all these low-compression-ratio pages always stay together.
>
> I cannot understand it without a more detailed description. :(
> Could you explain more?
