Date: 2015-06-18 14:41
Subject: Re: [RFC][PATCHv3 2/7] zsmalloc: partial page ordering within a fullness_list
Hello,

Minchan, I didn't publish this patch separately yet, mostly to keep
the discussion in one thread. If we decide that this patch is good
enough, I'll resubmit it separately.

I did some synthetic testing and, not surprisingly at all, the results
are not so clear.

I used modified zsmalloc debug stats (extended to also account for and
report ZS_FULL zspages). Automatic compaction was disabled.
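
The accounting part was something along these lines (a minimal sketch,
not the exact change I ran; CLASS_FULL is just an arbitrary name for the
new stat):

enum zs_stat_type {
	OBJ_ALLOCATED,
	OBJ_USED,
	CLASS_ALMOST_FULL,
	CLASS_ALMOST_EMPTY,
	CLASS_FULL,		/* number of ZS_FULL zspages */
	NR_ZS_STAT_TYPE,
};

static void insert_zspage(struct page *page, struct size_class *class,
				enum fullness_group fullness)
{
	...
	if (fullness >= _ZS_NR_FULLNESS_GROUPS) {
		/*
		 * ZS_FULL zspages live on no fullness list, so
		 * account for them before the early return.
		 */
		if (fullness == ZS_FULL)
			zs_stat_inc(class, CLASS_FULL, 1);
		return;
	}
	...
}

plus the matching zs_stat_dec(class, CLASS_FULL, 1) in remove_zspage()
and an extra column in the stats output.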

The results are:

      almost_full  full  almost_empty  obj_allocated  obj_used  pages_used
Base
Total           3   163            25           2265      1691         302
Total           2   161            26           2297      1688         298
Total           2   145            27           2396      1701         311
Total           3   152            26           2364      1696         312
Total           3   162            25           2243      1701         302

Patched
Total           3   155            22           2259      1691         293
Total           4   153            20           2177      1697         292
Total           2   157            23           2229      1696         298
Total           2   164            24           2242      1694         301
Total           2   159            24           2286      1696         301
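
Averaged over the five runs: pages_used drops from
(302+298+311+312+302)/5 = 305.0 to (293+292+298+301+301)/5 = 297.0
(~2.6% fewer pages), obj_allocated from 2313.0 to 2238.6, almost_empty
from 25.8 to 22.6, while obj_used stays at ~1695 in both cases.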


Sooo... I don't know. The numbers are weird. On my x86_64 box I saw
somewhat lower 'almost_empty', 'obj_allocated', 'obj_used', and
'pages_used' numbers, but it's a bit suspicious.

The patch was not expected to dramatically improve things anyway. It's
rather a theoretical improvement -- we sometimes keep the busiest
zspages first and, at the same time, we can re-use recently used
zspages.


I think it makes sense to also consider the 'fullness_group fullness'
argument in insert_zspage(): unconditionally put ZS_ALMOST_FULL zspages
at the list head, or (if the zspage is !ZS_ALMOST_FULL) compare ->inuse
as before.

IOW, something like this

---

mm/zsmalloc.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 692b7dc..d576397 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -645,10 +645,11 @@ static void insert_zspage(struct page *page, struct size_class *class,
 		 * We want to see more ZS_FULL pages and less almost
 		 * empty/full. Put pages with higher ->inuse first.
 		 */
-		if (page->inuse < (*head)->inuse)
-			list_add_tail(&page->lru, &(*head)->lru);
-		else
+		if (fullness == ZS_ALMOST_FULL ||
+				(page->inuse >= (*head)->inuse))
 			list_add(&page->lru, &(*head)->lru);
+		else
+			list_add_tail(&page->lru, &(*head)->lru);
 	}

 	*head = page;
---
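
So the insertion decision becomes:

	fullness == ZS_ALMOST_FULL             -> list head
	else if page->inuse >= (*head)->inuse  -> list head
	else                                   -> list tail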
test script:

# 3G zram device, 4 compression streams, lzo
modprobe zram
echo 4 > /sys/block/zram0/max_comp_streams
echo lzo > /sys/block/zram0/comp_algorithm
echo 3g > /sys/block/zram0/disksize
mkfs.ext4 /dev/zram0
mount -o relatime,defaults /dev/zram0 /zram
cd /zram/
sync

# populate the fs with 8192 80K files (the source file is read with O_DIRECT)
for i in {1..8192}; do
	dd if=/media/dump/down/zero_file of=/zram/$i iflag=direct bs=4K count=20 > /dev/null 2>&1
done
sync

# grab the column header line and the cumulative 'Total' line
head -n 1 /sys/kernel/debug/zsmalloc/zram0/classes
tail -n 1 /sys/kernel/debug/zsmalloc/zram0/classes

cd /
umount /zram
rmmod zram


-ss

