Date: Tue, 9 Oct 2012
From: Nitin Gupta <ngupta@vflare.org>
Subject: Re: [PATCH] [staging][zram] Fix handling of incompressible pages
Hi Minchan,

On 10/09/2012 06:31 AM, Minchan Kim wrote:
>
> On Mon, Oct 08, 2012 at 06:32:44PM -0700, Nitin Gupta wrote:
>> Change 130f315a introduced a bug in the handling of incompressible
>> pages which resulted in memory allocation failure for such pages.
>> The fix is to store the page as-is, i.e. without compression, if the
>> compressed size exceeds a threshold (max_zpage_size), and to request
>> exactly a PAGE_SIZE-sized buffer from zsmalloc.
>
> It seems you found a bug and already fixed it with the helpers below.
> Unfortunately, the description isn't enough for me to understand the
> problem. Could you explain it in detail?
> You said it results in memory allocation failure. What failure do you
> mean? Do you mean this code fails because zsmalloc needs several pages
> to build a zspage for the class size?
>
> 	handle = zs_malloc(zram->mem_pool, clen);
> 	if (!handle) {
> 		pr_info("Error allocating memory for compressed "
> 			"page: %u, size=%zu\n", index, clen);
> 		ret = -ENOMEM;
> 		goto out;
> 	}
>
> So instead of allocating more pages for an incompressible page to make
> a zspage, you just allocate a single page from the PAGE_SIZE class and
> store the page without compression?
>

When a page expands on compression, say from 4K to 4K+30 bytes, we were
trying to do zsmalloc(pool, 4K+30). However, the maximum size zsmalloc
can allocate is PAGE_SIZE (for obvious reasons), so such allocation
requests always fail, returning 0.
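
To see the failure concretely, here is a toy userspace sketch (the name
toy_zs_malloc() and the hard-coded sizes are illustrative stand-ins, not
the zsmalloc API):

  #include <stdio.h>
  #include <stddef.h>

  #define TOY_PAGE_SIZE 4096UL

  /* Mimics the zsmalloc limit: requests above one page always fail. */
  static unsigned long toy_zs_malloc(size_t size)
  {
          if (size == 0 || size > TOY_PAGE_SIZE)
                  return 0;   /* failure, like zs_malloc() returning 0 */
          return 1;           /* stand-in for an opaque non-zero handle */
  }

  int main(void)
  {
          size_t clen = TOY_PAGE_SIZE + 30; /* page expanded on compression */

          if (!toy_zs_malloc(clen))
                  printf("request for %zu bytes fails; max is %lu\n",
                         clen, TOY_PAGE_SIZE);
          return 0;
  }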

For a page whose compressed size is larger than the original (this can
happen with already-compressed or random data), there is no point in
storing the compressed version: it would take more space and would also
cost decompression time when the page is needed again. So, the fix is to
store any page whose compressed size exceeds a threshold
(max_zpage_size) as-is, i.e. without compression. The memory required to
store the uncompressed page can then be requested from zsmalloc, which
supports PAGE_SIZE-sized allocations.
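
In outline, the write-path decision is just a size check before the
allocation. A minimal sketch (store_size() and stored_raw are
hypothetical names, and the threshold of 3/4 of a page is an assumption
matching zram's usual max_zpage_size default):

  #include <stdio.h>
  #include <stddef.h>

  #define TOY_PAGE_SIZE  4096UL
  #define MAX_ZPAGE_SIZE (TOY_PAGE_SIZE / 4 * 3)

  /*
   * Pick the length to hand to the allocator: the compressed length if
   * it is small enough, otherwise exactly one page for the raw copy,
   * which the allocator can always satisfy.
   */
  static size_t store_size(size_t clen, int *stored_raw)
  {
          *stored_raw = clen > MAX_ZPAGE_SIZE;
          return *stored_raw ? TOY_PAGE_SIZE : clen;
  }

  int main(void)
  {
          int raw;
          size_t alloc_len = store_size(TOY_PAGE_SIZE + 30, &raw);

          printf("allocate %zu bytes (%s)\n", alloc_len,
                 raw ? "stored uncompressed" : "stored compressed");
          return 0;
  }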

Lastly, the fix ensures that we never attempt to "decompress" a page
that was stored in uncompressed form -- we just memcpy() such pages out
(see the zram_bvec_read() hunk below).

Thanks,
Nitin


>>
>> Signed-off-by: Nitin Gupta <ngupta@vflare.org>
>> Reported-by: viechweg@gmail.com
>> Reported-by: paerley@gmail.com
>> Reported-by: wu.tommy@gmail.com
>> Tested-by: wu.tommy@gmail.com
>> Tested-by: michael@zugelder.org
>> ---
>> drivers/staging/zram/zram_drv.c | 12 ++++++++++--
>> 1 file changed, 10 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/staging/zram/zram_drv.c b/drivers/staging/zram/zram_drv.c
>> index 653b074..6edefde 100644
>> --- a/drivers/staging/zram/zram_drv.c
>> +++ b/drivers/staging/zram/zram_drv.c
>> @@ -223,8 +223,13 @@ static int zram_bvec_read(struct zram *zram, struct bio_vec *bvec,
>>  	cmem = zs_map_object(zram->mem_pool, zram->table[index].handle,
>>  				ZS_MM_RO);
>>
>> -	ret = lzo1x_decompress_safe(cmem, zram->table[index].size,
>> +	if (zram->table[index].size == PAGE_SIZE) {
>> +		memcpy(uncmem, cmem, PAGE_SIZE);
>> +		ret = LZO_E_OK;
>> +	} else {
>> +		ret = lzo1x_decompress_safe(cmem, zram->table[index].size,
>>  				    uncmem, &clen);
>> +	}
>>
>>  	if (is_partial_io(bvec)) {
>>  		memcpy(user_mem + bvec->bv_offset, uncmem + offset,
>> @@ -342,8 +347,11 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
>>  		goto out;
>>  	}
>>
>> -	if (unlikely(clen > max_zpage_size))
>> +	if (unlikely(clen > max_zpage_size)) {
>>  		zram_stat_inc(&zram->stats.bad_compress);
>> +		src = uncmem;
>> +		clen = PAGE_SIZE;
>> +	}
>>
>>  	handle = zs_malloc(zram->mem_pool, clen);
>>  	if (!handle) {
>> --
>> 1.7.9.5
>>
>


