Subject: Re: [BUG] slab debug vs. L1 alignement
Date: 16 Aug 2003
Manfred Spraul wrote:

> Benjamin Herrenschmidt wrote:
>
>>Same for ppc32. Anyway, I don't like MUST_HWCACHE_ALIGN because it
>>just disables redzoning; I'd rather allocate more and do both redzoning
>>and cache alignment.
>>
>>
> I have a patch that creates helper functions that make that simple. The
> patch is stuck right now, because it exposes a bug in the i386 debug
> register handling. I'll add redzoning with MUST_HWCACHE_ALIGN after
> that one is in.

I have a related problem on s390: some of my GFP_DMA data must be 8-byte
aligned, while my cache lines are 256 bytes wide. With slab debugging,
the whole structure is only 4-byte aligned and I get addressing exceptions.
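
For concreteness, here is a made-up sketch of the kind of structure I mean
(the names are invented, the real one looks different):

#include <linux/types.h>

/* descriptor handed to the hardware: it must be 8-byte aligned, has to
 * live in GFP_DMA memory, and is only about 100 bytes in total */
struct my_dma_desc {
	u64	addr;		/* device-visible address */
	u32	len;
	u32	flags;
	/* ... more fields ... */
};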

When I switch to MUST_HWCACHE_ALIGN, the data already grows from ~100 bytes
to a full cache line; adding redzoning would make it far worse.

Is it possible to make kmem_cache_create accept a user-specified alignment
parameter? I suppose there are other cases where you want to force
a specific alignment that differs from the L1 cache line size.
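
Something like the following is what I have in mind; this is only a sketch
(the cache and struct names are made up, and the exact signature is of
course open for discussion):

#include <linux/slab.h>

static kmem_cache_t *my_cache;

void my_init(void)
{
	my_cache = kmem_cache_create("my_dma_desc",
				     sizeof(struct my_dma_desc),
				     8,              /* minimum object alignment */
				     SLAB_CACHE_DMA, /* back the cache with GFP_DMA memory */
				     NULL, NULL);    /* no constructor/destructor */
}

An alignment of 0 could keep today's behaviour, so existing callers would
not need to change.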

Arnd <><
