Subject: Re: [PATCH V8 4/5] libsas: Align SMP req/resp to dma_get_cache_alignment()
On Tue, Oct 17, 2017 at 01:55:43PM +0200, Marek Szyprowski wrote:
> If I remember correctly, kernel guarantees that each kmalloced buffer is
> always at least aligned to the CPU cache line, so CPU cache can be
> invalidated on the allocated buffer without corrupting anything else.

Yes, from slab.h:

/*
 * Some archs want to perform DMA into kmalloc caches and need a guaranteed
 * alignment larger than the alignment of a 64-bit integer.
 * Setting ARCH_KMALLOC_MINALIGN in arch headers allows that.
 */
#if defined(ARCH_DMA_MINALIGN) && ARCH_DMA_MINALIGN > 8
#define ARCH_KMALLOC_MINALIGN ARCH_DMA_MINALIGN
#define KMALLOC_MIN_SIZE ARCH_DMA_MINALIGN
#define KMALLOC_SHIFT_LOW ilog2(ARCH_DMA_MINALIGN)
#else
#define ARCH_KMALLOC_MINALIGN __alignof__(unsigned long long)
#endif

MIPS sets this for a few subarchitectures, but it seems like you need
to set it for yours as well.
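For reference, on the arch side this usually boils down to a one-line
define in the cache/kmalloc asm header, set to the worst-case cache
line size. A rough sketch (the header path and the shift value below
are only illustrative, pick whatever matches your platform):

/* e.g. in arch/<your-arch>/include/asm/cache.h -- path and value illustrative */
#define L1_CACHE_SHIFT		6
#define L1_CACHE_BYTES		(1 << L1_CACHE_SHIFT)

/*
 * kmalloc() buffers are then sized/aligned to a full cache line, so a
 * cache invalidate on a DMA buffer can't clobber a neighbouring object.
 */
#define ARCH_DMA_MINALIGN	L1_CACHE_BYTES

With that in place the #if in slab.h above kicks in and bumps
ARCH_KMALLOC_MINALIGN (and KMALLOC_MIN_SIZE) up to the cache line size.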
