Subject: Re: [PATCH] dmaengine: avoid map_cnt overflow with CONFIG_DMA_ENGINE_RAID

On 12 Jan 2018, at 11:56, Vinod Koul wrote:

> On Mon, Jan 08, 2018 at 10:50:50AM -0500, Zi Yan wrote:
>> From: Zi Yan <zi.yan@cs.rutgers.edu>
>>
>> When CONFIG_DMA_ENGINE_RAID is enabled, the unmap pool size can reach
>> 256. But in struct dmaengine_unmap_data, map_cnt is only a u8, so it
>> wraps to 0 when the unmap pool is fully used. This triggers a BUG()
>> when struct dmaengine_unmap_data is freed. Use u16 to fix the problem.
>>
>> Signed-off-by: Zi Yan <zi.yan@cs.rutgers.edu>
>> ---
>> include/linux/dmaengine.h | 4 ++++
>> 1 file changed, 4 insertions(+)
>>
>> diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
>> index f838764993eb..861be5cab1df 100644
>> --- a/include/linux/dmaengine.h
>> +++ b/include/linux/dmaengine.h
>> @@ -470,7 +470,11 @@ typedef void
>> (*dma_async_tx_callback_result)(void *dma_async_param,
>> const struct dmaengine_result *result);
>>
>> struct dmaengine_unmap_data {
>> +#if IS_ENABLED(CONFIG_DMA_ENGINE_RAID)
>> + u16 map_cnt;
>> +#else
>> u8 map_cnt;
>> +#endif
>> u8 to_cnt;
>> u8 from_cnt;
>> u8 bidi_cnt;
>
> Would that cause adverse performance? The data structure is not
> aligned anymore. Dan, was that a consideration while adding this?
>

It will only add two more cache misses (one when mapping the data, the
other when unmapping it) per DMA engine operation, regardless of the
data size. There is no impact on the actual DMA transfers themselves,
so the overall cost should be minimal.
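
To make the overflow concrete, here is a minimal userspace model. It is
not the kernel code: the loop bound of 256 stands in for the RAID unmap
pool maximum, and the printout stands in for the BUG() check on the
free path.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint8_t map_cnt = 0;	/* same width as the current field */
	uint16_t wide_cnt = 0;	/* width proposed by the patch */
	int i;

	/* Simulate filling the unmap pool to its RAID maximum of 256. */
	for (i = 0; i < 256; i++) {
		map_cnt++;
		wide_cnt++;
	}

	/* The u8 counter has wrapped back to 0, which is what makes the
	 * kernel's free path trip its BUG(); the u16 still holds 256. */
	printf("u8 map_cnt = %u, u16 map_cnt = %u\n",
	       (unsigned)map_cnt, (unsigned)wide_cnt);
	return 0;
}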
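
On the alignment point, a quick userspace mock-up suggests the widened
counter is free on 64-bit. The structs below are hypothetical stand-ins
for dmaengine_unmap_data (a plain pointer replaces struct device *dev,
and the kref and flexible array are dropped), so treat the numbers as
illustrative:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Layout without CONFIG_DMA_ENGINE_RAID: four u8 counters. */
struct unmap_u8 {
	uint8_t map_cnt;
	uint8_t to_cnt;
	uint8_t from_cnt;
	uint8_t bidi_cnt;
	void *dev;		/* stands in for struct device *dev */
	size_t len;
};

/* Layout with the patch applied: map_cnt widened to u16. */
struct unmap_u16 {
	uint16_t map_cnt;
	uint8_t to_cnt;
	uint8_t from_cnt;
	uint8_t bidi_cnt;
	void *dev;
	size_t len;
};

int main(void)
{
	/* On an LP64 ABI both counter blocks fit in the padding that
	 * already precedes the pointer, so sizes and offsets match. */
	printf("u8 layout:  size %zu, dev at %zu\n",
	       sizeof(struct unmap_u8), offsetof(struct unmap_u8, dev));
	printf("u16 layout: size %zu, dev at %zu\n",
	       sizeof(struct unmap_u16), offsetof(struct unmap_u16, dev));
	return 0;
}

On LP64 both variants report size 24 with dev at offset 8. A 32-bit ABI
would need the same check, since there the five counter bytes no longer
fit in a single 4-byte slot before the pointer and the struct can grow.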


Best Regards,
Yan Zi
