Subject: Re: Can I/OAT DMA engine access PCI MMIO space
On 2011-05-03 23:58, Dan Williams wrote:
>
>> Do you mean that if I have mapped the mmio, I can't use I/OAT DMA to
>> transfer to this region any more?
>> I can use memcpy to copy the data, but it consumes a lot of CPU because
>> PCI access is too slow.
>> If I could use I/OAT DMA and the async_tx API to do the job, the
>> performance should be improved.
>> Thanks
>
>
> The async_tx api only supports memory-to-memory transfers. To write
> to mmio space with ioatdma you would need a custom method, like the
> dma-slave support in other drivers, to program the descriptors with
> the physical mmio bus address.
>
> --
> Dan
Thanks.
I read the PCI BAR address directly and program it into the descriptors,
and ioatdma works.
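
For reference, the sequence I am using looks roughly like the sketch below
(condensed, error paths trimmed; "pdev" and "bar" stand in for my actual
NTB device and memory-window BAR):

#include <linux/dmaengine.h>
#include <linux/dma-mapping.h>
#include <linux/pci.h>

/*
 * Sketch: copy a local buffer into the peer memory window behind an
 * NTB BAR with an ioatdma (DMA_MEMCPY) channel.
 */
static int mmio_dma_copy(struct pci_dev *pdev, int bar,
			 void *src_buf, size_t len)
{
	dma_cap_mask_t mask;
	struct dma_chan *chan;
	struct dma_async_tx_descriptor *tx;
	dma_addr_t dma_src, dma_dst;
	dma_cookie_t cookie;

	dma_cap_zero(mask);
	dma_cap_set(DMA_MEMCPY, mask);
	chan = dma_request_channel(mask, NULL, NULL);
	if (!chan)
		return -ENODEV;

	/* Source: ordinary memory, mapped for DMA. */
	dma_src = dma_map_single(&pdev->dev, src_buf, len, DMA_TO_DEVICE);

	/*
	 * Destination: the BAR physical address programmed directly into
	 * the descriptor.  This assumes bus address == resource address,
	 * which holds on my x86 box but is not guaranteed everywhere.
	 */
	dma_dst = pci_resource_start(pdev, bar);

	tx = chan->device->device_prep_dma_memcpy(chan, dma_dst, dma_src,
						  len, DMA_PREP_INTERRUPT);
	if (!tx) {
		dma_unmap_single(&pdev->dev, dma_src, len, DMA_TO_DEVICE);
		dma_release_channel(chan);
		return -ENOMEM;
	}

	cookie = dmaengine_submit(tx);
	dma_async_issue_pending(chan);
	dma_sync_wait(chan, cookie);

	dma_unmap_single(&pdev->dev, dma_src, len, DMA_TO_DEVICE);
	dma_release_channel(chan);
	return 0;
}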
One problem: when a PCI transfer fails (I am using an NTB to connect to
another system, and the remote system powers down), ioatdma causes a
kernel oops.

BUG_ON(is_ioat_bug(chanerr));
in drivers/dma/ioat/dma_v3.c, line 365

It seems that the hardware reports IOAT_CHANERR_DEST_ADDR_ERR, and the
driver can't recover from this situation.
What does dma-slave mean? Is it like the DMA_SLAVE capability flag used
in other DMA drivers?
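
The pattern I have in mind from other slave-capable drivers is roughly the
following (only an illustrative sketch, not ioatdma code; the channel,
scatterlist and FIFO bus address are hypothetical):

#include <linux/dmaengine.h>
#include <linux/scatterlist.h>

/*
 * Typical slave-DMA setup as found in peripheral drivers: configure the
 * fixed device-side address, then prepare a mem-to-device transfer.
 */
static int slave_tx_sketch(struct dma_chan *chan, struct scatterlist *sgl,
			   unsigned int sg_len, dma_addr_t fifo_bus_addr)
{
	struct dma_slave_config cfg = {
		.direction	= DMA_TO_DEVICE,	/* memory -> device */
		.dst_addr	= fifo_bus_addr,	/* fixed peripheral FIFO address */
		.dst_addr_width	= DMA_SLAVE_BUSWIDTH_4_BYTES,
		.dst_maxburst	= 16,
	};
	struct dma_async_tx_descriptor *tx;

	if (dmaengine_slave_config(chan, &cfg))
		return -EINVAL;

	tx = chan->device->device_prep_slave_sg(chan, sgl, sg_len,
						DMA_TO_DEVICE,
						DMA_PREP_INTERRUPT);
	if (!tx)
		return -ENOMEM;

	dmaengine_submit(tx);
	dma_async_issue_pending(chan);
	return 0;
}

If that is the idea, I suppose ioatdma would need something similar to
describe a fixed MMIO destination address.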

