    Date: 2007-06-20
    Subject: Re: [Intel IOMMU 06/10] Avoid memory allocation failures in dma map api calls
    On Wed, Jun 20, 2007 at 10:08:51PM +0200, Peter Zijlstra wrote:
    > On Wed, 2007-06-20 at 12:14 -0700, Arjan van de Ven wrote:
    > > Peter Zijlstra wrote:
    > > > So a reclaim context (kswapd and direct reclaim) set PF_MEMALLOC to
    > > > ensure they themselves will not block on a memory allocation. And it is
    > > > understood that these code paths have a bounded memory footprint.
    > >
    > >
    > > that's a too simplistic view though; what happens is that kswapd will
    > > queue the IO, but the irq context will then take the IO from the queue
    > > and do the DMA mapping... which needs the memory.....

    As Arjan says: a reclaim context sets the PF_MEMALLOC flag and
    submits the I/O, but the controller driver may decide to queue it;
    later, in interrupt context, the driver dequeues the I/O and calls
    the IOMMU driver to map the DMA physical address, and that DMA map
    API call may itself need memory. Hence PF_MEMALLOC set by the
    reclaim context should take effect from interrupt context too; if
    it does not, that needs to be fixed.
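    A minimal sketch (paraphrasing the allocator's check of this era,
    not quoting mm/page_alloc.c) of why the flag does not carry over
    into interrupt context as things stand:

        #include <linux/sched.h>
        #include <linux/hardirq.h>

        static int can_dip_into_reserves(void)
        {
                /* PF_MEMALLOC lives on the task; inside an IRQ,
                 * "current" is whichever task happened to be running
                 * when the interrupt arrived, so the exemption the
                 * reclaim task had is not reliably visible here. */
                return !in_interrupt() && (current->flags & PF_MEMALLOC);
        }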

    >
    > Right, but who stops some unrelated interrupt handler from completely
    > depleting memory?
    The DMA map APIs exposed by the IOMMU are called by the
    storage/network controller driver, and it is in this IOMMU driver
    that we are talking about allocating memory. So we have no clue how
    much I/O the controller above us is capable of submitting. All we
    do is map the DMA physical address and hand the caller the DMA
    virtual address, and it is in this process that we may need some
    memory.
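    An illustrative sketch of where that memory goes: installing a new
    translation may require a fresh page-table page. The helper below
    is a hypothetical stand-in for the intel-iommu map path, not the
    actual code:

        #include <linux/gfp.h>

        /* Hypothetical helper: allocate a zeroed page for an IOMMU
         * page table.  The map hook can run in interrupt context, so
         * only GFP_ATOMIC is usable here, and it can fail. */
        static unsigned long alloc_pte_page(void)
        {
                return get_zeroed_page(GFP_ATOMIC);
        }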

    >
    > What I'm saying is that there should be some coupling between the
    > reclaim context and the irq context doing work on its behalf.
    No: that information is not available when upper-level drivers
    call the standard DMA map APIs.
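    For reference, the interface at that boundary (the generic DMA API
    of this era) carries no hint of the caller's reclaim state:

        #include <linux/dma-mapping.h>

        /* Nothing here identifies I/O issued on behalf of reclaim,
         * so the IOMMU driver cannot couple to the reclaim context. */
        dma_addr_t dma_map_single(struct device *dev, void *cpu_addr,
                                  size_t size,
                                  enum dma_data_direction direction);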

    >
    > For instance, you know how many pages are in the queue, and which queue.
    > So you could preallocate enough memory to handle that many pages from
    > irq context and couple that reserve to the queue object. Then irq
    > context can use that memory to do the work.

    As I said earlier, we are not the storage or network controller
    driver, and we have no idea how much I/O the controller is capable
    of submitting.
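    The kind of per-queue reserve Peter describes corresponds to the
    kernel's existing mempool API; a minimal sketch, with a
    hypothetical sizing constant and entry type, though as said above
    the IOMMU driver has no queue object to size it against:

        #include <linux/mempool.h>

        #define MAX_INFLIGHT_MAPPINGS 128       /* hypothetical bound */

        static mempool_t *mapping_pool;

        /* Pre-size a reserve so the IRQ path can draw from it even
         * when the page allocator is under pressure. */
        static int init_mapping_reserve(void)
        {
                mapping_pool = mempool_create_kmalloc_pool(
                                MAX_INFLIGHT_MAPPINGS, sizeof(u64));
                return mapping_pool ? 0 : -ENOMEM;
        }

        /* IRQ context: falls back to the reserve if kmalloc fails. */
        static void *alloc_mapping_entry(void)
        {
                return mempool_alloc(mapping_pool, GFP_ATOMIC);
        }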


    Again, this patch is a best effort not to fail the DMA map API
    call. If it fails for lack of memory, we have no choice but to
    return failure to the upper-level driver. But today's upper-level
    drivers are not capable of handling failures from the DMA map
    APIs, hence this best effort not to fail the call.
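    The check callers would need does exist in the API (the
    single-argument form below is current as of this era; later kernels
    also take the device as the first argument), but most drivers
    simply never make it:

        #include <linux/dma-mapping.h>

        static int submit_buffer(struct device *dev, void *buf,
                                 size_t len)
        {
                dma_addr_t handle = dma_map_single(dev, buf, len,
                                                   DMA_TO_DEVICE);

                /* The error path most drivers of the day lacked. */
                if (dma_mapping_error(handle))
                        return -ENOMEM;

                /* ... hand "handle" to the hardware ... */
                return 0;
        }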

    thanks,
    -Anil

