Subject: Re: TTM page pool allocator
From: Jerome Glisse <glisse@freedesktop.org>
Date: Fri, 26 Jun 2009
On Fri, 2009-06-26 at 10:00 +1000, Dave Airlie wrote:
> On Thu, Jun 25, 2009 at 10:01 PM, Jerome Glisse <glisse@freedesktop.org> wrote:
> > Hi,
> >
> > Thomas, I attach a reworked page pool allocator based on Dave's work;
> > this one should be OK with TTM cache status tracking. It definitely
> > helps on AGP systems; now the bottleneck is in Mesa's vertex DMA
> > allocation.
> >
>
> My original version kept a list of WB pages as well; this proved to be
> quite a useful optimisation on my test systems when I implemented it.
> Without it I was spending ~20% of my CPU getting free pages; granted,
> I always used WB pages on PCIE/IGP systems.
>
> Another optimisation I made at the time was around the populate call
> (not sure if this is what still happens):
>
> Allocate a 64K local BO for DMA object.
> Write into the first 5 pages from userspace - get WB pages.
> Bind to GART, swap those 5 pages to WC + flush.
> Then populate the rest with WC pages from the list.
>
> Granted I think allocating WC in the first place from the pool might
> work just as well since most of the DMA buffers are write only.
>
> Dave.
>
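
A rough, standalone sketch (plain C with stubbed helpers) of the flow Dave
describes above: the first few pages of a 64K buffer are written while still
write-back (WB), switched to write-combined (WC) with a single flush at bind
time, and the untouched remainder is taken pre-converted from a WC pool.
Names like set_page_wc(), pool_take_wc() and gart_bind() are made up for
illustration; they are not the real TTM/AGP entry points.

#include <stdio.h>

#define BO_PAGES 16                          /* 64K buffer, 4K pages */

enum caching { WB, WC };

struct page { enum caching caching; };

static struct page wc_pool[64];              /* pretend pool of pre-converted WC pages */
static unsigned wc_pool_head;

static struct page *pool_take_wc(void)
{
	wc_pool[wc_pool_head].caching = WC;
	return &wc_pool[wc_pool_head++];
}

static void set_page_wc(struct page *p) { p->caching = WC; }
static void flush_caches(void)          { puts("flush"); }

static void gart_bind(struct page *p, unsigned idx)
{
	printf("bind page %u (%s)\n", idx, p->caching == WC ? "WC" : "WB");
}

/* Convert only the pages userspace actually wrote; pool the rest. */
static void bo_populate_and_bind(struct page **pages, unsigned written)
{
	unsigned i;

	for (i = 0; i < written; i++)
		set_page_wc(pages[i]);       /* dirty WB pages -> WC */
	flush_caches();                      /* one flush for the whole batch */

	for (; i < BO_PAGES; i++)
		pages[i] = pool_take_wc();   /* untouched pages come from the WC pool */

	for (i = 0; i < BO_PAGES; i++)
		gart_bind(pages[i], i);
}

int main(void)
{
	static struct page wb[5];            /* five pages written from userspace as WB */
	struct page *pages[BO_PAGES] = { &wb[0], &wb[1], &wb[2], &wb[3], &wb[4] };

	bo_populate_and_bind(pages, 5);
	return 0;
}

Allocating everything WC from the pool up front, as Dave suggests at the end,
would collapse the first two loops into the third.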

I think it's better to fix userspace not to allocate as many buffers per
frame as it does now, rather than keeping a pool of WB pages. I removed
that pool because on my 64M box memory is getting tight; we need to
compute the number of pages we pool based on the amount of memory. Also,
I think it's OK to assume that page allocation is fast enough.
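
Roughly what sizing the pool from total memory could look like; the 1/64
fraction, the caps, and the sysconf() stand-in for the kernel's total-RAM
count are illustrative assumptions, not figures from the patch.

#include <stdio.h>
#include <unistd.h>

/* Scale the pool with installed RAM instead of using a fixed count. */
static unsigned long pool_size_pages(void)
{
	unsigned long total = (unsigned long)sysconf(_SC_PHYS_PAGES);
	unsigned long size  = total / 64;   /* at most ~1.5% of RAM */

	if (size > 256)                     /* hard cap on big machines */
		size = 256;
	if (size < 16)                      /* small floor on tiny boxes */
		size = 16;
	return size;
}

int main(void)
{
	printf("pool size: %lu pages\n", pool_size_pages());
	return 0;
}

With these made-up numbers a 64M box would pool 256 pages (1MB), and smaller
systems would scale down further.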

I am reworking the patch with Thomas's latest comments and will post a
new one after a bit of testing.

Cheers,
Jerome


