Subject: Re: [PATCH v3 3/3] drm/vgem: use normal cached mmap'ings

Rob Clark <robdclark@gmail.com> writes:

> From: Rob Clark <robdclark@chromium.org>
>
> Since there is no real device associated with VGEM, it is impossible to
> end up with appropriate dev->dma_ops, meaning that we have no way to
> invalidate the shmem pages allocated by VGEM. So, at least on platforms
> without drm_clflush_pages(), we end up with corruption when cache lines
> from previous usage of VGEM bo pages get evicted to memory.
>
> The only sane option is to use cached mappings.
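
For reference, the shape of the change under discussion is roughly the
following. This is a sketch, not the actual patch; the function name and
body are illustrative, assuming the usual shmem-backed GEM mmap path:

	#include <linux/fs.h>
	#include <linux/mm.h>
	#include <drm/drm_gem.h>

	/* Illustrative only: the write-combined vs. cached distinction. */
	static int vgem_cached_mmap_sketch(struct file *filp,
					   struct vm_area_struct *vma)
	{
		int ret = drm_gem_mmap(filp, vma);

		if (ret)
			return ret;

		/*
		 * Old behaviour: force an uncached, write-combined mapping:
		 *
		 *	vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
		 *
		 * Cached behaviour: leave vm_page_prot as drm_gem_mmap()
		 * set it, since vgem has no dev->dma_ops (and some
		 * platforms have no drm_clflush_pages()) with which to
		 * keep an uncached view coherent.
		 */
		return 0;
	}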

This may be an improvement, but...

pin/unpin only happens when attaching/closing the dma-buf, right? So,
great, you flushed the cached mapping once after exporting the vgem
dma-buf to the actual GPU device, but from then on you still have no
interface for getting coherent access through VGEM's mapping, which
still exists.
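
For contrast, the CPU-access bracketing that does exist goes through the
dma-buf fd itself, via DMA_BUF_IOCTL_SYNC; nothing equivalent hangs off
vgem's own GEM-handle mmap. A userspace sketch (the helper names are
hypothetical; only the ioctl and its flags are real uapi):

	#include <linux/dma-buf.h>
	#include <sys/ioctl.h>

	/* Bracket CPU access to a mapped dma-buf so the kernel can
	 * flush/invalidate caches around device access. */
	static int cpu_access_begin(int dmabuf_fd)
	{
		struct dma_buf_sync sync = {
			.flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_RW,
		};

		return ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);
	}

	static int cpu_access_end(int dmabuf_fd)
	{
		struct dma_buf_sync sync = {
			.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_RW,
		};

		return ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);
	}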

I feel like this is papering over something that's really just broken,
and we should stop providing VGEM just because someone wants to write
dma-buf test code without driver-specific BO alloc ioctl code.