Subject: Re: [PATCH v2] vfio/type1: Limit DMA mappings per container
On Tue, Apr 02, 2019 at 10:15:38AM -0600, Alex Williamson wrote:
> Memory backed DMA mappings are accounted against a user's locked
> memory limit, including multiple mappings of the same memory. This
> accounting bounds the number of such mappings that a user can create.
> However, DMA mappings that are not backed by memory, such as DMA
> mappings of device MMIO via mmaps, do not make use of page pinning
> and therefore do not count against the user's locked memory limit.
> These mappings still consume memory, but the memory is not well
> associated with the process for the purpose of OOM-killing a task.
>
> To add bounding on this use case, we introduce a limit to the total
> number of concurrent DMA mappings that a user is allowed to create.
> This limit is exposed as a tunable module option where the default
> value of 64K is expected to be well in excess of any reasonable use
> case (a large virtual machine configuration would typically only make
> use of tens of concurrent mappings).
>
> This fixes CVE-2019-3882.
>
> Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
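
As a rough illustration of the mechanism described above (a per-container
count of concurrent DMA mappings, capped by a tunable module parameter), a
minimal sketch might look like the following. The identifiers here
(dma_mapping_limit, example_container, dma_avail) are illustrative and are
not taken from the actual patch:

/*
 * Minimal sketch, not the actual patch: a per-container counter of
 * concurrent DMA mappings, capped by a tunable module parameter.
 */
#include <linux/module.h>
#include <linux/errno.h>

static unsigned int dma_mapping_limit = 65536;	/* the "64K" default above */
module_param(dma_mapping_limit, uint, 0644);
MODULE_PARM_DESC(dma_mapping_limit,
		 "Maximum number of concurrent user DMA mappings per container");

struct example_container {
	unsigned int dma_avail;		/* initialized to dma_mapping_limit */
};

/* Called under the container lock when a new DMA mapping is requested. */
static int example_dma_map_account(struct example_container *c)
{
	if (!c->dma_avail)
		return -ENOSPC;		/* limit reached; reject the mapping */
	c->dma_avail--;
	return 0;
}

/* Called under the container lock when an existing mapping is removed. */
static void example_dma_unmap_account(struct example_container *c)
{
	c->dma_avail++;
}

Since such a limit counts mapping entries rather than bytes or pinned pages,
a single mapping of even a very large region would still consume only one
entry.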

Have you tested with GPU passthrough? GPUs have huge BARs, from
hundreds of megabytes to gigabytes (some drivers resize them to
cover the whole GPU memory), and drivers need to map those to work
properly. I am not sure what path is taken on the host when a guest
mmaps an MMIO BAR, but I just thought I would point that out.

Cheers,
Jérôme
