Subject: RE: [RFC 1/2] vfio/pci: keep the prefetchable attribute of a BAR region in VMA
Hi Marc, 

> -----Original Message-----
> From: Marc Zyngier <maz@kernel.org>
> Sent: Friday, April 30, 2021 10:31 AM
> On Fri, 30 Apr 2021 15:58:14 +0100,
> Shanker R Donthineni <sdonthineni@nvidia.com> wrote:
> >
> > Hi Marc,
> >
> > On 4/30/21 6:47 AM, Marc Zyngier wrote:
> > >
> > >>>> We've two concerns here:
> > >>>> - Performance impacts for pass-through devices.
> > >>>> - The definition of ioremap_wc() function doesn't match the
> > >>>> host kernel on ARM64
> > >>> Performance I can understand, but I think you're also using it to
> > >>> mask a driver bug which should be resolved first. Thank
> > >> We’ve already instrumented the driver code and found the code path
> > >> for the unaligned accesses. We’ll fix this issue if it’s not
> > >> following WC semantics.
> > >>
> > >> Fixing the performance concern will be under KVM stage-2 page-table
> > >> control. We're looking for a guidance/solution for updating stage-2
> > >> PTE based on PCI-BAR attribute.
> > > Before we start discussing the *how*, I'd like to clearly understand
> > > what *arm64* memory attributes you are relying on. We already have
> > > established that the unaligned access was a bug, which was the
> > > biggest argument in favour of NORMAL_NC. What are the other
> > > requirements?
> > Sorry, my earlier response was not complete...
> >
> > The ARMv8 architecture has two features, Gathering and Reordering of
> > transactions, that are very important from a performance point of
> > view. Small inline packets for NIC cards and accesses to a GPU's
> > frame buffer are CPU-bound operations, and we want to take advantage
> > of the GRE features to achieve higher performance.
> >
> > Both of these features are disabled for prefetchable BARs in a VM
> > because the memory type MT_DEVICE_nGnRE is enforced at stage 2.
>
> Right, so Normal_NC is a red herring, and it is Device_GRE that you really are
> after, right?
>
I think Device-GRE has some practical problems:

1. A lot of userspace code that is used to getting write-combined mappings
of GPU memory from kernel drivers does memcpy/memset on those mappings,
which can emit unaligned ldp/stp accesses that fault on Device memory
types (see the sketch after this list). From a quick search I didn't find
a memcpy_io or memset_io in glibc. Perhaps some other functions are
available, but a lot of userspace applications that work on x86 and ARM
bare metal won't work in ARM VMs without such changes. Changing all of
userspace may not always be practical, especially when linking against
prebuilt binaries.

2. Even when an application is not using memset/memcpy directly, gcc may
insert a builtin memcpy/memset with the same problem.

3. Recompiling all applications with gcc -mstrict-align has performance
costs: in our experiments it reliably resulted in larger code size and a
3-5% performance decrease. It is also not always practical to recompile
all of userspace, depending on who owns the code, linked binaries, etc.
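
To make points 1 and 2 concrete, here is a minimal userspace sketch (the
fd, offset and BAR size below are hypothetical) of the pattern that works
on bare metal but can fault inside a VM:

/*
 * Hypothetical example: map a prefetchable BAR through vfio and touch
 * it with ordinary libc/compiler code, as GPU userspace commonly does.
 */
#include <stdint.h>
#include <string.h>
#include <sys/types.h>
#include <sys/mman.h>

#define BAR_SIZE	(2 * 1024 * 1024)	/* assumed BAR size */

void fill_framebuffer(int vfio_fd, off_t bar_offset, const void *src)
{
	/* On bare metal this mapping ends up Normal-NC (write-combined). */
	void *fb = mmap(NULL, BAR_SIZE, PROT_READ | PROT_WRITE,
			MAP_SHARED, vfio_fd, bar_offset);
	if (fb == MAP_FAILED)
		return;

	/*
	 * Point 1: glibc memcpy on arm64 may use unaligned ldp/stp.
	 * That is legal on Normal memory but alignment-faults on any
	 * Device memory type, so this can crash in a VM where stage-2
	 * forces Device-nGnRE (and would still crash with Device-GRE).
	 */
	memcpy(fb, src, BAR_SIZE);

	/*
	 * Point 2: even without an explicit libc call, the compiler may
	 * expand a plain struct copy into a builtin memcpy with the
	 * same problem.
	 */
	struct desc { uint8_t hdr[64]; } d = { { 0 } };
	*(struct desc *)fb = d;
}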

From the KVM/arm64 point of view, what is it about Normal-NC at stage 2
for a prefetchable BAR (however KVM gets the hint, whether from userspace
or the VMA) that is undesirable compared with Device-GRE? I couldn't think
of a difference visible to devices whether the combining, prefetching or
reordering happened because of one memory type or the other.
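
For reference, here is how the three memory types under discussion encode
architecturally (the names follow the kernel's MAIR_ATTR_* convention;
this is just an illustrative table, not a patch):

/*
 *   Attribute       G R E    Meaning
 *   Device-nGnRE    n n E    no Gathering, no Reordering, Early ack
 *   Device-GRE      G R E    Gathering, Reordering, Early ack
 *   Normal-NC       -        Normal non-cacheable
 */
#define MAIR_ATTR_DEVICE_nGnRE	0x04UL	/* what stage-2 forces today */
#define MAIR_ATTR_DEVICE_GRE	0x0cUL	/* gathers/reorders, still Device */
#define MAIR_ATTR_NORMAL_NC	0x44UL	/* what ioremap_wc() gives on bare metal */

Note that even Device-GRE keeps the Device-type alignment-check behaviour:
Normal memory, cacheable or not, is what permits the unaligned accesses
that memcpy/memset generate, which is the crux of point 1 above.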

> Now, I'm not convinced that we can do that directly from vfio in a device-
> agnostic manner. It is userspace that places the device in the guest's
> memory, and I have the ugly feeling that userspace needs to be in control of
> memory attributes.
>
> Otherwise, we change the behaviour for all existing devices that have
> prefetchable BARs, and I don't think that's an acceptable move (userspace
> ABI change).
>
> M.
>
> --
> Without deviation from the norm, progress is not possible.