Date: Fri, 14 Dec 2018
From: Dr. David Alan Gilbert <dgilbert@redhat.com>
Subject: Re: [PATCH 18/52] virtio-fs: Map cache using the values from the capabilities
* Cornelia Huck (cohuck@redhat.com) wrote:
> On Fri, 14 Dec 2018 13:44:34 +0000
> Stefan Hajnoczi <stefanha@redhat.com> wrote:
>
> > On Thu, Dec 13, 2018 at 01:38:23PM +0100, Cornelia Huck wrote:
> > > On Thu, 13 Dec 2018 13:24:31 +0100
> > > David Hildenbrand <david@redhat.com> wrote:
> > >
> > > > On 13.12.18 13:15, Dr. David Alan Gilbert wrote:
> > > > > * David Hildenbrand (david@redhat.com) wrote:
> > > > >> On 13.12.18 11:00, Dr. David Alan Gilbert wrote:
> > > > >>> * David Hildenbrand (david@redhat.com) wrote:
> > > > >>>> On 13.12.18 10:13, Dr. David Alan Gilbert wrote:
> > > > >>>>> * David Hildenbrand (david@redhat.com) wrote:
> > > > >>>>>> On 10.12.18 18:12, Vivek Goyal wrote:
> > > > >>>>>>> Instead of assuming a fixed BAR for the cache, use the
> > > > >>>>>>> values from the capabilities.
> > > > >>>>>>>
> > > > >>>>>>> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> > > > >>>>>>> ---
> > > > >>>>>>> fs/fuse/virtio_fs.c | 32 +++++++++++++++++---------------
> > > > >>>>>>> 1 file changed, 17 insertions(+), 15 deletions(-)
> > > > >>>>>>>
> > > > >>>>>>> diff --git a/fs/fuse/virtio_fs.c b/fs/fuse/virtio_fs.c
> > > > >>>>>>> index 60d496c16841..55bac1465536 100644
> > > > >>>>>>> --- a/fs/fuse/virtio_fs.c
> > > > >>>>>>> +++ b/fs/fuse/virtio_fs.c
> > > > >>>>>>> @@ -14,11 +14,6 @@
> > > > >>>>>>> #include <uapi/linux/virtio_pci.h>
> > > > >>>>>>> #include "fuse_i.h"
> > > > >>>>>>>
> > > > >>>>>>> -enum {
> > > > >>>>>>> - /* PCI BAR number of the virtio-fs DAX window */
> > > > >>>>>>> - VIRTIO_FS_WINDOW_BAR = 2,
> > > > >>>>>>> -};
> > > > >>>>>>> -
> > > > >>>>>>> /* List of virtio-fs device instances and a lock for the list */
> > > > >>>>>>> static DEFINE_MUTEX(virtio_fs_mutex);
> > > > >>>>>>> static LIST_HEAD(virtio_fs_instances);
> > > > >>>>>>> @@ -518,7 +513,7 @@ static int virtio_fs_setup_dax(struct virtio_device *vdev, struct virtio_fs *fs)
> > > > >>>>>>> struct dev_pagemap *pgmap;
> > > > >>>>>>> struct pci_dev *pci_dev;
> > > > >>>>>>> phys_addr_t phys_addr;
> > > > >>>>>>> - size_t len;
> > > > >>>>>>> + size_t bar_len;
> > > > >>>>>>> int ret;
> > > > >>>>>>> u8 have_cache, cache_bar;
> > > > >>>>>>> u64 cache_offset, cache_len;
> > > > >>>>>>> @@ -551,17 +546,13 @@ static int virtio_fs_setup_dax(struct virtio_device *vdev, struct virtio_fs *fs)
> > > > >>>>>>> }
> > > > >>>>>>>
> > > > >>>>>>> /* TODO handle case where device doesn't expose BAR? */
> > > > >>>>>>
> > > > >>>>>> For virtio-pmem we decided not to go via BARs, as this would effectively
> > > > >>>>>> make it only usable for virtio-pci implementers. Instead, we are going
> > > > >>>>>> to export the applicable physical device region directly (e.g.
> > > > >>>>>> phys_start, phys_size in the virtio config), so it is decoupled from PCI
> > > > >>>>>> details. Doing the same for virtio-fs would allow e.g. virtio-ccw
> > > > >>>>>> to eventually make use of this as well.
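
As a rough sketch, the transport-independent layout being described here
could look something like the following in the device's virtio config
space (struct and field names are purely illustrative, not an agreed ABI):

#include <linux/types.h>	/* for __le64 */

/* Illustrative only: expose the cache window directly via virtio config
 * space, the way virtio-pmem plans to expose its region, rather than
 * pointing at a PCI BAR.
 */
struct virtio_fs_cache_config_sketch {
	__le64 phys_start;	/* physical base of the cache window */
	__le64 phys_size;	/* length of the cache window in bytes */
};
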
> > > > >>>>>
> > > > >>>>> That makes it a very odd-looking PCI device; I can see that with
> > > > >>>>> virtio-pmem it makes some sense, given that its job is to expose
> > > > >>>>> arbitrary chunks of memory.
> > > > >>>>>
> > > > >>>>> Dave
> > > > >>>>
> > > > >>>> Well, the fact that you are
> > > > >>>>
> > > > >>>> - including <uapi/linux/virtio_pci.h>
> > > > >>>> - adding pci related code
> > > > >>>>
> > > > >>>> in/to fs/fuse/virtio_fs.c
> > > > >>>>
> > > > >>>> tells me that these properties might be better communicated on the
> > > > >>>> virtio layer, not on the PCI layer.
> > > > >>>>
> > > > >>>> Or do you really want to glue virtio-fs to virtio-pci for all eternity?
> > > > >>>
> > > > >>> No, these need cleaning up; and the split within the BAR
> > > > >>> is probably going to change to be communicated via the virtio layer
> > > > >>> rather than PCI capabilities. However, I don't want to make our PCI
> > > > >>> device look odd just to ease portability to non-PCI devices; so it's
> > > > >>> right to make the split appropriately, but still to use PCI BARs
> > > > >>> for what they were designed for.
> > > > >>>
> > > > >>> Dave
> > > > >>
> > > > >> Let's discuss after the cleanup. In general I am not convinced this is
> > > > >> the right thing to do. Using virtio-pci for anything other than pure
> > > > >> transport smells like bad design to me (well, I am no virtio expert
> > > > >> after all ;) ), no matter what PCI BARs were designed for. If we can't
> > > > >> get the same thing running with e.g. virtio-ccw or virtio-whatever, it is
> > > > >> broken by design (or an add-on that is tightly glued to virtio-pci, if
> > > > >> that is the general idea).
> > > > >
> > > > > I'm sure we can find alternatives for virtio-*, so I wouldn't expect
> > > > > it to be glued to virtio-pci.
> > > > >
> > > > > Dave
> > > >
> > > > As s390x does not have the concept of memory-mapped I/O (RAM is RAM,
> > > > nothing else), this is not architected. virtio-ccw can therefore not
> > > > define anything similar. However, in virtual environments we
> > > > can do whatever we want on top of the pure transport (e.g. on the virtio
> > > > layer).
> > > >
> > > > Conny can correct me if I am wrong.
> > >
> > > I don't think you're wrong, but I haven't read the code yet and I'm
> > > therefore not aware of the purpose of this BAR.
> > >
> > > Generally, if there is a memory location shared between host and guest,
> > > we need a way to communicate its location, which will likely differ
> > > between transports. For ccw, I could imagine a new channel command
> > > dedicated to exchanging configuration information (similar to what
> > > exists today to communicate the locations of virtqueues), but I'd
> > > rather not go down this path.
> > >
> > > Without reading the code/design further, can we use one of the
> > > following instead of a BAR:
> > > - a virtqueue;
> > > - something in config space?
> > > That would be implementable by any virtio transport.
> >
> > The way I think about this is that we wish to extend the VIRTIO device
> > model with the concept of shared memory. virtio-fs, virtio-gpu, and
> > virtio-vhost-user all have requirements for shared memory.
> >
> > This seems like a transport-level issue to me. PCI supports
> > memory-mapped I/O and that's the right place to do it. If you try to
> > put it into config space or the virtqueue, you'll end up with something
> > that cannot be realized as a PCI device because it bypasses PCI bus
> > address translation.
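
For context, the virtio-pci transport already locates structures inside a
BAR with a capability of roughly this shape (the virtio 1.0 layout of
struct virtio_pci_cap from <uapi/linux/virtio_pci.h>): the capability
names a BAR plus an offset and length within it. The cache window in this
series is presumably described with a similar, device-specific capability
extended to 64-bit lengths:

/* virtio 1.0 PCI capability: locates a structure within a BAR. */
struct virtio_pci_cap {
	__u8 cap_vndr;		/* generic PCI field: PCI_CAP_ID_VNDR */
	__u8 cap_next;		/* generic PCI field: next capability pointer */
	__u8 cap_len;		/* generic PCI field: capability length */
	__u8 cfg_type;		/* identifies the structure being located */
	__u8 bar;		/* which BAR holds the structure */
	__u8 padding[3];	/* pad to a full dword */
	__le32 offset;		/* offset of the structure within the BAR */
	__le32 length;		/* length of the structure, in bytes */
};
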
> >
> > If CCW needs a side-channel, that's fine. But that side-channel is a
> > CCW-specific mechanism and probably doesn't apply to all other
> > transports.
>
> But virtio-gpu works with ccw right now (I haven't checked what it
> uses); can virtio-fs use an equivalent method?
>
> If there's a more generic case to be made for extending virtio devices
> with a way to handle shared memory, a ccw for that would be fine. I
> just want to avoid adding new ccws for everything as the namespace is
> not infinite.

In our case we've got somewhere between 0 and 3 ranges of memory, and I was
specifying them as PCI capabilities; however, Gerd's suggestion was that
it would be better to just use one BAR and then have something as part of
virtio or the like to split them up.
If we do that, then we could have something of the form

(index, base, length)

for each of the regions, where in the PCI case 'index' means a BAR and
in CCW it means something else. (For MMIO it's probably irrelevant and
the base is probably a physical address.)
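
A minimal sketch of what such a transport-neutral description could look
like, one entry per shared range (names purely illustrative, nothing
agreed on yet):

#include <linux/types.h>	/* for __le32/__le64 */

/* Illustrative sketch only: one entry per shared memory range.
 * How 'index' is resolved is transport-specific: for PCI it would
 * name a BAR, for CCW something else entirely, and for MMIO 'base'
 * would simply be a guest-physical address.
 */
struct virtio_shm_range_sketch {
	__le32 index;	/* transport-specific handle (e.g. PCI BAR number) */
	__le64 base;	/* offset within that handle, or a physical address */
	__le64 len;	/* length of the range in bytes */
};
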

Dave

--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
