    From: "Michael S. Tsirkin" <mst@redhat.com>
    Date: Tue, 12 Mar 2019
    Subject: Re: [RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()

    On Tue, Mar 12, 2019 at 03:17:00PM +0800, Jason Wang wrote:
    >
    > On 2019/3/12 11:52 AM, Michael S. Tsirkin wrote:
    > > On Tue, Mar 12, 2019 at 10:59:09AM +0800, Jason Wang wrote:
    > > > On 2019/3/12 2:14 AM, David Miller wrote:
    > > > > From: "Michael S. Tsirkin" <mst@redhat.com>
    > > > > Date: Mon, 11 Mar 2019 09:59:28 -0400
    > > > >
    > > > > > On Mon, Mar 11, 2019 at 03:13:17PM +0800, Jason Wang wrote:
    > > > > > > On 2019/3/8 10:12 PM, Christoph Hellwig wrote:
    > > > > > > > On Wed, Mar 06, 2019 at 02:18:07AM -0500, Jason Wang wrote:
    > > > > > > > > This series tries to access virtqueue metadata through kernel virtual
    > > > > > > > > addresses instead of copy_user() and friends, since those have too much
    > > > > > > > > overhead: checks, speculation barriers, or even hardware feature
    > > > > > > > > toggling. This is done by setting up kernel addresses through vmap() and
    > > > > > > > > registering an MMU notifier for invalidation.
    > > > > > > > >
    > > > > > > > > Tests show about a 24% improvement in TX PPS. TCP_STREAM doesn't show
    > > > > > > > > an obvious improvement.
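
    (For context, the approach in the series boils down to the sketch below;
    map_metadata() is an illustrative name, not the function in the patches,
    and error unwinding and the notifier setup are omitted.)

	#include <linux/mm.h>		/* get_user_pages_fast() */
	#include <linux/vmalloc.h>	/* vmap() */

	/*
	 * Pin the userspace pages backing the virtqueue metadata and
	 * give the kernel a linear alias it can dereference directly,
	 * instead of paying for copy_user() on every access.  An MMU
	 * notifier (not shown) tears the alias down when userspace
	 * invalidates the mapping.
	 */
	static void *map_metadata(unsigned long uaddr, int npages,
				  struct page **pages)
	{
		/* write = 1: the kernel will update the metadata in place */
		if (get_user_pages_fast(uaddr, npages, 1, pages) != npages)
			return NULL;	/* real code must unpin a partial gup */
		return vmap(pages, npages, VM_MAP, PAGE_KERNEL);
	}
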
    > > > > > > > How is this going to work for CPUs with virtually tagged caches?
    > > > > > > Anything specific that worries you?
    > > > > > If caches have virtual tags then kernel and userspace view of memory
    > > > > > might not be automatically in sync if they access memory
    > > > > > through different virtual addresses. You need to do things like
    > > > > > flush_cache_page, probably multiple times.
    > > > > "flush_dcache_page()"
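
    (For reference, the usual pattern when the kernel writes to a page that
    userspace also maps: on a virtually tagged D-cache the kernel alias and
    the user alias may occupy different cache lines, so the write has to be
    flushed explicitly. A sketch with an illustrative name, not vhost code:)

	#include <linux/highmem.h>	/* kmap_atomic(), flush_dcache_page() */
	#include <linux/string.h>	/* memcpy() */

	static void write_user_visible(struct page *page, size_t off,
				       const void *src, size_t len)
	{
		void *kaddr = kmap_atomic(page);

		memcpy(kaddr + off, src, len);	/* write via the kernel alias */
		kunmap_atomic(kaddr);
		flush_dcache_page(page);	/* sync with the user alias */
	}
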
    > > >
    > > > I get this. Then I think the current set_bit_to_user() is suspect; we
    > > > are probably missing a flush_dcache_page() there:
    > > >
    > > >
    > > > static int set_bit_to_user(int nr, void __user *addr)
    > > > {
    > > >         unsigned long log = (unsigned long)addr;
    > > >         struct page *page;
    > > >         void *base;
    > > >         int bit = nr + (log % PAGE_SIZE) * 8;
    > > >         int r;
    > > >
    > > >         r = get_user_pages_fast(log, 1, 1, &page);
    > > >         if (r < 0)
    > > >                 return r;
    > > >         BUG_ON(r != 1);
    > > >         base = kmap_atomic(page);
    > > >         set_bit(bit, base);
    > > >         kunmap_atomic(base);
    > > >         set_page_dirty_lock(page);
    > > >         put_page(page);
    > > >         return 0;
    > > > }
    > > >
    > > > Thanks
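
    (Concretely, the missing flush would be a one-liner at the tail of
    set_bit_to_user() above; a sketch, untested, and per the end of this
    mail it turns out to be unnecessary, since kunmap already flushes the
    kernel alias on the affected architectures:)

	base = kmap_atomic(page);
	set_bit(bit, base);
	kunmap_atomic(base);
	flush_dcache_page(page);	/* the suggested addition */
	set_page_dirty_lock(page);
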
    > > I think you are right. The correct fix, though, is to re-implement
    > > it using asm with pagefault handling, not gup.
    >
    >
    > I agree, but that means introducing new asm helpers for all archs, which is
    > not trivial.

    We can have a generic implementation using kmap.
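
    Roughly like this; a sketch only, where __HAVE_ARCH_SET_BIT_TO_USER and
    arch_set_bit_to_user() are made-up names for illustration:

	/*
	 * Archs that can set a bit in user memory directly (asm with
	 * pagefault handling) provide their own helper; everyone else
	 * falls back to the gup + kmap path, where kunmap_atomic()
	 * flushes the kernel alias where the architecture needs it.
	 */
	#ifndef __HAVE_ARCH_SET_BIT_TO_USER
	static int arch_set_bit_to_user(int nr, void __user *addr)
	{
		unsigned long log = (unsigned long)addr;
		struct page *page;
		void *base;
		int r;

		r = get_user_pages_fast(log, 1, 1, &page);
		if (r != 1)
			return r < 0 ? r : -EFAULT;
		base = kmap_atomic(page);
		set_bit(nr + (log % PAGE_SIZE) * 8, base);
		kunmap_atomic(base);
		set_page_dirty_lock(page);
		put_page(page);
		return 0;
	}
	#endif
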

    > At least for -stable, we need the flush?
    >
    >
    > > Three atomic ops per bit is way too expensive.
    >
    >
    > Yes.
    >
    > Thanks

    See James's reply - I stand corrected: we do kunmap, so no need to flush.

    --
    MST
