Subject: Re: [RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()

On 2019/3/12 3:51 PM, Jason Wang wrote:
>
>> On 2019/3/12 1:14 PM, James Bottomley wrote:
>> On Tue, 2019-03-12 at 10:59 +0800, Jason Wang wrote:
>>>> On 2019/3/12 2:14 AM, David Miller wrote:
>>>> From: "Michael S. Tsirkin" <mst@redhat.com>
>>>> Date: Mon, 11 Mar 2019 09:59:28 -0400
>>>>
>>>>> On Mon, Mar 11, 2019 at 03:13:17PM +0800, Jason Wang wrote:
>>>>>> On 2019/3/8 10:12 PM, Christoph Hellwig wrote:
>>>>>>> On Wed, Mar 06, 2019 at 02:18:07AM -0500, Jason Wang wrote:
>>>>>>>> This series tries to access virtqueue metadata through a
>>>>>>>> kernel virtual address instead of the copy_user() friends,
>>>>>>>> since those carry too much overhead: checks, speculation
>>>>>>>> barriers, or even hardware feature toggling. This is done by
>>>>>>>> setting up a kernel address through vmap() and registering an
>>>>>>>> MMU notifier for invalidation.
>>>>>>>>
>>>>>>>> Tests show about a 24% improvement in TX PPS. TCP_STREAM
>>>>>>>> doesn't see an obvious improvement.
>>>>>>> How is this going to work for CPUs with virtually tagged caches?
>>>>>> Is there anything specific you're worried about?
>>>>> If caches have virtual tags, then the kernel and userspace views
>>>>> of memory might not be automatically in sync if they access
>>>>> memory through different virtual addresses. You need to do things
>>>>> like flush_cache_page, probably multiple times.
>>>> "flush_dcache_page()"
>>>
>>> I get this. Then I think the current set_bit_to_user() is
>>> suspicious; we are probably missing a flush_dcache_page() there:
>>>
>>>
>>> static int set_bit_to_user(int nr, void __user *addr)
>>> {
>>>           unsigned long log = (unsigned long)addr;
>>>           struct page *page;
>>>           void *base;
>>>           int bit = nr + (log % PAGE_SIZE) * 8;
>>>           int r;
>>>
>>>           r = get_user_pages_fast(log, 1, 1, &page);
>>>           if (r < 0)
>>>                   return r;
>>>           BUG_ON(r != 1);
>>>           base = kmap_atomic(page);
>>>           set_bit(bit, base);
>>>           kunmap_atomic(base);
>>>           set_page_dirty_lock(page);
>>>           put_page(page);
>>>           return 0;
>>> }
>> This sequence should be OK.  get_user_pages() contains a flush which
>> clears the cache above the user virtual address, so on kmap, the page
>> is coherent at the new alias.  On parisc at least, kunmap embodies a
>> flush_dcache_page() which pushes any changes in the cache above the
>> kernel virtual address back to main memory and makes it coherent again
>> for the user alias to pick it up.
>
>
> It would be good if kmap()/kunmap() could do this, but it looks like
> we can't assume that. For example, sparc's flush_dcache_page()


Sorry, I meant kunmap_atomic().

Thanks


> doesn't do flush_dcache_page(). And bio_copy_data_iter() does
> flush_dcache_page() after kunmap_atomic().
>
> Thanks
>
>
>>
>> James
>>
>
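
For concreteness, here is a minimal sketch of what that missing flush
would look like in set_bit_to_user(), assuming an architecture whose
kunmap_atomic() does not flush the kernel alias. The _flushed suffix is
invented for illustration; this simply follows the bio_copy_data_iter()
pattern cited above and is not the actual vhost code:

static int set_bit_to_user_flushed(int nr, void __user *addr)
{
        unsigned long log = (unsigned long)addr;
        struct page *page;
        void *base;
        int bit = nr + (log % PAGE_SIZE) * 8;
        int r;

        r = get_user_pages_fast(log, 1, 1, &page);
        if (r < 0)
                return r;
        BUG_ON(r != 1);
        base = kmap_atomic(page);
        set_bit(bit, base);
        kunmap_atomic(base);
        /* Write back the kernel-alias cache lines so that userspace,
         * reading through its own virtual address, sees the bit on
         * virtually tagged caches. */
        flush_dcache_page(page);
        set_page_dirty_lock(page);
        put_page(page);
        return 0;
}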
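
And for readers joining the thread, a rough, hypothetical sketch of the
approach the series itself takes (all names are invented and error
handling is simplified; the real patches differ): pin the user pages
backing the virtqueue metadata, build a kernel alias with vmap(), and
tear it down again when the registered MMU notifier reports that the
user mapping has been invalidated.

struct vq_meta_map {
        struct page **pages;    /* pinned user pages backing the metadata */
        int npages;
        void *kaddr;            /* kernel alias created by vmap() */
};

static int vq_meta_map_setup(struct vq_meta_map *m, unsigned long uaddr)
{
        int i, r;

        r = get_user_pages_fast(uaddr, m->npages, 1, m->pages);
        if (r != m->npages) {
                /* Drop any pages that were pinned before the failure. */
                while (r > 0)
                        put_page(m->pages[--r]);
                return r < 0 ? r : -EFAULT;
        }
        m->kaddr = vmap(m->pages, m->npages, VM_MAP, PAGE_KERNEL);
        if (!m->kaddr) {
                for (i = 0; i < m->npages; i++)
                        put_page(m->pages[i]);
                return -ENOMEM;
        }
        return 0;
}

/* Invoked from the MMU notifier's invalidation callback. */
static void vq_meta_map_teardown(struct vq_meta_map *m)
{
        int i;

        vunmap(m->kaddr);
        for (i = 0; i < m->npages; i++)
                put_page(m->pages[i]);
        m->kaddr = NULL;
}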
