Subject: Re: [PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker

On 2019/8/2 at 10:27 PM, Michael S. Tsirkin wrote:
> On Fri, Aug 02, 2019 at 09:46:13AM -0300, Jason Gunthorpe wrote:
>> On Fri, Aug 02, 2019 at 05:40:07PM +0800, Jason Wang wrote:
>>>> This must be a proper barrier, like a spinlock, mutex, or
>>>> synchronize_rcu.
>>>
>>> I started with synchronize_rcu(), but both you and Michael raised some
>>> concerns.
>> I've also idly wondered if calling synchronize_rcu() under the various
>> mm locks is a deadlock situation.
>>
>>> Then I try spinlock and mutex:
>>>
>>> 1) spinlock: adds a lot of overhead on the datapath, which leads to zero
>>> performance improvement.
>> I think the topic here is correctness, not performance improvement
> The topic is whether we should revert
> commit 7f466032dc9 ("vhost: access vq metadata through kernel virtual address")
>
> or keep it in. The only reason to keep it is performance.


Maybe it's time to introduce the config option?
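
Roughly what I have in mind (just a sketch, all names below are made up and
are not the current vhost helpers): a CONFIG_VHOST_DIRECT_MAP style option
that selects the accessor at build time, e.g.:

static inline int vhost_get_avail_idx(struct vhost_virtqueue *vq,
				      __virtio16 *idx)
{
#ifdef CONFIG_VHOST_DIRECT_MAP
	/* metadata already mapped into the kernel: plain load, no uaccess */
	*idx = READ_ONCE(vq->avail_kmap->idx);	/* avail_kmap is hypothetical */
	return 0;
#else
	/* the existing userspace-copy path */
	return __get_user(*idx, &vq->avail->idx);
#endif
}

That way the direct-mapping code could stay in the tree but be compiled out
until the synchronization question is settled.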


>
> Now as long as all this code is disabled anyway, we can experiment a
> bit.
>
> I personally feel we would be best served by having two code paths:
>
> - Access to VM memory directly mapped into kernel
> - Access to userspace
>
>
> Having it all cleanly split will allow a bunch of optimizations; for
> example, for years now we have planned to be able to process an incoming
> short packet directly on the softirq path, or an outgoing one directly
> within eventfd.


It's not hard, considering we already have our own accessors. But the
question is (as asked in another thread): do you want permanent GUP, or
to still use MMU notifiers?
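
For the permanent GUP variant, I'm thinking of something along these lines
(again just a sketch, the helper and field names are made up): pin the
metadata page once at setup time and keep a kernel mapping, so no MMU
notifier is needed at all, at the cost of the page never being migrated or
swapped:

static void *vhost_map_meta_page(unsigned long uaddr, struct page **pagep)
{
	int ret;

	/* pin once for the lifetime of the vq; FOLL_WRITE so the same
	 * helper also works for the used ring, which the kernel writes */
	ret = get_user_pages_fast(uaddr, 1, FOLL_WRITE, pagep);
	if (ret != 1)
		return ERR_PTR(ret < 0 ? ret : -EFAULT);

	/* assumes the page is not highmem, otherwise kmap() would be needed */
	return page_address(*pagep) + (uaddr & ~PAGE_MASK);
}

With MMU notifiers we instead keep the mapping revocable
(invalidate_range_start tears it down and the worker re-establishes it),
which is exactly where the synchronization problem in this thread comes
from.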

Thanks
