Subject: Re: [PATCH v3 tip/core/rcu 40/40] rcu: Make non-preemptive schedule be Tasks RCU quiescent state
On Fri, Sep 29, 2017 at 12:01:24PM +0200, Paolo Bonzini wrote:
> On 29/09/2017 11:30, Boqun Feng wrote:
> > On Thu, Sep 28, 2017 at 04:05:14PM +0000, Paul E. McKenney wrote:
> > [...]
> >>> __schedule+0x201/0x2240 kernel/sched/core.c:3292
> >>> schedule+0x113/0x460 kernel/sched/core.c:3421
> >>> kvm_async_pf_task_wait+0x43f/0x940 arch/x86/kernel/kvm.c:158
> >>
> >> It is kvm_async_pf_task_wait() that calls schedule(), but it carefully
> >> sets state to make that legal. Except...
> >>
> >>> do_async_page_fault+0x72/0x90 arch/x86/kernel/kvm.c:271
> >>> async_page_fault+0x22/0x30 arch/x86/entry/entry_64.S:1069
> >>> RIP: 0010:format_decode+0x240/0x830 lib/vsprintf.c:1996
> >>> RSP: 0018:ffff88003b2df520 EFLAGS: 00010283
> >>> RAX: 000000000000003f RBX: ffffffffb5d1e141 RCX: ffff88003b2df670
> >>> RDX: 0000000000000001 RSI: dffffc0000000000 RDI: ffffffffb5d1e140
> >>> RBP: ffff88003b2df560 R08: dffffc0000000000 R09: 0000000000000000
> >>> R10: ffff88003b2df718 R11: 0000000000000000 R12: ffff88003b2df5d8
> >>> R13: 0000000000000064 R14: ffffffffb5d1e140 R15: 0000000000000000
> >>> vsnprintf+0x173/0x1700 lib/vsprintf.c:2136
> >>
> >> We took a page fault in vsnprintf() while doing link_path_walk(),
> >> which looks to be within an RCU read-side critical section.
> >>
> >> Maybe the page fault confused lockdep?
> >>
> >> Sigh. It is going to be a real pain if all printk()s need to be
> >> outside of RCU read-side critical sections due to the possibility of
> >> page faults...
> >>
> >
> > Does this mean that whenever we get a page fault in an RCU read-side
> > critical section, we may hit this?
> >
> > Could we simply avoid calling schedule() in kvm_async_pf_task_wait() if the
> > faulting task is in an RCU read-side critical section, as follows?
> >
> > diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> > index aa60a08b65b1..291ea13b23d2 100644
> > --- a/arch/x86/kernel/kvm.c
> > +++ b/arch/x86/kernel/kvm.c
> > @@ -140,7 +140,7 @@ void kvm_async_pf_task_wait(u32 token)
> >  
> >  	n.token = token;
> >  	n.cpu = smp_processor_id();
> > -	n.halted = is_idle_task(current) || preempt_count() > 1;
> > +	n.halted = is_idle_task(current) || preempt_count() > 1 || rcu_preempt_depth();
> >  	init_swait_queue_head(&n.wq);
> >  	hlist_add_head(&n.link, &b->list);
> >  	raw_spin_unlock(&b->lock);

This works for PREEMPT=y kernels, but can silently break RCU read-side
critical sections on PREEMPT=n kernels.
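
The reason it is silent there, roughly (my paraphrase of the
rcu_preempt_depth() definitions in include/linux/rcupdate.h, from
memory rather than an exact quote):

	#ifdef CONFIG_PREEMPT_RCU
	/* PREEMPT=y: rcu_read_lock() maintains a per-task nesting count,
	 * so the proposed check can see the read-side critical section. */
	#define rcu_preempt_depth() (current->rcu_read_lock_nesting)
	#else
	/* PREEMPT=n: rcu_read_lock() is at most preempt_disable(), and
	 * there is no separate nesting counter to consult. */
	static inline int rcu_preempt_depth(void)
	{
		return 0;
	}
	#endif

So on PREEMPT=n the added "|| rcu_preempt_depth()" is always false, and
when CONFIG_PREEMPT_COUNT is also off, rcu_read_lock() leaves
preempt_count() untouched as well, so the existing check does not fire
either. kvm_async_pf_task_wait() then calls schedule(), which this
series makes report a quiescent state, ending the reader's
grace-period protection without any warning.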

> > (Add KVM folks and list Cced)
>
> Yes, that would work. Mind sending it as a proper patch?

Just out of curiosity, why is printk() being passed something that can
page fault?

Thanx, Paul
