Subject: Re: [resend][PATCH v2] mlock() doesn't wait to finish lru_add_drain_all()
On Tue, 13 Oct 2009 12:18:17 +0900 (JST) KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> wrote:

> The problem is in __lru_cache_add().
>
> ============================================================
> void __lru_cache_add(struct page *page, enum lru_list lru)
> {
>         struct pagevec *pvec = &get_cpu_var(lru_add_pvecs)[lru];
>
>         page_cache_get(page);
>         if (!pagevec_add(pvec, page))
>                 ____pagevec_lru_add(pvec, lru);
>         put_cpu_var(lru_add_pvecs);
> }
> ============================================================
>
> The current typical scenario is:
> 1. preempt disable
> 2. assign lru_add_pvec
> 3. page_cache_get()
> 4. pvec->pages[pvec->nr++] = page;
> 5. preempt enable
>
> But this preempt disabling assumes drain_cpu_pagevecs() runs in process
> context; we need to convert it to IRQ disabling.
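
For reference, the preempt disable/enable in steps 1 and 5 come from the
get_cpu_var()/put_cpu_var() pair in the code above; roughly (a simplified
sketch, not the exact <linux/percpu.h> definitions):

============================================================
/*
 * Simplified: taking the per-CPU variable disables preemption,
 * releasing it re-enables preemption.
 */
#define get_cpu_var(var)  (*({ preempt_disable(); &__get_cpu_var(var); }))
#define put_cpu_var(var)  preempt_enable()
============================================================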

Nope, preempt_disable()/enable() can be performed in hard IRQ context.
I see nothing in __lru_cache_add() which would cause problems when run
from hard IRQ.

Apart from latency, of course. Doing a full smp_call_function() in
lru_add_drain_all() might get expensive if it's ever called with any
great frequency.
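
For context, lru_add_drain_all() at the time was essentially the following
(paraphrased from mm/swap.c of that era, so treat the exact form as
approximate): it schedules a drain work item on every online CPU and waits
for all of them, whether or not a given CPU has anything queued.

============================================================
/* Runs on each CPU via the workqueue; drains that CPU's pagevecs. */
static void lru_add_drain_per_cpu(struct work_struct *dummy)
{
        lru_add_drain();
}

/* Schedule and wait for the drain work on every online CPU. */
int lru_add_drain_all(void)
{
        return schedule_on_each_cpu(lru_add_drain_per_cpu);
}
============================================================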

A smart implementation might take a peek at the other CPUs' queues and omit
the cross-CPU call when a queue is empty, for example.
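
A rough sketch of that idea (illustrative only: cpu_needs_lru_drain() is a
made-up helper, and the actual scheduling of and waiting for the per-CPU
drain work is left out):

============================================================
/*
 * Sketch only: peek at another CPU's per-CPU pagevecs (lru_add_pvecs
 * and lru_rotate_pvecs in mm/swap.c) and skip the cross-CPU drain when
 * they are all empty.
 */
static bool cpu_needs_lru_drain(int cpu)
{
        struct pagevec *pvecs = per_cpu(lru_add_pvecs, cpu);
        int lru;

        for (lru = 0; lru < NR_LRU_LISTS; lru++)
                if (pagevec_count(&pvecs[lru]))
                        return true;

        return pagevec_count(&per_cpu(lru_rotate_pvecs, cpu)) != 0;
}

int lru_add_drain_all(void)
{
        cpumask_var_t mask;
        int cpu;

        if (!alloc_cpumask_var(&mask, GFP_KERNEL))
                return -ENOMEM;
        cpumask_clear(mask);

        for_each_online_cpu(cpu)
                if (cpu_needs_lru_drain(cpu))
                        cpumask_set_cpu(cpu, mask);

        /*
         * Schedule the existing per-CPU drain work only on the CPUs in
         * 'mask' and wait for it to finish; that part is omitted here.
         */

        free_cpumask_var(mask);
        return 0;
}
============================================================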


