Subject: Re: [patch] voluntary-preempt-2.6.8.1-P0

On Sun, 2004-08-15 at 20:25, Florian Schmidt wrote:
> On Sun, 15 Aug 2004 13:56:49 +0200
> Ingo Molnar <mingo@elte.hu> wrote:
>
> >
> > i've uploaded the -P0 patch:
> >
> > http://redhat.com/~mingo/voluntary-preempt/voluntary-preempt-2.6.8.1-P0
>
> I haven't tried this patch yet, but I have a question regarding the
> mlockall issue:
>
> Jackd also uses IPC mechanisms for remote procedure calls [I think,
> please correct me] and makes heavy use of shared memory. Might
> mlock(all) have an influence on this? Is jackd maybe producing xruns
> because some IPC stuff blocks when mlockall is used?

I reworked the jackd latency histogram to report the time elapsed
between entering and exiting the poll() of the PCM fd's in
alsa-driver.c. This is a different metric from the previous max_usecs
histogram: that one, I believe, measured latency indirectly, while this
one measures it directly.

http://krustophenia.net/testresults.php?dataset=2.6.8.1-P0
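
For reference, the measurement is just timestamping around the poll()
call. A minimal sketch of the idea (not the actual jackd patch; the
bucket size and wrapper name are made up):

#include <poll.h>
#include <time.h>

/* Hypothetical histogram: 20 usec buckets, plus an overflow bucket. */
#define BUCKET_USECS 20
#define NBUCKETS     50
static unsigned long histogram[NBUCKETS + 1];

static long usecs_between(const struct timespec *a,
                          const struct timespec *b)
{
        return (b->tv_sec - a->tv_sec) * 1000000L +
               (b->tv_nsec - a->tv_nsec) / 1000L;
}

/* Wrap the poll() on the PCM fd's and bucket the time spent inside. */
static int timed_poll(struct pollfd *pfds, nfds_t nfds, int timeout)
{
        struct timespec t0, t1;
        long idx;
        int ret;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        ret = poll(pfds, nfds, timeout);
        clock_gettime(CLOCK_MONOTONIC, &t1);

        idx = usecs_between(&t0, &t1) / BUCKET_USECS;
        if (idx < 0)
                idx = 0;
        if (idx > NBUCKETS)
                idx = NBUCKETS;
        histogram[idx]++;
        return ret;
}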

The peaks on this graph should correspond directly to the lengths of
the non-preemptible critical sections reported by Ingo's latency
tracer. I think the large peak around 580-600 usecs is caused by the
extract_entropy issue (which can be hit by regular processes and by
ksoftirqd), and the large peak around 80-100 usecs by the XFree86
unmap_vmas issue, as the times match and these are by far the most
commonly reported latencies in latency_trace.

There are a number of samples above 700 usecs. I am working with a
period time of 666 usecs, and since there are 2 periods per buffer, we
would have to hit two > 666 usec latencies in a row to get an xrun. It
appears that there are many individual latencies above 666 usecs,
certainly more than there are xruns. So maybe the mlockall issue is not
a result of triggering a single large latency, but of increasing the
frequency of these higher latencies so that we are more likely to hit
2 in a row.
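
One way to check this against the data would be to count back-to-back
over-period wakeups in the latency log. A throwaway sketch (assuming
one latency value in usecs per line on stdin, which is not the actual
log format):

#include <stdio.h>

int main(void)
{
        const long period_usecs = 666;  /* period time from above */
        long prev = 0, cur;
        unsigned long over = 0, pairs = 0;

        while (scanf("%ld", &cur) == 1) {
                if (cur > period_usecs) {
                        over++;
                        if (prev > period_usecs)
                                pairs++;  /* 2 in a row -> predicted xrun */
                }
                prev = cur;
        }
        printf("%lu latencies over %ld usecs, %lu back-to-back "
               "(predicted xruns)\n", over, period_usecs, pairs);
        return 0;
}

If the back-to-back count tracks the xruns jackd reports, that would
support the "2 in a row" theory.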

IIRC ksoftirqd will defer more work under load, and ksoftirqd is one of
the more common offenders to hit the extract_entropy latency. Maybe
mlockall causes more softirqs to be deferred, thus increasing the
chance that we will have to do more than 666 usecs' worth of work on 2
successive wakeups.
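
To put a rough number on that: if each wakeup independently exceeds
666 usecs with probability p (naive, since consecutive wakeups are
probably correlated under load), an xrun needs two in a row, so it
happens with probability about p^2 per pair of wakeups. Doubling p
would then roughly quadruple the xrun rate, so even a modest shift in
the latency distribution under mlockall could show up as many more
xruns.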

Lee
