    Subject: Re: [RFC/RFT PATCH] sched: automated per tty task groups

    * Mike Galbraith <efault@gmx.de> wrote:

    > > On Tue, 2010-10-19 at 08:28 -0700, Linus Torvalds wrote:
    > > >
    > > > If people compare with a non-CGROUP_SCHED
    > > > kernel, will a desktop-optimized kernel suddenly have horrible pipe
    > > > latency due to much higher scheduling cost? Right now that whole
    > > > feature is hidden by EXPERIMENTAL, I don't know how much it hurts, and
    > > > I never timed it when I tried it out long ago..
    >
    > Q/D test of kernels with/without the patch, with the same .config, using pipe-test
    > (pure sched): ~590 KHz on my box with tty_sched active, vs. ~620 KHz with cgroups
    > inactive on the same .config without the patch. The last time I measured a
    > stripped-down config (not long ago, but not yesterday either), the max ctx rate was
    > ~690 KHz on this box.
    >
    > (note: very Q, very D numbers, no variance testing, ballpark)

    That's ~5% overhead in context switches (620 KHz -> 590 KHz is a drop of about 4.8%).
    Definitely not in the 'horrible' category.
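
    For anyone who wants to reproduce that kind of number, the measurement is essentially
    a pipe ping-pong: two tasks blocking on each other through a pair of pipes, so every
    round trip forces two context switches. A minimal userspace sketch along those lines
    (illustrative only, not Mike's actual pipe-test, and the iteration count is arbitrary;
    run it pinned to one CPU, e.g. via taskset -c 0, for a pure scheduling measurement):

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/time.h>
    #include <sys/wait.h>

    #define LOOPS 1000000	/* enough iterations to get a reasonably stable rate */

    int main(void)
    {
            int ping[2], pong[2];
            struct timeval t0, t1;
            double secs;
            char c = 0;
            pid_t pid;
            int i;

            if (pipe(ping) || pipe(pong)) {
                    perror("pipe");
                    return 1;
            }

            pid = fork();
            if (pid < 0) {
                    perror("fork");
                    return 1;
            }

            if (!pid) {
                    /* child: echo every byte straight back */
                    for (i = 0; i < LOOPS; i++) {
                            if (read(ping[0], &c, 1) != 1 || write(pong[1], &c, 1) != 1)
                                    break;
                    }
                    _exit(0);
            }

            gettimeofday(&t0, NULL);
            for (i = 0; i < LOOPS; i++) {
                    /* one round trip: parent -> child -> parent, i.e. two switches */
                    if (write(ping[1], &c, 1) != 1 || read(pong[0], &c, 1) != 1)
                            break;
            }
            gettimeofday(&t1, NULL);
            waitpid(pid, NULL, 0);

            secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
            printf("%.0f round trips/sec (~%.0f switches/sec)\n",
                   LOOPS / secs, 2 * LOOPS / secs);
            return 0;
    }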

    This would be a rather tempting item for 2.6.37 ... especially as it mainly reuses
    existing group scheduling functionality, in a clever way.

    Mind doing more of the tty->desktop renames/generalizations that Linus suggested, and
    resending the patch?

    I'd also suggest moving it out of EXPERIMENTAL - we don't really do that for core
    kernel features, as most distros enable CONFIG_EXPERIMENTAL anyway, so it's a rather
    meaningless distinction. Since the feature is default-n, people will get the old
    scheduler by default but can also opt in to this desktop-centric scheduling mode.

    I'd even argue to make it default-y, because this patch clearly cures a form of
    kbuild cancer.
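
    Concretely, the Kconfig entry could then look something like this - the option name,
    dependency and help text below are only illustrative, not taken from the patch:

    config SCHED_DESKTOP
            bool "Automatic per-tty task groups"
            depends on CGROUP_SCHED
            default y
            help
              Automatically create a task group per tty, and place each task into
              the group of its controlling tty, so that e.g. a make -j <N> kernel
              build in one terminal does not starve an interactive desktop session.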

    Thanks,

    Ingo

