    Date: 17 Feb 1999
    Subject: Re: Locking a process or thread onto a specific CPU
    Steve Linton wrote:
    > Is there any way to do this with the 2.2 SMP code? A colleague is
    > implementing his own language for research purposes and wants, in
    > effect, to schedule his own threads on the available CPUs. The obvious
    > way to do this seems to be to run one "real" thread locked onto each
    > CPU. I know that threads will typically stay on the same CPU they
    > were on anyway, but I can imagine an ill-timed intervention of a third
    > process (in the case of 2 CPUs) resulting in both his processes trying
    > to run on the same CPU.

    > If this is not possible at the moment, can anyone (a) comment on its
    > feasibility (b) advise us on where to look to implement it or
    > (c) comment on its desirability as a new feature.

    (c) I am also into implementing threads. Fast ones, but flexible. Real
    fast, like I really mean fast fast.

    Locking tasks (= cloned processes) to CPUs isn't quite ideal: if
    something else runs on a CPU for a while, the task locked to that CPU
    pauses, and so does the userspace thread running in that task at the
    time. Since you still have another CPU, it is rather silly that one
    thread out of a pool of many (or more than one, if you're optimising
    your user space scheduler) may stay blocked indefinitely.
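
    For concreteness, the pinning itself looks like this with the
    sched_setaffinity() call that much later kernels (2.5.8+) grew; 2.2
    has nothing equivalent, so treat it purely as a sketch of the pinning
    being discussed:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Pin the calling task to one CPU. Not available on 2.2 -- this
     * only illustrates the mechanism under discussion. */
    static void pin_to_cpu(int cpu)
    {
            cpu_set_t set;

            CPU_ZERO(&set);
            CPU_SET(cpu, &set);
            if (sched_setaffinity(0, sizeof(set), &set) < 0) {
                    perror("sched_setaffinity");
                    exit(1);
            }
    }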

    There's another problem. Sometimes one of your tasks blocks in the
    kernel. This happens however much effort you put into poll(),
    O_NONBLOCK etc. Some things will just block anyway. Disk I/O is
    especially unpredictable in this respect; paging to disk also falls
    into this category. Ideally, you'd like to keep using that CPU for
    some more of your threads. A solution often used (see nfsd) is to run
    more than one task per CPU. But that's non-optimal: the number of
    tasks is often more than the optimum (= 1 per CPU), and sometimes
    less than the optimum (= 1 + number of blocked tasks, per CPU).
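
    To make the poll()/O_NONBLOCK point concrete: neither helps for
    regular files, so a task doing disk I/O can always end up sleeping in
    the kernel. A minimal illustration (the filename is made up):

    #include <fcntl.h>
    #include <poll.h>
    #include <unistd.h>

    int main(void)
    {
            char buf[4096];
            /* O_NONBLOCK matters for pipes, sockets and ttys, not for
             * reads of a regular file, and poll() always reports
             * regular files as readable. */
            int fd = open("data", O_RDONLY | O_NONBLOCK);
            struct pollfd p = { fd, POLLIN, 0 };

            poll(&p, 1, -1);            /* returns at once: POLLIN set */
            read(fd, buf, sizeof(buf)); /* may still sleep on the disk */
            close(fd);
            return 0;
    }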

    IMO, the optimal mechanism is something which lets you schedule threads
    purely in userspace the way the kernel can do it for tasks. That is,
    have one task at a time running on each CPU at all times (as far as your
    program is concerned).
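
    As a sketch of that starting point: one cloned task per CPU, each
    running the user-space scheduler over the thread pool. The CPU count
    and run_user_scheduler() are made-up placeholders, and nothing here
    does the pinning or the blocking notification discussed below:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <stdlib.h>

    #define NCPUS      2            /* assumed machine: 2 CPUs */
    #define STACK_SIZE (64 * 1024)

    /* Hypothetical: the user-space scheduler loop that picks the next
     * user-level thread to run on this task. */
    extern int run_user_scheduler(void *arg);

    static void start_workers(void)
    {
            int cpu;

            for (cpu = 0; cpu < NCPUS; cpu++) {
                    char *stack = malloc(STACK_SIZE);

                    /* The tasks share VM, files etc., so user-level
                     * threads can migrate freely between them. */
                    clone(run_user_scheduler, stack + STACK_SIZE,
                          CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND
                          | SIGCHLD, (void *)(long)cpu);
            }
    }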

    As other programs run from time to time, they interfere with this.
    Also, your tasks block unpredictably in the kernel despite efforts
    with O_NONBLOCK et al., if you want them to have access to the full
    kernel.

    To work optimally in the presence of these effects, we need to know how
    many tasks to have running at any time.

    Unfortunately, I can't think of a simple solution to the first
    problem: keeping your pool of threads running and switching, without
    multiple tasks context switching on the same CPU, while other
    programs use big timeslices on one or more CPUs. Locking tasks to
    CPUs is clearly part of the solution, but it is not the whole
    solution. Perhaps this problem is not worth worrying about.

    The second thing, blocking in the kernel, is a problem for NUMBER
    CRUNCHING FARMS and the like. It is mainly due to tasks blocking for
    paging I/O, but also network I/O etc. Fortunately this is easy to
    fix. When one of your tasks blocks, you'd like to know about it, so
    you can create another task (or unpause one) to take over on that
    CPU. Let's assume the kernel always picks the right CPU. This isn't
    a few % tweak -- this is a whole, potentially idle CPU that we want
    to use while some slow thing blocks in the kernel. So we'd like a
    mechanism to tell our program to restart another task and schedule
    threads into it. No problem: a task flag to say "if I block, wake up
    process X", plus user space code to say "when I carry on, I'll stop
    myself if X is running, or tell process X to stop".
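
    None of that exists, so the sketch below is entirely hypothetical --
    the names are made up, and it only restates the flag/wakeup scheme
    just described, from the point of view of the task that blocked:

    /* Imaginary kernel interface: name a backup task to be woken when
     * the calling task blocks in the kernel:
     *     set_block_notify(backup_pid);
     */

    /* Hypothetical user-space helpers: */
    extern int  backup_is_running(void);
    extern void pause_self(void);
    extern void tell_backup_to_stop(void);

    /* Run when the blocked task carries on, back out of the kernel. */
    void on_resume(void)
    {
            if (backup_is_running())       /* X already took this CPU  */
                    pause_self();          /* park until needed again  */
            else
                    tell_backup_to_stop(); /* we won; X goes back idle */
    }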

    -- Jamie
