    Subject: Re: Can context switches be faster?

    Andrew James Wade wrote:
    > On Thursday 12 October 2006 14:29, John Richard Moser wrote:
    >> How does a page table switch work? As I understand there are PTE chains
    >> which are pretty much linked lists the MMU follows; I can't imagine this
    >> being a harder problem than replacing the head.
    > Generally, the virtual memory mappings are stored as high-fanout trees
    > rather than linked lists. (ia64 supports a hash table based scheme,
    > but I don't know if Linux uses it.) But the bulk of the mapping
    > lookups will actually occur in a cache of the virtual memory mappings
    > called the translation lookaside buffer (TLB). It is from the TLB and
    > not the memory mapping trees that some of the performance problems
    > with address space switches originate.
    > The kernel can tolerate some small inconsistencies between the TLB
    > and the mapping tree (it can fix them in the page fault handler). But
    > for the most part the TLB must be kept consistent with the current
    > address space mappings for correct operation. Unfortunately, on some
    > architectures the only practical way of doing this is to flush the TLB
    > on address space switches. I do not know if the flush itself takes any
    > appreciable time, but each of the subsequent TLB cache misses will
    > necessitate walking the current mapping tree. Whether done by the MMU
    > or by the kernel (implementations vary), these walks in the aggregate
    > can be a performance issue.

    True. You can trick the MMU into faulting into the kernel (PaX does
    this on x86 to provide non-executable pages -- real per-page NX, not
    the split-the-VM-in-half approach), but as I understand it that is
    orders of magnitude slower, and the petty gains you could get over
    letting the hardware MMU do the walk are not going to outweigh it.
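
    To put a number on the walk itself: a TLB miss on two-level x86
    (non-PAE) costs two dependent memory loads. A minimal sketch of what
    the MMU (or a software miss handler) does per miss -- phys_to_virt()
    here is just a stand-in for however physical memory gets reached,
    not the kernel's real helper:

        #include <stdint.h>

        #define PAGE_MASK    0xfffff000u  /* low 12 bits are the page offset */
        #define PTE_PRESENT  0x1u

        /* Assumed helper: map a physical address so we can read it. */
        extern uint32_t *phys_to_virt(uint32_t paddr);

        /* pd_base is the page directory's physical address (what CR3 holds). */
        static uint32_t walk(uint32_t pd_base, uint32_t vaddr)
        {
            /* Load 1: top 10 bits of vaddr index the page directory. */
            uint32_t pde = phys_to_virt(pd_base)[vaddr >> 22];
            if (!(pde & PTE_PRESENT))
                return 0;                       /* would page-fault */

            /* Load 2: middle 10 bits index the page table. */
            uint32_t pte = phys_to_virt(pde & PAGE_MASK)[(vaddr >> 12) & 0x3ff];
            if (!(pte & PTE_PRESENT))
                return 0;                       /* would page-fault */

            /* Physical frame plus the untranslated page offset. */
            return (pte & PAGE_MASK) | (vaddr & ~PAGE_MASK);
        }

    Two loads doesn't sound like much until a flushed TLB makes you
    repeat them for every page you touch.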

    > On some architectures the L1 cache can also require attention from the
    > kernel on address space switches for correct operation. Even when the
    > L1 cache doesn't need flushing a change in address space will generally
    > be accompanied by a change of working set, leading to a period of high
    > cache misses for the L1/L2 caches.

    Yeah, the only exception being when L1 and L2 are both physically
    addressed and things like libc's .text are shared, leading to shared
    working sets in the L1 instruction cache and in L2.

    > Microbenchmarks can miss the cache miss costs associated with context
    > switches. But I believe the costs of cache thrashing and flushing are

    cachegrind is probably guilty of this, but I haven't examined it.

    > the reason that the time-sharing granularity is so coarse in Linux,
    > rather than the time it takes the kernel to actually perform a context
    > switch. (The default time-slice is 100 ms.) Still, the cache miss costs

    I thought the minimum was 5 ms... I don't know what the default is. Heh.
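
    (For the record, 2.6's kernel/sched.c defines both -- quoting from
    memory, so check your tree:

        /* kernel/sched.c, circa 2.6.18: timeslices in jiffies */
        #define MIN_TIMESLICE   max(5 * HZ / 1000, 1)
        #define DEF_TIMESLICE   (100 * HZ / 1000)

    so 5 ms minimum and 100 ms default, as Andrew says.)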

    > are workload-dependent, and the actual time the kernel takes to context
    > switch can be important as well.
    > Andrew Wade
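
    For concreteness, here's the sort of microbenchmark in question -- a
    minimal sketch in the style of lmbench's lat_ctx: two processes
    ping-pong a byte over a pair of pipes, forcing a context switch per
    hop. It sees the raw switch cost, but with a working set this tiny
    it will never see the cache thrashing described above:

        #include <stdio.h>
        #include <unistd.h>
        #include <sys/time.h>

        #define ITERS 100000

        int main(void)
        {
            int ping[2], pong[2];
            char byte = 0;
            struct timeval t0, t1;
            int i;

            if (pipe(ping) || pipe(pong)) {
                perror("pipe");
                return 1;
            }

            if (fork() == 0) {              /* child: echo every byte back */
                while (read(ping[0], &byte, 1) == 1)
                    write(pong[1], &byte, 1);
                _exit(0);
            }

            gettimeofday(&t0, NULL);
            for (i = 0; i < ITERS; i++) {   /* parent: ping, wait for pong */
                write(ping[1], &byte, 1);
                read(pong[0], &byte, 1);
            }
            gettimeofday(&t1, NULL);

            /* Each round trip is at least two context switches. */
            printf("%.2f us per round trip\n",
                   ((t1.tv_sec - t0.tv_sec) * 1e6 +
                    (t1.tv_usec - t0.tv_usec)) / ITERS);
            return 0;
        }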

