    Subject: Re: [patch] sched: auto-tune migration costs [was: Re: Industry db benchmark result on recent 2.6 kernels]

    * Paul Jackson <pj@engr.sgi.com> wrote:

    > Just so as no one else wastes time repeating the little bit I've done so
    > far, and so I don't waste time figuring out what is already known,
    > here's what I have so far, trying out Ingo's "sched: auto-tune
    > migration costs" on ia64 SN2:
    >
    > To get it to compile against 2.6.12-rc1-mm4, I did thus:
    >
    > 1. Manually edited "include/asm-x86_64/topology.h" to
    > remove .cache_hot_time (patch failed due to conflicts
    > with nearby changes to add some *_idx terms).

    (next time you can ignore that hunk - we override the cache_hot_time
    value anyway.)

    > 2. Moved the 394 line block of new code in kernel/sched.c
    > to _before_ the large #ifdef ARCH_HAS_SCHED_DOMAIN,
    > #else, #endif block. The ia64 arch (only) defines
    > ARCH_HAS_SCHED_DOMAIN, so was being denied use of Ingo's
    > code when it was buried in the '#else-#endif' side of
    > this large conditional block.

    yeah, indeed. The place you moved it to is the right spot, as it's under
    CONFIG_SMP. I've done this in my tree too.
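
    (for reference, a rough sketch of how kernel/sched.c is laid out after
    that move - the surrounding details are from memory, but the point is
    that the measurement code now sits under CONFIG_SMP and _before_ the
    ARCH_HAS_SCHED_DOMAIN split, so ia64 sees it as well:)

        #ifdef CONFIG_SMP

        /* auto-tune migration cost measurement code goes here */

        #ifdef ARCH_HAS_SCHED_DOMAIN
        /* ia64 supplies its own domain setup */
        extern void __devinit arch_init_sched_domains(void);
        #else
        /* generic domain setup */
        #endif

        #endif /* CONFIG_SMP */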

    > 3. Add "#include <linux/vmalloc.h>" to kernel/sched.c

    ok, did this in my tree too.

    > 4. Don't print cpu_khz in the cost matrix header, as cpu_khz
    > is only in a few arch's (x86_64, ppc, i386, arm).

    ok.
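
    (the attached patch keeps that printout but guards the cpu_khz use, so
    everything outside CONFIG_X86 just prints -1 for the MHz value - see
    calibrate_migration_costs() below:)

        printk("migration cost matrix (max_cache_size: %d, cpu: %ld MHz):\n",
                max_cache_size,
        #ifdef CONFIG_X86
                cpu_khz/1000
        #else
                -1
        #endif
        );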

    > Brought up 8 CPUs
    > softlockup thread 7 started up.
    > Total of 8 processors activated (15548.60 BogoMIPS).
    > ---------------------
    > migration cost matrix (max_cache_size: 33554432):
    > ---------------------
    > [00] [01] [02] [03] [04] [05] [06] [07]
    > [00]: -
    > ============================= end =============================
    >
    > Then it hung for 5 or 10 minutes, [...]

    the default on ia64 (32MB) was way too large and caused the search to
    start from 64MB. That can take a _long_ time.
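
    (rough numbers, using the constants from the patch below: the buffer
    size shrinks by 5% per step, so walking from 64MB down to the 64K
    floor takes about ln(1024)/ln(20/19) =~ 135 sizes, and each size does
    8 measure_one() calls [2 iterations x 2 directions, for both the
    cross-CPU and the same-CPU case], each of which reads and dirties
    buffers of up to 2*size bytes.)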

    i've attached a new patch with your changes included, and a couple of
    new things added:

    - removed the 32MB max_cache_size hack from ia64 - it should now fall
    back to the default 5MB and do a search from 10MB downwards. This
    should speed up the search.

    - added a migration_debug boot option - use it to get verbose printouts
    about the search for the migration cost.

    - added a max_cache_size=<bytes> boot option for debugging (see the
    example boot line below).

    - a few cleanups
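
    for example, to cap the search at 4MB and get the verbose measurement
    output, one would boot with something like this (the 4MB value is just
    an illustration):

        max_cache_size=4194304 migration_debug=1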

    (in the next iteration of the patch i'll try a new method to further
    speed up the search - but didn't want to change it too much in this
    iteration.)

    > [<a0000001000db0d0>] schedule_work+0x30/0x60
    > sp=e00001b03a8d7910 bsp=e00001b03a8d14c8
    > [<a000000100490230>] blank_screen_t+0x30/0x60
    > sp=e00001b03a8d7910 bsp=e00001b03a8d14b8
    > [<a0000001000c8130>] run_timer_softirq+0x2d0/0x4a0
    > sp=e00001b03a8d7910 bsp=e00001b03a8d1410

    i think the crash is an unrelated bug: it seems the screen blanking
    timer hit and crashed the box - i suspect it didn't expect the bootup
    to take that long.

    Ingo
    --- linux/kernel/sched.c.orig
    +++ linux/kernel/sched.c
    @@ -47,6 +47,7 @@
    #include <linux/syscalls.h>
    #include <linux/times.h>
    #include <linux/acct.h>
    +#include <linux/vmalloc.h>
    #include <asm/tlb.h>

    #include <asm/unistd.h>
    @@ -4639,6 +4640,438 @@ void __devinit init_sched_build_groups(s
    last->next = first;
    }

    +/*
    + * Self-tuning task migration cost measurement between source and target CPUs.
    + *
    + * This is done by measuring the cost of manipulating buffers of varying
    + * sizes. For a given buffer-size here are the steps that are taken:
    + *
    + * 1) the source CPU reads a big buffer to flush caches
    + * 2) the source CPU reads+dirties a shared buffer
    + * 3) the target CPU reads+dirties the same shared buffer
    + * 4) the target CPU reads a big buffer to flush caches
    + *
    + * We measure how long steps #2 and #3 take (steps #1 and #4 are not
    + * measured), in the following 4 scenarios:
    + *
    + * - source: CPU1, target: CPU2 | cost1
    + * - source: CPU2, target: CPU1 | cost2
    + * - source: CPU1, target: CPU1 | cost3
    + * - source: CPU2, target: CPU2 | cost4
    + *
    + * We then calculate the cost3+cost4-cost1-cost2 difference - this is
    + * the cost of migration.
    + *
    + * We then start off from a large buffer-size and iterate down to smaller
    + * buffer sizes, in 5% steps - measuring each buffer-size separately, and
    + * do a maximum search for the cost. The maximum cost for a migration
    + * occurs when the working set is just below the effective cache size.
    + */
    +
    +
    +/*
    + * Flush the cache by reading a big buffer. (We want all writeback
    + * activities to subside. Works only if cache size is larger than
    + * 2*size, but that is good enough as the biggest migration effect
    + * is around cachesize size.)
    + */
    +__init static void read_cache(void *__cache, unsigned long __size)
    +{
    +        unsigned long size = __size/sizeof(long);
    +        unsigned long *cache = __cache;
    +        volatile unsigned long data;
    +        int i;
    +
    +        for (i = 0; i < 2*size; i += 4)
    +                data = cache[i];
    +}
    +
    +
    +/*
    + * Dirty a big buffer in a hard-to-predict (for the L2 cache) way. This
    + * is the operation that is timed, so we try to generate unpredictable
    + * cachemisses that still end up filling the L2 cache:
    + */
    +__init static void touch_cache(void *__cache, unsigned long __size)
    +{
    +        unsigned long size = __size/sizeof(long), chunk1 = size/3,
    +                      chunk2 = 2*size/3;
    +        unsigned long *cache = __cache;
    +        int i;
    +
    +        for (i = 0; i < size/6; i += 4) {
    +                switch (i % 6) {
    +                        case 0: cache[i]++;
    +                        case 1: cache[size-1-i]++;
    +                        case 2: cache[chunk1-i]++;
    +                        case 3: cache[chunk1+i]++;
    +                        case 4: cache[chunk2-i]++;
    +                        case 5: cache[chunk2+i]++;
    +                }
    +        }
    +}
    +
    +struct flush_data {
    +        unsigned long source, target;
    +        void (*fn)(void *, unsigned long);
    +        void *cache;
    +        unsigned long size;
    +        unsigned long long delta;
    +};
    +
    +/*
    + * Dirty L2 on the source CPU:
    + */
    +__init static void source_handler(void *__data)
    +{
    +        struct flush_data *data = __data;
    +
    +        if (smp_processor_id() != data->source)
    +                return;
    +
    +        /*
    +         * Make sure the cache is quiet on this CPU,
    +         * before starting measurement:
    +         */
    +        read_cache(data->cache, data->size);
    +
    +        data->delta = sched_clock();
    +        touch_cache(data->cache, data->size);
    +}
    +
    +/*
    + * Dirty the L2 cache on this CPU and then access the shared
    + * buffer. (which represents the working set of the migrated task.)
    + */
    +__init static void target_handler(void *__data)
    +{
    +        struct flush_data *data = __data;
    +
    +        if (smp_processor_id() != data->target)
    +                return;
    +
    +        touch_cache(data->cache, data->size);
    +        data->delta = sched_clock() - data->delta;
    +        /*
    +         * Make sure the cache is quiet, so that it does not interfere
    +         * with the next run on this CPU:
    +         */
    +        read_cache(data->cache, data->size);
    +}
    +
    +/*
    + * Measure the cache-cost of one task migration. Returns in units of nsec.
    + */
    +__init static unsigned long long measure_one(void *cache, unsigned long size,
    +                                             int source, int target)
    +{
    +        struct flush_data data;
    +
    +        data.source = source;
    +        data.target = target;
    +        data.size = size;
    +        data.cache = cache;
    +
    +        if (on_each_cpu(source_handler, &data, 1, 1) != 0) {
    +                printk("measure_one: timed out waiting for other CPUs\n");
    +                return -1;
    +        }
    +        if (on_each_cpu(target_handler, &data, 1, 1) != 0) {
    +                printk("measure_one: timed out waiting for other CPUs\n");
    +                return -1;
    +        }
    +
    +        return data.delta;
    +}
    +
    +/*
    + * Maximum cache-size that the scheduler should try to measure.
    + * Architectures with larger caches should tune this up during
    + * bootup. Gets used in the domain-setup code (i.e. during SMP
    + * bootup).
    + */
    +__initdata unsigned int max_cache_size;
    +
    +static int __init setup_max_cache_size(char *str)
    +{
    +        get_option(&str, &max_cache_size);
    +        return 1;
    +}
    +
    +__setup("max_cache_size=", setup_max_cache_size);
    +
    +/*
    + * The migration cost is a function of 'domain distance'. Domain
    + * distance is the number of steps a CPU has to iterate down its
    + * domain tree to share a domain with the other CPU. The farther
    + * two CPUs are from each other, the larger the distance gets.
    + *
    + * Note that we use the distance only to cache measurement results,
    + * the distance value is not used numerically otherwise. When two
    + * CPUs have the same distance it is assumed that the migration
    + * cost is the same. (this is a simplification but quite practical)
    + */
    +#define MAX_DOMAIN_DISTANCE 32
    +
    +static __initdata unsigned long long migration_cost[MAX_DOMAIN_DISTANCE];
    +
    +/*
    + * Allow override of migration cost - in units of microseconds.
    + * E.g. migration_cost=1000,2000,3000 will set up a level-1 cost
    + * of 1 msec, a level-2 cost of 2 msecs and a level-3 cost of 3 msecs:
    + */
    +static int __init migration_cost_setup(char *str)
    +{
    +        int ints[MAX_DOMAIN_DISTANCE+1], i;
    +
    +        str = get_options(str, ARRAY_SIZE(ints), ints);
    +
    +        printk("#ints: %d\n", ints[0]);
    +        for (i = 1; i <= ints[0]; i++) {
    +                migration_cost[i-1] = (unsigned long long)ints[i]*1000;
    +                printk("migration_cost[%d]: %Ld\n", i-1, migration_cost[i-1]);
    +        }
    +        return 1;
    +}
    +
    +__setup ("migration_cost=", migration_cost_setup);
    +
    +/*
    + * Global multiplier (divisor) for migration-cutoff values,
    + * in percent. E.g. use a value of 150 to get 1.5 times
    + * longer cache-hot cutoff times.
    + *
    + * (We scale it from 100 to 128 to make long long handling easier.)
    + */
    +
    +#define MIGRATION_FACTOR_SCALE 128
    +
    +static __initdata unsigned int migration_factor = MIGRATION_FACTOR_SCALE;
    +
    +static int __init setup_migration_factor(char *str)
    +{
    +        get_option(&str, &migration_factor);
    +        migration_factor = migration_factor * MIGRATION_FACTOR_SCALE / 100;
    +        return 1;
    +}
    +
    +__setup("migration_factor=", setup_migration_factor);
    +
    +static __initdata unsigned int migration_debug;
    +
    +static int __init setup_migration_debug(char *str)
    +{
    +        get_option(&str, &migration_debug);
    +        return 1;
    +}
    +
    +__setup("migration_debug=", setup_migration_debug);
    +
    +/*
    + * Estimated distance of two CPUs, measured via the number of domains
    + * we have to pass for the two CPUs to be in the same span:
    + */
    +__init static unsigned long cpu_distance(int cpu1, int cpu2)
    +{
    +        unsigned long distance = 0;
    +        struct sched_domain *sd;
    +
    +        for_each_domain(cpu1, sd) {
    +                WARN_ON(!cpu_isset(cpu1, sd->span));
    +                if (cpu_isset(cpu2, sd->span))
    +                        return distance;
    +                distance++;
    +        }
    +        if (distance >= MAX_DOMAIN_DISTANCE) {
    +                WARN_ON(1);
    +                distance = MAX_DOMAIN_DISTANCE-1;
    +        }
    +
    +        return distance;
    +}
    +
    +/*
    + * Measure a series of task migrations and return the average
    + * result. Since this code runs early during bootup the system
    + * is 'undisturbed' and the average latency makes sense.
    + *
    + * The algorithm in essence auto-detects the relevant cache-size,
    + * so it will properly detect different cachesizes for different
    + * cache-hierarchies, depending on how the CPUs are connected.
    + *
    + * Architectures can prime the upper limit of the search range via
    + * max_cache_size, otherwise the search range defaults to 10MB...64K.
    + */
    +#define SEARCH_SCOPE 2
    +#define MIN_CACHE_SIZE (64*1024U)
    +#define DEFAULT_CACHE_SIZE (5*1024*1024U)
    +#define ITERATIONS 2
    +
    +__init static unsigned long long measure_cacheflush_time(int cpu1, int cpu2)
    +{
    +        unsigned long long cost = 0, cost1 = 0, cost2 = 0;
    +        unsigned int size, cache_size = 0;
    +        void *cache;
    +        int i;
    +
    +        /*
    +         * Search from max_cache_size*2 down to 64K - the real relevant
    +         * cachesize has to lie somewhere in between.
    +         */
    +        if (max_cache_size)
    +                size = max(max_cache_size * SEARCH_SCOPE, MIN_CACHE_SIZE);
    +        else
    +                /*
    +                 * Since we have no estimate of the relevant
    +                 * search range, start from the default:
    +                 */
    +                size = DEFAULT_CACHE_SIZE * SEARCH_SCOPE;
    +
    +        if (!cpu_online(cpu1) || !cpu_online(cpu2)) {
    +                printk("cpu %d and %d not both online!\n", cpu1, cpu2);
    +                return 0;
    +        }
    +        /*
    +         * We allocate 2*size so that read_cache() can access a
    +         * larger buffer:
    +         */
    +        cache = vmalloc(2*size);
    +        if (!cache) {
    +                printk("could not vmalloc %d bytes for cache!\n", 2*size);
    +                return 1000000; /* return 1 msec on very small boxen */
    +        }
    +        memset(cache, 0, 2*size);
    +
    +        while (size >= MIN_CACHE_SIZE) {
    +                /*
    +                 * Measure the migration cost of 'size' bytes, averaged
    +                 * over 2*ITERATIONS runs:
    +                 *
    +                 * (We perturb the cache size by a small per-iteration
    +                 *  value (i*1024 bytes) to compensate for size/alignment
    +                 *  related artifacts. We also subtract the cost of the
    +                 *  operation done on the same CPU.)
    +                 */
    +                cost1 = 0;
    +                for (i = 0; i < ITERATIONS; i++) {
    +                        cost1 += measure_one(cache, size - i*1024, cpu1, cpu2);
    +                        cost1 += measure_one(cache, size - i*1024, cpu2, cpu1);
    +                }
    +
    +                cost2 = 0;
    +                for (i = 0; i < ITERATIONS; i++) {
    +                        cost2 += measure_one(cache, size - i*1024, cpu1, cpu1);
    +                        cost2 += measure_one(cache, size - i*1024, cpu2, cpu2);
    +                }
    +                if (cost1 > cost2) {
    +                        cost = max(cost, cost1 - cost2);
    +                        cache_size = size;
    +                }
    +                if (migration_debug)
    +                        printk("-> [%d][%d][%7d] %3ld.%ld (%ld): (%8Ld %8Ld %8Ld)\n",
    +                                cpu1, cpu2, size,
    +                                (long)cost / 1000000,
    +                                ((long)cost / 100000) % 10,
    +                                cpu_distance(cpu1, cpu2),
    +                                cost1, cost2, cost1-cost2);
    +                /*
    +                 * Iterate down the cachesize (in a non-power-of-2
    +                 * way to avoid artifacts) in 5% decrements:
    +                 */
    +                size = size * 19 / 20;
    +        }
    +        /*
    +         * Get the per-iteration migration cost:
    +         */
    +        do_div(cost, 2*ITERATIONS);
    +
    +        if (migration_debug)
    +                printk("[%d][%d] cache size found: %d, cost: %Ld\n",
    +                        cpu1, cpu2, cache_size, cost);
    +
    +        vfree(cache);
    +
    +        /*
    +         * A task is considered 'cache cold' if at least 2 times
    +         * the worst-case cost of migration has passed.
    +         *
    +         * (this limit is only honored if the load-balancing
    +         *  situation is 'nice' - if there is a large imbalance we
    +         *  ignore it for the sake of CPU utilization and
    +         *  processing fairness.)
    +         */
    +        return 2 * cost * migration_factor / MIGRATION_FACTOR_SCALE;
    +}
    +
    +void __devinit calibrate_migration_costs(void)
    +{
    +        int cpu1 = -1, cpu2 = -1, cpu;
    +        struct sched_domain *sd;
    +        unsigned long distance, max_distance = 0;
    +        unsigned long long cost;
    +
    +        printk("---------------------\n");
    +        printk("migration cost matrix (max_cache_size: %d, cpu: %ld MHz):\n",
    +                max_cache_size,
    +#ifdef CONFIG_X86
    +                cpu_khz/1000
    +#else
    +                -1
    +#endif
    +        );
    +        printk("---------------------\n");
    +        printk(" ");
    +        for_each_online_cpu(cpu1)
    +                printk(" [%02d]", cpu1);
    +        printk("\n");
    +        /*
    +         * First pass - calculate the cacheflush times:
    +         */
    +        for_each_online_cpu(cpu1) {
    +                printk("[%02d]: ", cpu1);
    +                for_each_online_cpu(cpu2) {
    +                        if (cpu1 == cpu2) {
    +                                printk(" - ");
    +                                continue;
    +                        }
    +                        distance = cpu_distance(cpu1, cpu2);
    +                        max_distance = max(max_distance, distance);
    +                        /*
    +                         * Do we have the result cached already?
    +                         */
    +                        if (migration_cost[distance])
    +                                cost = migration_cost[distance];
    +                        else {
    +                                cost = measure_cacheflush_time(cpu1, cpu2);
    +                                migration_cost[distance] = cost;
    +                        }
    +                        printk(" %2ld.%ld(%ld)", (long)cost / 1000000,
    +                                ((long)cost / 100000) % 10, distance);
    +                }
    +                printk("\n");
    +        }
    +        printk("---------------------\n");
    +        printk("cacheflush times [%ld]:", max_distance+1);
    +        for (distance = 0; distance <= max_distance; distance++) {
    +                cost = migration_cost[distance];
    +                printk(" %ld.%ld (%Ld)", (long)cost / 1000000,
    +                        ((long)cost / 100000) % 10, cost);
    +        }
    +        printk("\n");
    +        printk("---------------------\n");
    +        /*
    +         * Second pass - update the sched domain hierarchy with
    +         * the new cache-hot-time estimations:
    +         */
    +        for_each_online_cpu(cpu) {
    +                distance = 0;
    +                for_each_domain(cpu, sd) {
    +                        sd->cache_hot_time = migration_cost[distance];
    +                        distance++;
    +                }
    +        }
    +}
    +

    #ifdef ARCH_HAS_SCHED_DOMAIN
    extern void __devinit arch_init_sched_domains(void);
    @@ -4820,6 +5253,10 @@ static void __devinit arch_init_sched_do
    #endif
    cpu_attach_domain(sd, i);
    }
    + /*
    + * Tune cache-hot values:
    + */
    + calibrate_migration_costs();
    }

    #ifdef CONFIG_HOTPLUG_CPU
    --- linux/arch/ia64/kernel/domain.c.orig
    +++ linux/arch/ia64/kernel/domain.c
    @@ -358,6 +358,10 @@ next_sg:
    #endif
    cpu_attach_domain(sd, i);
    }
    + /*
    + * Tune cache-hot values:
    + */
    + calibrate_migration_costs();
    }

    void __devinit arch_destroy_sched_domains(void)
    --- linux/arch/i386/kernel/smpboot.c.orig
    +++ linux/arch/i386/kernel/smpboot.c
    @@ -873,6 +873,7 @@ static void smp_tune_scheduling (void)
    cachesize = 16; /* Pentiums, 2x8kB cache */
    bandwidth = 100;
    }
    + max_cache_size = cachesize * 1024;
    }
    }

    --- linux/include/asm-ia64/topology.h.orig
    +++ linux/include/asm-ia64/topology.h
    @@ -51,7 +51,6 @@ void build_cpu_to_node_map(void);
    .max_interval = 320, \
    .busy_factor = 320, \
    .imbalance_pct = 125, \
    - .cache_hot_time = (10*1000000), \
    .cache_nice_tries = 1, \
    .per_cpu_gain = 100, \
    .flags = SD_LOAD_BALANCE \
    @@ -73,7 +72,6 @@ void build_cpu_to_node_map(void);
    .max_interval = 320, \
    .busy_factor = 320, \
    .imbalance_pct = 125, \
    - .cache_hot_time = (10*1000000), \
    .cache_nice_tries = 1, \
    .per_cpu_gain = 100, \
    .flags = SD_LOAD_BALANCE \
    --- linux/include/linux/topology.h.orig
    +++ linux/include/linux/topology.h
    @@ -86,7 +86,6 @@
    .max_interval = 2, \
    .busy_factor = 8, \
    .imbalance_pct = 110, \
    - .cache_hot_time = 0, \
    .cache_nice_tries = 0, \
    .per_cpu_gain = 25, \
    .flags = SD_LOAD_BALANCE \
    @@ -112,7 +111,6 @@
    .max_interval = 4, \
    .busy_factor = 64, \
    .imbalance_pct = 125, \
    - .cache_hot_time = (5*1000000/2), \
    .cache_nice_tries = 1, \
    .per_cpu_gain = 100, \
    .flags = SD_LOAD_BALANCE \
    --- linux/include/linux/sched.h.orig
    +++ linux/include/linux/sched.h
    @@ -527,7 +527,17 @@ extern cpumask_t cpu_isolated_map;
    extern void init_sched_build_groups(struct sched_group groups[],
    cpumask_t span, int (*group_fn)(int cpu));
    extern void cpu_attach_domain(struct sched_domain *sd, int cpu);
    +
    #endif /* ARCH_HAS_SCHED_DOMAIN */
    +
    +/*
    + * Maximum cache size the migration-costs auto-tuning code will
    + * search from:
    + */
    +extern unsigned int max_cache_size;
    +
    +extern void calibrate_migration_costs(void);
    +
    #endif /* CONFIG_SMP */


    --- linux/include/asm-i386/topology.h.orig
    +++ linux/include/asm-i386/topology.h
    @@ -75,7 +75,6 @@ static inline cpumask_t pcibus_to_cpumas
    .max_interval = 32, \
    .busy_factor = 32, \
    .imbalance_pct = 125, \
    - .cache_hot_time = (10*1000000), \
    .cache_nice_tries = 1, \
    .per_cpu_gain = 100, \
    .flags = SD_LOAD_BALANCE \
    --- linux/include/asm-ppc64/topology.h.orig
    +++ linux/include/asm-ppc64/topology.h
    @@ -46,7 +46,6 @@ static inline int node_to_first_cpu(int
    .max_interval = 32, \
    .busy_factor = 32, \
    .imbalance_pct = 125, \
    - .cache_hot_time = (10*1000000), \
    .cache_nice_tries = 1, \
    .per_cpu_gain = 100, \
    .flags = SD_LOAD_BALANCE \
    --- linux/include/asm-x86_64/topology.h.orig
    +++ linux/include/asm-x86_64/topology.h
    @@ -48,7 +48,6 @@ static inline cpumask_t __pcibus_to_cpum
    .max_interval = 32, \
    .busy_factor = 32, \
    .imbalance_pct = 125, \
    - .cache_hot_time = (10*1000000), \
    .cache_nice_tries = 1, \
    .per_cpu_gain = 100, \
    .flags = SD_LOAD_BALANCE \
    --- linux/include/asm-mips/mach-ip27/topology.h.orig
    +++ linux/include/asm-mips/mach-ip27/topology.h
    @@ -24,7 +24,6 @@ extern unsigned char __node_distances[MA
    .max_interval = 32, \
    .busy_factor = 32, \
    .imbalance_pct = 125, \
    - .cache_hot_time = (10*1000), \
    .cache_nice_tries = 1, \
    .per_cpu_gain = 100, \
    .flags = SD_LOAD_BALANCE \
    \
     