    Subject: Re: [PATCH 32/52] sched: Track groups of shared tasks
    On 12/02/2012 01:43 PM, Ingo Molnar wrote:

    > This is not entirely correct as this task might have scheduled or
    > migrate ther - but statistically there will be correlation to the
    ^^^^ there?

    > tasks that we share memory with, and correlation is all we need.
    >
    > We map out the relation itself by filtering out the highest address
    > ask that is below our own task address, per working set scan
    ^^^ task?
    > iteration.

    > @@ -906,23 +945,122 @@ out_backoff:
    > }
    >
    > /*
    > + * Track our "memory buddies" the tasks we actively share memory with.
    > + *
    > + * Firstly we establish the identity of some other task that we are
    > + * sharing memory with by looking at rq[page::last_cpu].curr - i.e.
    > + * we check the task that is running on that CPU right now.
    > + *
    > + * This is not entirely correct as this task might have scheduled or
    > + * migrate ther - but statistically there will be correlation to the
    ^^^^ there

    > + * tasks that we share memory with, and correlation is all we need.
    > + *
    > + * We map out the relation itself by filtering out the highest address
    > + * ask that is below our own task address, per working set scan
    ^^^ task?

    If that word is "task", the comment makes sense. If it is
    something else, I'm back to square one on what the code does :)
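
    Assuming the word is indeed "task", my reading of the selection rule
    is roughly the sketch below. This is plain C written just for this
    mail; struct numa_task, shared_buddy and update_shared_buddy are all
    made-up names, not anything taken from the patch:

	struct numa_task {
		struct numa_task *shared_buddy;	/* best candidate this scan */
	};

	/*
	 * 'me' just took a NUMA fault; 'curr' is whatever task was running
	 * on the CPU that last touched the page.  Per working set scan,
	 * remember the candidate with the highest address that is still
	 * below our own task address.
	 */
	static void update_shared_buddy(struct numa_task *me,
					struct numa_task *curr)
	{
		if (curr >= me)		/* skip ourselves and higher-addressed tasks */
			return;

		if (!me->shared_buddy || curr > me->shared_buddy)
			me->shared_buddy = curr;
	}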


    > void task_numa_fault(int node, int last_cpu, int pages)
    > {
    > 	struct task_struct *p = current;
    > 	int priv = (task_cpu(p) == last_cpu);
    > +	int idx = 2*node + priv;
    >
    > 	if (unlikely(!p->numa_faults)) {
    > -		int size = sizeof(*p->numa_faults) * 2 * nr_node_ids;
    > +		int entries = 2*nr_node_ids;
    > +		int size = sizeof(*p->numa_faults) * entries;
    >
    > -		p->numa_faults = kzalloc(size, GFP_KERNEL);
    > +		p->numa_faults = kzalloc(2*size, GFP_KERNEL);

    So we multiply nr_node_ids by 2. Twice.

    That kind of magic deserves a comment explaining how
    and why. How about:

    /*
     * We track two arrays with private and shared faults
     * for each NUMA node. The p->numa_faults_curr array
     * is allocated at the same time as the p->numa_faults
     * array.
     */
    int size = sizeof(*p->numa_faults) * 4 * nr_node_ids;

    > 		if (!p->numa_faults)
    > 			return;
    > +		/*
    > +		 * For efficiency reasons we allocate ->numa_faults[]
    > +		 * and ->numa_faults_curr[] at once and split the
    > +		 * buffer we get. They are separate otherwise.
    > +		 */
    > +		p->numa_faults_curr = p->numa_faults + entries;
    > 	}
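
    And to make the double factor of two concrete, this is how I picture
    that one buffer being carved up. Again just a sketch for this mail;
    the element type and what exactly numa_faults_curr holds are my
    guesses, only entries, node and priv come from the quoted hunk:

	/*
	 * Presumed layout of the single kzalloc()ed buffer:
	 *
	 *   p->numa_faults[0 .. entries-1]		long-running stats
	 *   p->numa_faults_curr[0 .. entries-1]	stats for the current scan
	 *
	 * with idx = 2*node + priv picking one counter per node in either array.
	 */
	int entries = 2*nr_node_ids;
	int size = sizeof(*p->numa_faults) * entries;

	p->numa_faults      = kzalloc(2*size, GFP_KERNEL);
	p->numa_faults_curr = p->numa_faults + entries;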



