    From: Tejun Heo
    Date: 2012-03-05
    Subject: Re: [PATCHSET] mempool, percpu, blkcg: fix percpu stat allocation and remove stats_lock
    Hello, Vivek.

    On Wed, Feb 29, 2012 at 12:36:39PM -0500, Vivek Goyal wrote:
    > Index: tejun-misc/block/blk-cgroup.h
    > ===================================================================
    > --- tejun-misc.orig/block/blk-cgroup.h 2012-02-28 01:29:09.238256494 -0500
    > +++ tejun-misc/block/blk-cgroup.h 2012-02-28 01:29:12.000000000 -0500
    > @@ -180,6 +180,8 @@ struct blkio_group {
    >  	struct request_queue *q;
    >  	struct list_head q_node;
    >  	struct hlist_node blkcg_node;
    > +	/* List of blkg waiting for per cpu stats memory to be allocated */
    > +	struct list_head pending_alloc_node;

    Can we move this right on top of rcu_head? It's one of the coldest
    entries. Also, long field names tend to be a bit painful. How about
    just alloc_node?
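
    Something like the following, purely as a sketch (everything apart from
    the renamed alloc_node and its placement next to rcu_head is meant to
    stay as in your patch):

	struct blkio_group {
		struct request_queue *q;
		struct list_head q_node;
		struct hlist_node blkcg_node;
		/* ... other fields as before ... */
		/* blkgs waiting for per cpu stats allocation, cold */
		struct list_head alloc_node;
		struct rcu_head rcu_head;
	};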

    > +static void blkio_stat_alloc_fn(struct work_struct *work)
    > +{
    > +
    > +	void *stat_ptr = NULL;
    > +	struct blkio_group *blkg, *n;
    > +	int i;
    > +
    > +alloc_stats:
    > +	spin_lock_irq(&pending_alloc_list_lock);
    > +	if (list_empty(&pending_alloc_list)) {
    > +		/* Nothing to do */
    > +		spin_unlock_irq(&pending_alloc_list_lock);
    > +		return;
    > +	}
    > +	spin_unlock_irq(&pending_alloc_list_lock);
    > +
    > +	WARN_ON(stat_ptr != NULL);
    > +	stat_ptr = alloc_percpu(struct blkio_group_stats_cpu);

    There will only be one of these work items, and if it's queued on an
    nrt wq only one instance will be running at a time. Why not just
    create a static ps[NR_POLS] array and fill it here?
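
    e.g. something completely untested like the following (spelling
    NR_POLS as BLKIO_NR_POLICIES to match the rest of the code):

	static struct blkio_group_stats_cpu __percpu *ps[BLKIO_NR_POLICIES];
	int i;

	/*
	 * Refill only the slots which are still empty.  Leftovers from the
	 * previous round get reused.  Same open question as in your patch
	 * about whether the retry should be bounded.
	 */
	for (i = 0; i < BLKIO_NR_POLICIES; i++) {
		while (!ps[i])
			ps[i] = alloc_percpu(struct blkio_group_stats_cpu);
	}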

    > +	/* Retry. Should there be an upper limit on number of retries */
    > +	if (stat_ptr == NULL)
    > +		goto alloc_stats;
    > +
    > +	spin_lock_irq(&blkio_list_lock);
    > +	spin_lock(&pending_alloc_list_lock);
    > +
    > +	list_for_each_entry_safe(blkg, n, &pending_alloc_list,
    > +				 pending_alloc_node) {
    > +		for (i = 0; i < BLKIO_NR_POLICIES; i++) {
    > +			struct blkio_policy_type *pol = blkio_policy[i];
    > +			struct blkg_policy_data *pd;
    > +
    > +			if (!pol)
    > +				continue;
    > +
    > +			if (!blkg->pd[i])
    > +				continue;
    > +
    > +			pd = blkg->pd[i];
    > +			if (pd->stats_cpu)
    > +				continue;
    > +
    > +			pd->stats_cpu = stat_ptr;
    > +			stat_ptr = NULL;
    > +			break;

    and install everything here in one go (see the sketch below).

    > +		}
    > +
    > +		if (i == BLKIO_NR_POLICIES - 1) {
    > +			/* We are done with this group */
    > +			list_del_init(&blkg->pending_alloc_node);
    > +			continue;
    > +		} else
    > +			/* Go allocate more memory */
    > +			break;
    > +	}

    Remove it from the alloc list while holding the alloc lock, then unlock
    and either retry or exit. Don't worry about the stats_cpu areas left in
    ps[]; we're gonna be using them again later anyway.
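
    IOW, something like this for the rest of the work function (completely
    untested, using the alloc_node name from above and ignoring the
    blkio_list_lock / policy registration details):

alloc_stats:
	/* refill ps[] as in the earlier snippet */

	spin_lock_irq(&pending_alloc_list_lock);

	if (list_empty(&pending_alloc_list)) {
		spin_unlock_irq(&pending_alloc_list_lock);
		return;
	}

	blkg = list_first_entry(&pending_alloc_list, struct blkio_group,
				alloc_node);

	/* install everything this blkg needs in one go */
	for (i = 0; i < BLKIO_NR_POLICIES; i++) {
		struct blkg_policy_data *pd = blkg->pd[i];

		if (pd && !pd->stats_cpu)
			swap(pd->stats_cpu, ps[i]);
	}

	/* done with this group; whatever is still sitting in ps[] is reused */
	list_del_init(&blkg->alloc_node);

	spin_unlock_irq(&pending_alloc_list_lock);
	goto alloc_stats;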

    >  	/* insert */
    >  	spin_lock(&blkcg->lock);
    > -	swap(blkg, new_blkg);
    > +	spin_lock(&pending_alloc_list_lock);

    Do we need this nested inside blkcg->lock? What's wrong with doing it
    after releasing blkcg->lock?
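
    ie. roughly (just the relevant bit; the list_add placement and the rest
    of the insert path are from memory, so take with a grain of salt):

	spin_lock(&blkcg->lock);
	/* link the new blkg into the blkcg / q lists as before */
	spin_unlock(&blkcg->lock);

	/* queueing for deferred stat allocation doesn't need blkcg->lock */
	spin_lock(&pending_alloc_list_lock);
	list_add(&blkg->alloc_node, &pending_alloc_list);
	spin_unlock(&pending_alloc_list_lock);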

    > @@ -648,11 +701,16 @@ static void blkg_destroy(struct blkio_gr
    >  	lockdep_assert_held(q->queue_lock);
    >  	lockdep_assert_held(&blkcg->lock);
    > 
    > +	spin_lock(&pending_alloc_list_lock);
    > +
    >  	/* Something wrong if we are trying to remove same group twice */
    >  	WARN_ON_ONCE(list_empty(&blkg->q_node));
    >  	WARN_ON_ONCE(hlist_unhashed(&blkg->blkcg_node));
    >  	list_del_init(&blkg->q_node);
    >  	hlist_del_init_rcu(&blkg->blkcg_node);
    > +	list_del_init(&blkg->pending_alloc_node);
    > +
    > +	spin_unlock(&pending_alloc_list_lock);

    Why put the whole thing inside the alloc lock?
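
    ie. only the alloc list manipulation needs it, something like:

	/* Something wrong if we are trying to remove same group twice */
	WARN_ON_ONCE(list_empty(&blkg->q_node));
	WARN_ON_ONCE(hlist_unhashed(&blkg->blkcg_node));
	list_del_init(&blkg->q_node);
	hlist_del_init_rcu(&blkg->blkcg_node);

	spin_lock(&pending_alloc_list_lock);
	list_del_init(&blkg->alloc_node);
	spin_unlock(&pending_alloc_list_lock);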

    Thanks.

    --
    tejun

