Subject: RE: [EXT] Re: [PATCH RFC net-next 11/19] net: mvpp2: add flow control RXQ and BM pool config callbacks
Date: 2021-01-10
    > >
    > > +/* Routine calculate single queue shares address space */
    > > +static int mvpp22_calc_shared_addr_space(struct mvpp2_port *port)
    > > +{
    > > +	/* If number of CPU's greater than number of threads, return last
    > > +	 * address space
    > > +	 */
    > > +	if (num_active_cpus() >= MVPP2_MAX_THREADS)
    > > +		return MVPP2_MAX_THREADS - 1;
    > > +
    > > +	return num_active_cpus();
    >
    > Firstly - this can be written as:
    >
    > return min(num_active_cpus(), MVPP2_MAX_THREADS - 1);

    OK.
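
    Something like this (untested sketch; min_t() rather than min(), since
    num_active_cpus() returns unsigned int while MVPP2_MAX_THREADS - 1 is a
    signed constant, and the kernel's min() warns on mixed types):

    static int mvpp22_calc_shared_addr_space(struct mvpp2_port *port)
    {
    	/* Cap at the last address space when CPUs outnumber threads. */
    	return min_t(unsigned int, num_active_cpus(), MVPP2_MAX_THREADS - 1);
    }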

    > Secondly - what if the number of active CPUs change, for example due to
    > hotplug activity. What if we boot with maxcpus=1 and then bring the other
    > CPUs online after networking has been started? The number of active CPUs is
    > dynamically managed via the scheduler as CPUs are brought online or offline.
    >
    > > +/* Routine enable flow control for RXQs conditon */
    > > +void mvpp2_rxq_enable_fc(struct mvpp2_port *port)
    > ...
    > > +/* Routine disable flow control for RXQs conditon */
    > > +void mvpp2_rxq_disable_fc(struct mvpp2_port *port)
    >
    > Nothing seems to call these in this patch, so on its own, it's not obvious how
    > these are being called, and therefore what remedy to suggest for
    > num_active_cpus().

    I don't think the current driver supports CPU hotplug; in any case, I can
    remove num_active_cpus() and just use the shared RX IRQ ID.
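
    For reference, if hotplug handling were ever wanted instead, the driver
    could register a callback through the generic cpuhp API and re-evaluate
    the CPU count as CPUs come online. A rough sketch only, not a proposal
    (mvpp2_port_fc_online() is a hypothetical helper, not part of the driver):

    #include <linux/cpuhotplug.h>

    static int mvpp2_port_fc_online(unsigned int cpu)
    {
    	/* Recompute flow-control thresholds for the new CPU count. */
    	return 0;
    }

    static int mvpp2_fc_hotplug_init(void)
    {
    	int ret;

    	ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "net/mvpp2:online",
    				mvpp2_port_fc_online, NULL);
    	/* Dynamic states return the allocated state number on success. */
    	return ret < 0 ? ret : 0;
    }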

    Thanks.
