Subject: Re: [PATCH v3] scsi: storvsc: Parameterize number hardware queues
On Fri, Feb 12, 2021 at 04:35:16PM +0000, Michael Kelley wrote:
> From: Melanie Plageman <melanieplageman@gmail.com> Sent: Thursday, February 11, 2021 3:18 PM
> >
> > Add ability to set the number of hardware queues with a new module parameter,
> > storvsc_max_hw_queues. The default value remains the number of CPUs. This
> > functionality is useful in some environments (e.g. Microsoft Azure) where
> > decreasing the number of hardware queues has been shown to improve
> > performance.
> >
> > Signed-off-by: Melanie Plageman (Microsoft) <melanieplageman@gmail.com>
> > ---
> > drivers/scsi/storvsc_drv.c | 28 ++++++++++++++++++++++++++--
> > 1 file changed, 26 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
> > index 2e4fa77445fd..a64e6664c915 100644
> > --- a/drivers/scsi/storvsc_drv.c
> > +++ b/drivers/scsi/storvsc_drv.c
> > @@ -378,10 +378,14 @@ static u32 max_outstanding_req_per_channel;
> > static int storvsc_change_queue_depth(struct scsi_device *sdev, int queue_depth);
> >
> > static int storvsc_vcpus_per_sub_channel = 4;
> > +static int storvsc_max_hw_queues = -1;
> >
> > module_param(storvsc_ringbuffer_size, int, S_IRUGO);
> > MODULE_PARM_DESC(storvsc_ringbuffer_size, "Ring buffer size (bytes)");
> >
> > +module_param(storvsc_max_hw_queues, int, S_IRUGO|S_IWUSR);
> > +MODULE_PARM_DESC(storvsc_max_hw_queues, "Maximum number of hardware queues");
> > +
>
> There's been an effort underway to not use the symbolic permissions in
> module_param(), but to just use the octal digits (like 0600 for root only
> access). But I couldn't immediately find documentation on why this
> change is being made. And clearly it hasn't been applied to the
> existing module_param() uses here in storvsc_drv.c. But with this being
> a new parameter, let's use the recommended octal digit format.

Thanks. I will update this in v4.
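
For reference, a v4 declaration along those lines might look something like
this (just a sketch of the permission change; 0644 is the octal equivalent
of S_IRUGO|S_IWUSR, and the type may still change per the discussion below):

	module_param(storvsc_max_hw_queues, int, 0644);
	MODULE_PARM_DESC(storvsc_max_hw_queues, "Maximum number of hardware queues");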

>
> > module_param(storvsc_vcpus_per_sub_channel, int, S_IRUGO);
> > MODULE_PARM_DESC(storvsc_vcpus_per_sub_channel, "Ratio of VCPUs to subchannels");
> >
> > @@ -1897,6 +1901,7 @@ static int storvsc_probe(struct hv_device *device,
> > {
> > int ret;
> > int num_cpus = num_online_cpus();
> > + int num_present_cpus = num_present_cpus();
> > struct Scsi_Host *host;
> > struct hv_host_device *host_dev;
> > bool dev_is_ide = ((dev_id->driver_data == IDE_GUID) ? true : false);
> > @@ -2004,8 +2009,19 @@ static int storvsc_probe(struct hv_device *device,
> > * For non-IDE disks, the host supports multiple channels.
> > * Set the number of HW queues we are supporting.
> > */
> > - if (!dev_is_ide)
> > - host->nr_hw_queues = num_present_cpus();
> > + if (!dev_is_ide) {
> > + if (storvsc_max_hw_queues == -1)
> > + host->nr_hw_queues = num_present_cpus;
> > + else if (storvsc_max_hw_queues > num_present_cpus ||
> > + storvsc_max_hw_queues == 0 ||
> > + storvsc_max_hw_queues < -1) {
> > + storvsc_log(device, STORVSC_LOGGING_WARN,
> > + "Resetting invalid storvsc_max_hw_queues value to default.\n");
> > + host->nr_hw_queues = num_present_cpus;
> > + storvsc_max_hw_queues = -1;
> > + } else
> > + host->nr_hw_queues = storvsc_max_hw_queues;
> > + }
>
> I have a couple of thoughts about the above logic. As the code is written,
> valid values are integers from 1 to the number of CPUs, and -1. The logic
> would be simpler if the module parameter was an unsigned int instead of
> a signed int, and zero was the marker for "use number of CPUs". Then
> you wouldn't have to check for negative values or have special handling
> for -1.

I used -1 because the linter ./scripts/checkpatch.pl throws an ERROR: "do
not initialise statics to 0".
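
That said, checkpatch only complains about an explicit "= 0" initializer;
leaving the initializer off entirely still gives a zero default, since
statics are zero-initialized. So a declaration along these lines (sketch
only) should keep checkpatch happy while using 0 as the "use the number of
CPUs" marker you suggest:

	/* 0 (the implicit default) means "use the number of present CPUs" */
	static unsigned int storvsc_max_hw_queues;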

>
> Second, I think you can avoid intertwining the logic for checking for an
> invalid value, and actually setting host->nr_hw_queues. Check for an
> invalid value first, then set host->nr_hw_queues.
>
> Putting both thoughts together, you could get code like this:
>
> 	if (!dev_is_ide) {
> 		if (storvsc_max_hw_queues > num_present_cpus) {
> 			storvsc_max_hw_queues = 0;
> 			storvsc_log(device, STORVSC_LOGGING_WARN,
> 				    "Resetting invalid storvsc_max_hw_queues value to default.\n");
> 		}
> 		if (storvsc_max_hw_queues)
> 			host->nr_hw_queues = storvsc_max_hw_queues;
> 		else
> 			host->nr_hw_queues = num_present_cpus;
> 	}

I will update the logic like this.

>
> >
> > /*
> > * Set the error handler work queue.
> > @@ -2169,6 +2185,14 @@ static int __init storvsc_drv_init(void)
> > vmscsi_size_delta,
> > sizeof(u64)));
> >
> > + if (storvsc_max_hw_queues > num_present_cpus() ||
> > + storvsc_max_hw_queues == 0 ||
> > + storvsc_max_hw_queues < -1) {
> > + pr_warn("Setting storvsc_max_hw_queues to -1. %d is invalid.\n",
> > + storvsc_max_hw_queues);
> > + storvsc_max_hw_queues = -1;
> > + }
> > +
>
> Is this check really needed? Any usage of the value will be in
> storvsc_probe() where the same check is performed. I'm not seeing
> a scenario where this check adds value over what's already being
> done in storvsc_probe(), but maybe I'm missing it.

It is not. I had initially added it because I did not plan on making the
parameter updatable, and I thought it would be better to have only one
message about the invalid value instead of one per device (from each
probe()). But after making the parameter updatable, I had to add invalid
value checking to storvsc_probe() anyway, so this check is definitely
unneeded. If the parameter is specified at boot time and this check is
present, you would only get one instance of the log message (because it
resets storvsc_max_hw_queues to the default before probe() is called), but
I don't think that is worth keeping the check for.

>
> > #if IS_ENABLED(CONFIG_SCSI_FC_ATTRS)
> > fc_transport_template = fc_attach_transport(&fc_transport_functions);
> > if (!fc_transport_template)
> > --
> > 2.20.1
>
