Subject: RE: [PATCH 1/1] Staging: hv: storvsc: Move the storage driver out of the staging area

> -----Original Message-----
> From: James Bottomley [mailto:James.Bottomley@HansenPartnership.com]
> Sent: Thursday, November 17, 2011 1:27 PM
> To: KY Srinivasan
> Cc: gregkh@suse.de; linux-kernel@vger.kernel.org;
> devel@linuxdriverproject.org; virtualization@lists.osdl.org; linux-
> scsi@vger.kernel.org; ohering@suse.com; hch@infradead.org
> Subject: Re: [PATCH 1/1] Staging: hv: storvsc: Move the storage driver out of the
> staging area
>
> On Tue, 2011-11-08 at 10:13 -0800, K. Y. Srinivasan wrote:
> > The storage driver (storvsc_drv.c) handles all block storage devices
> > assigned to Linux guests hosted on Hyper-V. This driver has been in the
> > staging tree for a while and this patch moves it out of the staging area.
> > As per Greg's recommendation, this patch makes no changes to the staging/hv
> > directory. Once the driver moves out of staging, we will cleanup the
> > staging/hv directory.
> >
> > This patch includes all the patches that I have sent against the staging/hv
> > tree to address the comments I have gotten to date on this storage driver.
>
> First comment is that it would have been easier to see the individual
> patches for comment before you committed them.

I am not sure the patches have been committed yet. All of them were sent
to the various mailing lists, and you were copied as well. In the future, I will
also copy the scsi mailing list on the staging patches.

>
> The way you did mempool isn't entirely right: the problem is that to
> prevent a memory to I/O deadlock we need to ensure forward progress on
> the drain device. Just having 64 commands available to the host doesn't
> necessarily achieve this because LUN1 could consume them all and starve
> LUN0 which is the drain device leading to the deadlock, so the mempool
> really needs to be per device using slave_alloc.

I will do this per LUN.
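
Roughly what I have in mind is sketched below: a small reserve pool
created from slave_alloc so that each LUN keeps its own guaranteed
command memory, preserving forward progress on the drain device. The
names and reserve size (storvsc_cmd_cache, STORVSC_MIN_CMDS, 16) are
placeholders, not final driver identifiers:

#include <linux/mempool.h>
#include <linux/slab.h>
#include <scsi/scsi_device.h>

#define STORVSC_MIN_CMDS 16	/* illustrative per-LUN reserve */

static struct kmem_cache *storvsc_cmd_cache;	/* created at module init */

/* slave_alloc hook: one reserve pool per LUN, so a busy sibling LUN
 * can no longer starve the drain device of request memory. */
static int storvsc_device_alloc(struct scsi_device *sdevice)
{
	mempool_t *pool;

	pool = mempool_create_slab_pool(STORVSC_MIN_CMDS,
					storvsc_cmd_cache);
	if (!pool)
		return -ENOMEM;

	sdevice->hostdata = pool;
	return 0;
}

/* slave_destroy hook: release the per-LUN reserve. */
static void storvsc_device_destroy(struct scsi_device *sdevice)
{
	mempool_destroy(sdevice->hostdata);
	sdevice->hostdata = NULL;
}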

>
> +static int storvsc_device_alloc(struct scsi_device *sdevice)
> +{
> + /*
> + * This enables luns to be located sparsely. Otherwise, we may not
> + * discover them.
> + */
> + sdevice->sdev_bflags |= BLIST_SPARSELUN | BLIST_LARGELUN;
> + return 0;
> +}
>
> Looks bogus ... this should happen automatically for SCSI-3 devices ...
> unless your hypervisor has some strange (and wrong) identification? I
> really think you want to use SCSI-3 because it will do report LUN
> scanning, which consumes far fewer resources.

I will see if I can clean this up.
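
If the hypervisor's INQUIRY data cannot be fixed to report a SCSI-3
ANSI version (which would let the mid-layer do REPORT LUNS scanning on
its own), one interim option is to request REPORT LUNS scanning
explicitly instead of the sparse sequential scan. A minimal sketch of
that idea, assuming the host template sets max_lun above 8:

#include <scsi/scsi_device.h>
#include <scsi/scsi_devinfo.h>

static int storvsc_device_alloc(struct scsi_device *sdevice)
{
	/*
	 * Prefer a single REPORT LUNS scan over probing every
	 * possible LUN; this consumes far fewer resources.
	 */
	sdevice->sdev_bflags |= BLIST_REPORTLUN2;
	return 0;
}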

>
> I still think you need to disable clustering and junk the bvec merge
> function. Your object seems to be to accumulate in page size multiples
> (and not aggregate over this) ... that's what clustering is designed to
> do.

As part of addressing your first round of comments, I experimented with
your suggestions, but I could not get rid of the bounce buffer handling:
even with your suggestions in place, I could still generate I/O patterns
that required bounce buffers. The variant I tried is sketched below.
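
This is only a minimal sketch of that variant, assuming the host-side
channel wants page-granular scatterlist entries; the template field
values and function names are illustrative, not the submitted driver:

#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_device.h>
#include <scsi/scsi_host.h>

static int storvsc_device_alloc(struct scsi_device *sdevice);
static int storvsc_queuecommand(struct Scsi_Host *host,
				struct scsi_cmnd *scmnd);

static struct scsi_host_template storvsc_host_template = {
	.module		= THIS_MODULE,
	.name		= "storvsc_host_t",
	.queuecommand	= storvsc_queuecommand,
	.slave_alloc	= storvsc_device_alloc,
	.this_id	= -1,
	/* Never merge scatterlist segments across a page boundary. */
	.use_clustering	= DISABLE_CLUSTERING,
	/*
	 * No blk_queue_merge_bvec() call in slave_configure either;
	 * the block layer's default merging applies.
	 */
};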

Regards,

K. Y
