Subject: Re: linux-next: build failure after merge of the block tree
On Thu, Dec 03, 2015 at 09:39:01AM +0100, Matias Bjørling wrote:
> A little crazy, yes. The reason is that the NVMe admin queues and NVMe user
> queues are driven by different request queues. Previously this was patched
> up by having two queues in the lightnvm core: one for admin and another
> for user. But it was later merged into a single queue.

Why? If you look at the current structure we have the admin queue,
which is always allocated by the low-level driver, although it could and
should move to the core eventually. And then we have command-set-specific
request_queues for the I/O queues: one per NS for NVM currently, either
one per NS or one globally for LightNVM, and in Fabrics I currently
have another magic one :) Due to the tagset pointer in struct nvme_ctrl
that's really easy to handle.
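To make the layout above concrete, here is a minimal sketch (not the actual
kernel definitions): the controller owns the admin request_queue plus a
pointer to the shared I/O tagset, and each namespace gets its own
request_queue created from that tagset. The example_ctrl/example_ns types
and example_alloc_ns_queue() are hypothetical stand-ins for struct
nvme_ctrl, struct nvme_ns, and the driver's namespace setup path.

	/*
	 * Simplified sketch of the structure described above; the type and
	 * function names are illustrative, not the real kernel symbols.
	 */
	#include <linux/blk-mq.h>
	#include <linux/blkdev.h>
	#include <linux/err.h>

	struct example_ctrl {			/* stand-in for struct nvme_ctrl */
		struct request_queue *admin_q;	/* admin queue, one per controller */
		struct blk_mq_tag_set *tagset;	/* shared tagset for all I/O queues */
	};

	struct example_ns {			/* stand-in for struct nvme_ns */
		struct example_ctrl *ctrl;
		struct request_queue *queue;	/* per-NS I/O request_queue */
	};

	/* Allocate a per-namespace I/O queue from the controller's shared tagset. */
	static int example_alloc_ns_queue(struct example_ns *ns)
	{
		ns->queue = blk_mq_init_queue(ns->ctrl->tagset);
		if (IS_ERR(ns->queue))
			return PTR_ERR(ns->queue);
		ns->queue->queuedata = ns;
		return 0;
	}

Because every such request_queue hangs off the same tagset pointer in the
controller structure, a user of the queue (NVM, LightNVM, or Fabrics) does
not need to care how many queues exist or who allocated them.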


\
 
 \ /
  Last update: 2015-12-03 10:21    [W:0.112 / U:0.348 seconds]
©2003-2020 Jasper Spaans|hosted at Digital Ocean and TransIP|Read the blog|Advertise on this site