 
From: Andi Kleen <ak@linux.intel.com>
Subject: Re: [REVIEW] NVM Express driver
Date: 2011-03-11
Matthew Wilcox <willy@linux.intel.com> writes:
> +
> +static struct nvme_queue *get_nvmeq(struct nvme_ns *ns)
> +{
> +	int qid, cpu = get_cpu();
> +	if (cpu < ns->dev->queue_count)
> +		qid = cpu + 1;
> +	else
> +		qid = (cpu % rounddown_pow_of_two(ns->dev->queue_count))
> +					+ 1;

This will likely be a full divide; better to use a mask.
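
Since queue_count is already rounded down to a power of two here, the modulo
can become an AND with a mask cached at queue setup. Something like this
(io_queue_mask is just a placeholder name, not a field in the patch):

	/* at queue init: cache the mask once */
	dev->io_queue_mask = rounddown_pow_of_two(dev->queue_count) - 1;

	/* hot path: x % n == x & (n - 1) when n is a power of two */
	qid = (cpu & ns->dev->io_queue_mask) + 1;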

> +	nprps = DIV_ROUND_UP(length, PAGE_SIZE);
> +	npages = DIV_ROUND_UP(8 * nprps, PAGE_SIZE);
> +	prps = kmalloc(sizeof(*prps) + sizeof(__le64 *) * npages, GFP_ATOMIC);
> +	prp_page = 0;
> +	if (nprps <= (256 / 8)) {
> +		pool = dev->prp_small_pool;
> +		prps->npages = 0;


Unchecked GFP_ATOMIC allocation? That will oops soon.
Besides, GFP_ATOMIC is a very risky thing to rely on in a low-memory
situation, which can trigger writeouts that then depend on this very
allocation.
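
The kmalloc() result should at least be checked before it is dereferenced,
roughly like this (the bare error return is only a sketch; the real error
path depends on how the caller wants to fail the request):

	prps = kmalloc(sizeof(*prps) + sizeof(__le64 *) * npages, GFP_ATOMIC);
	if (!prps)
		return NULL;	/* sketch: caller has to handle the failure */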


> +	} else {
> +		pool = dev->prp_page_pool;
> +		prps->npages = npages;
> +	}
> +
> +	prp_list = dma_pool_alloc(pool, GFP_ATOMIC, &prp_dma);
> +	prps->list[prp_page++] = prp_list;

And another one.
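
dma_pool_alloc() returns NULL on failure too, so this also needs a check and
an unwind of the earlier allocation. Sketch only; the exact unwind depends on
the surrounding code:

	prp_list = dma_pool_alloc(pool, GFP_ATOMIC, &prp_dma);
	if (!prp_list) {
		kfree(prps);	/* sketch: undo the earlier kmalloc */
		return NULL;
	}
	prps->list[prp_page++] = prp_list;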


Didn't read all of it.

-Andi

--
ak@linux.intel.com -- Speaking for myself only

