Date:    Tue, 19 May 2015
From:    Dan Carpenter
Subject: Re: [PATCH 6/6] staging: vme_user: provide DMA functionality
On Mon, May 18, 2015 at 09:56:33PM +0300, Dmitry Kalinkin wrote:
> +static int vme_user_sg_to_dma_list(const struct vme_dma_op *dma_op,
> +        struct sg_table *sgt, int sg_count, struct vme_dma_list *dma_list)
> +{
> +        ssize_t pos = 0;
> +        struct scatterlist *sg;
> +        int i, ret;
> +
> +        for_each_sg(sgt->sgl, sg, sg_count, i) {
> +                struct vme_dma_attr *pci_attr, *vme_attr, *dest, *src;
> +                dma_addr_t hw_address = sg_dma_address(sg);
> +                unsigned int hw_len = sg_dma_len(sg);
> +
> +                vme_attr = vme_dma_vme_attribute(dma_op->vme_addr + pos,
                                                    ^^^^^^^^^^^^^^^^^^^^^^

->vme_addr comes from the user and we don't seem to have done any
validation that it's correct. This addition can overflow. How is this
safe? (This is not a rhetorical question, I am a newbie in this).
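
Something along these lines (only a sketch, and it assumes ->vme_addr and
->count are unsigned fields of the user-visible struct) would rule the
wrap-around out before the list is built:

        /* sketch: with unsigned fields, wrap-around shows up as end < start */
        if (dma_op->vme_addr + dma_op->count < dma_op->vme_addr)
                return -EINVAL;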

> +                        dma_op->aspace, dma_op->cycle, dma_op->dwidth);
> +                if (!vme_attr)
> +                        return -ENOMEM;
> +
> +                pci_attr = vme_dma_pci_attribute(hw_address);
> +                if (!pci_attr) {
> +                        vme_dma_free_attribute(vme_attr);
> +                        return -ENOMEM;
> +                }
> +
> +                if (dma_op->write) {
> +                        dest = vme_attr;
> +                        src = pci_attr;
> +                } else {
> +                        dest = pci_attr;
> +                        src = vme_attr;
> +                }
> +
> +                ret = vme_dma_list_add(dma_list, src, dest, hw_len);
> +
> +                /*
> +                 * XXX VME API doesn't mention whether we should keep
> +                 * attributes around
> +                 */
> +                vme_dma_free_attribute(vme_attr);
> +                vme_dma_free_attribute(pci_attr);
> +
> +                if (ret)
> +                        return ret;
> +
> +                pos += hw_len;
> +        }
> +
> +        WARN_ON(pos != dma_op->count);
> +
> +        return 0;
> +}
> +
> +static ssize_t vme_user_dma_ioctl(unsigned int minor,
> +        const struct vme_dma_op *dma_op)
> +{
> +        unsigned int offset = offset_in_page(dma_op->buf_vaddr);
> +        unsigned long nr_pages;
> +        enum dma_data_direction dir;
> +        struct vme_dma_list *dma_list;
> +        struct sg_table *sgt = NULL;
> +        struct page **pages = NULL;
> +        long got_pages;
> +        int ret, sg_count;
> +
> +        /* Overflow check for nr_pages */
> +        if (dma_op->count > U32_MAX - 2 * PAGE_SIZE)
> +                return -EINVAL;
> +
> +        /* Prevent WARN from dma_map_sg */
> +        if (dma_op->count == 0)
> +                return 0;
> +
> +        nr_pages = (offset + dma_op->count + PAGE_SIZE - 1) >> PAGE_SHIFT;
> +        dir = dma_op->write ? DMA_TO_DEVICE : DMA_FROM_DEVICE;
> +
> +        pages = kmalloc_array(nr_pages, sizeof(pages[0]), GFP_KERNEL);

This lets the user try to allocate huge amounts of RAM.  Is there no
reasonable max size we can use?
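
Just as a sketch (VME_USER_DMA_MAX is a made-up name and value, not
something from the VME core), capping ->count before kmalloc_array()
sizes pages[] would bound the allocation:

#define VME_USER_DMA_MAX (4 << 20)	/* illustrative 4 MiB cap */

        /* reject oversized requests before sizing the pages[] array */
        if (dma_op->count > VME_USER_DMA_MAX)
                return -EINVAL;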

> +        if (!pages) {
> +                ret = -ENOMEM;
> +                goto free;
> +        }
> +
> +        sgt = kzalloc(sizeof(*sgt), GFP_KERNEL);
> +        if (!sgt) {
> +                ret = -ENOMEM;
> +                goto free;
> +        }
> +
> +        dma_list = vme_new_dma_list(image[minor].resource);
> +        if (!dma_list) {
> +                ret = -ENOMEM;
> +                goto free;
> +        }
> +
> +        got_pages = get_user_pages_fast(dma_op->buf_vaddr, nr_pages,
> +                !dma_op->write, pages);

This file is all indented poorly, but these patches add a bunch of new
badly indented lines, making a bad situation worse.  It should look like this:

        got_pages = get_user_pages_fast(dma_op->buf_vaddr, nr_pages,
                                        !dma_op->write, pages);

You sometimes might have to use spaces to make things align correctly.

        got_pages = some_fake_name(dma_op->buf_vaddr, nr_pages,
                                   !dma_op->write, pages);

[tab][tab][tab][tab][space][space][space][space]!dma_op->write, pages);

regards,
dan carpenter


