Subject: Re: mapping user space buffer to kernel address space
Hi,

On Mon, Oct 16, 2000 at 12:08:54AM +0200, Andrea Arcangeli wrote:

> The basic problem is that map_user_kiobuf tries to map those pages by calling
> handle_mm_fault on their virtual addresses, and it assumes that when
> handle_mm_fault returns 1 the page is mapped. That's wrong.

Good point --- good work.

> Furthermore, in 2.4.x SMP (this couldn't happen in 2.2.x SMP at least) the page
> can also go away from under us, even assuming handle_mm_fault happened, by
> luck, to do its work right.

Hmm --- we had the BKL to protect against this when this code was
first done for 2.3. That's definitely a regression, agreed.

> I'm also not convinced that only increasing the page count in the critical
> section in map_user_kiobuf is enough, because swap_out doesn't care about the
> page count (in 2.2.x rawio takes the page lock). The page count is
> significant as a "pin" only for the page cache, not for anonymous or shm
> memory. swap_out can mess with anonymous memory with a page count > 1,
> for example.

This bit I think we have to talk about --- I'm not sure I agree. I
dropped that automatic-page-locking from the map_user_kiobuf code
quite deliberately.

Basically, we don't care at all about pinning the pte in the page
tables during the IO. All we really care about is preserving the
mapping between the user's VA and the physical page. If the VM
chooses to unmap the page temporarily, then as long as the page
remains in physical memory, a subsequent page fault will re-establish
the mapping. The swap cache and page cache are sufficient to make
this guarantee.
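
To make the distinction concrete: the only pinning map_user_kiobuf()
needs is on the struct page itself, never on the pte. Roughly like
this --- a sketch from memory, not the exact code in mm/memory.c, and
the follow_page() signature here is just an assumption:

#include <linux/mm.h>
#include <linux/iobuf.h>

/*
 * Sketch: pin the physical pages behind [va, va+len) into an iobuf.
 * We only elevate the page count --- we never lock the page or the
 * pte.  If the VM unmaps the pte later, the elevated count keeps the
 * page in the swap cache / page cache, so a subsequent fault finds
 * the same physical page again.
 */
static int pin_user_pages(struct kiobuf *iobuf, unsigned long va,
			  size_t len, int write)
{
	struct mm_struct *mm = current->mm;
	unsigned long ptr = va & PAGE_MASK;
	unsigned long end = va + len;
	struct page *page;

	down(&mm->mmap_sem);
	while (ptr < end) {
		spin_lock(&mm->page_table_lock);
		page = follow_page(ptr, write);	/* assumed helper */
		if (page)
			get_page(page);		/* the only "pin" we need */
		spin_unlock(&mm->page_table_lock);
		if (!page) {
			up(&mm->mmap_sem);
			return -EFAULT;	/* the real code faults the page in instead */
		}
		iobuf->maplist[iobuf->nr_pages++] = page;
		ptr += PAGE_SIZE;
	}
	up(&mm->mmap_sem);
	return 0;
}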

It's an important distinction, because we really would rather avoid
taking the page lock. If you happen to have the same page mapped into
multiple VAs within the region being written, should the kernel
object? If you're writing that region to disk, then it really
shouldn't matter. For some other applications, it might be important,
which is why I wanted to keep the map_user_kiobuf() and lock_kiobuf()
functions distinct.
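
For callers which really do need the pages locked (and which can
guarantee there are no duplicate mappings in the region), that can
stay a separate pass. Something like this --- again just a sketch,
not the real lock_kiobuf():

/*
 * Sketch of a separate locking pass.  This is exactly why it must
 * not be folded into map_user_kiobuf(): if the same physical page
 * appears twice in the maplist, the second lock_page() would sleep
 * forever waiting for a lock we already hold ourselves.
 */
static void lock_kiobuf_pages(struct kiobuf *iobuf)
{
	int i;

	for (i = 0; i < iobuf->nr_pages; i++)
		lock_page(iobuf->maplist[i]);	/* sleeps until the page is unlocked */
}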

Note that if you have a threaded application and another thread goes
messing with the MM while your IO is in progress, then it's possible
that the pages in the user's VA at the end of the IO are not the same
as the ones that were there at the start. No big deal, that's no
different to the situation if you have any other read or write going
on in parallel with other MM activity.

One final point: the new code,

> +	if (!(map = follow_page(ptr, write))) {
> +		char * user_ptr = (char *) ptr;
> +		char c;
>  		spin_unlock(&mm->page_table_lock);
> +		up(&mm->mmap_sem);
> +		if (get_user(c, user_ptr))
> +			goto out;
> +		if (write && put_user(c, user_ptr))
> +			goto out;
> +		down(&mm->mmap_sem);
> +		goto refind;

looks unnecessarily messy. We don't need the get_user() in ptrace:
why do we need it here? Similarly, the put_user() should be dealt
with by handle_mm_fault(). We already rely absolutely on the fact
that handle_mm_fault always makes progress eventually; if it didn't,
a page fault could produce an infinite loop in the kernel.
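
In other words, the refind loop can fault the page in directly, the
same way the fault path and ptrace do. Something along these lines
(signatures from memory, follow_page() as in your patch, and the
caller is assumed to hold mmap_sem --- so treat it as illustrative
only):

/*
 * Sketch of the loop without the get_user()/put_user() dance: if
 * follow_page() finds nothing, call handle_mm_fault() ourselves.
 * handle_mm_fault() is already required to make forward progress
 * eventually --- if it didn't, do_page_fault() itself could loop
 * forever.
 */
static struct page *fault_in_and_find(struct mm_struct *mm,
				      unsigned long ptr, int write)
{
	struct vm_area_struct *vma;
	struct page *map;

	spin_lock(&mm->page_table_lock);
	while (!(map = follow_page(ptr, write))) {
		spin_unlock(&mm->page_table_lock);
		vma = find_vma(mm, ptr);
		if (!vma || handle_mm_fault(mm, vma, ptr, write) <= 0)
			return NULL;	/* bad address or out of memory */
		spin_lock(&mm->page_table_lock);
	}
	get_page(map);			/* pin before dropping the lock */
	spin_unlock(&mm->page_table_lock);
	return map;
}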

Once I'm back in the UK I'll look at getting map_user_kiobuf() simply
to call the existing access_one_page() from ptrace. You're right,
this stuff is racy, so we should avoid duplicating it in memory.c.

Cheers,
Stephen
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
Please read the FAQ at http://www.tux.org/lkml/
