Date:    Tue, 9 Jan 2001 14:59:07 -0800 (PST)
From:    Linus Torvalds <>
Subject: Re: VM subsystem bug in 2.4.0 ?
On 9 Jan 2001, Christoph Rohland wrote:
> Linus Torvalds <torvalds@transmeta.com> writes:
> >
> > Note that this would be solved very cleanly if the SHM code would use the
> > "VM_LOCKED" flag, and actually lock the pages in the VM, instead of trying
> > to lock them down for writepage().
>
> here comes the patch. (lightly tested)
I'd really like an opinion on whether this is truly legal or not. After all, it changes the behaviour to mean "pages are locked only if they have been mapped into virtual memory", which is not what it used to mean.
Arguably the new semantics are perfectly valid on their own, but I'm not sure they are acceptable.
In contrast, the PG_realdirty approach would preserve the old behaviour of truly locked-down shm segments, with no significant difference in complexity.
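To make the trade-off concrete, the VM_LOCKED approach boils down to something like the sketch below when a locked segment gets mapped. This is a hypothetical fragment in the style of 2.4 ipc/shm.c, not Christoph's actual patch; file_to_shp() is an assumed helper, and the exact flag test may differ.

/*
 * Hypothetical sketch of the VM_LOCKED approach -- not the actual
 * patch.  file_to_shp() is an assumed helper for reaching the
 * segment behind the mapping.
 */
#include <linux/mm.h>
#include <linux/shm.h>

static int shm_mmap(struct file *file, struct vm_area_struct *vma)
{
	struct shmid_kernel *shp = file_to_shp(file);	/* assumption */

	/*
	 * If the segment was locked with SHM_LOCK, mark the mapping
	 * VM_LOCKED so page reclaim leaves its pages alone once they
	 * have been faulted in.
	 */
	if (shp->shm_perm.mode & SHM_LOCKED)
		vma->vm_flags |= VM_LOCKED;

	vma->vm_ops = &shm_vm_ops;
	return 0;
}

The semantic change is visible right there: SHM_LOCK on an unattached segment pins nothing until some process actually maps the segment and faults the pages in.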
What do other UNIXes do for shm_lock()?
The Linux man-page explicitly states for SHM_LOCK that
	The user must fault in any pages that are required to be
	present after locking is enabled.
which kind of implies to me that the VM_LOCKED implementation is ok. HOWEVER, the HP-UX man-page, for example, certainly implies that the PG_realdirty approach is the correct one. The IRIX man-pages, in contrast, say
	Locking occurs per address space; multiple processes or sprocs
	mapping the area at different addresses each need to issue the
	lock (this is primarily an issue with the per-process page
	tables).
which again implies that they've done something akin to a VM_LOCKED implementation.
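For what it's worth, the Linux man-page wording translates into userspace code like this minimal sketch (segment size is arbitrary, and SHM_LOCK needs the usual privileges): lock the segment, attach it, then fault every page in by hand.

#include <stdio.h>
#include <unistd.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
	long pg = sysconf(_SC_PAGESIZE);
	size_t len = 4 * pg;
	size_t off;
	char *p;

	int id = shmget(IPC_PRIVATE, len, IPC_CREAT | 0600);
	if (id < 0) { perror("shmget"); return 1; }

	/* Lock the segment.  Needs CAP_IPC_LOCK / root. */
	if (shmctl(id, SHM_LOCK, NULL) < 0) { perror("shmctl"); return 1; }

	p = shmat(id, NULL, 0);
	if (p == (void *) -1) { perror("shmat"); return 1; }

	/*
	 * Per the Linux man-page, locking alone guarantees nothing
	 * about residency: fault each page in by touching it.
	 */
	for (off = 0; off < len; off += pg)
		p[off] = 1;

	shmdt(p);
	shmctl(id, IPC_RMID, NULL);
	return 0;
}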
Does anybody have any better pointers, ideas, or opinions?
Linus