Date: Mon, 25 Sep 2000 15:30:50 +0200
From: Andrea Arcangeli <>
Subject: Re: the new VM
On Mon, Sep 25, 2000 at 03:12:58PM +0200, Ingo Molnar wrote:
> well, i think all kernel-space allocations have to be limited carefully,
When a machine without a gigabit ethernet runs OOM, it's userspace that allocated the memory via page faults, not the kernel.
And if the careful limit avoids the deadlock in the layer above alloc_pages, then it will also prevent alloc_pages from returning NULL, and you won't need an infinite loop in the first place (unless the memory balancing is buggy).
GFP should return NULL only if the machine is out of memory. The kernel can be written in a way that never deadlocks when the machine is out of memory, simply by checking the GFP retval. I don't think any in-kernel resource limit is necessary to have things reliable and fast. Most big dynamic caches and kernel data can be shrunk dynamically during memory pressure (perhaps except skbs, and I agree that for skbs on gigabit ethernet the thing is a little different).
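A minimal sketch of the pattern being argued for here (the function and its caller context are hypothetical, not from the original mail): the allocation path checks the allocator's return value and propagates -ENOMEM instead of looping until the allocation succeeds.

	/*
	 * Hedged sketch, kernel-style C: a hypothetical caller that treats a
	 * NULL return from alloc_pages() as "the machine is out of memory"
	 * and fails gracefully, rather than retrying forever.
	 */
	#include <linux/mm.h>
	#include <linux/errno.h>

	static int example_grow_cache(void)	/* hypothetical function name */
	{
		struct page *page;

		page = alloc_pages(GFP_KERNEL, 0);	/* order-0 allocation */
		if (!page)
			return -ENOMEM;	/* out of memory: report it, don't deadlock */

		/* ... use the page for the cache here ... */

		__free_pages(page, 0);
		return 0;
	}

The same principle applies up the stack: each layer hands the failure back to its caller instead of blocking on the allocator.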
Andrea