Date:	Mon, 20 Dec 2004 10:15:34 -0500 (EST)
From:	Rik van Riel <riel@redhat.com>
Subject:	[PATCH][1/2] adjust dirty threshold for lowmem-only mappings
Simply running "dd if=/dev/zero of=/dev/hd<one you can miss>" will result in OOM kills, with the dirty pagecache completely filling up lowmem. This patch is part 1 of the fix for that problem.
This patch effectively lowers the dirty limit for mappings which cannot be cached in highmem, counting the dirty limit as a percentage of lowmem instead. This should prevent heavy block device writers from pushing the VM over the edge and triggering OOM kills.
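For a concrete feel for the effect, here is a minimal userspace sketch of the arithmetic. The numbers are made up for a hypothetical 4GB i386 box with an 896MB lowmem zone, and the real get_dirty_limits() additionally clamps the ratios against the unmapped-page ratio, so treat this as an illustration only:

/*
 * Standalone sketch only -- not the kernel code.  Page counts and the
 * dirty ratio are illustrative values for a hypothetical 4GB i386 box.
 */
#include <stdio.h>

int main(void)
{
	unsigned long total_pages     = 1048576;	/* 4GB in 4KB pages      */
	unsigned long totalhigh_pages = 819200;		/* ~3.1GB of highmem     */
	unsigned long vm_dirty_ratio  = 40;		/* dirty limit, percent  */

	/* Old behaviour: limit is a percentage of all memory. */
	unsigned long old_thresh = vm_dirty_ratio * total_pages / 100;

	/*
	 * New behaviour for a lowmem-only mapping (gfp mask lacks
	 * __GFP_HIGHMEM): limit is a percentage of lowmem only.
	 */
	unsigned long new_thresh =
		vm_dirty_ratio * (total_pages - totalhigh_pages) / 100;

	printf("all memory:  %lu dirty pages allowed (%lu MB)\n",
	       old_thresh, old_thresh * 4 / 1024);
	printf("lowmem only: %lu dirty pages allowed (%lu MB)\n",
	       new_thresh, new_thresh * 4 / 1024);
	return 0;
}

With the old calculation the allowed amount of dirty pagecache (~1.6GB here) is larger than lowmem itself, which is exactly how a block device writer can fill lowmem with dirty pages and trigger OOM kills; counting against lowmem keeps the limit well inside the zone the pages must live in.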
Signed-off-by: Rik van Riel <riel@redhat.com>
--- linux-2.6.9/mm/page-writeback.c.highmem	2004-12-16 11:22:48.193641312 -0500
+++ linux-2.6.9/mm/page-writeback.c	2004-12-16 11:30:00.565676290 -0500
@@ -133,18 +133,28 @@
  * clamping level.
  */
 static void
-get_dirty_limits(struct writeback_state *wbs, long *pbackground, long *pdirty)
+get_dirty_limits(struct writeback_state *wbs, long *pbackground, long *pdirty, struct address_space *mapping)
 {
 	int background_ratio;		/* Percentages */
 	int dirty_ratio;
 	int unmapped_ratio;
 	long background;
 	long dirty;
+	unsigned long available_memory = total_pages;
 	struct task_struct *tsk;
 
 	get_writeback_state(wbs);
 
-	unmapped_ratio = 100 - (wbs->nr_mapped * 100) / total_pages;
+#ifdef CONFIG_HIGHMEM
+	/*
+	 * If this mapping can only allocate from low memory,
+	 * we exclude high memory from our count.
+	 */
+	if (mapping && !(mapping_gfp_mask(mapping) & __GFP_HIGHMEM))
+		available_memory -= totalhigh_pages;
+#endif
+
+	unmapped_ratio = 100 - (wbs->nr_mapped * 100) / available_memory;
 
 	dirty_ratio = vm_dirty_ratio;
 	if (dirty_ratio > unmapped_ratio / 2)
@@ -194,7 +204,8 @@
 			.nr_to_write	= write_chunk,
 		};
 
-		get_dirty_limits(&wbs, &background_thresh, &dirty_thresh);
+		get_dirty_limits(&wbs, &background_thresh,
+				&dirty_thresh, mapping);
 		nr_reclaimable = wbs.nr_dirty + wbs.nr_unstable;
 		if (nr_reclaimable + wbs.nr_writeback <= dirty_thresh)
 			break;
@@ -210,7 +221,7 @@
 		if (nr_reclaimable) {
 			writeback_inodes(&wbc);
 			get_dirty_limits(&wbs, &background_thresh,
-					&dirty_thresh);
+					&dirty_thresh, mapping);
 			nr_reclaimable = wbs.nr_dirty + wbs.nr_unstable;
 			if (nr_reclaimable + wbs.nr_writeback <= dirty_thresh)
 				break;
@@ -283,7 +294,7 @@
 	long dirty_thresh;
 
 	for ( ; ; ) {
-		get_dirty_limits(&wbs, &background_thresh, &dirty_thresh);
+		get_dirty_limits(&wbs, &background_thresh, &dirty_thresh, NULL);
 
 		/*
 		 * Boost the allowable dirty threshold a bit for page
@@ -318,7 +329,7 @@
 	long background_thresh;
 	long dirty_thresh;
 
-	get_dirty_limits(&wbs, &background_thresh, &dirty_thresh);
+	get_dirty_limits(&wbs, &background_thresh, &dirty_thresh, NULL);
 	if (wbs.nr_dirty + wbs.nr_unstable < background_thresh
 			&& min_pages <= 0)
 		break;