Date: 2011-10-06
From: Stephen Rothwell <sfr@canb.auug.org.au>
Subject: linux-next: manual merge of the akpm tree with the writeback tree

Hi Andrew,

Today's linux-next merge of the akpm tree got a conflict in
mm/page-writeback.c between commit 6c14ae1e92c7 ("writeback: dirty
position control") from the writeback tree and commit
"mm/page-writeback.c: make determine_dirtyable_memory static again" from
the akpm tree.

Just context (I think). I fixed it up (see below) and can carry the fix
as necessary.
--
Cheers,
Stephen Rothwell <sfr@canb.auug.org.au>

diff --cc mm/page-writeback.c
index 325f753,da6d263..0000000
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@@ -296,73 -409,6 +355,12 @@@ int bdi_set_max_ratio(struct backing_de
}
EXPORT_SYMBOL(bdi_set_max_ratio);

- /*
-  * Work out the current dirty-memory clamping and background writeout
-  * thresholds.
-  *
-  * The main aim here is to lower them aggressively if there is a lot of mapped
-  * memory around. To avoid stressing page reclaim with lots of unreclaimable
-  * pages. It is better to clamp down on writers than to start swapping, and
-  * performing lots of scanning.
-  *
-  * We only allow 1/2 of the currently-unmapped memory to be dirtied.
-  *
-  * We don't permit the clamping level to fall below 5% - that is getting rather
-  * excessive.
-  *
-  * We make sure that the background writeout level is below the adjusted
-  * clamping level.
-  */
- static unsigned long highmem_dirtyable_memory(unsigned long total)
- {
- #ifdef CONFIG_HIGHMEM
- 	int node;
- 	unsigned long x = 0;
- 	for_each_node_state(node, N_HIGH_MEMORY) {
- 		struct zone *z =
- 			&NODE_DATA(node)->node_zones[ZONE_HIGHMEM];
- 		x += zone_page_state(z, NR_FREE_PAGES) +
- 		     zone_reclaimable_pages(z);
- 	}
- 	/*
- 	 * Make sure that the number of highmem pages is never larger
- 	 * than the number of the total dirtyable memory. This can only
- 	 * occur in very strange VM situations but we want to make sure
- 	 * that this does not occur.
- 	 */
- 	return min(x, total);
- #else
- 	return 0;
- #endif
- }
- /**
-  * determine_dirtyable_memory - amount of memory that may be used
-  *
-  * Returns the numebr of pages that can currently be freed and used
-  * by the kernel for direct mappings.
-  */
- unsigned long determine_dirtyable_memory(void)
- {
- 	unsigned long x;
- 	x = global_page_state(NR_FREE_PAGES) + global_reclaimable_pages();
-
- 	if (!vm_highmem_is_dirtyable)
- 		x -= highmem_dirtyable_memory(x);
- 	return x + 1;	/* Ensure that we never return 0 */
- }
+static unsigned long dirty_freerun_ceiling(unsigned long thresh,
+					   unsigned long bg_thresh)
+{
+	return (thresh + bg_thresh) / 2;
+}
+
static unsigned long hard_dirty_limit(unsigned long thresh)
{
	return max(thresh, global_dirty_limit);
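
For anyone skimming the conflict: the writeback-tree side introduces
dirty_freerun_ceiling(), the midpoint between the background and hard
dirty thresholds. As I read the "dirty position control" work
(6c14ae1e92c7), dirtiers below that ceiling are left alone and only get
throttled once they climb above it. The user-space snippet below is an
illustration of that check only, with made-up page counts and a fake
"throttled" message; it is not the kernel code.

#include <stdio.h>

/*
 * Illustration only: the freerun ceiling is the midpoint of the
 * background and hard dirty thresholds.  Below it a dirtying task is
 * left alone; above it throttling would start.
 */
static unsigned long dirty_freerun_ceiling(unsigned long thresh,
					   unsigned long bg_thresh)
{
	return (thresh + bg_thresh) / 2;
}

int main(void)
{
	/* Made-up numbers, in pages - not taken from any real system. */
	unsigned long bg_thresh = 1000;	/* background writeback threshold */
	unsigned long thresh = 2000;	/* hard dirty limit */
	unsigned long nr_dirty[] = { 800, 1400, 1600, 2100 };
	unsigned long freerun = dirty_freerun_ceiling(thresh, bg_thresh);
	unsigned int i;

	for (i = 0; i < sizeof(nr_dirty) / sizeof(nr_dirty[0]); i++)
		printf("nr_dirty=%lu freerun=%lu -> %s\n",
		       nr_dirty[i], freerun,
		       nr_dirty[i] <= freerun ? "left alone" : "throttled");
	return 0;
}

With these illustrative thresholds the ceiling is 1500 pages, so the
first two samples stay in the freerun region and the last two would be
throttled.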