Subject: Re: [2.4] heavy-load under swap space shortage
> May save CPU, but won't do the freeing expected of it.
> Or am I reading your patch wrongly?

No, you're right.

> Let me dig out my patch for 2.4.13, it appears to apply much the
> same to 2.4.24 or 2.4.25-pre8, though untested recently. I think I
> got involved in something else, and never pushed it out back then.
> Do you find it helpful?

With a slight modification (please see the patch below), it's really helpful.
I hope you push it to mainline again.

> It's a little more complicated than yours because although it's good
> to let the tasks encountering page_table_lock contention go on to
> try another mm, it would not be good to update swap_mm too readily:
> the contentious mm is likely to be the one which most needs freeing.

I agree.

> Think carefully about swap_address here: I seem to recall that
> it is racy, but benign - won't go wrong often enough to matter.

I had to remove the raciness check in swap_out_mm; otherwise swap_out_mm
returns immediately and we contend on mmlist_lock in mmput().
I think the removal is OK because we now avoid the 'rush to the same mm' with the trylock.

I also added a check for 'mm == swap_mm'. It might be necessary to avoid the
corner case where mmlist_lock is held for too long, i.e. when every mm on the
list is contended and the scan would otherwise never terminate.
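Just to illustrate the intent, here is a minimal stand-alone sketch of that
scan (my own user-space model with pthread mutexes, not kernel code; the names
fake_mm and pick_mm are made up): walk the circular list, skip any mm whose
page_table_lock is contended, and give up once the walk wraps back to where it
started.

/* Stand-alone model of the trylock scan in the patch below -- my own
 * user-space illustration, not kernel code.  Build with: gcc -pthread */
#include <pthread.h>
#include <stdio.h>

struct fake_mm {
	struct fake_mm *next;			/* circular, like mm->mmlist */
	pthread_mutex_t page_table_lock;
	const char *name;
	int is_init_mm;				/* stands in for mm == &init_mm */
};

/* Return the first lockable mm at or after 'start' (returned locked),
 * or NULL if the walk wraps around without finding one -- the
 * 'goto empty' case in the patch below. */
static struct fake_mm *pick_mm(struct fake_mm *start)
{
	struct fake_mm *mm = start;

	while (mm->is_init_mm ||
	       pthread_mutex_trylock(&mm->page_table_lock) != 0) {
		mm = mm->next;
		if (mm == start)		/* wrapped: every mm is busy */
			return NULL;
	}
	return mm;
}

int main(void)
{
	struct fake_mm a = { .name = "init_mm", .is_init_mm = 1,
			     .page_table_lock = PTHREAD_MUTEX_INITIALIZER };
	struct fake_mm b = { .name = "mm_b",
			     .page_table_lock = PTHREAD_MUTEX_INITIALIZER };
	struct fake_mm c = { .name = "mm_c",
			     .page_table_lock = PTHREAD_MUTEX_INITIALIZER };
	struct fake_mm *victim;

	a.next = &b; b.next = &c; c.next = &a;

	pthread_mutex_lock(&b.page_table_lock);	/* simulate contention on b */

	victim = pick_mm(&a);		/* skips init_mm and busy b, picks c */
	printf("picked %s\n", victim ? victim->name : "(none)");
	if (victim)
		pthread_mutex_unlock(&victim->page_table_lock);
	return 0;
}

The wrap-around test is what bounds the time spent under mmlist_lock when
every mm happens to be busy.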

Best regards.
--
NOMURA, Jun'ichi <j-nomura@ce.jp.nec.com>


--- linux-2.4.24/mm/vmscan.c
+++ linux/mm/vmscan.c
@@ -292,13 +292,7 @@ static inline int swap_out_mm(struct mm_
* Find the proper vm-area after freezing the vma chain
* and ptes.
*/
- spin_lock(&mm->page_table_lock);
address = mm->swap_address;
- if (address == TASK_SIZE || swap_mm != mm) {
- /* We raced: don't count this mm but try again */
- ++*mmcounter;
- goto out_unlock;
- }
vma = find_vma(mm, address);
if (vma) {
if (address < vma->vm_start)
@@ -310,15 +304,14 @@ static inline int swap_out_mm(struct mm_
if (!vma)
break;
if (!count)
- goto out_unlock;
+ goto out;
address = vma->vm_start;
}
}
/* Indicate that we reached the end of address space */
mm->swap_address = TASK_SIZE;

-out_unlock:
- spin_unlock(&mm->page_table_lock);
+out:
return count;
}

@@ -345,12 +338,20 @@ static int swap_out(zone_t * classzone)
swap_mm = mm;
}

+ /* scan mmlist and lock the first available mm */
+ while (mm == &init_mm || !spin_trylock(&mm->page_table_lock)) {
+ mm = list_entry(mm->mmlist.next, struct mm_struct, mmlist);
+ if (mm == swap_mm)
+ goto empty;
+ }
+
/* Make sure the mm doesn't disappear when we drop the lock.. */
atomic_inc(&mm->mm_users);
spin_unlock(&mmlist_lock);

nr_pages = swap_out_mm(mm, nr_pages, &counter, classzone);

+ spin_unlock(&mm->page_table_lock);
mmput(mm);

if (!nr_pages)