From: Samium Gromoff <>
Subject: page_launder() on 2.4.9/10 issue
Date: Thu, 27 Sep 2001 23:14:52 +0000 (UTC)
Linus wrote:
> Think about it - do you really want the system to actively try to reach
> the point where it has no "regular" pages left, and has to start writing
> stuff out (and wait for them synchronously) in order to free up memory? I
> strongly feel that the old code was really really wrong - it may have been

I agree with you 100% here: I have been hit by this issue a lot of times. It is absolutely reproducible with the streaming I/O case. I think the lower the number of processes simultaneously accessing the data, the harder this beats us (I can't explain why, but that is how it feels).
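For what it's worth, here is a minimal sketch of the kind of streaming-write load I mean; the file name, chunk size, and total size are just placeholders, to be tuned past the physical RAM of the box:

/*
 * Sketch of a streaming-write reproducer (placeholders, not the exact
 * test case from this thread): fill the page cache with dirty pages
 * faster than bdflush/kupdate writes them back, so memory pressure
 * ends up driving page_launder() into synchronous writeout on 2.4.9/10.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
	/* Assumed sizes: 1 MB chunks, 2 GB total -- tune past physical RAM. */
	const size_t chunk = 1024 * 1024;
	const long long total = 2LL * 1024 * 1024 * 1024;
	long long written = 0;
	char *buf = malloc(chunk);
	int fd;

	if (!buf)
		return 1;
	memset(buf, 0xAA, chunk);

	fd = open("/tmp/stream.dat", O_CREAT | O_WRONLY | O_TRUNC, 0644);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* Single process streaming sequentially, no re-reads. */
	while (written < total) {
		ssize_t n = write(fd, buf, chunk);
		if (n < 0) {
			perror("write");
			break;
		}
		written += n;
	}

	close(fd);
	free(buf);
	return 0;
}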
Sorry if I'm just noise here...
cheers,
Sam