Date:    Thu, 23 Apr 1998 11:21:29 +0200 (MET DST)
From:    Rik van Riel <>
Subject: Re: 2.1.97 mm and cache - basic stupid idea included!
On Wed, 22 Apr 1998, Oliver Neukum wrote:
> > A3.3 Default: at pagecache.borrowpercent, which is considerably
> > above minimum.
>
> At this moment, we already have disk activity and significant CPU load.
> Perhaps we should have another, even higher threshold, below which we
> would swap when the system is idle.
With my patch, the system uses a high- and low-water mark when swapping/freeing pages. This does essentially the same thing.
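Roughly, and purely as a sketch (the names below are illustrative,
not the actual patch code):

/* Sketch only: two-watermark page freeing. */
static int nr_free_pages = 40;          /* pages on the free list   */
static const int freepages_low  = 64;   /* start freeing below this */
static const int freepages_high = 128;  /* stop freeing above this  */

/* Stand-in for the real page stealer; pretend one page is freed.   */
static int try_to_free_page(void)
{
        nr_free_pages++;
        return 1;
}

static void balance_free_pages(void)
{
        if (nr_free_pages >= freepages_low)
                return;                 /* above the low mark: idle  */

        /* Below the low mark: free a batch of pages up to the high
         * mark, so we don't wake up again on the next allocation.  */
        while (nr_free_pages < freepages_high)
                if (!try_to_free_page())
                        break;          /* nothing left to evict     */
}

The gap between the two marks is what buys you the batching: one
wakeup does a chunk of work instead of the system thrashing right
at the minimum.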
> >> filesystems: performance improves if info is already in memory
> >> (cache)
> >> (virtual) memory: performance improves if info is already in memory
> >
> > The global target of a memory management system is to limit
> > the number of I/Os that the system needs to do. But we have
> > to take into account that FS I/O is often more expensive
> > than swap I/O (need to lookup metadata, data is scattered
> > all over the disk, etc...).
>
> Is this true if we consider interactive performance as a goal, too?
> Let me give an example:
> File I/O usually happens as a result of the user's explicit request
> (menu selection, etc. ...),
> while paging may happen every time, even if the user expects no delay
> (getting the windowmanager's menu)
File I/O usually happens as a result of an mmap()ed program having a pagefault. The difficulty with your theory is that program data is both in the page cache and in swappable memory...
This means that we:
- make sure that the page cache doesn't get overly large
  (max & borrow percentages)
- swap pagecache and usermem in turn when the page cache isn't
  overly large any more (roughly as sketched below)
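In C-ish terms the policy might look like this; the min/borrow/max
struct mirrors the pagecache sysctl idea, but the code itself is
only a sketch:

/* Sketch of the borrow-percent eviction policy described above.    */
enum victim { SHRINK_CACHE, SHRINK_USERMEM };

struct cache_limits {
        int min_percent;        /* never shrink the cache below this */
        int borrow_percent;     /* the preferred ("default") size    */
        int max_percent;        /* never grow the cache past this    */
};

static struct cache_limits pagecache = { 5, 15, 60 };

static enum victim choose_victim(long cache_pages, long total_pages)
{
        static int turn;        /* flips between cache and usermem   */
        long percent = cache_pages * 100 / total_pages;

        if (percent > pagecache.borrow_percent)
                return SHRINK_CACHE;    /* cache is overly large     */
        if (percent <= pagecache.min_percent)
                return SHRINK_USERMEM;  /* leave the cache alone     */

        /* Normal range: take from page cache and usermem in turn.   */
        turn = !turn;
        return turn ? SHRINK_CACHE : SHRINK_USERMEM;
}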
> We should make adjustments for the cost of an I/O operation.
> Getting a page from a ZIP-drive is slower than from harddisk, NFS may
> be worse.
> This requires, however, shrinking the cache of each device separately.
Or we could apply a cost to the shrinking of each cache. We would however run into problems when there's only one very large file being read by one program.
E.g.: video streaming would ruin this scheme.
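If we did want per-cache costs, a (purely illustrative) victim
selection could weight page age by the cost of re-fetching from the
backing device:

/* Illustrative only: prefer evicting pages that are cheap to bring
 * back.  The cost numbers are made up (disk=1, ZIP=4, NFS=8), and
 * the streaming problem stays: one huge sequential file on a cheap
 * disk would still push everything else out.                        */
struct cached_page {
        unsigned long age;      /* higher = less recently used       */
        unsigned long io_cost;  /* relative refetch cost of device   */
};

static int pick_victim(struct cached_page *p, int n)
{
        int best = 0, i;

        for (i = 1; i < n; i++)
                /* age/cost compared by cross-multiplying, to avoid
                 * integer-division truncation.                      */
                if (p[i].age * p[best].io_cost >
                    p[best].age * p[i].io_cost)
                        best = i;
        return best;
}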
> In an ideal world we might consider balancing I/O over all devices.
> That is, under light load the cost is probably determined by disk
> seek time, and under heavy load by the percentage of I/O throughput
> required and the CPU usage due to I/O. Could this be somehow measured
> by the kernel or do we have a need for further tuning parameters?
Some balancing might be nice, however. I just don't know _how_ to do
this properly. I _do_ have some Digital Unix hints, however: their
documentation states that the page cache isn't grown for files which
occupy more than 10% of the page cache when the page cache is larger
than 50% of memory.
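Translated into code, my reading of that hint would be something
like this (names illustrative; only the 10%/50% figures come from
their documentation):

/* Sketch of the Digital Unix heuristic as I read it: once the page
 * cache holds more than half of memory, stop growing it on behalf
 * of any single file that already owns more than 10% of the cache. */
static int may_grow_cache_for(long file_cached, long cache_pages,
                              long total_pages)
{
        if (cache_pages * 100 / total_pages <= 50)
                return 1;       /* cache still small: always grow   */

        /* Cache is over half of memory: starve single large files
         * (a video stream, say) rather than the rest of the system. */
        return file_cached * 100 / cache_pages <= 10;
}

That would automatically catch the video-streaming case above,
without any per-device tuning.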
> Even cooler would be a factor settable by syscall, allowing, let's
> say, a windowmanager to tell the kernel to take fewer pages from the
> task controlling the window under focus, or to spare tasks just
> redrawing their windows. If we could combine this with adaptive
> scheduling we might get interactive performance to new highs.
This would be:
- virtually impossible, since your X task may be running everywhere
- plain wrong, since the database and/or app server could be:
  - more important than the occasional console user
  - interactively used by far more people than that one console user
    reading alt.alt.alt.alt.alt :)
- unfair, since you would prefer one type of user (using X) over
  console/telnet users (who, IMHO, should be preferred because they
  use less resources)
Rik.
+-------------------------------------------+--------------------------+
| Linux: - LinuxHQ MM-patches page          | Scouting webmaster       |
|        - kswapd ask-him & complain-to guy | Vries cubscout leader    |
| http://www.phys.uu.nl/~riel/              | <H.H.vanRiel@phys.uu.nl> |
+-------------------------------------------+--------------------------+