From: Armin Rigo <>
Date: Mon, 19 May 2014 18:42:42 +0200
Subject: Re: remap_file_pages() use
Hi Kirill,
On 19 May 2014 17:53, Kirill A. Shutemov <kirill.shutemov@linux.intel.com> wrote:
> Is it necessary to remap in 4k chunks for you?
> What about 64k chunks? Or something bigger?
Good point. We remap chunks of 4k, which is not much, but is already much larger than the typical object size. Suppose we do such a remapping for a single object: then all other neighbouring objects that happen to live in the same page are also copied. Then, if some other thread modifies these other objects, we need extra copies to keep the objects in sync across all of their versions.
That's the reason for keeping the size of remappings as small as possible. But we need to measure the actual impact. We can easily argue that if the process is using many GB of memory, then the risk of unrelated copies starts to decrease. It might be fine to increase the remapping unit in this case.
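For illustration, one such 4k remapping boils down to something like this (only a sketch, not our actual code; the helper name and variables are made up, and 'base' is assumed to be the start of an existing MAP_SHARED mapping of the shared file):

    #define _GNU_SOURCE
    #include <stddef.h>
    #include <sys/mman.h>

    /* Alias one 4k page of the shared file at a chosen spot inside an
       existing MAP_SHARED mapping.  'page_index' selects the 4k page
       inside the mapping; 'file_page' is the page offset in the file
       that should appear there. */
    static int remap_one_page(char *base, size_t page_index, size_t file_page)
    {
        /* prot must be 0 for remap_file_pages(); pgoff is counted in
           pages, not bytes */
        return remap_file_pages(base + page_index * 4096, 4096,
                                0, file_page, 0);
    }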
If there is an official way to know in advance how many remappings our process is allowed to perform, then we could adapt the chunk size as the process grows. Or we could catch ENOMEM and double the remapping size (at some process-wide synchronization point). All in all, thanks for the note: it looks like there are solutions (even if less elegant than remap_file_pages from the user's perspective).
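Concretely, such a fallback based on plain mmap() of the shared file could look roughly like this (names, the 1MB cap and the doubling policy are only an illustration of the idea, not a worked-out design):

    #include <errno.h>
    #include <stddef.h>
    #include <sys/types.h>
    #include <sys/mman.h>

    static size_t remap_chunk = 4096;   /* current remapping unit, in bytes */

    /* Map 'len' bytes of the shared file 'fd', starting at file offset
       'off', over the fixed address 'addr' inside an already-reserved
       region.  If the kernel refuses because we already hold too many
       separate mappings (ENOMEM), remember to double the unit and let
       the caller retry at the next process-wide synchronization point. */
    static int remap_with_mmap(void *addr, size_t len, int fd, off_t off)
    {
        void *res = mmap(addr, len, PROT_READ | PROT_WRITE,
                         MAP_SHARED | MAP_FIXED, fd, off);
        if (res == MAP_FAILED) {
            if (errno == ENOMEM && remap_chunk < (1u << 20))
                remap_chunk *= 2;       /* e.g. 4k -> 8k -> 16k -> ... */
            return -1;
        }
        return 0;
    }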
A bientôt,
Armin.