Subject: Re: [PATCH 0/5] Volatile Ranges (v12) & LSF-MM discussion fodder
On Wed, Apr 02, 2014 at 09:37:49AM -0700, H. Peter Anvin wrote:
> On 04/02/2014 09:32 AM, H. Peter Anvin wrote:
> > On 04/02/2014 09:30 AM, Johannes Weiner wrote:
> >>
> >> So between zero-fill and SIGBUS, I'd prefer the one which results in
> >> the simpler user interface / fewer system calls.
> >>
> >
> > The use cases are different; I believe this should be a user space option.
> >
>
> Case in point, for example: imagine a JIT. You *really* don't want to
> zero-fill memory behind the back of your JIT, as all zero memory may not
> be a trapping instruction (it isn't on x86, for example, and if you are
> unlucky you may be modifying *part* of an instruction.)

Yes, and I think this would be comparable to the compressed-library
usecase that John mentioned. What's special about these cases is that
the accesses are no longer under control of the application because
it's literally code that the CPU jumps into. It is obvious to me that
such a usecase would require SIGBUS handling. However, it seems that
in any usecase *besides* executable code caches, userspace would have
the ability to mark the pages non-volatile ahead of time, and thus not
require SIGBUS delivery.
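
To make the distinction concrete, here is a minimal sketch of that
"mark non-volatile before touching it" pattern.  It assumes the
vrange(start, len, mode, &purged) interface proposed in this series;
the mode constants, the syscall number, and the wrapper below are
placeholders rather than the real ABI:

#define _GNU_SOURCE
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>

#define VRANGE_VOLATILE		0	/* illustrative values only */
#define VRANGE_NONVOLATILE	1
#define __NR_vrange		-1	/* placeholder; no such syscall upstream */

static long vrange(void *start, size_t len, int mode, int *purged)
{
	return syscall(__NR_vrange, start, len, mode, purged);
}

/*
 * Make a cached object usable again before accessing it.  Because the
 * application only touches the range after this call, a purge shows up
 * as a flag here rather than as a SIGBUS at access time.
 */
static int cache_make_usable(void *cache, size_t len)
{
	int purged = 0;

	if (vrange(cache, len, VRANGE_NONVOLATILE, &purged) < 0)
		return -1;

	if (purged) {
		memset(cache, 0, len);	/* contents were reclaimed; rebuild */
		return 0;		/* miss */
	}
	return 1;			/* hit */
}

A code cache that the CPU jumps into directly has no place to hook such
a call before the access, which is why that case would still need the
fault delivered as a signal.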

Hence my follow-up question in the other mail about how large we
expect such code caches to become in practice in relation to
overall system memory. Are code caches interesting reclaim candidates
to begin with? Are they big enough to make the machine thrash/swap
otherwise?


