Subject: Re: [patch] x86, mm: pass in 'total' to __copy_from_user_*nocache()

> I think this is an accurate analysis as well; it's really unfortunate
> that the non-temporal stuff on x86 doesn't preserve existing cache lines
> when present.
>
> I thought that was the whole point. Don't pollute the caches, but
> if cache lines are already loaded there, use them and don't purge!

x86 actually supports that; it's just not done through movnt.
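
For reference, a movnt-style copy in userspace looks roughly like the
sketch below (SSE2 intrinsics; illustrative only, not the kernel's
__copy_user_nocache, and the function name and alignment assumptions
are mine). The streaming stores are written around the cache, so they
don't reuse destination lines even when those lines are already
resident:

#include <emmintrin.h>
#include <stddef.h>

/* Copy with non-temporal (movntdq) stores.  Assumes dst is 16-byte
 * aligned and len is a multiple of 16. */
static void copy_nt(void *dst, const void *src, size_t len)
{
	__m128i *d = dst;
	const __m128i *s = src;
	size_t i;

	for (i = 0; i < len / 16; i++) {
		__m128i v = _mm_loadu_si128(&s[i]);
		_mm_stream_si128(&d[i], v);	/* NT store, bypasses cache */
	}
	_mm_sfence();	/* NT stores are weakly ordered; fence before reuse */
}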

You can do that on x86 by using PREFETCHNTA (or PREFETCHT0/T1/T2 for
specific cache levels). Typically this is implemented by restricting
the cache line to a single way of the cache (so using at most 1/8 or
so of your last-level cache).
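
As a rough illustration of that (again only a userspace sketch, with a
made-up function name and an arbitrary 256-byte prefetch distance), you
can pull the source in with the NTA hint and then copy with ordinary
stores, which do reuse lines that are already cached:

#include <xmmintrin.h>
#include <string.h>
#include <stddef.h>

/* Copy 64 bytes at a time, prefetching the source a little ahead with
 * the non-temporal hint so it is typically confined to one cache way. */
static void copy_prefetchnta(void *dst, const void *src, size_t len)
{
	char *d = dst;
	const char *s = src;
	size_t off, chunk;

	for (off = 0; off < len; off += 64) {
		/* prefetch never faults, so running past the end is harmless */
		_mm_prefetch(s + off + 256, _MM_HINT_NTA);
		chunk = len - off < 64 ? len - off : 64;
		memcpy(d + off, s + off, chunk);	/* normal cached stores */
	}
}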

I'm not sure how it interacts with REP MOVS*, though; internally that
tends to do additional magic for larger copies.
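
(For completeness, by REP MOVS* I mean the string-move instructions,
i.e. roughly the following in GNU inline asm; the CPU picks its own
strategy, including cache behaviour, depending on the count:)

#include <stddef.h>

/* Byte copy via "rep movsb"; how the data is moved and cached is
 * decided internally by the CPU, especially for larger counts. */
static void copy_rep_movsb(void *dst, const void *src, size_t len)
{
	asm volatile("rep movsb"
		     : "+D" (dst), "+S" (src), "+c" (len)
		     : : "memory");
}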

-Andi

--
ak@linux.intel.com -- Speaking for myself only.

