Date:	Thu, 13 Oct 2016 15:18:02 +0200
From:	Jan Kara <jack@suse.com>
Subject:	Re: [PATCHv3 17/41] filemap: handle huge pages in filemap_fdatawait_range()
On Thu 13-10-16 15:08:44, Kirill A. Shutemov wrote:
> On Thu, Oct 13, 2016 at 11:44:41AM +0200, Jan Kara wrote:
> > On Thu 15-09-16 14:54:59, Kirill A. Shutemov wrote:
> > > We writeback whole huge page a time.
> >
> > This is one of the things I don't understand. Firstly I didn't see where
> > changes of writeback like this would happen (maybe they come later).
> > Secondly I'm not sure why e.g. writeback should behave atomically wrt huge
> > pages. Is this because radix-tree multiorder entry tracks dirtiness for us
> > at that granularity?
>
> We track dirty/writeback on per-compound pages: meaning we have one
> dirty/writeback flag for whole compound page, not on every individual
> 4k subpage. The same story for radix-tree tags.
>
> > BTW, can you also explain why do we need multiorder entries? What do
> > they solve for us?
>
> It helps us having coherent view on tags in radix-tree: no matter which
> index we refer from the range huge page covers we will get the same
> answer on which tags set.
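
To make the quoted rule concrete, here is a minimal sketch (not code from
the patch series): with one dirty flag per compound page, a query on any
4k subpage must be redirected to the head page. compound_head() and
PageDirty() are real kernel helpers; huge_page_dirty() is a hypothetical
name used only for illustration.

	#include <linux/mm.h>
	#include <linux/page-flags.h>

	/*
	 * Illustrative sketch: dirty/writeback state is kept on the
	 * head page of a compound page only, so any subpage query
	 * resolves to the single flag on the head page.
	 */
	static bool huge_page_dirty(struct page *subpage)
	{
		return PageDirty(compound_head(subpage));
	}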
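The multiorder-entry point can likewise be stated as an invariant: with a
single multiorder entry covering a huge page, radix_tree_tag_get() must
return the same answer at every index in the covered range. A hypothetical
check of that invariant (radix_tree_tag_get(), mapping->page_tree and
PAGECACHE_TAG_DIRTY are real as of this kernel era; the function itself is
illustrative only):

	#include <linux/fs.h>
	#include <linux/pagemap.h>
	#include <linux/radix-tree.h>

	/*
	 * Illustrative only: every index covered by the huge page must
	 * report the same dirty-tag state as the first index.
	 */
	static bool huge_page_tags_coherent(struct address_space *mapping,
					    pgoff_t start,
					    unsigned int nr_subpages)
	{
		int first = radix_tree_tag_get(&mapping->page_tree, start,
					       PAGECACHE_TAG_DIRTY);
		pgoff_t index;

		for (index = start + 1; index < start + nr_subpages; index++)
			if (radix_tree_tag_get(&mapping->page_tree, index,
					       PAGECACHE_TAG_DIRTY) != first)
				return false;
		return true;
	}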
OK, I understand that. But why do we need a coherent view? For which
purposes exactly do we care that it is not just a bunch of 4k pages that
happen to be physically contiguous and thus can be mapped by one PMD?
								Honza
--
Jan Kara <jack@suse.com>
SUSE Labs, CR