Subject: Re: [PATCHv2 1/6] zsmalloc: remove insert_zspage() ->inuse optimization
On (23/02/23 15:09), Minchan Kim wrote:
>
> On Thu, Feb 23, 2023 at 12:04:46PM +0900, Sergey Senozhatsky wrote:
> > This optimization has no effect. It only ensures
> > that when a page is added to its corresponding
> > fullness list, its "inuse" counter is higher or
> > lower than the "inuse" counter of the page at the
> > head of the list. The intention was to keep busy
> > pages at the head, so they could be filled up and
> > moved to the ZS_FULL fullness group more quickly.
> > However, this doesn't work, as the "inuse" counter
> > of a page can be modified by
>
> zspage
>
> Let's use the term zspage instead of page to prevent confusion.
>
> > obj_free(), but the page may still belong to the
> > same fullness list. So fix_fullness_group() won't change
>
> Yes. I didn't expect it to be perfect from the beginning,
> but thought it would help as a small optimization.
>
> > the page's position in relation to the head's "inuse"
> > counter, leading to a largely random order of pages
> > within the fullness list.
>
> Good point.
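
For context, the ordering logic being removed looks roughly like
this (a paraphrased sketch of insert_zspage() in mm/zsmalloc.c;
exact helper names and comments may differ between versions):

static void insert_zspage(struct size_class *class,
			  struct zspage *zspage,
			  enum fullness_group fullness)
{
	struct zspage *head;

	class_stat_inc(class, fullness, 1);
	head = list_first_entry_or_null(&class->fullness_list[fullness],
					struct zspage, list);
	/*
	 * Keep busier zspages closer to the head: if the new zspage
	 * is less busy than the current head, queue it right after
	 * the head; otherwise it becomes the new head.
	 */
	if (head && get_zspage_inuse(zspage) < get_zspage_inuse(head))
		list_add(&zspage->list, &head->list);
	else
		list_add(&zspage->list, &class->fullness_list[fullness]);
}

Note that the comparison happens only once, at insertion time, and
only against the current head; nothing re-orders the list when
obj_free() later decrements ->inuse, which is why the ordering
degrades as in the example below.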
>
> >
> > For instance, consider a printout of the "inuse"
> > counters of the first 10 pages in a class that holds
> > 93 objects per zspage:
> >
> > ZS_ALMOST_EMPTY: 36 67 68 64 35 54 63 52
> >
> > As we can see, the page with the lowest "inuse" counter
> > is actually the head of the fullness list.
>
> Let's state clearly what the patch is doing:
>
> "So, let's remove the pointless optimization", or some better wording.

ACK to all feedback (for all the patches). Thanks!
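
For the record, with the ->inuse comparison gone, the insert path
boils down to a plain list_add() (a sketch of what the patch reduces
insert_zspage() to, modulo the exact stats helper):

static void insert_zspage(struct size_class *class,
			  struct zspage *zspage,
			  enum fullness_group fullness)
{
	class_stat_inc(class, fullness, 1);
	list_add(&zspage->list, &class->fullness_list[fullness]);
}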
