zsmalloc: remove insert_zspage() ->inuse optimization

JIRA: https://issues.redhat.com/browse/RHEL-27741

commit a40a71e8343e281fedce9747ac1972c5556a982b
Author: Sergey Senozhatsky <senozhatsky@chromium.org>
Date:   Sat Mar 4 12:48:32 2023 +0900

    zsmalloc: remove insert_zspage() ->inuse optimization

    Patch series "zsmalloc: fine-grained fullness and new compaction
    algorithm", v4.

    Existing zsmalloc page fullness grouping leads to suboptimal page
    selection for both zs_malloc() and zs_compact().  This patchset reworks
    zsmalloc fullness grouping/classification.

    Additionally, it implements a new compaction algorithm that is
    expected to use fewer CPU cycles (as it potentially does fewer
    memcpy() calls in zs_object_copy()).

    Test (synthetic) results can be seen in patch 0003.

    This patch (of 4):

    This optimization has no effect.  It only ensures that, at the moment
    a zspage is added to its corresponding fullness list, it is placed
    before or after the current head depending on whether its "inuse"
    counter is higher or lower than the head's.  The intention was to keep
    busy zspages at the head, so they could be filled up and moved to the
    ZS_FULL fullness group more quickly.  However, this doesn't work: the
    "inuse" counter of a zspage can later be decremented by obj_free()
    while the zspage still belongs to the same fullness list, and
    fix_fullness_group() then never re-positions it relative to the head's
    "inuse" counter.  As a result, the order of zspages within a fullness
    list is largely random.

    For instance, consider a printout of the "inuse" counters of the first
    few zspages in a class that holds 93 objects per zspage:

     ZS_ALMOST_EMPTY:  36  67  68  64  35  54  63  52

    As we can see, the zspage at the head of the fullness list is in fact
    one of the least used, and the counters follow no particular order (a
    userspace sketch reproducing this behaviour follows the commit message
    below).

    Remove this pointless "optimization".

    Link: https://lkml.kernel.org/r/20230304034835.2082479-1-senozhatsky@chromium.org
    Link: https://lkml.kernel.org/r/20230304034835.2082479-2-senozhatsky@chromium.org
    Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org>
    Acked-by: Minchan Kim <minchan@kernel.org>
    Cc: Yosry Ahmed <yosryahmed@google.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Signed-off-by: Chris von Recklinghausen <crecklin@redhat.com>
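
The ordering breakdown described in the commit message is easy to reproduce
outside the kernel.  The following is a minimal userspace sketch, not code
from the patch: the fullness list is modelled as a plain array (index 0 is
the list head), the insert policy mirrors the removed head comparison, and
all names and numbers are illustrative.

#include <stdio.h>
#include <stdlib.h>

#define NR_ZSPAGES	8
#define OBJS_PER_PAGE	93	/* objects per zspage, as in the example */

static int list[NR_ZSPAGES];	/* ->inuse counters, index 0 == head */
static int nr;

/* Shift elements right and place @val at position @pos. */
static void place(int pos, int val)
{
	for (int i = nr; i > pos; i--)
		list[i] = list[i - 1];
	list[pos] = val;
	nr++;
}

int main(void)
{
	srand(1);

	for (int i = 0; i < NR_ZSPAGES; i++) {
		int inuse = 1 + rand() % OBJS_PER_PAGE;

		/* The removed policy: compare against the head only. */
		if (nr && inuse < list[0])
			place(1, inuse);	/* right after the head */
		else
			place(0, inuse);	/* new head */
	}

	/*
	 * obj_free() later decrements ->inuse in place; as long as the
	 * zspage stays in the same fullness group, nothing re-sorts it.
	 */
	for (int i = 0; i < nr; i++)
		list[i] -= rand() % list[i];

	printf("ZS_ALMOST_EMPTY:");
	for (int i = 0; i < nr; i++)
		printf(" %3d", list[i]);
	printf("\n");
	return 0;
}

Built with "cc -o fullness fullness.c", this prints a ZS_ALMOST_EMPTY-style
line whose counters typically follow no particular order, which is why the
head comparison in insert_zspage() maintains no useful invariant.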
---
 mm/zsmalloc.c | 13 +------------
 1 file changed, 1 insertion(+), 12 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -762,19 +762,8 @@ static void insert_zspage(struct size_class *class,
 				struct zspage *zspage,
 				enum fullness_group fullness)
 {
-	struct zspage *head;
-
 	class_stat_inc(class, fullness, 1);
-	head = list_first_entry_or_null(&class->fullness_list[fullness],
-					struct zspage, list);
-	/*
-	 * We want to see more ZS_FULL pages and less almost empty/full.
-	 * Put pages with higher ->inuse first.
-	 */
-	if (head && get_zspage_inuse(zspage) < get_zspage_inuse(head))
-		list_add(&zspage->list, &head->list);
-	else
-		list_add(&zspage->list, &class->fullness_list[fullness]);
+	list_add(&zspage->list, &class->fullness_list[fullness]);
 }
 
 /*