Revert "mm: skip CMA pages when they are not available"

JIRA: https://issues.redhat.com/browse/RHEL-27745

This patch is a backport of the following upstream commit:
commit bfe0857c20c663fcc1592fa4e3a61ca12b07dac9
Author: Usama Arif <usamaarif642@gmail.com>
Date:   Wed Aug 21 20:26:07 2024 +0100

    Revert "mm: skip CMA pages when they are not available"

    This reverts commit 5da226dbfce3 ("mm: skip CMA pages when they are not
    available") and b7108d66318a ("Multi-gen LRU: skip CMA pages when they are
    not eligible").

    lruvec->lru_lock is highly contended and is held when calling
    isolate_lru_folios.  If the lru has a large number of CMA folios
    consecutively, while the allocation type requested is not MIGRATE_MOVABLE,
    isolate_lru_folios can hold the lock for a very long time while it skips
    those.  For FIO workload, ~150million order=0 folios were skipped to
    isolate a few ZONE_DMA folios [1].  This can cause lockups [1] and high
    memory pressure for extended periods of time [2].

    Remove skipping CMA for MGLRU as well, as it was introduced in sort_folio
    for the same reason as 5da226dbfce3a2f44978c2c7cf88166e69a6788b.

    [1] https://lore.kernel.org/all/CAOUHufbkhMZYz20aM_3rHZ3OcK4m2puji2FGpUpn_-DevGk3Kg@mail.gmail.com/
    [2] https://lore.kernel.org/all/ZrssOrcJIDy8hacI@gmail.com/

    [usamaarif642@gmail.com: also revert b7108d66318a, per Johannes]
      Link: https://lkml.kernel.org/r/9060a32d-b2d7-48c0-8626-1db535653c54@gmail.com
      Link: https://lkml.kernel.org/r/357ac325-4c61-497a-92a3-bdbd230d5ec9@gmail.com
    Link: https://lkml.kernel.org/r/9060a32d-b2d7-48c0-8626-1db535653c54@gmail.com
    Fixes: 5da226dbfce3 ("mm: skip CMA pages when they are not available")
    Signed-off-by: Usama Arif <usamaarif642@gmail.com>
    Acked-by: Johannes Weiner <hannes@cmpxchg.org>
    Cc: Bharata B Rao <bharata@amd.com>
    Cc: Breno Leitao <leitao@debian.org>
    Cc: David Hildenbrand <david@redhat.com>
    Cc: Matthew Wilcox <willy@infradead.org>
    Cc: Rik van Riel <riel@surriel.com>
    Cc: Vlastimil Babka <vbabka@suse.cz>
    Cc: Yu Zhao <yuzhao@google.com>
    Cc: Zhaoyang Huang <huangzhaoyang@gmail.com>
    Cc: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
    Cc: <stable@vger.kernel.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Signed-off-by: Rafael Aquini <raquini@redhat.com>
---
 mm/vmscan.c | 24 ++----------------------
 1 file changed, 2 insertions(+), 22 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1562,25 +1562,6 @@ static __always_inline void update_lru_sizes(struct lruvec *lruvec,
 
 }
 
-#ifdef CONFIG_CMA
-/*
- * It is waste of effort to scan and reclaim CMA pages if it is not available
- * for current allocation context. Kswapd can not be enrolled as it can not
- * distinguish this scenario by using sc->gfp_mask = GFP_KERNEL
- */
-static bool skip_cma(struct folio *folio, struct scan_control *sc)
-{
-	return !current_is_kswapd() &&
-			gfp_migratetype(sc->gfp_mask) != MIGRATE_MOVABLE &&
-			folio_migratetype(folio) == MIGRATE_CMA;
-}
-#else
-static bool skip_cma(struct folio *folio, struct scan_control *sc)
-{
-	return false;
-}
-#endif
-
 /*
  * Isolating page from the lruvec to fill in @dst list by nr_to_scan times.
  *
@@ -1627,8 +1608,7 @@ static unsigned long isolate_lru_folios(unsigned long nr_to_scan,
 		nr_pages = folio_nr_pages(folio);
 		total_scan += nr_pages;
 
-		if (folio_zonenum(folio) > sc->reclaim_idx ||
-				skip_cma(folio, sc)) {
+		if (folio_zonenum(folio) > sc->reclaim_idx) {
 			nr_skipped[folio_zonenum(folio)] += nr_pages;
 			move_to = &folios_skipped;
 			goto move;
@@ -4273,7 +4253,7 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_c
 	}
 
 	/* ineligible */
-	if (zone > sc->reclaim_idx || skip_cma(folio, sc)) {
+	if (zone > sc->reclaim_idx) {
 		gen = folio_inc_gen(lruvec, folio, false);
 		list_move_tail(&folio->lru, &lrugen->folios[gen][type][zone]);
 		return true;
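
For context, below is a minimal userspace sketch of the pattern this revert removes. It is an illustration only, not kernel code: every identifier, constant, and number in it is invented for the example. It models the reverted skip_cma() check inside an isolation loop that runs entirely under one lock, so that isolating a handful of eligible folios first walks a very long consecutive run of CMA folios with the lock held the whole time.

/*
 * Illustration only -- a toy userspace model of the problem described in the
 * commit message, not kernel code.  All names and numbers are made up.
 */
#include <stdbool.h>
#include <stdio.h>

enum migratetype { MIGRATE_UNMOVABLE, MIGRATE_MOVABLE, MIGRATE_CMA };

struct folio {
	enum migratetype mt;
};

/* Stand-in for the reverted skip_cma(): skip CMA folios when the allocation
 * that triggered reclaim cannot be satisfied from CMA pageblocks. */
static bool skip_cma(const struct folio *folio, enum migratetype request)
{
	return request != MIGRATE_MOVABLE && folio->mt == MIGRATE_CMA;
}

int main(void)
{
	enum { NR_FOLIOS = 1 << 20, NR_TO_ISOLATE = 32 };
	static struct folio lru[NR_FOLIOS];
	unsigned long scanned = 0, skipped = 0, isolated = 0;

	/* A long consecutive run of CMA folios at the scan end of the LRU. */
	for (unsigned long i = 0; i < NR_FOLIOS; i++)
		lru[i].mt = i < NR_FOLIOS - 64 ? MIGRATE_CMA : MIGRATE_UNMOVABLE;

	/* The lru_lock would be acquired here ... */
	for (unsigned long i = 0; i < NR_FOLIOS && isolated < NR_TO_ISOLATE; i++) {
		scanned++;
		if (skip_cma(&lru[i], MIGRATE_UNMOVABLE)) {
			skipped++;	/* folio set aside; lock still held */
			continue;
		}
		isolated++;
	}
	/* ... and only released here, after the entire skip run was walked. */

	printf("scanned %lu, skipped %lu, isolated %lu under one lock hold\n",
	       scanned, skipped, isolated);
	return 0;
}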