mm/slab: Dissolve slab_map_pages() in its caller

Bugzilla: https://bugzilla.redhat.com/2120352

commit c798154311e10ddba56a515c8ddce14e592bbe25
Author: Vlastimil Babka <vbabka@suse.cz>
Date:   Mon Nov 1 17:02:30 2021 +0100

    mm/slab: Dissolve slab_map_pages() in its caller

    The function no longer does what its name and comment suggest, and just
    sets two struct page fields, which can be done directly in its sole
    caller.

    Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
    Reviewed-by: Roman Gushchin <guro@fb.com>
    Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>

Signed-off-by: Chris von Recklinghausen <crecklin@redhat.com>
Chris von Recklinghausen 2022-10-12 07:08:35 -04:00
parent fe89438db8
commit ded021f89b
1 changed file with 2 additions and 13 deletions

mm/slab.c

@@ -2546,18 +2546,6 @@ static void slab_put_obj(struct kmem_cache *cachep,
 	set_free_obj(page, page->active, objnr);
 }
 
-/*
- * Map pages beginning at addr to the given cache and slab. This is required
- * for the slab allocator to be able to lookup the cache and slab of a
- * virtual address for kfree, ksize, and slab debugging.
- */
-static void slab_map_pages(struct kmem_cache *cache, struct page *page,
-			   void *freelist)
-{
-	page->slab_cache = cache;
-	page->freelist = freelist;
-}
-
 /*
  * Grow (by 1) the number of slabs within a cache. This is called by
  * kmem_cache_alloc() when there are no active objs left in a cache.
@@ -2621,7 +2609,8 @@ static struct page *cache_grow_begin(struct kmem_cache *cachep,
 	if (OFF_SLAB(cachep) && !freelist)
 		goto opps1;
 
-	slab_map_pages(cachep, page, freelist);
+	page->slab_cache = cachep;
+	page->freelist = freelist;
 
 	cache_init_objs(cachep, page);
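
For readers less familiar with the pattern, here is a minimal standalone C sketch of what "dissolving" the helper amounts to: the two field assignments move from the trivial wrapper into its only caller, and the wrapper is then deleted. The struct definitions and the main() harness below are simplified stand-ins, not the kernel's struct kmem_cache or struct page.

/*
 * Standalone sketch of the refactoring; simplified stand-in types,
 * not kernel code.
 */
#include <stdio.h>

struct cache { const char *name; };

struct slab_page {
	struct cache *slab_cache;
	void *freelist;
};

/* Before: a wrapper whose "map pages" name no longer matched its body. */
static void slab_map_pages(struct cache *cache, struct slab_page *page,
			   void *freelist)
{
	page->slab_cache = cache;
	page->freelist = freelist;
}

/* After: the sole caller does the two assignments itself. */
static void cache_grow_begin(struct cache *cachep, struct slab_page *page,
			     void *freelist)
{
	page->slab_cache = cachep;
	page->freelist = freelist;
	/* ... object initialization would continue here ... */
}

int main(void)
{
	struct cache c = { "demo" };
	struct slab_page before = { 0 }, after = { 0 };
	char freelist[8];

	slab_map_pages(&c, &before, freelist);   /* old call chain */
	cache_grow_begin(&c, &after, freelist);  /* new, inlined form */

	printf("identical result: %d\n",
	       before.slab_cache == after.slab_cache &&
	       before.freelist == after.freelist);
	return 0;
}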