Commit Graph

11 Commits

Jeff Moyer f1ebf01f03 io_uring/alloc_cache: switch to array based caching
JIRA: https://issues.redhat.com/browse/RHEL-64867

commit 414d0f45c316221acbf066658afdbae5b354a5cc
Author: Jens Axboe <axboe@kernel.dk>
Date:   Wed Mar 20 15:19:44 2024 -0600

    io_uring/alloc_cache: switch to array based caching
    
    Currently lists are being used to manage this, but best practice is
    usually to have these in an array instead, as that is cheaper to manage.
    
    Outside of that detail, extra games also have to be played with KASAN,
    as the list linkage is stored inside the cached entry itself.
    
    Finally, all users of this need a struct io_cache_entry embedded in
    their struct, which is union'ized with something else in there that
    isn't used across the free -> realloc cycle.
    
    Get rid of all of that, and simply have it be an array. This will not
    change the memory used, as we're just trading an 8-byte member entry
    for the per-elem array size.
    
    This reduces the overhead of the recycled allocations, and it reduces
    the amount of code needed to support recycling to about half of what
    it currently is.
    
    Signed-off-by: Jens Axboe <axboe@kernel.dk>

Signed-off-by: Jeff Moyer <jmoyer@redhat.com>
2024-11-28 16:56:44 -05:00
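
To make the trade-off above concrete, here is a minimal C sketch of an
array-backed recycling cache. This illustrates the design only; it is not
the kernel's actual io_alloc_cache code, and the struct layout and
function names are assumptions.

    #include <linux/slab.h>

    /*
     * Sketch of an array-backed cache: recycled objects are plain
     * pointers in a flat array, so no linkage needs to be embedded in
     * the objects themselves.
     */
    struct alloc_cache {
        void **entries;      /* flat array of recycled objects */
        unsigned nr_cached;  /* slots currently filled */
        unsigned max_cached; /* array capacity */
        size_t elem_size;    /* size of each cached object */
    };

    static int alloc_cache_init(struct alloc_cache *cache,
                                unsigned max_nr, size_t size)
    {
        cache->entries = kvmalloc_array(max_nr, sizeof(void *), GFP_KERNEL);
        if (!cache->entries)
            return -ENOMEM;
        cache->nr_cached = 0;
        cache->max_cached = max_nr;
        cache->elem_size = size;
        return 0;
    }

    static bool alloc_cache_put(struct alloc_cache *cache, void *obj)
    {
        if (cache->nr_cached >= cache->max_cached)
            return false;   /* cache full: caller frees the object */
        cache->entries[cache->nr_cached++] = obj;
        return true;
    }

    static void *alloc_cache_get(struct alloc_cache *cache)
    {
        if (cache->nr_cached)
            return cache->entries[--cache->nr_cached];
        return kmalloc(cache->elem_size, GFP_KERNEL);
    }

The memory math in the message follows from this shape: the old scheme
spent 8 bytes of list linkage inside every object, while the array spends
8 bytes per slot outside them, so total usage stays roughly the same.
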
Jeff Moyer 448e2385c2 io_uring/alloc_cache: shrink default max entries from 512 to 128
JIRA: https://issues.redhat.com/browse/RHEL-64867

commit 0ae9b9a14d54bd0aa68c1e8bda9dd8e6346f1d87
Author: Jens Axboe <axboe@kernel.dk>
Date:   Sat Mar 16 18:23:44 2024 -0600

    io_uring/alloc_cache: shrink default max entries from 512 to 128
    
    In practice, we just need to recycle a few elements for (by far) most
    use cases. Shrink the total size down from 512 to 128, which should be
    more than enough.
    
    Signed-off-by: Jens Axboe <axboe@kernel.dk>

Signed-off-by: Jeff Moyer <jmoyer@redhat.com>
2024-11-28 16:36:44 -05:00
Jeff Moyer 725cdfd749 io_uring: use mempool KASAN hook
JIRA: https://issues.redhat.com/browse/RHEL-64867

commit 8ab3b09755d926afc3bdd2fadff7f159310440c2
Author: Andrey Konovalov <andreyknvl@gmail.com>
Date:   Tue Dec 19 23:29:05 2023 +0100

    io_uring: use mempool KASAN hook
    
    Use the proper kasan_mempool_unpoison_object hook for unpoisoning cached
    objects.
    
    A future change might also update io_uring to check the return value of
    kasan_mempool_poison_object to prevent double-free and invalid-free bugs.
    This proves to be non-trivial with the current way io_uring caches
    objects, so it is left out of scope for this series.
    
    Link: https://lkml.kernel.org/r/eca18d6cbf676ed784f1a1f209c386808a8087c5.1703024586.git.andreyknvl@google.com
    Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
    Cc: Alexander Lobakin <alobakin@pm.me>
    Cc: Alexander Potapenko <glider@google.com>
    Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
    Cc: Breno Leitao <leitao@debian.org>
    Cc: Dmitry Vyukov <dvyukov@google.com>
    Cc: Evgenii Stepanov <eugenis@google.com>
    Cc: Marco Elver <elver@google.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Signed-off-by: Jeff Moyer <jmoyer@redhat.com>
2024-11-28 16:24:44 -05:00
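
Sketched against a simplified, mempool-style cache (the types and names
here are invented for illustration), the hook pairing looks like this.
The put side includes the return-value check that, per the note above,
io_uring itself does not do yet:

    #include <linux/kasan.h>
    #include <linux/kernel.h>

    /* Simplified stand-in for a cache: pointers live in an external
     * array, so poisoned objects are never written to while cached. */
    struct obj_cache {
        void *slots[32];
        unsigned nr;
        size_t elem_size;
    };

    static bool obj_cache_put(struct obj_cache *cache, void *obj)
    {
        if (cache->nr >= ARRAY_SIZE(cache->slots))
            return false;
        /* poison the idle object; false means KASAN detected a buggy
         * free (e.g. double-free), so the object must not be cached */
        if (!kasan_mempool_poison_object(obj))
            return false;
        cache->slots[cache->nr++] = obj;
        return true;
    }

    static void *obj_cache_get(struct obj_cache *cache)
    {
        void *obj;

        if (!cache->nr)
            return NULL;
        obj = cache->slots[--cache->nr];
        /* the hook this patch adopts: make the object usable again */
        kasan_mempool_unpoison_object(obj, cache->elem_size);
        return obj;
    }
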
Jeff Moyer 957eb4b56d kasan: rename kasan_slab_free_mempool to kasan_mempool_poison_object
JIRA: https://issues.redhat.com/browse/RHEL-64867

commit 280ec6ccb6422aa4a04f9ac4216ddcf055acc95d
Author: Andrey Konovalov <andreyknvl@gmail.com>
Date:   Tue Dec 19 23:28:45 2023 +0100

    kasan: rename kasan_slab_free_mempool to kasan_mempool_poison_object
    
    Patch series "kasan: save mempool stack traces".
    
    This series updates KASAN to save alloc and free stack traces for
    secondary-level allocators that cache and reuse allocations internally
    instead of giving them back to the underlying allocator (e.g.  mempool).
    
    As a part of this change, introduce and document a set of KASAN hooks:
    
    bool kasan_mempool_poison_pages(struct page *page, unsigned int order);
    void kasan_mempool_unpoison_pages(struct page *page, unsigned int order);
    bool kasan_mempool_poison_object(void *ptr);
    void kasan_mempool_unpoison_object(void *ptr, size_t size);
    
    and use them in the mempool code.
    
    Besides mempool, skbuff and io_uring also cache allocations and already
    use KASAN hooks to poison those.  Their code is updated to use the new
    mempool hooks.
    
    The new hooks save alloc and free stack traces (for normal kmalloc and
    slab objects; stack traces for large kmalloc objects and page_alloc are
    not supported by KASAN yet), improve the readability of the users' code,
    and also allow the users to prevent double-free and invalid-free bugs; see
    the patches for the details.
    
    
    This patch (of 21):
    
    Rename kasan_slab_free_mempool to kasan_mempool_poison_object.
    
    kasan_slab_free_mempool is a slightly confusing name: it is unclear
    whether this function poisons the object when it is freed into mempool or
    does something when the object is freed from mempool to the underlying
    allocator.
    
    The new name also aligns with other mempool-related KASAN hooks added in
    the following patches in this series.
    
    Link: https://lkml.kernel.org/r/cover.1703024586.git.andreyknvl@google.com
    Link: https://lkml.kernel.org/r/c5618685abb7cdbf9fb4897f565e7759f601da84.1703024586.git.andreyknvl@google.com
    Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
    Cc: Alexander Lobakin <alobakin@pm.me>
    Cc: Alexander Potapenko <glider@google.com>
    Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
    Cc: Breno Leitao <leitao@debian.org>
    Cc: Dmitry Vyukov <dvyukov@google.com>
    Cc: Evgenii Stepanov <eugenis@google.com>
    Cc: Marco Elver <elver@google.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Signed-off-by: Jeff Moyer <jmoyer@redhat.com>
2024-11-28 16:04:44 -05:00
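
The object hooks are shown in use above; for completeness, here is a
sketch of the pages pair from the list in this message, applied to a
hypothetical one-slot page cache (the pool itself is invented for
illustration; only the hook signatures come from the series text):

    #include <linux/kasan.h>
    #include <linux/mm.h>

    /* Hypothetical single-slot page cache using the pages variants. */
    static struct page *cached_page;

    static bool page_pool_put(struct page *page, unsigned int order)
    {
        /* a false return means KASAN flagged the free as buggy */
        if (cached_page || !kasan_mempool_poison_pages(page, order))
            return false;
        cached_page = page;
        return true;
    }

    static struct page *page_pool_get(unsigned int order)
    {
        struct page *page = cached_page;

        if (!page)
            return NULL;
        cached_page = NULL;
        kasan_mempool_unpoison_pages(page, order);
        return page;
    }
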
Jeff Moyer 29dd0a1af8 io_uring/rsrc: consolidate node caching
JIRA: https://issues.redhat.com/browse/RHEL-12076

commit 528407b1e0ea51260fff2cc8b669c632a65d7a09
Author: Pavel Begunkov <asml.silence@gmail.com>
Date:   Tue Apr 11 12:06:05 2023 +0100

    io_uring/rsrc: consolidate node caching
    
    We store one pre-allocated rsrc node in ->rsrc_backup_node; merge it
    into ->rsrc_node_cache.
    
    Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
    Link: https://lore.kernel.org/r/6d5410e51ccd29be7a716be045b51d6b371baef6.1681210788.git.asml.silence@gmail.com
    Signed-off-by: Jens Axboe <axboe@kernel.dk>

Signed-off-by: Jeff Moyer <jmoyer@redhat.com>
2023-11-02 15:31:32 -04:00
Jeff Moyer 989784208d io_uring/rsrc: add custom limit for node caching
JIRA: https://issues.redhat.com/browse/RHEL-12076

commit 69bbc6ade9d9d4e3c556cb83e77b6f3cd9ad3d18
Author: Pavel Begunkov <asml.silence@gmail.com>
Date:   Tue Apr 4 13:39:57 2023 +0100

    io_uring/rsrc: add custom limit for node caching
    
    The number of entries in the rsrc node cache is limited to 512, which
    still seems unnecessarily large. Add per-cache thresholds and set it
    to 32 for the rsrc node cache.
    
    Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
    Link: https://lore.kernel.org/r/d0cd538b944dac0bf878e276fc0199f21e6bccea.1680576071.git.asml.silence@gmail.com
    Signed-off-by: Jens Axboe <axboe@kernel.dk>

Signed-off-by: Jeff Moyer <jmoyer@redhat.com>
2023-11-02 15:31:30 -04:00
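
As a sketch of the idea (the exact upstream signature may differ), the
cache init gains a per-cache limit rather than relying on the single
global IO_ALLOC_CACHE_MAX:

    /* Mirrors io_uring's singly linked list node type. */
    struct io_wq_work_node {
        struct io_wq_work_node *next;
    };

    struct io_alloc_cache {
        struct io_wq_work_node list;
        unsigned nr_cached;
        unsigned max_cached;    /* new: this cache's own threshold */
    };

    static void io_alloc_cache_init(struct io_alloc_cache *cache,
                                    unsigned max_nr)
    {
        cache->list.next = NULL;
        cache->nr_cached = 0;
        cache->max_cached = max_nr;
    }

    /* at ring setup, the rsrc node cache then gets a small limit: */
    /* io_alloc_cache_init(&ctx->rsrc_node_cache, 32); */
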
Jeff Moyer 3417772578 io_uring: Add KASAN support for alloc_caches
JIRA: https://issues.redhat.com/browse/RHEL-12076

commit e1fe7ee885dc0712e982ee465d9f8b96254c30c1
Author: Breno Leitao <leitao@debian.org>
Date:   Thu Feb 23 08:43:53 2023 -0800

    io_uring: Add KASAN support for alloc_caches
    
    Add support for KASAN in the alloc_caches (apoll and netmsg_cache).
    Thus, if something touches an unused cached entry, it will raise a
    KASAN warning/exception.
    
    It poisons the object when the object is put into the cache, and
    unpoisons it when the object is taken out or freed.
    
    Signed-off-by: Breno Leitao <leitao@debian.org>
    Reviewed-by: Gabriel Krisman Bertazi <krisman@suse.de>
    Link: https://lore.kernel.org/r/20230223164353.2839177-2-leitao@debian.org
    Signed-off-by: Jens Axboe <axboe@kernel.dk>

Signed-off-by: Jeff Moyer <jmoyer@redhat.com>
2023-11-02 15:31:28 -04:00
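
The pattern the message describes, as a simplified sketch. The hook names
here, kasan_slab_free_mempool() and kasan_unpoison_range(), are the KASAN
interfaces of that era, consistent with the rename commit above; the
cache types are stand-ins:

    #include <linux/kasan.h>

    struct cache_entry {
        struct cache_entry *next;
    };

    struct alloc_cache {
        struct cache_entry *head;
        unsigned nr_cached;
        size_t elem_size;
    };

    static bool cache_put(struct alloc_cache *cache,
                          struct cache_entry *entry)
    {
        if (cache->nr_cached >= 512)
            return false;
        entry->next = cache->head;      /* link while still unpoisoned */
        cache->head = entry;
        cache->nr_cached++;
        kasan_slab_free_mempool(entry); /* poison the idle object */
        return true;
    }

    static struct cache_entry *cache_get(struct alloc_cache *cache)
    {
        struct cache_entry *entry = cache->head;

        if (!entry)
            return NULL;
        /* unpoison before reading entry->next out of the object */
        kasan_unpoison_range(entry, cache->elem_size);
        cache->head = entry->next;
        cache->nr_cached--;
        return entry;
    }
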
Jeff Moyer beecc18b5e io_uring: Move from hlist to io_wq_work_node
JIRA: https://issues.redhat.com/browse/RHEL-12076

commit efba1a9e653e107577a48157b5424878c46f2285
Author: Breno Leitao <leitao@debian.org>
Date:   Thu Feb 23 08:43:52 2023 -0800

    io_uring: Move from hlist to io_wq_work_node
    
    Having cache entries linked using the hlist format brings no benefit, and
    also requires an unnecessary extra pointer per cache entry.
    
    Use the io_wq_work_node singly linked list for the internal alloc
    caches (async_msghdr and async_poll) instead.
    
    This is required to be able to use KASAN on cache entries: with a
    singly linked list, we do not need to touch unused (and poisoned)
    cache entries when adding more entries to the list.
    
    Suggested-by: Pavel Begunkov <asml.silence@gmail.com>
    Signed-off-by: Breno Leitao <leitao@debian.org>
    Link: https://lore.kernel.org/r/20230223164353.2839177-2-leitao@debian.org
    Signed-off-by: Jens Axboe <axboe@kernel.dk>

Signed-off-by: Jeff Moyer <jmoyer@redhat.com>
2023-11-02 15:31:28 -04:00
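
Why the list format matters here, sketched: pushing onto an hlist has to
update the old head's pprev back-pointer, i.e. write into an entry that
may already be cached and poisoned, while a singly linked push writes
only the new node and the list head. (The hlist body below mirrors the
kernel's hlist_add_head(); the rest is simplified.)

    #include <linux/list.h>

    /* hlist push: the old head's pprev must be updated, touching memory
     * inside a previously cached (and KASAN-poisoned) entry. The hlist
     * node also carries two pointers (next + pprev) instead of one. */
    static void hlist_push(struct hlist_head *h, struct hlist_node *n)
    {
        n->next = h->first;
        if (h->first)
            h->first->pprev = &n->next; /* writes into the old head */
        h->first = n;
        n->pprev = &h->first;
    }

    /* singly linked push: only the new node and the head are written,
     * so poisoned entries already in the cache are never touched. */
    struct work_node {
        struct work_node *next;
    };

    static void stack_push(struct work_node **head, struct work_node *n)
    {
        n->next = *head;
        *head = n;
    }
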
Jeff Moyer 5d7db98b4d io_uring: fix poll/netmsg alloc caches
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2068237

commit fd30d1cdcc4ff405fc54765edf2e11b03f2ed4f3
Author: Pavel Begunkov <asml.silence@gmail.com>
Date:   Thu Mar 30 06:52:38 2023 -0600

    io_uring: fix poll/netmsg alloc caches
    
    We increase cache->nr_cached when we free into the cache but don't
    decrease it when we take from it, so after a while we end up with an
    empty cache whose cache->nr_cached is larger than IO_ALLOC_CACHE_MAX.
    That makes io_alloc_cache_put() fail and effectively disables caching.
    
    Fixes: 9b797a37c4bd8 ("io_uring: add abstraction around apoll cache")
    Cc: stable@vger.kernel.org
    Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
    Signed-off-by: Jens Axboe <axboe@kernel.dk>

Signed-off-by: Jeff Moyer <jmoyer@redhat.com>
2023-05-05 15:26:31 -04:00
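
Sketched, the fix is one line of bookkeeping on the get side, mirroring
the increment done on put (simplified code reusing the stand-in cache
types from the KASAN sketch above, not the verbatim upstream diff):

    static struct cache_entry *cache_get(struct alloc_cache *cache)
    {
        struct cache_entry *entry = cache->head;

        if (!entry)
            return NULL;
        cache->head = entry->next;
        cache->nr_cached--;  /* the missing half of the accounting:
                              * without this, nr_cached only ever grows,
                              * so an empty cache can sit above
                              * IO_ALLOC_CACHE_MAX and every subsequent
                              * put fails, disabling the cache */
        return entry;
    }
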
Jeff Moyer 271ac30790 io_uring: impose max limit on apoll cache
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2068237

commit 9731bc9855dc169f27433fef3c4d0ff3496c512d
Author: Jens Axboe <axboe@kernel.dk>
Date:   Thu Jul 7 14:20:54 2022 -0600

    io_uring: impose max limit on apoll cache
    
    Caches like this tend to grow to the peak size, and then never get any
    smaller. Impose a max limit on the size, to prevent it from growing too
    big.
    
    A somewhat randomly chosen 512 is the max size we'll allow the cache
    to get. If a batch of frees comes in and would bring it over that, we
    simply start kfree'ing the surplus.
    
    Signed-off-by: Jens Axboe <axboe@kernel.dk>

Signed-off-by: Jeff Moyer <jmoyer@redhat.com>
2023-04-29 07:28:02 -04:00
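
The fallback the last paragraph describes, sketched (reusing the
cache_put() shape from the simplified sketches above; kfree() stands in
for whatever free routine the real caller uses):

    #include <linux/slab.h>

    #define IO_ALLOC_CACHE_MAX 512

    /* free path: recycle into the cache up to the cap, then genuinely
     * free whatever the cache refuses */
    static void recycle_or_free(struct alloc_cache *cache,
                                struct cache_entry *entry)
    {
        if (!cache_put(cache, entry))   /* fails once the cap is hit */
            kfree(entry);               /* surplus beyond 512 is freed */
    }
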
Jeff Moyer 1428738a54 io_uring: add abstraction around apoll cache
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2068237

commit 9b797a37c4bd83b03cedcfbd15852b836f5e562c
Author: Jens Axboe <axboe@kernel.dk>
Date:   Thu Jul 7 14:16:20 2022 -0600

    io_uring: add abstraction around apoll cache
    
    In preparation for adding limits, and one more user, abstract out the
    core bits of the allocation+free cache.
    
    Signed-off-by: Jens Axboe <axboe@kernel.dk>

Signed-off-by: Jeff Moyer <jmoyer@redhat.com>
2023-04-29 07:27:02 -04:00
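
For reference, a sketch of roughly what this initial abstraction looks
like: a bare hlist plus get/put helpers, with each user embedding a
struct io_cache_entry and mapping back via container_of(). (Simplified;
the later commits above replace the hlist and then the embedded entry
entirely.)

    #include <linux/list.h>

    struct io_cache_entry {
        struct hlist_node node;
    };

    struct io_alloc_cache {
        struct hlist_head list;
    };

    static void io_alloc_cache_put(struct io_alloc_cache *cache,
                                   struct io_cache_entry *entry)
    {
        hlist_add_head(&entry->node, &cache->list);
    }

    static struct io_cache_entry *io_alloc_cache_get(struct io_alloc_cache *cache)
    {
        if (!hlist_empty(&cache->list)) {
            struct hlist_node *node = cache->list.first;

            hlist_del(node);
            return container_of(node, struct io_cache_entry, node);
        }
        return NULL;
    }
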