Replaced all instances of __builtin_expect with __glibc_unlikely
within malloc.c and malloc-debug.c. This improves the portability
of glibc by avoiding direct calls to GNU C built-in functions.
Since every call to __builtin_expect expected a result of 0,
__glibc_likely was never needed as a replacement. Multiple calls
to __builtin_expect within a single if statement have been
replaced with one call to __glibc_unlikely wrapping the entire
condition.
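For illustration, a minimal sketch of the transformation (the
condition here is made up, not taken from malloc.c):

  /* Before: two separate hints inside one if statement.  */
  if (__builtin_expect (size < MINSIZE, 0)
      && __builtin_expect (mem == NULL, 0))
    return;

  /* After: a single hint wrapping the whole condition.  */
  if (__glibc_unlikely (size < MINSIZE && mem == NULL))
    return;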
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
Renamed aligned_OK to misaligned_mem to mirror misaligned_chunk,
and inverted any assertions using the macro. Made misaligned_chunk
call misaligned_mem on the result of chunk2mem rather than
bitmasking the chunk pointer with the malloc alignment itself,
since misaligned_chunk is meant to test the user data rather than
the chunk header, and the compiler will optimise away the addition
so the ternary operator is no longer needed.
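A hedged sketch of the resulting macros (the exact definitions are
assumptions based on the description above):

  /* Nonzero if user memory M is not MALLOC_ALIGNMENT-aligned.  */
  #define misaligned_mem(m)   ((uintptr_t) (m) & MALLOC_ALIGN_MASK)
  /* Test the user data of chunk P rather than its header.  */
  #define misaligned_chunk(p) misaligned_mem (chunk2mem (p))

  /* Assertions are inverted accordingly, e.g.
     assert (aligned_OK (mem));  becomes  assert (!misaligned_mem (mem));  */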
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
Introduce tests-link-with-libpthread to list tests that
require linking with libpthread, and use that to generate
dependencies on $(shared-thread-library) for all multi-threaded tests.
Fixes build failures of commit cde5caa4bb
("malloc: add testing for large tcache support") on Hurd.
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
Remove unused 'address' parameter from _mid_memalign and callers.
Fix off-by-one alignment calculation in __libc_pvalloc.
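As a minimal illustration of the page rounding pvalloc performs (a
sketch of the idea only, not the actual fix):

  #include <stddef.h>

  /* pvalloc rounds the request up to a whole number of pages.  The
     off-by-one risk lives in this computation, and the addition must
     also be guarded against overflow for very large requests.  */
  static inline size_t
  round_up_to_page (size_t bytes, size_t pagesize)
  {
    return (bytes + pagesize - 1) & ~(pagesize - 1);
  }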
Reviewed-by: DJ Delorie <dj@redhat.com>
This patch adds large tcache support tests by re-executing malloc
tests using the tunable glibc.malloc.tcache_max=1048576.
Test names are suffixed with "largetcache".
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
The existing tcache implementation in glibc focuses on caching
smaller allocations, limiting the cached chunk size to 1KB.
This patch changes the tcache implementation to allow caching
chunks of any size. It adds extra bins (linked lists) which store
chunks within different ranges of allocation sizes. Bin selection
is done in powers of 2, and chunks are inserted in increasing size
order within each bin. The last bin holds all remaining sizes.
By default this patch preserves the existing behaviour, limiting
caches to 1KB chunks, but it now allows the maximum size of cached
chunks to be increased with the tunable glibc.malloc.tcache_max.
__libc_free also now verifies whether a chunk was mmapped, in
which case it is not added to the tcache.
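A hedged sketch of power-of-2 bin selection for the large bins (the
bin count and boundaries are assumptions, not the actual code):

  #include <stddef.h>

  #define LARGE_TCACHE_BINS 8  /* hypothetical bin count */

  /* Map a chunk size >= 1KB to a large bin: [1KB, 2KB) -> 0,
     [2KB, 4KB) -> 1, and so on; the last bin collects everything
     larger.  */
  static inline size_t
  large_tcache_index (size_t chunk_size)
  {
    size_t idx = 0;
    for (size_t bound = 2048;
         chunk_size >= bound && idx < LARGE_TCACHE_BINS - 1;
         bound <<= 1)
      idx++;
    return idx;
  }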
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
Currently tcache requires 2 global variable accesses to determine
whether a block can be added to the tcache. Change the counts array
to 'num_slots' to indicate the number of entries that could be added.
If 'num_slots' reaches zero, no more blocks can be added. If the entries
pointer is not NULL, at least one block is available for allocation.
Now each tcache bin can support a different maximum number of
entries, and they can be individually switched on or off (a
zero-initialized num_slots and entries pointer mean the tcache bin
is not available for free or malloc).
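A minimal sketch of the resulting single-access checks (field names
follow the message; the surrounding structure is assumed):

  /* On free: one load decides whether the bin has room.  */
  if (tcache->num_slots[tc_idx] != 0)
    {
      tcache->num_slots[tc_idx]--;
      tcache_put (chunk, tc_idx);
    }

  /* On malloc: one load decides whether a block is available.  */
  if (tcache->entries[tc_idx] != NULL)
    return tcache_get (tc_idx);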
Reviewed-by: DJ Delorie <dj@redhat.com>
Improve performance of __libc_calloc by splitting it into 2 parts:
first handle the tcache fast path, then do the rest in a separate
tail-called function. This results in significant performance gains
since __libc_calloc doesn't need to set up a frame.
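A minimal sketch of the split (the helper names are hypothetical,
not glibc's):

  #include <stddef.h>
  #include <string.h>

  void *tcache_fastpath (size_t bytes);   /* returns NULL on a miss */
  void *calloc_slowpath (size_t n, size_t elem_size);

  void *
  my_calloc (size_t n, size_t elem_size)
  {
    size_t bytes;
    if (__builtin_mul_overflow (n, elem_size, &bytes))
      return NULL;
    void *mem = tcache_fastpath (bytes);
    if (mem != NULL)
      return memset (mem, 0, bytes);
    /* Tail call: compiles to a jump, so my_calloc itself never
       needs a stack frame.  */
    return calloc_slowpath (n, elem_size);
  }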
On Neoverse V2, bench-calloc-simple improves by 5.0% overall.
Bench-calloc-thread 1 improves by 24%.
Reviewed-by: DJ Delorie <dj@redhat.com>
Move malloc initialization to __libc_early_init. Use a hidden __ptmalloc_init
for initialization and a weak call to avoid pulling in the system malloc in a
static binary. All previous initialization checks can now be removed.
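A sketch of the weak-call pattern (placement and guard are
illustrative):

  /* A weak reference resolves to NULL in a static binary that never
     pulls in the system malloc, so initialization is skipped.  */
  extern void __ptmalloc_init (void) __attribute__ ((weak));

  static void
  early_init_sketch (void)
  {
    if (__ptmalloc_init != NULL)
      __ptmalloc_init ();
  }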
Reviewed-by: Florian Weimer <fweimer@redhat.com>
The previous double-free detection did not account for an attacker
using a terminating null byte that overflows from the previous
chunk to change the size of a chunk, and with it the tcache bin it
is sorted into. The check in 'tcache_double_free_verify' would then
pass even though it is a double free.
Solution:
Let 'tcache_double_free_verify' iterate over all tcache entries to
detect double frees.
This patch only protects against buffer overflows of one byte, but
I would argue that off-by-one errors are the most common errors
made.
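A simplified sketch of the strengthened check (types follow
malloc.c conventions; the real code also has to reveal the
safe-linking-encrypted 'next' pointers):

  /* Walk every bin, not just the one matching the chunk's size.  */
  for (size_t i = 0; i < TCACHE_MAX_BINS; ++i)
    for (tcache_entry *p = tcache->entries[i]; p != NULL; p = p->next)
      if (p == e)
        malloc_printerr ("free(): double free detected in tcache");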
Alternatives Considered:
Store the size of a memory chunk in big-endian order; a one-byte
overflow would then hit the most significant byte, which is zero
anyway for the small sizes tcache handles, so the size would not
get overwritten.
Move the tcache_key before the actual memory chunk so that it does
not have to be checked at all; this would work better in general,
but it would also increase memory usage.
Signed-off-by: David Lau <david.lau@fau.de>
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
Inline tcache_try_malloc into calloc since it is the only caller. Also fix
usize2tidx and use it in __libc_malloc, __libc_calloc and _mid_memalign.
The result is simpler, cleaner code.
Reviewed-by: DJ Delorie <dj@redhat.com>
This patch moves all calls to tcache_init out of the tcache hot
paths. There is no reason to initialize tcaches in the hot path,
and we must be able to check tcache != NULL in any case because of
the tcache_thread_shutdown function, so moving tcache_init off the
hot path can only be beneficial.
The patch also removes the initialization of tcaches within the
__libc_free call. It only makes sense to initialize the tcache for
a thread after it calls one of the allocation functions. The patch
also removes the save/restore of errno from the tcache_init code,
as it is no longer needed.
Use tail calls to avoid the overhead of a frame on the free fast
path. Move tcache initialization to _int_free_chunk(). Add
malloc_printerr_tail(), which can be tail-called without forcing a
frame the way no-return functions do. Change
tcache_double_free_verify() to retry via __libc_free() after
clearing the key.
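A hedged sketch of the tail-callable error helper (glibc's version
differs in its details):

  /* Deliberately not marked noreturn: a caller can then write
     `return malloc_printerr_tail (msg);`, which compiles to a jump,
     so the caller never sets up a frame for the error path.  */
  static void *
  malloc_printerr_tail (const char *str)
  {
    malloc_printerr (str);   /* does not return */
    return NULL;
  }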
Reviewed-by: Florian Weimer <fweimer@redhat.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Inline tcache_free since it's only used by __libc_free. Add __glibc_likely
for the tcache checks.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The checks on size can be merged and use __builtin_add_overflow. Since
tcache only handles small sizes (and rejects sizes < MINSIZE), delay this
check until after tcache.
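A minimal sketch of merging the checks with one overflow-checked
add (the constant is illustrative):

  size_t nb;
  /* A single __builtin_add_overflow subsumes the separate
     wraparound and "request too large" tests.  */
  if (__glibc_unlikely (__builtin_add_overflow (bytes, MINSIZE - 1, &nb)))
    {
      __set_errno (ENOMEM);
      return NULL;
    }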
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Currently __libc_free checks for a freed mmap chunk in the fast path.
Also errno is always saved and restored to preserve it. Since mmap chunks
are larger than the largest tcache chunk, it is safe to delay this and
handle tcache, smallbin and medium bin blocks first. Move saving of errno
to cases that actually need it. Remove a safety check that fails on mmap
chunks and a check that mmap chunks cannot be added to tcache.
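A hedged sketch of the reordered free path (tcache_free_fast is a
hypothetical helper; the other names follow malloc.c):

  void
  free_sketch (void *mem)
  {
    if (mem == NULL)
      return;
    mchunkptr p = mem2chunk (mem);
    /* mmapped chunks are larger than the largest tcache chunk, so
       the tcache path needs no mmap check at all.  */
    if (tcache_free_fast (p))
      return;
    if (chunk_is_mmapped (p))
      munmap_chunk (p);   /* only this path must save/restore errno */
    else
      _int_free_chunk (arena_for_chunk (p), p, chunksize (p), 0);
  }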
Performance of bench-malloc-thread improves by 9.2% for 1 thread and
6.9% for 32 threads on Neoverse V2.
Reviewed-by: DJ Delorie <dj@redhat.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Improve performance of __libc_malloc by splitting it into 2 parts:
first handle the tcache fast path, then do the rest in a separate
tail-called function. This results in significant performance gains
since __libc_malloc doesn't need to set up a frame, and tcache
initialization and the setting of errno are delayed until later.
On Neoverse V2, bench-malloc-simple improves by 6.7% overall (up to 8.5% for
ST case) and bench-malloc-thread improves by 20.3% for 1 thread and 14.4% for
32 threads.
Reviewed-by: DJ Delorie <dj@redhat.com>
Use __always_inline for small helper functions that are critical for
performance. This ensures inlining always happens when expected.
Performance of bench-malloc-simple improves by 0.6% on average on
Neoverse V2.
Reviewed-by: DJ Delorie <dj@redhat.com>
When splitting a chunk, release the tail part by calling
_int_free_chunk. This avoids inserting random blocks into tcache
that were never requested by the user; fragmentation will be worse
if they are never used again. Note that if the tail is fairly
small, we could avoid splitting it at all.
Also remove an oddly placed initialization of tcache in
__libc_realloc.
Reviewed-by: DJ Delorie <dj@redhat.com>
Remove the alignment rounding up from csize2tidx - this makes no sense
since the input should be a chunk size. Removing it enables further
optimizations, for example chunksize_nomask can be safely used and
invalid sizes < MINSIZE are not mapped to a valid tidx.
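A sketch of the change as described (the exact macro text is an
assumption):

  /* Before: rounding up mapped unaligned and too-small inputs to a
     valid index.  */
  #define csize2tidx(x) (((x) - MINSIZE + MALLOC_ALIGNMENT - 1) / MALLOC_ALIGNMENT)
  /* After: the input is an already-aligned chunk size, so sizes
     < MINSIZE wrap around and no longer map to a valid tidx.  */
  #define csize2tidx(x) (((x) - MINSIZE) / MALLOC_ALIGNMENT)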
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Change heap_max_size() to improve performance of arena_for_chunk().
Instead of a complex calculation, use a simple mask operation to get
the arena base pointer. HEAP_MAX_SIZE should be larger than the huge
page size, otherwise heaps will not use huge pages.
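A sketch of the masking idea (this mirrors the long-standing
heap_for_ptr macro; the exact form here is an assumption):

  /* Heaps are HEAP_MAX_SIZE-aligned, so masking off the low bits of
     any chunk pointer yields the containing heap's base.  */
  #define heap_for_ptr(ptr) \
    ((heap_info *) ((uintptr_t) (ptr) & ~(HEAP_MAX_SIZE - 1)))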
On AArch64 this removes 6 instructions from arena_for_chunk(), and
bench-malloc-thread improves by 1.1% - 1.8%.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
If an attacker overwrites the bk_nextsize link in the first chunk of a
largebin that later has a smaller chunk inserted into it, malloc will
write a heap pointer into an attacker-controlled address [0].
This patch adds an integrity check to mitigate this attack.
[0]: https://github.com/shellphish/how2heap/blob/master/glibc_2.39/large_bin_attack.c
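A hedged sketch of the kind of integrity check added (the exact
condition and error message are approximations):

  /* Before linking a new, smaller chunk through bk_nextsize, verify
     that the nextsize list is still consistently doubly linked.  */
  if (__glibc_unlikely (fwd->bk_nextsize->fd_nextsize != fwd))
    malloc_printerr ("malloc(): largebin double linked list corrupted (nextsize)");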
Signed-off-by: Ben Kallus <benjamin.p.kallus.gr@dartmouth.edu>
Reviewed-by: DJ Delorie <dj@redhat.com>
By overwriting a forward link in a fastbin chunk that is subsequently
moved into the tcache, it's possible to get malloc to return an
arbitrary address [0].
When a chunk is fetched from a fastbin, its size is checked against the
expected chunk size for that fastbin (see malloc.c:3991). This patch
adds a similar check for chunks being moved from a fastbin to tcache,
which renders obsolete the exploitation technique described above.
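A hedged sketch of the added check (the condition approximates the
description; names follow malloc.c conventions):

  /* While moving chunks from a fastbin into tcache, reject any
     chunk whose size does not match the fastbin it came from.  */
  if (__glibc_unlikely (fastbin_index (chunksize (tc_victim)) != idx))
    malloc_printerr ("malloc(): chunk size mismatch in fastbin");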
Now updated to use __glibc_unlikely instead of __builtin_expect, as
requested.
[0]: https://github.com/shellphish/how2heap/blob/master/glibc_2.39/fastbin_reverse_into_tcache.c
Signed-off-by: Ben Kallus <benjamin.p.kallus.gr@dartmouth.edu>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The functions serve very similar purposes. The advantage of
__rtld_libc_freeres is that it is located within ld.so, so it is
more natural to poke at link map internals there.
This slightly regresses cleanup capabilities for statically linked
binaries. If that becomes a problem, we should start calling
__rtld_libc_freeres from __libc_freeres (perhaps after renaming it).
Similar to a9944a52c9 and
f9493a15ea, we need to hide calloc use from
the compiler to accommodate GCC's r15-6566-g804e9d55d9e54c change.
First, include tst-malloc-aux.h, and then also use `volatile`
variables for the sizes.
The test passes without the tst-malloc-aux.h change, but IMO we
want it there for consistency and to avoid future (possibly silent)
problems.
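A minimal sketch of the volatile-size technique (the values are
illustrative):

  /* volatile sizes keep the compiler from proving the allocation
     size, so it cannot remove the calloc call as dead code.  */
  volatile size_t n = 4;
  volatile size_t sz = 16;
  void *p = calloc (n, sz);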
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
I've updated copyright dates in glibc for 2025. This is the patch for
the changes not generated by scripts/update-copyrights and subsequent
build / regeneration of generated files.
Add __attribute_optimization_barrier__ to disable inlining and cloning on a
function. For Clang, expand it to
__attribute__ ((optnone))
Otherwise, expand it to
__attribute__ ((noinline, noclone))
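A sketch of the macro as described (the exact guard is an
assumption):

  #ifdef __clang__
  # define __attribute_optimization_barrier__ \
      __attribute__ ((optnone))
  #else
  # define __attribute_optimization_barrier__ \
      __attribute__ ((noinline, noclone))
  #endif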
Co-Authored-By: H.J. Lu <hjl.tools@gmail.com>
Reviewed-by: Sam James <sam@gentoo.org>
Since -1 isn't a power of two, the compiler may reject it. Hide
memalign from Clang 19, which issues errors:
tst-memalign.c:86:31: error: requested alignment is not a power of 2 [-Werror,-Wnon-power-of-two-alignment]
86 | p = memalign (-1, pagesize);
| ^~
tst-memalign.c:86:31: error: requested alignment must be 4294967296 bytes or smaller; maximum alignment assumed [-Werror,-Wbuiltin-assume-aligned-alignment]
86 | p = memalign (-1, pagesize);
| ^~
Update tst-malloc-aux.h to hide all malloc functions and include it
in all malloc tests to prevent the compiler from optimizing out any
malloc functions.
Tested with Clang 19.1.5 and GCC 15 20241206 for BZ #32366.
Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
Reviewed-by: Sam James <sam@gentoo.org>
This commit adds tcache support in calloc(), which can greatly
improve the performance of small-size allocations, especially in
multi-threaded scenarios. tcache_available() and tcache_try_malloc()
are split out as helper functions for better code reuse.
Also fix a tst-safe-linking failure after enabling tcache.
Previously, calloc() was used as a way to bypass tcache in memory
allocation and trigger the safe-linking check in the fastbins path.
With tcache enabled, extra workarounds are needed to bypass tcache.
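A hedged sketch of the split-out helpers (the bodies are assumed
from the description):

  /* True if the tcache bin for tc_idx holds a cached chunk.  */
  static __always_inline bool
  tcache_available (size_t tc_idx)
  {
    return tc_idx < mp_.tcache_bins
           && tcache != NULL
           && tcache->counts[tc_idx] > 0;
  }

  /* Try tcache first; return NULL so the caller can fall back.  */
  static void *
  tcache_try_malloc (size_t bytes)
  {
    size_t tc_idx = usize2tidx (bytes);
    return tcache_available (tc_idx) ? tcache_get (tc_idx) : NULL;
  }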
Result of bench-calloc-thread benchmark
Test Platform: Xeon-8380
Ratio: New / Original time_per_iteration (Lower is Better)
Threads# | Ratio
-----------|------
1 thread | 0.656
4 threads | 0.470
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
GCC 15 introduces allocation dead code removal (DCE) for PR117370 in
r15-5255-g7828dc070510f8. This breaks various glibc tests which want
to assert various properties of the allocator without doing anything
obviously useful with the allocated memory.
Alexander Monakov rightly pointed out that we can and should do better
than passing -fno-malloc-dce to paper over the problem. Not least because
GCC 14 already does such DCE where there's no testing of malloc's return
value against NULL, and LLVM has such optimisations too.
Handle this by providing malloc (and friends) wrappers with a
volatile function pointer to obscure from the compiler that we're
calling malloc (et al.).
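A minimal sketch of the wrapper technique (the names used in
tst-malloc-aux.h may differ):

  #include <stdlib.h>

  /* The volatile pointer forces a genuine indirect call that the
     compiler cannot prove is malloc, so allocation DCE no longer
     applies.  */
  static void *(*volatile malloc_fn) (size_t) = malloc;
  #define malloc(size) malloc_fn (size)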
Reviewed-by: Paul Eggert <eggert@cs.ucla.edu>
Add calloc-clear-memory.h to clear memory of size up to 36 bytes
(72 bytes on 64-bit targets) for calloc. Use repeated stores with 1
branch, instead of up to 3 branches. On x86-64, it is faster than
memset since calling memset needs 1 indirect branch, 1 broadcast,
and up to 4 branches.
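A hedged sketch of the technique, not glibc's exact code: clear a
word-multiple size (assumed here to be 24..72 bytes on 64-bit) with
potentially overlapping stores from both ends, so only one branch
is needed:

  #include <stddef.h>
  #include <stdint.h>

  static inline void
  clear_small (uint64_t *d, size_t bytes)
  {
    size_t words = bytes / sizeof (uint64_t);   /* 3 <= words <= 9 */
    uint64_t *end = d + words;
    d[0] = 0; d[1] = 0; d[2] = 0;
    end[-3] = 0; end[-2] = 0; end[-1] = 0;      /* overlaps if words < 6 */
    if (words > 6)
      {
        d[3] = 0; d[4] = 0; d[5] = 0;
      }
  }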
Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
Large chunks get added to the unsorted bin since sorting them
takes time; for small chunks the benefit of adding them to the
unsorted bin is nonexistent, and doing so actually hurts
performance. Splitting and malloc_consolidate still add small
chunks to unsorted, but we can hint the compiler that this is a
relatively rare occurrence.
Benchmarking shows this to be consistently good.
Authored-by: k4lizen <k4lizen@proton.me>
Signed-off-by: Aleksa Siriški <sir@tmina.org>
Tcache is an important optimization to accelerate memory free();
things within this code path should be kept as simple as possible.
This commit removes the function call when free() takes the tcache
code path by inlining _int_free().
Result of bench-malloc-thread benchmark
Test Platform: Xeon-8380
Ratio: New / Original time_per_iteration (Lower is Better)
Threads# | Ratio
-----------|------
1 thread | 0.879
4 threads | 0.874
The performance data shows it can improve the bench-malloc-thread
benchmark by ~12% in both single-thread and multi-thread scenarios.
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
Linux 6.11 has getrandom() in vDSO. It operates on a thread-local opaque
state allocated with mmap using flags specified by the vDSO.
Multiple states are allocated at once, as many as fit into a page, and
these are held in an array of available states to be doled out to each
thread upon first use, and recycled when a thread terminates. As these
states run low, more are allocated.
To make this procedure async-signal-safe, a simple guard is used in the
LSB of the opaque state address, falling back to the syscall if there's
reentrancy contention.
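A heavily simplified sketch of the LSB guard (the field and
function names here are hypothetical):

  /* Bit 0 of the stored state address marks it as in use.  A signal
     handler re-entering getrandom sees the bit set and falls back to
     the syscall, keeping the fast path async-signal-safe.  */
  uintptr_t s = self->getrandom_state;
  if (s & 1)
    return getrandom_syscall (buf, len, flags);
  self->getrandom_state = s | 1;                  /* mark in use */
  ssize_t r = vdso_getrandom ((void *) s, buf, len, flags);
  self->getrandom_state = s;                      /* release */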
Also, _Fork() is handled by blocking signals on opaque state allocation
(so _Fork() always sees a consistent state even if it interrupts a
getrandom() call) and by iterating over the thread stack cache on
reclaim_stack. Each opaque state will be in the free states list
(grnd_alloc.states) or allocated to a running thread.
The cancellation is handled by always using GRND_NONBLOCK flags while
calling the vDSO, and falling back to the cancellable syscall if the
kernel returns EAGAIN (would block). Since getrandom is not defined by
POSIX and cancellation is supported as an extension, the cancellation is
handled as 'may occur' instead of 'shall occur' [1], meaning that if the
vDSO does not block (the expected behavior) getrandom will not act as a
cancellation entrypoint. This avoids a pthread_testcancel call on the
fast path (different from 'shall occur' functions, like sem_wait()).
It is currently enabled for x86_64, which is available in Linux 6.11,
and aarch64, powerpc32, powerpc64, loongarch64, and s390x, which are
available in Linux 6.12.
Link: https://pubs.opengroup.org/onlinepubs/9799919799/nframe.html [1]
Co-developed-by: Jason A. Donenfeld <Jason@zx2c4.com>
Tested-by: Jason A. Donenfeld <Jason@zx2c4.com> # x86_64
Tested-by: Adhemerval Zanella <adhemerval.zanella@linaro.org> # x86_64, aarch64
Tested-by: Xi Ruoyao <xry111@xry111.site> # x86_64, aarch64, loongarch64
Tested-by: Stefan Liebler <stli@linux.ibm.com> # s390x
Improve aligned_alloc/calloc/malloc test coverage by adding
multi-threaded tests with random memory allocations and with/without
cross-thread memory deallocations.
Perform a number of memory allocation calls with random sizes limited
to 0xffff.
Use the existing DSO ('malloc/tst-aligned_alloc-lib.c') to randomize
allocator selection.
The multi-threaded allocation/deallocation is staged as described below:
- Stage 1: Half of the threads will be allocating memory and the
other half will be waiting for them to finish the allocation.
- Stage 2: Half of the threads will be allocating memory and the
other half will be deallocating memory.
- Stage 3: Half of the threads will be deallocating memory and the
other half will be waiting for them to finish.
Add 'malloc/tst-aligned-alloc-random-thread.c' where each thread will
deallocate only the memory that was previously allocated by itself.
Add 'malloc/tst-aligned-alloc-random-thread-cross.c' where each thread
will deallocate memory that was previously allocated by another thread.
The intention is to be able to utilize existing malloc testing to ensure
that similar allocation APIs are also exposed to the same rigors.
Reviewed-by: Arjun Shankar <arjun@redhat.com>
Make sure the DSO used by aligned_alloc/calloc/malloc tests does not get
a global lock on multithreaded tests.
Reviewed-by: Arjun Shankar <arjun@redhat.com>