Commit Graph

503 Commits

Wilco Dijkstra 1061b75412 malloc: Cleanup tcache_init()
Cleanup tcache_init() by using the new __libc_malloc2 interface.

Reviewed-by: Cupertino Miranda <cupertino.miranda@oracle.com>
2025-06-26 15:08:17 +00:00
William Hunt 9a5a7613ac malloc: replace instances of __builtin_expect with __glibc_unlikely
Replaced all instances of __builtin_expect to __glibc_unlikely
within malloc.c and malloc-debug.c.  This improves the portability
of glibc by avoiding calls to GNU C built-in functions.  Since all
the expected results from calls to __builtin_expect were 0,
__glibc_likely was never used as a replacement.  Multiple
calls to __builtin_expect within a single if statement have
been replaced with one call to __glibc_unlikely, which wraps
every condition.
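
A minimal before/after sketch of the transformation (the conditions and
callees here are illustrative, not a specific hunk from the patch):

  /* Before: GNU builtin, with expected value 0.  */
  if (__builtin_expect (chunk_is_mmapped (p), 0))
    munmap_chunk (p);

  /* After: the glibc macro, which expands to __builtin_expect ((cond), 0).  */
  if (__glibc_unlikely (chunk_is_mmapped (p)))
    munmap_chunk (p);

  /* Several expectations merged into one call wrapping every condition.  */
  if (__glibc_unlikely (chunk_is_mmapped (p) || misaligned_chunk (p)))
    malloc_printerr ("free(): invalid pointer");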

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
Reviewed-by: Wilco Dijkstra  <Wilco.Dijkstra@arm.com>
2025-06-26 15:07:53 +00:00
William Hunt d1ad959b00 malloc: refactored aligned_OK and misaligned_chunk
Renamed aligned_OK to misaligned_mem to match misaligned_chunk,
and inverted any assertions using the macro.  Made misaligned_chunk
call misaligned_mem on the result of chunk2mem rather than
bitmasking with the malloc alignment itself, since misaligned_chunk
is meant to test the chunk's data area rather than its header; the
compiler will optimise the addition, so the ternary operator is not
needed.
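
A sketch of the refactored macros (MALLOC_ALIGN_MASK and chunk2mem are
existing malloc.c internals; the exact expansion here is an assumption):

  /* True if a user-visible memory address is misaligned.  */
  #define misaligned_mem(m)    (((uintptr_t) (m)) & MALLOC_ALIGN_MASK)

  /* Test the chunk's data area, not its header: convert first.  */
  #define misaligned_chunk(p)  misaligned_mem (chunk2mem (p))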

Reviewed-by: Wilco Dijkstra  <Wilco.Dijkstra@arm.com>
2025-06-26 14:57:53 +00:00
Wilco Dijkstra ba32fd7d04 malloc: Cleanup _mid_memalign
Remove unused 'address' parameter from _mid_memalign and callers.
Fix off-by-one alignment calculation in __libc_pvalloc.

Reviewed-by: DJ Delorie <dj@redhat.com>
2025-06-18 13:37:00 +00:00
Cupertino Miranda cbfd798810 malloc: add tcache support for large chunk caching
The existing tcache implementation in glibc focuses on caching
smaller allocations, limiting the size of cached allocations to
1KB.

This patch changes the tcache implementation to allow caching
allocations of any chunk size.  It adds extra bins (linked lists)
which store chunks in different ranges of allocation sizes.  Bin
selection is done by powers of 2, and chunks are inserted in
increasing size order within each bin.  The last bin contains all
remaining allocation sizes.

Although by default the patch preserves the same behavior, limiting
caching to 1KB chunks, it now allows the maximum size of cached
chunks to be raised with the tunable glibc.malloc.tcache_max.

It also now checks whether a chunk was mmapped, in which case
__libc_free will not add it to the tcache.
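
A toy sketch of the power-of-2 bin selection described above (not the
patch itself; the base-index handling is an assumption); the cache
limit itself is raised at run time, e.g. with
GLIBC_TUNABLES=glibc.malloc.tcache_max=1048576:

  #include <stddef.h>

  /* Pick a bin by the power of two bounding the chunk size (csize > 0);
     the final bin catches all remaining sizes.  */
  static unsigned
  large_tcache_bin (size_t csize, unsigned nbins)
  {
    unsigned idx = 63 - __builtin_clzll (csize);  /* floor(log2(csize)) */
    return idx < nbins - 1 ? idx : nbins - 1;
  }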

Reviewed-by: Wilco Dijkstra  <Wilco.Dijkstra@arm.com>
2025-06-16 12:05:22 +00:00
Wilco Dijkstra 7e10e30e64 malloc: Count tcache entries downwards
Currently tcache requires 2 global variable accesses to determine
whether a block can be added to the tcache.  Change the counts array
to 'num_slots' to indicate the number of entries that could be added.
If 'num_slots' reaches zero, no more blocks can be added.  If the entries
pointer is not NULL, at least one block is available for allocation.

Now each tcache bin can support a different maximum number of entries,
and they can be individually switched on or off (a zero-initialized
num_slots and entries pointer means the tcache bin is unavailable to
free and malloc).
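
A sketch of the resulting free-path test (the surrounding structure is
assumed; num_slots and entries are the fields described above):

  /* A single field read decides admission: num_slots counts remaining
     capacity downwards, so no global bounds variable is consulted.  */
  if (tcache != NULL && tcache->num_slots[tc_idx] != 0)
    {
      --tcache->num_slots[tc_idx];
      tcache_put (p, tc_idx);   /* add the block to this bin */
    }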

Reviewed-by: DJ Delorie <dj@redhat.com>
2025-06-03 17:16:39 +00:00
Wilco Dijkstra 36189c76fb malloc: Improve performance of __libc_calloc
Improve performance of __libc_calloc by splitting it into 2 parts: first handle
the tcache fastpath, then do the rest in a separate tailcalled function.
This results in significant performance gains since __libc_calloc doesn't need
to set up a frame.

On Neoverse V2, bench-calloc-simple improves by 5.0% overall.
Bench-calloc-thread 1 improves by 24%.

Reviewed-by: DJ Delorie <dj@redhat.com>
2025-05-14 09:22:32 +00:00
Wilco Dijkstra 25d37948c9 malloc: Improve malloc initialization
Move malloc initialization to __libc_early_init.  Use a hidden __ptmalloc_init
for initialization and a weak call to avoid pulling in the system malloc in a
static binary.  All previous initialization checks can now be removed.

Reviewed-by: Florian Weimer <fweimer@redhat.com>
2025-05-12 16:10:28 +00:00
David Lau eff1f680cf malloc: Improved double free detection in the tcache
The previous double free detection did not account for an attacker
using a terminating null byte that overflows from the previous chunk
to change the size of a memory chunk, and hence the tcache bin it is
sorted into.  As a result, the check in 'tcache_double_free_verify'
would pass even though it is a double free.

Solution:
Let 'tcache_double_free_verify' iterate over all tcache entries to
detect double frees.

This patch only protects against buffer overflows of one byte,
but I would argue that off-by-one errors are the most common
errors to be made.

Alternatives considered:
  Store the size of a memory chunk in big endian, so the chunk size
  would not get overwritten, because entries in the tcache are not
  that big.

  Move the tcache_key before the actual memory chunk so that it does
  not have to be checked at all; this would work better in general,
  but it would also increase memory usage.
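
A sketch of the strengthened check (tcache_entry, entries, and
TCACHE_MAX_BINS are existing malloc.c names; the exact loop is an
assumption):

  /* Scan every bin for the entry e being freed: a corrupted size field
     can no longer hide a double free in a neighbouring bin.  */
  for (size_t i = 0; i < TCACHE_MAX_BINS; ++i)
    for (tcache_entry *t = tcache->entries[i]; t != NULL; t = t->next)
      if (t == e)
        malloc_printerr ("free(): double free detected in tcache");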

Signed-off-by: David Lau  <david.lau@fau.de>
Reviewed-by: Wilco Dijkstra  <Wilco.Dijkstra@arm.com>
2025-05-12 11:58:30 +00:00
Wilco Dijkstra 5d10174581 malloc: Inline tcache_try_malloc
Inline tcache_try_malloc into calloc since it is the only caller.  Also fix
usize2tidx and use it in __libc_malloc, __libc_calloc and _mid_memalign.
The result is simpler, cleaner code.

Reviewed-by: DJ Delorie <dj@redhat.com>
2025-05-01 20:01:53 +00:00
Cupertino Miranda 1c9ac027a5 malloc: move tcache_init out of hot tcache paths
This patch moves all calls to tcache_init out of the tcache hot paths.
Since there is no reason to initialize tcaches in the hot path, and
since we need to be able to check tcache != NULL in any case (because
of the tcache_thread_shutdown function), moving tcache_init off the
hot path can only be beneficial.
The patch also removes the initialization of tcaches within the
__libc_free call: it only makes sense to initialize the tcache for a
thread after it calls one of the allocation functions.  The patch also
removes the save/restore of errno from the tcache_init code, as it is
no longer needed.
2025-04-16 13:09:16 +00:00
Wilco Dijkstra c968fe5062 malloc: Use tailcalls in __libc_free
Use tailcalls to avoid the overhead of a frame on the free fastpath.
Move tcache initialization to _int_free_chunk().  Add malloc_printerr_tail()
which can be tailcalled without forcing a frame like no-return functions.
Change tcache_double_free_verify() to retry via __libc_free() after clearing
the key.

Reviewed-by: Florian Weimer  <fweimer@redhat.com>
Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-04-15 11:14:58 +00:00
Wilco Dijkstra 393b1a6e50 malloc: Inline tcache_free
Inline tcache_free since it's only used by __libc_free.  Add __glibc_likely
for the tcache checks.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-04-15 11:14:58 +00:00
Wilco Dijkstra 9b0c8ced9c malloc: Improve free checks
The checks on size can be merged and use __builtin_add_overflow.  Since
tcache only handles small sizes (and rejects sizes < MINSIZE), delay this
check until after tcache.
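
A standalone toy of the merged check (not the patch itself):
__builtin_add_overflow rejects both an absurdly large size and pointer
wraparound in a single test.

  #include <stdbool.h>
  #include <stddef.h>
  #include <stdint.h>

  /* Return true if chunk address plus chunk size wraps the address
     space, i.e. the size cannot be valid.  */
  static bool
  chunk_size_invalid (uintptr_t chunk_addr, size_t size)
  {
    uintptr_t end;
    return __builtin_add_overflow (chunk_addr, size, &end);
  }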

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-04-15 11:14:57 +00:00
Wilco Dijkstra 0296654d61 malloc: Inline _int_free_check
Inline _int_free_check since it is only used by __libc_free.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-04-15 11:14:47 +00:00
Wilco Dijkstra 69da24fbc5 malloc: Inline _int_free
Inline _int_free since it is a small function and only really used by
__libc_free.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-04-14 16:07:46 +00:00
Wilco Dijkstra b0cb99bef5 malloc: Move mmap code out of __libc_free hotpath
Currently __libc_free checks for a freed mmap chunk in the fast path.
Also errno is always saved and restored to preserve it.  Since mmap chunks
are larger than the largest tcache chunk, it is safe to delay this and
handle tcache, smallbin and medium bin blocks first.  Move saving of errno
to cases that actually need it.  Remove a safety check that fails on mmap
chunks and a check that mmap chunks cannot be added to tcache.

Performance of bench-malloc-thread improves by 9.2% for 1 thread and
6.9% for 32 threads on Neoverse V2.

Reviewed-by: DJ Delorie  <dj@redhat.com>
Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-04-14 15:08:24 +00:00
Wilco Dijkstra b0897944cc malloc: Improve performance of __libc_malloc
Improve performance of __libc_malloc by splitting it into 2 parts: first handle
the tcache fastpath, then do the rest in a separate tailcalled function.
This results in significant performance gains since __libc_malloc doesn't need
to set up a frame, and we delay tcache initialization and setting of errno
until later.

On Neoverse V2, bench-malloc-simple improves by 6.7% overall (up to 8.5% for
ST case) and bench-malloc-thread improves by 20.3% for 1 thread and 14.4% for
32 threads.
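
A self-contained toy of the split (not glibc's code; __libc_malloc2 is
the real helper's name per the cleanup commit above, but the structure
shown here is an assumption):

  #include <stddef.h>
  #include <stdlib.h>

  static void *slow_path (size_t bytes);
  static __thread void *cached;     /* stand-in for a tcache bin */

  /* The fast path stays frameless: the slow path is reached by a tail
     call, so no frame setup, tcache init, or errno handling happens
     here.  */
  void *
  toy_malloc (size_t bytes)
  {
    void *e = cached;
    if (e != NULL && bytes <= 64)   /* tcache-style fast path */
      {
        cached = NULL;
        return e;
      }
    return slow_path (bytes);       /* tail call */
  }

  static void *
  slow_path (size_t bytes)
  {
    return malloc (bytes);          /* initialization, errno, etc. live here */
  }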

Reviewed-by: DJ Delorie <dj@redhat.com>
2025-03-28 15:31:34 +00:00
Wilco Dijkstra 1233da4943 malloc: Use __always_inline for simple functions
Use __always_inline for small helper functions that are critical for
performance.  This ensures inlining always happens when expected.
Performance of bench-malloc-simple improves by 0.6% on average on
Neoverse V2.

Reviewed-by: DJ Delorie <dj@redhat.com>
2025-03-26 13:17:51 +00:00
Wilco Dijkstra cd33535002 malloc: Use _int_free_chunk for remainders
When splitting a chunk, release the tail part by calling _int_free_chunk.
This avoids inserting random blocks into the tcache that were never requested
by the user.  Fragmentation will be worse if they are never used again.
Note that if the tail is fairly small, we could avoid splitting it at all.
Also remove an oddly placed initialization of tcache in __libc_realloc.

Reviewed-by: DJ Delorie <dj@redhat.com>
2025-03-25 18:53:01 +00:00
Cupertino Miranda 855561a1fb malloc: missing initialization of tcache in _mid_memalign
_mid_memalign includes tcache code but does not attempt to initialize
tcaches.

Reviewed-by: DJ Delorie <dj@redhat.com>
2025-03-21 17:52:14 +00:00
Wilco Dijkstra 575de3d666 malloc: Improve csize2tidx
Remove the alignment rounding up from csize2tidx - this makes no sense
since the input should be a chunk size.  Removing it enables further
optimizations, for example chunksize_nomask can be safely used and
invalid sizes < MINSIZE are not mapped to a valid tidx.
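
A before/after sketch of the index computation (the exact macro text is
an assumption; chunk sizes are already multiples of MALLOC_ALIGNMENT,
so the round-up was dead weight):

  /* Before: rounds up, so an invalid csize < MINSIZE still maps to
     the valid index 0.  */
  #define csize2tidx(x) (((x) - MINSIZE + MALLOC_ALIGNMENT - 1) / MALLOC_ALIGNMENT)

  /* After: no rounding; csize < MINSIZE wraps to a huge index that
     fails the bin bounds check.  */
  #define csize2tidx(x) (((x) - MINSIZE) / MALLOC_ALIGNMENT)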

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-03-18 19:19:33 +00:00
Ben Kallus 4cf2d86936 malloc: Add integrity check to largebin nextsizes
If an attacker overwrites the bk_nextsize link in the first chunk of a
largebin that later has a smaller chunk inserted into it, malloc will
write a heap pointer into an attacker-controlled address [0].

This patch adds an integrity check to mitigate this attack.

[0]: https://github.com/shellphish/how2heap/blob/master/glibc_2.39/large_bin_attack.c
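
A sketch of the kind of check added (fd_nextsize and bk_nextsize are
the real largebin fields; the exact hunk and variable names are
assumptions):

  /* Before linking a smaller chunk in, verify the nextsize links of
     the list head are still self-consistent.  */
  if (__glibc_unlikely (fwd->bk_nextsize->fd_nextsize != fwd))
    malloc_printerr ("malloc(): largebin double linked list corrupted (nextsize)");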

Signed-off-by: Ben Kallus <benjamin.p.kallus.gr@dartmouth.edu>
Reviewed-by: DJ Delorie <dj@redhat.com>
2025-03-03 18:31:27 -05:00
Ben Kallus d10176c0ff malloc: Add size check when moving fastbin->tcache
By overwriting a forward link in a fastbin chunk that is subsequently
moved into the tcache, it's possible to get malloc to return an
arbitrary address [0].

When a chunk is fetched from a fastbin, its size is checked against the
expected chunk size for that fastbin (see malloc.c:3991). This patch
adds a similar check for chunks being moved from a fastbin to tcache,
which renders obsolete the exploitation technique described above.

Now updated to use __glibc_unlikely instead of __builtin_expect, as
requested.

[0]: https://github.com/shellphish/how2heap/blob/master/glibc_2.39/fastbin_reverse_into_tcache.c
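
A sketch of the added check, mirroring the fastbin fetch check cited
above (variable names are assumptions):

  /* When staging a fastbin chunk into the tcache, reject chunks whose
     size no longer matches the fastbin they came from.  */
  if (__glibc_unlikely (fastbin_index (chunksize (tc_victim)) != idx))
    malloc_printerr ("malloc(): memory corruption (fast)");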

Signed-off-by: Ben Kallus <benjamin.p.kallus.gr@dartmouth.edu>
Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-02-13 16:31:28 -03:00
Paul Eggert 2642002380 Update copyright dates with scripts/update-copyrights 2025-01-01 11:22:09 -08:00
Wangyang Guo 226e3b0a41 malloc: Add tcache path for calloc
This commit adds tcache support to calloc(), which can largely improve
the performance of small size allocations, especially in multi-threaded
scenarios.  tcache_available() and tcache_try_malloc() are split out as
helper functions for better code reuse.

Also fix the tst-safe-linking failure after enabling tcache.  Previously,
calloc() was used as a way to bypass the tcache in memory allocation and
trigger the safe-linking check in the fastbins path.  With tcache enabled,
extra workarounds are needed to bypass the tcache.

Result of bench-calloc-thread benchmark

Test Platform: Xeon-8380
Ratio: New / Original time_per_iteration (Lower is Better)

Threads#   | Ratio
-----------|------
1 thread   | 0.656
4 threads  | 0.470
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2024-12-11 11:34:47 +08:00
H.J. Lu 1c4cebb84b malloc: Optimize small memory clearing for calloc
Add calloc-clear-memory.h to clear memory sizes of up to 36 bytes (72 bytes
on 64-bit targets) for calloc.  Use repeated stores with 1 branch, instead
of up to 3 branches.  On x86-64, it is faster than memset since calling
memset needs 1 indirect branch, 1 broadcast, and up to 4 branches.

Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
Reviewed-by: Wilco Dijkstra  <Wilco.Dijkstra@arm.com>
2024-12-04 04:28:15 +08:00
k4lizen e2436d6f5a malloc: send freed small chunks to smallbin
Large chunks get added to the unsorted bin since sorting them takes
time; for small chunks, the benefit of adding them to the unsorted
bin is non-existent and actually hurts performance.

Splitting and malloc_consolidate still add small chunks to the
unsorted bin, but we can hint to the compiler that this is a
relatively rare occurrence.  Benchmarking shows this to be
consistently good.

Authored-by: k4lizen <k4lizen@proton.me>
Signed-off-by: Aleksa Siriški <sir@tmina.org>
2024-11-29 13:27:13 +00:00
Wangyang Guo c69e8cccaf malloc: Avoid func call for tcache quick path in free()
Tcache is an important optimization to accelerate memory free(); things
within this code path should be kept as simple as possible.  This commit
removes the function call when free() takes the tcache code path by
inlining _int_free().

Result of bench-malloc-thread benchmark

Test Platform: Xeon-8380
Ratio: New / Original time_per_iteration (Lower is Better)

Threads#   | Ratio
-----------|------
1 thread   | 0.879
4 threads  | 0.874

The performance data shows this improves the bench-malloc-thread
benchmark by ~12% in both single-thread and multi-thread scenarios.
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2024-11-27 08:24:09 +08:00
Alejandro Colomar 53fcdf5f74 Silence most -Wzero-as-null-pointer-constant diagnostics
Replace 0 by NULL and {0} by {}.

Omit a few cases that aren't so trivial to fix.

Link: <https://gcc.gnu.org/bugzilla/show_bug.cgi?id=117059>
Link: <https://software.codidact.com/posts/292718/292759#answer-292759>
Signed-off-by: Alejandro Colomar <alx@kernel.org>
2024-11-25 16:45:59 -03:00
Wangyang Guo c621d4f74f malloc: Split _int_free() into 3 sub functions
Split _int_free() into 3 smaller functions for flexible combination:
* _int_free_check -- sanity check for free
* tcache_free -- free memory to tcache (quick path)
* _int_free_chunk -- free memory chunk (slow path)
2024-11-25 12:11:23 +08:00
Adhemerval Zanella 461cab1de7 linux: Add support for getrandom vDSO
Linux 6.11 has getrandom() in vDSO. It operates on a thread-local opaque
state allocated with mmap using flags specified by the vDSO.

Multiple states are allocated at once, as many as fit into a page, and
these are held in an array of available states to be doled out to each
thread upon first use, and recycled when a thread terminates. As these
states run low, more are allocated.

To make this procedure async-signal-safe, a simple guard is used in the
LSB of the opaque state address, falling back to the syscall if there's
reentrancy contention.

Also, _Fork() is handled by blocking signals on opaque state allocation
(so _Fork() always sees a consistent state even if it interrupts a
getrandom() call) and by iterating over the thread stack cache on
reclaim_stack. Each opaque state will be in the free states list
(grnd_alloc.states) or allocated to a running thread.

The cancellation is handled by always using GRND_NONBLOCK flags while
calling the vDSO, and falling back to the cancellable syscall if the
kernel returns EAGAIN (would block). Since getrandom is not defined by
POSIX and cancellation is supported as an extension, the cancellation is
handled as 'may occur' instead of 'shall occur' [1], meaning that if
vDSO does not block (the expected behavior) getrandom will not act as a
cancellation entrypoint. It avoids a pthread_testcancel call on the fast
path (different than 'shall occur' functions, like sem_wait()).
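
A toy of that fallback policy using the public API (glibc's internal
vDSO plumbing differs):

  #include <errno.h>
  #include <sys/random.h>
  #include <sys/types.h>

  /* Try the non-blocking path first; fall back to the blocking
     (and, in glibc, cancellable) path only on EAGAIN.  */
  static ssize_t
  getrandom_nonblock_first (void *buf, size_t len, unsigned int flags)
  {
    ssize_t r = getrandom (buf, len, flags | GRND_NONBLOCK);
    if (r < 0 && errno == EAGAIN)
      r = getrandom (buf, len, flags);
    return r;
  }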

It is currently enabled for x86_64, which is available in Linux 6.11,
and aarch64, powerpc32, powerpc64, loongarch64, and s390x, which are
available in Linux 6.12.

Link: https://pubs.opengroup.org/onlinepubs/9799919799/nframe.html [1]
Co-developed-by: Jason A. Donenfeld <Jason@zx2c4.com>
Tested-by: Jason A. Donenfeld <Jason@zx2c4.com> # x86_64
Tested-by: Adhemerval Zanella <adhemerval.zanella@linaro.org> # x86_64, aarch64
Tested-by: Xi Ruoyao <xry111@xry111.site> # x86_64, aarch64, loongarch64
Tested-by: Stefan Liebler <stli@linux.ibm.com> # s390x
2024-11-12 14:42:12 -03:00
Xi Ruoyao 5a85786a90 Make __getrandom_nocancel set errno and add a _nostatus version
The __getrandom_nocancel function returns errors as negative values
instead of errno.  This is inconsistent with other _nocancel functions
and it breaks "TEMP_FAILURE_RETRY (__getrandom_nocancel (p, n, 0))" in
__arc4random_buf.  Use INLINE_SYSCALL_CALL instead of
INTERNAL_SYSCALL_CALL to fix this issue.

But __getrandom_nocancel has been avoiding touching errno for a
reason; see BZ 29624.  So add a __getrandom_nocancel_nostatus function
and use it in tcache_key_initialize.

Signed-off-by: Xi Ruoyao <xry111@xry111.site>
Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
Signed-off-by: Andreas K. Hüttel <dilfridge@gentoo.org>
2024-01-12 14:23:11 +01:00
Paul Eggert dff8da6b3e Update copyright dates with scripts/update-copyrights 2024-01-01 10:53:40 -08:00
Adhemerval Zanella fee9e40a8d malloc: Decorate malloc maps
Add anonymous mmap annotations on loader malloc, malloc when it
allocates memory with mmap, and on malloc arena.  The /proc/self/maps
will now print:

   [anon: glibc: malloc arena]
   [anon: glibc: malloc]
   [anon: glibc: loader malloc]

On arena allocation, glibc annotates only the read/write mapping.

Checked on x86_64-linux-gnu and aarch64-linux-gnu.
Reviewed-by: DJ Delorie <dj@redhat.com>
2023-11-07 10:27:20 -03:00
Florian Weimer 0dc7fc1cf0 malloc: Remove bin scanning from memalign (bug 30723)
On the test workload (mpv --cache=yes with VP9 video decoding), the
bin scanning has a very poor success rate (less than 2%).  The tcache
scanning has about 50% success rate, so keep that.

Update comments in malloc/tst-memalign-2 to indicate the purpose
of the tests.  Even with the scanning removed, the additional
merging opportunities since commit 542b110585
("malloc: Enable merging of remainders in memalign (bug 30723)")
are sufficient to pass the existing large bins test.

Remove leftover variables from _int_free from refactoring in the
same commit.

Reviewed-by: DJ Delorie <dj@redhat.com>
2023-08-15 08:07:36 +02:00
Florian Weimer 542b110585 malloc: Enable merging of remainders in memalign (bug 30723)
Previously, calling _int_free from _int_memalign could put remainders
into the tcache or into fastbins, where they are invisible to the
low-level allocator.  This results in missed merge opportunities
because once these freed chunks become available to the low-level
allocator, further memalign allocations (even of the same size) are
likely to obstruct merges.

Furthermore, during forwards merging in _int_memalign, do not
completely give up when the remainder is too small to serve as a
chunk on its own.  We can still give it back if it can be merged
with the following unused chunk.  This makes it more likely that
memalign calls in a loop achieve a compact memory layout,
independently of initial heap layout.

Drop some useless (unsigned long) casts along the way, and tweak
the style to more closely match GNU on changed lines.

Reviewed-by: DJ Delorie <dj@redhat.com>
2023-08-11 11:18:17 +02:00
Siddhesh Poyarekar 2fb12bbd09 realloc: Limit chunk reuse to only growing requests [BZ #30579]
The trim_threshold is too aggressive a heuristic to decide if chunk
reuse is OK for reallocated memory; for repeated small, shrinking
allocations it leads to internal fragmentation and for repeated larger
allocations that fragmentation may blow up even worse due to the dynamic
nature of the threshold.

Limit reuse only when it is within the alignment padding, which is 2 *
size_t for heap allocations and a page size for mmapped allocations.
There's the added wrinkle of THP, but this fix ignores it for now,
pessimizing that case in favor of keeping fragmentation low.

This resolves BZ #30579.

Signed-off-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
Reported-by: Nicolas Dusart <nicolas@freedelity.be>
Reported-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Tested-by: Aurelien Jarno <aurelien@aurel32.net>
2023-07-06 11:10:27 -04:00
Paul Pluzhnikov 7f0d9e61f4 Fix all the remaining misspellings -- BZ 25337 2023-06-02 01:39:48 +00:00
DJ Delorie d1417176a3 aligned_alloc: conform to C17
This patch adds strict checking for power-of-two alignments
in aligned_alloc(), and updates the manual accordingly.
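
A sketch of the rule being enforced (glibc proper uses its powerof2
helper; this standalone version just states the check):

  #include <stdbool.h>
  #include <stddef.h>

  /* aligned_alloc's alignment must be a power of two; anything else is
     now rejected instead of being silently accepted.  */
  static bool
  valid_alignment (size_t align)
  {
    return align != 0 && (align & (align - 1)) == 0;
  }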

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2023-05-08 16:40:10 -04:00
DJ Delorie e5524ef335 malloc: set NON_MAIN_ARENA flag for reclaimed memalign chunk (BZ #30101)
Based on these comments in malloc.c:

   size field is or'ed with NON_MAIN_ARENA if the chunk was obtained
   from a non-main arena.  This is only set immediately before handing
   the chunk to the user, if necessary.

   The NON_MAIN_ARENA flag is never set for unsorted chunks, so it
   does not have to be taken into account in size comparisons.

When we pull a chunk off the unsorted list (or any list) we need to
make sure that flag is set properly before returning the chunk.

Use the rounded-up size for chunk_ok_for_memalign()

Do not scan the arena for reusable chunks if there's no arena.

Account for chunk overhead when determining if a chunk is a reuse
candidate.

mcheck interferes with memalign, so skip mcheck variants of
memalign tests.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
2023-04-18 10:58:42 -04:00
DJ Delorie 24cdd6c71d memalign: Support scanning for aligned chunks.
This patch adds a chunk scanning algorithm to the _int_memalign code
path that reduces heap fragmentation by reusing already aligned chunks
instead of always looking for chunks of larger sizes and splitting
them.  The tcache macros are extended to allow removing a chunk from
the middle of the list.

The goal is to fix the pathological use cases where heaps grow
continuously in workloads that are heavy users of memalign.

Note that tst-memalign-2 checks for tcache operation, which
malloc-check bypasses.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2023-03-29 16:36:03 -04:00
Adhemerval Zanella Netto 33237fe83d Remove --enable-tunables configure option
And make tunables always supported.  The configure option was added in
glibc 2.25 and some features require it (such as the hwcap mask, huge
pages support, and lock elision tuning).  Removing it also simplifies
the build permutations.

Changes from v1:
 * Remove glibc.rtld.dynamic_sort changes, it is orthogonal and needs
   more discussion.
 * Cleanup more code.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2023-03-29 14:33:06 -03:00
Robert Morell 6a734e62f1 malloc: Fix transposed arguments in sysmalloc_mmap_fallback call
git commit 0849eed45d ("malloc: Move MORECORE fallback mmap to
sysmalloc_mmap_fallback") moved a block of code from sysmalloc to a
new helper function sysmalloc_mmap_fallback(), but 'pagesize' is used
for the 'minsize' argument and 'MMAP_AS_MORECORE_SIZE' for the
'pagesize' argument.

Fixes: 0849eed45d ("malloc: Move MORECORE fallback mmap to sysmalloc_mmap_fallback")
Signed-off-by: Robert Morell <rmorell@nvidia.com>
Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2023-03-08 10:13:31 -03:00
Ayush Mittal 3f84f1159e malloc: remove redundant check of unsorted bin corruption
* malloc/malloc.c (_int_malloc): remove redundant check of
  unsorted bin corruption

Commit b90ddd08f6dd688e651df9ee89ca3a69ff88cd0c
(malloc: Additional checks for unsorted bin integrity) added the same
check of (bck->fd != victim) before the check for unsorted chunk
corruption, which had been added in bdc3009b8ff0effdbbfb05eb6b10966753cbf9b8
(Added check before removing from unsorted list).

..
3773           if (__glibc_unlikely (bck->fd != victim)
3774               || __glibc_unlikely (victim->fd != unsorted_chunks (av)))
3775             malloc_printerr ("malloc(): unsorted double linked list corrupted");
..
..
3815           /* remove from unsorted list */
3816          if (__glibc_unlikely (bck->fd != victim))
3817            malloc_printerr ("malloc(): corrupted unsorted chunks 3");
3818          unsorted_chunks (av)->bk = bck;
..

So this extra check can be removed.

Signed-off-by: Maninder Singh <maninder1.s@samsung.com>
Signed-off-by: Ayush Mittal <ayush.m@samsung.com>
Reviewed-by: DJ Delorie <dj@redhat.com>
2023-02-22 16:56:45 -05:00
Joseph Myers 6d7e8eda9b Update copyright dates with scripts/update-copyrights 2023-01-06 21:14:39 +00:00
Siddhesh Poyarekar f4f2ca1509 realloc: Return unchanged if request is within usable size
If there is enough space in the chunk to satisfy the new size, return
the old pointer as is, thus avoiding any locks or reallocations.  The
only real place this has a benefit is in large chunks that tend to get
satisfied with mmap, since there is a large enough spare size (up to a
page) for it to matter.  For allocations on heap, the extra size is
typically barely a few bytes (up to 15) and it's unlikely that it would
make much difference in performance.

Also added a smoke test to ensure that the old pointer is returned
unchanged if the new size passed to realloc is within the usable size
of the old pointer.

Signed-off-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
Reviewed-by: DJ Delorie <dj@redhat.com>
2022-12-08 11:23:43 -05:00
Florian Weimer 15a94e6668 malloc: Switch global_max_fast to uint8_t
MAX_FAST_SIZE is 160 at most, so a uint8_t is sufficient.  This makes
it harder to use memory corruption (overwriting global_max_fast with a
large value) to fundamentally alter malloc behavior.

Reviewed-by: DJ Delorie <dj@redhat.com>
2022-10-13 05:45:41 +02:00
Wilco Dijkstra 22f4ab2d20 Use atomic_exchange_release/acquire
Rename atomic_exchange_rel/acq to use atomic_exchange_release/acquire
since these map to the standard C11 atomic builtins.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2022-09-26 16:58:08 +01:00
Qingqing Li 774d43f27d malloc: Print error when oldsize is not equal to the current size.
This is used to detect errors early.  The read of the oldsize is
not protected by any lock, so check this value to avoid causing
bigger mistakes.

Reviewed-by: DJ Delorie <dj@redhat.com>
2022-09-22 15:32:56 -04:00