Commit Graph

540 Commits

Author SHA1 Message Date
Wilco Dijkstra 9b0c8ced9c malloc: Improve free checks
The checks on size can be merged and use __builtin_add_overflow.  Since
tcache only handles small sizes (and rejects sizes < MINSIZE), delay this
check until after tcache.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-04-15 11:14:57 +00:00
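
For illustration only, a minimal sketch of how two free-path size checks can
collapse into a single __builtin_add_overflow test; the MINSIZE value and the
function name are stand-ins, not glibc's actual code:

    #include <stdint.h>
    #include <stdbool.h>
    #include <stddef.h>

    #define MINSIZE 32            /* hypothetical minimum chunk size */

    static bool
    free_size_ok (void *p, size_t size)
    {
      uintptr_t end;
      /* One overflow-checked add replaces a separate "p + size wraps
         around the address space" comparison.  */
      if (__builtin_add_overflow ((uintptr_t) p, size, &end))
        return false;
      return size >= MINSIZE;     /* undersized chunks are still rejected */
    }
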
Wilco Dijkstra 0296654d61 malloc: Inline _int_free_check
Inline _int_free_check since it is only used by __libc_free.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-04-15 11:14:47 +00:00
Wilco Dijkstra 69da24fbc5 malloc: Inline _int_free
Inline _int_free since it is a small function and only really used by
__libc_free.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-04-14 16:07:46 +00:00
Wilco Dijkstra b0cb99bef5 malloc: Move mmap code out of __libc_free hotpath
Currently __libc_free checks for a freed mmap chunk in the fast path.
Also errno is always saved and restored to preserve it.  Since mmap chunks
are larger than the largest tcache chunk, it is safe to delay this and
handle tcache, smallbin and medium bin blocks first.  Move saving of errno
to cases that actually need it.  Remove a safety check that fails on mmap
chunks and a check that mmap chunks cannot be added to tcache.

Performance of bench-malloc-thread improves by 9.2% for 1 thread and
6.9% for 32 threads on Neoverse V2.

Reviewed-by: DJ Delorie  <dj@redhat.com>
Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-04-14 15:08:24 +00:00
Wilco Dijkstra b0897944cc malloc: Improve performance of __libc_malloc
Improve performance of __libc_malloc by splitting it into 2 parts: first handle
the tcache fastpath, then do the rest in a separate tailcalled function.
This results in significant performance gains since __libc_malloc doesn't need
to set up a frame and we delay tcache initialization and setting of errno until
later.

On Neoverse V2, bench-malloc-simple improves by 6.7% overall (up to 8.5% for
ST case) and bench-malloc-thread improves by 20.3% for 1 thread and 14.4% for
32 threads.

Reviewed-by: DJ Delorie <dj@redhat.com>
2025-03-28 15:31:34 +00:00
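
A hedged sketch of the two-part shape described above; the bin layout, the
size limit and the helper names are invented for illustration and are not
glibc's actual code:

    #include <stddef.h>
    #include <stdlib.h>

    static __thread void **tcache_bin;      /* hypothetical per-thread free list */

    static void *
    malloc_slow_path (size_t bytes)         /* stand-in for the tail-called part */
    {
      return malloc (bytes);
    }

    void *
    malloc_fast_path (size_t bytes)
    {
      /* Kept frame-less: one load, one compare, then either a pop or a tail
         call into the slow path (which sets errno, initializes tcache, ...).  */
      void **e = tcache_bin;
      if (bytes <= 1024 && e != NULL)
        {
          tcache_bin = (void **) *e;        /* pop the cached chunk */
          return e;
        }
      return malloc_slow_path (bytes);      /* tail call; no frame needed here */
    }
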
Wilco Dijkstra 1233da4943 malloc: Use __always_inline for simple functions
Use __always_inline for small helper functions that are critical for
performance.  This ensures inlining always happens when expected.
Performance of bench-malloc-simple improves by 0.6% on average on
Neoverse V2.

Reviewed-by: DJ Delorie <dj@redhat.com>
2025-03-26 13:17:51 +00:00
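
As a small, hedged example of the attribute involved (the helper below is
hypothetical, not one of the glibc functions touched by the commit):

    #include <stddef.h>

    /* Forced inlining: the compiler must inline this even when its own
       heuristics (or -O0) would otherwise emit a call.  */
    static inline __attribute__ ((__always_inline__)) size_t
    round_up_request (size_t req)
    {
      return (req + 15) & ~(size_t) 15;
    }
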
Wilco Dijkstra cd33535002 malloc: Use _int_free_chunk for remainders
When splitting a chunk, release the tail part by calling _int_free_chunk.
This avoids inserting random blocks into tcache that were never requested
by the user.  Fragmentation will be worse if they are never used again.
Note that if the tail is fairly small, we could avoid splitting it at all.
Also remove an oddly placed initialization of tcache in __libc_realloc.

Reviewed-by: DJ Delorie <dj@redhat.com>
2025-03-25 18:53:01 +00:00
Cupertino Miranda 855561a1fb malloc: missing initialization of tcache in _mid_memalign
_mid_memalign includes tcache code but does not attempt to initialize
tcaches.

Reviewed-by: DJ Delorie <dj@redhat.com>
2025-03-21 17:52:14 +00:00
Wilco Dijkstra 575de3d666 malloc: Improve csize2tidx
Remove the alignment rounding up from csize2tidx - this makes no sense
since the input should be a chunk size.  Removing it enables further
optimizations, for example chunksize_nomask can be safely used and
invalid sizes < MINSIZE are not mapped to a valid tidx.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-03-18 19:19:33 +00:00
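
A sketch of the resulting mapping; the constants below are the common 64-bit
values and only stand in for glibc's real definitions:

    #include <stddef.h>

    #define MINSIZE          32
    #define MALLOC_ALIGNMENT 16

    static size_t
    csize2tidx_sketch (size_t chunk_size)
    {
      /* No "+ MALLOC_ALIGNMENT - 1" rounding: chunk sizes are already
         aligned, and an invalid chunk_size < MINSIZE now underflows to a
         huge index instead of accidentally mapping onto a valid bin.  */
      return (chunk_size - MINSIZE) / MALLOC_ALIGNMENT;
    }
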
Ben Kallus 4cf2d86936 malloc: Add integrity check to largebin nextsizes
If an attacker overwrites the bk_nextsize link in the first chunk of a
largebin that later has a smaller chunk inserted into it, malloc will
write a heap pointer into an attacker-controlled address [0].

This patch adds an integrity check to mitigate this attack.

[0]: https://github.com/shellphish/how2heap/blob/master/glibc_2.39/large_bin_attack.c

Signed-off-by: Ben Kallus <benjamin.p.kallus.gr@dartmouth.edu>
Reviewed-by: DJ Delorie <dj@redhat.com>
2025-03-03 18:31:27 -05:00
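
The pattern of the mitigation, shown on a generic doubly linked list (an
illustrative sketch, not glibc's chunk layout or exact check):

    #include <stdio.h>
    #include <stdlib.h>

    struct node { struct node *fd_nextsize, *bk_nextsize; };

    static void
    insert_before (struct node *fwd, struct node *victim)
    {
      victim->fd_nextsize = fwd;
      victim->bk_nextsize = fwd->bk_nextsize;
      /* Before writing through the stored back link, verify that it still
         points back at fwd; a corrupted bk_nextsize can then no longer
         redirect the write below to an arbitrary address.  */
      if (fwd->bk_nextsize->fd_nextsize != fwd)
        {
          fputs ("corrupted nextsize links\n", stderr);
          abort ();
        }
      fwd->bk_nextsize = victim;
      victim->bk_nextsize->fd_nextsize = victim;
    }
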
Ben Kallus d10176c0ff malloc: Add size check when moving fastbin->tcache
By overwriting a forward link in a fastbin chunk that is subsequently
moved into the tcache, it's possible to get malloc to return an
arbitrary address [0].

When a chunk is fetched from a fastbin, its size is checked against the
expected chunk size for that fastbin (see malloc.c:3991). This patch
adds a similar check for chunks being moved from a fastbin to tcache,
which renders obsolete the exploitation technique described above.

Now updated to use __glibc_unlikely instead of __builtin_expect, as
requested.

[0]: https://github.com/shellphish/how2heap/blob/master/glibc_2.39/fastbin_reverse_into_tcache.c

Signed-off-by: Ben Kallus <benjamin.p.kallus.gr@dartmouth.edu>
Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-02-13 16:31:28 -03:00
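
A sketch of the idea behind the check (helper and constants simplified for a
64-bit layout; not the exact glibc code):

    #include <stdio.h>
    #include <stdlib.h>
    #include <stddef.h>

    static size_t
    fastbin_index_sketch (size_t chunk_size)   /* 32 -> 0, 48 -> 1, ... */
    {
      return (chunk_size >> 4) - 2;
    }

    static void
    check_fastbin_to_tcache (size_t chunk_size, size_t bin_idx)
    {
      /* A forged fd pointer leads to attacker data whose "size" field will
         almost never match the bin it allegedly came from.  */
      if (fastbin_index_sketch (chunk_size) != bin_idx)
        {
          fputs ("malloc(): chunk size mismatch while moving to tcache\n", stderr);
          abort ();
        }
    }
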
Paul Eggert 2642002380 Update copyright dates with scripts/update-copyrights 2025-01-01 11:22:09 -08:00
Wangyang Guo 226e3b0a41 malloc: Add tcache path for calloc
This commit adds tcache support to calloc(), which can greatly improve
the performance of small-size allocations, especially in multi-threaded
scenarios.  tcache_available() and tcache_try_malloc() are split out as
helper functions for better code reuse.

Also fix the tst-safe-linking failure after enabling tcache.  Previously,
calloc() was used as a way to bypass tcache in memory allocation and
trigger the safe-linking check in the fastbins path.  With tcache
enabled, extra workarounds are needed to bypass tcache.

Result of bench-calloc-thread benchmark

Test Platform: Xeon-8380
Ratio: New / Original time_per_iteration (Lower is Better)

Threads#   | Ratio
-----------|------
1 thread   | 0.656
4 threads  | 0.470
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2024-12-11 11:34:47 +08:00
H.J. Lu 1c4cebb84b malloc: Optimize small memory clearing for calloc
Add calloc-clear-memory.h to clear memory of up to 36 bytes (72 bytes
on 64-bit targets) for calloc.  Use repeated stores with 1 branch, instead
of up to 3 branches.  On x86-64, it is faster than memset since calling
memset needs 1 indirect branch, 1 broadcast, and up to 4 branches.

Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
Reviewed-by: Wilco Dijkstra  <Wilco.Dijkstra@arm.com>
2024-12-04 04:28:15 +08:00
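
A hedged sketch of the overlapping-stores idea (not the actual
calloc-clear-memory.h; it assumes a 64-bit target that tolerates unaligned
word stores, e.g. x86-64, and sizes between 9 and 64 bytes):

    #include <stddef.h>

    typedef unsigned long long __attribute__ ((__may_alias__)) word64;

    static inline void
    clear_small (char *p, size_t size)
    {
      /* Head and tail stores overlap, so the whole range is covered with
         at most eight stores and a single branch, no loop.  */
      *(word64 *) p = 0;
      *(word64 *) (p + size - 8) = 0;
      if (size > 16)
        {
          *(word64 *) (p + 8) = 0;
          *(word64 *) (p + 16) = 0;
          *(word64 *) (p + 24) = 0;
          *(word64 *) (p + size - 16) = 0;
          *(word64 *) (p + size - 24) = 0;
          *(word64 *) (p + size - 32) = 0;
        }
    }
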
k4lizen e2436d6f5a malloc: send freed small chunks to smallbin
Large chunks get added to the unsorted bin since
sorting them takes time; for small chunks the
benefit of adding them to the unsorted bin is
non-existent and actually hurts performance.

Splitting and malloc_consolidate still add small
chunks to unsorted, but we can hint to the compiler
that this is a relatively rare occurrence.
Benchmarking shows this to be consistently good.

Authored-by: k4lizen <k4lizen@proton.me>
Signed-off-by: Aleksa Siriški <sir@tmina.org>
2024-11-29 13:27:13 +00:00
Wangyang Guo c69e8cccaf malloc: Avoid func call for tcache quick path in free()
Tcache is an important optimization for accelerating memory free();
things within this code path should be kept as simple as possible.  This
commit removes the function call when free() takes the tcache code path
by inlining _int_free().

Result of bench-malloc-thread benchmark

Test Platform: Xeon-8380
Ratio: New / Original time_per_iteration (Lower is Better)

Threads#   | Ratio
-----------|------
1 thread   | 0.879
4 threads  | 0.874

The performance data shows it can improve the bench-malloc-thread
benchmark by ~12% in both single-threaded and multi-threaded scenarios.
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2024-11-27 08:24:09 +08:00
Alejandro Colomar 53fcdf5f74 Silence most -Wzero-as-null-pointer-constant diagnostics
Replace 0 by NULL and {0} by {}.

Omit a few cases that aren't so trivial to fix.

Link: <https://gcc.gnu.org/bugzilla/show_bug.cgi?id=117059>
Link: <https://software.codidact.com/posts/292718/292759#answer-292759>
Signed-off-by: Alejandro Colomar <alx@kernel.org>
2024-11-25 16:45:59 -03:00
Wangyang Guo c621d4f74f malloc: Split _int_free() into 3 sub functions
Split _int_free() into 3 smaller functions for flexible combination:
* _int_free_check -- sanity check for free
* tcache_free -- free memory to tcache (quick path)
* _int_free_chunk -- free memory chunk (slow path)
2024-11-25 12:11:23 +08:00
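
The resulting call structure of free, sketched with stub bodies and
simplified signatures (it follows the commit's description; the real
functions are of course not empty):

    #include <stdbool.h>
    #include <stddef.h>

    static void _int_free_check (void *p) { (void) p; /* sanity checks */ }
    static bool tcache_free (void *p)     { (void) p; return false; /* quick path */ }
    static void _int_free_chunk (void *p) { (void) p; /* bins, consolidation */ }

    void
    free_sketch (void *p)
    {
      if (p == NULL)
        return;
      _int_free_check (p);
      if (tcache_free (p))        /* most frees end on the quick path */
        return;
      _int_free_chunk (p);
    }
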
Adhemerval Zanella 461cab1de7 linux: Add support for getrandom vDSO
Linux 6.11 has getrandom() in vDSO. It operates on a thread-local opaque
state allocated with mmap using flags specified by the vDSO.

Multiple states are allocated at once, as many as fit into a page, and
these are held in an array of available states to be doled out to each
thread upon first use, and recycled when a thread terminates. As these
states run low, more are allocated.

To make this procedure async-signal-safe, a simple guard is used in the
LSB of the opaque state address, falling back to the syscall if there's
reentrancy contention.

Also, _Fork() is handled by blocking signals on opaque state allocation
(so _Fork() always sees a consistent state even if it interrupts a
getrandom() call) and by iterating over the thread stack cache on
reclaim_stack. Each opaque state will be in the free states list
(grnd_alloc.states) or allocated to a running thread.

The cancellation is handled by always using GRND_NONBLOCK flags while
calling the vDSO, and falling back to the cancellable syscall if the
kernel returns EAGAIN (would block). Since getrandom is not defined by
POSIX and cancellation is supported as an extension, the cancellation is
handled as 'may occur' instead of 'shall occur' [1], meaning that if
vDSO does not block (the expected behavior) getrandom will not act as a
cancellation entrypoint. It avoids a pthread_testcancel call on the fast
path (different than 'shall occur' functions, like sem_wait()).

It is currently enabled for x86_64, which is available in Linux 6.11,
and aarch64, powerpc32, powerpc64, loongarch64, and s390x, which are
available in Linux 6.12.

Link: https://pubs.opengroup.org/onlinepubs/9799919799/nframe.html [1]
Co-developed-by: Jason A. Donenfeld <Jason@zx2c4.com>
Tested-by: Jason A. Donenfeld <Jason@zx2c4.com> # x86_64
Tested-by: Adhemerval Zanella <adhemerval.zanella@linaro.org> # x86_64, aarch64
Tested-by: Xi Ruoyao <xry111@xry111.site> # x86_64, aarch64, loongarch64
Tested-by: Stefan Liebler <stli@linux.ibm.com> # s390x
2024-11-12 14:42:12 -03:00
Xi Ruoyao 5a85786a90
Make __getrandom_nocancel set errno and add a _nostatus version
The __getrandom_nocancel function returns errors as negative values
instead of errno.  This is inconsistent with other _nocancel functions
and it breaks "TEMP_FAILURE_RETRY (__getrandom_nocancel (p, n, 0))" in
__arc4random_buf.  Use INLINE_SYSCALL_CALL instead of
INTERNAL_SYSCALL_CALL to fix this issue.

But __getrandom_nocancel has been avoiding touching errno for a
reason; see BZ 29624.  So add a __getrandom_nocancel_nostatus function
and use it in tcache_key_initialize.

Signed-off-by: Xi Ruoyao <xry111@xry111.site>
Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
Signed-off-by: Andreas K. Hüttel <dilfridge@gentoo.org>
2024-01-12 14:23:11 +01:00
Paul Eggert dff8da6b3e Update copyright dates with scripts/update-copyrights 2024-01-01 10:53:40 -08:00
Adhemerval Zanella fee9e40a8d malloc: Decorate malloc maps
Add anonymous mmap annotations on the loader malloc, on malloc when it
allocates memory with mmap, and on the malloc arena.  /proc/self/maps
will now print:

   [anon: glibc: malloc arena]
   [anon: glibc: malloc]
   [anon: glibc: loader malloc]

On arena allocation, glibc annotates only the read/write mapping.

Checked on x86_64-linux-gnu and aarch64-linux-gnu.
Reviewed-by: DJ Delorie <dj@redhat.com>
2023-11-07 10:27:20 -03:00
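
The labels rely on the Linux PR_SET_VMA_ANON_NAME prctl (kernel 5.17+ with
CONFIG_ANON_VMA_NAME).  A standalone example of naming an anonymous mapping
the same way, assuming kernel headers that define the constants:

    #define _DEFAULT_SOURCE
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/prctl.h>

    int
    main (void)
    {
      size_t len = 1 << 20;
      void *p = mmap (NULL, len, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      if (p == MAP_FAILED)
        return 1;
      /* Ignore failures: older kernels do not know this prctl.  */
      prctl (PR_SET_VMA, PR_SET_VMA_ANON_NAME, p, len, "example: demo pool");
      getchar ();   /* inspect /proc/self/maps while the program waits */
      return 0;
    }
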
Florian Weimer 0dc7fc1cf0 malloc: Remove bin scanning from memalign (bug 30723)
On the test workload (mpv --cache=yes with VP9 video decoding), the
bin scanning has a very poor success rate (less than 2%).  The tcache
scanning has about 50% success rate, so keep that.

Update comments in malloc/tst-memalign-2 to indicate the purpose
of the tests.  Even with the scanning removed, the additional
merging opportunities since commit 542b110585
("malloc: Enable merging of remainders in memalign (bug 30723)")
are sufficient to pass the existing large bins test.

Remove leftover variables from _int_free from refactoring in the
same commit.

Reviewed-by: DJ Delorie <dj@redhat.com>
2023-08-15 08:07:36 +02:00
Florian Weimer 542b110585 malloc: Enable merging of remainders in memalign (bug 30723)
Previously, calling _int_free from _int_memalign could put remainders
into the tcache or into fastbins, where they are invisible to the
low-level allocator.  This results in missed merge opportunities
because once these freed chunks become available to the low-level
allocator, further memalign allocations (even of the same size) are
likely obstructing merges.

Furthermore, during forwards merging in _int_memalign, do not
completely give up when the remainder is too small to serve as a
chunk on its own.  We can still give it back if it can be merged
with the following unused chunk.  This makes it more likely that
memalign calls in a loop achieve a compact memory layout,
independently of initial heap layout.

Drop some useless (unsigned long) casts along the way, and tweak
the style to more closely match GNU on changed lines.

Reviewed-by: DJ Delorie <dj@redhat.com>
2023-08-11 11:18:17 +02:00
Siddhesh Poyarekar 2fb12bbd09 realloc: Limit chunk reuse to only growing requests [BZ #30579]
The trim_threshold is too aggressive a heuristic to decide if chunk
reuse is OK for reallocated memory; for repeated small, shrinking
allocations it leads to internal fragmentation and for repeated larger
allocations that fragmentation may blow up even worse due to the dynamic
nature of the threshold.

Limit reuse only when it is within the alignment padding, which is 2 *
size_t for heap allocations and a page size for mmapped allocations.
There's the added wrinkle of THP, but this fix ignores it for now,
pessimizing that case in favor of keeping fragmentation low.

This resolves BZ #30579.

Signed-off-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
Reported-by: Nicolas Dusart <nicolas@freedelity.be>
Reported-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Tested-by: Aurelien Jarno <aurelien@aurel32.net>
2023-07-06 11:10:27 -04:00
Paul Pluzhnikov 7f0d9e61f4 Fix all the remaining misspellings -- BZ 25337 2023-06-02 01:39:48 +00:00
DJ Delorie d1417176a3 aligned_alloc: conform to C17
This patch adds the strict checking for power-of-two alignments
in aligned_alloc(), and updates the manual accordingly.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2023-05-08 16:40:10 -04:00
DJ Delorie e5524ef335 malloc: set NON_MAIN_ARENA flag for reclaimed memalign chunk (BZ #30101)
Based on these comments in malloc.c:

   size field is or'ed with NON_MAIN_ARENA if the chunk was obtained
   from a non-main arena.  This is only set immediately before handing
   the chunk to the user, if necessary.

   The NON_MAIN_ARENA flag is never set for unsorted chunks, so it
   does not have to be taken into account in size comparisons.

When we pull a chunk off the unsorted list (or any list) we need to
make sure that flag is set properly before returning the chunk.

Use the rounded-up size for chunk_ok_for_memalign()

Do not scan the arena for reusable chunks if there's no arena.

Account for chunk overhead when determining if a chunk is a reuse
candidate.

mcheck interferes with memalign, so skip mcheck variants of
memalign tests.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
2023-04-18 10:58:42 -04:00
DJ Delorie 24cdd6c71d memalign: Support scanning for aligned chunks.
This patch adds a chunk scanning algorithm to the _int_memalign code
path that reduces heap fragmentation by reusing already aligned chunks
instead of always looking for chunks of larger sizes and splitting
them.  The tcache macros are extended to allow removing a chunk from
the middle of the list.

The goal is to fix the pathological use cases where heaps grow
continuously in workloads that are heavy users of memalign.

Note that tst-memalign-2 checks for tcache operation, which
malloc-check bypasses.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2023-03-29 16:36:03 -04:00
Adhemerval Zanella Netto 33237fe83d Remove --enable-tunables configure option
And make tunables always supported.  The configure option was added in
glibc 2.25 and some features require it (such as hwcap mask, huge pages
support, and lock elision tuning).  Removing it also simplifies the build
permutations.

Changes from v1:
 * Remove glibc.rtld.dynamic_sort changes, it is orthogonal and needs
   more discussion.
 * Cleanup more code.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2023-03-29 14:33:06 -03:00
Robert Morell 6a734e62f1 malloc: Fix transposed arguments in sysmalloc_mmap_fallback call
git commit 0849eed45d ("malloc: Move MORECORE fallback mmap to
sysmalloc_mmap_fallback") moved a block of code from sysmalloc to a
new helper function sysmalloc_mmap_fallback(), but 'pagesize' is used
for the 'minsize' argument and 'MMAP_AS_MORECORE_SIZE' for the
'pagesize' argument.

Fixes: 0849eed45d ("malloc: Move MORECORE fallback mmap to sysmalloc_mmap_fallback")
Signed-off-by: Robert Morell <rmorell@nvidia.com>
Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2023-03-08 10:13:31 -03:00
Ayush Mittal 3f84f1159e malloc: remove redundant check of unsorted bin corruption
* malloc/malloc.c (_int_malloc): remove redundant check of
  unsorted bin corruption

With commit "b90ddd08f6dd688e651df9ee89ca3a69ff88cd0c"
(malloc: Additional checks for unsorted bin integrity),
same check of (bck->fd != victim) is added before checking of unsorted
chunk corruption, which was added in "bdc3009b8ff0effdbbfb05eb6b10966753cbf9b8"
(Added check before removing from unsorted list).

..
3773           if (__glibc_unlikely (bck->fd != victim)
3774               || __glibc_unlikely (victim->fd != unsorted_chunks (av)))
3775             malloc_printerr ("malloc(): unsorted double linked list corrupted");
..
..
3815           /* remove from unsorted list */
3816          if (__glibc_unlikely (bck->fd != victim))
3817            malloc_printerr ("malloc(): corrupted unsorted chunks 3");
3818          unsorted_chunks (av)->bk = bck;
..

So this extra check can be removed.

Signed-off-by: Maninder Singh <maninder1.s@samsung.com>
Signed-off-by: Ayush Mittal <ayush.m@samsung.com>
Reviewed-by: DJ Delorie <dj@redhat.com>
2023-02-22 16:56:45 -05:00
Joseph Myers 6d7e8eda9b Update copyright dates with scripts/update-copyrights 2023-01-06 21:14:39 +00:00
Siddhesh Poyarekar f4f2ca1509 realloc: Return unchanged if request is within usable size
If there is enough space in the chunk to satisfy the new size, return
the old pointer as is, thus avoiding any locks or reallocations.  The
only real place this has a benefit is in large chunks that tend to get
satisfied with mmap, since there is a large enough spare size (up to a
page) for it to matter.  For allocations on heap, the extra size is
typically barely a few bytes (up to 15) and it's unlikely that it would
make much difference in performance.

Also added a smoke test to ensure that the old pointer is returned
unchanged if the new size to realloc is within usable size of the old
pointer.

Signed-off-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
Reviewed-by: DJ Delorie <dj@redhat.com>
2022-12-08 11:23:43 -05:00
Florian Weimer 15a94e6668 malloc: Switch global_max_fast to uint8_t
MAX_FAST_SIZE is 160 at most, so a uint8_t is sufficient.  This makes
it harder to use memory corruption (overwriting global_max_fast with a
large value) to fundamentally alter malloc behavior.

Reviewed-by: DJ Delorie <dj@redhat.com>
2022-10-13 05:45:41 +02:00
Wilco Dijkstra 22f4ab2d20 Use atomic_exchange_release/acquire
Rename atomic_exchange_rel/acq to use atomic_exchange_release/acquire
since these map to the standard C11 atomic builtins.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2022-09-26 16:58:08 +01:00
Qingqing Li 774d43f27d malloc: Print error when oldsize is not equal to the current size.
This is used to detect errors early.  The read of the oldsize is
not protected by any lock, so check this value to avoid causing
bigger mistakes.

Reviewed-by: DJ Delorie <dj@redhat.com>
2022-09-22 15:32:56 -04:00
Wilco Dijkstra a364a3a709 Use C11 atomics instead of atomic_decrement(_val)
Replace atomic_decrement and atomic_decrement_val with
atomic_fetch_add_relaxed.

Reviewed-by: DJ Delorie <dj@redhat.com>
2022-09-09 14:22:26 +01:00
Wilco Dijkstra 53b251c9ff Use C11 atomics instead of atomic_add(_zero)
Replace atomic_add and atomic_add_zero with atomic_fetch_add_relaxed.

Reviewed-by: DJ Delorie <dj@redhat.com>
2022-09-09 14:11:23 +01:00
Wilco Dijkstra 89d40cacd0 malloc: Use C11 atomics rather than atomic_exchange_and_add
Replace a few counters using atomic_exchange_and_add with
atomic_fetch_add_relaxed.

Reviewed-by: Florian Weimer <fweimer@redhat.com>
2022-09-06 16:49:01 +01:00
Florian Weimer 85860ad6ea malloc: Do not use MAP_NORESERVE to allocate heap segments
Address space for heap segments is reserved in a mmap call with
MAP_ANONYMOUS | MAP_PRIVATE and protection flags PROT_NONE.  This
reservation does not count against the RSS limit of the process or
system.  Backing memory is allocated using mprotect in alloc_new_heap
and grow_heap, and at this point, the allocator expects the kernel
to provide memory (subject to memory overcommit).

The SIGSEGV that might be generated due to MAP_NORESERVE (according to
the mmap manual page) does not seem to occur in practice; it's always
SIGKILL from the OOM killer.  Even if there is a way that SIGSEGV
could be generated, it is confusing to applications that this only
happens for secondary heaps, not for large mmap-based allocations,
and not for the main arena.

Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2022-08-15 16:45:40 +02:00
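
A standalone illustration of the reserve-then-commit pattern described above
(plain PROT_NONE mmap plus mprotect, no MAP_NORESERVE):

    #define _DEFAULT_SOURCE
    #include <sys/mman.h>

    int
    main (void)
    {
      size_t reserve = 64UL << 20;     /* reserve 64 MiB of address space */
      char *base = mmap (NULL, reserve, PROT_NONE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      if (base == MAP_FAILED)
        return 1;
      /* Commit the first 1 MiB; only now must the kernel provide backing
         memory (subject to overcommit).  */
      if (mprotect (base, 1 << 20, PROT_READ | PROT_WRITE) != 0)
        return 2;
      base[0] = 1;                     /* touching the committed part is fine */
      return 0;
    }
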
Florian Weimer 9001cb1102 assert: Do not use stderr in libc-internal assert
Redirect internal assertion failures to __libc_assert_fail, based on
__libc_message, which writes directly to STDERR_FILENO
and calls abort.  Also disable message translation and reword the
error message slightly (adjusting stdlib/tst-bz20544 accordingly).

As a result of these changes, malloc no longer needs its own
redefinition of __assert_fail.

__libc_assert_fail needs to be stubbed out during rtld dependency
analysis because the rtld rebuilds turn __libc_assert_fail into
__assert_fail, which is unconditionally provided by elf/dl-minimal.c.

This change is not possible for the public assert macro and its
__assert_fail function because POSIX requires that the diagnostic
is written to stderr.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2022-08-03 11:43:04 +02:00
Florian Weimer cca9684f2d stdio: Clean up __libc_message after unconditional abort
Since commit ec2c1fcefb ("malloc:
Abort on heap corruption, without a backtrace [BZ #21754]"),
__libc_message always terminates the process.  Since commit
a289ea09ea ("Do not print backtraces
on fatal glibc errors"), the backtrace facility has been removed.
Therefore, remove enum __libc_message_action and the action
argument of __libc_message, and mark __libc_message as _Noreturn.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2022-08-03 11:42:39 +02:00
Florian Weimer 7187efd0aa malloc: Use __getrandom_nocancel during tcache initiailization
Cancellation currently cannot happen at this point because dlopen
as used by the unwind link always performs additional allocations
for libgcc_s.so.1, even if it has been loaded already as a dependency
of the main executable.  But it seems prudent not to rely on this
quirk.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2022-08-01 15:50:09 +02:00
Florian Weimer ac8047cdf3 malloc: Simplify implementation of __malloc_assert
It is prudent not to run too much code after detecting heap
corruption, and __fxprintf is really complex.  The line number
and file name do not carry much information, so it is not included
in the error message.  (__libc_message only supports %s formatting.)
The function name and assertion should provide some context.

Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2022-07-21 16:33:04 +02:00
Florian Weimer 7519dee356 malloc: Simplify checked_request2size interface
In-band signaling avoids an uninitialized variable warning when
building with -Og and GCC 12.

Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2022-07-05 11:04:45 +02:00
Adhemerval Zanella a4ea49f85e malloc: Fix duplicate inline for do_set_mxfast 2022-03-23 12:28:44 -03:00
Paul Eggert 581c785bf3 Update copyright dates with scripts/update-copyrights
I used these shell commands:

../glibc/scripts/update-copyrights $PWD/../gnulib/build-aux/update-copyright
(cd ../glibc && git commit -am"[this commit message]")

and then ignored the output, which consisted of lines saying "FOO: warning:
copyright statement not found" for each of 7061 files FOO.

I then removed trailing white space from math/tgmath.h,
support/tst-support-open-dev-null-range.c, and
sysdeps/x86_64/multiarch/strlen-vec.S, to work around the following
obscure pre-commit check failure diagnostics from Savannah.  I don't
know why I run into these diagnostics whereas others evidently do not.

remote: *** 912-#endif
remote: *** 913:
remote: *** 914-
remote: *** error: lines with trailing whitespace found
...
remote: *** error: sysdeps/unix/sysv/linux/statx_cp.c: trailing lines
2022-01-01 11:40:24 -08:00
Patrick McGehearty 0a4df6f534 Remove upper limit on tunable MALLOC_MMAP_THRESHOLD
The current limit on MALLOC_MMAP_THRESHOLD is either 1 Mbyte (for
32-bit apps) or 32 Mbytes (for 64-bit apps).  This value was set by a
patch dated 2006 (15 years ago).  Attempts to set the threshold higher
are currently ignored.

The default behavior is appropriate for many highly parallel
applications where many processes or threads are sharing RAM. In other
situations where the number of active processes or threads closely
matches the number of cores, a much higher limit may be desired by the
application designer. By today's standards on personal computers and
small servers, 2 Gbytes of RAM per core is commonly available. On
larger systems 4 Gbytes or more of RAM is sometimes available.
Instead of raising the limit to match current needs, this patch
proposes to remove the limit of the tunable, leaving the decision up
to the user of a tunable to judge the best value for their needs.

This patch does not change any of the defaults for malloc tunables,
retaining the current behavior of the dynamic malloc mmap threshold.

bugzilla 27801 - Remove upper limit on tunable MALLOC_MMAP_THRESHOLD
Reviewed-by: DJ Delorie <dj@redhat.com>

malloc/
        malloc.c changed do_set_mmap_threshold to remove test
        for HEAP_MAX_SIZE.
2021-12-16 17:24:37 +00:00
Adhemerval Zanella 0f982c1827 malloc: Enable huge page support on main arena
This patch adds huge page support to main arena allocation, enabled
with the tunable glibc.malloc.hugetlb=2.  The patch essentially
disables the __glibc_morecore() sbrk() call (similar to what memory
tagging does when the sbrk() call does not support it) and falls back
to the default page size if the memory allocation fails.

Checked on x86_64-linux-gnu.

Reviewed-by: DJ Delorie <dj@redhat.com>
2021-12-15 17:35:39 -03:00
Adhemerval Zanella 0849eed45d malloc: Move MORECORE fallback mmap to sysmalloc_mmap_fallback
So it can be used on hugepage code as well.

Reviewed-by: DJ Delorie <dj@redhat.com>
2021-12-15 17:35:39 -03:00
Adhemerval Zanella c1beb51d08 malloc: Add Huge Page support to arenas
It is enabled by default when glibc.malloc.hugetlb is set to 2 or higher.
It also uses non-configurable minimum and maximum values, currently set
respectively to 1 and 4 times the selected huge page size.

The arena allocation with huge pages does not use MAP_NORESERVE.  As
indicated by the kernel internal documentation [1], the flag might
trigger a SIGBUS on soft page faults if no pages are left in the pool
at the time of the memory access.

On systems without a reserved huge pages pool, the tests just stress the
mmap(MAP_HUGETLB) allocation-failure path.  To improve test coverage it
is required to create a pool with some allocated pages.

Checked on x86_64-linux-gnu with no reserved pages, 10 reserved pages
(which triggers mmap(MAP_HUGETLB) failures) and with 256 reserved pages
(which does not trigger mmap(MAP_HUGETLB) failures).

[1] https://www.kernel.org/doc/html/v4.18/vm/hugetlbfs_reserv.html#resv-map-modifications

Reviewed-by: DJ Delorie <dj@redhat.com>
2021-12-15 17:35:39 -03:00
Adhemerval Zanella 98d5fcb8d0 malloc: Add Huge Page support for mmap
With the morecore hook removed, there is no easy way to provide huge
page support in the glibc allocator without resorting to transparent
huge pages.  And some users and programs do prefer to use huge pages
directly instead of THP for multiple reasons: no splitting or re-merging
by the VM, no TLB shootdowns for running processes, fast allocation
from the reserve pool, no competition with the rest of the processes
unlike THP, no swapping at all, etc.

This patch extends the 'glibc.malloc.hugetlb' tunable: the value
'2' means to use huge pages directly with the system default size,
while any other positive value means a specific page size that is
matched against the ones supported by the system.

Currently only memory allocated through sysmalloc() is handled; the
arenas still use the default system page size.

For testing, a new rule tests-malloc-hugetlb2 is added, which runs the
added tests with the required GLIBC_TUNABLES setting.  On systems
without a reserved huge pages pool, this just stresses the
mmap(MAP_HUGETLB) allocation-failure path.  To improve test coverage it
is required to create a pool with some allocated pages.

Checked on x86_64-linux-gnu.

Reviewed-by: DJ Delorie <dj@redhat.com>
2021-12-15 17:35:38 -03:00
Adhemerval Zanella 6cc3ccc67e malloc: Move mmap logic to its own function
So it can be used with different pagesize and flags.

Reviewed-by: DJ Delorie <dj@redhat.com>
2021-12-15 17:35:15 -03:00
Adhemerval Zanella 7478c9959a malloc: Add THP/madvise support for sbrk
To increase effectiveness of Transparent Huge Pages with madvise, the
large page size is used instead of the page size as the sbrk increment
for the main arena.

Checked on x86_64-linux-gnu.

Reviewed-by: DJ Delorie <dj@redhat.com>
2021-12-15 17:35:15 -03:00
Adhemerval Zanella 5f6d8d97c6 malloc: Add madvise support for Transparent Huge Pages
Linux Transparent Huge Pages (THP) currently supports three different
states: 'never', 'madvise', and 'always'.  'never' is self-explanatory
and 'always' enables THP for all anonymous pages.  However, 'madvise' is
still the default on some systems, and in that case THP will only be
used if the memory range is explicitly advised by the program through a
madvise(MADV_HUGEPAGE) call.

To enable it a new tunable is provided, 'glibc.malloc.hugetlb', where
setting it to a value different from 0 enables the madvise call.

This patch issues the madvise(MADV_HUGEPAGE) call after a successful
mmap() call in sysmalloc() for sizes larger than the default huge page
size.  The madvise() call is disabled if the system does not support THP
or has the mode set to "never".  Linux only supports one page size for
THP, even if the architecture supports multiple sizes.

For testing, a new rule tests-malloc-hugetlb1 is added, which runs the
added tests with the required GLIBC_TUNABLES setting.

Checked on x86_64-linux-gnu.

Reviewed-by: DJ Delorie <dj@redhat.com>
2021-12-15 17:35:14 -03:00
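
A minimal sketch of the mmap-then-madvise step this describes (the real code
additionally consults the THP mode and the system huge page size; the 2 MiB
threshold below is just a common default):

    #define _DEFAULT_SOURCE
    #include <sys/mman.h>
    #include <stddef.h>

    static void *
    alloc_with_thp_hint (size_t size)
    {
      void *p = mmap (NULL, size, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      if (p == MAP_FAILED)
        return NULL;
      if (size >= (2UL << 20))
        madvise (p, size, MADV_HUGEPAGE);   /* a hint; failure can be ignored */
      return p;
    }
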
Siddhesh Poyarekar 88e316b064 Handle NULL input to malloc_usable_size [BZ #28506]
Hoist the NULL check for malloc_usable_size into its entry points in
malloc-debug and malloc and assume non-NULL in all callees.  This fixes
BZ #28506

Signed-off-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
Reviewed-by: Florian Weimer <fweimer@redhat.com>
Reviewed-by: Richard W.M. Jones <rjones@redhat.com>
2021-10-29 14:53:55 +05:30
Siddhesh Poyarekar 30891f35fa Remove "Contributed by" lines
We stopped adding "Contributed by" or similar lines in sources in 2012
in favour of git logs and keeping the Contributors section of the
glibc manual up to date.  Removing these lines makes the license
header a bit more consistent across files and also removes the
possibility of error in attribution when license blocks or files are
copied across since the contributed-by lines don't actually reflect
reality in those cases.

Move all "Contributed by" and similar lines (Written by, Test by,
etc.) into a new file CONTRIBUTED-BY to retain record of these
contributions.  These contributors are also mentioned in
manual/contrib.texi, so we just maintain this additional record as a
courtesy to the earlier developers.

The following scripts were used to filter a list of files to edit in
place and to clean up the CONTRIBUTED-BY file respectively.  These
were not added to the glibc sources because they're not expected to be
of any use in future given that this is a one time task:

https://gist.github.com/siddhesh/b5ecac94eabfd72ed2916d6d8157e7dc
https://gist.github.com/siddhesh/15ea1f5e435ace9774f485030695ee02

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2021-09-03 22:06:44 +05:30
Siddhesh Poyarekar 5b8d271571 Fix build and tests with --disable-tunables
Remove unused code and declare __libc_mallopt when !IS_IN (libc) to
allow the debug hook to build with --disable-tunables.

Also, run tst-ifunc-isa-2* tests only when tunables are enabled since
the result depends on it.

Tested on x86_64.

Reported-by: Matheus Castanho <msc@linux.ibm.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2021-07-23 13:57:56 +05:30
Siddhesh Poyarekar 0552fd2c7d Move malloc_{g,s}et_state to libc_malloc_debug
These deprecated functions are only safe to call from
__malloc_initialize_hook and as a result, are not useful in the
general case.  Move the implementations to libc_malloc_debug so that
existing binaries that need it will now have to preload the debug DSO
to work correctly.

This also allows simplification of the core malloc implementation by
dropping all the undumping support code that was added to make
malloc_set_state work.

One known breakage is that of ancient emacs binaries that depend on
this.  They will now crash when running with this libc.  With
LD_BIND_NOW=1, it will terminate immediately because of not being able
to find malloc_set_state but with lazy binding it will crash in
unpredictable ways.  It will need a preloaded libc_malloc_debug.so so
that its initialization hook is executed to allow its malloc
implementation to work properly.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
2021-07-22 18:38:10 +05:30
Siddhesh Poyarekar b5bd5bfe88 glibc.malloc.check: Wean away from malloc hooks
The malloc-check debugging feature is tightly integrated into glibc
malloc, so thanks to an idea from Florian Weimer, much of the malloc
implementation has been moved into libc_malloc_debug.so to support
malloc-check.  Due to this, glibc malloc and malloc-check can no
longer work together; they use altogether different (but identical)
structures for heap management.  This should not make a difference
though since the malloc check hook is not disabled anywhere.
malloc_set_state does, but it does so early enough that it shouldn't
cause any problems.

The malloc check tunable is now in the debug DSO and has no effect
when the DSO is not preloaded.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
2021-07-22 18:38:08 +05:30
Siddhesh Poyarekar cc35896ea3 Simplify __malloc_initialized
Now that mcheck no longer needs to check __malloc_initialized (and no
other third party hook can since the symbol is not exported), make the
variable boolean and static so that it is used strictly within malloc.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
2021-07-22 18:38:04 +05:30
Siddhesh Poyarekar 2d2d9f2b48 Move malloc hooks into a compat DSO
Remove all malloc hook uses from core malloc functions and move it
into a new library libc_malloc_debug.so.  With this, the hooks now no
longer have any effect on the core library.

libc_malloc_debug.so is a malloc interposer that needs to be preloaded
to get hooks functionality back so that the debugging features that
depend on the hooks, i.e. malloc-check, mcheck and mtrace work again.
Without the preloaded DSO these debugging features will be nops.
These features will be ported away from hooks in subsequent patches.

Similarly, legacy applications that need hooks functionality need to
preload libc_malloc_debug.so.

The symbols exported by libc_malloc_debug.so are maintained at exactly
the same version as libc.so.

Finally, static binaries will no longer be able to use malloc
debugging features since they cannot preload the debugging DSO.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
2021-07-22 18:37:59 +05:30
Siddhesh Poyarekar 55a4dd3930 Remove __morecore and __default_morecore
Make the __morecore and __default_morecore symbols compat-only and
remove their declarations from the API.  Also, include morecore.c
directly into malloc.c; this should ideally get merged into malloc in
a future cleanup.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
2021-07-22 18:37:57 +05:30
Siddhesh Poyarekar 57b07bede1 Remove __after_morecore_hook
Remove __after_morecore_hook from the API and finalize the symbol so
that it can no longer be used in new applications.  Old applications
using __after_morecore_hook will find that their hook is no longer
called.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
2021-07-22 18:37:54 +05:30
Florian Weimer 7c241325d6 Force building with -fno-common
As a result, it is not necessary to specify __attribute__ ((nocommon))
on individual definitions.

GCC 10 defaults to -fno-common on all architectures except ARC,
but this change is compatible with older GCC versions and ARC, too.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2021-07-09 20:09:14 +02:00
Siddhesh Poyarekar 79969f41a7 _int_realloc is static
_int_realloc is correctly declared at the top to be static, but
incorrectly defined without the static keyword.  Fix that.  The
generated binaries have identical code.
2021-07-08 18:47:21 +05:30
Siddhesh Poyarekar fc859c3048 Harden tcache double-free check
The tcache allocator layer uses the tcache pointer as a key to
identify a block that may be freed twice.  Since this is in the
application data area, an attacker exploiting a use-after-free could
potentially get access to the entire tcache structure through this
key.  A detailed write-up was provided by Awarau here:

https://awaraucom.wordpress.com/2020/07/19/house-of-io-remastered/

Replace this static pointer use for key checking with one that is
generated at malloc initialization.  The first attempt is through
getrandom with a fallback to random_bits(), which is a simple
pseudo-random number generator based on the clock.  The fallback ought
to be sufficient since the goal of the randomness is only to make the
key arbitrary enough that it is very unlikely to collide with user
data.

Co-authored-by: Eyal Itkin <eyalit@checkpoint.com>
2021-07-08 01:39:38 +05:30
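
A self-contained sketch of the key initialization (glibc uses internal
non-cancellable wrappers and random_bits(); the clock mixing below is only
illustrative):

    #include <stdint.h>
    #include <sys/random.h>
    #include <time.h>

    static uintptr_t tcache_key;

    static void
    tcache_key_init (void)
    {
      if (getrandom (&tcache_key, sizeof tcache_key, GRND_NONBLOCK)
          != (ssize_t) sizeof tcache_key)
        {
          /* Fallback: the key only needs to be unlikely to collide with
             application data, not cryptographically strong.  */
          struct timespec ts;
          clock_gettime (CLOCK_MONOTONIC, &ts);
          tcache_key = ((uintptr_t) ts.tv_nsec * 2654435761u)
                       ^ (uintptr_t) ts.tv_sec;
        }
    }
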
JeffyChen dfec225ee1 malloc: Initiate tcache shutdown even without allocations [BZ #28028]
After commit 1e26d35193 ("malloc: Fix
tcache leak after thread destruction [BZ #22111]"),
tcache_shutting_down is still not early enough.  When we detach a
thread with no tcache allocated, tcache_shutting_down would still be
false.

Reviewed-by: DJ Delorie <dj@redhat.com>
2021-07-02 17:39:24 +02:00
Xeonacid 5295172e20 fix typo
"accomodate" should be "accommodate"
Reviewed-by: Paul Zimmermann <Paul.Zimmermann@inria.fr>
2021-06-02 12:16:49 +02:00
Paul Eggert 9f1bed18f9 Further fixes for REALLOC_ZERO_BYTES_FREES comment
* malloc/malloc.c (REALLOC_ZERO_BYTES_FREES): Improve comment further.
2021-04-12 00:45:06 -07:00
Paul Eggert dff9e592b8 Fix REALLOC_ZERO_BYTES_FREES comment to match C17
* malloc/malloc.c (REALLOC_ZERO_BYTES_FREES):
Update comment to match current C standard.
2021-04-11 14:39:20 -07:00
Szabolcs Nagy 850dbf24ee malloc: Ensure mtag code path in checked_request2size is cold
This is a workaround (hack) for a gcc optimization issue (PR 99551).
Without this the generated code may evaluate the expression in the
cold path which causes performance regression for small allocations
in the memory tagging disabled (common) case.

Reviewed-by: DJ Delorie <dj@redhat.com>
2021-03-26 11:03:06 +00:00
Szabolcs Nagy 05f878c58e malloc: Remove unnecessary tagging around _mid_memalign
The internal _mid_memalign already returns newly tagged memory.
(__libc_memalign and posix_memalign already relied on this, this
patch fixes the other call sites.)

Reviewed-by: DJ Delorie <dj@redhat.com>
2021-03-26 11:03:06 +00:00
Szabolcs Nagy ca89f1c7d7 malloc: Rename chunk2rawmem
The previous patch ensured that all chunk to mem computations use
chunk2rawmem, so now we can rename it to chunk2mem, and in the few
cases where the tag of mem is relevant chunk2mem_tag can be used.

Replaced tag_at (chunk2rawmem (x)) with chunk2mem_tag (x).
Renamed chunk2rawmem to chunk2mem.

Reviewed-by: DJ Delorie <dj@redhat.com>
2021-03-26 11:03:06 +00:00
Szabolcs Nagy 4eac0ab186 malloc: Use chunk2rawmem throughout
The difference between chunk2mem and chunk2rawmem is that the latter
does not get the memory tag for the returned pointer.  It turns out
chunk2rawmem almost always works:

The input of chunk2mem is a chunk pointer that is untagged so it can
access the chunk header. All memory that is not user allocated heap
memory is untagged, which in the current implementation means that it
has the 0 tag, but this patch does not rely on the tag value. The
patch relies on that chunk operations are either done on untagged
chunks or without doing memory access to the user owned part.

Internal interface contracts:

sysmalloc: Returns untagged memory.
_int_malloc: Returns untagged memory.
_int_free: Takes untagged memory.
_int_memalign: Returns untagged memory.
_int_realloc: Takes and returns tagged memory.

So only _int_realloc and functions outside this list need care.
Alignment checks do not need the right tag and tcache works with
untagged memory.

tag_at was kept in realloc after an mremap, which is not strictly
necessary, since the pointer is only used to retag the memory, but this
way the tag is guaranteed to be different from the old tag.

Reviewed-by: DJ Delorie <dj@redhat.com>
2021-03-26 11:03:06 +00:00
Szabolcs Nagy 14652f60a4 malloc: Use different tag after mremap
The comment explained why a different tag is used after mremap, but
for that a correctly tagged pointer should be passed to tag_new_usable.
Use chunk2mem to get the tag.

Reviewed-by: DJ Delorie <dj@redhat.com>
2021-03-26 11:03:06 +00:00
Szabolcs Nagy faf003ed8d malloc: Use memsize instead of CHUNK_AVAILABLE_SIZE
This is a pure refactoring change that does not affect behaviour.

The CHUNK_AVAILABLE_SIZE name was unclear, the memsize name tries to
follow the existing convention of mem denoting the allocation that is
handed out to the user, while chunk is its internally used container.

The user owned memory for a given chunk starts at chunk2mem(p) and
the size is memsize(p).  It is not valid to use on dumped heap chunks.

Moved the definition next to other chunk and mem related macros.

Reviewed-by: DJ Delorie <dj@redhat.com>
2021-03-26 11:03:06 +00:00
Szabolcs Nagy d32624802d malloc: Use mtag_enabled instead of USE_MTAG
Use the runtime check where possible: it should not cause slow down in
the !USE_MTAG case since then mtag_enabled is constant false, but it
allows compiling the tagging logic so it's less likely to break or
diverge when developers only test the !USE_MTAG case.

Reviewed-by: DJ Delorie <dj@redhat.com>
2021-03-26 11:03:06 +00:00
Szabolcs Nagy 63a20eb03c malloc: Use branches instead of mtag_granule_mask
The branches may be better optimized since mtag_enabled is widely used.

Granule size larger than a chunk header is not supported since then we
cannot have both the chunk header and user area granule aligned.  To
fix that for targets with large granule, the chunk layout has to change.

So code that attempted to handle the granule mask generally was changed.
This simplified CHUNK_AVAILABLE_SIZE and the logic in malloc_usable_size.

Reviewed-by: DJ Delorie <dj@redhat.com>
2021-03-26 11:03:06 +00:00
Szabolcs Nagy 9d61722b59 malloc: Change calloc when tagging is disabled
When glibc is built with memory tagging support (USE_MTAG) but it is not
enabled at runtime (mtag_enabled) then unconditional memset was used
even though that can be often avoided.

This is for performance when tagging is supported but not enabled.
The extra check should have no overhead: tag_new_zero_region already
had a runtime check which the compiler can now optimize away.

Reviewed-by: DJ Delorie <dj@redhat.com>
2021-03-26 11:03:06 +00:00
Szabolcs Nagy c076a0bc69 malloc: Only support zeroing and not arbitrary memset with mtag
The memset api is suboptimal and does not provide much benefit. Memory
tagging only needs a zeroing memset (and only for memory that's sized
and aligned to multiples of the tag granule), so change the internal
api and the target hooks accordingly.  This is to simplify the
implementation of the target hook.

Reviewed-by: DJ Delorie <dj@redhat.com>
2021-03-26 11:03:06 +00:00
Szabolcs Nagy 42bac88a21 malloc: Use global flag instead of function pointer dispatch for mtag
A flag check can be faster than function pointers because of how
branch prediction and speculation works and it can also remove a layer
of indirection when there is a mismatch between the malloc internal
tag_* api and __libc_mtag_* target hooks.

Memory tagging wrapper functions are moved to malloc.c from arena.c and
the logic now checks mtag_enabled.  The definition of tag_new_usable is
moved after chunk related definitions.

This refactoring also allows using mtag_enabled checks instead of
USE_MTAG ifdefs when memory tagging support only changes code logic
when memory tagging is enabled at runtime. Note: an "if (false)" code
block is optimized away even at -O0 by gcc.

Reviewed-by: DJ Delorie <dj@redhat.com>
2021-03-26 11:03:06 +00:00
Szabolcs Nagy 0c719cf42c malloc: Refactor TAG_ macros to avoid indirection
This does not change behaviour, just removes one layer of indirection
in the internal memory tagging logic.

Use tag_ and mtag_ prefixes instead of __tag_ and __mtag_ since these
are all symbols with internal linkage, private to malloc.c, so there
is no user namespace pollution issue.

Reviewed-by: DJ Delorie <dj@redhat.com>
2021-03-26 11:03:06 +00:00
Szabolcs Nagy b9b85be6ea malloc: Avoid tagging mmaped memory on free
Either the memory belongs to the dumped area, in which case we don't
want to tag (the dumped area has the same tag as malloc internal data
so tagging is unnecessary, but chunks there may not have the right
alignment for the tag granule), or the memory will be unmapped
immediately (and thus tagging is not useful).

Reviewed-by: DJ Delorie <dj@redhat.com>
2021-03-26 11:03:06 +00:00
Szabolcs Nagy 0ae773bba0 malloc: Move MTAG_MMAP_FLAGS definition
This is only used internally in malloc.c, the extern declaration
was wrong, __mtag_mmap_flags has internal linkage.

Reviewed-by: DJ Delorie <dj@redhat.com>
2021-03-26 11:03:06 +00:00
Szabolcs Nagy 8ae909a533 malloc: Fix a potential realloc issue with memory tagging
At an _int_free call site in realloc the wrong size was used for tag
clearing: the chunk header of the next chunk was also cleared, which
may work in practice but is logically wrong.

The tag clearing is moved before the memcpy to save a tag computation;
this avoids a chunk2mem.  Another chunk2mem is removed because newmem
does not have to be recomputed.  Whitespace got fixed too.

Reviewed-by: DJ Delorie <dj@redhat.com>
2021-03-26 11:03:06 +00:00
Szabolcs Nagy 42cc96066b malloc: Fix a realloc crash with heap tagging [BZ 27468]
_int_free must be called with a chunk that has its tag reset. This was
missing in a rare case that could crash when heap tagging is enabled:
when in a multi-threaded process the current arena runs out of memory
during realloc, but another arena still has space to finish the realloc
then _int_free was called without clearing the user allocation tags.

Fixes bug 27468.

Reviewed-by: DJ Delorie <dj@redhat.com>
2021-03-26 10:43:51 +00:00
Florian Weimer 0923f74ada Support for multiple versions in versioned_symbol, compat_symbol
This essentially folds compat_symbol_unique functionality into
compat_symbol.

This change eliminates the need for intermediate aliases for defining
multiple symbol versions, for both compat_symbol and versioned_symbol.
Some binutils versions do not support multiple versions per symbol on
some targets, so aliases are automatically introduced, similar to what
compat_symbol_unique did.  To reduce symbol table sizes, a configure
check is added to avoid these aliases if they are not needed.

The new mechanism works with data symbols as well as function symbols,
due to the way an assembler-level redirect is used.  It is not
compatible with weak symbols for old binutils versions, which is why
the definition of __malloc_initialize_hook had to be changed.  This
is not a loss of functionality because weak symbols do not matter
to dynamic linking.

The placeholder symbol needs repeating in nptl/libpthread-compat.c
now that compat_symbol is used, but that seems more obvious than
introducing yet another macro.

A subtle difference was that compat_symbol_unique made the symbol
global automatically.  compat_symbol does not do this, so static
had to be removed from the definition of
__libpthread_version_placeholder.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2021-03-25 12:33:02 +01:00
Paul Eggert 2b778ceb40 Update copyright dates with scripts/update-copyrights
I used these shell commands:

../glibc/scripts/update-copyrights $PWD/../gnulib/build-aux/update-copyright
(cd ../glibc && git commit -am"[this commit message]")

and then ignored the output, which consisted of lines saying "FOO: warning:
copyright statement not found" for each of 6694 files FOO.
I then removed trailing white space from benchtests/bench-pthread-locks.c
and iconvdata/tst-iconv-big5-hkscs-to-2ucs4.c, to work around this
diagnostic from Savannah:
remote: *** pre-commit check failed ...
remote: *** error: lines with trailing whitespace found
remote: error: hook declined to update refs/heads/master
2021-01-02 12:17:34 -08:00
Paul Eggert 69fda43b8d free: preserve errno [BZ#17924]
In the next release of POSIX, free must preserve errno
<https://www.austingroupbugs.net/view.php?id=385>.
Modify __libc_free to save and restore errno, so that
any internal munmap etc. syscalls do not disturb the caller's errno.
Add a test malloc/tst-free-errno.c (almost all by Bruno Haible),
and document that free preserves errno.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2020-12-29 00:46:46 -08:00
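
The guarantee, illustrated as a wrapper (glibc does the save/restore inside
__libc_free itself; this only shows the shape of it):

    #include <errno.h>
    #include <stdlib.h>

    static void
    free_preserving_errno (void *p)
    {
      int saved_errno = errno;
      free (p);            /* internal munmap or other syscalls may touch errno */
      errno = saved_errno;
    }
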
Richard Earnshaw 3784dfc098 malloc: Basic support for memory tagging in the malloc() family
This patch adds the basic support for memory tagging.

Various flavours are supported, particularly being able to turn on
tagged memory at run-time: this allows the same code to be used on
systems where memory tagging support is not present without needing
a separate build of glibc.  Also, depending on whether the kernel
supports it, the code will use mmap for the default arena if morecore
does not, or cannot support tagged memory (on AArch64 it is not
available).

All the hooks use function pointers to allow this to work without
needing ifuncs.

Reviewed-by: DJ Delorie <dj@redhat.com>
2020-12-21 15:25:25 +00:00
Florian Weimer 29a4db291b malloc: Use __libc_initial to detect an inner libc
The secondary/non-primary/inner libc (loaded via dlmopen, LD_AUDIT,
static dlopen) must not use sbrk to allocate memory because that would
interfere with allocations in the outer libc.  On Linux, this does not
matter because sbrk itself was changed to fail in secondary libcs.

_dl_addr occasionally shows up in profiles, but it had to be used before
because __libc_multiple_libs was unreliable.  So this change achieves
a slight reduction in startup time.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2020-12-16 15:13:40 +01:00
W. Hashimoto 0e00b35704 malloc: Detect infinite-loop in _int_free when freeing tcache [BZ#27052]
If the linked list of tcache contains a loop, it causes an infinite
loop in _int_free when freeing the tcache.  A PoC that triggers such an
infinite loop is in the Bugzilla entry (#27052).  The loop should
terminate when it exceeds mp_.tcache_count, and the program should
abort.  The affected glibc versions are 2.29 or later.

Reviewed-by: DJ Delorie <dj@redhat.com>
2020-12-11 16:59:10 -05:00
H.J. Lu 862897d2ad Replace Minumum/minumum with Minimum/minimum
Replace Minumum/minumum in comments with Minimum/minimum.
2020-10-06 05:15:11 -07:00
DJ Delorie cdf645427d Update mallinfo2 ABI, and test
This patch adds the ABI-related bits to reflect the new mallinfo2
function, and adds a test case to verify basic functionality.

Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2020-09-17 18:49:30 -04:00
Martin Liska e3960d1c57 Add mallinfo2 function that supports sizes >= 4GB.
The current int type can easily overflow for allocations of more
than 4GB.
2020-08-31 08:58:20 +02:00
DJ Delorie b9cde4e3aa malloc: ensure set_max_fast never stores zero [BZ #25733]
The code for set_max_fast() stores an "impossibly small value"
instead of zero, when the parameter is zero.  However, for
small values of the parameter (ex: 1 or 2) the computation
results in a zero being stored anyway.

This patch checks for the parameter being small enough for the
computation to result in zero instead, so that a zero is never
stored.

key values which result in zero being stored:

x86-64:  1..7  (or other 64-bit)
i686:    1..11
armhfp:  1..3  (or other 32-bit)

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2020-04-06 16:27:53 -04:00
Eyal Itkin 49c3c37651 Fix alignment bug in Safe-Linking
Alignment checks should be performed on the user's buffer and NOT
on the mchunkptr as was done before.  This caused bugs in 32-bit
versions, because 2*sizeof(size_t) != MALLOC_ALIGNMENT.

As the tcache works on users' buffers it uses the aligned_OK()
check, and the rest work on mchunkptr and therefore check using
misaligned_chunk().

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2020-03-31 21:48:54 -04:00
Eyal Itkin 768358b6a8 Typo fixes and CR cleanup in Safe-Linking
Removed unneeded '\' chars from end of lines and fixed some
indentation issues that were introduced in the original
Safe-Linking patch.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2020-03-31 21:48:54 -04:00