The Linux-specific test case in tst-free-errno was backing up malloc
metadata for a large mmap'd block, overwriting the block with its own
mmap, then restoring the malloc metadata and calling free to force an
munmap failure. However, the backed-up pages containing the metadata
can occasionally be overlapped by the overwriting mmap, leading to
metadata corruption.
This commit replaces the Linux-specific test case with a simpler,
generic one: allocate three blocks, expect the kernel to coalesce the
VMAs, then cause fragmentation to trigger the same failure.
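A minimal sketch of the new test's idea (hypothetical size, not the
actual test code): three adjacent mmap-backed allocations may be merged
by the kernel into a single VMA, and unmapping the middle one then
forces a VMA split, which can fail when no new VMA can be allocated:

  #include <stdlib.h>

  #define BIG (4 << 20)   /* hypothetical: above the mmap threshold */

  int
  main (void)
  {
    void *a = malloc (BIG);
    void *b = malloc (BIG);
    void *c = malloc (BIG);
    free (b);   /* munmap has to split the coalesced VMA */
    free (a);
    free (c);
    return 0;
  }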
Reviewed-by: Florian Weimer <fweimer@redhat.com>
The support for lock elision was already deprecated with glibc 2.42:
commit 77438db8cf
"Mark support for lock elision as deprecated."
See also discussions:
https://sourceware.org/pipermail/libc-alpha/2025-July/168492.html
This patch removes the architecture-specific support for lock elision
for x86, powerpc and s390 by removing the elision-conf.h, elision-conf.c,
elision-lock.c, elision-timed.c, elision-unlock.c, elide.h and htm.h/hle.h
files. The corresponding generic files are also removed.
The architecture-specific structures are adjusted and the elision fields
are marked as unused; see the struct_mutex.h files.
Furthermore, the leftover __rwelision in struct_rwlock.h was also removed.
The rwlock elision definitions were originally removed with commit 0377a7fde6
"nptl: Remove rwlock elision definitions"
and inadvertently reintroduced with commit 7df8af43ad
"nptl: Add struct_rwlock.h"
The common code (e.g. the pthread_mutex* files) is changed back to its
state before lock elision was introduced with the x86 support:
- commit 1cdbe57948
"Add the low level infrastructure for pthreads lock elision with TSX"
- commit b023e4ca99
"Add new internal mutex type flags for elision."
- commit 68cc29355f
"Add minimal test suite changes for elision enabled kernels"
- commit e8c659d74e
"Add elision to pthread_mutex_{try,timed,un}lock"
- commit 49186d21ef
"Disable elision for any pthread_mutexattr_settype call"
- commit 1717da59ae
"Add a configure option to enable lock elision and disable by default"
Elision is also removed from the tunables, the initialization code, the
pretty-printers and the manual.
Some extra handling in the testsuite is removed, as is the entire
tst-mutex10 test case, which tested a race while enabling lock elision.
I've also searched the code for "elision", "elide" and "transaction",
and cleaned up some comments.
I've run the testsuite on x86_64 and s390x and run the build-many-glibcs.py
script.
Thanks to Sachin Monga, this patch is also tested on powerpc.
A NEWS entry also mentions the removal.
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
All our targets support casts between function pointers and void *,
so we might as well use them.
This change was largely auto-generated, with the following prompts.
@getXXbyYY_r.c Remove the use of the `fct` union and replace it by
pointer casts.
Apply the same change to ether_* getnetgrent_r getnssent_r netname
publickey .
Do not use explicit `*` in function pointer calls. Replace
`(*((lookup_function) fct))` and similar with `((lookup_function) fct)`.
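As an illustration (hypothetical names, not the actual lookup code),
the transformation is along these lines:

  #include <stdio.h>

  typedef int (*lookup_function) (const char *);

  static int
  lookup (const char *key)
  {
    return puts (key);
  }

  int
  main (void)
  {
    /* A void * as returned by a service lookup.  */
    void *fptr = (void *) &lookup;

    /* Before: a union converted void * to the function pointer type.  */
    union { void *ptr; lookup_function fct; } u = { .ptr = fptr };
    (*u.fct) ("via union");

    /* After: a direct cast, without the explicit `*' in the call.  */
    ((lookup_function) fptr) ("via cast");
    return 0;
  }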
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Factor out the internal kernel interface from termios_internal.h, so
that it can be used in test code without causing breakage due to glibc
internals used in headers.
[ v3: fix Alpha build breakage ]
Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
After getting more experience with the various broken direct-to-ioctl
termios2 hacks using the Fedora 43 beta, I have found a fair number of
cases where software would fail to set or clear CIBAUD for
non-split-speed operation.
Thus it seems it will help compatibility to clear the kernel-side
version of c_cflag & CIBAUD for non-split-speed operation (clearing it
has the same meaning to the Linux kernel as speed 0 has for
cfsetispeed(), i.e. it forces the input speed to equal the output
speed), rather than having it explicitly equal the output speed in
CBAUD.
When writing the code that went into glibc 2.42 I had considered this
issue, and had to make an educated guess which way would be more
likely to break fewer things. Unfortunately, it appears I guessed
wrong.
A third option would be to *always* set CIBAUD to __BOTHER, even for
the standard baud rates. However, that is an even bigger departure
from legacy behavior, whereas this variant mostly preserves current
behavior in terms of under what conditions buggy utilities will
continue to work.
This change is in tcsetattr() rather than
___termios2_canonicalize_speeds(), as it should not be run for
tcgetattr(); that would break split speed support for the legacy
interface versions of cfgetispeed() and cfsetispeed().
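A hedged sketch of the idea (not the actual glibc code; struct termios2
and CIBAUD come from <asm/termbits.h> on Linux):

  #include <asm/termbits.h>   /* struct termios2, CBAUD, CIBAUD */

  /* For non-split-speed operation, clear the kernel-side CIBAUD field
     instead of mirroring the output speed into it; the kernel treats a
     zero input-baud field as "input speed follows output speed".  */
  static void
  clear_input_baud_if_not_split (struct termios2 *k)
  {
    if (k->c_ispeed == k->c_ospeed)
      k->c_cflag &= ~CIBAUD;
  }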
[ v2: fixed comment style ]
Resolves: BZ #33340
Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Update to latest text from Gnulib commit
08f579c56d81cf78c60fcd3568190f97e6e7f684, file doc/lgpl-2.1.texi.
Reviewed-by: Florian Weimer <fweimer@redhat.com>
* posix/execvpe.c (__execvpe_common): Since strnlen doesn't inspect
beyond NAME_MAX, and NAME_MAX does not cover the NUL, we need to
check for the NUL explicitly. I.e. the existing check,
file_len - 1 > NAME_MAX, was never true. This check is required
to guarantee that file_len includes the NUL, as the following
memcpy depends on that to properly terminate the file buffer
passed to execve(). Otherwise that call triggers an uninitialized
memory read (UMR) when inspecting the passed file, which can be
seen with valgrind.
Note that returning ENAMETOOLONG early here for FILE names longer
than NAME_MAX also avoids redundant ENAMETOOLONG processing for
each entry in $PATH, once the change in [BZ #33626] is applied.
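A minimal sketch of the fixed check (hypothetical helper; glibc uses
its internal equivalents):

  #include <errno.h>
  #include <limits.h>
  #include <string.h>

  /* strnlen stops at NAME_MAX, so an unterminated (too long) name must
     be detected explicitly before the memcpy that relies on the NUL.  */
  static int
  check_file_name (const char *file)
  {
    size_t file_len = strnlen (file, NAME_MAX) + 1;
    if (file[file_len - 1] != '\0')
      {
        errno = ENAMETOOLONG;
        return -1;
      }
    return 0;   /* file_len is now guaranteed to include the NUL.  */
  }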
Reviewed-by: Collin Funk <collin.funk1@gmail.com>
The check was initially used to define HAVE_BUILTIN_REDIRECTION, which
enabled or disabled libc_hidden_builtin_proto support. It was later
removed with commit 3ce1f29594, making the feature mandatory. The
configure check was kept as a transition knob.
The current minimum gcc/linker versions always support this, as does
clang with some extra care. Also, missing hidden_proto/hidden_def
support is already flagged by the check-localplt test.
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
Work around a clang limitation with inline functions and attribute
definitions, where it does not allow adding a new attribute if a
function is already defined:
clang on x86_64 fails to build s_fabsf128.c with:
../sysdeps/ieee754/float128/../ldbl-128/s_fabsl.c:32:1: error: attribute declaration must precede definition [-Werror,-Wignored-attributes]
32 | libm_alias_ldouble (__fabs, fabs)
| ^
../sysdeps/generic/libm-alias-ldouble.h:63:38: note: expanded from macro 'libm_alias_ldouble'
63 | #define libm_alias_ldouble(from, to) libm_alias_ldouble_r (from, to, )
| ^
../sysdeps/ieee754/float128/float128_private.h:133:43: note: expanded from macro 'libm_alias_ldouble_r'
133 | #define libm_alias_ldouble_r(from, to, r) libm_alias_float128_r (from, to, r)
| ^
../sysdeps/ieee754/float128/s_fabsf128.c:5:3: note: expanded from macro 'libm_alias_float128_r'
5 | static_weak_alias (from ## f128 ## r, to ## f128 ## r); \
| ^
./../include/libc-symbols.h:166:46: note: expanded from macro 'static_weak_alias'
166 | # define static_weak_alias(name, aliasname) weak_alias (name, aliasname)
| ^
./../include/libc-symbols.h:154:38: note: expanded from macro 'weak_alias'
154 | # define weak_alias(name, aliasname) _weak_alias (name, aliasname)
| ^
./../include/libc-symbols.h:156:52: note: expanded from macro '_weak_alias'
156 | extern __typeof (name) aliasname __attribute__ ((weak, alias (#name))) \
| ^
../include/math.h:134:1: note: previous definition is here
134 | fabsf128 (_Float128 x)
If the compiler does not support __USE_EXTERN_INLINES, we need to route
the fabsf128 call to an internal symbol.
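A hedged sketch of the routing idea (hypothetical macro shape, not
necessarily the actual change):

  /* When the extern inline cannot be used, send fabsf128 calls
     directly to the internal symbol, so no alias has to be attached
     to an already-defined function.  */
  #ifndef __USE_EXTERN_INLINES
  # define fabsf128(x) __fabsf128 (x)
  #endif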
Work around a clang limitation with inline functions and attribute
definitions, where it does not allow adding a new attribute if a
function is already defined.
Building with clang triggers multiple issues with how the ifunc macros
are used:
../sysdeps/x86_64/multiarch/strstr.c:38:54: error: attribute declaration must precede definition [-Werror,-Wignored-attributes]
38 | extern __typeof (__redirect_strstr) __strstr_generic attribute_hidden;
| ^
./../include/libc-symbols.h:356:43: note: expanded from macro 'attribute_hidden'
356 | # define attribute_hidden __attribute__ ((visibility ("hidden")))
| ^
../string/strstr.c:76:1: note: previous definition is here
76 | STRSTR (const char *haystack, const char *needle)
| ^
../sysdeps/x86_64/multiarch/strstr.c:27:16: note: expanded from macro 'STRSTR'
27 | #define STRSTR __strstr_generic
| ^
../sysdeps/x86_64/multiarch/strstr.c:65:43: error: redefinition of '__libc_strstr'
65 | libc_ifunc_redirected (__redirect_strstr, __libc_strstr, IFUNC_SELECTOR ());
| ^
And
../sysdeps/x86_64/multiarch/strstr.c:65:43: error: redefinition of '__libc_strstr'
65 | libc_ifunc_redirected (__redirect_strstr, __libc_strstr, IFUNC_SELECTOR ());
| ^
../sysdeps/x86_64/multiarch/strstr.c:59:13: note: previous definition is here
59 | libc_ifunc (__libc_strstr,
| ^
Refactor to use an auxiliary function, like other ifunc selections
(for instance, x86_64/multiarch/strcmp.c).
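A minimal, self-contained illustration of the auxiliary-function
pattern (generic C with hypothetical names; glibc's actual macros
differ):

  #include <string.h>

  static char *
  my_strstr_generic (const char *h, const char *n)
  {
    return strstr (h, n);
  }

  static char *
  my_strstr_fast (const char *h, const char *n)
  {
    /* Stand-in for an optimized implementation.  */
    return strstr (h, n);
  }

  /* The selector is an ordinary function, so attribute declarations
     for the implementations can precede any definition clang sees.  */
  static void *
  select_strstr (void)
  {
    int fast_path_ok = 1;   /* hypothetical CPU feature check */
    return fast_path_ok ? (void *) my_strstr_fast
                        : (void *) my_strstr_generic;
  }

  char *my_strstr (const char *, const char *)
    __attribute__ ((ifunc ("select_strstr")));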
clang supports -msse2avx from version 19 onwards, but it must be passed
through to the assembler (either with -Wa or -Xassembler).
The -DSSE2AVX option was used because there were asm statements with
SSE-only instructions, which was fixed by commit ff8be6152b.
Now we can simply use -mavx.
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
When we want to inline builtin math functions like truncf, given the
declarations
extern float truncf (float __x) __attribute__ ((__nothrow__ )) __attribute__ ((__const__));
extern float __truncf (float __x) __attribute__ ((__nothrow__ )) __attribute__ ((__const__));
float (truncf) (float) asm ("__truncf");
the compiler may redirect truncf calls to __truncf instead of inlining
them (clang, for instance). USE_TRUNCF_BUILTIN is 1 to indicate that
truncf should be inlined. In this case, we don't want the truncf
redirection:
1. For each math function which may be inlined, we define
#if USE_TRUNCF_BUILTIN
# define NO_truncf_BUILTIN inline_truncf
#else
# define NO_truncf_BUILTIN truncf
#endif
in <math-use-builtins.h>.
2. Include <math-use-builtins.h> in include/math.h.
3. Change MATH_REDIRECT to
#define MATH_REDIRECT(FUNC, PREFIX, ARGS) \
float (NO_ ## FUNC ## f ## _BUILTIN) (ARGS (float)) \
asm (PREFIX #FUNC "f");
With this change, if USE_TRUNCF_BUILTIN is 0, we get
float (truncf) (float) asm ("__truncf");
truncf will be redirected to __truncf.
And for USE_TRUNCF_BUILTIN 1, we get:
float (inline_truncf) (float) asm ("__truncf");
In both cases either truncf will be inlined or the internal alias
(__truncf) will be called.
This is not required for every math-use-builtins symbol, only the ones
declared in math.h. It also allows removing all the explicit
<math-use-builtins.h> inclusions, since the header is now implicitly
included by math.h.
For MIPS, some math-use-builtins headers include sysdep.h, which in
turn includes a lot of extra headers that do not allow the ldbl-128
code to override alias definitions (math.h would include some stdlib.h
definitions). The math-use-builtins header only requires
__mips_isa_rev, so move that definition to sgidefs.h.
Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
Co-authored-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
The new file names are COPYINGv2 and COPYING.LESSERv2. Lots of
copyright headers mention COPYING.LIB, so add a symbolic link.
(This is not the first symbolic link in the repository, so this
should be fine.)
The files come from gnulib commit 3cc5b69dda06890929a2d0433f30708.
Signed-off-by: Florian Weimer <fweimer@redhat.com>
Reviewed-by: Collin Funk <collin.funk1@gmail.com>
The license is referenced in various headers, so we should ship it.
The text was copied from gnulib commit d64d66cc4897d605f543257dcd0,
file doc/COPYINGv3.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: Collin Funk <collin.funk1@gmail.com>
Signed-off-by: Florian Weimer <fweimer@redhat.com>
Commit 3360913c37 ("elf: Add SFrame
stack tracing") added this file with an inconsistent copyright header.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
This is notably needed so that the main thread structure is always
initialized, allowing some pthread functions, e.g. pthread_cancel, to
work from the main thread even when no other threads exist.
Add fast path optimization for frexpl (80-bit x87 extended precision) using
a single unsigned comparison to identify normal floating-point numbers and
return immediately via arithmetic on the exponent field.
The implementation uses an arithmetic operation (se - ex) to adjust the
exponent directly, which is simpler than bit masking. For subnormals,
the traditional multiply-based normalization is retained as it handles
the split word format more reliably.
The zero/infinity/NaN check groups these special cases together for better
branch prediction.
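A hedged sketch of the fast path for the x86 80-bit format (union
access and names are illustrative; the actual code uses glibc's
internal accessors):

  #include <math.h>
  #include <stdint.h>

  long double
  frexpl_fast (long double x, int *eptr)
  {
    union
    {
      long double ld;
      struct { uint64_t m; uint16_t se; } p;
    } u = { x };
    unsigned int ex = u.p.se & 0x7fff;

    if (ex - 1u < 0x7ffeu)   /* normal: 1 <= ex <= 0x7ffe, one compare */
      {
        *eptr = (int) ex - 16382;          /* unbiased exponent + 1 */
        u.p.se = (u.p.se - ex) + 0x3ffe;   /* keep sign; result in [0.5, 1) */
        return u.ld;
      }

    /* Zero/inf/NaN/subnormal: fall back to the generic path.  */
    return frexpl (x, eptr);
  }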
Benchmark results on Intel Core i9-13900H (13th Gen):
Baseline: 25.543 ns/op
Optimized: 25.531 ns/op
Speedup: 1.00x (neutral)
Zero: 17.774 ns/op
Denormal: 23.900 ns/op
Signed-off-by: Osama Abdelkader <osama.abdelkader@gmail.com>
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
Commit 53807741fb added a configure check
for 64-bit atomic operations that were not previously enabled on some
32-bit ABIs.
However, the NPTL semaphore code casts a sem_t to a new_sem and issues
a 64-bit atomic operation when __HAVE_64B_ATOMICS is set. Since sem_t
has 32-bit alignment on 32-bit architectures, this prevents the use of
64-bit atomics even if the ABI supports them.
Assume 64-bit atomic support based on __WORDSIZE, which matches how
glibc defined it before the broken change. Also rename
__HAVE_64B_ATOMICS to USE_64B_ATOMICS to better convey the flag's
meaning.
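A minimal sketch of the assumed new definition (illustrative, not the
exact glibc code):

  #include <bits/wordsize.h>

  #if __WORDSIZE == 64
  # define USE_64B_ATOMICS 1
  #else
  # define USE_64B_ATOMICS 0
  #endif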
Checked on x86_64-linux-gnu and i686-linux-gnu.
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
The file was added with a GPL reference (but LGPL statement) in
commit 0d6bed7150 ("hppa: Add
____longjmp_check C implementation.").
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: Collin Funk <collin.funk1@gmail.com>
As discussed in bug 28327, C23 changed the fromfp functions to return
floating types instead of intmax_t / uintmax_t. (Although the
motivation in N2548 was reducing the use of intmax_t in library
interfaces, the new version does have the advantage of being able to
specify arbitrary integer widths for e.g. assigning the result to a
_BitInt, as well as being able to indicate an error case in-band with
a NaN return.)
As with other such changes from interfaces introduced in TS 18661,
implement the new versions as a replacement for the old ones, with the
old functions remaining as compat symbols but not supported as an API.
The test generator used for many of the tests is updated to handle
both versions of the functions.
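For illustration, the prototype change for the double variant (the
rounding-direction and width arguments are unchanged):

  TS 18661-1:  intmax_t fromfp (double x, int round, unsigned int width);
  C23:         double   fromfp (double x, int round, unsigned int width);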
Tested for x86_64 and x86, and with build-many-glibcs.py.
Also tested tgmath tests for x86_64 with GCC 7 to make sure that the
modified case for older compilers in <tgmath.h> does work.
Also tested for powerpc64le to cover the ldbl-128ibm implementation
and the other things that are handled differently for that
configuration. The new tests fail for ibm128, but all the failures
relate to incorrect signs of zero results and turn out to arise from
bugs in the underlying roundl, ceill, truncl and floorl
implementations that I've reported in bug 33623, rather than
indicating any bug in the actual new implementation of the functions
for that format. So given fixes for those functions (which shouldn't
be hard, and of course should add to the tests for those functions
rather than relying only on indirect testing via fromfp), the fromfp
tests should start passing for ibm128 as well.
Remove uses of float_t and double_t. These are not useful on modern
machines, and do not help given that GCC defaults to
-fexcess-precision=fast.
One use of double_t remains to allow forcing the precision to double
on targets where FLT_EVAL_METHOD=2. This fixes BZ #33563 on
i486-pc-linux-gnu.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Remove ldbl-128/s_fma.c - it makes no sense to use emulated float128
operations to emulate FMA. Benchmarking shows dbl-64/s_fma.c is about
twice as fast. Remove redundant dbl-64/s_fma.c includes in targets
that were trying to work around this issue.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
It has been added in Linux 6.10 (8be7258aad44b5e25977a98db136f677fa6f4370)
as a way to block operations on a pre-existing memory mapping, such as
mapping over it, moving it to another location, shrinking or expanding
its size, or otherwise modifying it.
Although the syscall only works on 64-bit CPUs, the entry point was
added for all ABIs (since the kernel might eventually implement it for
additional ones, and/or the ABI can execute on a 64-bit kernel).
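A hedged usage sketch (Linux >= 6.10; the flags argument currently has
to be 0):

  #define _GNU_SOURCE
  #include <stdio.h>
  #include <sys/mman.h>

  int
  main (void)
  {
    size_t len = 4096;
    void *p = mmap (NULL, len, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
      return 1;

    /* After sealing, munmap/mremap/mprotect on this range fail.  */
    if (mseal (p, len, 0) != 0)
      perror ("mseal");   /* e.g. ENOSYS on older kernels */

    return 0;
  }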
Checked on x86_64-linux-gnu and aarch64-linux-gnu.
Reviewed-by: Collin Funk <collin.funk1@gmail.com>
When R_LARCH_IRELATIVE is resolved by apply_irel, the ifunc resolver is
called via elf_ifunc_invoke so it can read HWCAP from the __ifunc_arg_t
argument. But when R_LARCH_IRELATIVE is resolved by elf_machine_rela
(which happens if we dlopen() a shared object containing
R_LARCH_IRELATIVE), the ifunc resolver is invoked directly with no
argument, or a different one. This causes a segfault if the resolver
uses the __ifunc_arg_t.
Although the LoongArch psABI does not specify this argument, IMO it's
more convenient to pass it, and per Hyrum's law there may be objects in
the wild that already rely on it (they just haven't blown up because
they are not dlopen()ed yet). So make elf_machine_rela handle
R_LARCH_IRELATIVE the same way as apply_irel.
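A hedged sketch of the unified invocation (hypothetical structure and
names; the real __ifunc_arg_t layout is defined elsewhere in glibc):

  /* Call the resolver the same way in both relocation paths, so it
     can safely read HWCAP from its argument.  */
  struct ifunc_arg { unsigned long size; unsigned long hwcap; };  /* assumed shape */

  static inline unsigned long
  ifunc_invoke (unsigned long resolver, const struct ifunc_arg *arg)
  {
    return ((unsigned long (*) (unsigned long, const struct ifunc_arg *))
            resolver) (arg->hwcap, arg);
  }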
This fixes BZ #33610.
Signed-off-by: Xi Ruoyao <xry111@xry111.site>
The definition of once_flag conflicts with std::once_flag in C++ code
if “using namespace std;” is active.
Updates commit a7ddbf456d
("Add once_flag, ONCE_FLAG_INIT and call_once to stdlib.h for C23").
Suggested-by: Jonathan Wakely <jwakely@redhat.com>
Reviewed-by: Collin Funk <collin.funk1@gmail.com>
Benchmarks indicate the EVEX variants can be more profitable on Hygon
hardware than AVX512, so add Prefer_No_AVX512 to make it run with EVEX.
Change-Id: Icc59492f71fde7a783a8bd315714ffd6f7ecaf29
Signed-off-by: Li jing <lijing@hygon.cn>
Signed-off-by: Xie jiamei <xiejiamei@hygon.cn>
Add fast path optimization for frexpl (128-bit IEEE quad precision) using
a single unsigned comparison to identify normal floating-point numbers and
return immediately via arithmetic on the exponent field.
The implementation uses arithmetic operations hx = hx - (ex << 48)
to adjust the exponent in place, which is simpler and more efficient than
bit masking. For subnormals, the traditional multiply-based normalization
is retained for reliability with the split 64-bit word format.
The zero/infinity/NaN check groups these special cases together for better
branch prediction.
This optimization provides the same algorithmic improvements as the other
frexp variants while maintaining correctness for all edge cases.
Signed-off-by: Osama Abdelkader <osama.abdelkader@gmail.com>
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
Add fast path optimization for frexp using a single unsigned comparison
to identify normal floating-point numbers and return immediately via
arithmetic on the bit representation.
The implementation uses asuint64()/asdouble() from math_config.h and arithmetic
operations to adjust the exponent, which generates better code than bit masking
on ARM and RISC-V architectures. For subnormals, stdc_leading_zeros provides
faster normalization than the traditional multiply approach.
The zero/infinity/NaN check is simplified to (int64_t)(ix << 1) <= 0, which
is more efficient than separate comparisons.
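A hedged sketch of the fast path (with self-contained stand-ins for the
asuint64/asdouble helpers from math_config.h):

  #include <math.h>
  #include <stdint.h>
  #include <string.h>

  static inline uint64_t
  asuint64 (double x)
  {
    uint64_t u;
    memcpy (&u, &x, sizeof u);
    return u;
  }

  static inline double
  asdouble (uint64_t u)
  {
    double x;
    memcpy (&x, &u, sizeof x);
    return x;
  }

  double
  frexp_fast (double x, int *eptr)
  {
    uint64_t ix = asuint64 (x);
    unsigned int ex = (ix >> 52) & 0x7ff;

    if (ex - 1u < 0x7feu)   /* normal: 1 <= ex <= 0x7fe, one compare */
      {
        *eptr = (int) ex - 1022;   /* unbiased exponent + 1 */
        /* Rebias the exponent field to 0x3fe so the result lands in
           [0.5, 1).  */
        return asdouble (ix - ((uint64_t) ex << 52) + (0x3feULL << 52));
      }

    /* Zero/inf/NaN/subnormal: fall back to the generic path.  */
    return frexp (x, eptr);
  }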
Benchmark results on Intel Core i9-13900H (13th Gen):
Baseline: 6.778 ns/op
Optimized: 4.007 ns/op
Speedup: 1.69x (40.9% faster)
Zero: 3.580 ns/op (fast path)
Denormal: 6.096 ns/op (slower, rare case)
Signed-off-by: Osama Abdelkader <osama.abdelkader@gmail.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Add fast path optimization for frexpf using a single unsigned comparison
to identify normal floating-point numbers and return immediately via
arithmetic on the bit representation.
The implementation uses asuint()/asfloat() from math_config.h and arithmetic
operations to adjust the exponent, which generates better code than bit masking
on ARM and RISC-V architectures. For subnormals, stdc_leading_zeros provides
faster normalization than the traditional multiply approach.
The zero/infinity/NaN check is simplified to (int32_t)(hx << 1) <= 0, which
is more efficient than separate comparisons.
Benchmark results on Intel Core i9-13900H (13th Gen):
Baseline: 5.858 ns/op
Optimized: 4.003 ns/op
Speedup: 1.46x (31.7% faster)
Zero: 3.580 ns/op (fast path)
Denormal: 5.597 ns/op (slower, rare case)
Signed-off-by: Osama Abdelkader <osama.abdelkader@gmail.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Add benchmark support for frexp, frexpf, and frexpl to measure the
performance improvement of the fast path optimization.
- Created frexp-inputs, frexpf-inputs, frexpl-inputs with random test values
- Added frexp, frexpf, frexpl to bench-math list
- Added CFLAGS to disable builtins for accurate benchmarking
These benchmarks will be used to quantify the performance gains from the
fast path optimization for normal floating-point numbers.
Signed-off-by: Osama Abdelkader <osama.abdelkader@gmail.com>
clang might generate an abort call when a cleanup function (set by
__attribute__ ((cleanup))) calls functions not marked as nothrow.
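An illustration of the pattern that can trigger it (hypothetical code):

  #include <unistd.h>

  static void
  closer (int *fd)   /* not declared nothrow */
  {
    if (*fd >= 0)
      close (*fd);
  }

  void
  f (void)
  {
    __attribute__ ((cleanup (closer))) int fd = -1;
    /* If calls here are not provably nothrow, clang may emit an
       unwinding path that references abort.  */
  }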
The Hurd already provides abort for the loader at
sysdeps/mach/hurd/dl-sysdep.c, and adding it to rtld-stubbed-symbols
triggers duplicate symbols.
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>