Commit Graph

16761 Commits

Author SHA1 Message Date
Yury Khrustalev 82decb59bc aarch64: Add tests for Guarded Control Stack
These tests validate that the GCS tunable works as expected depending
on the GCS markings in the test binaries.

The tests cover both statically and dynamically linked binaries.

These new tests are AArch64 specific. Moreover, they are included only
if the linker supports the "-z gcs=<value>" option. If built, these tests
will run on systems with and without HWCAP_GCS. In the latter case the
tests will be reported as UNSUPPORTED.
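
To illustrate the UNSUPPORTED reporting, a minimal sketch of such a test, assuming HWCAP_GCS from the kernel's AArch64 hwcap definitions and glibc's test-driver machinery (illustrative only, not the actual test sources):

  /* Sketch: skip the test when the CPU does not provide GCS.  */
  #include <sys/auxv.h>
  #include <support/test-driver.h>

  static int
  do_test (void)
  {
    if ((getauxval (AT_HWCAP) & HWCAP_GCS) == 0)
      /* No Guarded Control Stack: the driver reports UNSUPPORTED.  */
      return EXIT_UNSUPPORTED;

    /* ... exercise the GCS tunable against the binary's GCS marking ...  */
    return 0;
  }

  #include <support/test-driver.c>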

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-02-21 16:08:00 +00:00
Wilco Dijkstra 163b1bbb76 AArch64: Add SVE memset
Add SVE memset based on the generic memset with predicated load for sizes < 16.
Unaligned memsets of 128-1024 bytes are improved by ~20% on average by using
aligned stores for the last 64 bytes.  Performance of the random memset
benchmark improves by ~2% on Neoverse V1.
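
For illustration, the predicated small-size idea in ACLE intrinsics (the real memset is hand-written SVE assembly; the function name here is hypothetical):

  #include <arm_sve.h>
  #include <stddef.h>
  #include <stdint.h>

  /* Sketch: one predicated SVE store covers any n < 16 without a tail loop.  */
  static void
  sve_memset_small_sketch (void *dst, int c, size_t n)
  {
    svbool_t pg = svwhilelt_b8_u64 (0, n);   /* only the first n lanes active */
    svst1_u8 (pg, (uint8_t *) dst, svdup_n_u8 ((uint8_t) c));
  }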

Reviewed-by: Yury Khrustalev <yury.khrustalev@arm.com>
2025-02-20 15:31:50 +00:00
H.J. Lu 5a4573be6f x86 (__HAVE_FLOAT128): Defined to 0 for Intel SYCL compiler [BZ #32723]
The Intel compiler always defines __INTEL_LLVM_COMPILER.  When SYCL is
enabled by -fsycl, it also defines SYCL_LANGUAGE_VERSION.  Since the Intel
SYCL compiler doesn't support _Float128:

https://github.com/intel/llvm/issues/16903

define __HAVE_FLOAT128 to 0 for the Intel SYCL compiler.

This fixes BZ #32723.
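
The resulting check is roughly the following (a simplified sketch; the actual logic in the installed bits/floatn.h header is more involved):

  #if defined __INTEL_LLVM_COMPILER && defined SYCL_LANGUAGE_VERSION
  /* Intel SYCL compiler: _Float128 is not supported.  */
  # define __HAVE_FLOAT128 0
  #else
  # define __HAVE_FLOAT128 1
  #endif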

Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
Reviewed-by: Sam James <sam@gentoo.org>
2025-02-20 08:42:37 +08:00
Adhemerval Zanella 0242c9f9e6 math: Consolidate acosf and asinf internal tables
The libm size improvement when built with gcc-14 and "--enable-stack-protector=strong
--enable-bind-now=yes --enable-fortify-source=2":

Before:

   text    data     bss     dec     hex filename
 582292     844      12  583148   8e5ec aarch64-linux-gnu/math/libm.so
 975133    1076      12  976221   ee55d x86_64-linux-gnu/math/libm.so
1203586    5608     368 1209562  1274da powerpc64le-linux-gnu/math/libm.so

After:

   text    data     bss     dec     hex filename
 581972     844      12  582828   8e4ac aarch64-linux-gnu/math/libm.so
 974941    1076      12  976029   ee49d x86_64-linux-gnu/math/libm.so
1203394    5608     368 1209370  12741a powerpc64le-linux-gnu/math/libm.so
Reviewed-by: Andreas K. Huettel <dilfridge@gentoo.org>
2025-02-17 10:09:09 -03:00
Adhemerval Zanella 1faccf388a math: Consolidate acospif and asinpif internal tables
The libm size improvement when built with gcc-14 and "--enable-stack-protector=strong
--enable-bind-now=yes --enable-fortify-source=2":

Before:

   text    data     bss     dec     hex filename
 583444     844      12  584300   8ea6c aarch64-linux-gnu/math/libm.so
 976349    1076      12  977437   eea1d x86_64-linux-gnu/math/libm.so
1204738    5608     368 1210714  12795a powerpc64le-linux-gnu/math/libm.so

After:

   text    data     bss     dec     hex filename
 582292     844      12  583148   8e5ec aarch64-linux-gnu/math/libm.so
 975133    1076      12  976221   ee55d x86_64-linux-gnu/math/libm.so
1203586    5608     368 1209562  1274da powerpc64le-linux-gnu/math/libm.so
Reviewed-by: Andreas K. Huettel <dilfridge@gentoo.org>
2025-02-17 10:09:09 -03:00
Adhemerval Zanella 246e52574d math: Consolidate cospif and sinpif internal tables
The libm size improvement when built with gcc-14 and "--enable-stack-protector=strong
--enable-bind-now=yes --enable-fortify-source=2":

Before:

   text    data     bss     dec     hex filename
 584500     844      12  585356   8ee8c aarch64-linux-gnu/math/libm.so
 977341    1076      12  978429   eedfd x86_64-linux-gnu/math/libm.so
1205762    5608     368 1211738  127d5a powerpc64le-linux-gnu/math/libm.so

After:

   text    data     bss     dec     hex filename
 583444     844      12  584300   8ea6c aarch64-linux-gnu/math/libm.so
 976349    1076      12  977437   eea1d x86_64-linux-gnu/math/libm.so
1204738    5608     368 1210714  12795a powerpc64le-linux-gnu/math/libm.so
Reviewed-by: Andreas K. Huettel <dilfridge@gentoo.org>
2025-02-17 10:09:09 -03:00
gfleury 4afbc1aa2e htl: don't export __pthread_default_rwlockattr anymore.
since now all symbols that use it are in libc.
Message-ID: <20250216145434.7089-11-gfleury@disroot.org>
2025-02-16 23:43:04 +01:00
gfleury 6f6732c1c4 htl: move pthread_rwlock_init into libc.
Signed-off-by: gfleury <gfleury@disroot.org>
Message-ID: <20250216145434.7089-10-gfleury@disroot.org>
2025-02-16 23:43:03 +01:00
gfleury d3ef1b56aa htl: move pthread_rwlock_destroy into libc.
Signed-off-by: gfleury <gfleury@disroot.org>
Message-ID: <20250216145434.7089-9-gfleury@disroot.org>
2025-02-16 23:42:38 +01:00
gfleury 25650ef6b9 htl: move pthread_rwlock_{rdlock, timedrdlock, timedwrlock, wrlock, clockrdlock, clockwrlock} into libc.
Signed-off-by: gfleury <gfleury@disroot.org>
Message-ID: <20250216145434.7089-8-gfleury@disroot.org>
2025-02-16 23:08:54 +01:00
gfleury 119798a7b1 htl: move pthread_rwlock_unlock into libc.
Signed-off-by: gfleury <gfleury@disroot.org>
Message-ID: <20250216145434.7089-7-gfleury@disroot.org>
2025-02-16 23:08:54 +01:00
gfleury 18accc19b9 htl: move pthread_rwlock_tryrdlock, pthread_rwlock_trywrlock into libc.
Signed-off-by: gfleury <gfleury@disroot.org>
Message-ID: <20250216145434.7089-6-gfleury@disroot.org>
2025-02-16 22:59:34 +01:00
gfleury 4b25413df5 htl: move pthread_rwlockattr_getpshared, pthread_rwlockattr_setpshared into libc.
Signed-off-by: gfleury <gfleury@disroot.org>
Message-ID: <20250216145434.7089-5-gfleury@disroot.org>
2025-02-16 22:59:25 +01:00
gfleury cd2d31ed58 htl: move pthread_rwlockattr_destroy into libc.
Signed-off-by: gfleury <gfleury@disroot.org>
Message-ID: <20250216145434.7089-4-gfleury@disroot.org>
2025-02-16 22:59:16 +01:00
gfleury e618b671cd htl: move pthread_rwlockattr_init into libc.
Signed-off-by: gfleury <gfleury@disroot.org>
Message-ID: <20250216145434.7089-3-gfleury@disroot.org>
2025-02-16 22:59:07 +01:00
gfleury 8f842ce13e htl: move __pthread_default_rwlockattr into libc.
Signed-off-by: gfleury <gfleury@disroot.org>
Message-ID: <20250216145434.7089-2-gfleury@disroot.org>
2025-02-16 22:59:00 +01:00
Aurelien Jarno 60f2d6be65 Fix tst-aarch64-pkey to handle ENOSPC as not supported
The syscall pkey_alloc can return ENOSPC to indicate either that all
keys are in use or that the system runs in a mode in which memory
protection keys are disabled. In such a case the test should not fail but
instead just return unsupported.

This matches the behaviour of the generic tst-pkey.
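
A sketch of the intended handling, using the pkey_alloc wrapper from <sys/mman.h> and the helpers from support/check.h (illustrative, not the exact test code):

  int key = pkey_alloc (0, 0);
  if (key < 0)
    {
      /* ENOSPC now also maps to "unsupported", as in the generic tst-pkey.  */
      if (errno == ENOSYS || errno == EINVAL || errno == ENOSPC)
        FAIL_UNSUPPORTED ("kernel does not support memory protection keys");
      FAIL_EXIT1 ("pkey_alloc: %m");
    }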

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
Reviewed-by: Florian Weimer <fweimer@redhat.com>
2025-02-15 11:08:43 +01:00
Yat Long Poon 95e807209b AArch64: Improve codegen for SVE powf
Improve memory access with indexed/unpredicated instructions.
Eliminate register spills.  Speedup on Neoverse V1: 3%.

Reviewed-by: Wilco Dijkstra  <Wilco.Dijkstra@arm.com>
2025-02-13 18:16:54 +00:00
Yat Long Poon 0b195651db AArch64: Improve codegen for SVE pow
Move constants to struct.  Improve memory access with indexed/unpredicated
instructions.  Eliminate register spills.  Speedup on Neoverse V1: 24%.

Reviewed-by: Wilco Dijkstra  <Wilco.Dijkstra@arm.com>
2025-02-13 18:16:54 +00:00
Yat Long Poon f5ff34cb3c AArch64: Improve codegen for SVE erfcf
Reduce number of MOV/MOVPRFXs and use unpredicated FMUL.
Replace MUL with LSL.  Speedup on Neoverse V1: 6%.

Reviewed-by: Wilco Dijkstra  <Wilco.Dijkstra@arm.com>
2025-02-13 18:16:54 +00:00
Luna Lamb c0ff447edf Aarch64: Improve codegen in SVE exp and users, and update expf_inline
Use unpredicated muls and improve memory access.
7%, 3% and 1% improvement in throughput microbenchmark on Neoverse V1,
for exp, exp2 and cosh respectively.

Reviewed-by: Wilco Dijkstra  <Wilco.Dijkstra@arm.com>
2025-02-13 18:16:54 +00:00
Luna Lamb 8f0e7fe61e Aarch64: Improve codegen in SVE asinh
Use unpredicated muls, use lane-wise MLAs, and improve memory access.
1% regression in throughput microbenchmark on Neoverse V1.

Reviewed-by: Wilco Dijkstra  <Wilco.Dijkstra@arm.com>
2025-02-13 18:16:54 +00:00
Wilco Dijkstra 5afaf99edb math: Improve layout of exp/exp10 data
GCC aligns global data to 16 bytes if its size is >= 16 bytes.  This patch
changes the exp_data struct slightly so that the fields are better aligned
and without gaps.  As a result, on targets that support them, more load-pair
instructions are used in exp.  Exp10 is improved by moving invlog10_2N later
so that neglog10_2hiN and neglog10_2loN can be loaded using load-pair.
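
For illustration, the layout idea is to keep fields that are consumed together adjacent and without padding, roughly as below (hypothetical field order, not the exact exp_data definition):

  #include <stdint.h>

  /* Sketch: adjacent hi/lo constants can be fetched with a single load-pair.  */
  struct exp_data_sketch
  {
    double invln2N;
    double negln2hiN;        /* hi/lo kept adjacent: one ldp loads both.      */
    double negln2loN;
    double poly[4];
    double neglog10_2hiN;    /* exp10: hi/lo pair stays adjacent ...          */
    double neglog10_2loN;
    double invlog10_2N;      /* ... because invlog10_2N was moved after them. */
    uint64_t tab[256];
  };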

The exp benchmark improves 2.5%, "144bits" by 7.2%, "768bits" by 12.7% on
Neoverse V2.  Exp10 improves by 1.5%.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-02-13 18:16:54 +00:00
Adhemerval Zanella b81252c4b9 math: Consolidate coshf and sinhf internal tables
The libm size improvement when built with "--enable-stack-protector=strong
--enable-bind-now=yes --enable-fortify-source=2":

Before:

   text    data     bss     dec     hex filename
 585192     860      12  586064   8f150 aarch64-linux-gnu/math/libm.so
 960775    1068      12  961855   ead3f x86_64-linux-gnu/math/libm.so
1189174    5544     368 1195086  123c4e powerpc64le-linux-gnu/math/libm.so

After:

   text    data     bss     dec     hex filename
 584952     860      12  585824   8f060 aarch64-linux-gnu/math/libm.so
 960615    1068      12  961695   eac9f x86_64-linux-gnu/math/libm.so
1189078    5544     368 1194990  123bee powerpc64le-linux-gnu/math/libm.so

There are small code changes for x86_64 and powerpc64le, which do not
affect performance; but on aarch64 with gcc-14 I see slightly better
code generation due to the use of ldq for floating-point constant loading.
Reviewed-by: Andreas K. Huettel <dilfridge@gentoo.org>
2025-02-12 16:31:57 -03:00
Adhemerval Zanella 994007ff29 math: Consolidate acoshf and asinhf internal tables
The libm size improvement when built with "--enable-stack-protector=strong
--enable-bind-now=yes --enable-fortify-source=2":

Before:

   text    data     bss     dec     hex filename
 587304     860      12  588176   8f990 aarch64-linux-gnu-master/math/libm.so
 962855    1068      12  963935   eb55f x86_64-linux-gnu-master/math/libm.so
1191222    5544     368 1197134  12444e powerpc64le-linux-gnu-master/math/libm.so

After:

   text    data     bss     dec     hex filename
 585192     860      12  586064   8f150 aarch64-linux-gnu/math/libm.so
 960775    1068      12  961855   ead3f x86_64-linux-gnu/math/libm.so
1189174    5544     368 1195086  123c4e powerpc64le-linux-gnu/math/libm.so

There are small code changes for x86_64 and powerpc64le, which do not
affect performance; but on aarch64 with gcc-14 I see slightly better
code generation due to the use of ldq for floating-point constant loading.
Reviewed-by: Andreas K. Huettel <dilfridge@gentoo.org>
2025-02-12 16:31:57 -03:00
Adhemerval Zanella 8f170dc819 math: Use tanpif from CORE-MATH
The CORE-MATH implementation is correctly rounded (for any rounding mode)
and shows better performance than the generic tanpif.

The code was adapted to glibc style and to use the definition of
math_config.h (to handle errno, overflow, and underflow).
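
For reference, the error-handling entry points referred to are glibc-internal helpers declared in the flt-32 math_config.h; a sketch of their declarations, repeated here for illustration:

  #include <stdint.h>

  float __math_invalidf (float);     /* raises FE_INVALID, sets errno to EDOM     */
  float __math_oflowf (uint32_t);    /* returns a signed huge value, errno ERANGE */
  float __math_uflowf (uint32_t);    /* returns a signed tiny value, errno ERANGE */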

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

latency                      master        patched   improvement
x86_64                      85.1683        47.7990        43.88%
x86_64v2                    76.8219        41.4679        46.02%
x86_64v3                    73.7775        37.7734        48.80%
aarch64 (Neoverse)          35.4514        18.0742        49.02%
power8                      22.7604        10.1054        55.60%
power10                     22.1358         9.9553        55.03%

reciprocal-throughput        master        patched   improvement
x86_64                      41.0174        19.4718        52.53%
x86_64v2                    34.8565        11.3761        67.36%
x86_64v3                    34.0325         9.6989        71.50%
aarch64 (Neoverse)          25.4349         9.2017        63.82%
power8                      13.8626         3.8486        72.24%
power10                     11.7933         3.6420        69.12%

Reviewed-by: DJ Delorie <dj@redhat.com>
2025-02-12 16:31:57 -03:00
Adhemerval Zanella de2fca9fe2 math: Use sinpif from CORE-MATH
The CORE-MATH implementation is correctly rounded (for any rounding mode)
and shows better performance than the generic sinpif.

The code was adapted to glibc style and to use the definition of
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

latency                      master        patched   improvement
x86_64                      47.5710        38.4455        19.18%
x86_64v2                    46.8828        40.7563        13.07%
x86_64v3                    44.0034        34.1497        22.39%
aarch64 (Neoverse)          19.2493        14.1968        26.25%
power8                      23.5312        16.3854        30.37%
power10                     22.6485        10.2888        54.57%

reciprocal-throughput        master        patched   improvement
x86_64                      21.8858        11.6717        46.67%
x86_64v2                    22.0620        11.9853        45.67%
x86_64v3                    21.5653        11.3291        47.47%
aarch64 (Neoverse)          13.0615         6.5499        49.85%
power8                      16.2030         6.9580        57.06%
power10                     12.8911         4.2858        66.75%

Reviewed-by: DJ Delorie <dj@redhat.com>
2025-02-12 16:31:57 -03:00
Adhemerval Zanella be85208b9f math: Use cospif from CORE-MATH
The CORE-MATH implementation is correctly rounded (for any rounding mode)
and shows better performance than the generic cospif.

The code was adapted to glibc style and to use the definition of
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

latency                    master        patched   improvement
x86_64                    47.4679        38.4157        19.07%
x86_64v2                  46.9686        38.3329        18.39%
x86_64v3                  43.8929        31.8510        27.43%
aarch64 (Neoverse)        18.8867        13.2089        30.06%
power8                    22.9435         7.8023        65.99%
power10                   15.4472        7.77505        49.67%

reciprocal-throughput      master        patched   improvement
x86_64                    20.9518        11.4991        45.12%
x86_64v2                  19.8699        10.5921        46.69%
x86_64v3                  19.3475         9.3998        51.42%
aarch64 (Neoverse)        12.5767         6.2158        50.58%
power8                    15.0566         3.2654        78.31%
power10                    9.2866         3.1147        66.46%

Reviewed-by: DJ Delorie <dj@redhat.com>
2025-02-12 16:31:57 -03:00
Adhemerval Zanella 95a01ea955 math: Use atanpif from CORE-MATH
The CORE-MATH implementation is correctly rounded (for any rounding mode)
and shows better performance than the generic atanpif.

The code was adapted to glibc style and to use the definition of
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

latency                     master        patched   improvement
x86_64                     66.3296        52.7558        20.46%
x86_64v2                   66.0429        51.4007        22.17%
x86_64v3                   60.6294        48.7876        19.53%
aarch64 (Neoverse)         24.3163        20.9110        14.00%
power8                     16.5766        13.3620        19.39%
power10                    16.5115        13.4072        18.80%

reciprocal-throughput       master        patched   improvement
x86_64                     30.8599        16.0866        47.87%
x86_64v2                   29.2286        15.4688        47.08%
x86_64v3                   23.0960        12.8510        44.36%
aarch64 (Neoverse)         15.4619        10.6752        30.96%
power8                      7.9200         5.2483        33.73%
power10                     6.8539         4.6262        32.50%

Reviewed-by: DJ Delorie <dj@redhat.com>
2025-02-12 16:31:57 -03:00
Adhemerval Zanella 1cd9ccd8c0 math: Use atan2pif from CORE-MATH
The CORE-MATH implementation is correctly rounded (for any rounding mode)
and shows better performance than the generic atan2pif.

The code was adapted to glibc style and to use the definition of
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

latency                 master        patched   improvement
x86_64                 79.4006        70.8726        10.74%
x86_64v2               77.5136        69.1424        10.80%
x86_64v3               71.8050        68.1637         5.07%
aarch64 (Neoverse)     27.8363        24.7700        11.02%
power8                 39.3893        17.2929        56.10%
power10                19.7200        16.8187        14.71%

reciprocal-throughput   master        patched   improvement
x86_64                 38.3457        30.9471        19.29%
x86_64v2               37.4023        30.3112        18.96%
x86_64v3               33.0713        24.4891        25.95%
aarch64 (Neoverse)     19.3683        15.3259        20.87%
power8                 19.5507        8.27165        57.69%
power10                9.05331        7.63775        15.64%

Reviewed-by: DJ Delorie <dj@redhat.com>
2025-02-12 16:31:57 -03:00
Adhemerval Zanella ae679a0aca math: Use asinpif from CORE-MATH
The CORE-MATH implementation is correctly rounded (for any rounding mode)
and shows better performance than the generic asinpif.

The code was adapted to glibc style and to use the definition of
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

latency                 master        patched   improvement
x86_64                 46.4996        41.6126        10.51%
x86_64v2               46.7551        38.8235        16.96%
x86_64v3               42.6235        33.7603        20.79%
aarch64 (Neoverse)     17.4161        14.3604        17.55%
power8                 10.7347         9.0193        15.98%
power10                10.6420         9.0362        15.09%

reciprocal-throughput   master        patched   improvement
x86_64                 24.7208        16.5544        33.03%
x86_64v2               24.2177        14.8938        38.50%
x86_64v3               20.5617        10.5452        48.71%
aarch64 (Neoverse)     13.4827        7.17613        46.78%
power8                 6.46134        3.56089        44.89%
power10                5.79007        3.49544        39.63%

Reviewed-by: DJ Delorie <dj@redhat.com>
2025-02-12 16:31:57 -03:00
Adhemerval Zanella edb2a8f0ae math: Use acospif from CORE-MATH
The CORE-MATH implementation is correctly rounded (for any rounding mode)
and shows better performance than the generic acospif.

The code was adapted to glibc style and to use the definition of
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

latency                  master        patched   improvement
x86_64                  54.8281        42.9070        21.74%
x86_64v2                54.1717        42.7497        21.08%
x86_64v3                49.3552        34.1512        30.81%
aarch64 (Neoverse)      17.9395        14.3733        19.88%
power8                  20.3110         8.8609        56.37%
power10                 11.3113        8.84067        21.84%

reciprocal-throughput    master        patched   improvement
x86_64                  21.2301        14.4803        31.79%
x86_64v2                20.6858        13.9506        32.56%
x86_64v3                16.1944        11.3377        29.99%
aarch64 (Neoverse)      11.4474        7.13282        37.69%
power8                  10.6916        3.57547        66.56%
power10                 4.64269        3.54145        23.72%

Reviewed-by: DJ Delorie <dj@redhat.com>
2025-02-12 16:31:57 -03:00
Samuel Thibault 392261a2b6 hurd: Replace char foo[1024] with string_t
As already done in various other places, and as advised by Roland in

https://lists.gnu.org/archive/html/bug-hurd/2012-04/msg00124.html
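
For context, string_t is the fixed-size Mach string type (typedef char string_t[1024] in the Mach headers), so the change amounts to:

  char name[1024];   /* before */
  string_t name;     /* after  */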
2025-02-10 20:10:59 +01:00
Samuel Thibault 659fa18dde hurd: Drop useless buffer initialization in ttyname*
The RPC stub will write a string anyway.
2025-02-10 20:10:59 +01:00
gfleury 6bcd7bf100 htl: stop exporting __pthread_default_barrierattr.
since all symbols that use it are now in libc.
Message-ID: <20250209200108.865599-9-gfleury@disroot.org>
2025-02-10 01:39:17 +01:00
gfleury 710bbc9659 htl: move pthread_barrier_wait into libc.
Message-ID: <20250209200108.865599-8-gfleury@disroot.org>
2025-02-10 01:39:17 +01:00
gfleury 2789003489 htl: move pthread_barrier_init into libc.
Message-ID: <20250209200108.865599-7-gfleury@disroot.org>
2025-02-10 01:39:17 +01:00
gfleury 735c9b73d6 htl: move pthread_barrier_destroy into libc.
Message-ID: <20250209200108.865599-6-gfleury@disroot.org>
2025-02-10 01:39:17 +01:00
gfleury ccf19a68ab htl: move pthread_barrierattr_getpshared, pthread_barrierattr_setpshared into libc.
Message-ID: <20250209200108.865599-5-gfleury@disroot.org>
2025-02-10 01:39:17 +01:00
gfleury ca2a95ee67 htl: move pthread_barrierattr_init into libc.
Message-ID: <20250209200108.865599-4-gfleury@disroot.org>
2025-02-10 01:18:56 +01:00
gfleury 40cbd3c361 htl: move pthread_barrierattr_destroy into libc.
Message-ID: <20250209200108.865599-3-gfleury@disroot.org>
2025-02-10 01:18:17 +01:00
gfleury 7d799d85e8 htl: move __pthread_default_barrierattr into libc.
Message-ID: <20250209200108.865599-2-gfleury@disroot.org>
2025-02-10 01:17:50 +01:00
Florian Weimer 3755ffb665 powerpc64le: Also avoid IFUNC for __mempcpy
Code used during early static startup in elf/dl-tls.c uses
__mempcpy.

Fixes commit cbd9fd2369 ("Consolidate
TLS block allocation for static binaries with ld.so").

Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2025-02-05 09:53:11 +01:00
Adhemerval Zanella 09e7f4d594 math: Fix tanf for some inputs (BZ 32630)
The logic was copied incorrectly from CORE-MATH.
2025-02-03 09:40:39 -03:00
Florian Weimer 749310c61b elf: Add l_soname accessor function for DT_SONAME values
It's not necessary to introduce temporaries because the compiler
is able to evaluate l_soname just once in constructs like:

  l_soname (l) != NULL && strcmp (l_soname (l), LIBC_SO) != 0
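
A plausible shape for such an accessor, assuming the usual link_map fields and the D_PTR helper (a sketch, not necessarily the committed code):

  static inline const char *
  l_soname (const struct link_map *l)
  {
    if (l->l_info[DT_SONAME] == NULL)
      return NULL;
    return (const char *) (D_PTR (l, l_info[DT_STRTAB])
                           + l->l_info[DT_SONAME]->d_un.d_val);
  }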
2025-02-02 20:10:09 +01:00
Florian Weimer aa1bf89039 elf: Split _dl_lookup_map, _dl_map_new_object from _dl_map_object
So that they can eventually be called separately from dlopen.
2025-02-02 20:10:08 +01:00
Sergey Bugaev a7aad6e2b7 hurd: Use the new __proc_reauthenticate_complete protocol 2025-02-01 18:20:42 +01:00
Florian Weimer 96429bcc91 elf: Do not add a copy of _dl_find_object to libc.so
This reduces code size and dependencies on ld.so internals from
libc.so.

Fixes commit f4c142bb9f
("arm: Use _dl_find_object on __gnu_Unwind_Find_exidx (BZ 31405)").

Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2025-02-01 12:37:58 +01:00
gfleury cf51d18b9d htl: move pthread_setcancelstate into libc.
sysdeps/pthread/sem_open.c: call pthread_setcancelstate directly
since the forward declaration is gone on Hurd too.
Message-ID: <20250201080202.494671-1-gfleury@disroot.org>
2025-02-01 11:24:14 +01:00
Adhemerval Zanella 04588633cf math: Fix sinhf for some inputs (BZ 32627)
The logic was copied incorrectly from CORE-MATH.
2025-01-31 13:05:41 -03:00