577 Commits
|
d6c51c8e5c |
perf stat: Fix L2 Topdown metrics disappear for raw events
Bugzilla: https://bugzilla.redhat.com/2123229
upstream
========
commit f0c86a2bae4fd12bfa8bad4d43fb59fb498cdd14
Author: Zhengjun Xing <zhengjun.xing@linux.intel.com>
Date: Fri Aug 26 22:00:57 2022 +0800
description
===========
In perf/Documentation/perf-stat.txt, for "--td-level" the default "0" means
the max level that the current hardware supports.
So we need to initialize stat_config.topdown_level to TOPDOWN_MAX_LEVEL
when "--td-level=0" is given or the "--td-level" option is omitted. Otherwise,
on hardware whose max level is 2, the 2nd-level metrics disappear for raw
events.
The issue cannot be observed for the perf stat default or "--topdown"
options. This commit fixes the raw events issue and removes the
duplicated code for the perf stat default.
Before:
# ./perf stat -e "cpu-clock,context-switches,cpu-migrations,page-faults,instructions,cycles,ref-cycles,branches,branch-misses,{slots,topdown-retiring,topdown-bad-spec,topdown-fe-bound,topdown-be-bound,topdown-heavy-ops,topdown-br-mispredict,topdown-fetch-lat,topdown-mem-bound}" sleep 1
Performance counter stats for 'sleep 1':
1.03 msec cpu-clock # 0.001 CPUs utilized
1 context-switches # 966.216 /sec
0 cpu-migrations # 0.000 /sec
60 page-faults # 57.973 K/sec
1,132,112 instructions # 1.41 insn per cycle
803,872 cycles # 0.777 GHz
1,909,120 ref-cycles # 1.845 G/sec
236,634 branches # 228.640 M/sec
6,367 branch-misses # 2.69% of all branches
4,823,232 slots # 4.660 G/sec
1,210,536 topdown-retiring # 25.1% Retiring
699,841 topdown-bad-spec # 14.5% Bad Speculation
1,777,975 topdown-fe-bound # 36.9% Frontend Bound
1,134,878 topdown-be-bound # 23.5% Backend Bound
189,146 topdown-heavy-ops # 182.756 M/sec
662,012 topdown-br-mispredict # 639.647 M/sec
1,097,048 topdown-fetch-lat # 1.060 G/sec
416,121 topdown-mem-bound # 402.063 M/sec
1.002423690 seconds time elapsed
0.002494000 seconds user
0.000000000 seconds sys
After:
# ./perf stat -e "cpu-clock,context-switches,cpu-migrations,page-faults,instructions,cycles,ref-cycles,branches,branch-misses,{slots,topdown-retiring,topdown-bad-spec,topdown-fe-bound,topdown-be-bound,topdown-heavy-ops,topdown-br-mispredict,topdown-fetch-lat,topdown-mem-bound}" sleep 1
Performance counter stats for 'sleep 1':
1.13 msec cpu-clock # 0.001 CPUs utilized
1 context-switches # 882.128 /sec
0 cpu-migrations # 0.000 /sec
61 page-faults # 53.810 K/sec
1,137,612 instructions # 1.29 insn per cycle
881,477 cycles # 0.778 GHz
2,093,496 ref-cycles # 1.847 G/sec
236,356 branches # 208.496 M/sec
7,090 branch-misses # 3.00% of all branches
5,288,862 slots # 4.665 G/sec
1,223,697 topdown-retiring # 23.1% Retiring
767,403 topdown-bad-spec # 14.5% Bad Speculation
2,053,322 topdown-fe-bound # 38.8% Frontend Bound
1,244,438 topdown-be-bound # 23.5% Backend Bound
186,665 topdown-heavy-ops # 3.5% Heavy Operations # 19.6% Light Operations
725,922 topdown-br-mispredict # 13.7% Branch Mispredict # 0.8% Machine Clears
1,327,400 topdown-fetch-lat # 25.1% Fetch Latency # 13.7% Fetch Bandwidth
497,775 topdown-mem-bound # 9.4% Memory Bound # 14.1% Core Bound
1.002701530 seconds time elapsed
0.002744000 seconds user
0.000000000 seconds sys
Fixes:
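The defaulting rule described above can be pictured with a small stand-alone helper; resolve_topdown_level() is an illustrative name, not the code touched by the patch:
/* Sketch only: --td-level=0 (or no --td-level) means "the deepest level
 * the hardware supports", e.g. TOPDOWN_MAX_LEVEL capped by the CPU. */
static int resolve_topdown_level(int requested, int hw_max_level)
{
        if (requested == 0)
                return hw_max_level;
        return requested;
}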
|
|
|
f0553f82d4 |
perf stat: Clear evsel->reset_group for each stat run
Bugzilla: https://bugzilla.redhat.com/2123229
upstream
========
commit bf515f024e4c0ca46a1b08c4f31860c01781d8a5
Author: Ian Rogers <irogers@google.com>
Date: Mon Aug 22 14:33:51 2022 -0700
description
===========
If a weak group is broken then the reset_group flag remains set for
the next run. Having reset_group set means the counter isn't created,
which ultimately leads to a segfault.
A simple reproduction of this is:
# perf stat -r2 -e '{cycles,cycles,cycles,cycles,cycles,cycles,cycles,cycles,cycles,cycles}:W'
which will be added as a test in the next patch.
Fixes:
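A minimal stand-alone sketch of the idea, with an illustrative struct rather than perf's evsel: clear the per-run flag before every repetition so a weak group broken in one run does not leak into the next.
struct counter {
        int reset_group;        /* set when a weak group member is broken out */
        /* ... other per-event state ... */
};

static void prepare_next_run(struct counter *counters, int nr)
{
        for (int i = 0; i < nr; i++)
                counters[i].reset_group = 0;
}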
|
|
|
e7cffd9f96 |
perf stat: Remove duplicated include in builtin-stat.c
Bugzilla: https://bugzilla.redhat.com/2123229
upstream
========
commit 8d33834f9fb06b5349b145e6499aca976deed3d8
Author: Yang Li <yang.lee@linux.alibaba.com>
Date: Thu Aug 4 08:52:13 2022 +0800
description
===========
util/topdown.h is included twice in builtin-stat.c, remove one of them.
Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Signed-off-by: Yang Li <yang.lee@linux.alibaba.com>
Tested-by: Ian Rogers <irogers@google.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: https://bugzilla.openanolis.cn/show_bug.cgi?id=1818
Link: https://lore.kernel.org/r/20220804005213.71990-1-yang.lee@linux.alibaba.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Michael Petlan <mpetlan@redhat.com>
|
|
|
13c363993a |
perf stat: Add JSON output option
Bugzilla: https://bugzilla.redhat.com/2123229 upstream ======== commit df936cadfb58ba93601ac351ab6fc2e2650cf591 Author: Claire Jensen <cjense@google.com> Date: Fri Aug 5 13:01:04 2022 -0700 description =========== CSV output is tricky to format and column layout changes are susceptible to breaking parsers. New JSON-formatted output has variable names to identify fields that are consistent and informative, making the output parseable. CSV output example: 1.20,msec,task-clock:u,1204272,100.00,0.697,CPUs utilized 0,,context-switches:u,1204272,100.00,0.000,/sec 0,,cpu-migrations:u,1204272,100.00,0.000,/sec 70,,page-faults:u,1204272,100.00,58.126,K/sec JSON output example: {"counter-value" : "3805.723968", "unit" : "msec", "event" : "cpu-clock", "event-runtime" : 3805731510100.00, "pcnt-running" : 100.00, "metric-value" : 4.007571, "metric-unit" : "CPUs utilized"} {"counter-value" : "6166.000000", "unit" : "", "event" : "context-switches", "event-runtime" : 3805723045100.00, "pcnt-running" : 100.00, "metric-value" : 1.620191, "metric-unit" : "K/sec"} {"counter-value" : "466.000000", "unit" : "", "event" : "cpu-migrations", "event-runtime" : 3805727613100.00, "pcnt-running" : 100.00, "metric-value" : 122.447136, "metric-unit" : "/sec"} {"counter-value" : "208.000000", "unit" : "", "event" : "page-faults", "event-runtime" : 3805726799100.00, "pcnt-running" : 100.00, "metric-value" : 54.654516, "metric-unit" : "/sec"} Also added documentation for JSON option. There is some tidy up of CSV code including a potential memory over run in the os.nfields set up. To facilitate this an AGGR_MAX value is added. Committer notes: Fixed up using PRIu64 to format u64 values, not %lu. Committer testing: ⬢[acme@toolbox perf]$ perf stat -j sleep 1 {"counter-value" : "0.731750", "unit" : "msec", "event" : "task-clock:u", "event-runtime" : 731750, "pcnt-running" : 100.00, "metric-value" : 0.000731, "metric-unit" : "CPUs utilized"} {"counter-value" : "0.000000", "unit" : "", "event" : "context-switches:u", "event-runtime" : 731750, "pcnt-running" : 100.00, "metric-value" : 0.000000, "metric-unit" : "/sec"} {"counter-value" : "0.000000", "unit" : "", "event" : "cpu-migrations:u", "event-runtime" : 731750, "pcnt-running" : 100.00, "metric-value" : 0.000000, "metric-unit" : "/sec"} {"counter-value" : "75.000000", "unit" : "", "event" : "page-faults:u", "event-runtime" : 731750, "pcnt-running" : 100.00, "metric-value" : 102.494021, "metric-unit" : "K/sec"} {"counter-value" : "578765.000000", "unit" : "", "event" : "cycles:u", "event-runtime" : 379366, "pcnt-running" : 49.00, "metric-value" : 0.790933, "metric-unit" : "GHz"} {"counter-value" : "1298.000000", "unit" : "", "event" : "stalled-cycles-frontend:u", "event-runtime" : 768020, "pcnt-running" : 100.00, "metric-value" : 0.224271, "metric-unit" : "frontend cycles idle"} {"counter-value" : "21984.000000", "unit" : "", "event" : "stalled-cycles-backend:u", "event-runtime" : 768020, "pcnt-running" : 100.00, "metric-value" : 3.798433, "metric-unit" : "backend cycles idle"} {"counter-value" : "468197.000000", "unit" : "", "event" : "instructions:u", "event-runtime" : 768020, "pcnt-running" : 100.00, "metric-value" : 0.808959, "metric-unit" : "insn per cycle"} {"metric-value" : 0.046955, "metric-unit" : "stalled cycles per insn"} {"counter-value" : "103335.000000", "unit" : "", "event" : "branches:u", "event-runtime" : 768020, "pcnt-running" : 100.00, "metric-value" : 141.216262, "metric-unit" : "M/sec"} {"counter-value" : "2381.000000", "unit" : "", "event" : 
"branch-misses:u", "event-runtime" : 388654, "pcnt-running" : 50.00, "metric-value" : 2.304156, "metric-unit" : "of all branches"} ⬢[acme@toolbox perf]$ Signed-off-by: Claire Jensen <cjense@google.com> Acked-by: Namhyung Kim <namhyung@kernel.org> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Alyssa Ross <hi@alyssa.is> Cc: Claire Jensen <clairej735@gmail.com> Cc: Florian Fischer <florian.fischer@muhq.space> Cc: Ingo Molnar <mingo@redhat.com> Cc: James Clark <james.clark@arm.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Like Xu <likexu@tencent.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Sandipan Das <sandipan.das@amd.com> Cc: Stephane Eranian <eranian@google.com> Cc: Xing Zhengjun <zhengjun.xing@linux.intel.com> Link: https://lore.kernel.org/r/20220805200105.2020995-2-irogers@google.com Signed-off-by: Ian Rogers <irogers@google.com> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: Michael Petlan <mpetlan@redhat.com> |
|
|
7223f07756 |
perf stat: Refactor __run_perf_stat() common code
Bugzilla: https://bugzilla.redhat.com/2123229
upstream
========
commit bb8bc52e75785af94b9ba079277547d50d018a52
Author: Adrián Herrera Arcila <adrian.herrera@arm.com>
Date: Fri Jul 29 16:12:43 2022 +0000
description
===========
This extracts common code from the branches of the forks if-then-else.
enable_counters(), which was at the beginning of both branches of the
conditional, is now unconditional; evlist__start_workload() is extracted
to a different if, which enables making the common clocking code
unconditional.
Reviewed-by: Leo Yan <leo.yan@linaro.org>
Signed-off-by: Adrián Herrera Arcila <adrian.herrera@arm.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James Clark <james.clark@arm.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/r/20220729161244.10522-1-adrian.herrera@arm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Michael Petlan <mpetlan@redhat.com>
|
|
|
73a167e3f4 |
perf stat: Add topdown metrics in the default perf stat on the hybrid machine
Bugzilla: https://bugzilla.redhat.com/2123229 upstream ======== commit 9a0b36266f7a83912592052035b84f13b12e30da Author: Zhengjun Xing <zhengjun.xing@linux.intel.com> Date: Thu Jul 21 14:57:06 2022 +0800 description =========== Topdown metrics are missed in the default perf stat on the hybrid machine, add Topdown metrics in default perf stat for hybrid systems. Currently, we support the perf metrics Topdown for the p-core PMU in the perf stat default, the perf metrics Topdown support for e-core PMU will be implemented later separately. Refactor the code adds two x86 specific functions. Widen the size of the event name column by 7 chars, so that all metrics after the "#" become aligned again. The perf metrics topdown feature is supported on the cpu_core of ADL. The dedicated perf metrics counter and the fixed counter 3 are used for the topdown events. Adding the topdown metrics doesn't trigger multiplexing. Before: # ./perf stat -a true Performance counter stats for 'system wide': 53.70 msec cpu-clock # 25.736 CPUs utilized 80 context-switches # 1.490 K/sec 24 cpu-migrations # 446.951 /sec 52 page-faults # 968.394 /sec 2,788,555 cpu_core/cycles/ # 51.931 M/sec 851,129 cpu_atom/cycles/ # 15.851 M/sec 2,974,030 cpu_core/instructions/ # 55.385 M/sec 416,919 cpu_atom/instructions/ # 7.764 M/sec 586,136 cpu_core/branches/ # 10.916 M/sec 79,872 cpu_atom/branches/ # 1.487 M/sec 14,220 cpu_core/branch-misses/ # 264.819 K/sec 7,691 cpu_atom/branch-misses/ # 143.229 K/sec 0.002086438 seconds time elapsed After: # ./perf stat -a true Performance counter stats for 'system wide': 61.39 msec cpu-clock # 24.874 CPUs utilized 76 context-switches # 1.238 K/sec 24 cpu-migrations # 390.968 /sec 52 page-faults # 847.097 /sec 2,753,695 cpu_core/cycles/ # 44.859 M/sec 903,899 cpu_atom/cycles/ # 14.725 M/sec 2,927,529 cpu_core/instructions/ # 47.690 M/sec 428,498 cpu_atom/instructions/ # 6.980 M/sec 581,299 cpu_core/branches/ # 9.470 M/sec 83,409 cpu_atom/branches/ # 1.359 M/sec 13,641 cpu_core/branch-misses/ # 222.216 K/sec 8,008 cpu_atom/branch-misses/ # 130.453 K/sec 14,761,308 cpu_core/slots/ # 240.466 M/sec 3,288,625 cpu_core/topdown-retiring/ # 22.3% retiring 1,323,323 cpu_core/topdown-bad-spec/ # 9.0% bad speculation 5,477,470 cpu_core/topdown-fe-bound/ # 37.1% frontend bound 4,679,199 cpu_core/topdown-be-bound/ # 31.7% backend bound 646,194 cpu_core/topdown-heavy-ops/ # 4.4% heavy operations # 17.9% light operations 1,244,999 cpu_core/topdown-br-mispredict/ # 8.4% branch mispredict # 0.5% machine clears 3,891,800 cpu_core/topdown-fetch-lat/ # 26.4% fetch latency # 10.7% fetch bandwidth 1,879,034 cpu_core/topdown-mem-bound/ # 12.7% memory bound # 19.0% Core bound 0.002467839 seconds time elapsed Reviewed-by: Kan Liang <kan.liang@linux.intel.com> Signed-off-by: Xing Zhengjun <zhengjun.xing@linux.intel.com> Acked-by: Ian Rogers <irogers@google.com> Acked-by: Namhyung Kim <namhyung@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20220721065706.2886112-6-zhengjun.xing@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: Michael Petlan <mpetlan@redhat.com> |
|
|
84abeb0206 |
perf evlist: Always use arch_evlist__add_default_attrs()
Bugzilla: https://bugzilla.redhat.com/2123229 upstream ======== commit a9c1ecdabc4f2ef04ef5334b8deb3a5c5910136d Author: Kan Liang <kan.liang@linux.intel.com> Date: Thu Jul 21 14:57:04 2022 +0800 description =========== Current perf stat uses the evlist__add_default_attrs() to add the generic default attrs, and uses arch_evlist__add_default_attrs() to add the Arch specific default attrs, e.g., Topdown for x86. It works well for the non-hybrid platforms. However, for a hybrid platform, the hard code generic default attrs don't work. Uses arch_evlist__add_default_attrs() to replace the evlist__add_default_attrs(). The arch_evlist__add_default_attrs() is modified to invoke the same __evlist__add_default_attrs() for the generic default attrs. No functional change. Add default_null_attrs[] to indicate the arch specific attrs. No functional change for the arch specific default attrs either. Signed-off-by: Kan Liang <kan.liang@linux.intel.com> Acked-by: Ian Rogers <irogers@google.com> Acked-by: Namhyung Kim <namhyung@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20220721065706.2886112-4-zhengjun.xing@linux.intel.com Signed-off-by: Xing Zhengjun <zhengjun.xing@linux.intel.com> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: Michael Petlan <mpetlan@redhat.com> |
|
|
72e2c8791b |
perf stat: Revert "perf stat: Add default hybrid events"
Bugzilla: https://bugzilla.redhat.com/2123229
upstream
========
commit ace3e31e653e79cae9b047e85f567e6b44c98532
Author: Kan Liang <kan.liang@linux.intel.com>
Date: Thu Jul 21 14:57:02 2022 +0800
description
===========
This reverts commit
Fixes:
|
|
|
43ed165f53 |
perf stat: Enable ignore_missing_thread
Bugzilla: https://bugzilla.redhat.com/2123231
upstream
========
commit 448ce0e6ea93ae99e0b36055e5f5a3f723fe3665
Author: Gang Li <ligang.bdlg@bytedance.com>
Date: Wed Jun 22 11:00:37 2022 +0800
description
===========
perf already supports ignore_missing_thread for -p, but it is not yet
applied to `perf stat -p <pid>`. This patch enables ignore_missing_thread
for `perf stat -p <pid>`.
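For context, the ignore_missing_thread behaviour can be sketched with the raw perf_event_open() syscall: a thread from the -p target that exits before its counter is opened fails with ESRCH and is skipped instead of failing the whole session. open_counter() below is illustrative, not perf's code.
#include <errno.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>
#include <linux/perf_event.h>

static int open_counter(struct perf_event_attr *attr, pid_t tid, int *skipped)
{
        int fd = syscall(__NR_perf_event_open, attr, tid, -1, -1, 0);

        if (fd < 0 && errno == ESRCH) {
                *skipped = 1;   /* thread already exited: ignore, do not fail */
                return -1;
        }
        return fd;              /* valid fd, or a real error */
}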
Committer notes:
And here is a refresher about the 'ignore_missing_thread' knob, from a
previous patch using it:
|
|
|
7a532ba916 |
perf stat: Add requires_cpu flag for uncore
Bugzilla: https://bugzilla.redhat.com/2123231
upstream
========
commit d3345fecf9e5f63be7946a1e5bf1f5695c67b445
Author: Adrian Hunter <adrian.hunter@intel.com>
Date: Tue May 24 10:54:33 2022 +0300
description
===========
Uncore events require a CPU, i.e. it cannot be -1.
The evsel system_wide flag is intended for events that should be on every
CPU, which does not make sense for uncore events because uncore events do
not map one-to-one with CPUs.
These 2 requirements are not exactly the same, so introduce a new flag
'requires_cpu' for the uncore case.
Signed-off-by: Michael Petlan <mpetlan@redhat.com>
|
|
|
377c417f23 |
perf stat: Always keep perf metrics topdown events in a group
Bugzilla: https://bugzilla.redhat.com/2123231
upstream
========
commit e8f4f794d7047dd36f090f44f12cd645fba204d2
Author: Kan Liang <kan.liang@linux.intel.com>
Date: Wed May 18 07:38:58 2022 -0700
description
===========
If any member in a group has a different cpu mask than the other
members, the current perf stat disables the group. When the perf metrics
topdown events are part of the group, the <not supported> error below
is triggered.
$ perf stat -e "{slots,topdown-retiring,uncore_imc_free_running_0/dclk/}" -a sleep 1
WARNING: grouped events cpus do not match, disabling group:
anon group { slots, topdown-retiring, uncore_imc_free_running_0/dclk/ }
Performance counter stats for 'system wide':
141,465,174 slots
<not supported> topdown-retiring
1,605,330,334 uncore_imc_free_running_0/dclk/
The perf metrics topdown events must always be grouped with a slots
event as leader.
Factor out evsel__remove_from_group() to remove only the regular events
from the group.
Remove evsel__must_be_in_group(), since no one uses it anymore.
With the patch, the topdown events are no longer broken out of the group
when it is split.
$ perf stat -e "{slots,topdown-retiring,uncore_imc_free_running_0/dclk/}" -a sleep 1
WARNING: grouped events cpus do not match, disabling group:
anon group { slots, topdown-retiring, uncore_imc_free_running_0/dclk/ }
Performance counter stats for 'system wide':
346,110,588 slots
124,608,256 topdown-retiring
1,606,869,976 uncore_imc_free_running_0/dclk/
1.003877592 seconds time elapsed
Fixes:
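A stand-alone sketch of the splitting rule described above, with illustrative types instead of perf's evsel: slots and topdown-* members stay attached to the leader, and only regular events are made independent.
#include <stdbool.h>
#include <string.h>

struct ev { const char *name; struct ev *leader; };

static bool is_topdown_member(const struct ev *e)
{
        return !strcmp(e->name, "slots") || !strncmp(e->name, "topdown-", 8);
}

static void remove_regular_events(struct ev *evs, int nr)
{
        for (int i = 0; i < nr; i++)
                if (!is_topdown_member(&evs[i]))
                        evs[i].leader = &evs[i];   /* becomes its own leader */
}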
|
|
|
d9b8dce991 |
perf stat: Support hybrid --topdown option
Bugzilla: https://bugzilla.redhat.com/2123231 upstream ======== commit d7e3c397087fffde68389e7530093dbc2b70c48a Author: Zhengjun Xing <zhengjun.xing@linux.intel.com> Date: Fri Apr 22 14:56:35 2022 +0800 description =========== Since for cpu_core or cpu_atom, they have different topdown events groups. For cpu_core, --topdown equals to: "{slots,cpu_core/topdown-retiring/,cpu_core/topdown-bad-spec/, cpu_core/topdown-fe-bound/,cpu_core/topdown-be-bound/, cpu_core/topdown-heavy-ops/,cpu_core/topdown-br-mispredict/, cpu_core/topdown-fetch-lat/,cpu_core/topdown-mem-bound/}" For cpu_atom, --topdown equals to: "{cpu_atom/topdown-retiring/,cpu_atom/topdown-bad-spec/, cpu_atom/topdown-fe-bound/,cpu_atom/topdown-be-bound/}" To simplify the implementation, on hybrid, --topdown is used together with --cputype. If without --cputype, it uses cpu_core topdown events by default. # ./perf stat --topdown -a sleep 1 WARNING: default to use cpu_core topdown events Performance counter stats for 'system wide': retiring bad speculation frontend bound backend bound heavy operations light operations branch mispredict machine clears fetch latency fetch bandwidth memory bound Core bound 4.1% 0.0% 5.1% 90.8% 2.3% 1.8% 0.0% 0.0% 4.2% 0.9% 9.9% 81.0% 1.002624229 seconds time elapsed # ./perf stat --topdown -a --cputype atom sleep 1 Performance counter stats for 'system wide': retiring bad speculation frontend bound backend bound 13.5% 0.1% 31.2% 55.2% 1.002366987 seconds time elapsed Signed-off-by: Michael Petlan <mpetlan@redhat.com> |
|
|
61b179a734 |
perf stat: Merge event counts from all hybrid PMUs
Bugzilla: https://bugzilla.redhat.com/2123231
upstream
========
commit 2c8e64514aa2ea414c8ada6c77405680267d0ab3
Author: Zhengjun Xing <zhengjun.xing@linux.intel.com>
Date: Fri Apr 22 14:56:34 2022 +0800
description
===========
For hybrid events, by default stat aggregates and reports the event
counts per pmu.
# ./perf stat -e cycles -a sleep 1
Performance counter stats for 'system wide':
14,066,877,268 cpu_core/cycles/
6,814,443,147 cpu_atom/cycles/
1.002760625 seconds time elapsed
Sometimes, it's also useful to aggregate event counts from all PMUs.
Create a new option '--hybrid-merge' to enable that behavior and report
the counts without PMUs.
# ./perf stat -e cycles -a --hybrid-merge sleep 1
Performance counter stats for 'system wide':
20,732,982,512 cycles
1.002776793 seconds time elapsed
Conflicts:
==========
Since 60344f1a9a59 ("perf stat: Support metrics with hybrid events") is
reverted a bit later, I haven't taken it. That causes a different context
when backporting this patch.
Signed-off-by: Michael Petlan <mpetlan@redhat.com>
|
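Conceptually, the --hybrid-merge aggregation is just a sum of the per-PMU counts for the same event; a minimal illustrative helper (not the perf code):
#include <stdint.h>

static uint64_t merge_hybrid_counts(const uint64_t *per_pmu_counts, int nr_pmus)
{
        uint64_t total = 0;

        for (int i = 0; i < nr_pmus; i++)
                total += per_pmu_counts[i];     /* e.g. cpu_core + cpu_atom */
        return total;
}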
|
|
80229988ad |
perf stat: Add user_time and system_time events
Bugzilla: https://bugzilla.redhat.com/2123231 upstream ======== commit b03b89b350034f220cc24fc77c56990a97a796b2 Author: Florian Fischer <florian.fischer@muhq.space> Date: Wed Apr 20 12:23:53 2022 +0200 description =========== It bothered me that during benchmarking using 'perf stat' (to collect for example CPU cache events) I could not simultaneously retrieve the times spend in user or kernel mode in a machine readable format. When running 'perf stat' the output for humans contains the times reported by rusage and wait4. $ perf stat -e cache-misses:u -- true Performance counter stats for 'true': 4,206 cache-misses:u 0.001113619 seconds time elapsed 0.001175000 seconds user 0.000000000 seconds sys But 'perf stat's machine-readable format does not provide this information. $ perf stat -x, -e cache-misses:u -- true 4282,,cache-misses:u,492859,100.00,, I found no way to retrieve this information using the available events while using machine-readable output. This patch adds two new tool internal events 'user_time' and 'system_time', similarly to the already present 'duration_time' event. Both events use the already collected rusage information obtained by wait4 and tracked in the global ru_stats. Examples presenting cache-misses and rusage information in both human and machine-readable form: $ perf stat -e duration_time,user_time,system_time,cache-misses -- grep -q -r duration_time . Performance counter stats for 'grep -q -r duration_time .': 67,422,542 ns duration_time:u 50,517,000 ns user_time:u 16,839,000 ns system_time:u 30,937 cache-misses:u 0.067422542 seconds time elapsed 0.050517000 seconds user 0.016839000 seconds sys $ perf stat -x, -e duration_time,user_time,system_time,cache-misses -- grep -q -r duration_time . 72134524,ns,duration_time:u,72134524,100.00,, 65225000,ns,user_time:u,65225000,100.00,, 6865000,ns,system_time:u,6865000,100.00,, 38705,,cache-misses:u,71189328,100.00,, Signed-off-by: Michael Petlan <mpetlan@redhat.com> |
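A hedged sketch of where the new tool events get their values: the rusage filled in by wait4() carries ru_utime/ru_stime as struct timeval, converted to nanoseconds for consistency with duration_time (timeval_to_ns() is an illustrative name).
#include <stdint.h>
#include <sys/resource.h>
#include <sys/time.h>

static uint64_t timeval_to_ns(struct timeval tv)
{
        return (uint64_t)tv.tv_sec * 1000000000ull +
               (uint64_t)tv.tv_usec * 1000ull;
}
/* usage.ru_utime -> user_time, usage.ru_stime -> system_time */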
|
|
c60c40f35b |
perf stat: Introduce stats for the user and system rusage times
Bugzilla: https://bugzilla.redhat.com/2123231
upstream
========
commit c735b0a5217620192a001323e1c2a4b4af5d3dea
Author: Florian Fischer <florian.fischer@muhq.space>
Date: Wed Apr 20 12:23:52 2022 +0200
description
===========
This is preparation for exporting rusage values as tool events.
Add new global stats tracking the values obtained via rusage. For now
only ru_utime and ru_stime are part of the tracked stats.
Both are stored as nanoseconds to be consistent with 'duration_time',
although the finest resolution the struct timeval data in rusage
provides is microseconds.
Signed-off-by: Michael Petlan <mpetlan@redhat.com>
|
|
|
1ed24eddc8 |
perf evlist: Rename cpus to user_requested_cpus
Bugzilla: https://bugzilla.redhat.com/2123231
upstream
========
commit 0df6ade7119daa40904b0c18871169e753663e14
Author: Ian Rogers <irogers@google.com>
Date: Mon Mar 28 16:26:44 2022 -0700
description
===========
evlist contains cpus and all_cpus. all_cpus is the union of the cpu maps
of all evsels.
For non-task targets, cpus is set to be cpus requested from the command
line, defaulting to all online cpus if no cpus are specified.
For an uncore event, all_cpus may be just CPU 0 or every online CPU.
This causes all_cpus to have fewer values than the cpus variable which is
confusing given the 'all' in the name.
To try to make the behavior clearer, rename cpus to user_requested_cpus
and add comments on the two struct variables.
Signed-off-by: Michael Petlan <mpetlan@redhat.com>
|
|
|
433d2e06c5 |
perf stat: Avoid SEGV if core.cpus isn't set
Bugzilla: https://bugzilla.redhat.com/2123231
upstream
========
commit 8a96f454f566857290867fb3943ffc37ea7d50d2
Author: Ian Rogers <irogers@google.com>
Date: Sun Mar 27 23:24:13 2022 -0700
description
===========
Passing NULL to perf_cpu_map__max doesn't make sense as there is no
valid max. Avoid this problem by null checking in
perf_stat_init_aggr_mode.
Signed-off-by: Michael Petlan <mpetlan@redhat.com>
|
|
|
c371ffa18c |
perf tools: Enhance the matching of sub-commands abbreviations
Bugzilla: https://bugzilla.redhat.com/2123231
upstream
========
commit ae0f4eb34fc3014f7eba78fab90a0e98e441a4cd
Author: Wei Li <liwei391@huawei.com>
Date: Fri Mar 25 17:20:32 2022 +0800
description
===========
We support the short commands 'rec*' for 'record' and 'rep*' for 'report'
in lots of sub-commands, but the matching is not very strict currently.
It may be puzzling sometimes: if we mis-type 'recport' intending 'report',
it will in fact perform 'record' without any message.
To fix this, add a check to ensure that the short cmd is a valid prefix
of the real command.
Committer testing:
[root@quaco ~]# perf c2c re sleep 1
Usage: perf c2c {record|report}
-v, --verbose be more verbose (show counter open errors, etc)
# perf c2c rec sleep 1
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.038 MB perf.data (16 samples) ]
# perf c2c recport sleep 1
Usage: perf c2c {record|report}
-v, --verbose be more verbose (show counter open errors, etc)
# perf c2c record sleep 1
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.038 MB perf.data (15 samples) ]
# perf c2c records sleep 1
Usage: perf c2c {record|report}
-v, --verbose be more verbose (show counter open errors, etc)
#
Signed-off-by: Michael Petlan <mpetlan@redhat.com>
|
|
|
0acf3bf328 |
perf stat: Fix forked applications enablement of counters
Bugzilla: https://bugzilla.redhat.com/2123231
upstream
========
commit d0a0a511493d269514fcbd852481cdca32c95350
Author: Thomas Richter <tmricht@linux.ibm.com>
Date: Thu Mar 17 16:53:46 2022 +0100
description
===========
I have run into the following issue:
# perf stat -a -e new_pmu/INSTRUCTION_7/ -- mytest -c1 7
Performance counter stats for 'system wide':
0 new_pmu/INSTRUCTION_7/
0.000366428 seconds time elapsed
#
The new PMU for s390 counts the execution of certain CPU instructions.
The root cause is the extremely small run time of the mytest program. It
just executes some assembly instructions and then exits.
In above invocation the instruction is executed exactly one time (-c1
option). The PMU is expected to report this one time execution by a
counter value of one, but fails to do so in some cases, not all.
Debugging reveals the invocation of the child process is done
*before* the counter events are installed and enabled.
Tracing reveals that sometimes the child process starts and exits before
the event is installed on all CPUs. The more CPUs the machine has, the
more often this miscount happens.
Fix this by starting the workload only after the events have been
installed and enabled on the specified CPUs. Now the comment also matches
the code.
Output after:
# perf stat -a -e new_pmu/INSTRUCTION_7/ -- mytest -c1 7
Performance counter stats for 'system wide':
1 new_pmu/INSTRUCTION_7/
0.000366428 seconds time elapsed
#
Now the correct result is reported rock solid every time, regardless of
how many CPUs are online.
Reviewers notes:
Jiri:
Right, without -a the event has enable_on_exec so the race does not
matter, but it's a problem for system wide with fork.
Namhyung:
Agreed. Also we may move the enable_counters() and the clock code out of
the if block to be shared with the else block.
Fixes:
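The corrected ordering can be illustrated with a tiny stand-alone program using a "go" pipe, similar in spirit to how perf gates the forked workload; this is a sketch, not perf's implementation. The child blocks until the parent has finished enabling the counters, so even a one-instruction workload is counted.
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
        int go[2];
        char c;

        if (pipe(go) < 0)
                return 1;
        if (fork() == 0) {               /* child: the measured workload */
                read(go[0], &c, 1);      /* wait for the "go" signal     */
                /* ... run the instructions to be counted here ...       */
                _exit(0);
        }
        /* parent: install and enable the counters *here*, then ...      */
        write(go[1], "g", 1);            /* ... let the workload start   */
        wait(NULL);
        return 0;
}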
|
|
|
717eef675b |
perf cpumap: Migrate to libperf cpumap api
Bugzilla: https://bugzilla.redhat.com/2069073
upstream
========
commit 440286993960bea4aa09d912a5497d92d09ae54c
Author: Ian Rogers <irogers@google.com>
Date: Fri Jan 21 20:58:10 2022 -0800
description
===========
Switch from directly accessing the perf_cpu_map to using the appropriate
libperf API when possible. Using the API simplifies the job of
refactoring use of perf_cpu_map.
Signed-off-by: Michael Petlan <mpetlan@redhat.com>
|
|
|
73a05cbb84 |
perf stat: No need to setup affinities when starting a workload
Bugzilla: https://bugzilla.redhat.com/2069073
upstream
========
commit 49de179577e7b05b57f625bf05cdc60a72de38d0
Author: Arnaldo Carvalho de Melo <acme@redhat.com>
Date: Mon Jan 17 13:09:29 2022 -0300
description
===========
I.e. the simple:
$ perf stat sleep 1
Uses a dummy CPU map and thus there is no need to setup/cleanup
affinities to avoid IPIs, etc.
With this we're down to a sched_getaffinity() call, in the libnuma
initialization, that probably can be removed in a followup patch.
Signed-off-by: Michael Petlan <mpetlan@redhat.com>
|
|
|
86ce88496c |
perf cpumap: Give CPUs their own type
Bugzilla: https://bugzilla.redhat.com/2069073
upstream
========
commit 6d18804b963b78dcd53851f11e9080408b3d85c2
Author: Ian Rogers <irogers@google.com>
Date: Tue Jan 4 22:13:51 2022 -0800
description
===========
A common problem is confusing CPU map indices with the CPU; wrapping the
CPU in a struct avoids this. This approach is similar to atomic_t.
Committer notes:
To make it build with BUILD_BPF_SKEL=1 these files needed the conversions
to 'struct perf_cpu' usage:
tools/perf/util/bpf_counter.c
tools/perf/util/bpf_counter_cgroup.c
tools/perf/util/bpf_ftrace.c
Also perf_env__get_cpu() was removed back in "perf cpumap: Switch
cpu_map__build_map to cpu function".
Additionally these needed to be fixed for the ARM builds to complete:
tools/perf/arch/arm/util/cs-etm.c
tools/perf/arch/arm64/util/pmu.c
Signed-off-by: Michael Petlan <mpetlan@redhat.com>
|
|
|
fb276aec6a |
perf stat: Correct variable name for read counter
Bugzilla: https://bugzilla.redhat.com/2069073
upstream
========
commit da8c94c065174099853a207d9716a49d339b265f
Author: Ian Rogers <irogers@google.com>
Date: Tue Jan 4 22:13:39 2022 -0800
description
===========
Switch from cpu to cpu_map_idx to reduce confusion.
Signed-off-by: Michael Petlan <mpetlan@redhat.com>
|
|
|
b146cdcd1d |
perf evsel: Pass cpu not cpu map index to synthesize
Bugzilla: https://bugzilla.redhat.com/2069073
upstream
========
commit 7ac0089d138f80dcd7ba8ca368a9b2bdfe780b16
Author: Ian Rogers <irogers@google.com>
Date: Tue Jan 4 22:13:38 2022 -0800
description
===========
evsel__write_stat_event() was incorrectly passing a cpu map index rather
than a CPU to perf_event__synthesize_stat().
Signed-off-by: Michael Petlan <mpetlan@redhat.com>
|
|
|
2f4f0bf948 |
perf evlist: Refactor evlist__for_each_cpu()
Bugzilla: https://bugzilla.redhat.com/2069073
upstream
========
commit 472832d2c000b9611feaea66fe521055c3dbf17a
Author: Ian Rogers <irogers@google.com>
Date: Tue Jan 4 22:13:37 2022 -0800
description
===========
Previously evlist__for_each_cpu() needed to iterate over the evlist in an
inner loop and call "skip" routines. Refactor this so that the iterator
is smarter and the next function can update both the current CPU and
evsel.
By using a cpu map index, fix an apparent off-by-1 in __run_perf_stat's
call to perf_evsel__close_cpu().
Signed-off-by: Michael Petlan <mpetlan@redhat.com>
|
|
|
e250bca416 |
perf cpumap: Rename cpu_map__get_X_aggr_by_cpu functions
Bugzilla: https://bugzilla.redhat.com/2069073
upstream
========
commit 973aeb3c7ada35b75442126c745bb6074cb3e172
Author: Ian Rogers <irogers@google.com>
Date: Tue Jan 4 22:13:22 2022 -0800
description
===========
The functions don't use a cpu_map so reduce them to being like
constructors of aggr_cpu_id.
Signed-off-by: Michael Petlan <mpetlan@redhat.com>
|
|
|
f19db2b78a |
perf cpumap: Refactor cpu_map__build_map()
Bugzilla: https://bugzilla.redhat.com/2069073
upstream
========
commit 5f50e15c1510c77b37e10c6b22912bf4bf11476b
Author: Ian Rogers <irogers@google.com>
Date: Tue Jan 4 22:13:21 2022 -0800
description
===========
Turn it into a cpu_aggr_map__new(). Pass helper functions. Refactor
builtin-stat calls to manually pass function pointers. Try to reduce
some copy-paste code.
Signed-off-by: Michael Petlan <mpetlan@redhat.com>
|
|
|
54a6dcb7ef |
perf cpumap: Rename empty functions
Bugzilla: https://bugzilla.redhat.com/2069073
upstream
========
commit 51b826fadf4fc42c8614b752b6cb0cb516589ade
Author: Ian Rogers <irogers@google.com>
Date: Tue Jan 4 22:13:17 2022 -0800
description
===========
Remove cpu_map from name as a cpu_map isn't used. Pass a const pointer
rather than by value to avoid unnecessary copying.
Signed-off-by: Michael Petlan <mpetlan@redhat.com>
|
|
|
bdb4b6c917 |
perf cpumap: Switch cpu_map__build_map() to cpu function
Bugzilla: https://bugzilla.redhat.com/2069073
upstream
========
commit eff54c24bb147afc0a1423b49bfa1b8eaa85a88f
Author: Ian Rogers <irogers@google.com>
Date: Tue Jan 4 22:13:09 2022 -0800
description
===========
Avoid error prone cpu_map + idx variant. Remove now unused functions.
Committer notes:
Remove by now unused perf_env__get_cpu().
Signed-off-by: Michael Petlan <mpetlan@redhat.com>
|
|
|
0002a012f8 |
perf stat: Switch to cpu version of cpu_map__get()
Bugzilla: https://bugzilla.redhat.com/2069073
upstream
========
commit 88031a0de7d68d132014154b9e5307428e8ed70d
Author: Ian Rogers <irogers@google.com>
Date: Tue Jan 4 22:13:08 2022 -0800
description
===========
Avoid possible bugs where the wrong index is passed with the cpu_map.
Signed-off-by: Michael Petlan <mpetlan@redhat.com>
|
|
|
0d045c4b75 |
perf stat: Support --cputype option for hybrid events
Bugzilla: https://bugzilla.redhat.com/2069073 upstream ======== commit e69dc84282fb474cb87097c6c945d8f90e05a4d9 Author: Jin Yao <yao.jin@linux.intel.com> Date: Thu Sep 9 14:22:15 2021 +0800 description =========== In previous patch, we have supported the syntax which enables the event on a specified pmu, such as: cpu_core/<event>/ cpu_atom/<event>/ While this syntax is not very easy for applying on a set of events or applying on a group. In following example, we have to explicitly assign the pmu prefix. # ./perf stat -e '{cpu_core/cycles/,cpu_core/instructions/}' -- sleep 1 Performance counter stats for 'sleep 1': 1,158,545 cpu_core/cycles/ 1,003,113 cpu_core/instructions/ 1.002428712 seconds time elapsed A much easier way is: # ./perf stat --cputype core -e '{cycles,instructions}' -- sleep 1 Performance counter stats for 'sleep 1': 1,101,071 cpu_core/cycles/ 939,892 cpu_core/instructions/ 1.002363142 seconds time elapsed For this example, the '--cputype' enables the events from specified pmu (cpu_core). If '--cputype' conflicts with pmu prefix, '--cputype' is ignored. # ./perf stat --cputype core -e cycles,cpu_atom/instructions/ -a -- sleep 1 Performance counter stats for 'system wide': 21,003,407 cpu_core/cycles/ 367,886 cpu_atom/instructions/ 1.002203520 seconds time elapsed Signed-off-by: Michael Petlan <mpetlan@redhat.com> |
|
|
be19bc8026 |
perf parse-event: Add init and exit to parse_event_error
Bugzilla: https://bugzilla.redhat.com/2069073
upstream
========
commit 07eafd4e053a41d72611848b8758df0752b53ee4
Author: Ian Rogers <irogers@google.com>
Date: Sun Nov 7 01:00:01 2021 -0800
description
===========
parse_events() may succeed but leave string memory allocations reachable
in the error. Add an init/exit that must be called to initialize and
clean up the error. This fixes a leak in metricgroup parse_ids.
Signed-off-by: Michael Petlan <mpetlan@redhat.com>
|
|
|
2620c00cd7 |
perf parse-events: Rename parse_events_error functions
Bugzilla: https://bugzilla.redhat.com/2069073
upstream
========
commit 6c1912898ed21bef2d7f8b52902b8bc3c0e5c2b5
Author: Ian Rogers <irogers@google.com>
Date: Sun Nov 7 01:00:00 2021 -0800
description
===========
Group error functions and name them after the data type they manipulate.
Signed-off-by: Michael Petlan <mpetlan@redhat.com>
|
|
|
10cc4a53d0 |
perf iostat: Use system-wide mode if the target cpu_list is unspecified
Bugzilla: https://bugzilla.redhat.com/2069070
upstream
========
commit e4fe5d7349e0b1c0d3da5b6b3e1efce591e85bd2
Author: Like Xu <likexu@tencent.com>
Date: Mon Sep 27 16:11:14 2021 +0800
description
===========
An iostat use case like "perf iostat 0000:16,0000:97 -- ls" should be
implemented to work in system-wide mode, to ensure that the output from
print_header() is consistent with the user documentation perf-iostat.txt,
rather than incorrectly assuming that the kernel does not support it:
Error:
The sys_perf_event_open() syscall returned with 22 (Invalid argument) \
for event (uncore_iio_0/event=0x83,umask=0x04,ch_mask=0xF,fc_mask=0x07/).
/bin/dmesg | grep -i perf may provide additional information.
This error is easily fixed by assigning system-wide mode by default
for IOSTAT_RUN only when the target cpu_list is unspecified.
Fixes:
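The rule the fix applies can be pictured with a small stand-alone helper (the struct and function names are illustrative): when no CPU list is given for iostat mode, fall back to system-wide collection.
#include <stdbool.h>

struct iostat_target { const char *cpu_list; bool system_wide; };

static void iostat_default_to_system_wide(struct iostat_target *t)
{
        if (!t->cpu_list)       /* no CPUs requested on the command line */
                t->system_wide = true;
}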
|
|
|
ebf2d15716 |
perf stat: Do not allow --for-each-cgroup without cpu
Bugzilla: https://bugzilla.redhat.com/2069070
upstream
========
commit 1c02f6c9043e9a6f359278cc2f17b4283ac0bd67
Author: Namhyung Kim <namhyung@kernel.org>
Date: Mon Aug 30 10:02:00 2021 -0700
description
===========
The cgroup mode should work with cpu events. Warn if the --for-each-cgroup
option is used with a task target, like the existing -G option does.
# perf stat --for-each-cgroup . sleep 1
both cgroup and no-aggregation modes only available in system-wide mode
Usage: perf stat [<options>] [<command>]
-G, --cgroup <name> monitor event in cgroup name only
-A, --no-aggr disable CPU count aggregation
-a, --all-cpus system-wide collection from all CPUs
--for-each-cgroup <name> expand events for each cgroup
Signed-off-by: Michael Petlan <mpetlan@redhat.com>
|
|
|
1b539daf08 |
perf tools: Enable on a list of CPUs for hybrid
Bugzilla: https://bugzilla.redhat.com/2069070 upstream ======== commit 1d3351e631fc34d73b530a67263188062fe598ba Author: Jin Yao <yao.jin@linux.intel.com> Date: Fri Jul 23 14:34:33 2021 +0800 description =========== The 'perf record' and 'perf stat' commands have supported the option '-C/--cpus' to count or collect only on the list of CPUs provided. This option needs to be supported for hybrid as well. For hybrid support, it needs to check that the cpu list are available on hybrid PMU. One example for AlderLake, cpu0-7 is 'cpu_core', cpu8-11 is 'cpu_atom'. Before: # perf stat -e cpu_core/cycles/ -C11 -- sleep 1 Performance counter stats for 'CPU(s) 11': <not supported> cpu_core/cycles/ 1.006179431 seconds time elapsed The 'perf stat' command silently returned "<not supported>" without any helpful information. It should error out pointing out that that cpu11 was not 'cpu_core'. After: # perf stat -e cpu_core/cycles/ -C11 -- sleep 1 WARNING: 11 isn't a 'cpu_core', please use a CPU list in the 'cpu_core' range (0-7) failed to use cpu list 11 We also need to support the events without pmu prefix specified. # perf stat -e cycles -C11 -- sleep 1 WARNING: 11 isn't a 'cpu_core', please use a CPU list in the 'cpu_core' range (0-7) Performance counter stats for 'CPU(s) 11': 1,067,373 cpu_atom/cycles/ 1.005544738 seconds time elapsed The perf tool creates two cycles events automatically, cpu_core/cycles/ and cpu_atom/cycles/. It checks that cpu11 is not 'cpu_core', then shows a warning for cpu_core/cycles/ and only count the cpu_atom/cycles/. If part of cpus are 'cpu_core' and part of cpus are 'cpu_atom', for example, # perf stat -e cycles -C0,11 -- sleep 1 WARNING: use 0 in 'cpu_core' for 'cycles', skip other cpus in list. WARNING: use 11 in 'cpu_atom' for 'cycles', skip other cpus in list. Performance counter stats for 'CPU(s) 0,11': 1,914,704 cpu_core/cycles/ 2,036,983 cpu_atom/cycles/ 1.005815641 seconds time elapsed It now automatically selects cpu0 for cpu_core/cycles/, selects cpu11 for cpu_atom/cycles/, and output with some warnings. Some more complex examples, # perf stat -e cycles,instructions -C0,11 -- sleep 1 WARNING: use 0 in 'cpu_core' for 'cycles', skip other cpus in list. WARNING: use 11 in 'cpu_atom' for 'cycles', skip other cpus in list. WARNING: use 0 in 'cpu_core' for 'instructions', skip other cpus in list. WARNING: use 11 in 'cpu_atom' for 'instructions', skip other cpus in list. Performance counter stats for 'CPU(s) 0,11': 2,780,387 cpu_core/cycles/ 1,583,432 cpu_atom/cycles/ 3,957,277 cpu_core/instructions/ 1,167,089 cpu_atom/instructions/ 1.006005124 seconds time elapsed # perf stat -e cycles,cpu_atom/instructions/ -C0,11 -- sleep 1 WARNING: use 0 in 'cpu_core' for 'cycles', skip other cpus in list. WARNING: use 11 in 'cpu_atom' for 'cycles', skip other cpus in list. WARNING: use 11 in 'cpu_atom' for 'cpu_atom/instructions/', skip other cpus in list. Performance counter stats for 'CPU(s) 0,11': 3,290,301 cpu_core/cycles/ 1,953,073 cpu_atom/cycles/ 1,407,869 cpu_atom/instructions/ 1.006260912 seconds time elapsed Signed-off-by: Michael Petlan <mpetlan@redhat.com> |
|
|
1df7d85569 |
perf tools: Remove repipe argument from perf_session__new()
Bugzilla: https://bugzilla.redhat.com/2069070
upstream
========
commit 2681bd85a4b92788e265934d0d76bd56b5b08d16
Author: Namhyung Kim <namhyung@kernel.org>
Date: Mon Jul 19 15:31:49 2021 -0700
description
===========
The repipe argument is only used by perf inject and all the others pass
'false'. Let's remove it from the function signature and add
__perf_session__new() to be called from perf inject directly.
This is in preparation for changing the pipe input/output.
[ Fixed up some trivial conflicts as this patchset fell thru the cracks ;-( ]
Signed-off-by: Michael Petlan <mpetlan@redhat.com>
|
|
|
e0a7ef2a62 |
perf stat: Merge uncore events by default for hybrid platform
On a hybrid platform, by default 'perf stat' aggregates and reports the event counts per PMU. For example, # perf stat -e cycles -a true Performance counter stats for 'system wide': 1,400,445 cpu_core/cycles/ 680,881 cpu_atom/cycles/ 0.001770773 seconds time elapsed But for uncore events that's not a suitable method. Uncore has nothing to do with hybrid. So for uncore events, we aggregate event counts from all PMUs and report the counts without PMUs. Before: # perf stat -e arb/event=0x81,umask=0x1/,arb/event=0x84,umask=0x1/ -a true Performance counter stats for 'system wide': 2,058 uncore_arb_0/event=0x81,umask=0x1/ 2,028 uncore_arb_1/event=0x81,umask=0x1/ 0 uncore_arb_0/event=0x84,umask=0x1/ 0 uncore_arb_1/event=0x84,umask=0x1/ 0.000614498 seconds time elapsed After: # perf stat -e arb/event=0x81,umask=0x1/,arb/event=0x84,umask=0x1/ -a true Performance counter stats for 'system wide': 3,996 arb/event=0x81,umask=0x1/ 0 arb/event=0x84,umask=0x1/ 0.000630046 seconds time elapsed Of course, we also keep the '--no-merge' working for uncore events. # perf stat -e arb/event=0x81,umask=0x1/,arb/event=0x84,umask=0x1/ --no-merge true Performance counter stats for 'system wide': 1,952 uncore_arb_0/event=0x81,umask=0x1/ 1,921 uncore_arb_1/event=0x81,umask=0x1/ 0 uncore_arb_0/event=0x84,umask=0x1/ 0 uncore_arb_1/event=0x84,umask=0x1/ 0.000575536 seconds time elapsed Signed-off-by: Jin Yao <yao.jin@linux.intel.com> Acked-by: Jiri Olsa <jolsa@redhat.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Kan Liang <kan.liang@intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20210707055652.962-1-yao.jin@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
|
|
5f148e7c6a |
perf stat: Add Topdown metrics L2 events as default events
The Topdown Microarchitecture Analysis (TMA) Method is a structured
analysis methodology to identify critical performance bottlenecks in
out-of-order processors.
The Topdown metrics L1 event was added as default in
|
|
|
fba7c86601 |
libperf: Move 'leader' from tools/perf to perf_evsel::leader
Move evsel::leader to perf_evsel::leader, so we can move the group
interface to libperf. Also add several evsel helpers to ease up the
transition:
struct evsel *evsel__leader(struct evsel *evsel);
- get leader evsel
bool evsel__has_leader(struct evsel *evsel, struct evsel *leader);
- true if evsel has leader as leader
bool evsel__is_leader(struct evsel *evsel);
- true if evsel is its own leader
void evsel__set_leader(struct evsel *evsel, struct evsel *leader);
- set leader for evsel
Committer notes:
Fix this when building with 'make BUILD_BPF_SKEL=1'
tools/perf/util/bpf_counter.c
- if (evsel->leader->core.nr_members > 1) {
+ if (evsel->core.leader->nr_members > 1) {
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Requested-by: Shunsuke Nakamura <nakamura.shun@fujitsu.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Ian Rogers <irogers@google.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Michael Petlan <mpetlan@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/20210706151704.73662-4-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
|
f8b61bd204 |
perf stat: Skip evlist__[enable|disable] when all events uses BPF
When all events of a perf-stat session use BPF, it is not necessary to
call evlist__enable() and evlist__disable(). Skip them when
all_counters_use_bpf is true.
Signed-off-by: Song Liu <song@kernel.org>
Reported-by: Jiri Olsa <jolsa@redhat.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
|
660e533e87 |
perf stat: Warn group events from different hybrid PMU
If a group has events which are from different hybrid PMUs, shows a warning: "WARNING: events in group from different hybrid PMUs!" This is to remind the user not to put the core event and atom event into one group. Next, just disable grouping. # perf stat -e "{cpu_core/cycles/,cpu_atom/cycles/}" -a -- sleep 1 WARNING: events in group from different hybrid PMUs! WARNING: grouped events cpus do not match, disabling group: anon group { cpu_core/cycles/, cpu_atom/cycles/ } Performance counter stats for 'system wide': 5,438,125 cpu_core/cycles/ 3,914,586 cpu_atom/cycles/ 1.004250966 seconds time elapsed Signed-off-by: Jin Yao <yao.jin@linux.intel.com> Reviewed-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Kan Liang <kan.liang@intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20210427070139.25256-17-yao.jin@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
|
|
ac2dc29edd |
perf stat: Add default hybrid events
Previously if '-e' is not specified in perf stat, some software events and hardware events are added to evlist by default. Before: # perf stat -a -- sleep 1 Performance counter stats for 'system wide': 24,044.40 msec cpu-clock # 23.946 CPUs utilized 99 context-switches # 4.117 /sec 24 cpu-migrations # 0.998 /sec 3 page-faults # 0.125 /sec 7,000,244 cycles # 0.000 GHz 2,955,024 instructions # 0.42 insn per cycle 608,941 branches # 25.326 K/sec 31,991 branch-misses # 5.25% of all branches 1.004106859 seconds time elapsed Among the events, cycles, instructions, branches and branch-misses are hardware events. One hybrid platform, two hardware events are created for one hardware event. cpu_core/cycles/, cpu_atom/cycles/, cpu_core/instructions/, cpu_atom/instructions/, cpu_core/branches/, cpu_atom/branches/, cpu_core/branch-misses/, cpu_atom/branch-misses/ These events would be added to evlist on hybrid platform. Since parse_events() has been supported to create two hardware events for one event on hybrid platform, so we just use parse_events(evlist, "cycles,instructions,branches,branch-misses") to create the default events and add them to evlist. After: # perf stat -a -- sleep 1 Performance counter stats for 'system wide': 24,043.99 msec cpu-clock # 23.991 CPUs utilized 139 context-switches # 5.781 /sec 25 cpu-migrations # 1.040 /sec 6 page-faults # 0.250 /sec 10,381,751 cpu_core/cycles/ # 431.782 K/sec 1,264,216 cpu_atom/cycles/ # 52.579 K/sec 3,406,958 cpu_core/instructions/ # 141.697 K/sec 414,588 cpu_atom/instructions/ # 17.243 K/sec 705,149 cpu_core/branches/ # 29.327 K/sec 82,358 cpu_atom/branches/ # 3.425 K/sec 40,821 cpu_core/branch-misses/ # 1.698 K/sec 9,086 cpu_atom/branch-misses/ # 377.891 /sec 1.002228863 seconds time elapsed We can see two events are created for one hardware event. One TODO is, the shadow stats looks a bit different, now it's just 'M/sec'. The perf_stat__update_shadow_stats and perf_stat__print_shadow_stats need to be improved in future if we want to get the original shadow stats. Signed-off-by: Jin Yao <yao.jin@linux.intel.com> Reviewed-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Kan Liang <kan.liang@intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20210427070139.25256-15-yao.jin@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
|
|
12279429d8 |
perf stat: Uniquify hybrid event name
It would be useful to let the user know the PMU which the event belongs
to. perf-stat has supported the '--no-merge' option and it can print the
pmu name after the event name, such as:
"cycles [cpu_core]"
Now this option is enabled by default for hybrid platforms but the format
is changed to:
"cpu_core/cycles/"
If the user configures the name, we still use the user-specified name.
Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Reviewed-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Kan Liang <kan.liang@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: https://lore.kernel.org/r/20210427070139.25256-8-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
|
112cb56164 |
perf stat: Introduce config stat.bpf-counter-events
Currently, to use BPF to aggregate perf event counters, the user uses the
--bpf-counters option. Enable "use bpf by default" events with a config
option, stat.bpf-counter-events. Events whose name is in the option will
use BPF.
This also enables mixing BPF events and regular events in the same
session. For example:
perf config stat.bpf-counter-events=instructions
perf stat -e instructions,cs
The second command will use BPF for "instructions" but not "cs".
Signed-off-by: Song Liu <song@kernel.org>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/r/20210425214333.1090950-4-song@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
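The name matching implied by stat.bpf-counter-events can be sketched with a simple comma-separated lookup (illustrative helper, not the perf code): an event uses the BPF counter path only if its exact name appears in the configured list.
#include <stdbool.h>
#include <string.h>

static bool event_name_in_list(const char *name, const char *list)
{
        size_t len = strlen(name);
        const char *p = list;

        while (p && *p) {
                if (!strncmp(p, name, len) && (p[len] == ',' || p[len] == '\0'))
                        return true;
                p = strchr(p, ',');
                if (p)
                        p++;
        }
        return false;
}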
|
|
f07952b179 |
perf stat: Basic support for iostat in perf
Add basic flow for a new iostat mode in perf. The mode is intended to
provide four I/O performance metrics per PCIe root port: Inbound Read,
Inbound Write, Outbound Read, Outbound Write.
The actual code to compute the metrics and attribute them to a root port
is in follow-on patches.
Signed-off-by: Alexander Antonov <alexander.antonov@linux.intel.com>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Alexey V Bayduraev <alexey.v.bayduraev@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Ian Rogers <irogers@google.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: https://lore.kernel.org/r/20210419094147.15909-2-alexander.antonov@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
|
0bdad97801 |
perf stat: Align CSV output for summary mode
The 'perf stat' subcommand supports the request for a summary of the interval counter readings. But the summary lines break the CSV output so it's hard for scripts to parse the result. Before: # perf stat -x, -I1000 --interval-count 1 --summary 1.001323097,8013.48,msec,cpu-clock,8013483384,100.00,8.013,CPUs utilized 1.001323097,270,,context-switches,8013513297,100.00,0.034,K/sec 1.001323097,13,,cpu-migrations,8013530032,100.00,0.002,K/sec 1.001323097,184,,page-faults,8013546992,100.00,0.023,K/sec 1.001323097,20574191,,cycles,8013551506,100.00,0.003,GHz 1.001323097,10562267,,instructions,8013564958,100.00,0.51,insn per cycle 1.001323097,2019244,,branches,8013575673,100.00,0.252,M/sec 1.001323097,106152,,branch-misses,8013585776,100.00,5.26,of all branches 8013.48,msec,cpu-clock,8013483384,100.00,7.984,CPUs utilized 270,,context-switches,8013513297,100.00,0.034,K/sec 13,,cpu-migrations,8013530032,100.00,0.002,K/sec 184,,page-faults,8013546992,100.00,0.023,K/sec 20574191,,cycles,8013551506,100.00,0.003,GHz 10562267,,instructions,8013564958,100.00,0.51,insn per cycle 2019244,,branches,8013575673,100.00,0.252,M/sec 106152,,branch-misses,8013585776,100.00,5.26,of all branches The summary line loses the timestamp column, which breaks the CSV output. We add a column at the original 'timestamp' position and it just says 'summary' for the summary line. After: # perf stat -x, -I1000 --interval-count 1 --summary 1.001196053,8012.72,msec,cpu-clock,8012722903,100.00,8.013,CPUs utilized 1.001196053,218,,context-switches,8012753271,100.00,0.027,K/sec 1.001196053,9,,cpu-migrations,8012769767,100.00,0.001,K/sec 1.001196053,0,,page-faults,8012786257,100.00,0.000,K/sec 1.001196053,15004518,,cycles,8012790637,100.00,0.002,GHz 1.001196053,7954691,,instructions,8012804027,100.00,0.53,insn per cycle 1.001196053,1590259,,branches,8012814766,100.00,0.198,M/sec 1.001196053,82601,,branch-misses,8012824365,100.00,5.19,of all branches summary,8012.72,msec,cpu-clock,8012722903,100.00,7.986,CPUs utilized summary,218,,context-switches,8012753271,100.00,0.027,K/sec summary,9,,cpu-migrations,8012769767,100.00,0.001,K/sec summary,0,,page-faults,8012786257,100.00,0.000,K/sec summary,15004518,,cycles,8012790637,100.00,0.002,GHz summary,7954691,,instructions,8012804027,100.00,0.53,insn per cycle summary,1590259,,branches,8012814766,100.00,0.198,M/sec summary,82601,,branch-misses,8012824365,100.00,5.19,of all branches Now it's easy for script to analyse the summary lines. Of course, we also consider not to break possible existing scripts which can continue to use the broken CSV format by using a new '--no-csv-summary.' option. 
# perf stat -x, -I1000 --interval-count 1 --summary --no-csv-summary 1.001213261,8012.67,msec,cpu-clock,8012672327,100.00,8.013,CPUs utilized 1.001213261,197,,context-switches,8012703742,100.00,24.586,/sec 1.001213261,9,,cpu-migrations,8012720902,100.00,1.123,/sec 1.001213261,644,,page-faults,8012738266,100.00,80.373,/sec 1.001213261,18350698,,cycles,8012744109,100.00,0.002,GHz 1.001213261,12745021,,instructions,8012759001,100.00,0.69,insn per cycle 1.001213261,2458033,,branches,8012770864,100.00,306.768,K/sec 1.001213261,102107,,branch-misses,8012781751,100.00,4.15,of all branches 8012.67,msec,cpu-clock,8012672327,100.00,7.985,CPUs utilized 197,,context-switches,8012703742,100.00,24.586,/sec 9,,cpu-migrations,8012720902,100.00,1.123,/sec 644,,page-faults,8012738266,100.00,80.373,/sec 18350698,,cycles,8012744109,100.00,0.002,GHz 12745021,,instructions,8012759001,100.00,0.69,insn per cycle 2458033,,branches,8012770864,100.00,306.768,K/sec 102107,,branch-misses,8012781751,100.00,4.15,of all branches This option can be enabled in perf config by setting the variable 'stat.no-csv-summary'. # perf config stat.no-csv-summary=true # perf config -l stat.no-csv-summary=true # perf stat -x, -I1000 --interval-count 1 --summary 1.001330198,8013.28,msec,cpu-clock,8013279201,100.00,8.013,CPUs utilized 1.001330198,205,,context-switches,8013308394,100.00,25.583,/sec 1.001330198,10,,cpu-migrations,8013324681,100.00,1.248,/sec 1.001330198,0,,page-faults,8013340926,100.00,0.000,/sec 1.001330198,8027742,,cycles,8013344503,100.00,0.001,GHz 1.001330198,2871717,,instructions,8013356501,100.00,0.36,insn per cycle 1.001330198,553564,,branches,8013366204,100.00,69.081,K/sec 1.001330198,54021,,branch-misses,8013375952,100.00,9.76,of all branches 8013.28,msec,cpu-clock,8013279201,100.00,7.985,CPUs utilized 205,,context-switches,8013308394,100.00,25.583,/sec 10,,cpu-migrations,8013324681,100.00,1.248,/sec 0,,page-faults,8013340926,100.00,0.000,/sec 8027742,,cycles,8013344503,100.00,0.001,GHz 2871717,,instructions,8013356501,100.00,0.36,insn per cycle 553564,,branches,8013366204,100.00,69.081,K/sec 54021,,branch-misses,8013375952,100.00,9.76,of all branches Signed-off-by: Jin Yao <yao.jin@linux.intel.com> Acked-by: Andi Kleen <ak@linux.intel.com> Acked-by: Jiri Olsa <jolsa@redhat.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Jin Yao <yao.jin@intel.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lore.kernel.org/lkml/20210319070156.20394-1-yao.jin@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
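A hedged sketch of the column rule described above (names are illustrative, not the perf code): summary lines get a literal "summary" in the timestamp column unless --no-csv-summary is in effect.
#include <stdbool.h>
#include <stdio.h>

static void print_csv_first_column(FILE *out, bool is_summary,
                                   bool no_csv_summary, double ts, char sep)
{
        if (is_summary) {
                if (!no_csv_summary)
                        fprintf(out, "summary%c", sep);
        } else {
                fprintf(out, "%.9f%c", ts, sep);
        }
}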
|
|
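A note for consumers of the new format (an illustrative sketch, not part of the commit above): because every summary row now carries a literal 'summary' tag in its first column, a plain CSV reader can separate interval rows from summary rows. The Python sketch below assumes the field layout shown in the example output (timestamp or 'summary', value, unit, event name, run time, enabled percentage, metric value, metric unit); the script itself is hypothetical.

#!/usr/bin/env python3
# Illustrative only: split "perf stat -x, -I... --summary" CSV output into
# interval rows and summary rows, relying on the 'summary' tag in column 0.
import csv
import sys

def parse_perf_csv(lines):
    intervals, summary = [], []
    for row in csv.reader(lines):
        if not row or row[0].startswith('#'):
            continue                    # skip empty and comment lines
        if row[0] == 'summary':
            summary.append(row[1:])     # summary rows are tagged in column 0
        else:
            intervals.append(row)       # interval rows keep their timestamp
    return intervals, summary

if __name__ == '__main__':
    intervals, summary = parse_perf_csv(sys.stdin)
    print(f"{len(intervals)} interval rows, {len(summary)} summary rows")

Since perf stat writes its counter output to stderr by default, such a script would typically be fed from the same perf stat invocation shown above after redirecting with 2>&1.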
435b46ef1d |
perf stat: Measure 't0' and 'ref_time' after enable_counters()
Take measurements of 't0' and 'ref_time' after enable_counters(), so that they only measure the time consumed when the counters are enabled.

Signed-off-by: Song Liu <songliubraving@fb.com>
Acked-by: Andi Kleen <andi@firstfloor.org>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Cc: kernel-team@fb.com
Link: http://lore.kernel.org/lkml/20210316211837.910506-3-songliubraving@fb.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
|
|
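The ordering matters because everything that happens between reading 't0' and the end of the run is charged to the measured interval. A minimal sketch of that effect (plain Python, not perf's actual code; enable_counters() and run_workload() are hypothetical stand-ins):

# Minimal sketch: compare taking the start timestamp before vs. after a
# (stand-in) enable_counters() step.
import time

def enable_counters():
    time.sleep(0.05)      # stand-in for the non-trivial enable path

def run_workload():
    time.sleep(0.2)       # stand-in for the measured workload

# Old ordering: t0 taken first, so the enable overhead is included.
t0 = time.monotonic()
enable_counters()
run_workload()
print(f"t0 before enable: {time.monotonic() - t0:.3f}s")

# New ordering: t0 taken once the counters are already enabled.
enable_counters()
t0 = time.monotonic()
run_workload()
print(f"t0 after enable:  {time.monotonic() - t0:.3f}s")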
7fac83aaf2 |
perf stat: Introduce 'bperf' to share hardware PMCs with BPF
The perf tool uses performance monitoring counters (PMCs) to monitor system performance. The PMCs are limited hardware resources. For example, Intel CPUs have 3x fixed PMCs and 4x programmable PMCs per cpu.

Modern data center systems use these PMCs in many different ways: system level monitoring, (maybe nested) container level monitoring, per process monitoring, profiling (in sample mode), etc. In some cases, there are more active perf_events than available hardware PMCs. To allow all perf_events to have a chance to run, it is necessary to do expensive time multiplexing of events.

On the other hand, many monitoring tools count the common metrics (cycles, instructions). It is a waste to have multiple tools create multiple perf_events of "cycles" and occupy multiple PMCs.

bperf tries to reduce such waste by allowing multiple perf_events of "cycles" or "instructions" (at different scopes) to share PMUs. Instead of having each perf-stat session read its own perf_events, bperf uses BPF programs to read the perf_events and aggregate the readings into BPF maps. The perf-stat session(s) then read the values from these BPF maps. Please refer to the comment before the definition of bperf_ops for a description of the bperf architecture.

bperf is off by default. To enable it, pass the --bpf-counters option to perf-stat. bperf uses a BPF hashmap to share information about the BPF programs and maps used by bperf. This map is pinned to bpffs; the default path is /sys/fs/bpf/perf_attr_map and can be changed with the --bpf-attr-map option.

Committer testing:

# dmesg|grep "Performance Events" -A5
[ 0.225277] Performance Events: Fam17h+ core perfctr, AMD PMU driver.
[ 0.225280] ... version: 0
[ 0.225280] ... bit width: 48
[ 0.225281] ... generic registers: 6
[ 0.225281] ... value mask: 0000ffffffffffff
[ 0.225281] ... max period: 00007fffffffffff
#
# for a in $(seq 6) ; do perf stat -a -e cycles,instructions sleep 100000 & done
[1] 2436231
[2] 2436232
[3] 2436233
[4] 2436234
[5] 2436235
[6] 2436236
# perf stat -a -e cycles,instructions sleep 0.1

Performance counter stats for 'system wide':

310,326,987 cycles (41.87%)
236,143,290 instructions # 0.76 insn per cycle (41.87%)

0.100800885 seconds time elapsed
#

We can see that the counters were enabled for this workload 41.87% of the time.

Now with --bpf-counters:

# for a in $(seq 32) ; do perf stat --bpf-counters -a -e cycles,instructions sleep 100000 & done
[1] 2436514
[2] 2436515
[3] 2436516
[4] 2436517
[5] 2436518
[6] 2436519
[7] 2436520
[8] 2436521
[9] 2436522
[10] 2436523
[11] 2436524
[12] 2436525
[13] 2436526
[14] 2436527
[15] 2436528
[16] 2436529
[17] 2436530
[18] 2436531
[19] 2436532
[20] 2436533
[21] 2436534
[22] 2436535
[23] 2436536
[24] 2436537
[25] 2436538
[26] 2436539
[27] 2436540
[28] 2436541
[29] 2436542
[30] 2436543
[31] 2436544
[32] 2436545
#
# ls -la /sys/fs/bpf/perf_attr_map
-rw-------. 1 root root 0 Mar 23 14:53 /sys/fs/bpf/perf_attr_map
# bpftool map | grep bperf | wc -l
64
#
# bpftool map | tail
1265: percpu_array name accum_readings flags 0x0 key 4B value 24B max_entries 1 memlock 4096B
1266: hash name filter flags 0x0 key 4B value 4B max_entries 1 memlock 4096B
1267: array name bperf_fo.bss flags 0x400 key 4B value 8B max_entries 1 memlock 4096B btf_id 996 pids perf(2436545)
1268: percpu_array name accum_readings flags 0x0 key 4B value 24B max_entries 1 memlock 4096B
1269: hash name filter flags 0x0 key 4B value 4B max_entries 1 memlock 4096B
1270: array name bperf_fo.bss flags 0x400 key 4B value 8B max_entries 1 memlock 4096B btf_id 997 pids perf(2436541)
1285: array name pid_iter.rodata flags 0x480 key 4B value 4B max_entries 1 memlock 4096B btf_id 1017 frozen pids bpftool(2437504)
1286: array flags 0x0 key 4B value 32B max_entries 1 memlock 4096B
#
# bpftool map dump id 1268 | tail
value (CPU 21): 8f f3 bc ca 00 00 00 00 80 fd 2a d1 4d 00 00 00 80 fd 2a d1 4d 00 00 00
value (CPU 22): 7e d5 64 4d 00 00 00 00 a4 8a 2e ee 4d 00 00 00 a4 8a 2e ee 4d 00 00 00
value (CPU 23): a7 78 3e 06 01 00 00 00 b2 34 94 f6 4d 00 00 00 b2 34 94 f6 4d 00 00 00
Found 1 element
# bpftool map dump id 1268 | tail
value (CPU 21): c6 8b d9 ca 00 00 00 00 20 c6 fc 83 4e 00 00 00 20 c6 fc 83 4e 00 00 00
value (CPU 22): 9c b4 d2 4d 00 00 00 00 3e 0c df 89 4e 00 00 00 3e 0c df 89 4e 00 00 00
value (CPU 23): 18 43 66 06 01 00 00 00 5b 69 ed 83 4e 00 00 00 5b 69 ed 83 4e 00 00 00
Found 1 element
# bpftool map dump id 1268 | tail
value (CPU 21): f2 6e db ca 00 00 00 00 92 67 4c ba 4e 00 00 00 92 67 4c ba 4e 00 00 00
value (CPU 22): dc 8e e1 4d 00 00 00 00 d9 32 7a c5 4e 00 00 00 d9 32 7a c5 4e 00 00 00
value (CPU 23): bd 2b 73 06 01 00 00 00 7c 73 87 bf 4e 00 00 00 7c 73 87 bf 4e 00 00 00
Found 1 element
#
# perf stat --bpf-counters -a -e cycles,instructions sleep 0.1

Performance counter stats for 'system wide':

119,410,122 cycles
152,105,479 instructions # 1.27 insn per cycle

0.101395093 seconds time elapsed
#

See? We had the counters enabled all the time.

Signed-off-by: Song Liu <songliubraving@fb.com>
Reviewed-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: kernel-team@fb.com
Link: http://lore.kernel.org/lkml/20210316211837.910506-2-songliubraving@fb.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
|
|
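As a rough illustration of the sharing idea described above (plain Python rather than the actual BPF implementation; SharedCounter and Session are hypothetical stand-ins for the BPF map and a perf-stat session): one reader owns the hardware counter and accumulates its readings into a shared structure, while each session only records a start snapshot and reports the delta, so many sessions can coexist without time multiplexing.

# Conceptual sketch only -- plain Python, not the BPF implementation.
from dataclasses import dataclass

@dataclass
class SharedCounter:
    """Stand-in for the BPF-map-backed accumulated reading of one event."""
    accumulated: int = 0

    def add_sample(self, delta: int):
        self.accumulated += delta       # the single reader updates this

@dataclass
class Session:
    """Stand-in for one perf-stat session attached to the shared counter."""
    counter: SharedCounter
    start: int = 0

    def enable(self):
        self.start = self.counter.accumulated

    def read(self) -> int:
        return self.counter.accumulated - self.start

cycles = SharedCounter()
a, b = Session(cycles), Session(cycles)

a.enable()
cycles.add_sample(1_000)    # the reader drains the hardware counter once...
b.enable()
cycles.add_sample(2_500)    # ...and keeps accumulating for every session

print(a.read(), b.read())   # a sees 3500, b sees 2500 -- no PMC multiplexing

The real bperf additionally pins its sharing metadata to bpffs (the /sys/fs/bpf/perf_attr_map path mentioned above) so that independent perf processes can find and reuse each other's counters.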
4d39c89f0b |
perf tools: Fix various typos in comments
Fix ~124 single-word typos and a few spelling errors in the perf tooling code, accumulated over the years.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: https://lore.kernel.org/r/20210321113734.GA248990@gmail.com
Link: http://lore.kernel.org/lkml/20210323160915.GA61903@gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |