author     Stephane Eranian <eranian@google.com>          2015-09-03 15:23:40 +0200
committer  Arnaldo Carvalho de Melo <acme@redhat.com>     2015-09-16 18:01:03 -0300
commit     02d8dabc50f94353075f2f62b1047c1306e8bf92 (patch)
tree       32a17c8f2428f4dccb5b66caaf07fa122c8a556c /tools/perf
parent     f6cf87f748ff9480f97ff9c5caf6d6faacf52aa1 (diff)
perf stat: Fix per-pkg event reporting bug
Per-pkg events need to be captured once per processor socket. The code
in check_per_pkg() ensures only one value per processor package is used.
However, this function mishandles the case where the first CPU of a package
measures nothing for the per-pkg event while other CPUs in the same package do.
Consider the following:
$ create cgroup FOO; echo $$ >FOO/tasks; taskset -c 1 noploop &
$ perf stat -a -I 1000 -e intel_cqm/llc_occupancy/ -G FOO sleep 100
1.00000 <not counted> Bytes intel_cqm/llc_occupancy/ FOO
The reason for this is that nothing from the cgroup runs on CPU0.
Yet check_per_pkg() will mark socket 0 as processed, so no other event
value will be considered for that socket.
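
To make the failure mode concrete, here is a minimal user-space sketch of the
pre-fix logic (not the perf code itself; the struct, array, and function names
are illustrative stand-ins): the first CPU processed on a socket claims it,
whether or not the event actually ran there.

#include <stdbool.h>
#include <stdio.h>

/* Illustrative stand-in for struct perf_counts_values. */
struct counts { unsigned long val, ena, run; };

/* One flag per socket, mimicking counter->per_pkg_mask. */
static bool socket_seen[1];

/* Pre-fix behaviour: the first CPU processed on a socket marks it as
 * done, even if the event never ran on that CPU. */
static bool skip_per_pkg_old(int socket)
{
	if (socket_seen[socket])
		return true;		/* socket already counted: drop this value */
	socket_seen[socket] = true;
	return false;
}

int main(void)
{
	/* CPU0 and CPU1 sit on socket 0; only CPU1 ran the cgroup's work. */
	struct counts cpu0 = { .val = 0,    .ena = 0, .run = 0 };
	struct counts cpu1 = { .val = 4096, .ena = 1, .run = 1 };
	unsigned long total = 0;

	if (!skip_per_pkg_old(0))
		total += cpu0.val;	/* CPU0 claims socket 0 with an empty reading */
	if (!skip_per_pkg_old(0))
		total += cpu1.val;	/* CPU1's real value is discarded */

	printf("aggregated: %lu\n", total);	/* prints 0 -> "<not counted>" */
	return 0;
}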
This patch fixes the problem by having check_per_pkg() only consider
events which actually ran.
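
A sketch of the corrected check, reusing the illustrative stand-in types from
the example above (again, not the perf code itself): a CPU claims its socket
only when the event was both enabled and running there, so an idle first CPU
no longer shadows a later CPU's reading.

/* Post-fix behaviour (sketch, same stand-in types as above). */
static bool skip_per_pkg_fixed(const struct counts *vals, int socket)
{
	/* An event that did not run is not a good reason to mark the
	 * package as used; leave the socket unclaimed. */
	if (!(vals->run && vals->ena))
		return false;
	if (socket_seen[socket])
		return true;		/* socket already counted: skip this value */
	socket_seen[socket] = true;
	return false;
}

With this check in place, CPU0's empty reading in the example above no longer
marks socket 0, and CPU1's value is aggregated as expected.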
Signed-off-by: Stephane Eranian <eranian@google.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Kan Liang <kan.liang@intel.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1441286620-10117-1-git-send-email-eranian@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Diffstat (limited to 'tools/perf')
-rw-r--r--   tools/perf/util/stat.c | 16
1 file changed, 14 insertions(+), 2 deletions(-)
diff --git a/tools/perf/util/stat.c b/tools/perf/util/stat.c
index 415c359de465..2d065d065b67 100644
--- a/tools/perf/util/stat.c
+++ b/tools/perf/util/stat.c
@@ -196,7 +196,8 @@ static void zero_per_pkg(struct perf_evsel *counter)
 		memset(counter->per_pkg_mask, 0, MAX_NR_CPUS);
 }
 
-static int check_per_pkg(struct perf_evsel *counter, int cpu, bool *skip)
+static int check_per_pkg(struct perf_evsel *counter,
+			 struct perf_counts_values *vals, int cpu, bool *skip)
 {
 	unsigned long *mask = counter->per_pkg_mask;
 	struct cpu_map *cpus = perf_evsel__cpus(counter);
@@ -218,6 +219,17 @@ static int check_per_pkg(struct perf_evsel *counter, int cpu, bool *skip)
 		counter->per_pkg_mask = mask;
 	}
 
+	/*
+	 * we do not consider an event that has not run as a good
+	 * instance to mark a package as used (skip=1). Otherwise
+	 * we may run into a situation where the first CPU in a package
+	 * is not running anything, yet the second is, and this function
+	 * would mark the package as used after the first CPU and would
+	 * not read the values from the second CPU.
+	 */
+	if (!(vals->run && vals->ena))
+		return 0;
+
 	s = cpu_map__get_socket(cpus, cpu);
 	if (s < 0)
 		return -1;
@@ -235,7 +247,7 @@ process_counter_values(struct perf_stat_config *config, struct perf_evsel *evsel
 	static struct perf_counts_values zero;
 	bool skip = false;
 
-	if (check_per_pkg(evsel, cpu, &skip)) {
+	if (check_per_pkg(evsel, count, cpu, &skip)) {
 		pr_err("failed to read per-pkg counter\n");
 		return -1;
 	}