author     Linus Torvalds <torvalds@linux-foundation.org>  2017-11-13 13:05:08 -0800
committer  Linus Torvalds <torvalds@linux-foundation.org>  2017-11-13 13:05:08 -0800
commit     31486372a1e9a66ec2e9e2903b8792bba7e503e1
tree       84f61e0758695e6fbee8d2b7d7b9bc4a5ce6e03b
parent     8e9a2dba8686187d8c8179e5b86640e653963889
parent     fcdfafcb73be8fa45909327bbddca46fb362a675
Merge branch 'perf-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull perf updates from Ingo Molnar:
"The main changes in this cycle were:
Kernel:
- kprobes updates: use better W^X patterns for code modifications,
improve optprobes, remove jprobes. (Masami Hiramatsu, Kees Cook)
- core fixes: event timekeeping (enabled/running times statistics)
fixes, perf_event_read() locking fixes and cleanups, etc. (Peter
Zijlstra)
- Extend x86 Intel free-running PEBS support and support x86
user-register sampling in perf record and perf script. (Andi Kleen)
Tooling:
- Completely rework the way inline frames are handled. Instead of
querying for the inline nodes on-demand in the individual tools, we
now create proper callchain nodes for inlined frames. (Milian
Wolff)
- 'perf trace' updates (Arnaldo Carvalho de Melo)
- Implement a way to print formatted output to per-event files in
'perf script' to facilitate generating flamegraphs, eliminating the
need to write scripts to do that separation (yuzhoujian, Arnaldo
Carvalho de Melo)
- Update vendor events JSON metrics for Intel's Broadwell, Broadwell
Server, Haswell, Haswell Server, IvyBridge, IvyTown, JakeTown,
Sandy Bridge, Skylake, SkyLake Server - and Goldmont Plus V1 (Andi
Kleen, Kan Liang)
- Multithread the synthesizing of PERF_RECORD_ events for
pre-existing threads in 'perf top', speeding up that phase, greatly
improving the user experience in systems such as Intel's Knights
Mill (Kan Liang)
- Introduce the concept of weak groups in 'perf stat': try to set up
a group, but if it's not schedulable, fall back to not using a group.
That gives us the best of both worlds: groups if they work, but
still a usable fallback if they don't (Andi Kleen; see the example
command lines after this list)
- perf sched timehist enhancements (David Ahern)
- ... various other enhancements, updates, cleanups and fixes"
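As a rough illustration of the 'perf stat' weak-group and 'perf script'/'perf
record' changes described above (the exact option spellings -- the ':W' group
modifier, '--per-event-dump' and '--user-regs' -- are assumptions inferred from
the commit titles, not verified against the merged tree):

    # weak group: counted as a group when schedulable, split up otherwise
    perf stat -e '{cycles,instructions,branches,branch-misses}:W' -a sleep 1

    # write each event's formatted samples to its own per-event dump file
    perf script --per-event-dump

    # sample user-space register values on x86
    perf record --user-regs=ip,sp,bp -e cycles:u -- ./workload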
* 'perf-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (139 commits)
kprobes: Don't spam the build log with deprecation warnings
arm/kprobes: Remove jprobe test case
arm/kprobes: Fix kretprobe test to check correct counter
perf srcline: Show correct function name for srcline of callchains
perf srcline: Fix memory leak in addr2inlines()
perf trace beauty kcmp: Beautify arguments
perf trace beauty: Implement pid_fd beautifier
tools include uapi: Grab a copy of linux/kcmp.h
perf callchain: Fix double mapping al->addr for children without self period
perf stat: Make --per-thread update shadow stats to show metrics
perf stat: Move the shadow stats scale computation in perf_stat__update_shadow_stats
perf tools: Add perf_data_file__write function
perf tools: Add struct perf_data_file
perf tools: Rename struct perf_data_file to perf_data
perf script: Print information about per-event-dump files
perf trace beauty prctl: Generate 'option' string table from kernel headers
tools include uapi: Grab a copy of linux/prctl.h
perf script: Allow creating per-event dump files
perf evsel: Restore evsel->priv as a tool private area
perf script: Use event_format__fprintf()
...
182 files changed, 8370 insertions, 2521 deletions
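The first and largest documentation change below, Documentation/kprobes.txt,
removes jprobes and points existing users at plain kprobes, kprobe events and
perf-probe instead. As a minimal sketch of that migration (this is not the
in-tree samples/kprobes/kprobe_example.c; the probed symbol and the x86-64
argument register are illustrative assumptions), a jprobe-style entry handler
becomes a kprobe pre_handler that reads the arguments from pt_regs:

/*
 * Hypothetical jprobes-to-kprobes conversion: register a kprobe on the
 * function entry and fetch the first argument from the saved registers
 * instead of mirroring the probed function's prototype.
 */
#include <linux/module.h>
#include <linux/kprobes.h>

static int handler_pre(struct kprobe *p, struct pt_regs *regs)
{
#ifdef CONFIG_X86_64
	/* first integer argument of the probed function (SysV x86-64 ABI) */
	pr_info("%s entered, arg0=0x%lx\n", p->symbol_name, regs->di);
#endif
	return 0;	/* 0: single-step the original instruction as usual */
}

static struct kprobe kp = {
	.symbol_name	= "do_sys_open",	/* example target, not from this merge */
	.pre_handler	= handler_pre,
};

static int __init kprobe_mig_init(void)
{
	return register_kprobe(&kp);
}

static void __exit kprobe_mig_exit(void)
{
	unregister_kprobe(&kp);
}

module_init(kprobe_mig_init);
module_exit(kprobe_mig_exit);
MODULE_LICENSE("GPL");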
diff --git a/Documentation/kprobes.txt b/Documentation/kprobes.txt index 2335715bf471..22208bf2386d 100644 --- a/Documentation/kprobes.txt +++ b/Documentation/kprobes.txt @@ -8,7 +8,7 @@ Kernel Probes (Kprobes) .. CONTENTS - 1. Concepts: Kprobes, Jprobes, Return Probes + 1. Concepts: Kprobes, and Return Probes 2. Architectures Supported 3. Configuring Kprobes 4. API Reference @@ -16,12 +16,12 @@ Kernel Probes (Kprobes) 6. Probe Overhead 7. TODO 8. Kprobes Example - 9. Jprobes Example - 10. Kretprobes Example + 9. Kretprobes Example + 10. Deprecated Features Appendix A: The kprobes debugfs interface Appendix B: The kprobes sysctl interface -Concepts: Kprobes, Jprobes, Return Probes +Concepts: Kprobes and Return Probes ========================================= Kprobes enables you to dynamically break into any kernel routine and @@ -32,12 +32,10 @@ routine to be invoked when the breakpoint is hit. .. [1] some parts of the kernel code can not be trapped, see :ref:`kprobes_blacklist`) -There are currently three types of probes: kprobes, jprobes, and -kretprobes (also called return probes). A kprobe can be inserted -on virtually any instruction in the kernel. A jprobe is inserted at -the entry to a kernel function, and provides convenient access to the -function's arguments. A return probe fires when a specified function -returns. +There are currently two types of probes: kprobes, and kretprobes +(also called return probes). A kprobe can be inserted on virtually +any instruction in the kernel. A return probe fires when a specified +function returns. In the typical case, Kprobes-based instrumentation is packaged as a kernel module. The module's init function installs ("registers") @@ -82,45 +80,6 @@ After the instruction is single-stepped, Kprobes executes the "post_handler," if any, that is associated with the kprobe. Execution then continues with the instruction following the probepoint. -How Does a Jprobe Work? ------------------------ - -A jprobe is implemented using a kprobe that is placed on a function's -entry point. It employs a simple mirroring principle to allow -seamless access to the probed function's arguments. The jprobe -handler routine should have the same signature (arg list and return -type) as the function being probed, and must always end by calling -the Kprobes function jprobe_return(). - -Here's how it works. When the probe is hit, Kprobes makes a copy of -the saved registers and a generous portion of the stack (see below). -Kprobes then points the saved instruction pointer at the jprobe's -handler routine, and returns from the trap. As a result, control -passes to the handler, which is presented with the same register and -stack contents as the probed function. When it is done, the handler -calls jprobe_return(), which traps again to restore the original stack -contents and processor state and switch to the probed function. - -By convention, the callee owns its arguments, so gcc may produce code -that unexpectedly modifies that portion of the stack. This is why -Kprobes saves a copy of the stack and restores it after the jprobe -handler has run. Up to MAX_STACK_SIZE bytes are copied -- e.g., -64 bytes on i386. - -Note that the probed function's args may be passed on the stack -or in registers. The jprobe will work in either case, so long as the -handler's prototype matches that of the probed function. 
- -Note that in some architectures (e.g.: arm64 and sparc64) the stack -copy is not done, as the actual location of stacked parameters may be -outside of a reasonable MAX_STACK_SIZE value and because that location -cannot be determined by the jprobes code. In this case the jprobes -user must be careful to make certain the calling signature of the -function does not cause parameters to be passed on the stack (e.g.: -more than eight function arguments, an argument of more than sixteen -bytes, or more than 64 bytes of argument data, depending on -architecture). - Return Probes ------------- @@ -245,8 +204,7 @@ Pre-optimization After preparing the detour buffer, Kprobes verifies that none of the following situations exist: -- The probe has either a break_handler (i.e., it's a jprobe) or a - post_handler. +- The probe has a post_handler. - Other instructions in the optimized region are probed. - The probe is disabled. @@ -331,7 +289,7 @@ rejects registering it, if the given address is in the blacklist. Architectures Supported ======================= -Kprobes, jprobes, and return probes are implemented on the following +Kprobes and return probes are implemented on the following architectures: - i386 (Supports jump optimization) @@ -446,27 +404,6 @@ architecture-specific trap number associated with the fault (e.g., on i386, 13 for a general protection fault or 14 for a page fault). Returns 1 if it successfully handled the exception. -register_jprobe ---------------- - -:: - - #include <linux/kprobes.h> - int register_jprobe(struct jprobe *jp) - -Sets a breakpoint at the address jp->kp.addr, which must be the address -of the first instruction of a function. When the breakpoint is hit, -Kprobes runs the handler whose address is jp->entry. - -The handler should have the same arg list and return type as the probed -function; and just before it returns, it must call jprobe_return(). -(The handler never actually returns, since jprobe_return() returns -control to Kprobes.) If the probed function is declared asmlinkage -or anything else that affects how args are passed, the handler's -declaration must match. - -register_jprobe() returns 0 on success, or a negative errno otherwise. - register_kretprobe ------------------ @@ -513,7 +450,6 @@ unregister_*probe #include <linux/kprobes.h> void unregister_kprobe(struct kprobe *kp); - void unregister_jprobe(struct jprobe *jp); void unregister_kretprobe(struct kretprobe *rp); Removes the specified probe. The unregister function can be called @@ -532,7 +468,6 @@ register_*probes #include <linux/kprobes.h> int register_kprobes(struct kprobe **kps, int num); int register_kretprobes(struct kretprobe **rps, int num); - int register_jprobes(struct jprobe **jps, int num); Registers each of the num probes in the specified array. If any error occurs during registration, all probes in the array, up to @@ -555,7 +490,6 @@ unregister_*probes #include <linux/kprobes.h> void unregister_kprobes(struct kprobe **kps, int num); void unregister_kretprobes(struct kretprobe **rps, int num); - void unregister_jprobes(struct jprobe **jps, int num); Removes each of the num probes in the specified array at once. @@ -574,7 +508,6 @@ disable_*probe #include <linux/kprobes.h> int disable_kprobe(struct kprobe *kp); int disable_kretprobe(struct kretprobe *rp); - int disable_jprobe(struct jprobe *jp); Temporarily disables the specified ``*probe``. You can enable it again by using enable_*probe(). You must specify the probe which has been registered. 
@@ -587,7 +520,6 @@ enable_*probe #include <linux/kprobes.h> int enable_kprobe(struct kprobe *kp); int enable_kretprobe(struct kretprobe *rp); - int enable_jprobe(struct jprobe *jp); Enables ``*probe`` which has been disabled by disable_*probe(). You must specify the probe which has been registered. @@ -595,12 +527,10 @@ the probe which has been registered. Kprobes Features and Limitations ================================ -Kprobes allows multiple probes at the same address. Currently, -however, there cannot be multiple jprobes on the same function at -the same time. Also, a probepoint for which there is a jprobe or -a post_handler cannot be optimized. So if you install a jprobe, -or a kprobe with a post_handler, at an optimized probepoint, the -probepoint will be unoptimized automatically. +Kprobes allows multiple probes at the same address. Also, +a probepoint for which there is a post_handler cannot be optimized. +So if you install a kprobe with a post_handler, at an optimized +probepoint, the probepoint will be unoptimized automatically. In general, you can install a probe anywhere in the kernel. In particular, you can probe interrupt handlers. Known exceptions @@ -662,7 +592,7 @@ We're unaware of other specific cases where this could be a problem. If, upon entry to or exit from a function, the CPU is running on a stack other than that of the current task, registering a return probe on that function may produce undesirable results. For this -reason, Kprobes doesn't support return probes (or kprobes or jprobes) +reason, Kprobes doesn't support return probes (or kprobes) on the x86_64 version of __switch_to(); the registration functions return -EINVAL. @@ -706,24 +636,24 @@ Probe Overhead On a typical CPU in use in 2005, a kprobe hit takes 0.5 to 1.0 microseconds to process. Specifically, a benchmark that hits the same probepoint repeatedly, firing a simple handler each time, reports 1-2 -million hits per second, depending on the architecture. A jprobe or -return-probe hit typically takes 50-75% longer than a kprobe hit. +million hits per second, depending on the architecture. A return-probe +hit typically takes 50-75% longer than a kprobe hit. When you have a return probe set on a function, adding a kprobe at the entry to that function adds essentially no overhead. 
Here are sample overhead figures (in usec) for different architectures:: - k = kprobe; j = jprobe; r = return probe; kr = kprobe + return probe - on same function; jr = jprobe + return probe on same function:: + k = kprobe; r = return probe; kr = kprobe + return probe + on same function i386: Intel Pentium M, 1495 MHz, 2957.31 bogomips - k = 0.57 usec; j = 1.00; r = 0.92; kr = 0.99; jr = 1.40 + k = 0.57 usec; r = 0.92; kr = 0.99 x86_64: AMD Opteron 246, 1994 MHz, 3971.48 bogomips - k = 0.49 usec; j = 0.76; r = 0.80; kr = 0.82; jr = 1.07 + k = 0.49 usec; r = 0.80; kr = 0.82 ppc64: POWER5 (gr), 1656 MHz (SMT disabled, 1 virtual CPU per physical CPU) - k = 0.77 usec; j = 1.31; r = 1.26; kr = 1.45; jr = 1.99 + k = 0.77 usec; r = 1.26; kr = 1.45 Optimized Probe Overhead ------------------------ @@ -755,11 +685,6 @@ Kprobes Example See samples/kprobes/kprobe_example.c -Jprobes Example -=============== - -See samples/kprobes/jprobe_example.c - Kretprobes Example ================== @@ -772,6 +697,37 @@ For additional information on Kprobes, refer to the following URLs: - http://www-users.cs.umn.edu/~boutcher/kprobes/ - http://www.linuxsymposium.org/2006/linuxsymposium_procv2.pdf (pages 101-115) +Deprecated Features +=================== + +Jprobes is now a deprecated feature. People who are depending on it should +migrate to other tracing features or use older kernels. Please consider to +migrate your tool to one of the following options: + +- Use trace-event to trace target function with arguments. + + trace-event is a low-overhead (and almost no visible overhead if it + is off) statically defined event interface. You can define new events + and trace it via ftrace or any other tracing tools. + + See the following urls: + + - https://lwn.net/Articles/379903/ + - https://lwn.net/Articles/381064/ + - https://lwn.net/Articles/383362/ + +- Use ftrace dynamic events (kprobe event) with perf-probe. + + If you build your kernel with debug info (CONFIG_DEBUG_INFO=y), you can + find which register/stack is assigned to which local variable or arguments + by using perf-probe and set up new event to trace it. + + See following documents: + + - Documentation/trace/kprobetrace.txt + - Documentation/trace/events.txt + - tools/perf/Documentation/perf-probe.txt + The kprobes debugfs interface ============================= @@ -783,14 +739,13 @@ under the /sys/kernel/debug/kprobes/ directory (assuming debugfs is mounted at / /sys/kernel/debug/kprobes/list: Lists all registered probes on the system:: c015d71a k vfs_read+0x0 - c011a316 j do_fork+0x0 c03dedc5 r tcp_v4_rcv+0x0 The first column provides the kernel address where the probe is inserted. -The second column identifies the type of probe (k - kprobe, r - kretprobe -and j - jprobe), while the third column specifies the symbol+offset of -the probe. If the probed function belongs to a module, the module name -is also specified. Following columns show probe status. If the probe is on +The second column identifies the type of probe (k - kprobe and r - kretprobe) +while the third column specifies the symbol+offset of the probe. +If the probed function belongs to a module, the module name is also +specified. Following columns show probe status. If the probe is on a virtual address that is no longer valid (module init sections, module virtual addresses that correspond to modules that've been unloaded), such probes are marked with [GONE]. 
If the probe is temporarily disabled, diff --git a/arch/Kconfig b/arch/Kconfig index 057370a0ac4e..400b9e1b2f27 100644 --- a/arch/Kconfig +++ b/arch/Kconfig @@ -91,7 +91,7 @@ config STATIC_KEYS_SELFTEST config OPTPROBES def_bool y depends on KPROBES && HAVE_OPTPROBES - depends on !PREEMPT + select TASKS_RCU if PREEMPT config KPROBES_ON_FTRACE def_bool y diff --git a/arch/arm/probes/kprobes/test-core.c b/arch/arm/probes/kprobes/test-core.c index 1c98a87786ca..9ed0129bed3c 100644 --- a/arch/arm/probes/kprobes/test-core.c +++ b/arch/arm/probes/kprobes/test-core.c @@ -227,7 +227,6 @@ static bool test_regs_ok; static int test_func_instance; static int pre_handler_called; static int post_handler_called; -static int jprobe_func_called; static int kretprobe_handler_called; static int tests_failed; @@ -370,50 +369,6 @@ static int test_kprobe(long (*func)(long, long)) return 0; } -static void __kprobes jprobe_func(long r0, long r1) -{ - jprobe_func_called = test_func_instance; - if (r0 == FUNC_ARG1 && r1 == FUNC_ARG2) - test_regs_ok = true; - jprobe_return(); -} - -static struct jprobe the_jprobe = { - .entry = jprobe_func, -}; - -static int test_jprobe(long (*func)(long, long)) -{ - int ret; - - the_jprobe.kp.addr = (kprobe_opcode_t *)func; - ret = register_jprobe(&the_jprobe); - if (ret < 0) { - pr_err("FAIL: register_jprobe failed with %d\n", ret); - return ret; - } - - ret = call_test_func(func, true); - - unregister_jprobe(&the_jprobe); - the_jprobe.kp.flags = 0; /* Clear disable flag to allow reuse */ - - if (!ret) - return -EINVAL; - if (jprobe_func_called != test_func_instance) { - pr_err("FAIL: jprobe handler function not called\n"); - return -EINVAL; - } - if (!call_test_func(func, false)) - return -EINVAL; - if (jprobe_func_called == test_func_instance) { - pr_err("FAIL: probe called after unregistering\n"); - return -EINVAL; - } - - return 0; -} - static int __kprobes kretprobe_handler(struct kretprobe_instance *ri, struct pt_regs *regs) { @@ -451,7 +406,7 @@ static int test_kretprobe(long (*func)(long, long)) } if (!call_test_func(func, false)) return -EINVAL; - if (jprobe_func_called == test_func_instance) { + if (kretprobe_handler_called == test_func_instance) { pr_err("FAIL: kretprobe called after unregistering\n"); return -EINVAL; } @@ -468,18 +423,6 @@ static int run_api_tests(long (*func)(long, long)) if (ret < 0) return ret; - pr_info(" jprobe\n"); - ret = test_jprobe(func); -#if defined(CONFIG_THUMB2_KERNEL) && !defined(MODULE) - if (ret == -EINVAL) { - pr_err("FAIL: Known longtime bug with jprobe on Thumb kernels\n"); - tests_failed = ret; - ret = 0; - } -#endif - if (ret < 0) - return ret; - pr_info(" kretprobe\n"); ret = test_kretprobe(func); if (ret < 0) diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c index 9fb9a1f1e47b..43445da30cea 100644 --- a/arch/x86/events/intel/core.c +++ b/arch/x86/events/intel/core.c @@ -2958,6 +2958,10 @@ static unsigned long intel_pmu_free_running_flags(struct perf_event *event) if (event->attr.use_clockid) flags &= ~PERF_SAMPLE_TIME; + if (!event->attr.exclude_kernel) + flags &= ~PERF_SAMPLE_REGS_USER; + if (event->attr.sample_regs_user & ~PEBS_REGS) + flags &= ~(PERF_SAMPLE_REGS_USER | PERF_SAMPLE_REGS_INTR); return flags; } diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h index 4196f81ec0e1..f7aaadf9331f 100644 --- a/arch/x86/events/perf_event.h +++ b/arch/x86/events/perf_event.h @@ -85,13 +85,15 @@ struct amd_nb { * Flags PEBS can handle without an PMI. 
* * TID can only be handled by flushing at context switch. + * REGS_USER can be handled for events limited to ring 3. * */ #define PEBS_FREERUNNING_FLAGS \ (PERF_SAMPLE_IP | PERF_SAMPLE_TID | PERF_SAMPLE_ADDR | \ PERF_SAMPLE_ID | PERF_SAMPLE_CPU | PERF_SAMPLE_STREAM_ID | \ PERF_SAMPLE_DATA_SRC | PERF_SAMPLE_IDENTIFIER | \ - PERF_SAMPLE_TRANSACTION | PERF_SAMPLE_PHYS_ADDR) + PERF_SAMPLE_TRANSACTION | PERF_SAMPLE_PHYS_ADDR | \ + PERF_SAMPLE_REGS_INTR | PERF_SAMPLE_REGS_USER) /* * A debug store configuration. @@ -110,6 +112,26 @@ struct debug_store { u64 pebs_event_reset[MAX_PEBS_EVENTS]; }; +#define PEBS_REGS \ + (PERF_REG_X86_AX | \ + PERF_REG_X86_BX | \ + PERF_REG_X86_CX | \ + PERF_REG_X86_DX | \ + PERF_REG_X86_DI | \ + PERF_REG_X86_SI | \ + PERF_REG_X86_SP | \ + PERF_REG_X86_BP | \ + PERF_REG_X86_IP | \ + PERF_REG_X86_FLAGS | \ + PERF_REG_X86_R8 | \ + PERF_REG_X86_R9 | \ + PERF_REG_X86_R10 | \ + PERF_REG_X86_R11 | \ + PERF_REG_X86_R12 | \ + PERF_REG_X86_R13 | \ + PERF_REG_X86_R14 | \ + PERF_REG_X86_R15) + /* * Per register state. */ diff --git a/arch/x86/include/asm/kprobes.h b/arch/x86/include/asm/kprobes.h index 6cf65437b5e5..9f2e3102e0bb 100644 --- a/arch/x86/include/asm/kprobes.h +++ b/arch/x86/include/asm/kprobes.h @@ -58,8 +58,8 @@ extern __visible kprobe_opcode_t optprobe_template_call[]; extern __visible kprobe_opcode_t optprobe_template_end[]; #define MAX_OPTIMIZED_LENGTH (MAX_INSN_SIZE + RELATIVE_ADDR_SIZE) #define MAX_OPTINSN_SIZE \ - (((unsigned long)&optprobe_template_end - \ - (unsigned long)&optprobe_template_entry) + \ + (((unsigned long)optprobe_template_end - \ + (unsigned long)optprobe_template_entry) + \ MAX_OPTIMIZED_LENGTH + RELATIVEJUMP_SIZE) extern const int kretprobe_blacklist_size; diff --git a/arch/x86/kernel/kprobes/common.h b/arch/x86/kernel/kprobes/common.h index 615105cf7d58..ae38dccf0c8f 100644 --- a/arch/x86/kernel/kprobes/common.h +++ b/arch/x86/kernel/kprobes/common.h @@ -85,11 +85,11 @@ extern unsigned long recover_probed_instruction(kprobe_opcode_t *buf, * Copy an instruction and adjust the displacement if the instruction * uses the %rip-relative addressing mode. 
*/ -extern int __copy_instruction(u8 *dest, u8 *src, struct insn *insn); +extern int __copy_instruction(u8 *dest, u8 *src, u8 *real, struct insn *insn); /* Generate a relative-jump/call instruction */ -extern void synthesize_reljump(void *from, void *to); -extern void synthesize_relcall(void *from, void *to); +extern void synthesize_reljump(void *dest, void *from, void *to); +extern void synthesize_relcall(void *dest, void *from, void *to); #ifdef CONFIG_OPTPROBES extern int setup_detour_execution(struct kprobe *p, struct pt_regs *regs, int reenter); diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c index 0742491cbb73..bd36f3c33cd0 100644 --- a/arch/x86/kernel/kprobes/core.c +++ b/arch/x86/kernel/kprobes/core.c @@ -119,29 +119,29 @@ struct kretprobe_blackpoint kretprobe_blacklist[] = { const int kretprobe_blacklist_size = ARRAY_SIZE(kretprobe_blacklist); static nokprobe_inline void -__synthesize_relative_insn(void *from, void *to, u8 op) +__synthesize_relative_insn(void *dest, void *from, void *to, u8 op) { struct __arch_relative_insn { u8 op; s32 raddr; } __packed *insn; - insn = (struct __arch_relative_insn *)from; + insn = (struct __arch_relative_insn *)dest; insn->raddr = (s32)((long)(to) - ((long)(from) + 5)); insn->op = op; } /* Insert a jump instruction at address 'from', which jumps to address 'to'.*/ -void synthesize_reljump(void *from, void *to) +void synthesize_reljump(void *dest, void *from, void *to) { - __synthesize_relative_insn(from, to, RELATIVEJUMP_OPCODE); + __synthesize_relative_insn(dest, from, to, RELATIVEJUMP_OPCODE); } NOKPROBE_SYMBOL(synthesize_reljump); /* Insert a call instruction at address 'from', which calls address 'to'.*/ -void synthesize_relcall(void *from, void *to) +void synthesize_relcall(void *dest, void *from, void *to) { - __synthesize_relative_insn(from, to, RELATIVECALL_OPCODE); + __synthesize_relative_insn(dest, from, to, RELATIVECALL_OPCODE); } NOKPROBE_SYMBOL(synthesize_relcall); @@ -346,10 +346,11 @@ static int is_IF_modifier(kprobe_opcode_t *insn) /* * Copy an instruction with recovering modified instruction by kprobes * and adjust the displacement if the instruction uses the %rip-relative - * addressing mode. + * addressing mode. Note that since @real will be the final place of copied + * instruction, displacement must be adjust by @real, not @dest. * This returns the length of copied instruction, or 0 if it has an error. */ -int __copy_instruction(u8 *dest, u8 *src, struct insn *insn) +int __copy_instruction(u8 *dest, u8 *src, u8 *real, struct insn *insn) { kprobe_opcode_t buf[MAX_INSN_SIZE]; unsigned long recovered_insn = @@ -387,11 +388,11 @@ int __copy_instruction(u8 *dest, u8 *src, struct insn *insn) * have given. 
*/ newdisp = (u8 *) src + (s64) insn->displacement.value - - (u8 *) dest; + - (u8 *) real; if ((s64) (s32) newdisp != newdisp) { pr_err("Kprobes error: new displacement does not fit into s32 (%llx)\n", newdisp); pr_err("\tSrc: %p, Dest: %p, old disp: %x\n", - src, dest, insn->displacement.value); + src, real, insn->displacement.value); return 0; } disp = (u8 *) dest + insn_offset_displacement(insn); @@ -402,20 +403,38 @@ int __copy_instruction(u8 *dest, u8 *src, struct insn *insn) } /* Prepare reljump right after instruction to boost */ -static void prepare_boost(struct kprobe *p, struct insn *insn) +static int prepare_boost(kprobe_opcode_t *buf, struct kprobe *p, + struct insn *insn) { + int len = insn->length; + if (can_boost(insn, p->addr) && - MAX_INSN_SIZE - insn->length >= RELATIVEJUMP_SIZE) { + MAX_INSN_SIZE - len >= RELATIVEJUMP_SIZE) { /* * These instructions can be executed directly if it * jumps back to correct address. */ - synthesize_reljump(p->ainsn.insn + insn->length, + synthesize_reljump(buf + len, p->ainsn.insn + len, p->addr + insn->length); + len += RELATIVEJUMP_SIZE; p->ainsn.boostable = true; } else { p->ainsn.boostable = false; } + + return len; +} + +/* Make page to RO mode when allocate it */ +void *alloc_insn_page(void) +{ + void *page; + + page = module_alloc(PAGE_SIZE); + if (page) + set_memory_ro((unsigned long)page & PAGE_MASK, 1); + + return page; } /* Recover page to RW mode before releasing it */ @@ -429,12 +448,11 @@ void free_insn_page(void *page) static int arch_copy_kprobe(struct kprobe *p) { struct insn insn; + kprobe_opcode_t buf[MAX_INSN_SIZE]; int len; - set_memory_rw((unsigned long)p->ainsn.insn & PAGE_MASK, 1); - /* Copy an instruction with recovering if other optprobe modifies it.*/ - len = __copy_instruction(p->ainsn.insn, p->addr, &insn); + len = __copy_instruction(buf, p->addr, p->ainsn.insn, &insn); if (!len) return -EINVAL; @@ -442,15 +460,16 @@ static int arch_copy_kprobe(struct kprobe *p) * __copy_instruction can modify the displacement of the instruction, * but it doesn't affect boostable check. 
*/ - prepare_boost(p, &insn); - - set_memory_ro((unsigned long)p->ainsn.insn & PAGE_MASK, 1); + len = prepare_boost(buf, p, &insn); /* Check whether the instruction modifies Interrupt Flag or not */ - p->ainsn.if_modifier = is_IF_modifier(p->ainsn.insn); + p->ainsn.if_modifier = is_IF_modifier(buf); /* Also, displacement change doesn't affect the first byte */ - p->opcode = p->ainsn.insn[0]; + p->opcode = buf[0]; + + /* OK, write back the instruction(s) into ROX insn buffer */ + text_poke(p->ainsn.insn, buf, len); return 0; } diff --git a/arch/x86/kernel/kprobes/ftrace.c b/arch/x86/kernel/kprobes/ftrace.c index 041f7b6dfa0f..8dc0161cec8f 100644 --- a/arch/x86/kernel/kprobes/ftrace.c +++ b/arch/x86/kernel/kprobes/ftrace.c @@ -26,7 +26,7 @@ #include "common.h" static nokprobe_inline -int __skip_singlestep(struct kprobe *p, struct pt_regs *regs, +void __skip_singlestep(struct kprobe *p, struct pt_regs *regs, struct kprobe_ctlblk *kcb, unsigned long orig_ip) { /* @@ -41,33 +41,31 @@ int __skip_singlestep(struct kprobe *p, struct pt_regs *regs, __this_cpu_write(current_kprobe, NULL); if (orig_ip) regs->ip = orig_ip; - return 1; } int skip_singlestep(struct kprobe *p, struct pt_regs *regs, struct kprobe_ctlblk *kcb) { - if (kprobe_ftrace(p)) - return __skip_singlestep(p, regs, kcb, 0); - else - return 0; + if (kprobe_ftrace(p)) { + __skip_singlestep(p, regs, kcb, 0); + preempt_enable_no_resched(); + return 1; + } + return 0; } NOKPROBE_SYMBOL(skip_singlestep); -/* Ftrace callback handler for kprobes */ +/* Ftrace callback handler for kprobes -- called under preepmt disabed */ void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip, struct ftrace_ops *ops, struct pt_regs *regs) { struct kprobe *p; struct kprobe_ctlblk *kcb; - unsigned long flags; - - /* Disable irq for emulating a breakpoint and avoiding preempt */ - local_irq_save(flags); + /* Preempt is disabled by ftrace */ p = get_kprobe((kprobe_opcode_t *)ip); if (unlikely(!p) || kprobe_disabled(p)) - goto end; + return; kcb = get_kprobe_ctlblk(); if (kprobe_running()) { @@ -77,17 +75,19 @@ void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip, /* Kprobe handler expects regs->ip = ip + 1 as breakpoint hit */ regs->ip = ip + sizeof(kprobe_opcode_t); + /* To emulate trap based kprobes, preempt_disable here */ + preempt_disable(); __this_cpu_write(current_kprobe, p); kcb->kprobe_status = KPROBE_HIT_ACTIVE; - if (!p->pre_handler || !p->pre_handler(p, regs)) + if (!p->pre_handler || !p->pre_handler(p, regs)) { __skip_singlestep(p, regs, kcb, orig_ip); + preempt_enable_no_resched(); + } /* * If pre_handler returns !0, it sets regs->ip and - * resets current kprobe. + * resets current kprobe, and keep preempt count +1. 
*/ } -end: - local_irq_restore(flags); } NOKPROBE_SYMBOL(kprobe_ftrace_handler); diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c index 4f98aad38237..e941136e24d8 100644 --- a/arch/x86/kernel/kprobes/opt.c +++ b/arch/x86/kernel/kprobes/opt.c @@ -142,11 +142,11 @@ void optprobe_template_func(void); STACK_FRAME_NON_STANDARD(optprobe_template_func); #define TMPL_MOVE_IDX \ - ((long)&optprobe_template_val - (long)&optprobe_template_entry) + ((long)optprobe_template_val - (long)optprobe_template_entry) #define TMPL_CALL_IDX \ - ((long)&optprobe_template_call - (long)&optprobe_template_entry) + ((long)optprobe_template_call - (long)optprobe_template_entry) #define TMPL_END_IDX \ - ((long)&optprobe_template_end - (long)&optprobe_template_entry) + ((long)optprobe_template_end - (long)optprobe_template_entry) #define INT3_SIZE sizeof(kprobe_opcode_t) @@ -154,17 +154,15 @@ STACK_FRAME_NON_STANDARD(optprobe_template_func); static void optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs) { - struct kprobe_ctlblk *kcb = get_kprobe_ctlblk(); - unsigned long flags; - /* This is possible if op is under delayed unoptimizing */ if (kprobe_disabled(&op->kp)) return; - local_irq_save(flags); + preempt_disable(); if (kprobe_running()) { kprobes_inc_nmissed_count(&op->kp); } else { + struct kprobe_ctlblk *kcb = get_kprobe_ctlblk(); /* Save skipped registers */ #ifdef CONFIG_X86_64 regs->cs = __KERNEL_CS; @@ -180,17 +178,17 @@ optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs) opt_pre_handler(&op->kp, regs); __this_cpu_write(current_kprobe, NULL); } - local_irq_restore(flags); + preempt_enable_no_resched(); } NOKPROBE_SYMBOL(optimized_callback); -static int copy_optimized_instructions(u8 *dest, u8 *src) +static int copy_optimized_instructions(u8 *dest, u8 *src, u8 *real) { struct insn insn; int len = 0, ret; while (len < RELATIVEJUMP_SIZE) { - ret = __copy_instruction(dest + len, src + len, &insn); + ret = __copy_instruction(dest + len, src + len, real, &insn); if (!ret || !can_boost(&insn, src + len)) return -EINVAL; len += ret; @@ -343,57 +341,66 @@ void arch_remove_optimized_kprobe(struct optimized_kprobe *op) int arch_prepare_optimized_kprobe(struct optimized_kprobe *op, struct kprobe *__unused) { - u8 *buf; - int ret; + u8 *buf = NULL, *slot; + int ret, len; long rel; if (!can_optimize((unsigned long)op->kp.addr)) return -EILSEQ; - op->optinsn.insn = get_optinsn_slot(); - if (!op->optinsn.insn) + buf = kzalloc(MAX_OPTINSN_SIZE, GFP_KERNEL); + if (!buf) return -ENOMEM; + op->optinsn.insn = slot = get_optinsn_slot(); + if (!slot) { + ret = -ENOMEM; + goto out; + } + /* * Verify if the address gap is in 2GB range, because this uses * a relative jump. 
*/ - rel = (long)op->optinsn.insn - (long)op->kp.addr + RELATIVEJUMP_SIZE; + rel = (long)slot - (long)op->kp.addr + RELATIVEJUMP_SIZE; if (abs(rel) > 0x7fffffff) { - __arch_remove_optimized_kprobe(op, 0); - return -ERANGE; + ret = -ERANGE; + goto err; } - buf = (u8 *)op->optinsn.insn; - set_memory_rw((unsigned long)buf & PAGE_MASK, 1); + /* Copy arch-dep-instance from template */ + memcpy(buf, optprobe_template_entry, TMPL_END_IDX); /* Copy instructions into the out-of-line buffer */ - ret = copy_optimized_instructions(buf + TMPL_END_IDX, op->kp.addr); - if (ret < 0) { - __arch_remove_optimized_kprobe(op, 0); - return ret; - } + ret = copy_optimized_instructions(buf + TMPL_END_IDX, op->kp.addr, + slot + TMPL_END_IDX); + if (ret < 0) + goto err; op->optinsn.size = ret; - - /* Copy arch-dep-instance from template */ - memcpy(buf, &optprobe_template_entry, TMPL_END_IDX); + len = TMPL_END_IDX + op->optinsn.size; /* Set probe information */ synthesize_set_arg1(buf + TMPL_MOVE_IDX, (unsigned long)op); /* Set probe function call */ - synthesize_relcall(buf + TMPL_CALL_IDX, optimized_callback); + synthesize_relcall(buf + TMPL_CALL_IDX, + slot + TMPL_CALL_IDX, optimized_callback); /* Set returning jmp instruction at the tail of out-of-line buffer */ - synthesize_reljump(buf + TMPL_END_IDX + op->optinsn.size, + synthesize_reljump(buf + len, slot + len, (u8 *)op->kp.addr + op->optinsn.size); - - set_memory_ro((unsigned long)buf & PAGE_MASK, 1); - - flush_icache_range((unsigned long) buf, - (unsigned long) buf + TMPL_END_IDX + - op->optinsn.size + RELATIVEJUMP_SIZE); - return 0; + len += RELATIVEJUMP_SIZE; + + /* We have to use text_poke for instuction buffer because it is RO */ + text_poke(slot, buf, len); + ret = 0; +out: + kfree(buf); + return ret; + +err: + __arch_remove_optimized_kprobe(op, 0); + goto out; } /* diff --git a/drivers/misc/lkdtm_core.c b/drivers/misc/lkdtm_core.c index 981b3ef71e47..ed7f0c61c59a 100644 --- a/drivers/misc/lkdtm_core.c +++ b/drivers/misc/lkdtm_core.c @@ -56,122 +56,54 @@ static ssize_t direct_entry(struct file *f, const char __user *user_buf, size_t count, loff_t *off); #ifdef CONFIG_KPROBES -static void lkdtm_handler(void); +static int lkdtm_kprobe_handler(struct kprobe *kp, struct pt_regs *regs); static ssize_t lkdtm_debugfs_entry(struct file *f, const char __user *user_buf, size_t count, loff_t *off); - - -/* jprobe entry point handlers. 
*/ -static unsigned int jp_do_irq(unsigned int irq) -{ - lkdtm_handler(); - jprobe_return(); - return 0; -} - -static irqreturn_t jp_handle_irq_event(unsigned int irq, - struct irqaction *action) -{ - lkdtm_handler(); - jprobe_return(); - return 0; -} - -static void jp_tasklet_action(struct softirq_action *a) -{ - lkdtm_handler(); - jprobe_return(); -} - -static void jp_ll_rw_block(int rw, int nr, struct buffer_head *bhs[]) -{ - lkdtm_handler(); - jprobe_return(); -} - -struct scan_control; - -static unsigned long jp_shrink_inactive_list(unsigned long max_scan, - struct zone *zone, - struct scan_control *sc) -{ - lkdtm_handler(); - jprobe_return(); - return 0; -} - -static int jp_hrtimer_start(struct hrtimer *timer, ktime_t tim, - const enum hrtimer_mode mode) -{ - lkdtm_handler(); - jprobe_return(); - return 0; -} - -static int jp_scsi_dispatch_cmd(struct scsi_cmnd *cmd) -{ - lkdtm_handler(); - jprobe_return(); - return 0; -} - -# ifdef CONFIG_IDE -static int jp_generic_ide_ioctl(ide_drive_t *drive, struct file *file, - struct block_device *bdev, unsigned int cmd, - unsigned long arg) -{ - lkdtm_handler(); - jprobe_return(); - return 0; -} -# endif +# define CRASHPOINT_KPROBE(_symbol) \ + .kprobe = { \ + .symbol_name = (_symbol), \ + .pre_handler = lkdtm_kprobe_handler, \ + }, +# define CRASHPOINT_WRITE(_symbol) \ + (_symbol) ? lkdtm_debugfs_entry : direct_entry +#else +# define CRASHPOINT_KPROBE(_symbol) +# define CRASHPOINT_WRITE(_symbol) direct_entry #endif /* Crash points */ struct crashpoint { const char *name; const struct file_operations fops; - struct jprobe jprobe; + struct kprobe kprobe; }; -#define CRASHPOINT(_name, _write, _symbol, _entry) \ +#define CRASHPOINT(_name, _symbol) \ { \ .name = _name, \ .fops = { \ .read = lkdtm_debugfs_read, \ .llseek = generic_file_llseek, \ .open = lkdtm_debugfs_open, \ - .write = _write, \ - }, \ - .jprobe = { \ - .kp.symbol_name = _symbol, \ - .entry = (kprobe_opcode_t *)_entry, \ + .write = CRASHPOINT_WRITE(_symbol) \ }, \ + CRASHPOINT_KPROBE(_symbol) \ } /* Define the possible places where we can trigger a crash point. */ -struct crashpoint crashpoints[] = { - CRASHPOINT("DIRECT", direct_entry, - NULL, NULL), +static struct crashpoint crashpoints[] = { + CRASHPOINT("DIRECT", NULL), #ifdef CONFIG_KPROBES - CRASHPOINT("INT_HARDWARE_ENTRY", lkdtm_debugfs_entry, - "do_IRQ", jp_do_irq), - CRASHPOINT("INT_HW_IRQ_EN", lkdtm_debugfs_entry, - "handle_IRQ_event", jp_handle_irq_event), - CRASHPOINT("INT_TASKLET_ENTRY", lkdtm_debugfs_entry, - "tasklet_action", jp_tasklet_action), - CRASHPOINT("FS_DEVRW", lkdtm_debugfs_entry, - "ll_rw_block", jp_ll_rw_block), - CRASHPOINT("MEM_SWAPOUT", lkdtm_debugfs_entry, - "shrink_inactive_list", jp_shrink_inactive_list), - CRASHPOINT("TIMERADD", lkdtm_debugfs_entry, - "hrtimer_start", jp_hrtimer_start), - CRASHPOINT("SCSI_DISPATCH_CMD", lkdtm_debugfs_entry, - "scsi_dispatch_cmd", jp_scsi_dispatch_cmd), + CRASHPOINT("INT_HARDWARE_ENTRY", "do_IRQ"), + CRASHPOINT("INT_HW_IRQ_EN", "handle_IRQ_event"), + CRASHPOINT("INT_TASKLET_ENTRY", "tasklet_action"), + CRASHPOINT("FS_DEVRW", "ll_rw_block"), + CRASHPOINT("MEM_SWAPOUT", "shrink_inactive_list"), + CRASHPOINT("TIMERADD", "hrtimer_start"), + CRASHPOINT("SCSI_DISPATCH_CMD", "scsi_dispatch_cmd"), # ifdef CONFIG_IDE - CRASHPOINT("IDE_CORE_CP", lkdtm_debugfs_entry, - "generic_ide_ioctl", jp_generic_ide_ioctl), + CRASHPOINT("IDE_CORE_CP", "generic_ide_ioctl"), # endif #endif }; @@ -254,8 +186,8 @@ struct crashtype crashtypes[] = { }; -/* Global jprobe entry and crashtype. 
*/ -static struct jprobe *lkdtm_jprobe; +/* Global kprobe entry and crashtype. */ +static struct kprobe *lkdtm_kprobe; struct crashpoint *lkdtm_crashpoint; struct crashtype *lkdtm_crashtype; @@ -298,7 +230,8 @@ static struct crashtype *find_crashtype(const char *name) */ static noinline void lkdtm_do_action(struct crashtype *crashtype) { - BUG_ON(!crashtype || !crashtype->func); + if (WARN_ON(!crashtype || !crashtype->func)) + return; crashtype->func(); } @@ -308,22 +241,22 @@ static int lkdtm_register_cpoint(struct crashpoint *crashpoint, int ret; /* If this doesn't have a symbol, just call immediately. */ - if (!crashpoint->jprobe.kp.symbol_name) { + if (!crashpoint->kprobe.symbol_name) { lkdtm_do_action(crashtype); return 0; } - if (lkdtm_jprobe != NULL) - unregister_jprobe(lkdtm_jprobe); + if (lkdtm_kprobe != NULL) + unregister_kprobe(lkdtm_kprobe); lkdtm_crashpoint = crashpoint; lkdtm_crashtype = crashtype; - lkdtm_jprobe = &crashpoint->jprobe; - ret = register_jprobe(lkdtm_jprobe); + lkdtm_kprobe = &crashpoint->kprobe; + ret = register_kprobe(lkdtm_kprobe); if (ret < 0) { - pr_info("Couldn't register jprobe %s\n", - crashpoint->jprobe.kp.symbol_name); - lkdtm_jprobe = NULL; + pr_info("Couldn't register kprobe %s\n", + crashpoint->kprobe.symbol_name); + lkdtm_kprobe = NULL; lkdtm_crashpoint = NULL; lkdtm_crashtype = NULL; } @@ -336,13 +269,14 @@ static int lkdtm_register_cpoint(struct crashpoint *crashpoint, static int crash_count = DEFAULT_COUNT; static DEFINE_SPINLOCK(crash_count_lock); -/* Called by jprobe entry points. */ -static void lkdtm_handler(void) +/* Called by kprobe entry points. */ +static int lkdtm_kprobe_handler(struct kprobe *kp, struct pt_regs *regs) { unsigned long flags; bool do_it = false; - BUG_ON(!lkdtm_crashpoint || !lkdtm_crashtype); + if (WARN_ON(!lkdtm_crashpoint || !lkdtm_crashtype)) + return 0; spin_lock_irqsave(&crash_count_lock, flags); crash_count--; @@ -357,6 +291,8 @@ static void lkdtm_handler(void) if (do_it) lkdtm_do_action(lkdtm_crashtype); + + return 0; } static ssize_t lkdtm_debugfs_entry(struct file *f, @@ -556,8 +492,8 @@ static void __exit lkdtm_module_exit(void) /* Handle test-specific clean-up. 
*/ lkdtm_usercopy_exit(); - if (lkdtm_jprobe != NULL) - unregister_jprobe(lkdtm_jprobe); + if (lkdtm_kprobe != NULL) + unregister_kprobe(lkdtm_kprobe); pr_info("Crash point unregistered\n"); } diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h index bd2684700b74..9440a2fc8893 100644 --- a/include/linux/kprobes.h +++ b/include/linux/kprobes.h @@ -391,10 +391,6 @@ int register_kprobes(struct kprobe **kps, int num); void unregister_kprobes(struct kprobe **kps, int num); int setjmp_pre_handler(struct kprobe *, struct pt_regs *); int longjmp_break_handler(struct kprobe *, struct pt_regs *); -int register_jprobe(struct jprobe *p); -void unregister_jprobe(struct jprobe *p); -int register_jprobes(struct jprobe **jps, int num); -void unregister_jprobes(struct jprobe **jps, int num); void jprobe_return(void); unsigned long arch_deref_entry_point(void *); @@ -443,20 +439,6 @@ static inline void unregister_kprobe(struct kprobe *p) static inline void unregister_kprobes(struct kprobe **kps, int num) { } -static inline int register_jprobe(struct jprobe *p) -{ - return -ENOSYS; -} -static inline int register_jprobes(struct jprobe **jps, int num) -{ - return -ENOSYS; -} -static inline void unregister_jprobe(struct jprobe *p) -{ -} -static inline void unregister_jprobes(struct jprobe **jps, int num) -{ -} static inline void jprobe_return(void) { } @@ -486,6 +468,20 @@ static inline int enable_kprobe(struct kprobe *kp) return -ENOSYS; } #endif /* CONFIG_KPROBES */ +static inline int register_jprobe(struct jprobe *p) +{ + return -ENOSYS; +} +static inline int register_jprobes(struct jprobe **jps, int num) +{ + return -ENOSYS; +} +static inline void unregister_jprobe(struct jprobe *p) +{ +} +static inline void unregister_jprobes(struct jprobe **jps, int num) +{ +} static inline int disable_kretprobe(struct kretprobe *rp) { return disable_kprobe(&rp->kp); @@ -496,11 +492,11 @@ static inline int enable_kretprobe(struct kretprobe *rp) } static inline int disable_jprobe(struct jprobe *jp) { - return disable_kprobe(&jp->kp); + return -ENOSYS; } static inline int enable_jprobe(struct jprobe *jp) { - return enable_kprobe(&jp->kp); + return -ENOSYS; } #ifndef CONFIG_KPROBES diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h index 8e22f24ded6a..874b71a70058 100644 --- a/include/linux/perf_event.h +++ b/include/linux/perf_event.h @@ -485,9 +485,9 @@ struct perf_addr_filters_head { }; /** - * enum perf_event_active_state - the states of a event + * enum perf_event_state - the states of a event */ -enum perf_event_active_state { +enum perf_event_state { PERF_EVENT_STATE_DEAD = -4, PERF_EVENT_STATE_EXIT = -3, PERF_EVENT_STATE_ERROR = -2, @@ -578,7 +578,7 @@ struct perf_event { struct pmu *pmu; void *pmu_private; - enum perf_event_active_state state; + enum perf_event_state state; unsigned int attach_state; local64_t count; atomic64_t child_count; @@ -588,26 +588,10 @@ struct perf_event { * has been enabled (i.e. eligible to run, and the task has * been scheduled in, if this is a per-task event) * and running (scheduled onto the CPU), respectively. - * - * They are computed from tstamp_enabled, tstamp_running and - * tstamp_stopped when the event is in INACTIVE or ACTIVE state. */ u64 total_time_enabled; u64 total_time_running; - - /* - * These are timestamps used for computing total_time_enabled - * and total_time_running when the event is in INACTIVE or - * ACTIVE state, measured in nanoseconds from an arbitrary point - * in time. 
- * tstamp_enabled: the notional time when the event was enabled - * tstamp_running: the notional time when the event was scheduled on - * tstamp_stopped: in INACTIVE state, the notional time when the - * event was scheduled off. - */ - u64 tstamp_enabled; - u64 tstamp_running; - u64 tstamp_stopped; + u64 tstamp; /* * timestamp shadows the actual context timing but it can @@ -699,7 +683,6 @@ struct perf_event { #ifdef CONFIG_CGROUP_PERF struct perf_cgroup *cgrp; /* cgroup event is attach to */ - int cgrp_defer_enabled; #endif struct list_head sb_list; @@ -806,6 +789,7 @@ struct perf_output_handle { struct bpf_perf_event_data_kern { struct pt_regs *regs; struct perf_sample_data *data; + struct perf_event *event; }; #ifdef CONFIG_CGROUP_PERF @@ -884,7 +868,8 @@ perf_event_create_kernel_counter(struct perf_event_attr *attr, void *context); extern void perf_pmu_migrate_context(struct pmu *pmu, int src_cpu, int dst_cpu); -int perf_event_read_local(struct perf_event *event, u64 *value); +int perf_event_read_local(struct perf_event *event, u64 *value, + u64 *enabled, u64 *running); extern u64 perf_event_read_value(struct perf_event *event, u64 *enabled, u64 *running); @@ -1286,7 +1271,8 @@ static inline const struct perf_event_attr *perf_event_attrs(struct perf_event * { return ERR_PTR(-EINVAL); } -static inline int perf_event_read_local(struct perf_event *event, u64 *value) +static inline int perf_event_read_local(struct perf_event *event, u64 *value, + u64 *enabled, u64 *running) { return -EINVAL; } diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c index e2636737b69b..c4b9ab01bba5 100644 --- a/kernel/bpf/arraymap.c +++ b/kernel/bpf/arraymap.c @@ -492,7 +492,7 @@ static void *perf_event_fd_array_get_ptr(struct bpf_map *map, ee = ERR_PTR(-EOPNOTSUPP); event = perf_file->private_data; - if (perf_event_read_local(event, &value) == -EOPNOTSUPP) + if (perf_event_read_local(event, &value, NULL, NULL) == -EOPNOTSUPP) goto err_out; ee = bpf_event_entry_gen(perf_file, map_file); diff --git a/kernel/events/core.c b/kernel/events/core.c index c298847d4b85..4c39c05e029a 100644 --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -582,6 +582,88 @@ static inline u64 perf_event_clock(struct perf_event *event) return event->clock(); } +/* + * State based event timekeeping... + * + * The basic idea is to use event->state to determine which (if any) time + * fields to increment with the current delta. This means we only need to + * update timestamps when we change state or when they are explicitly requested + * (read). + * + * Event groups make things a little more complicated, but not terribly so. The + * rules for a group are that if the group leader is OFF the entire group is + * OFF, irrespecive of what the group member states are. This results in + * __perf_effective_state(). + * + * A futher ramification is that when a group leader flips between OFF and + * !OFF, we need to update all group member times. + * + * + * NOTE: perf_event_time() is based on the (cgroup) context time, and thus we + * need to make sure the relevant context time is updated before we try and + * update our timestamps. 
+ */ + +static __always_inline enum perf_event_state +__perf_effective_state(struct perf_event *event) +{ + struct perf_event *leader = event->group_leader; + + if (leader->state <= PERF_EVENT_STATE_OFF) + return leader->state; + + return event->state; +} + +static __always_inline void +__perf_update_times(struct perf_event *event, u64 now, u64 *enabled, u64 *running) +{ + enum perf_event_state state = __perf_effective_state(event); + u64 delta = now - event->tstamp; + + *enabled = event->total_time_enabled; + if (state >= PERF_EVENT_STATE_INACTIVE) + *enabled += delta; + + *running = event->total_time_running; + if (state >= PERF_EVENT_STATE_ACTIVE) + *running += delta; +} + +static void perf_event_update_time(struct perf_event *event) +{ + u64 now = perf_event_time(event); + + __perf_update_times(event, now, &event->total_time_enabled, + &event->total_time_running); + event->tstamp = now; +} + +static void perf_event_update_sibling_time(struct perf_event *leader) +{ + struct perf_event *sibling; + + list_for_each_entry(sibling, &leader->sibling_list, group_entry) + perf_event_update_time(sibling); +} + +static void +perf_event_set_state(struct perf_event *event, enum perf_event_state state) +{ + if (event->state == state) + return; + + perf_event_update_time(event); + /* + * If a group leader gets enabled/disabled all its siblings + * are affected too. + */ + if ((event->state < 0) ^ (state < 0)) + perf_event_update_sibling_time(event); + + WRITE_ONCE(event->state, state); +} + #ifdef CONFIG_CGROUP_PERF static inline bool @@ -841,40 +923,6 @@ perf_cgroup_set_shadow_time(struct perf_event *event, u64 now) event->shadow_ctx_time = now - t->timestamp; } -static inline void -perf_cgroup_defer_enabled(struct perf_event *event) -{ - /* - * when the current task's perf cgroup does not match - * the event's, we need to remember to call the - * perf_mark_enable() function the first time a task with - * a matching perf cgroup is scheduled in. - */ - if (is_cgroup_event(event) && !perf_cgroup_match(event)) - event->cgrp_defer_enabled = 1; -} - -static inline void -perf_cgroup_mark_enabled(struct perf_event *event, - struct perf_event_context *ctx) -{ - struct perf_event *sub; - u64 tstamp = perf_event_time(event); - - if (!event->cgrp_defer_enabled) - return; - - event->cgrp_defer_enabled = 0; - - event->tstamp_enabled = tstamp - event->total_time_enabled; - list_for_each_entry(sub, &event->sibling_list, group_entry) { - if (sub->state >= PERF_EVENT_STATE_INACTIVE) { - sub->tstamp_enabled = tstamp - sub->total_time_enabled; - sub->cgrp_defer_enabled = 0; - } - } -} - /* * Update cpuctx->cgrp so that it is set when first cgroup event is added and * cleared when last cgroup event is removed. @@ -975,17 +1023,6 @@ static inline u64 perf_cgroup_event_time(struct perf_event *event) } static inline void -perf_cgroup_defer_enabled(struct perf_event *event) -{ -} - -static inline void -perf_cgroup_mark_enabled(struct perf_event *event, - struct perf_event_context *ctx) -{ -} - -static inline void list_update_cgroup_event(struct perf_event *event, struct perf_event_context *ctx, bool add) { @@ -1398,60 +1435,6 @@ static u64 perf_event_time(struct perf_event *event) return ctx ? ctx->time : 0; } -/* - * Update the total_time_enabled and total_time_running fields for a event. 
- */ -static void update_event_times(struct perf_event *event) -{ - struct perf_event_context *ctx = event->ctx; - u64 run_end; - - lockdep_assert_held(&ctx->lock); - - if (event->state < PERF_EVENT_STATE_INACTIVE || - event->group_leader->state < PERF_EVENT_STATE_INACTIVE) - return; - - /* - * in cgroup mode, time_enabled represents - * the time the event was enabled AND active - * tasks were in the monitored cgroup. This is - * independent of the activity of the context as - * there may be a mix of cgroup and non-cgroup events. - * - * That is why we treat cgroup events differently - * here. - */ - if (is_cgroup_event(event)) - run_end = perf_cgroup_event_time(event); - else if (ctx->is_active) - run_end = ctx->time; - else - run_end = event->tstamp_stopped; - - event->total_time_enabled = run_end - event->tstamp_enabled; - - if (event->state == PERF_EVENT_STATE_INACTIVE) - run_end = event->tstamp_stopped; - else - run_end = perf_event_time(event); - - event->total_time_running = run_end - event->tstamp_running; - -} - -/* - * Update total_time_enabled and total_time_running for all events in a group. - */ -static void update_group_times(struct perf_event *leader) -{ - struct perf_event *event; - - update_event_times(leader); - list_for_each_entry(event, &leader->sibling_list, group_entry) - update_event_times(event); -} - static enum event_type_t get_event_type(struct perf_event *event) { struct perf_event_context *ctx = event->ctx; @@ -1494,6 +1477,8 @@ list_add_event(struct perf_event *event, struct perf_event_context *ctx) WARN_ON_ONCE(event->attach_state & PERF_ATTACH_CONTEXT); event->attach_state |= PERF_ATTACH_CONTEXT; + event->tstamp = perf_event_time(event); + /* * If we're a stand alone event or group leader, we go to the context * list, group events are kept attached to the group so that @@ -1701,8 +1686,6 @@ list_del_event(struct perf_event *event, struct perf_event_context *ctx) if (event->group_leader == event) list_del_init(&event->group_entry); - update_group_times(event); - /* * If event was in error state, then keep it * that way, otherwise bogus counts will be @@ -1711,7 +1694,7 @@ list_del_event(struct perf_event *event, struct perf_event_context *ctx) * of the event */ if (event->state > PERF_EVENT_STATE_OFF) - event->state = PERF_EVENT_STATE_OFF; + perf_event_set_state(event, PERF_EVENT_STATE_OFF); ctx->generation++; } @@ -1810,38 +1793,24 @@ event_sched_out(struct perf_event *event, struct perf_cpu_context *cpuctx, struct perf_event_context *ctx) { - u64 tstamp = perf_event_time(event); - u64 delta; + enum perf_event_state state = PERF_EVENT_STATE_INACTIVE; WARN_ON_ONCE(event->ctx != ctx); lockdep_assert_held(&ctx->lock); - /* - * An event which could not be activated because of - * filter mismatch still needs to have its timings - * maintained, otherwise bogus information is return - * via read() for time_enabled, time_running: - */ - if (event->state == PERF_EVENT_STATE_INACTIVE && - !event_filter_match(event)) { - delta = tstamp - event->tstamp_stopped; - event->tstamp_running += delta; - event->tstamp_stopped = tstamp; - } - if (event->state != PERF_EVENT_STATE_ACTIVE) return; perf_pmu_disable(event->pmu); - event->tstamp_stopped = tstamp; event->pmu->del(event, 0); event->oncpu = -1; - event->state = PERF_EVENT_STATE_INACTIVE; + if (event->pending_disable) { event->pending_disable = 0; - event->state = PERF_EVENT_STATE_OFF; + state = PERF_EVENT_STATE_OFF; } + perf_event_set_state(event, state); if (!is_software_event(event)) cpuctx->active_oncpu--; @@ -1861,7 
+1830,9 @@ group_sched_out(struct perf_event *group_event, struct perf_event_context *ctx) { struct perf_event *event; - int state = group_event->state; + + if (group_event->state != PERF_EVENT_STATE_ACTIVE) + return; perf_pmu_disable(ctx->pmu); @@ -1875,7 +1846,7 @@ group_sched_out(struct perf_event *group_event, perf_pmu_enable(ctx->pmu); - if (state == PERF_EVENT_STATE_ACTIVE && group_event->attr.exclusive) + if (group_event->attr.exclusive) cpuctx->exclusive = 0; } @@ -1895,6 +1866,11 @@ __perf_remove_from_context(struct perf_event *event, { unsigned long flags = (unsigned long)info; + if (ctx->is_active & EVENT_TIME) { + update_context_time(ctx); + update_cgrp_time_from_cpuctx(cpuctx); + } + event_sched_out(event, cpuctx, ctx); if (flags & DETACH_GROUP) perf_group_detach(event); @@ -1957,14 +1933,17 @@ static void __perf_event_disable(struct perf_event *event, if (event->state < PERF_EVENT_STATE_INACTIVE) return; - update_context_time(ctx); - update_cgrp_time_from_event(event); - update_group_times(event); + if (ctx->is_active & EVENT_TIME) { + update_context_time(ctx); + update_cgrp_time_from_event(event); + } + if (event == event->group_leader) group_sched_out(event, cpuctx, ctx); else event_sched_out(event, cpuctx, ctx); - event->state = PERF_EVENT_STATE_OFF; + + perf_event_set_state(event, PERF_EVENT_STATE_OFF); } /* @@ -2021,8 +2000,7 @@ void perf_event_disable_inatomic(struct perf_event *event) } static void perf_set_shadow_time(struct perf_event *event, - struct perf_event_context *ctx, - u64 tstamp) + struct perf_event_context *ctx) { /* * use the correct time source for the time snapshot @@ -2050,9 +2028,9 @@ static void perf_set_shadow_time(struct perf_event *event, * is cleaner and simpler to understand. */ if (is_cgroup_event(event)) - perf_cgroup_set_shadow_time(event, tstamp); + perf_cgroup_set_shadow_time(event, event->tstamp); else - event->shadow_ctx_time = tstamp - ctx->timestamp; + event->shadow_ctx_time = event->tstamp - ctx->timestamp; } #define MAX_INTERRUPTS (~0ULL) @@ -2065,7 +2043,6 @@ event_sched_in(struct perf_event *event, struct perf_cpu_context *cpuctx, struct perf_event_context *ctx) { - u64 tstamp = perf_event_time(event); int ret = 0; lockdep_assert_held(&ctx->lock); @@ -2075,11 +2052,12 @@ event_sched_in(struct perf_event *event, WRITE_ONCE(event->oncpu, smp_processor_id()); /* - * Order event::oncpu write to happen before the ACTIVE state - * is visible. + * Order event::oncpu write to happen before the ACTIVE state is + * visible. This allows perf_event_{stop,read}() to observe the correct + * ->oncpu if it sees ACTIVE. 
*/ smp_wmb(); - WRITE_ONCE(event->state, PERF_EVENT_STATE_ACTIVE); + perf_event_set_state(event, PERF_EVENT_STATE_ACTIVE); /* * Unthrottle events, since we scheduled we might have missed several @@ -2091,26 +2069,19 @@ event_sched_in(struct perf_event *event, event->hw.interrupts = 0; } - /* - * The new state must be visible before we turn it on in the hardware: - */ - smp_wmb(); - perf_pmu_disable(event->pmu); - perf_set_shadow_time(event, ctx, tstamp); + perf_set_shadow_time(event, ctx); perf_log_itrace_start(event); if (event->pmu->add(event, PERF_EF_START)) { - event->state = PERF_EVENT_STATE_INACTIVE; + perf_event_set_state(event, PERF_EVENT_STATE_INACTIVE); event->oncpu = -1; ret = -EAGAIN; goto out; } - event->tstamp_running += tstamp - event->tstamp_stopped; - if (!is_software_event(event)) cpuctx->active_oncpu++; if (!ctx->nr_active++) @@ -2134,8 +2105,6 @@ group_sched_in(struct perf_event *group_event, { struct perf_event *event, *partial_group = NULL; struct pmu *pmu = ctx->pmu; - u64 now = ctx->time; - bool simulate = false; if (group_event->state == PERF_EVENT_STATE_OFF) return 0; @@ -2165,27 +2134,13 @@ group_error: /* * Groups can be scheduled in as one unit only, so undo any * partial group before returning: - * The events up to the failed event are scheduled out normally, - * tstamp_stopped will be updated. - * - * The failed events and the remaining siblings need to have - * their timings updated as if they had gone thru event_sched_in() - * and event_sched_out(). This is required to get consistent timings - * across the group. This also takes care of the case where the group - * could never be scheduled by ensuring tstamp_stopped is set to mark - * the time the event was actually stopped, such that time delta - * calculation in update_event_times() is correct. + * The events up to the failed event are scheduled out normally. */ list_for_each_entry(event, &group_event->sibling_list, group_entry) { if (event == partial_group) - simulate = true; + break; - if (simulate) { - event->tstamp_running += now - event->tstamp_stopped; - event->tstamp_stopped = now; - } else { - event_sched_out(event, cpuctx, ctx); - } + event_sched_out(event, cpuctx, ctx); } event_sched_out(group_event, cpuctx, ctx); @@ -2227,46 +2182,11 @@ static int group_can_go_on(struct perf_event *event, return can_add_hw; } -/* - * Complement to update_event_times(). This computes the tstamp_* values to - * continue 'enabled' state from @now, and effectively discards the time - * between the prior tstamp_stopped and now (as we were in the OFF state, or - * just switched (context) time base). - * - * This further assumes '@event->state == INACTIVE' (we just came from OFF) and - * cannot have been scheduled in yet. And going into INACTIVE state means - * '@event->tstamp_stopped = @now'. - * - * Thus given the rules of update_event_times(): - * - * total_time_enabled = tstamp_stopped - tstamp_enabled - * total_time_running = tstamp_stopped - tstamp_running - * - * We can insert 'tstamp_stopped == now' and reverse them to compute new - * tstamp_* values. 
- */ -static void __perf_event_enable_time(struct perf_event *event, u64 now) -{ - WARN_ON_ONCE(event->state != PERF_EVENT_STATE_INACTIVE); - - event->tstamp_stopped = now; - event->tstamp_enabled = now - event->total_time_enabled; - event->tstamp_running = now - event->total_time_running; -} - static void add_event_to_ctx(struct perf_event *event, struct perf_event_context *ctx) { - u64 tstamp = perf_event_time(event); - list_add_event(event, ctx); perf_group_attach(event); - /* - * We can be called with event->state == STATE_OFF when we create with - * .disabled = 1. In that case the IOC_ENABLE will call this function. - */ - if (event->state == PERF_EVENT_STATE_INACTIVE) - __perf_event_enable_time(event, tstamp); } static void ctx_sched_out(struct perf_event_context *ctx, @@ -2498,28 +2418,6 @@ again: } /* - * Put a event into inactive state and update time fields. - * Enabling the leader of a group effectively enables all - * the group members that aren't explicitly disabled, so we - * have to update their ->tstamp_enabled also. - * Note: this works for group members as well as group leaders - * since the non-leader members' sibling_lists will be empty. - */ -static void __perf_event_mark_enabled(struct perf_event *event) -{ - struct perf_event *sub; - u64 tstamp = perf_event_time(event); - - event->state = PERF_EVENT_STATE_INACTIVE; - __perf_event_enable_time(event, tstamp); - list_for_each_entry(sub, &event->sibling_list, group_entry) { - /* XXX should not be > INACTIVE if event isn't */ - if (sub->state >= PERF_EVENT_STATE_INACTIVE) - __perf_event_enable_time(sub, tstamp); - } -} - -/* * Cross CPU call to enable a performance event */ static void __perf_event_enable(struct perf_event *event, @@ -2537,14 +2435,12 @@ static void __perf_event_enable(struct perf_event *event, if (ctx->is_active) ctx_sched_out(ctx, cpuctx, EVENT_TIME); - __perf_event_mark_enabled(event); + perf_event_set_state(event, PERF_EVENT_STATE_INACTIVE); if (!ctx->is_active) return; if (!event_filter_match(event)) { - if (is_cgroup_event(event)) - perf_cgroup_defer_enabled(event); ctx_sched_in(ctx, cpuctx, EVENT_TIME, current); return; } @@ -2864,18 +2760,10 @@ static void __perf_event_sync_stat(struct perf_event *event, * we know the event must be on the current CPU, therefore we * don't need to use it. */ - switch (event->state) { - case PERF_EVENT_STATE_ACTIVE: + if (event->state == PERF_EVENT_STATE_ACTIVE) event->pmu->read(event); - /* fall-through */ - - case PERF_EVENT_STATE_INACTIVE: - update_event_times(event); - break; - default: - break; - } + perf_event_update_time(event); /* * In order to keep per-task stats reliable we need to flip the event @@ -3112,10 +3000,6 @@ ctx_pinned_sched_in(struct perf_event_context *ctx, if (!event_filter_match(event)) continue; - /* may need to reset tstamp_enabled */ - if (is_cgroup_event(event)) - perf_cgroup_mark_enabled(event, ctx); - if (group_can_go_on(event, cpuctx, 1)) group_sched_in(event, cpuctx, ctx); @@ -3123,10 +3007,8 @@ ctx_pinned_sched_in(struct perf_event_context *ctx, * If this pinned group hasn't been scheduled, * put it in error state. 
*/ - if (event->state == PERF_EVENT_STATE_INACTIVE) { - update_group_times(event); - event->state = PERF_EVENT_STATE_ERROR; - } + if (event->state == PERF_EVENT_STATE_INACTIVE) + perf_event_set_state(event, PERF_EVENT_STATE_ERROR); } } @@ -3148,10 +3030,6 @@ ctx_flexible_sched_in(struct perf_event_context *ctx, if (!event_filter_match(event)) continue; - /* may need to reset tstamp_enabled */ - if (is_cgroup_event(event)) - perf_cgroup_mark_enabled(event, ctx); - if (group_can_go_on(event, cpuctx, can_add_hw)) { if (group_sched_in(event, cpuctx, ctx)) can_add_hw = 0; @@ -3543,7 +3421,7 @@ static int event_enable_on_exec(struct perf_event *event, if (event->state >= PERF_EVENT_STATE_INACTIVE) return 0; - __perf_event_mark_enabled(event); + perf_event_set_state(event, PERF_EVENT_STATE_INACTIVE); return 1; } @@ -3637,12 +3515,15 @@ static void __perf_event_read(void *info) return; raw_spin_lock(&ctx->lock); - if (ctx->is_active) { + if (ctx->is_active & EVENT_TIME) { update_context_time(ctx); update_cgrp_time_from_event(event); } - update_event_times(event); + perf_event_update_time(event); + if (data->group) + perf_event_update_sibling_time(event); + if (event->state != PERF_EVENT_STATE_ACTIVE) goto unlock; @@ -3657,7 +3538,6 @@ static void __perf_event_read(void *info) pmu->read(event); list_for_each_entry(sub, &event->sibling_list, group_entry) { - update_event_times(sub); if (sub->state == PERF_EVENT_STATE_ACTIVE) { /* * Use sibling's PMU rather than @event's since @@ -3686,7 +3566,8 @@ static inline u64 perf_event_count(struct perf_event *event) * will not be local and we cannot read them atomically * - must not have a pmu::count method */ -int perf_event_read_local(struct perf_event *event, u64 *value) +int perf_event_read_local(struct perf_event *event, u64 *value, + u64 *enabled, u64 *running) { unsigned long flags; int ret = 0; @@ -3720,6 +3601,7 @@ int perf_event_read_local(struct perf_event *event, u64 *value) goto out; } + /* * If the event is currently on this CPU, its either a per-task event, * or local to this CPU. Furthermore it means its ACTIVE (otherwise @@ -3729,6 +3611,16 @@ int perf_event_read_local(struct perf_event *event, u64 *value) event->pmu->read(event); *value = local64_read(&event->count); + if (enabled || running) { + u64 now = event->shadow_ctx_time + perf_clock(); + u64 __enabled, __running; + + __perf_update_times(event, now, &__enabled, &__running); + if (enabled) + *enabled = __enabled; + if (running) + *running = __running; + } out: local_irq_restore(flags); @@ -3737,23 +3629,35 @@ out: static int perf_event_read(struct perf_event *event, bool group) { + enum perf_event_state state = READ_ONCE(event->state); int event_cpu, ret = 0; /* * If event is enabled and currently active on a CPU, update the * value in the event structure: */ - if (event->state == PERF_EVENT_STATE_ACTIVE) { - struct perf_read_data data = { - .event = event, - .group = group, - .ret = 0, - }; +again: + if (state == PERF_EVENT_STATE_ACTIVE) { + struct perf_read_data data; + + /* + * Orders the ->state and ->oncpu loads such that if we see + * ACTIVE we must also see the right ->oncpu. + * + * Matches the smp_wmb() from event_sched_in(). 
+ */ + smp_rmb(); event_cpu = READ_ONCE(event->oncpu); if ((unsigned)event_cpu >= nr_cpu_ids) return 0; + data = (struct perf_read_data){ + .event = event, + .group = group, + .ret = 0, + }; + preempt_disable(); event_cpu = __perf_event_read_cpu(event, event_cpu); @@ -3770,24 +3674,30 @@ static int perf_event_read(struct perf_event *event, bool group) (void)smp_call_function_single(event_cpu, __perf_event_read, &data, 1); preempt_enable(); ret = data.ret; - } else if (event->state == PERF_EVENT_STATE_INACTIVE) { + + } else if (state == PERF_EVENT_STATE_INACTIVE) { struct perf_event_context *ctx = event->ctx; unsigned long flags; raw_spin_lock_irqsave(&ctx->lock, flags); + state = event->state; + if (state != PERF_EVENT_STATE_INACTIVE) { + raw_spin_unlock_irqrestore(&ctx->lock, flags); + goto again; + } + /* - * may read while context is not active - * (e.g., thread is blocked), in that case - * we cannot update context time + * May read while context is not active (e.g., thread is + * blocked), in that case we cannot update context time */ - if (ctx->is_active) { + if (ctx->is_active & EVENT_TIME) { update_context_time(ctx); update_cgrp_time_from_event(event); } + + perf_event_update_time(event); if (group) - update_group_times(event); - else - update_event_times(event); + perf_event_update_sibling_time(event); raw_spin_unlock_irqrestore(&ctx->lock, flags); } @@ -4390,7 +4300,7 @@ static int perf_release(struct inode *inode, struct file *file) return 0; } -u64 perf_event_read_value(struct perf_event *event, u64 *enabled, u64 *running) +static u64 __perf_event_read_value(struct perf_event *event, u64 *enabled, u64 *running) { struct perf_event *child; u64 total = 0; @@ -4418,6 +4328,18 @@ u64 perf_event_read_value(struct perf_event *event, u64 *enabled, u64 *running) return total; } + +u64 perf_event_read_value(struct perf_event *event, u64 *enabled, u64 *running) +{ + struct perf_event_context *ctx; + u64 count; + + ctx = perf_event_ctx_lock(event); + count = __perf_event_read_value(event, enabled, running); + perf_event_ctx_unlock(event, ctx); + + return count; +} EXPORT_SYMBOL_GPL(perf_event_read_value); static int __perf_read_group_add(struct perf_event *leader, @@ -4433,6 +4355,8 @@ static int __perf_read_group_add(struct perf_event *leader, if (ret) return ret; + raw_spin_lock_irqsave(&ctx->lock, flags); + /* * Since we co-schedule groups, {enabled,running} times of siblings * will be identical to those of the leader, so we only publish one @@ -4455,8 +4379,6 @@ static int __perf_read_group_add(struct perf_event *leader, if (read_format & PERF_FORMAT_ID) values[n++] = primary_event_id(leader); - raw_spin_lock_irqsave(&ctx->lock, flags); - list_for_each_entry(sub, &leader->sibling_list, group_entry) { values[n++] += perf_event_count(sub); if (read_format & PERF_FORMAT_ID) @@ -4520,7 +4442,7 @@ static int perf_read_one(struct perf_event *event, u64 values[4]; int n = 0; - values[n++] = perf_event_read_value(event, &enabled, &running); + values[n++] = __perf_event_read_value(event, &enabled, &running); if (read_format & PERF_FORMAT_TOTAL_TIME_ENABLED) values[n++] = enabled; if (read_format & PERF_FORMAT_TOTAL_TIME_RUNNING) @@ -4899,8 +4821,7 @@ static void calc_timer_values(struct perf_event *event, *now = perf_clock(); ctx_time = event->shadow_ctx_time + *now; - *enabled = ctx_time - event->tstamp_enabled; - *running = ctx_time - event->tstamp_running; + __perf_update_times(event, ctx_time, enabled, running); } static void perf_event_init_userpage(struct perf_event *event) @@ -8074,6 
+7995,7 @@ static void bpf_overflow_handler(struct perf_event *event, struct bpf_perf_event_data_kern ctx = { .data = data, .regs = regs, + .event = event, }; int ret = 0; @@ -9404,6 +9326,11 @@ static void account_event(struct perf_event *event) inc = true; if (inc) { + /* + * We need the mutex here because static_branch_enable() + * must complete *before* the perf_sched_count increment + * becomes visible. + */ if (atomic_inc_not_zero(&perf_sched_count)) goto enabled; @@ -10529,7 +10456,7 @@ perf_event_exit_event(struct perf_event *child_event, if (parent_event) perf_group_detach(child_event); list_del_event(child_event, child_ctx); - child_event->state = PERF_EVENT_STATE_EXIT; /* is_event_hup() */ + perf_event_set_state(child_event, PERF_EVENT_STATE_EXIT); /* is_event_hup() */ raw_spin_unlock_irq(&child_ctx->lock); /* @@ -10767,7 +10694,7 @@ inherit_event(struct perf_event *parent_event, struct perf_event *group_leader, struct perf_event_context *child_ctx) { - enum perf_event_active_state parent_state = parent_event->state; + enum perf_event_state parent_state = parent_event->state; struct perf_event *child_event; unsigned long flags; @@ -11103,6 +11030,7 @@ static void __perf_event_exit_context(void *__info) struct perf_event *event; raw_spin_lock(&ctx->lock); + ctx_sched_out(ctx, cpuctx, EVENT_TIME); list_for_each_entry(event, &ctx->event_list, event_entry) __perf_remove_from_context(event, cpuctx, ctx, (void *)DETACH_GROUP); raw_spin_unlock(&ctx->lock); diff --git a/kernel/kprobes.c b/kernel/kprobes.c index a1606a4224e1..da2ccf142358 100644 --- a/kernel/kprobes.c +++ b/kernel/kprobes.c @@ -117,7 +117,7 @@ enum kprobe_slot_state { SLOT_USED = 2, }; -static void *alloc_insn_page(void) +void __weak *alloc_insn_page(void) { return module_alloc(PAGE_SIZE); } @@ -573,13 +573,15 @@ static void kprobe_optimizer(struct work_struct *work) do_unoptimize_kprobes(); /* - * Step 2: Wait for quiesence period to ensure all running interrupts - * are done. Because optprobe may modify multiple instructions - * there is a chance that Nth instruction is interrupted. In that - * case, running interrupt can return to 2nd-Nth byte of jump - * instruction. This wait is for avoiding it. + * Step 2: Wait for quiesence period to ensure all potentially + * preempted tasks to have normally scheduled. Because optprobe + * may modify multiple instructions, there is a chance that Nth + * instruction is preempted. In that case, such tasks can return + * to 2nd-Nth byte of jump instruction. This wait is for avoiding it. + * Note that on non-preemptive kernel, this is transparently converted + * to synchronoze_sched() to wait for all interrupts to have completed. 
*/ - synchronize_sched(); + synchronize_rcu_tasks(); /* Step 3: Optimize kprobes after quiesence period */ do_optimize_kprobes(); @@ -1769,6 +1771,7 @@ unsigned long __weak arch_deref_entry_point(void *entry) return (unsigned long)entry; } +#if 0 int register_jprobes(struct jprobe **jps, int num) { int ret = 0, i; @@ -1837,6 +1840,7 @@ void unregister_jprobes(struct jprobe **jps, int num) } } EXPORT_SYMBOL_GPL(unregister_jprobes); +#endif #ifdef CONFIG_KRETPROBES /* diff --git a/kernel/test_kprobes.c b/kernel/test_kprobes.c index 0dbab6d1acb4..dd53e354f630 100644 --- a/kernel/test_kprobes.c +++ b/kernel/test_kprobes.c @@ -22,7 +22,7 @@ #define div_factor 3 -static u32 rand1, preh_val, posth_val, jph_val; +static u32 rand1, preh_val, posth_val; static int errors, handler_errors, num_tests; static u32 (*target)(u32 value); static u32 (*target2)(u32 value); @@ -34,6 +34,10 @@ static noinline u32 kprobe_target(u32 value) static int kp_pre_handler(struct kprobe *p, struct pt_regs *regs) { + if (preemptible()) { + handler_errors++; + pr_err("pre-handler is preemptible\n"); + } preh_val = (rand1 / div_factor); return 0; } @@ -41,6 +45,10 @@ static int kp_pre_handler(struct kprobe *p, struct pt_regs *regs) static void kp_post_handler(struct kprobe *p, struct pt_regs *regs, unsigned long flags) { + if (preemptible()) { + handler_errors++; + pr_err("post-handler is preemptible\n"); + } if (preh_val != (rand1 / div_factor)) { handler_errors++; pr_err("incorrect value in post_handler\n"); @@ -154,8 +162,15 @@ static int test_kprobes(void) } +#if 0 +static u32 jph_val; + static u32 j_kprobe_target(u32 value) { + if (preemptible()) { + handler_errors++; + pr_err("jprobe-handler is preemptible\n"); + } if (value != rand1) { handler_errors++; pr_err("incorrect value in jprobe handler\n"); @@ -227,11 +242,19 @@ static int test_jprobes(void) return 0; } +#else +#define test_jprobe() (0) +#define test_jprobes() (0) +#endif #ifdef CONFIG_KRETPROBES static u32 krph_val; static int entry_handler(struct kretprobe_instance *ri, struct pt_regs *regs) { + if (preemptible()) { + handler_errors++; + pr_err("kretprobe entry handler is preemptible\n"); + } krph_val = (rand1 / div_factor); return 0; } @@ -240,6 +263,10 @@ static int return_handler(struct kretprobe_instance *ri, struct pt_regs *regs) { unsigned long ret = regs_return_value(regs); + if (preemptible()) { + handler_errors++; + pr_err("kretprobe return handler is preemptible\n"); + } if (ret != (rand1 / div_factor)) { handler_errors++; pr_err("incorrect value in kretprobe handler\n"); diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c index dc498b605d5d..95888ae6c263 100644 --- a/kernel/trace/bpf_trace.c +++ b/kernel/trace/bpf_trace.c @@ -275,7 +275,7 @@ BPF_CALL_2(bpf_perf_event_read, struct bpf_map *, map, u64, flags) if (!ee) return -ENOENT; - err = perf_event_read_local(ee->event, &value); + err = perf_event_read_local(ee->event, &value, NULL, NULL); /* * this api is ugly since we miss [-22..-2] range of valid * counter values, but that's uapi diff --git a/samples/kprobes/Makefile b/samples/kprobes/Makefile index 68739bc4fc6a..880e54d2c082 100644 --- a/samples/kprobes/Makefile +++ b/samples/kprobes/Makefile @@ -1,5 +1,5 @@ # builds the kprobes example kernel modules; # then to use one (as root): insmod <module_name.ko> -obj-$(CONFIG_SAMPLE_KPROBES) += kprobe_example.o jprobe_example.o +obj-$(CONFIG_SAMPLE_KPROBES) += kprobe_example.o obj-$(CONFIG_SAMPLE_KRETPROBES) += kretprobe_example.o diff --git a/samples/kprobes/jprobe_example.c 
b/samples/kprobes/jprobe_example.c deleted file mode 100644 index e3c0a40909f7..000000000000 --- a/samples/kprobes/jprobe_example.c +++ /dev/null @@ -1,67 +0,0 @@ -/* - * Here's a sample kernel module showing the use of jprobes to dump - * the arguments of _do_fork(). - * - * For more information on theory of operation of jprobes, see - * Documentation/kprobes.txt - * - * Build and insert the kernel module as done in the kprobe example. - * You will see the trace data in /var/log/messages and on the - * console whenever _do_fork() is invoked to create a new process. - * (Some messages may be suppressed if syslogd is configured to - * eliminate duplicate messages.) - */ - -#include <linux/kernel.h> -#include <linux/module.h> -#include <linux/kprobes.h> - -/* - * Jumper probe for _do_fork. - * Mirror principle enables access to arguments of the probed routine - * from the probe handler. - */ - -/* Proxy routine having the same arguments as actual _do_fork() routine */ -static long j_do_fork(unsigned long clone_flags, unsigned long stack_start, - unsigned long stack_size, int __user *parent_tidptr, - int __user *child_tidptr, unsigned long tls) -{ - pr_info("jprobe: clone_flags = 0x%lx, stack_start = 0x%lx " - "stack_size = 0x%lx\n", clone_flags, stack_start, stack_size); - - /* Always end with a call to jprobe_return(). */ - jprobe_return(); - return 0; -} - -static struct jprobe my_jprobe = { - .entry = j_do_fork, - .kp = { - .symbol_name = "_do_fork", - }, -}; - -static int __init jprobe_init(void) -{ - int ret; - - ret = register_jprobe(&my_jprobe); - if (ret < 0) { - pr_err("register_jprobe failed, returned %d\n", ret); - return -1; - } - pr_info("Planted jprobe at %p, handler addr %p\n", - my_jprobe.kp.addr, my_jprobe.entry); - return 0; -} - -static void __exit jprobe_exit(void) -{ - unregister_jprobe(&my_jprobe); - pr_info("jprobe at %p unregistered\n", my_jprobe.kp.addr); -} - -module_init(jprobe_init) -module_exit(jprobe_exit) -MODULE_LICENSE("GPL"); diff --git a/tools/include/linux/poison.h b/tools/include/linux/poison.h index 4bf6777a8a03..9fdcd3eaac3b 100644 --- a/tools/include/linux/poison.h +++ b/tools/include/linux/poison.h @@ -15,6 +15,10 @@ # define POISON_POINTER_DELTA 0 #endif +#ifdef __cplusplus +#define LIST_POISON1 NULL +#define LIST_POISON2 NULL +#else /* * These are non-NULL pointers that will result in page faults * under normal circumstances, used to verify that nobody uses @@ -22,6 +26,7 @@ */ #define LIST_POISON1 ((void *) 0x100 + POISON_POINTER_DELTA) #define LIST_POISON2 ((void *) 0x200 + POISON_POINTER_DELTA) +#endif /********** include/linux/timer.h **********/ /* diff --git a/tools/include/uapi/linux/kcmp.h b/tools/include/uapi/linux/kcmp.h new file mode 100644 index 000000000000..481e103da78e --- /dev/null +++ b/tools/include/uapi/linux/kcmp.h @@ -0,0 +1,27 @@ +#ifndef _UAPI_LINUX_KCMP_H +#define _UAPI_LINUX_KCMP_H + +#include <linux/types.h> + +/* Comparison type */ +enum kcmp_type { + KCMP_FILE, + KCMP_VM, + KCMP_FILES, + KCMP_FS, + KCMP_SIGHAND, + KCMP_IO, + KCMP_SYSVSEM, + KCMP_EPOLL_TFD, + + KCMP_TYPES, +}; + +/* Slot for KCMP_EPOLL_TFD */ +struct kcmp_epoll_slot { + __u32 efd; /* epoll file descriptor */ + __u32 tfd; /* target file number */ + __u32 toff; /* target offset within same numbered sequence */ +}; + +#endif /* _UAPI_LINUX_KCMP_H */ diff --git a/tools/include/uapi/linux/prctl.h b/tools/include/uapi/linux/prctl.h new file mode 100644 index 000000000000..a8d0759a9e40 --- /dev/null +++ b/tools/include/uapi/linux/prctl.h @@ -0,0 +1,200 @@ 
+#ifndef _LINUX_PRCTL_H +#define _LINUX_PRCTL_H + +#include <linux/types.h> + +/* Values to pass as first argument to prctl() */ + +#define PR_SET_PDEATHSIG 1 /* Second arg is a signal */ +#define PR_GET_PDEATHSIG 2 /* Second arg is a ptr to return the signal */ + +/* Get/set current->mm->dumpable */ +#define PR_GET_DUMPABLE 3 +#define PR_SET_DUMPABLE 4 + +/* Get/set unaligned access control bits (if meaningful) */ +#define PR_GET_UNALIGN 5 +#define PR_SET_UNALIGN 6 +# define PR_UNALIGN_NOPRINT 1 /* silently fix up unaligned user accesses */ +# define PR_UNALIGN_SIGBUS 2 /* generate SIGBUS on unaligned user access */ + +/* Get/set whether or not to drop capabilities on setuid() away from + * uid 0 (as per security/commoncap.c) */ +#define PR_GET_KEEPCAPS 7 +#define PR_SET_KEEPCAPS 8 + +/* Get/set floating-point emulation control bits (if meaningful) */ +#define PR_GET_FPEMU 9 +#define PR_SET_FPEMU 10 +# define PR_FPEMU_NOPRINT 1 /* silently emulate fp operations accesses */ +# define PR_FPEMU_SIGFPE 2 /* don't emulate fp operations, send SIGFPE instead */ + +/* Get/set floating-point exception mode (if meaningful) */ +#define PR_GET_FPEXC 11 +#define PR_SET_FPEXC 12 +# define PR_FP_EXC_SW_ENABLE 0x80 /* Use FPEXC for FP exception enables */ +# define PR_FP_EXC_DIV 0x010000 /* floating point divide by zero */ +# define PR_FP_EXC_OVF 0x020000 /* floating point overflow */ +# define PR_FP_EXC_UND 0x040000 /* floating point underflow */ +# define PR_FP_EXC_RES 0x080000 /* floating point inexact result */ +# define PR_FP_EXC_INV 0x100000 /* floating point invalid operation */ +# define PR_FP_EXC_DISABLED 0 /* FP exceptions disabled */ +# define PR_FP_EXC_NONRECOV 1 /* async non-recoverable exc. mode */ +# define PR_FP_EXC_ASYNC 2 /* async recoverable exception mode */ +# define PR_FP_EXC_PRECISE 3 /* precise exception mode */ + +/* Get/set whether we use statistical process timing or accurate timestamp + * based process timing */ +#define PR_GET_TIMING 13 +#define PR_SET_TIMING 14 +# define PR_TIMING_STATISTICAL 0 /* Normal, traditional, + statistical process timing */ +# define PR_TIMING_TIMESTAMP 1 /* Accurate timestamp based + process timing */ + +#define PR_SET_NAME 15 /* Set process name */ +#define PR_GET_NAME 16 /* Get process name */ + +/* Get/set process endian */ +#define PR_GET_ENDIAN 19 +#define PR_SET_ENDIAN 20 +# define PR_ENDIAN_BIG 0 +# define PR_ENDIAN_LITTLE 1 /* True little endian mode */ +# define PR_ENDIAN_PPC_LITTLE 2 /* "PowerPC" pseudo little endian */ + +/* Get/set process seccomp mode */ +#define PR_GET_SECCOMP 21 +#define PR_SET_SECCOMP 22 + +/* Get/set the capability bounding set (as per security/commoncap.c) */ +#define PR_CAPBSET_READ 23 +#define PR_CAPBSET_DROP 24 + +/* Get/set the process' ability to use the timestamp counter instruction */ +#define PR_GET_TSC 25 +#define PR_SET_TSC 26 +# define PR_TSC_ENABLE 1 /* allow the use of the timestamp counter */ +# define PR_TSC_SIGSEGV 2 /* throw a SIGSEGV instead of reading the TSC */ + +/* Get/set securebits (as per security/commoncap.c) */ +#define PR_GET_SECUREBITS 27 +#define PR_SET_SECUREBITS 28 + +/* + * Get/set the timerslack as used by poll/select/nanosleep + * A value of 0 means "use default" + */ +#define PR_SET_TIMERSLACK 29 +#define PR_GET_TIMERSLACK 30 + +#define PR_TASK_PERF_EVENTS_DISABLE 31 +#define PR_TASK_PERF_EVENTS_ENABLE 32 + +/* + * Set early/late kill mode for hwpoison memory corruption. + * This influences when the process gets killed on a memory corruption. 
+ */ +#define PR_MCE_KILL 33 +# define PR_MCE_KILL_CLEAR 0 +# define PR_MCE_KILL_SET 1 + +# define PR_MCE_KILL_LATE 0 +# define PR_MCE_KILL_EARLY 1 +# define PR_MCE_KILL_DEFAULT 2 + +#define PR_MCE_KILL_GET 34 + +/* + * Tune up process memory map specifics. + */ +#define PR_SET_MM 35 +# define PR_SET_MM_START_CODE 1 +# define PR_SET_MM_END_CODE 2 +# define PR_SET_MM_START_DATA 3 +# define PR_SET_MM_END_DATA 4 +# define PR_SET_MM_START_STACK 5 +# define PR_SET_MM_START_BRK 6 +# define PR_SET_MM_BRK 7 +# define PR_SET_MM_ARG_START 8 +# define PR_SET_MM_ARG_END 9 +# define PR_SET_MM_ENV_START 10 +# define PR_SET_MM_ENV_END 11 +# define PR_SET_MM_AUXV 12 +# define PR_SET_MM_EXE_FILE 13 +# define PR_SET_MM_MAP 14 +# define PR_SET_MM_MAP_SIZE 15 + +/* + * This structure provides new memory descriptor + * map which mostly modifies /proc/pid/stat[m] + * output for a task. This mostly done in a + * sake of checkpoint/restore functionality. + */ +struct prctl_mm_map { + __u64 start_code; /* code section bounds */ + __u64 end_code; + __u64 start_data; /* data section bounds */ + __u64 end_data; + __u64 start_brk; /* heap for brk() syscall */ + __u64 brk; + __u64 start_stack; /* stack starts at */ + __u64 arg_start; /* command line arguments bounds */ + __u64 arg_end; + __u64 env_start; /* environment variables bounds */ + __u64 env_end; + __u64 *auxv; /* auxiliary vector */ + __u32 auxv_size; /* vector size */ + __u32 exe_fd; /* /proc/$pid/exe link file */ +}; + +/* + * Set specific pid that is allowed to ptrace the current task. + * A value of 0 mean "no process". + */ +#define PR_SET_PTRACER 0x59616d61 +# define PR_SET_PTRACER_ANY ((unsigned long)-1) + +#define PR_SET_CHILD_SUBREAPER 36 +#define PR_GET_CHILD_SUBREAPER 37 + +/* + * If no_new_privs is set, then operations that grant new privileges (i.e. + * execve) will either fail or not grant them. This affects suid/sgid, + * file capabilities, and LSMs. + * + * Operations that merely manipulate or drop existing privileges (setresuid, + * capset, etc.) will still work. Drop those privileges if you want them gone. + * + * Changing LSM security domain is considered a new privilege. So, for example, + * asking selinux for a specific new context (e.g. with runcon) will result + * in execve returning -EPERM. + * + * See Documentation/prctl/no_new_privs.txt for more details. + */ +#define PR_SET_NO_NEW_PRIVS 38 +#define PR_GET_NO_NEW_PRIVS 39 + +#define PR_GET_TID_ADDRESS 40 + +#define PR_SET_THP_DISABLE 41 +#define PR_GET_THP_DISABLE 42 + +/* + * Tell the kernel to start/stop helping userspace manage bounds tables. 
+ */ +#define PR_MPX_ENABLE_MANAGEMENT 43 +#define PR_MPX_DISABLE_MANAGEMENT 44 + +#define PR_SET_FP_MODE 45 +#define PR_GET_FP_MODE 46 +# define PR_FP_MODE_FR (1 << 0) /* 64b FP registers */ +# define PR_FP_MODE_FRE (1 << 1) /* 32b compatibility */ + +/* Control the ambient capability set */ +#define PR_CAP_AMBIENT 47 +# define PR_CAP_AMBIENT_IS_SET 1 +# define PR_CAP_AMBIENT_RAISE 2 +# define PR_CAP_AMBIENT_LOWER 3 +# define PR_CAP_AMBIENT_CLEAR_ALL 4 + +#endif /* _LINUX_PRCTL_H */ diff --git a/tools/perf/Documentation/perf-list.txt b/tools/perf/Documentation/perf-list.txt index f709de54707b..e2a897ae3596 100644 --- a/tools/perf/Documentation/perf-list.txt +++ b/tools/perf/Documentation/perf-list.txt @@ -8,7 +8,8 @@ perf-list - List all symbolic event types SYNOPSIS -------- [verse] -'perf list' [--no-desc] [--long-desc] [hw|sw|cache|tracepoint|pmu|sdt|event_glob] +'perf list' [--no-desc] [--long-desc] + [hw|sw|cache|tracepoint|pmu|sdt|metric|metricgroup|event_glob] DESCRIPTION ----------- @@ -47,6 +48,8 @@ counted. The following modifiers exist: P - use maximum detected precise level S - read sample value (PERF_SAMPLE_READ) D - pin the event to the PMU + W - group is weak and will fallback to non-group if not schedulable, + only supported in 'perf stat' for now. The 'p' modifier can be used for specifying how precise the instruction address should be. The 'p' modifier can be specified multiple times: @@ -201,7 +204,7 @@ For example Intel Core CPUs typically have four generic performance counters for the core, plus three fixed counters for instructions, cycles and ref-cycles. Some special events have restrictions on which counter they can schedule, and may not support multiple instances in a single group. -When too many events are specified in the group none of them will not +When too many events are specified in the group some of them will not be measured. Globally pinned events can limit the number of counters available for @@ -246,6 +249,10 @@ To limit the list use: . 'sdt' to list all Statically Defined Tracepoint events. +. 'metric' to list metrics + +. 'metricgroup' to list metricgroups with metrics. + . If none of the above is matched, it will apply the supplied glob to all events, printing the ones that match. diff --git a/tools/perf/Documentation/perf-record.txt b/tools/perf/Documentation/perf-record.txt index 63526f4416ea..5a626ef666c2 100644 --- a/tools/perf/Documentation/perf-record.txt +++ b/tools/perf/Documentation/perf-record.txt @@ -377,6 +377,8 @@ symbolic names, e.g. on x86, ax, si. To list the available registers use --intr-regs=\?. To name registers, pass a comma separated list such as --intr-regs=ax,bx. The list of register is architecture dependent. +--user-regs:: +Capture user registers at sample time. Same arguments as -I. --running-time:: Record running and enabled time for read events (:S) diff --git a/tools/perf/Documentation/perf-report.txt b/tools/perf/Documentation/perf-report.txt index 383a98d992ed..ddde2b54af57 100644 --- a/tools/perf/Documentation/perf-report.txt +++ b/tools/perf/Documentation/perf-report.txt @@ -434,7 +434,8 @@ include::itrace.txt[] --inline:: If a callgraph address belongs to an inlined function, the inline stack - will be printed. Each entry is function name or file/line. + will be printed. Each entry is function name or file/line. Enabled by + default, disable with --no-inline. 
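The new 'metric'/'metricgroup' listings, the weak-group 'W' modifier and the --user-regs / --no-inline options documented above could be exercised roughly as follows; this is an illustrative sketch, and the event list and ./workload command are placeholders rather than anything taken from the patch:

    # list the new metric and metricgroup categories
    perf list metric
    perf list metricgroup
    # weak group: if the whole group cannot be scheduled, fall back to ungrouped events
    perf stat -e '{branches,branch-misses,cache-references,cache-misses}:W' -a sleep 1
    # sample user-space registers (same argument syntax as --intr-regs)
    perf record --user-regs=ax,bx -- ./workload
    # inline stacks are now printed by default; suppress them explicitly
    perf report --no-inline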
include::callchain-overhead-calculation.txt[] diff --git a/tools/perf/Documentation/perf-sched.txt b/tools/perf/Documentation/perf-sched.txt index a092a2499e8f..55b67338548e 100644 --- a/tools/perf/Documentation/perf-sched.txt +++ b/tools/perf/Documentation/perf-sched.txt @@ -106,6 +106,14 @@ OPTIONS for 'perf sched timehist' --max-stack:: Maximum number of functions to display in backtrace, default 5. +-p=:: +--pid=:: + Only show events for given process ID (comma separated list). + +-t=:: +--tid=:: + Only show events for given thread ID (comma separated list). + -s:: --summary:: Show only a summary of scheduling by thread with min, max, and average diff --git a/tools/perf/Documentation/perf-script.txt b/tools/perf/Documentation/perf-script.txt index 18dfcfa38454..2811fcf684cb 100644 --- a/tools/perf/Documentation/perf-script.txt +++ b/tools/perf/Documentation/perf-script.txt @@ -116,8 +116,8 @@ OPTIONS --fields:: Comma separated list of fields to print. Options are: comm, tid, pid, time, cpu, event, trace, ip, sym, dso, addr, symoff, - srcline, period, iregs, brstack, brstacksym, flags, bpf-output, brstackinsn, brstackoff, - callindent, insn, insnlen, synth, phys_addr. + srcline, period, iregs, uregs, brstack, brstacksym, flags, bpf-output, brstackinsn, + brstackoff, callindent, insn, insnlen, synth, phys_addr. Field list can be prepended with the type, trace, sw or hw, to indicate to which event type the field list applies. e.g., -F sw:comm,tid,time,ip,sym and -F trace:time,cpu,trace @@ -325,9 +325,14 @@ include::itrace.txt[] Set the maximum number of program blocks to print with brstackasm for each sample. +--per-event-dump:: + Create per event files with a "perf.data.EVENT.dump" name instead of + printing to stdout, useful, for instance, for generating flamegraphs. + --inline:: If a callgraph address belongs to an inlined function, the inline stack - will be printed. Each entry has function name and file/line. + will be printed. Each entry has function name and file/line. Enabled by + default, disable with --no-inline. SEE ALSO -------- diff --git a/tools/perf/Documentation/perf-stat.txt b/tools/perf/Documentation/perf-stat.txt index c37d61682dfb..823fce7674bb 100644 --- a/tools/perf/Documentation/perf-stat.txt +++ b/tools/perf/Documentation/perf-stat.txt @@ -199,6 +199,13 @@ Aggregate counts per processor socket for system-wide mode measurements. --per-core:: Aggregate counts per physical processor for system-wide mode measurements. +-M:: +--metrics:: +Print metrics or metricgroups specified in a comma separated list. +For a group all metrics from the group are added. +The events from the metrics are automatically measured. +See perf list output for the possble metrics and metricgroups. + -A:: --no-aggr:: Do not aggregate counts across all monitored CPUs. diff --git a/tools/perf/Documentation/perf-top.txt b/tools/perf/Documentation/perf-top.txt index d864ea6fd367..4353262bc462 100644 --- a/tools/perf/Documentation/perf-top.txt +++ b/tools/perf/Documentation/perf-top.txt @@ -240,6 +240,9 @@ Default is to monitor all CPUS. --force:: Don't do ownership validation. +--num-thread-synthesize:: + The number of threads to run when synthesizing events for existing processes. + By default, the number of threads equals to the number of online CPUs. 
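Likewise, the per-event dump, metrics, timehist filtering and synthesis-thread options added above might be used as sketched below; the metric group name and the pid list are placeholder values, with valid names coming from 'perf list metricgroup':

    # write one perf.data.EVENT.dump file per event instead of printing to stdout
    perf script --per-event-dump
    # compute a metric group from the vendor JSON metrics
    perf stat -M TopDownL1 -a sleep 1
    # limit the scheduling timeline to specific tasks
    perf sched timehist -p 2134,2135
    # synthesize PERF_RECORD_ events for existing processes with 8 threads
    perf top --num-thread-synthesize 8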
INTERACTIVE PROMPTING KEYS -------------------------- diff --git a/tools/perf/Makefile.perf b/tools/perf/Makefile.perf index 91ef44bfaf3e..68cf1360a3f3 100644 --- a/tools/perf/Makefile.perf +++ b/tools/perf/Makefile.perf @@ -173,7 +173,7 @@ AWK = awk # non-config cases config := 1 -NON_CONFIG_TARGETS := clean TAGS tags cscope help install-doc install-man install-html install-info install-pdf doc man html info pdf +NON_CONFIG_TARGETS := clean python-clean TAGS tags cscope help install-doc install-man install-html install-info install-pdf doc man html info pdf ifdef MAKECMDGOALS ifeq ($(filter-out $(NON_CONFIG_TARGETS),$(MAKECMDGOALS)),) @@ -420,6 +420,13 @@ sndrv_pcm_ioctl_tbl := $(srctree)/tools/perf/trace/beauty/sndrv_pcm_ioctl.sh $(sndrv_pcm_ioctl_array): $(sndrv_pcm_hdr_dir)/asound.h $(sndrv_pcm_ioctl_tbl) $(Q)$(SHELL) '$(sndrv_pcm_ioctl_tbl)' $(sndrv_pcm_hdr_dir) > $@ +kcmp_type_array := $(beauty_outdir)/kcmp_type_array.c +kcmp_hdr_dir := $(srctree)/tools/include/uapi/linux/ +kcmp_type_tbl := $(srctree)/tools/perf/trace/beauty/kcmp_type.sh + +$(kcmp_type_array): $(kcmp_hdr_dir)/kcmp.h $(kcmp_type_tbl) + $(Q)$(SHELL) '$(kcmp_type_tbl)' $(kcmp_hdr_dir) > $@ + kvm_ioctl_array := $(beauty_ioctl_outdir)/kvm_ioctl_array.c kvm_hdr_dir := $(srctree)/tools/include/uapi/linux kvm_ioctl_tbl := $(srctree)/tools/perf/trace/beauty/kvm_ioctl.sh @@ -441,6 +448,20 @@ perf_ioctl_tbl := $(srctree)/tools/perf/trace/beauty/perf_ioctl.sh $(perf_ioctl_array): $(perf_hdr_dir)/perf_event.h $(perf_ioctl_tbl) $(Q)$(SHELL) '$(perf_ioctl_tbl)' $(perf_hdr_dir) > $@ +madvise_behavior_array := $(beauty_outdir)/madvise_behavior_array.c +madvise_hdr_dir := $(srctree)/tools/include/uapi/asm-generic/ +madvise_behavior_tbl := $(srctree)/tools/perf/trace/beauty/madvise_behavior.sh + +$(madvise_behavior_array): $(madvise_hdr_dir)/mman-common.h $(madvise_behavior_tbl) + $(Q)$(SHELL) '$(madvise_behavior_tbl)' $(madvise_hdr_dir) > $@ + +prctl_option_array := $(beauty_outdir)/prctl_option_array.c +prctl_hdr_dir := $(srctree)/tools/include/uapi/linux/ +prctl_option_tbl := $(srctree)/tools/perf/trace/beauty/prctl_option.sh + +$(prctl_option_array): $(prctl_hdr_dir)/prctl.h $(prctl_option_tbl) + $(Q)$(SHELL) '$(prctl_option_tbl)' $(prctl_hdr_dir) > $@ + all: shell_compatibility_test $(ALL_PROGRAMS) $(LANG_BINDINGS) $(OTHER_PROGRAMS) $(OUTPUT)python/perf.so: $(PYTHON_EXT_SRCS) $(PYTHON_EXT_DEPS) $(LIBTRACEEVENT_DYNAMIC_LIST) @@ -539,9 +560,12 @@ prepare: $(OUTPUT)PERF-VERSION-FILE $(OUTPUT)common-cmds.h archheaders $(drm_ioc $(pkey_alloc_access_rights_array) \ $(sndrv_pcm_ioctl_array) \ $(sndrv_ctl_ioctl_array) \ + $(kcmp_type_array) \ $(kvm_ioctl_array) \ $(vhost_virtio_ioctl_array) \ - $(perf_ioctl_array) + $(madvise_behavior_array) \ + $(perf_ioctl_array) \ + $(prctl_option_array) $(OUTPUT)%.o: %.c prepare FORCE $(Q)$(MAKE) -f $(srctree)/tools/build/Makefile.build dir=$(build-dir) $@ @@ -802,7 +826,10 @@ config-clean: $(call QUIET_CLEAN, config) $(Q)$(MAKE) -C $(srctree)/tools/build/feature/ $(if $(OUTPUT),OUTPUT=$(OUTPUT)feature/,) clean >/dev/null -clean:: $(LIBTRACEEVENT)-clean $(LIBAPI)-clean $(LIBBPF)-clean $(LIBSUBCMD)-clean config-clean fixdep-clean +python-clean: + $(python-clean) + +clean:: $(LIBTRACEEVENT)-clean $(LIBAPI)-clean $(LIBBPF)-clean $(LIBSUBCMD)-clean config-clean fixdep-clean python-clean $(call QUIET_CLEAN, core-objs) $(RM) $(LIB_FILE) $(OUTPUT)perf-archive $(OUTPUT)perf-with-kcore $(LANG_BINDINGS) $(Q)find $(if $(OUTPUT),$(OUTPUT),.) 
-name '*.o' -delete -o -name '\.*.cmd' -delete -o -name '\.*.d' -delete $(Q)$(RM) $(OUTPUT).config-detected @@ -811,15 +838,17 @@ clean:: $(LIBTRACEEVENT)-clean $(LIBAPI)-clean $(LIBBPF)-clean $(LIBSUBCMD)-clea $(OUTPUT)util/intel-pt-decoder/inat-tables.c \ $(OUTPUT)tests/llvm-src-{base,kbuild,prologue,relocation}.c \ $(OUTPUT)pmu-events/pmu-events.c \ + $(OUTPUT)$(madvise_behavior_array) \ $(OUTPUT)$(drm_ioctl_array) \ $(OUTPUT)$(pkey_alloc_access_rights_array) \ $(OUTPUT)$(sndrv_ctl_ioctl_array) \ $(OUTPUT)$(sndrv_pcm_ioctl_array) \ $(OUTPUT)$(kvm_ioctl_array) \ + $(OUTPUT)$(kcmp_type_array) \ $(OUTPUT)$(vhost_virtio_ioctl_array) \ - $(OUTPUT)$(perf_ioctl_array) + $(OUTPUT)$(perf_ioctl_array) \ + $(OUTPUT)$(prctl_option_array) $(QUIET_SUBDIR0)Documentation $(QUIET_SUBDIR1) clean - $(python-clean) # # To provide FEATURE-DUMP into $(FEATURE_DUMP_COPY) diff --git a/tools/perf/arch/arm/annotate/instructions.c b/tools/perf/arch/arm/annotate/instructions.c index b39b16395aac..f64516d5b23e 100644 --- a/tools/perf/arch/arm/annotate/instructions.c +++ b/tools/perf/arch/arm/annotate/instructions.c @@ -1,4 +1,5 @@ // SPDX-License-Identifier: GPL-2.0 +#include <linux/compiler.h> #include <sys/types.h> #include <regex.h> @@ -24,7 +25,7 @@ static struct ins_ops *arm__associate_instruction_ops(struct arch *arch, const c return ops; } -static int arm__annotate_init(struct arch *arch) +static int arm__annotate_init(struct arch *arch, char *cpuid __maybe_unused) { struct arm_annotate *arm; int err; diff --git a/tools/perf/arch/arm64/annotate/instructions.c b/tools/perf/arch/arm64/annotate/instructions.c index 9a3e0523e2c9..6688977e4ac7 100644 --- a/tools/perf/arch/arm64/annotate/instructions.c +++ b/tools/perf/arch/arm64/annotate/instructions.c @@ -1,4 +1,5 @@ // SPDX-License-Identifier: GPL-2.0 +#include <linux/compiler.h> #include <sys/types.h> #include <regex.h> @@ -26,7 +27,7 @@ static struct ins_ops *arm64__associate_instruction_ops(struct arch *arch, const return ops; } -static int arm64__annotate_init(struct arch *arch) +static int arm64__annotate_init(struct arch *arch, char *cpuid __maybe_unused) { struct arm64_annotate *arm; int err; diff --git a/tools/perf/arch/powerpc/annotate/instructions.c b/tools/perf/arch/powerpc/annotate/instructions.c index b7bc04980fe8..a3f423c27cae 100644 --- a/tools/perf/arch/powerpc/annotate/instructions.c +++ b/tools/perf/arch/powerpc/annotate/instructions.c @@ -1,4 +1,6 @@ // SPDX-License-Identifier: GPL-2.0 +#include <linux/compiler.h> + static struct ins_ops *powerpc__associate_instruction_ops(struct arch *arch, const char *name) { int i; @@ -47,7 +49,7 @@ static struct ins_ops *powerpc__associate_instruction_ops(struct arch *arch, con return ops; } -static int powerpc__annotate_init(struct arch *arch) +static int powerpc__annotate_init(struct arch *arch, char *cpuid __maybe_unused) { if (!arch->initialized) { arch->initialized = true; diff --git a/tools/perf/arch/s390/annotate/instructions.c b/tools/perf/arch/s390/annotate/instructions.c index c9a81673e8aa..e0e466c650df 100644 --- a/tools/perf/arch/s390/annotate/instructions.c +++ b/tools/perf/arch/s390/annotate/instructions.c @@ -1,4 +1,6 @@ // SPDX-License-Identifier: GPL-2.0 +#include <linux/compiler.h> + static struct ins_ops *s390__associate_ins_ops(struct arch *arch, const char *name) { struct ins_ops *ops = NULL; @@ -20,7 +22,7 @@ static struct ins_ops *s390__associate_ins_ops(struct arch *arch, const char *na return ops; } -static int s390__annotate_init(struct arch *arch) +static int 
s390__annotate_init(struct arch *arch, char *cpuid __maybe_unused) { if (!arch->initialized) { arch->initialized = true; diff --git a/tools/perf/arch/x86/annotate/instructions.c b/tools/perf/arch/x86/annotate/instructions.c index 4adfb4ce2864..5bd1ba8c0282 100644 --- a/tools/perf/arch/x86/annotate/instructions.c +++ b/tools/perf/arch/x86/annotate/instructions.c @@ -123,3 +123,17 @@ static int x86__cpuid_parse(struct arch *arch, char *cpuid) return -1; } + +static int x86__annotate_init(struct arch *arch, char *cpuid) +{ + int err = 0; + + if (arch->initialized) + return 0; + + if (cpuid) + err = x86__cpuid_parse(arch, cpuid); + + arch->initialized = true; + return err; +} diff --git a/tools/perf/arch/x86/include/arch-tests.h b/tools/perf/arch/x86/include/arch-tests.h index 9834fdc7c59e..c1bd979b957b 100644 --- a/tools/perf/arch/x86/include/arch-tests.h +++ b/tools/perf/arch/x86/include/arch-tests.h @@ -9,7 +9,6 @@ struct test; int test__rdpmc(struct test *test __maybe_unused, int subtest); int test__perf_time_to_tsc(struct test *test __maybe_unused, int subtest); int test__insn_x86(struct test *test __maybe_unused, int subtest); -int test__intel_cqm_count_nmi_context(struct test *test __maybe_unused, int subtest); #ifdef HAVE_DWARF_UNWIND_SUPPORT struct thread; diff --git a/tools/perf/arch/x86/tests/Build b/tools/perf/arch/x86/tests/Build index cbb7e978166b..8e2c5a38c3b9 100644 --- a/tools/perf/arch/x86/tests/Build +++ b/tools/perf/arch/x86/tests/Build @@ -5,4 +5,3 @@ libperf-y += arch-tests.o libperf-y += rdpmc.o libperf-y += perf-time-to-tsc.o libperf-$(CONFIG_AUXTRACE) += insn-x86.o -libperf-y += intel-cqm.o diff --git a/tools/perf/arch/x86/tests/arch-tests.c b/tools/perf/arch/x86/tests/arch-tests.c index 34a078136a47..cc1802ff5410 100644 --- a/tools/perf/arch/x86/tests/arch-tests.c +++ b/tools/perf/arch/x86/tests/arch-tests.c @@ -25,10 +25,6 @@ struct test arch_tests[] = { }, #endif { - .desc = "Intel cqm nmi context read", - .func = test__intel_cqm_count_nmi_context, - }, - { .func = NULL, }, diff --git a/tools/perf/builtin-annotate.c b/tools/perf/builtin-annotate.c index 3d32aa45016d..f15731a3d438 100644 --- a/tools/perf/builtin-annotate.c +++ b/tools/perf/builtin-annotate.c @@ -357,7 +357,7 @@ static int __cmd_annotate(struct perf_annotate *ann) } if (total_nr_samples == 0) { - ui__error("The %s file has no samples!\n", session->file->path); + ui__error("The %s file has no samples!\n", session->data->file.path); goto out; } @@ -401,7 +401,7 @@ int cmd_annotate(int argc, const char **argv) .ordering_requires_timestamps = true, }, }; - struct perf_data_file file = { + struct perf_data data = { .mode = PERF_DATA_MODE_READ, }; struct option options[] = { @@ -411,7 +411,7 @@ int cmd_annotate(int argc, const char **argv) "only consider symbols in these dsos"), OPT_STRING('s', "symbol", &annotate.sym_hist_filter, "symbol", "symbol to annotate"), - OPT_BOOLEAN('f', "force", &file.force, "don't complain, do it"), + OPT_BOOLEAN('f', "force", &data.force, "don't complain, do it"), OPT_INCR('v', "verbose", &verbose, "be more verbose (show symbol address, etc)"), OPT_BOOLEAN('q', "quiet", &quiet, "do now show any message"), @@ -483,9 +483,9 @@ int cmd_annotate(int argc, const char **argv) if (quiet) perf_quiet_option(); - file.path = input_name; + data.file.path = input_name; - annotate.session = perf_session__new(&file, false, &annotate.tool); + annotate.session = perf_session__new(&data, false, &annotate.tool); if (annotate.session == NULL) return -1; diff --git 
a/tools/perf/builtin-buildid-cache.c b/tools/perf/builtin-buildid-cache.c index 5f53a7ad5ef3..3d354ba6e9c5 100644 --- a/tools/perf/builtin-buildid-cache.c +++ b/tools/perf/builtin-buildid-cache.c @@ -312,7 +312,7 @@ int cmd_buildid_cache(int argc, const char **argv) *kcore_filename = NULL; char sbuf[STRERR_BUFSIZE]; - struct perf_data_file file = { + struct perf_data data = { .mode = PERF_DATA_MODE_READ, }; struct perf_session *session = NULL; @@ -353,10 +353,10 @@ int cmd_buildid_cache(int argc, const char **argv) nsi = nsinfo__new(ns_id); if (missing_filename) { - file.path = missing_filename; - file.force = force; + data.file.path = missing_filename; + data.force = force; - session = perf_session__new(&file, false, NULL); + session = perf_session__new(&data, false, NULL); if (session == NULL) return -1; } diff --git a/tools/perf/builtin-buildid-list.c b/tools/perf/builtin-buildid-list.c index ec2f327cd79d..78abbe8d9d5f 100644 --- a/tools/perf/builtin-buildid-list.c +++ b/tools/perf/builtin-buildid-list.c @@ -51,10 +51,12 @@ static bool dso__skip_buildid(struct dso *dso, int with_hits) static int perf_session__list_build_ids(bool force, bool with_hits) { struct perf_session *session; - struct perf_data_file file = { - .path = input_name, - .mode = PERF_DATA_MODE_READ, - .force = force, + struct perf_data data = { + .file = { + .path = input_name, + }, + .mode = PERF_DATA_MODE_READ, + .force = force, }; symbol__elf_init(); @@ -64,7 +66,7 @@ static int perf_session__list_build_ids(bool force, bool with_hits) if (filename__fprintf_build_id(input_name, stdout) > 0) goto out; - session = perf_session__new(&file, false, &build_id__mark_dso_hit_ops); + session = perf_session__new(&data, false, &build_id__mark_dso_hit_ops); if (session == NULL) return -1; @@ -72,7 +74,7 @@ static int perf_session__list_build_ids(bool force, bool with_hits) * We take all buildids when the file contains AUX area tracing data * because we do not decode the trace because it would take too long. */ - if (!perf_data_file__is_pipe(&file) && + if (!perf_data__is_pipe(&data) && perf_header__has_feat(&session->header, HEADER_AUXTRACE)) with_hits = false; @@ -80,7 +82,7 @@ static int perf_session__list_build_ids(bool force, bool with_hits) * in pipe-mode, the only way to get the buildids is to parse * the record stream. 
Buildids are stored as RECORD_HEADER_BUILD_ID */ - if (with_hits || perf_data_file__is_pipe(&file)) + if (with_hits || perf_data__is_pipe(&data)) perf_session__process_events(session); perf_session__fprintf_dsos_buildid(session, stdout, dso__skip_buildid, with_hits); diff --git a/tools/perf/builtin-c2c.c b/tools/perf/builtin-c2c.c index fd32ad08c6d4..17855c4626a0 100644 --- a/tools/perf/builtin-c2c.c +++ b/tools/perf/builtin-c2c.c @@ -2524,7 +2524,7 @@ static int perf_c2c__report(int argc, const char **argv) { struct perf_session *session; struct ui_progress prog; - struct perf_data_file file = { + struct perf_data data = { .mode = PERF_DATA_MODE_READ, }; char callchain_default_opt[] = CALLCHAIN_DEFAULT_OPT; @@ -2573,8 +2573,8 @@ static int perf_c2c__report(int argc, const char **argv) if (!input_name || !strlen(input_name)) input_name = "perf.data"; - file.path = input_name; - file.force = symbol_conf.force; + data.file.path = input_name; + data.force = symbol_conf.force; err = setup_display(display); if (err) @@ -2592,7 +2592,7 @@ static int perf_c2c__report(int argc, const char **argv) goto out; } - session = perf_session__new(&file, 0, &c2c.tool); + session = perf_session__new(&data, 0, &c2c.tool); if (session == NULL) { pr_debug("No memory for session\n"); goto out; @@ -2612,7 +2612,7 @@ static int perf_c2c__report(int argc, const char **argv) goto out_session; /* No pipe support at the moment. */ - if (perf_data_file__is_pipe(session->file)) { + if (perf_data__is_pipe(session->data)) { pr_debug("No pipe support at the moment.\n"); goto out_session; } @@ -2733,6 +2733,7 @@ static int perf_c2c__record(int argc, const char **argv) if (!perf_mem_events[j].supported) { pr_err("failed: event '%s' not supported\n", perf_mem_events[j].name); + free(rec_argv); return -1; } diff --git a/tools/perf/builtin-config.c b/tools/perf/builtin-config.c index abfa49eaf7fd..514f70f95b57 100644 --- a/tools/perf/builtin-config.c +++ b/tools/perf/builtin-config.c @@ -35,8 +35,7 @@ static struct option config_options[] = { OPT_END() }; -static int set_config(struct perf_config_set *set, const char *file_name, - const char *var, const char *value) +static int set_config(struct perf_config_set *set, const char *file_name) { struct perf_config_section *section = NULL; struct perf_config_item *item = NULL; @@ -50,7 +49,6 @@ static int set_config(struct perf_config_set *set, const char *file_name, if (!fp) return -1; - perf_config_set__collect(set, file_name, var, value); fprintf(fp, "%s\n", first_line); /* overwrite configvariables */ @@ -162,6 +160,7 @@ int cmd_config(int argc, const char **argv) struct perf_config_set *set; char *user_config = mkpath("%s/.perfconfig", getenv("HOME")); const char *config_filename; + bool changed = false; argc = parse_options(argc, argv, config_options, config_usage, PARSE_OPT_STOP_AT_NON_OPTION); @@ -232,15 +231,26 @@ int cmd_config(int argc, const char **argv) goto out_err; } } else { - if (set_config(set, config_filename, var, value) < 0) { - pr_err("Failed to set '%s=%s' on %s\n", - var, value, config_filename); + if (perf_config_set__collect(set, config_filename, + var, value) < 0) { + pr_err("Failed to add '%s=%s'\n", + var, value); free(arg); goto out_err; } + changed = true; } free(arg); } + + if (!changed) + break; + + if (set_config(set, config_filename) < 0) { + pr_err("Failed to set the configs on %s\n", + config_filename); + goto out_err; + } } ret = 0; diff --git a/tools/perf/builtin-diff.c b/tools/perf/builtin-diff.c index 56223bdfa205..d660cb7b222b 100644 --- 
a/tools/perf/builtin-diff.c +++ b/tools/perf/builtin-diff.c @@ -48,7 +48,7 @@ struct diff_hpp_fmt { struct data__file { struct perf_session *session; - struct perf_data_file file; + struct perf_data data; int idx; struct hists *hists; struct diff_hpp_fmt fmt[PERF_HPP_DIFF__MAX_INDEX]; @@ -708,7 +708,7 @@ static void data__fprintf(void) data__for_each_file(i, d) fprintf(stdout, "# [%d] %s %s\n", - d->idx, d->file.path, + d->idx, d->data.file.path, !d->idx ? "(Baseline)" : ""); fprintf(stdout, "#\n"); @@ -777,16 +777,16 @@ static int __cmd_diff(void) int ret = -EINVAL, i; data__for_each_file(i, d) { - d->session = perf_session__new(&d->file, false, &tool); + d->session = perf_session__new(&d->data, false, &tool); if (!d->session) { - pr_err("Failed to open %s\n", d->file.path); + pr_err("Failed to open %s\n", d->data.file.path); ret = -1; goto out_delete; } ret = perf_session__process_events(d->session); if (ret) { - pr_err("Failed to process %s\n", d->file.path); + pr_err("Failed to process %s\n", d->data.file.path); goto out_delete; } @@ -1287,11 +1287,11 @@ static int data_init(int argc, const char **argv) return -ENOMEM; data__for_each_file(i, d) { - struct perf_data_file *file = &d->file; + struct perf_data *data = &d->data; - file->path = use_default ? defaults[i] : argv[i]; - file->mode = PERF_DATA_MODE_READ, - file->force = force, + data->file.path = use_default ? defaults[i] : argv[i]; + data->mode = PERF_DATA_MODE_READ, + data->force = force, d->idx = i; } diff --git a/tools/perf/builtin-evlist.c b/tools/perf/builtin-evlist.c index cdd145613f60..e06e822ce634 100644 --- a/tools/perf/builtin-evlist.c +++ b/tools/perf/builtin-evlist.c @@ -22,14 +22,16 @@ static int __cmd_evlist(const char *file_name, struct perf_attr_details *details { struct perf_session *session; struct perf_evsel *pos; - struct perf_data_file file = { - .path = file_name, - .mode = PERF_DATA_MODE_READ, - .force = details->force, + struct perf_data data = { + .file = { + .path = file_name, + }, + .mode = PERF_DATA_MODE_READ, + .force = details->force, }; bool has_tracepoint = false; - session = perf_session__new(&file, 0, NULL); + session = perf_session__new(&data, 0, NULL); if (session == NULL) return -1; diff --git a/tools/perf/builtin-inject.c b/tools/perf/builtin-inject.c index 3e0e73b0dc67..16a28547ca86 100644 --- a/tools/perf/builtin-inject.c +++ b/tools/perf/builtin-inject.c @@ -36,7 +36,7 @@ struct perf_inject { bool strip; bool jit_mode; const char *input_name; - struct perf_data_file output; + struct perf_data output; u64 bytes_written; u64 aux_id; struct list_head samples; @@ -53,7 +53,7 @@ static int output_bytes(struct perf_inject *inject, void *buf, size_t sz) { ssize_t size; - size = perf_data_file__write(&inject->output, buf, sz); + size = perf_data__write(&inject->output, buf, sz); if (size < 0) return -errno; @@ -146,7 +146,7 @@ static s64 perf_event__repipe_auxtrace(struct perf_tool *tool, if (!inject->output.is_pipe) { off_t offset; - offset = lseek(inject->output.fd, 0, SEEK_CUR); + offset = lseek(inject->output.file.fd, 0, SEEK_CUR); if (offset == -1) return -errno; ret = auxtrace_index__auxtrace_event(&session->auxtrace_index, @@ -155,11 +155,11 @@ static s64 perf_event__repipe_auxtrace(struct perf_tool *tool, return ret; } - if (perf_data_file__is_pipe(session->file) || !session->one_mmap) { + if (perf_data__is_pipe(session->data) || !session->one_mmap) { ret = output_bytes(inject, event, event->header.size); if (ret < 0) return ret; - ret = copy_bytes(inject, 
perf_data_file__fd(session->file), + ret = copy_bytes(inject, perf_data__fd(session->data), event->auxtrace.size); } else { ret = output_bytes(inject, event, @@ -638,8 +638,8 @@ static int __cmd_inject(struct perf_inject *inject) { int ret = -EINVAL; struct perf_session *session = inject->session; - struct perf_data_file *file_out = &inject->output; - int fd = perf_data_file__fd(file_out); + struct perf_data *data_out = &inject->output; + int fd = perf_data__fd(data_out); u64 output_data_offset; signal(SIGINT, sig_handler); @@ -694,14 +694,14 @@ static int __cmd_inject(struct perf_inject *inject) if (!inject->itrace_synth_opts.set) auxtrace_index__free(&session->auxtrace_index); - if (!file_out->is_pipe) + if (!data_out->is_pipe) lseek(fd, output_data_offset, SEEK_SET); ret = perf_session__process_events(session); if (ret) return ret; - if (!file_out->is_pipe) { + if (!data_out->is_pipe) { if (inject->build_ids) perf_header__set_feat(&session->header, HEADER_BUILD_ID); @@ -776,11 +776,13 @@ int cmd_inject(int argc, const char **argv) .input_name = "-", .samples = LIST_HEAD_INIT(inject.samples), .output = { - .path = "-", - .mode = PERF_DATA_MODE_WRITE, + .file = { + .path = "-", + }, + .mode = PERF_DATA_MODE_WRITE, }, }; - struct perf_data_file file = { + struct perf_data data = { .mode = PERF_DATA_MODE_READ, }; int ret; @@ -790,7 +792,7 @@ int cmd_inject(int argc, const char **argv) "Inject build-ids into the output stream"), OPT_STRING('i', "input", &inject.input_name, "file", "input file name"), - OPT_STRING('o', "output", &inject.output.path, "file", + OPT_STRING('o', "output", &inject.output.file.path, "file", "output file name"), OPT_BOOLEAN('s', "sched-stat", &inject.sched_stat, "Merge sched-stat and sched-switch for getting events " @@ -802,7 +804,7 @@ int cmd_inject(int argc, const char **argv) "be more verbose (show build ids, etc)"), OPT_STRING(0, "kallsyms", &symbol_conf.kallsyms_name, "file", "kallsyms pathname"), - OPT_BOOLEAN('f', "force", &file.force, "don't complain, do it"), + OPT_BOOLEAN('f', "force", &data.force, "don't complain, do it"), OPT_CALLBACK_OPTARG(0, "itrace", &inject.itrace_synth_opts, NULL, "opts", "Instruction Tracing options", itrace_parse_synth_opts), @@ -830,15 +832,15 @@ int cmd_inject(int argc, const char **argv) return -1; } - if (perf_data_file__open(&inject.output)) { + if (perf_data__open(&inject.output)) { perror("failed to create output file"); return -1; } inject.tool.ordered_events = inject.sched_stat; - file.path = inject.input_name; - inject.session = perf_session__new(&file, true, &inject.tool); + data.file.path = inject.input_name; + inject.session = perf_session__new(&data, true, &inject.tool); if (inject.session == NULL) return -1; diff --git a/tools/perf/builtin-kmem.c b/tools/perf/builtin-kmem.c index 35d4b9c9a9e8..557d391f564a 100644 --- a/tools/perf/builtin-kmem.c +++ b/tools/perf/builtin-kmem.c @@ -1894,7 +1894,7 @@ int cmd_kmem(int argc, const char **argv) { const char * const default_slab_sort = "frag,hit,bytes"; const char * const default_page_sort = "bytes,hit"; - struct perf_data_file file = { + struct perf_data data = { .mode = PERF_DATA_MODE_READ, }; const struct option kmem_options[] = { @@ -1910,7 +1910,7 @@ int cmd_kmem(int argc, const char **argv) "page, order, migtype, gfp", parse_sort_opt), OPT_CALLBACK('l', "line", NULL, "num", "show n lines", parse_line_opt), OPT_BOOLEAN(0, "raw-ip", &raw_ip, "show raw ip instead of symbol"), - OPT_BOOLEAN('f', "force", &file.force, "don't complain, do it"), + OPT_BOOLEAN('f', 
"force", &data.force, "don't complain, do it"), OPT_CALLBACK_NOOPT(0, "slab", NULL, NULL, "Analyze slab allocator", parse_slab_opt), OPT_CALLBACK_NOOPT(0, "page", NULL, NULL, "Analyze page allocator", @@ -1950,9 +1950,9 @@ int cmd_kmem(int argc, const char **argv) return __cmd_record(argc, argv); } - file.path = input_name; + data.file.path = input_name; - kmem_session = session = perf_session__new(&file, false, &perf_kmem); + kmem_session = session = perf_session__new(&data, false, &perf_kmem); if (session == NULL) return -1; @@ -1984,7 +1984,8 @@ int cmd_kmem(int argc, const char **argv) if (perf_time__parse_str(&ptime, time_str) != 0) { pr_err("Invalid time string\n"); - return -EINVAL; + ret = -EINVAL; + goto out_delete; } if (!strcmp(argv[0], "stat")) { diff --git a/tools/perf/builtin-kvm.c b/tools/perf/builtin-kvm.c index 5fb40368d5d1..0c36f2ac6a0e 100644 --- a/tools/perf/builtin-kvm.c +++ b/tools/perf/builtin-kvm.c @@ -35,7 +35,6 @@ #include <termios.h> #include <semaphore.h> #include <signal.h> -#include <pthread.h> #include <math.h> static const char *get_filename_for_perf_kvm(void) @@ -1069,10 +1068,12 @@ static int read_events(struct perf_kvm_stat *kvm) .namespaces = perf_event__process_namespaces, .ordered_events = true, }; - struct perf_data_file file = { - .path = kvm->file_name, - .mode = PERF_DATA_MODE_READ, - .force = kvm->force, + struct perf_data file = { + .file = { + .path = kvm->file_name, + }, + .mode = PERF_DATA_MODE_READ, + .force = kvm->force, }; kvm->tool = eops; @@ -1360,7 +1361,7 @@ static int kvm_events_live(struct perf_kvm_stat *kvm, "perf kvm stat live [<options>]", NULL }; - struct perf_data_file file = { + struct perf_data data = { .mode = PERF_DATA_MODE_WRITE, }; @@ -1434,7 +1435,7 @@ static int kvm_events_live(struct perf_kvm_stat *kvm, /* * perf session */ - kvm->session = perf_session__new(&file, false, &kvm->tool); + kvm->session = perf_session__new(&data, false, &kvm->tool); if (kvm->session == NULL) { err = -1; goto out; @@ -1443,7 +1444,8 @@ static int kvm_events_live(struct perf_kvm_stat *kvm, perf_session__set_id_hdr_size(kvm->session); ordered_events__set_copy_on_queue(&kvm->session->ordered_events, true); machine__synthesize_threads(&kvm->session->machines.host, &kvm->opts.target, - kvm->evlist->threads, false, kvm->opts.proc_map_timeout); + kvm->evlist->threads, false, + kvm->opts.proc_map_timeout, 1); err = kvm_live_open_events(kvm); if (err) goto out; diff --git a/tools/perf/builtin-list.c b/tools/perf/builtin-list.c index eeedbe433776..ead221e49f00 100644 --- a/tools/perf/builtin-list.c +++ b/tools/perf/builtin-list.c @@ -16,6 +16,7 @@ #include "util/cache.h" #include "util/pmu.h" #include "util/debug.h" +#include "util/metricgroup.h" #include <subcmd/parse-options.h> static bool desc_flag = true; @@ -80,6 +81,10 @@ int cmd_list(int argc, const char **argv) long_desc_flag, details_flag); else if (strcmp(argv[i], "sdt") == 0) print_sdt_events(NULL, NULL, raw_dump); + else if (strcmp(argv[i], "metric") == 0) + metricgroup__print(true, false, NULL, raw_dump); + else if (strcmp(argv[i], "metricgroup") == 0) + metricgroup__print(false, true, NULL, raw_dump); else if ((sep = strchr(argv[i], ':')) != NULL) { int sep_idx; @@ -97,6 +102,7 @@ int cmd_list(int argc, const char **argv) s[sep_idx] = '\0'; print_tracepoint_events(s, s + sep_idx + 1, raw_dump); print_sdt_events(s, s + sep_idx + 1, raw_dump); + metricgroup__print(true, true, s, raw_dump); free(s); } else { if (asprintf(&s, "*%s*", argv[i]) < 0) { @@ -113,6 +119,7 @@ int cmd_list(int argc, 
const char **argv) details_flag); print_tracepoint_events(NULL, s, raw_dump); print_sdt_events(NULL, s, raw_dump); + metricgroup__print(true, true, NULL, raw_dump); free(s); } } diff --git a/tools/perf/builtin-lock.c b/tools/perf/builtin-lock.c index fe69cd6b89e1..6e0189df2b3b 100644 --- a/tools/perf/builtin-lock.c +++ b/tools/perf/builtin-lock.c @@ -865,13 +865,15 @@ static int __cmd_report(bool display_info) .namespaces = perf_event__process_namespaces, .ordered_events = true, }; - struct perf_data_file file = { - .path = input_name, - .mode = PERF_DATA_MODE_READ, - .force = force, + struct perf_data data = { + .file = { + .path = input_name, + }, + .mode = PERF_DATA_MODE_READ, + .force = force, }; - session = perf_session__new(&file, false, &eops); + session = perf_session__new(&data, false, &eops); if (!session) { pr_err("Initializing perf session failed\n"); return -1; diff --git a/tools/perf/builtin-mem.c b/tools/perf/builtin-mem.c index 4db960085273..506564651cda 100644 --- a/tools/perf/builtin-mem.c +++ b/tools/perf/builtin-mem.c @@ -113,6 +113,7 @@ static int __cmd_record(int argc, const char **argv, struct perf_mem *mem) if (!perf_mem_events[j].supported) { pr_err("failed: event '%s' not supported\n", perf_mem_events__name(j)); + free(rec_argv); return -1; } @@ -236,13 +237,15 @@ static int process_sample_event(struct perf_tool *tool, static int report_raw_events(struct perf_mem *mem) { - struct perf_data_file file = { - .path = input_name, - .mode = PERF_DATA_MODE_READ, - .force = mem->force, + struct perf_data data = { + .file = { + .path = input_name, + }, + .mode = PERF_DATA_MODE_READ, + .force = mem->force, }; int ret; - struct perf_session *session = perf_session__new(&file, false, + struct perf_session *session = perf_session__new(&data, false, &mem->tool); if (session == NULL) diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c index 0c95ffefb6cc..3d7f33e19df2 100644 --- a/tools/perf/builtin-record.c +++ b/tools/perf/builtin-record.c @@ -67,7 +67,7 @@ struct record { struct perf_tool tool; struct record_opts opts; u64 bytes_written; - struct perf_data_file file; + struct perf_data data; struct auxtrace_record *itr; struct perf_evlist *evlist; struct perf_session *session; @@ -108,7 +108,7 @@ static bool switch_output_time(struct record *rec) static int record__write(struct record *rec, void *bf, size_t size) { - if (perf_data_file__write(rec->session->file, bf, size) < 0) { + if (perf_data__write(rec->session->data, bf, size) < 0) { pr_err("failed to write perf data, error: %m\n"); return -1; } @@ -130,107 +130,12 @@ static int process_synthesized_event(struct perf_tool *tool, return record__write(rec, event, event->header.size); } -static int -backward_rb_find_range(void *buf, int mask, u64 head, u64 *start, u64 *end) +static int record__pushfn(void *to, void *bf, size_t size) { - struct perf_event_header *pheader; - u64 evt_head = head; - int size = mask + 1; - - pr_debug2("backward_rb_find_range: buf=%p, head=%"PRIx64"\n", buf, head); - pheader = (struct perf_event_header *)(buf + (head & mask)); - *start = head; - while (true) { - if (evt_head - head >= (unsigned int)size) { - pr_debug("Finished reading backward ring buffer: rewind\n"); - if (evt_head - head > (unsigned int)size) - evt_head -= pheader->size; - *end = evt_head; - return 0; - } - - pheader = (struct perf_event_header *)(buf + (evt_head & mask)); - - if (pheader->size == 0) { - pr_debug("Finished reading backward ring buffer: get start\n"); - *end = evt_head; - return 0; - } - - 
evt_head += pheader->size; - pr_debug3("move evt_head: %"PRIx64"\n", evt_head); - } - WARN_ONCE(1, "Shouldn't get here\n"); - return -1; -} - -static int -rb_find_range(void *data, int mask, u64 head, u64 old, - u64 *start, u64 *end, bool backward) -{ - if (!backward) { - *start = old; - *end = head; - return 0; - } - - return backward_rb_find_range(data, mask, head, start, end); -} - -static int -record__mmap_read(struct record *rec, struct perf_mmap *md, - bool overwrite, bool backward) -{ - u64 head = perf_mmap__read_head(md); - u64 old = md->prev; - u64 end = head, start = old; - unsigned char *data = md->base + page_size; - unsigned long size; - void *buf; - int rc = 0; - - if (rb_find_range(data, md->mask, head, - old, &start, &end, backward)) - return -1; - - if (start == end) - return 0; + struct record *rec = to; rec->samples++; - - size = end - start; - if (size > (unsigned long)(md->mask) + 1) { - WARN_ONCE(1, "failed to keep up with mmap data. (warn only once)\n"); - - md->prev = head; - perf_mmap__consume(md, overwrite || backward); - return 0; - } - - if ((start & md->mask) + size != (end & md->mask)) { - buf = &data[start & md->mask]; - size = md->mask + 1 - (start & md->mask); - start += size; - - if (record__write(rec, buf, size) < 0) { - rc = -1; - goto out; - } - } - - buf = &data[start & md->mask]; - size = end - start; - start += size; - - if (record__write(rec, buf, size) < 0) { - rc = -1; - goto out; - } - - md->prev = head; - perf_mmap__consume(md, overwrite || backward); -out: - return rc; + return record__write(rec, bf, size); } static volatile int done; @@ -269,13 +174,13 @@ static int record__process_auxtrace(struct perf_tool *tool, size_t len1, void *data2, size_t len2) { struct record *rec = container_of(tool, struct record, tool); - struct perf_data_file *file = &rec->file; + struct perf_data *data = &rec->data; size_t padding; u8 pad[8] = {0}; - if (!perf_data_file__is_pipe(file)) { + if (!perf_data__is_pipe(data)) { off_t file_offset; - int fd = perf_data_file__fd(file); + int fd = perf_data__fd(data); int err; file_offset = lseek(fd, 0, SEEK_CUR); @@ -494,10 +399,10 @@ static int process_sample_event(struct perf_tool *tool, static int process_buildids(struct record *rec) { - struct perf_data_file *file = &rec->file; + struct perf_data *data = &rec->data; struct perf_session *session = rec->session; - if (file->size == 0) + if (data->size == 0) return 0; /* @@ -577,8 +482,7 @@ static int record__mmap_read_evlist(struct record *rec, struct perf_evlist *evli struct auxtrace_mmap *mm = &maps[i].auxtrace_mmap; if (maps[i].base) { - if (record__mmap_read(rec, &maps[i], - evlist->overwrite, backward) != 0) { + if (perf_mmap__push(&maps[i], evlist->overwrite, backward, rec, record__pushfn) != 0) { rc = -1; goto out; } @@ -641,14 +545,14 @@ static void record__init_features(struct record *rec) static void record__finish_output(struct record *rec) { - struct perf_data_file *file = &rec->file; - int fd = perf_data_file__fd(file); + struct perf_data *data = &rec->data; + int fd = perf_data__fd(data); - if (file->is_pipe) + if (data->is_pipe) return; rec->session->header.data_size += rec->bytes_written; - file->size = lseek(perf_data_file__fd(file), 0, SEEK_CUR); + data->size = lseek(perf_data__fd(data), 0, SEEK_CUR); if (!rec->no_buildid) { process_buildids(rec); @@ -687,7 +591,7 @@ static int record__synthesize(struct record *rec, bool tail); static int record__switch_output(struct record *rec, bool at_exit) { - struct perf_data_file *file = &rec->file; + struct 
perf_data *data = &rec->data; int fd, err; /* Same Size: "2015122520103046"*/ @@ -705,7 +609,7 @@ record__switch_output(struct record *rec, bool at_exit) return -EINVAL; } - fd = perf_data_file__switch(file, timestamp, + fd = perf_data__switch(data, timestamp, rec->session->header.data_offset, at_exit); if (fd >= 0 && !at_exit) { @@ -715,7 +619,7 @@ record__switch_output(struct record *rec, bool at_exit) if (!quiet) fprintf(stderr, "[ perf record: Dump %s.%s ]\n", - file->path, timestamp); + data->file.path, timestamp); /* Output tracking events */ if (!at_exit) { @@ -790,16 +694,16 @@ static int record__synthesize(struct record *rec, bool tail) { struct perf_session *session = rec->session; struct machine *machine = &session->machines.host; - struct perf_data_file *file = &rec->file; + struct perf_data *data = &rec->data; struct record_opts *opts = &rec->opts; struct perf_tool *tool = &rec->tool; - int fd = perf_data_file__fd(file); + int fd = perf_data__fd(data); int err = 0; if (rec->opts.tail_synthesize != tail) return 0; - if (file->is_pipe) { + if (data->is_pipe) { err = perf_event__synthesize_features( tool, session, rec->evlist, process_synthesized_event); if (err < 0) { @@ -864,7 +768,7 @@ static int record__synthesize(struct record *rec, bool tail) err = __machine__synthesize_threads(machine, tool, &opts->target, rec->evlist->threads, process_synthesized_event, opts->sample_address, - opts->proc_map_timeout); + opts->proc_map_timeout, 1); out: return err; } @@ -878,7 +782,7 @@ static int __cmd_record(struct record *rec, int argc, const char **argv) struct machine *machine; struct perf_tool *tool = &rec->tool; struct record_opts *opts = &rec->opts; - struct perf_data_file *file = &rec->file; + struct perf_data *data = &rec->data; struct perf_session *session; bool disabled = false, draining = false; int fd; @@ -904,20 +808,20 @@ static int __cmd_record(struct record *rec, int argc, const char **argv) signal(SIGUSR2, SIG_IGN); } - session = perf_session__new(file, false, tool); + session = perf_session__new(data, false, tool); if (session == NULL) { pr_err("Perf session creation failed.\n"); return -1; } - fd = perf_data_file__fd(file); + fd = perf_data__fd(data); rec->session = session; record__init_features(rec); if (forks) { err = perf_evlist__prepare_workload(rec->evlist, &opts->target, - argv, file->is_pipe, + argv, data->is_pipe, workload_exec_failed_signal); if (err < 0) { pr_err("Couldn't run the workload!\n"); @@ -953,7 +857,7 @@ static int __cmd_record(struct record *rec, int argc, const char **argv) if (!rec->evlist->nr_groups) perf_header__clear_feat(&session->header, HEADER_GROUP_DESC); - if (file->is_pipe) { + if (data->is_pipe) { err = perf_header__write_pipe(fd); if (err < 0) goto out_child; @@ -1214,8 +1118,8 @@ out_child: samples[0] = '\0'; fprintf(stderr, "[ perf record: Captured and wrote %.3f MB %s%s%s ]\n", - perf_data_file__size(file) / 1024.0 / 1024.0, - file->path, postfix, samples); + perf_data__size(data) / 1024.0 / 1024.0, + data->file.path, postfix, samples); } out_delete_session: @@ -1579,7 +1483,7 @@ static struct option __record_options[] = { OPT_STRING('C', "cpu", &record.opts.target.cpu_list, "cpu", "list of cpus to monitor"), OPT_U64('c', "count", &record.opts.user_interval, "event period to sample"), - OPT_STRING('o', "output", &record.file.path, "file", + OPT_STRING('o', "output", &record.data.file.path, "file", "output file name"), OPT_BOOLEAN_SET('i', "no-inherit", &record.opts.no_inherit, &record.opts.no_inherit_set, @@ -1644,6 +1548,9 @@ 
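
In builtin-record.c above, the hand-rolled ring-buffer reader (backward_rb_find_range(), rb_find_range() and record__mmap_read()) is removed; the mmap loop now calls perf_mmap__push(), which hands every contiguous chunk of the mmap'ed ring to a caller-supplied callback, and record__pushfn() merely bumps rec->samples and forwards the bytes to record__write(). The sketch below is a compilable approximation of that callback-driven drain with invented names; it is not the perf_mmap__push() implementation itself.

#include <stddef.h>

typedef int (*push_fn)(void *to, void *buf, size_t size);

/*
 * Drain [*old, head) from a power-of-two sized ring buffer, calling push()
 * once per contiguous chunk, the way perf_mmap__push() hands data to
 * record__pushfn() in the diff above.
 */
static int ring__push(unsigned char *base, size_t mask, size_t head,
                      size_t *old, void *to, push_fn push)
{
    size_t start = *old, size = head - start;

    if (!size)
        return 0;

    if ((start & mask) + size > mask + 1) {
        /* the region wraps: push the tail of the buffer first */
        size_t chunk = mask + 1 - (start & mask);

        if (push(to, base + (start & mask), chunk) < 0)
            return -1;
        start += chunk;
        size -= chunk;
    }

    if (push(to, base + (start & mask), size) < 0)
        return -1;

    *old = head;
    return 0;
}

The struct record pointer travels through the opaque 'to' cookie, which is why record__pushfn() takes void *to as its first argument.
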
static struct option __record_options[] = { OPT_CALLBACK_OPTARG('I', "intr-regs", &record.opts.sample_intr_regs, NULL, "any register", "sample selected machine registers on interrupt," " use -I ? to list register names", parse_regs), + OPT_CALLBACK_OPTARG(0, "user-regs", &record.opts.sample_user_regs, NULL, "any register", + "sample selected machine registers on interrupt," + " use -I ? to list register names", parse_regs), OPT_BOOLEAN(0, "running-time", &record.opts.running_time, "Record running/enabled time of read (:S) events"), OPT_CALLBACK('k', "clockid", &record.opts, diff --git a/tools/perf/builtin-report.c b/tools/perf/builtin-report.c index fae4b0340750..1394cd8d96f7 100644 --- a/tools/perf/builtin-report.c +++ b/tools/perf/builtin-report.c @@ -258,7 +258,7 @@ static int report__setup_sample_type(struct report *rep) { struct perf_session *session = rep->session; u64 sample_type = perf_evlist__combined_sample_type(session->evlist); - bool is_pipe = perf_data_file__is_pipe(session->file); + bool is_pipe = perf_data__is_pipe(session->data); if (session->itrace_synth_opts->callchain || (!is_pipe && @@ -569,7 +569,7 @@ static int __cmd_report(struct report *rep) int ret; struct perf_session *session = rep->session; struct perf_evsel *pos; - struct perf_data_file *file = session->file; + struct perf_data *data = session->data; signal(SIGINT, sig_handler); @@ -638,7 +638,7 @@ static int __cmd_report(struct report *rep) rep->nr_entries += evsel__hists(pos)->nr_entries; if (rep->nr_entries == 0) { - ui__error("The %s file has no samples!\n", file->path); + ui__error("The %s file has no samples!\n", data->file.path); return 0; } @@ -880,7 +880,7 @@ int cmd_report(int argc, const char **argv) "Show inline function"), OPT_END() }; - struct perf_data_file file = { + struct perf_data data = { .mode = PERF_DATA_MODE_READ, }; int ret = hists__init(); @@ -941,11 +941,11 @@ int cmd_report(int argc, const char **argv) input_name = "perf.data"; } - file.path = input_name; - file.force = symbol_conf.force; + data.file.path = input_name; + data.force = symbol_conf.force; repeat: - session = perf_session__new(&file, false, &report.tool); + session = perf_session__new(&data, false, &report.tool); if (session == NULL) return -1; diff --git a/tools/perf/builtin-sched.c b/tools/perf/builtin-sched.c index f380d91ee609..83283fedb00f 100644 --- a/tools/perf/builtin-sched.c +++ b/tools/perf/builtin-sched.c @@ -1701,14 +1701,16 @@ static int perf_sched__read_events(struct perf_sched *sched) { "sched:sched_migrate_task", process_sched_migrate_task_event, }, }; struct perf_session *session; - struct perf_data_file file = { - .path = input_name, - .mode = PERF_DATA_MODE_READ, - .force = sched->force, + struct perf_data data = { + .file = { + .path = input_name, + }, + .mode = PERF_DATA_MODE_READ, + .force = sched->force, }; int rc = -1; - session = perf_session__new(&file, false, &sched->tool); + session = perf_session__new(&data, false, &sched->tool); if (session == NULL) { pr_debug("No Memory for session\n"); return -1; @@ -2903,10 +2905,12 @@ static int perf_sched__timehist(struct perf_sched *sched) const struct perf_evsel_str_handler migrate_handlers[] = { { "sched:sched_migrate_task", timehist_migrate_task_event, }, }; - struct perf_data_file file = { - .path = input_name, - .mode = PERF_DATA_MODE_READ, - .force = sched->force, + struct perf_data data = { + .file = { + .path = input_name, + }, + .mode = PERF_DATA_MODE_READ, + .force = sched->force, }; struct perf_session *session; @@ -2931,7 +2935,7 @@ static 
int perf_sched__timehist(struct perf_sched *sched) symbol_conf.use_callchain = sched->show_callchain; - session = perf_session__new(&file, false, &sched->tool); + session = perf_session__new(&data, false, &sched->tool); if (session == NULL) return -ENOMEM; @@ -3364,6 +3368,10 @@ int cmd_sched(int argc, const char **argv) OPT_STRING(0, "time", &sched.time_str, "str", "Time span for analysis (start,stop)"), OPT_BOOLEAN(0, "state", &sched.show_state, "Show task state when sched-out"), + OPT_STRING('p', "pid", &symbol_conf.pid_list_str, "pid[,pid...]", + "analyze events only for given process id(s)"), + OPT_STRING('t', "tid", &symbol_conf.tid_list_str, "tid[,tid...]", + "analyze events only for given thread id(s)"), OPT_PARENT(sched_options) }; diff --git a/tools/perf/builtin-script.c b/tools/perf/builtin-script.c index 0fe02758de7d..68f36dc0344f 100644 --- a/tools/perf/builtin-script.c +++ b/tools/perf/builtin-script.c @@ -89,6 +89,7 @@ enum perf_output_field { PERF_OUTPUT_BRSTACKOFF = 1U << 24, PERF_OUTPUT_SYNTH = 1U << 25, PERF_OUTPUT_PHYS_ADDR = 1U << 26, + PERF_OUTPUT_UREGS = 1U << 27, }; struct output_option { @@ -110,6 +111,7 @@ struct output_option { {.str = "srcline", .field = PERF_OUTPUT_SRCLINE}, {.str = "period", .field = PERF_OUTPUT_PERIOD}, {.str = "iregs", .field = PERF_OUTPUT_IREGS}, + {.str = "uregs", .field = PERF_OUTPUT_UREGS}, {.str = "brstack", .field = PERF_OUTPUT_BRSTACK}, {.str = "brstacksym", .field = PERF_OUTPUT_BRSTACKSYM}, {.str = "data_src", .field = PERF_OUTPUT_DATA_SRC}, @@ -209,6 +211,51 @@ static struct { }, }; +struct perf_evsel_script { + char *filename; + FILE *fp; + u64 samples; +}; + +static struct perf_evsel_script *perf_evsel_script__new(struct perf_evsel *evsel, + struct perf_data *data) +{ + struct perf_evsel_script *es = malloc(sizeof(*es)); + + if (es != NULL) { + if (asprintf(&es->filename, "%s.%s.dump", data->file.path, perf_evsel__name(evsel)) < 0) + goto out_free; + es->fp = fopen(es->filename, "w"); + if (es->fp == NULL) + goto out_free_filename; + es->samples = 0; + } + + return es; +out_free_filename: + zfree(&es->filename); +out_free: + free(es); + return NULL; +} + +static void perf_evsel_script__delete(struct perf_evsel_script *es) +{ + zfree(&es->filename); + fclose(es->fp); + es->fp = NULL; + free(es); +} + +static int perf_evsel_script__fprintf(struct perf_evsel_script *es, FILE *fp) +{ + struct stat st; + + fstat(fileno(es->fp), &st); + return fprintf(fp, "[ perf script: Wrote %.3f MB %s (%" PRIu64 " samples) ]\n", + st.st_size / 1024.0 / 1024.0, es->filename, es->samples); +} + static inline int output_type(unsigned int type) { switch (type) { @@ -386,6 +433,11 @@ static int perf_evsel__check_attr(struct perf_evsel *evsel, PERF_OUTPUT_IREGS)) return -EINVAL; + if (PRINT_FIELD(UREGS) && + perf_evsel__check_stype(evsel, PERF_SAMPLE_REGS_USER, "UREGS", + PERF_OUTPUT_UREGS)) + return -EINVAL; + if (PRINT_FIELD(PHYS_ADDR) && perf_evsel__check_stype(evsel, PERF_SAMPLE_PHYS_ADDR, "PHYS_ADDR", PERF_OUTPUT_PHYS_ADDR)) @@ -494,51 +546,76 @@ out: return 0; } -static void print_sample_iregs(struct perf_sample *sample, - struct perf_event_attr *attr) +static int perf_sample__fprintf_iregs(struct perf_sample *sample, + struct perf_event_attr *attr, FILE *fp) { struct regs_dump *regs = &sample->intr_regs; uint64_t mask = attr->sample_regs_intr; unsigned i = 0, r; + int printed = 0; if (!regs) - return; + return 0; for_each_set_bit(r, (unsigned long *) &mask, sizeof(mask) * 8) { u64 val = regs->regs[i++]; - printf("%5s:0x%"PRIx64" ", perf_reg_name(r), 
val); + printed += fprintf(fp, "%5s:0x%"PRIx64" ", perf_reg_name(r), val); } + + return printed; } -static void print_sample_start(struct perf_sample *sample, - struct thread *thread, - struct perf_evsel *evsel) +static int perf_sample__fprintf_uregs(struct perf_sample *sample, + struct perf_event_attr *attr, FILE *fp) +{ + struct regs_dump *regs = &sample->user_regs; + uint64_t mask = attr->sample_regs_user; + unsigned i = 0, r; + int printed = 0; + + if (!regs || !regs->regs) + return 0; + + printed += fprintf(fp, " ABI:%" PRIu64 " ", regs->abi); + + for_each_set_bit(r, (unsigned long *) &mask, sizeof(mask) * 8) { + u64 val = regs->regs[i++]; + printed += fprintf(fp, "%5s:0x%"PRIx64" ", perf_reg_name(r), val); + } + + return printed; +} + +static int perf_sample__fprintf_start(struct perf_sample *sample, + struct thread *thread, + struct perf_evsel *evsel, FILE *fp) { struct perf_event_attr *attr = &evsel->attr; unsigned long secs; unsigned long long nsecs; + int printed = 0; if (PRINT_FIELD(COMM)) { if (latency_format) - printf("%8.8s ", thread__comm_str(thread)); + printed += fprintf(fp, "%8.8s ", thread__comm_str(thread)); else if (PRINT_FIELD(IP) && symbol_conf.use_callchain) - printf("%s ", thread__comm_str(thread)); + printed += fprintf(fp, "%s ", thread__comm_str(thread)); else - printf("%16s ", thread__comm_str(thread)); + printed += fprintf(fp, "%16s ", thread__comm_str(thread)); } if (PRINT_FIELD(PID) && PRINT_FIELD(TID)) - printf("%5d/%-5d ", sample->pid, sample->tid); + printed += fprintf(fp, "%5d/%-5d ", sample->pid, sample->tid); else if (PRINT_FIELD(PID)) - printf("%5d ", sample->pid); + printed += fprintf(fp, "%5d ", sample->pid); else if (PRINT_FIELD(TID)) - printf("%5d ", sample->tid); + printed += fprintf(fp, "%5d ", sample->tid); if (PRINT_FIELD(CPU)) { if (latency_format) - printf("%3d ", sample->cpu); + printed += fprintf(fp, "%3d ", sample->cpu); else - printf("[%03d] ", sample->cpu); + printed += fprintf(fp, "[%03d] ", sample->cpu); } if (PRINT_FIELD(TIME)) { @@ -547,13 +624,15 @@ static void print_sample_start(struct perf_sample *sample, nsecs -= secs * NSEC_PER_SEC; if (nanosecs) - printf("%5lu.%09llu: ", secs, nsecs); + printed += fprintf(fp, "%5lu.%09llu: ", secs, nsecs); else { char sample_time[32]; timestamp__scnprintf_usec(sample->time, sample_time, sizeof(sample_time)); - printf("%12s: ", sample_time); + printed += fprintf(fp, "%12s: ", sample_time); } } + + return printed; } static inline char @@ -565,16 +644,17 @@ mispred_str(struct branch_entry *br) return br->flags.predicted ? 
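
The new 'uregs' output field added above mirrors the existing 'iregs' one: perf_sample__fprintf_uregs() walks the bits set in attr->sample_regs_user and consumes one recorded u64 per set bit, lowest bit first, which is how PERF_SAMPLE_REGS_USER (and _INTR) data is laid out in the sample. Below is a small self-contained approximation of that walk, written without perf's for_each_set_bit() helper and with a made-up register name format.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* regs[] holds one value per bit set in 'mask', lowest bit first */
static int regs__fprintf(uint64_t mask, const uint64_t *regs, FILE *fp)
{
    unsigned int i = 0, r;
    int printed = 0;

    for (r = 0; r < 64; r++) {
        if (!(mask & (1ULL << r)))
            continue;
        printed += fprintf(fp, "reg%u:0x%" PRIx64 " ", r, regs[i++]);
    }

    return printed;
}

The only extra step for user registers is the ABI word: the printer in the diff emits regs->abi first and returns early when regs->regs is NULL, because a sample may carry no user register block at all.
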
'P' : 'M'; } -static void print_sample_brstack(struct perf_sample *sample, - struct thread *thread, - struct perf_event_attr *attr) +static int perf_sample__fprintf_brstack(struct perf_sample *sample, + struct thread *thread, + struct perf_event_attr *attr, FILE *fp) { struct branch_stack *br = sample->branch_stack; struct addr_location alf, alt; u64 i, from, to; + int printed = 0; if (!(br && br->nr)) - return; + return 0; for (i = 0; i < br->nr; i++) { from = br->entries[i].from; @@ -587,38 +667,41 @@ static void print_sample_brstack(struct perf_sample *sample, thread__find_addr_map(thread, sample->cpumode, MAP__FUNCTION, to, &alt); } - printf(" 0x%"PRIx64, from); + printed += fprintf(fp, " 0x%"PRIx64, from); if (PRINT_FIELD(DSO)) { - printf("("); - map__fprintf_dsoname(alf.map, stdout); - printf(")"); + printed += fprintf(fp, "("); + printed += map__fprintf_dsoname(alf.map, fp); + printed += fprintf(fp, ")"); } - printf("/0x%"PRIx64, to); + printed += fprintf(fp, "/0x%"PRIx64, to); if (PRINT_FIELD(DSO)) { - printf("("); - map__fprintf_dsoname(alt.map, stdout); - printf(")"); + printed += fprintf(fp, "("); + printed += map__fprintf_dsoname(alt.map, fp); + printed += fprintf(fp, ")"); } - printf("/%c/%c/%c/%d ", + printed += fprintf(fp, "/%c/%c/%c/%d ", mispred_str( br->entries + i), br->entries[i].flags.in_tx? 'X' : '-', br->entries[i].flags.abort? 'A' : '-', br->entries[i].flags.cycles); } + + return printed; } -static void print_sample_brstacksym(struct perf_sample *sample, - struct thread *thread, - struct perf_event_attr *attr) +static int perf_sample__fprintf_brstacksym(struct perf_sample *sample, + struct thread *thread, + struct perf_event_attr *attr, FILE *fp) { struct branch_stack *br = sample->branch_stack; struct addr_location alf, alt; u64 i, from, to; + int printed = 0; if (!(br && br->nr)) - return; + return 0; for (i = 0; i < br->nr; i++) { @@ -635,37 +718,40 @@ static void print_sample_brstacksym(struct perf_sample *sample, if (alt.map) alt.sym = map__find_symbol(alt.map, alt.addr); - symbol__fprintf_symname_offs(alf.sym, &alf, stdout); + printed += symbol__fprintf_symname_offs(alf.sym, &alf, fp); if (PRINT_FIELD(DSO)) { - printf("("); - map__fprintf_dsoname(alf.map, stdout); - printf(")"); + printed += fprintf(fp, "("); + printed += map__fprintf_dsoname(alf.map, fp); + printed += fprintf(fp, ")"); } - putchar('/'); - symbol__fprintf_symname_offs(alt.sym, &alt, stdout); + printed += fprintf(fp, "%c", '/'); + printed += symbol__fprintf_symname_offs(alt.sym, &alt, fp); if (PRINT_FIELD(DSO)) { - printf("("); - map__fprintf_dsoname(alt.map, stdout); - printf(")"); + printed += fprintf(fp, "("); + printed += map__fprintf_dsoname(alt.map, fp); + printed += fprintf(fp, ")"); } - printf("/%c/%c/%c/%d ", + printed += fprintf(fp, "/%c/%c/%c/%d ", mispred_str( br->entries + i), br->entries[i].flags.in_tx? 'X' : '-', br->entries[i].flags.abort? 
'A' : '-', br->entries[i].flags.cycles); } + + return printed; } -static void print_sample_brstackoff(struct perf_sample *sample, - struct thread *thread, - struct perf_event_attr *attr) +static int perf_sample__fprintf_brstackoff(struct perf_sample *sample, + struct thread *thread, + struct perf_event_attr *attr, FILE *fp) { struct branch_stack *br = sample->branch_stack; struct addr_location alf, alt; u64 i, from, to; + int printed = 0; if (!(br && br->nr)) - return; + return 0; for (i = 0; i < br->nr; i++) { @@ -682,24 +768,26 @@ static void print_sample_brstackoff(struct perf_sample *sample, if (alt.map && !alt.map->dso->adjust_symbols) to = map__map_ip(alt.map, to); - printf(" 0x%"PRIx64, from); + printed += fprintf(fp, " 0x%"PRIx64, from); if (PRINT_FIELD(DSO)) { - printf("("); - map__fprintf_dsoname(alf.map, stdout); - printf(")"); + printed += fprintf(fp, "("); + printed += map__fprintf_dsoname(alf.map, fp); + printed += fprintf(fp, ")"); } - printf("/0x%"PRIx64, to); + printed += fprintf(fp, "/0x%"PRIx64, to); if (PRINT_FIELD(DSO)) { - printf("("); - map__fprintf_dsoname(alt.map, stdout); - printf(")"); + printed += fprintf(fp, "("); + printed += map__fprintf_dsoname(alt.map, fp); + printed += fprintf(fp, ")"); } - printf("/%c/%c/%c/%d ", + printed += fprintf(fp, "/%c/%c/%c/%d ", mispred_str(br->entries + i), br->entries[i].flags.in_tx ? 'X' : '-', br->entries[i].flags.abort ? 'A' : '-', br->entries[i].flags.cycles); } + + return printed; } #define MAXBB 16384UL @@ -727,27 +815,26 @@ static int grab_bb(u8 *buffer, u64 start, u64 end, * but the exit is not. Let the caller patch it up. */ if (kernel != machine__kernel_ip(machine, end)) { - printf("\tblock %" PRIx64 "-%" PRIx64 " transfers between kernel and user\n", - start, end); + pr_debug("\tblock %" PRIx64 "-%" PRIx64 " transfers between kernel and user\n", start, end); return -ENXIO; } memset(&al, 0, sizeof(al)); if (end - start > MAXBB - MAXINSN) { if (last) - printf("\tbrstack does not reach to final jump (%" PRIx64 "-%" PRIx64 ")\n", start, end); + pr_debug("\tbrstack does not reach to final jump (%" PRIx64 "-%" PRIx64 ")\n", start, end); else - printf("\tblock %" PRIx64 "-%" PRIx64 " (%" PRIu64 ") too long to dump\n", start, end, end - start); + pr_debug("\tblock %" PRIx64 "-%" PRIx64 " (%" PRIu64 ") too long to dump\n", start, end, end - start); return 0; } thread__find_addr_map(thread, *cpumode, MAP__FUNCTION, start, &al); if (!al.map || !al.map->dso) { - printf("\tcannot resolve %" PRIx64 "-%" PRIx64 "\n", start, end); + pr_debug("\tcannot resolve %" PRIx64 "-%" PRIx64 "\n", start, end); return 0; } if (al.map->dso->data.status == DSO_DATA_STATUS_ERROR) { - printf("\tcannot resolve %" PRIx64 "-%" PRIx64 "\n", start, end); + pr_debug("\tcannot resolve %" PRIx64 "-%" PRIx64 "\n", start, end); return 0; } @@ -760,36 +847,35 @@ static int grab_bb(u8 *buffer, u64 start, u64 end, *is64bit = al.map->dso->is_64_bit; if (len <= 0) - printf("\tcannot fetch code for block at %" PRIx64 "-%" PRIx64 "\n", + pr_debug("\tcannot fetch code for block at %" PRIx64 "-%" PRIx64 "\n", start, end); return len; } -static void print_jump(uint64_t ip, struct branch_entry *en, - struct perf_insn *x, u8 *inbuf, int len, - int insn) +static int ip__fprintf_jump(uint64_t ip, struct branch_entry *en, + struct perf_insn *x, u8 *inbuf, int len, + int insn, FILE *fp) { - printf("\t%016" PRIx64 "\t%-30s\t#%s%s%s%s", - ip, - dump_insn(x, ip, inbuf, len, NULL), - en->flags.predicted ? " PRED" : "", - en->flags.mispred ? " MISPRED" : "", - en->flags.in_tx ? 
" INTX" : "", - en->flags.abort ? " ABORT" : ""); + int printed = fprintf(fp, "\t%016" PRIx64 "\t%-30s\t#%s%s%s%s", ip, + dump_insn(x, ip, inbuf, len, NULL), + en->flags.predicted ? " PRED" : "", + en->flags.mispred ? " MISPRED" : "", + en->flags.in_tx ? " INTX" : "", + en->flags.abort ? " ABORT" : ""); if (en->flags.cycles) { - printf(" %d cycles", en->flags.cycles); + printed += fprintf(fp, " %d cycles", en->flags.cycles); if (insn) - printf(" %.2f IPC", (float)insn / en->flags.cycles); + printed += fprintf(fp, " %.2f IPC", (float)insn / en->flags.cycles); } - putchar('\n'); + return printed + fprintf(fp, "\n"); } -static void print_ip_sym(struct thread *thread, u8 cpumode, int cpu, - uint64_t addr, struct symbol **lastsym, - struct perf_event_attr *attr) +static int ip__fprintf_sym(uint64_t addr, struct thread *thread, + u8 cpumode, int cpu, struct symbol **lastsym, + struct perf_event_attr *attr, FILE *fp) { struct addr_location al; - int off; + int off, printed = 0; memset(&al, 0, sizeof(al)); @@ -798,7 +884,7 @@ static void print_ip_sym(struct thread *thread, u8 cpumode, int cpu, thread__find_addr_map(thread, cpumode, MAP__VARIABLE, addr, &al); if ((*lastsym) && al.addr >= (*lastsym)->start && al.addr < (*lastsym)->end) - return; + return 0; al.cpu = cpu; al.sym = NULL; @@ -806,37 +892,39 @@ static void print_ip_sym(struct thread *thread, u8 cpumode, int cpu, al.sym = map__find_symbol(al.map, al.addr); if (!al.sym) - return; + return 0; if (al.addr < al.sym->end) off = al.addr - al.sym->start; else off = al.addr - al.map->start - al.sym->start; - printf("\t%s", al.sym->name); + printed += fprintf(fp, "\t%s", al.sym->name); if (off) - printf("%+d", off); - putchar(':'); + printed += fprintf(fp, "%+d", off); + printed += fprintf(fp, ":"); if (PRINT_FIELD(SRCLINE)) - map__fprintf_srcline(al.map, al.addr, "\t", stdout); - putchar('\n'); + printed += map__fprintf_srcline(al.map, al.addr, "\t", fp); + printed += fprintf(fp, "\n"); *lastsym = al.sym; + + return printed; } -static void print_sample_brstackinsn(struct perf_sample *sample, - struct thread *thread, - struct perf_event_attr *attr, - struct machine *machine) +static int perf_sample__fprintf_brstackinsn(struct perf_sample *sample, + struct thread *thread, + struct perf_event_attr *attr, + struct machine *machine, FILE *fp) { struct branch_stack *br = sample->branch_stack; u64 start, end; - int i, insn, len, nr, ilen; + int i, insn, len, nr, ilen, printed = 0; struct perf_insn x; u8 buffer[MAXBB]; unsigned off; struct symbol *lastsym = NULL; if (!(br && br->nr)) - return; + return 0; nr = br->nr; if (max_blocks && nr > max_blocks + 1) nr = max_blocks + 1; @@ -844,17 +932,17 @@ static void print_sample_brstackinsn(struct perf_sample *sample, x.thread = thread; x.cpu = sample->cpu; - putchar('\n'); + printed += fprintf(fp, "%c", '\n'); /* Handle first from jump, of which we don't know the entry. 
*/ len = grab_bb(buffer, br->entries[nr-1].from, br->entries[nr-1].from, machine, thread, &x.is64bit, &x.cpumode, false); if (len > 0) { - print_ip_sym(thread, x.cpumode, x.cpu, - br->entries[nr - 1].from, &lastsym, attr); - print_jump(br->entries[nr - 1].from, &br->entries[nr - 1], - &x, buffer, len, 0); + printed += ip__fprintf_sym(br->entries[nr - 1].from, thread, + x.cpumode, x.cpu, &lastsym, attr, fp); + printed += ip__fprintf_jump(br->entries[nr - 1].from, &br->entries[nr - 1], + &x, buffer, len, 0, fp); } /* Print all blocks */ @@ -880,13 +968,13 @@ static void print_sample_brstackinsn(struct perf_sample *sample, for (off = 0;; off += ilen) { uint64_t ip = start + off; - print_ip_sym(thread, x.cpumode, x.cpu, ip, &lastsym, attr); + printed += ip__fprintf_sym(ip, thread, x.cpumode, x.cpu, &lastsym, attr, fp); if (ip == end) { - print_jump(ip, &br->entries[i], &x, buffer + off, len - off, insn); + printed += ip__fprintf_jump(ip, &br->entries[i], &x, buffer + off, len - off, insn, fp); break; } else { - printf("\t%016" PRIx64 "\t%s\n", ip, - dump_insn(&x, ip, buffer + off, len - off, &ilen)); + printed += fprintf(fp, "\t%016" PRIx64 "\t%s\n", ip, + dump_insn(&x, ip, buffer + off, len - off, &ilen)); if (ilen == 0) break; insn++; @@ -899,9 +987,9 @@ static void print_sample_brstackinsn(struct perf_sample *sample, * has not been executed yet. */ if (br->entries[0].from == sample->ip) - return; + goto out; if (br->entries[0].flags.abort) - return; + goto out; /* * Print final block upto sample @@ -909,58 +997,61 @@ static void print_sample_brstackinsn(struct perf_sample *sample, start = br->entries[0].to; end = sample->ip; len = grab_bb(buffer, start, end, machine, thread, &x.is64bit, &x.cpumode, true); - print_ip_sym(thread, x.cpumode, x.cpu, start, &lastsym, attr); + printed += ip__fprintf_sym(start, thread, x.cpumode, x.cpu, &lastsym, attr, fp); if (len <= 0) { /* Print at least last IP if basic block did not work */ len = grab_bb(buffer, sample->ip, sample->ip, machine, thread, &x.is64bit, &x.cpumode, false); if (len <= 0) - return; + goto out; - printf("\t%016" PRIx64 "\t%s\n", sample->ip, + printed += fprintf(fp, "\t%016" PRIx64 "\t%s\n", sample->ip, dump_insn(&x, sample->ip, buffer, len, NULL)); - return; + goto out; } for (off = 0; off <= end - start; off += ilen) { - printf("\t%016" PRIx64 "\t%s\n", start + off, - dump_insn(&x, start + off, buffer + off, len - off, &ilen)); + printed += fprintf(fp, "\t%016" PRIx64 "\t%s\n", start + off, + dump_insn(&x, start + off, buffer + off, len - off, &ilen)); if (ilen == 0) break; } +out: + return printed; } -static void print_sample_addr(struct perf_sample *sample, - struct thread *thread, - struct perf_event_attr *attr) +static int perf_sample__fprintf_addr(struct perf_sample *sample, + struct thread *thread, + struct perf_event_attr *attr, FILE *fp) { struct addr_location al; - - printf("%16" PRIx64, sample->addr); + int printed = fprintf(fp, "%16" PRIx64, sample->addr); if (!sample_addr_correlates_sym(attr)) - return; + goto out; thread__resolve(thread, &al, sample); if (PRINT_FIELD(SYM)) { - printf(" "); + printed += fprintf(fp, " "); if (PRINT_FIELD(SYMOFFSET)) - symbol__fprintf_symname_offs(al.sym, &al, stdout); + printed += symbol__fprintf_symname_offs(al.sym, &al, fp); else - symbol__fprintf_symname(al.sym, stdout); + printed += symbol__fprintf_symname(al.sym, fp); } if (PRINT_FIELD(DSO)) { - printf(" ("); - map__fprintf_dsoname(al.map, stdout); - printf(")"); + printed += fprintf(fp, " ("); + printed += 
map__fprintf_dsoname(al.map, fp); + printed += fprintf(fp, ")"); } +out: + return printed; } -static void print_sample_callindent(struct perf_sample *sample, - struct perf_evsel *evsel, - struct thread *thread, - struct addr_location *al) +static int perf_sample__fprintf_callindent(struct perf_sample *sample, + struct perf_evsel *evsel, + struct thread *thread, + struct addr_location *al, FILE *fp) { struct perf_event_attr *attr = &evsel->attr; size_t depth = thread_stack__depth(thread); @@ -995,12 +1086,12 @@ static void print_sample_callindent(struct perf_sample *sample, } if (name) - len = printf("%*s%s", (int)depth * 4, "", name); + len = fprintf(fp, "%*s%s", (int)depth * 4, "", name); else if (ip) - len = printf("%*s%16" PRIx64, (int)depth * 4, "", ip); + len = fprintf(fp, "%*s%16" PRIx64, (int)depth * 4, "", ip); if (len < 0) - return; + return len; /* * Try to keep the output length from changing frequently so that the @@ -1010,39 +1101,46 @@ static void print_sample_callindent(struct perf_sample *sample, spacing = round_up(len + 4, 32); if (len < spacing) - printf("%*s", spacing - len, ""); + len += fprintf(fp, "%*s", spacing - len, ""); + + return len; } -static void print_insn(struct perf_sample *sample, - struct perf_event_attr *attr, - struct thread *thread, - struct machine *machine) +static int perf_sample__fprintf_insn(struct perf_sample *sample, + struct perf_event_attr *attr, + struct thread *thread, + struct machine *machine, FILE *fp) { + int printed = 0; + if (PRINT_FIELD(INSNLEN)) - printf(" ilen: %d", sample->insn_len); + printed += fprintf(fp, " ilen: %d", sample->insn_len); if (PRINT_FIELD(INSN)) { int i; - printf(" insn:"); + printed += fprintf(fp, " insn:"); for (i = 0; i < sample->insn_len; i++) - printf(" %02x", (unsigned char)sample->insn[i]); + printed += fprintf(fp, " %02x", (unsigned char)sample->insn[i]); } if (PRINT_FIELD(BRSTACKINSN)) - print_sample_brstackinsn(sample, thread, attr, machine); + printed += perf_sample__fprintf_brstackinsn(sample, thread, attr, machine, fp); + + return printed; } -static void print_sample_bts(struct perf_sample *sample, - struct perf_evsel *evsel, - struct thread *thread, - struct addr_location *al, - struct machine *machine) +static int perf_sample__fprintf_bts(struct perf_sample *sample, + struct perf_evsel *evsel, + struct thread *thread, + struct addr_location *al, + struct machine *machine, FILE *fp) { struct perf_event_attr *attr = &evsel->attr; unsigned int type = output_type(attr->type); bool print_srcline_last = false; + int printed = 0; if (PRINT_FIELD(CALLINDENT)) - print_sample_callindent(sample, evsel, thread, al); + printed += perf_sample__fprintf_callindent(sample, evsel, thread, al, fp); /* print branch_from information */ if (PRINT_FIELD(IP)) { @@ -1055,31 +1153,30 @@ static void print_sample_bts(struct perf_sample *sample, cursor = &callchain_cursor; if (cursor == NULL) { - putchar(' '); + printed += fprintf(fp, " "); if (print_opts & EVSEL__PRINT_SRCLINE) { print_srcline_last = true; print_opts &= ~EVSEL__PRINT_SRCLINE; } } else - putchar('\n'); + printed += fprintf(fp, "\n"); - sample__fprintf_sym(sample, al, 0, print_opts, cursor, stdout); + printed += sample__fprintf_sym(sample, al, 0, print_opts, cursor, fp); } /* print branch_to information */ if (PRINT_FIELD(ADDR) || ((evsel->attr.sample_type & PERF_SAMPLE_ADDR) && !output[type].user_set)) { - printf(" => "); - print_sample_addr(sample, thread, attr); + printed += fprintf(fp, " => "); + printed += perf_sample__fprintf_addr(sample, thread, attr, fp); 
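
The long run of builtin-script.c changes above follows a single pattern: every print_sample_*() helper that used to write straight to stdout with printf() becomes a perf_sample__fprintf_*() (or ip__fprintf_*()) variant that takes an explicit FILE * and returns the number of characters written, accumulating the per-call results in a local 'printed'. That is what later allows each event to write to its own dump file. A trivial sketch of the converted shape, using a made-up helper name:

#include <stdio.h>

/* before: static void thing__print(int v) { printf("v=%d\n", v); } */
static int thing__fprintf(int v, FILE *fp)
{
    int printed = 0;

    printed += fprintf(fp, "v=%d", v);
    printed += fprintf(fp, "\n");

    return printed;
}

Early exits become 'return 0' or 'goto out' so the running total still reaches the caller, as in perf_sample__fprintf_brstackinsn() above.
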
} if (print_srcline_last) - map__fprintf_srcline(al->map, al->addr, "\n ", stdout); - - print_insn(sample, attr, thread, machine); + printed += map__fprintf_srcline(al->map, al->addr, "\n ", fp); - printf("\n"); + printed += perf_sample__fprintf_insn(sample, attr, thread, machine, fp); + return printed + fprintf(fp, "\n"); } static struct { @@ -1102,7 +1199,7 @@ static struct { {0, NULL} }; -static void print_sample_flags(u32 flags) +static int perf_sample__fprintf_flags(u32 flags, FILE *fp) { const char *chars = PERF_IP_FLAG_CHARS; const int n = strlen(PERF_IP_FLAG_CHARS); @@ -1129,9 +1226,9 @@ static void print_sample_flags(u32 flags) str[pos] = 0; if (name) - printf(" %-7s%4s ", name, in_tx ? "(x)" : ""); - else - printf(" %-11s ", str); + return fprintf(fp, " %-7s%4s ", name, in_tx ? "(x)" : ""); + + return fprintf(fp, " %-11s ", str); } struct printer_data { @@ -1140,40 +1237,40 @@ struct printer_data { bool is_printable; }; -static void -print_sample_bpf_output_printer(enum binary_printer_ops op, - unsigned int val, - void *extra) +static int sample__fprintf_bpf_output(enum binary_printer_ops op, + unsigned int val, + void *extra, FILE *fp) { unsigned char ch = (unsigned char)val; struct printer_data *printer_data = extra; + int printed = 0; switch (op) { case BINARY_PRINT_DATA_BEGIN: - printf("\n"); + printed += fprintf(fp, "\n"); break; case BINARY_PRINT_LINE_BEGIN: - printf("%17s", !printer_data->line_no ? "BPF output:" : + printed += fprintf(fp, "%17s", !printer_data->line_no ? "BPF output:" : " "); break; case BINARY_PRINT_ADDR: - printf(" %04x:", val); + printed += fprintf(fp, " %04x:", val); break; case BINARY_PRINT_NUM_DATA: - printf(" %02x", val); + printed += fprintf(fp, " %02x", val); break; case BINARY_PRINT_NUM_PAD: - printf(" "); + printed += fprintf(fp, " "); break; case BINARY_PRINT_SEP: - printf(" "); + printed += fprintf(fp, " "); break; case BINARY_PRINT_CHAR_DATA: if (printer_data->hit_nul && ch) printer_data->is_printable = false; if (!isprint(ch)) { - printf("%c", '.'); + printed += fprintf(fp, "%c", '.'); if (!printer_data->is_printable) break; @@ -1183,154 +1280,154 @@ print_sample_bpf_output_printer(enum binary_printer_ops op, else printer_data->is_printable = false; } else { - printf("%c", ch); + printed += fprintf(fp, "%c", ch); } break; case BINARY_PRINT_CHAR_PAD: - printf(" "); + printed += fprintf(fp, " "); break; case BINARY_PRINT_LINE_END: - printf("\n"); + printed += fprintf(fp, "\n"); printer_data->line_no++; break; case BINARY_PRINT_DATA_END: default: break; } + + return printed; } -static void print_sample_bpf_output(struct perf_sample *sample) +static int perf_sample__fprintf_bpf_output(struct perf_sample *sample, FILE *fp) { unsigned int nr_bytes = sample->raw_size; struct printer_data printer_data = {0, false, true}; - - print_binary(sample->raw_data, nr_bytes, 8, - print_sample_bpf_output_printer, &printer_data); + int printed = binary__fprintf(sample->raw_data, nr_bytes, 8, + sample__fprintf_bpf_output, &printer_data, fp); if (printer_data.is_printable && printer_data.hit_nul) - printf("%17s \"%s\"\n", "BPF string:", - (char *)(sample->raw_data)); + printed += fprintf(fp, "%17s \"%s\"\n", "BPF string:", (char *)(sample->raw_data)); + + return printed; } -static void print_sample_spacing(int len, int spacing) +static int perf_sample__fprintf_spacing(int len, int spacing, FILE *fp) { if (len > 0 && len < spacing) - printf("%*s", spacing - len, ""); + return fprintf(fp, "%*s", spacing - len, ""); + + return 0; } -static void 
print_sample_pt_spacing(int len) +static int perf_sample__fprintf_pt_spacing(int len, FILE *fp) { - print_sample_spacing(len, 34); + return perf_sample__fprintf_spacing(len, 34, fp); } -static void print_sample_synth_ptwrite(struct perf_sample *sample) +static int perf_sample__fprintf_synth_ptwrite(struct perf_sample *sample, FILE *fp) { struct perf_synth_intel_ptwrite *data = perf_sample__synth_ptr(sample); int len; if (perf_sample__bad_synth_size(sample, *data)) - return; + return 0; - len = printf(" IP: %u payload: %#" PRIx64 " ", + len = fprintf(fp, " IP: %u payload: %#" PRIx64 " ", data->ip, le64_to_cpu(data->payload)); - print_sample_pt_spacing(len); + return len + perf_sample__fprintf_pt_spacing(len, fp); } -static void print_sample_synth_mwait(struct perf_sample *sample) +static int perf_sample__fprintf_synth_mwait(struct perf_sample *sample, FILE *fp) { struct perf_synth_intel_mwait *data = perf_sample__synth_ptr(sample); int len; if (perf_sample__bad_synth_size(sample, *data)) - return; + return 0; - len = printf(" hints: %#x extensions: %#x ", - data->hints, data->extensions); - print_sample_pt_spacing(len); + len = fprintf(fp, " hints: %#x extensions: %#x ", + data->hints, data->extensions); + return len + perf_sample__fprintf_pt_spacing(len, fp); } -static void print_sample_synth_pwre(struct perf_sample *sample) +static int perf_sample__fprintf_synth_pwre(struct perf_sample *sample, FILE *fp) { struct perf_synth_intel_pwre *data = perf_sample__synth_ptr(sample); int len; if (perf_sample__bad_synth_size(sample, *data)) - return; + return 0; - len = printf(" hw: %u cstate: %u sub-cstate: %u ", - data->hw, data->cstate, data->subcstate); - print_sample_pt_spacing(len); + len = fprintf(fp, " hw: %u cstate: %u sub-cstate: %u ", + data->hw, data->cstate, data->subcstate); + return len + perf_sample__fprintf_pt_spacing(len, fp); } -static void print_sample_synth_exstop(struct perf_sample *sample) +static int perf_sample__fprintf_synth_exstop(struct perf_sample *sample, FILE *fp) { struct perf_synth_intel_exstop *data = perf_sample__synth_ptr(sample); int len; if (perf_sample__bad_synth_size(sample, *data)) - return; + return 0; - len = printf(" IP: %u ", data->ip); - print_sample_pt_spacing(len); + len = fprintf(fp, " IP: %u ", data->ip); + return len + perf_sample__fprintf_pt_spacing(len, fp); } -static void print_sample_synth_pwrx(struct perf_sample *sample) +static int perf_sample__fprintf_synth_pwrx(struct perf_sample *sample, FILE *fp) { struct perf_synth_intel_pwrx *data = perf_sample__synth_ptr(sample); int len; if (perf_sample__bad_synth_size(sample, *data)) - return; + return 0; - len = printf(" deepest cstate: %u last cstate: %u wake reason: %#x ", + len = fprintf(fp, " deepest cstate: %u last cstate: %u wake reason: %#x ", data->deepest_cstate, data->last_cstate, data->wake_reason); - print_sample_pt_spacing(len); + return len + perf_sample__fprintf_pt_spacing(len, fp); } -static void print_sample_synth_cbr(struct perf_sample *sample) +static int perf_sample__fprintf_synth_cbr(struct perf_sample *sample, FILE *fp) { struct perf_synth_intel_cbr *data = perf_sample__synth_ptr(sample); unsigned int percent, freq; int len; if (perf_sample__bad_synth_size(sample, *data)) - return; + return 0; freq = (le32_to_cpu(data->freq) + 500) / 1000; - len = printf(" cbr: %2u freq: %4u MHz ", data->cbr, freq); + len = fprintf(fp, " cbr: %2u freq: %4u MHz ", data->cbr, freq); if (data->max_nonturbo) { percent = (5 + (1000 * data->cbr) / data->max_nonturbo) / 10; - len += printf("(%3u%%) ", 
percent); + len += fprintf(fp, "(%3u%%) ", percent); } - print_sample_pt_spacing(len); + return len + perf_sample__fprintf_pt_spacing(len, fp); } -static void print_sample_synth(struct perf_sample *sample, - struct perf_evsel *evsel) +static int perf_sample__fprintf_synth(struct perf_sample *sample, + struct perf_evsel *evsel, FILE *fp) { switch (evsel->attr.config) { case PERF_SYNTH_INTEL_PTWRITE: - print_sample_synth_ptwrite(sample); - break; + return perf_sample__fprintf_synth_ptwrite(sample, fp); case PERF_SYNTH_INTEL_MWAIT: - print_sample_synth_mwait(sample); - break; + return perf_sample__fprintf_synth_mwait(sample, fp); case PERF_SYNTH_INTEL_PWRE: - print_sample_synth_pwre(sample); - break; + return perf_sample__fprintf_synth_pwre(sample, fp); case PERF_SYNTH_INTEL_EXSTOP: - print_sample_synth_exstop(sample); - break; + return perf_sample__fprintf_synth_exstop(sample, fp); case PERF_SYNTH_INTEL_PWRX: - print_sample_synth_pwrx(sample); - break; + return perf_sample__fprintf_synth_pwrx(sample, fp); case PERF_SYNTH_INTEL_CBR: - print_sample_synth_cbr(sample); - break; + return perf_sample__fprintf_synth_cbr(sample, fp); default: break; } + + return 0; } struct perf_script { @@ -1341,6 +1438,7 @@ struct perf_script { bool show_switch_events; bool show_namespace_events; bool allocated; + bool per_event_dump; struct cpu_map *cpus; struct thread_map *threads; int name_width; @@ -1362,7 +1460,7 @@ static int perf_evlist__max_name_len(struct perf_evlist *evlist) return max; } -static size_t data_src__printf(u64 data_src) +static int data_src__fprintf(u64 data_src, FILE *fp) { struct mem_info mi = { .data_src.val = data_src }; char decode[100]; @@ -1376,7 +1474,7 @@ static size_t data_src__printf(u64 data_src) if (maxlen < len) maxlen = len; - return printf("%-*s", maxlen, out); + return fprintf(fp, "%-*s", maxlen, out); } static void process_event(struct perf_script *script, @@ -1387,14 +1485,18 @@ static void process_event(struct perf_script *script, struct thread *thread = al->thread; struct perf_event_attr *attr = &evsel->attr; unsigned int type = output_type(attr->type); + struct perf_evsel_script *es = evsel->priv; + FILE *fp = es->fp; if (output[type].fields == 0) return; - print_sample_start(sample, thread, evsel); + ++es->samples; + + perf_sample__fprintf_start(sample, thread, evsel, fp); if (PRINT_FIELD(PERIOD)) - printf("%10" PRIu64 " ", sample->period); + fprintf(fp, "%10" PRIu64 " ", sample->period); if (PRINT_FIELD(EVNAME)) { const char *evname = perf_evsel__name(evsel); @@ -1402,33 +1504,33 @@ static void process_event(struct perf_script *script, if (!script->name_width) script->name_width = perf_evlist__max_name_len(script->session->evlist); - printf("%*s: ", script->name_width, - evname ? 
evname : "[unknown]"); + fprintf(fp, "%*s: ", script->name_width, evname ?: "[unknown]"); } if (print_flags) - print_sample_flags(sample->flags); + perf_sample__fprintf_flags(sample->flags, fp); if (is_bts_event(attr)) { - print_sample_bts(sample, evsel, thread, al, machine); + perf_sample__fprintf_bts(sample, evsel, thread, al, machine, fp); return; } - if (PRINT_FIELD(TRACE)) - event_format__print(evsel->tp_format, sample->cpu, - sample->raw_data, sample->raw_size); + if (PRINT_FIELD(TRACE)) { + event_format__fprintf(evsel->tp_format, sample->cpu, + sample->raw_data, sample->raw_size, fp); + } if (attr->type == PERF_TYPE_SYNTH && PRINT_FIELD(SYNTH)) - print_sample_synth(sample, evsel); + perf_sample__fprintf_synth(sample, evsel, fp); if (PRINT_FIELD(ADDR)) - print_sample_addr(sample, thread, attr); + perf_sample__fprintf_addr(sample, thread, attr, fp); if (PRINT_FIELD(DATA_SRC)) - data_src__printf(sample->data_src); + data_src__fprintf(sample->data_src, fp); if (PRINT_FIELD(WEIGHT)) - printf("%16" PRIu64, sample->weight); + fprintf(fp, "%16" PRIu64, sample->weight); if (PRINT_FIELD(IP)) { struct callchain_cursor *cursor = NULL; @@ -1438,27 +1540,30 @@ static void process_event(struct perf_script *script, sample, NULL, NULL, scripting_max_stack) == 0) cursor = &callchain_cursor; - putchar(cursor ? '\n' : ' '); - sample__fprintf_sym(sample, al, 0, output[type].print_ip_opts, cursor, stdout); + fputc(cursor ? '\n' : ' ', fp); + sample__fprintf_sym(sample, al, 0, output[type].print_ip_opts, cursor, fp); } if (PRINT_FIELD(IREGS)) - print_sample_iregs(sample, attr); + perf_sample__fprintf_iregs(sample, attr, fp); + + if (PRINT_FIELD(UREGS)) + perf_sample__fprintf_uregs(sample, attr, fp); if (PRINT_FIELD(BRSTACK)) - print_sample_brstack(sample, thread, attr); + perf_sample__fprintf_brstack(sample, thread, attr, fp); else if (PRINT_FIELD(BRSTACKSYM)) - print_sample_brstacksym(sample, thread, attr); + perf_sample__fprintf_brstacksym(sample, thread, attr, fp); else if (PRINT_FIELD(BRSTACKOFF)) - print_sample_brstackoff(sample, thread, attr); + perf_sample__fprintf_brstackoff(sample, thread, attr, fp); if (perf_evsel__is_bpf_output(evsel) && PRINT_FIELD(BPF_OUTPUT)) - print_sample_bpf_output(sample); - print_insn(sample, attr, thread, machine); + perf_sample__fprintf_bpf_output(sample, fp); + perf_sample__fprintf_insn(sample, attr, thread, machine, fp); if (PRINT_FIELD(PHYS_ADDR)) - printf("%16" PRIx64, sample->phys_addr); - printf("\n"); + fprintf(fp, "%16" PRIx64, sample->phys_addr); + fprintf(fp, "\n"); } static struct scripting_ops *scripting_ops; @@ -1632,7 +1737,7 @@ static int process_comm_event(struct perf_tool *tool, sample->tid = event->comm.tid; sample->pid = event->comm.pid; } - print_sample_start(sample, thread, evsel); + perf_sample__fprintf_start(sample, thread, evsel, stdout); perf_event__fprintf(event, stdout); ret = 0; out: @@ -1667,7 +1772,7 @@ static int process_namespaces_event(struct perf_tool *tool, sample->tid = event->namespaces.tid; sample->pid = event->namespaces.pid; } - print_sample_start(sample, thread, evsel); + perf_sample__fprintf_start(sample, thread, evsel, stdout); perf_event__fprintf(event, stdout); ret = 0; out: @@ -1700,7 +1805,7 @@ static int process_fork_event(struct perf_tool *tool, sample->tid = event->fork.tid; sample->pid = event->fork.pid; } - print_sample_start(sample, thread, evsel); + perf_sample__fprintf_start(sample, thread, evsel, stdout); perf_event__fprintf(event, stdout); thread__put(thread); @@ -1729,7 +1834,7 @@ static int 
process_exit_event(struct perf_tool *tool, sample->tid = event->fork.tid; sample->pid = event->fork.pid; } - print_sample_start(sample, thread, evsel); + perf_sample__fprintf_start(sample, thread, evsel, stdout); perf_event__fprintf(event, stdout); if (perf_event__process_exit(tool, event, sample, machine) < 0) @@ -1764,7 +1869,7 @@ static int process_mmap_event(struct perf_tool *tool, sample->tid = event->mmap.tid; sample->pid = event->mmap.pid; } - print_sample_start(sample, thread, evsel); + perf_sample__fprintf_start(sample, thread, evsel, stdout); perf_event__fprintf(event, stdout); thread__put(thread); return 0; @@ -1795,7 +1900,7 @@ static int process_mmap2_event(struct perf_tool *tool, sample->tid = event->mmap2.tid; sample->pid = event->mmap2.pid; } - print_sample_start(sample, thread, evsel); + perf_sample__fprintf_start(sample, thread, evsel, stdout); perf_event__fprintf(event, stdout); thread__put(thread); return 0; @@ -1821,7 +1926,7 @@ static int process_switch_event(struct perf_tool *tool, return -1; } - print_sample_start(sample, thread, evsel); + perf_sample__fprintf_start(sample, thread, evsel, stdout); perf_event__fprintf(event, stdout); thread__put(thread); return 0; @@ -1832,6 +1937,65 @@ static void sig_handler(int sig __maybe_unused) session_done = 1; } +static void perf_script__fclose_per_event_dump(struct perf_script *script) +{ + struct perf_evlist *evlist = script->session->evlist; + struct perf_evsel *evsel; + + evlist__for_each_entry(evlist, evsel) { + if (!evsel->priv) + break; + perf_evsel_script__delete(evsel->priv); + evsel->priv = NULL; + } +} + +static int perf_script__fopen_per_event_dump(struct perf_script *script) +{ + struct perf_evsel *evsel; + + evlist__for_each_entry(script->session->evlist, evsel) { + evsel->priv = perf_evsel_script__new(evsel, script->session->data); + if (evsel->priv == NULL) + goto out_err_fclose; + } + + return 0; + +out_err_fclose: + perf_script__fclose_per_event_dump(script); + return -1; +} + +static int perf_script__setup_per_event_dump(struct perf_script *script) +{ + struct perf_evsel *evsel; + static struct perf_evsel_script es_stdout; + + if (script->per_event_dump) + return perf_script__fopen_per_event_dump(script); + + es_stdout.fp = stdout; + + evlist__for_each_entry(script->session->evlist, evsel) + evsel->priv = &es_stdout; + + return 0; +} + +static void perf_script__exit_per_event_dump_stats(struct perf_script *script) +{ + struct perf_evsel *evsel; + + evlist__for_each_entry(script->session->evlist, evsel) { + struct perf_evsel_script *es = evsel->priv; + + perf_evsel_script__fprintf(es, stdout); + perf_evsel_script__delete(es); + evsel->priv = NULL; + } +} + static int __cmd_script(struct perf_script *script) { int ret; @@ -1853,8 +2017,16 @@ static int __cmd_script(struct perf_script *script) if (script->show_namespace_events) script->tool.namespaces = process_namespaces_event; + if (perf_script__setup_per_event_dump(script)) { + pr_err("Couldn't create the per event dump files\n"); + return -1; + } + ret = perf_session__process_events(script->session); + if (script->per_event_dump) + perf_script__exit_per_event_dump_stats(script); + if (debug_mode) pr_err("Misordered timestamps: %" PRIu64 "\n", nr_unordered); @@ -2419,14 +2591,16 @@ int find_scripts(char **scripts_array, char **scripts_path_array) char scripts_path[MAXPATHLEN], lang_path[MAXPATHLEN]; DIR *scripts_dir, *lang_dir; struct perf_session *session; - struct perf_data_file file = { - .path = input_name, - .mode = PERF_DATA_MODE_READ, + struct 
perf_data data = { + .file = { + .path = input_name, + }, + .mode = PERF_DATA_MODE_READ, }; char *temp; int i = 0; - session = perf_session__new(&file, false, NULL); + session = perf_session__new(&data, false, NULL); if (!session) return -1; @@ -2704,7 +2878,7 @@ int cmd_script(int argc, const char **argv) .ordering_requires_timestamps = true, }, }; - struct perf_data_file file = { + struct perf_data data = { .mode = PERF_DATA_MODE_READ, }; const struct option options[] = { @@ -2740,7 +2914,7 @@ int cmd_script(int argc, const char **argv) "+field to add and -field to remove." "Valid types: hw,sw,trace,raw,synth. " "Fields: comm,tid,pid,time,cpu,event,trace,ip,sym,dso," - "addr,symoff,period,iregs,brstack,brstacksym,flags," + "addr,symoff,period,iregs,uregs,brstack,brstacksym,flags," "bpf-output,callindent,insn,insnlen,brstackinsn,synth,phys_addr", parse_output_fields), OPT_BOOLEAN('a', "all-cpus", &system_wide, @@ -2772,6 +2946,8 @@ int cmd_script(int argc, const char **argv) "Show context switch events (if recorded)"), OPT_BOOLEAN('\0', "show-namespace-events", &script.show_namespace_events, "Show namespace events (if recorded)"), + OPT_BOOLEAN('\0', "per-event-dump", &script.per_event_dump, + "Dump trace output to files named by the monitored events"), OPT_BOOLEAN('f', "force", &symbol_conf.force, "don't complain, do it"), OPT_INTEGER(0, "max-blocks", &max_blocks, "Maximum number of code blocks to dump with brstackinsn"), @@ -2802,13 +2978,15 @@ int cmd_script(int argc, const char **argv) NULL }; + perf_set_singlethreaded(); + setup_scripting(); argc = parse_options_subcommand(argc, argv, options, script_subcommands, script_usage, PARSE_OPT_STOP_AT_NON_OPTION); - file.path = input_name; - file.force = symbol_conf.force; + data.file.path = input_name; + data.force = symbol_conf.force; if (argc > 1 && !strncmp(argv[0], "rec", strlen("rec"))) { rec_script_path = get_script_path(argv[1], RECORD_SUFFIX); @@ -2975,7 +3153,7 @@ int cmd_script(int argc, const char **argv) if (!script_name) setup_pager(); - session = perf_session__new(&file, false, &script.tool); + session = perf_session__new(&data, false, &script.tool); if (session == NULL) return -1; @@ -3016,7 +3194,8 @@ int cmd_script(int argc, const char **argv) machine__resolve_kernel_addr, &session->machines.host) < 0) { pr_err("%s: failed to set libtraceevent function resolver\n", __func__); - return -1; + err = -1; + goto out_delete; } if (generate_script_lang) { @@ -3030,7 +3209,7 @@ int cmd_script(int argc, const char **argv) goto out_delete; } - input = open(file.path, O_RDONLY); /* input_name */ + input = open(data.file.path, O_RDONLY); /* input_name */ if (input < 0) { err = -errno; perror("failed to open file"); @@ -3076,7 +3255,8 @@ int cmd_script(int argc, const char **argv) /* needs to be parsed after looking up reference time */ if (perf_time__parse_str(&script.ptime, script.time_str) != 0) { pr_err("Invalid time string\n"); - return -EINVAL; + err = -EINVAL; + goto out_delete; } err = __cmd_script(&script); diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c index 69523ed55894..59af5a8419e2 100644 --- a/tools/perf/builtin-stat.c +++ b/tools/perf/builtin-stat.c @@ -65,6 +65,7 @@ #include "util/tool.h" #include "util/group.h" #include "util/string2.h" +#include "util/metricgroup.h" #include "asm/bug.h" #include <linux/time64.h> @@ -133,6 +134,8 @@ static const char *smi_cost_attrs = { static struct perf_evlist *evsel_list; +static struct rblist metric_events; + static struct target target = { .uid = UINT_MAX, }; 
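
The --per-event-dump option added above hangs a struct perf_evsel_script off each evsel->priv: perf_evsel_script__new() builds a "<perf.data path>.<event name>.dump" filename with asprintf(), opens it, and counts the samples written, while the non-dump case points every event at a shared stdout entry. Here is a standalone sketch of that allocation and cleanup shape, with simplified names and without the evsel plumbing.

#define _GNU_SOURCE          /* for asprintf() */
#include <stdio.h>
#include <stdlib.h>

struct dump_file {
    char *filename;
    FILE *fp;
    unsigned long long samples;
};

static struct dump_file *dump_file__new(const char *data_path, const char *event_name)
{
    struct dump_file *d = calloc(1, sizeof(*d));

    if (d == NULL)
        return NULL;

    if (asprintf(&d->filename, "%s.%s.dump", data_path, event_name) < 0)
        goto out_free;

    d->fp = fopen(d->filename, "w");
    if (d->fp == NULL)
        goto out_free_filename;

    return d;

out_free_filename:
    free(d->filename);
out_free:
    free(d);
    return NULL;
}

static void dump_file__delete(struct dump_file *d)
{
    fclose(d->fp);
    free(d->filename);
    free(d);
}

At exit, perf_script__exit_per_event_dump_stats() prints one "[ perf script: Wrote ... ]" line per file, using fstat() on the still-open stream to report its size.
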
@@ -172,7 +175,7 @@ static int print_free_counters_hint; struct perf_stat { bool record; - struct perf_data_file file; + struct perf_data data; struct perf_session *session; u64 bytes_written; struct perf_tool tool; @@ -192,6 +195,11 @@ static struct perf_stat_config stat_config = { .scale = true, }; +static bool is_duration_time(struct perf_evsel *evsel) +{ + return !strcmp(evsel->name, "duration_time"); +} + static inline void diff_timespec(struct timespec *r, struct timespec *a, struct timespec *b) { @@ -245,7 +253,7 @@ static int create_perf_stat_counter(struct perf_evsel *evsel) * by attr->sample_type != 0, and we can't run it on * stat sessions. */ - if (!(STAT_RECORD && perf_stat.file.is_pipe)) + if (!(STAT_RECORD && perf_stat.data.is_pipe)) attr->sample_type = PERF_SAMPLE_IDENTIFIER; /* @@ -287,7 +295,7 @@ static int process_synthesized_event(struct perf_tool *tool __maybe_unused, struct perf_sample *sample __maybe_unused, struct machine *machine __maybe_unused) { - if (perf_data_file__write(&perf_stat.file, event, event->header.size) < 0) { + if (perf_data__write(&perf_stat.data, event, event->header.size) < 0) { pr_err("failed to write perf data, error: %m\n"); return -1; } @@ -407,6 +415,8 @@ static void process_interval(void) pr_err("failed to write stat round event\n"); } + init_stats(&walltime_nsecs_stats); + update_stats(&walltime_nsecs_stats, stat_config.interval * 1000000); print_counters(&rs, 0, NULL); } @@ -582,6 +592,32 @@ static bool perf_evsel__should_store_id(struct perf_evsel *counter) return STAT_RECORD || counter->attr.read_format & PERF_FORMAT_ID; } +static struct perf_evsel *perf_evsel__reset_weak_group(struct perf_evsel *evsel) +{ + struct perf_evsel *c2, *leader; + bool is_open = true; + + leader = evsel->leader; + pr_debug("Weak group for %s/%d failed\n", + leader->name, leader->nr_members); + + /* + * for_each_group_member doesn't work here because it doesn't + * include the first entry. + */ + evlist__for_each_entry(evsel_list, c2) { + if (c2 == evsel) + is_open = false; + if (c2->leader == leader) { + if (is_open) + perf_evsel__close(c2); + c2->leader = c2; + c2->nr_members = 0; + } + } + return leader; +} + static int __run_perf_stat(int argc, const char **argv) { int interval = stat_config.interval; @@ -592,7 +628,7 @@ static int __run_perf_stat(int argc, const char **argv) size_t l; int status = 0; const bool forks = (argc > 0); - bool is_pipe = STAT_RECORD ? perf_stat.file.is_pipe : false; + bool is_pipe = STAT_RECORD ? perf_stat.data.is_pipe : false; struct perf_evsel_config_term *err_term; if (interval) { @@ -618,6 +654,15 @@ static int __run_perf_stat(int argc, const char **argv) evlist__for_each_entry(evsel_list, counter) { try_again: if (create_perf_stat_counter(counter) < 0) { + + /* Weak group failed. Reset the group. */ + if ((errno == EINVAL || errno == EBADF) && + counter->leader != counter && + counter->weak_group) { + counter = perf_evsel__reset_weak_group(counter); + goto try_again; + } + /* * PPC returns ENXIO for HW counters until 2.6.37 * (behavior changed with commit b0a873e). 
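perf_evsel__reset_weak_group() above implements the "weak group" fallback described in the merge log: if opening a grouped counter fails with EINVAL or EBADF and the group was marked weak, the members already opened are closed, every member becomes its own leader, and the open is retried ungrouped. The following is a minimal, self-contained C model of that control flow; the types and the behaviour of try_open() (which simply pretends the kernel rejects any grouped event) are invented for illustration and are not perf's real data structures.

/*
 * Toy model of the weak-group fallback: events carry a leader pointer;
 * when a grouped open fails, close already-opened members, make every
 * member its own leader, and retry the opens ungrouped.
 */
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

struct evsel {
	const char *name;
	struct evsel *leader;	/* points to itself when ungrouped */
	bool open;
};

/* Stand-in for perf_event_open(): pretend any grouped event is rejected. */
static int try_open(struct evsel *e)
{
	if (e->leader != e) {
		errno = EINVAL;
		return -1;
	}
	e->open = true;
	return 0;
}

static void reset_weak_group(struct evsel *evlist, int n, struct evsel *failed)
{
	struct evsel *leader = failed->leader;
	bool is_open = true;
	int i;

	for (i = 0; i < n; i++) {
		struct evsel *c = &evlist[i];

		if (c == failed)
			is_open = false;
		if (c->leader == leader) {
			if (is_open)		/* opened before the failure */
				c->open = false;
			c->leader = c;		/* break the group apart */
		}
	}
}

int main(void)
{
	struct evsel ev[3] = {
		{ "cycles",       NULL, false },
		{ "instructions", NULL, false },
		{ "branches",     NULL, false },
	};
	int i;

	for (i = 0; i < 3; i++)			/* start as one group led by ev[0] */
		ev[i].leader = &ev[0];

	for (i = 0; i < 3; i++) {
		if (try_open(&ev[i]) < 0 && errno == EINVAL &&
		    ev[i].leader != &ev[i]) {
			reset_weak_group(ev, 3, &ev[i]);
			i = -1;			/* retry from the start, now ungrouped */
		}
	}

	for (i = 0; i < 3; i++)
		printf("%s: %s\n", ev[i].name, ev[i].open ? "open" : "closed");
	return 0;
}

The real __run_perf_stat() does not restart the whole list: it resumes from the returned leader via "counter = perf_evsel__reset_weak_group(counter); goto try_again;", but the end result is the same, with the former group members reopened as independent events.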
@@ -674,10 +719,10 @@ try_again: } if (STAT_RECORD) { - int err, fd = perf_data_file__fd(&perf_stat.file); + int err, fd = perf_data__fd(&perf_stat.data); if (is_pipe) { - err = perf_header__write_pipe(perf_data_file__fd(&perf_stat.file)); + err = perf_header__write_pipe(perf_data__fd(&perf_stat.data)); } else { err = perf_session__write_header(perf_stat.session, evsel_list, fd, false); @@ -800,7 +845,7 @@ static void print_noise(struct perf_evsel *evsel, double avg) if (run_count == 1) return; - ps = evsel->priv; + ps = evsel->stats; print_noise_pct(stddev_stats(&ps->res_stats[0]), avg); } @@ -1199,7 +1244,7 @@ static void printout(int id, int nr, struct perf_evsel *counter, double uval, perf_stat__print_shadow_stats(counter, uval, first_shadow_cpu(counter, id), - &out); + &out, &metric_events); if (!csv_output && !metric_only) { print_noise(counter, noise); print_running(run, ena); @@ -1222,8 +1267,7 @@ static void aggr_update_shadow(void) continue; val += perf_counts(counter->counts, cpu, 0)->val; } - val = val * counter->scale; - perf_stat__update_shadow_stats(counter, &val, + perf_stat__update_shadow_stats(counter, val, first_shadow_cpu(counter, id)); } } @@ -1325,6 +1369,9 @@ static void print_aggr(char *prefix) ad.id = id = aggr_map->map[s]; first = true; evlist__for_each_entry(evsel_list, counter) { + if (is_duration_time(counter)) + continue; + ad.val = ad.ena = ad.run = 0; ad.nr = 0; if (!collect_data(counter, aggr_cb, &ad)) @@ -1384,7 +1431,7 @@ static void counter_aggr_cb(struct perf_evsel *counter, void *data, bool first __maybe_unused) { struct caggr_data *cd = data; - struct perf_stat_evsel *ps = counter->priv; + struct perf_stat_evsel *ps = counter->stats; cd->avg += avg_stats(&ps->res_stats[0]); cd->avg_enabled += avg_stats(&ps->res_stats[1]); @@ -1468,6 +1515,8 @@ static void print_no_aggr_metric(char *prefix) if (prefix) fputs(prefix, stat_config.output); evlist__for_each_entry(evsel_list, counter) { + if (is_duration_time(counter)) + continue; if (first) { aggr_printout(counter, cpu, 0); first = false; @@ -1522,6 +1571,8 @@ static void print_metric_headers(const char *prefix, bool no_indent) /* Print metrics headers only */ evlist__for_each_entry(evsel_list, counter) { + if (is_duration_time(counter)) + continue; os.evsel = counter; out.ctx = &os; out.print_metric = print_metric_header; @@ -1530,7 +1581,8 @@ static void print_metric_headers(const char *prefix, bool no_indent) os.evsel = counter; perf_stat__print_shadow_stats(counter, 0, 0, - &out); + &out, + &metric_events); } fputc('\n', stat_config.output); } @@ -1643,7 +1695,7 @@ static void print_counters(struct timespec *ts, int argc, const char **argv) char buf[64], *prefix = NULL; /* Do not print anything if we record to the pipe. 
*/ - if (STAT_RECORD && perf_stat.file.is_pipe) + if (STAT_RECORD && perf_stat.data.is_pipe) return; if (interval) @@ -1668,12 +1720,18 @@ static void print_counters(struct timespec *ts, int argc, const char **argv) print_aggr(prefix); break; case AGGR_THREAD: - evlist__for_each_entry(evsel_list, counter) + evlist__for_each_entry(evsel_list, counter) { + if (is_duration_time(counter)) + continue; print_aggr_thread(counter, prefix); + } break; case AGGR_GLOBAL: - evlist__for_each_entry(evsel_list, counter) + evlist__for_each_entry(evsel_list, counter) { + if (is_duration_time(counter)) + continue; print_counter_aggr(counter, prefix); + } if (metric_only) fputc('\n', stat_config.output); break; @@ -1681,8 +1739,11 @@ static void print_counters(struct timespec *ts, int argc, const char **argv) if (metric_only) print_no_aggr_metric(prefix); else { - evlist__for_each_entry(evsel_list, counter) + evlist__for_each_entry(evsel_list, counter) { + if (is_duration_time(counter)) + continue; print_counter(counter, prefix); + } } break; case AGGR_UNSET: @@ -1754,6 +1815,13 @@ static int enable_metric_only(const struct option *opt __maybe_unused, return 0; } +static int parse_metric_groups(const struct option *opt, + const char *str, + int unset __maybe_unused) +{ + return metricgroup__parse_groups(opt, str, &metric_events); +} + static const struct option stat_options[] = { OPT_BOOLEAN('T', "transaction", &transaction_run, "hardware transaction statistics"), @@ -1819,6 +1887,9 @@ static const struct option stat_options[] = { "measure topdown level 1 statistics"), OPT_BOOLEAN(0, "smi-cost", &smi_cost, "measure SMI cost"), + OPT_CALLBACK('M', "metrics", &evsel_list, "metric/metric group list", + "monitor specified metrics or metric groups (separated by ,)", + parse_metric_groups), OPT_END() }; @@ -2334,20 +2405,20 @@ static void init_features(struct perf_session *session) static int __cmd_record(int argc, const char **argv) { struct perf_session *session; - struct perf_data_file *file = &perf_stat.file; + struct perf_data *data = &perf_stat.data; argc = parse_options(argc, argv, stat_options, stat_record_usage, PARSE_OPT_STOP_AT_NON_OPTION); if (output_name) - file->path = output_name; + data->file.path = output_name; if (run_count != 1 || forever) { pr_err("Cannot use -r option with perf stat record.\n"); return -1; } - session = perf_session__new(file, false, NULL); + session = perf_session__new(data, false, NULL); if (session == NULL) { pr_err("Perf session creation failed.\n"); return -1; @@ -2405,7 +2476,7 @@ int process_stat_config_event(struct perf_tool *tool, if (st->aggr_mode != AGGR_UNSET) stat_config.aggr_mode = st->aggr_mode; - if (perf_stat.file.is_pipe) + if (perf_stat.data.is_pipe) perf_stat_init_aggr_mode(); else perf_stat_init_aggr_mode_file(st); @@ -2513,10 +2584,10 @@ static int __cmd_report(int argc, const char **argv) input_name = "perf.data"; } - perf_stat.file.path = input_name; - perf_stat.file.mode = PERF_DATA_MODE_READ; + perf_stat.data.file.path = input_name; + perf_stat.data.mode = PERF_DATA_MODE_READ; - session = perf_session__new(&perf_stat.file, false, &perf_stat.tool); + session = perf_session__new(&perf_stat.data, false, &perf_stat.tool); if (session == NULL) return -1; @@ -2787,7 +2858,7 @@ int cmd_stat(int argc, const char **argv) * records, but the need to suppress the kptr_restrict messages in older * tools remain -acme */ - int fd = perf_data_file__fd(&perf_stat.file); + int fd = perf_data__fd(&perf_stat.data); int err = perf_event__synthesize_kernel_mmap((void 
*)&perf_stat, process_synthesized_event, &perf_stat.session->machines.host); @@ -2801,7 +2872,7 @@ int cmd_stat(int argc, const char **argv) pr_err("failed to write stat round event\n"); } - if (!perf_stat.file.is_pipe) { + if (!perf_stat.data.is_pipe) { perf_stat.session->header.data_size += perf_stat.bytes_written; perf_session__write_header(perf_stat.session, evsel_list, fd, true); } diff --git a/tools/perf/builtin-timechart.c b/tools/perf/builtin-timechart.c index 4e2e61695986..813698a9b8c7 100644 --- a/tools/perf/builtin-timechart.c +++ b/tools/perf/builtin-timechart.c @@ -1601,13 +1601,15 @@ static int __cmd_timechart(struct timechart *tchart, const char *output_name) { "syscalls:sys_exit_pselect6", process_exit_poll }, { "syscalls:sys_exit_select", process_exit_poll }, }; - struct perf_data_file file = { - .path = input_name, - .mode = PERF_DATA_MODE_READ, - .force = tchart->force, + struct perf_data data = { + .file = { + .path = input_name, + }, + .mode = PERF_DATA_MODE_READ, + .force = tchart->force, }; - struct perf_session *session = perf_session__new(&file, false, + struct perf_session *session = perf_session__new(&data, false, &tchart->tool); int ret = -EINVAL; @@ -1617,7 +1619,7 @@ static int __cmd_timechart(struct timechart *tchart, const char *output_name) symbol__init(&session->header.env); (void)perf_header__process_sections(&session->header, - perf_data_file__fd(session->file), + perf_data__fd(session->data), tchart, process_header); @@ -1732,8 +1734,10 @@ static int timechart__io_record(int argc, const char **argv) if (rec_argv == NULL) return -ENOMEM; - if (asprintf(&filter, "common_pid != %d", getpid()) < 0) + if (asprintf(&filter, "common_pid != %d", getpid()) < 0) { + free(rec_argv); return -ENOMEM; + } p = rec_argv; for (i = 0; i < common_args_nr; i++) diff --git a/tools/perf/builtin-top.c b/tools/perf/builtin-top.c index ee954bde7e3e..477a8699f0b5 100644 --- a/tools/perf/builtin-top.c +++ b/tools/perf/builtin-top.c @@ -958,8 +958,16 @@ static int __cmd_top(struct perf_top *top) if (perf_session__register_idle_thread(top->session) < 0) goto out_delete; + if (top->nr_threads_synthesize > 1) + perf_set_multithreaded(); + machine__synthesize_threads(&top->session->machines.host, &opts->target, - top->evlist->threads, false, opts->proc_map_timeout); + top->evlist->threads, false, + opts->proc_map_timeout, + top->nr_threads_synthesize); + + if (top->nr_threads_synthesize > 1) + perf_set_singlethreaded(); if (perf_hpp_list.socket) { ret = perf_env__read_cpu_topology_map(&perf_env); @@ -1112,6 +1120,7 @@ int cmd_top(int argc, const char **argv) }, .max_stack = sysctl_perf_event_max_stack, .sym_pcnt_filter = 5, + .nr_threads_synthesize = UINT_MAX, }; struct record_opts *opts = &top.record_opts; struct target *target = &opts->target; @@ -1221,6 +1230,8 @@ int cmd_top(int argc, const char **argv) OPT_BOOLEAN(0, "hierarchy", &symbol_conf.report_hierarchy, "Show entries in a hierarchy"), OPT_BOOLEAN(0, "force", &symbol_conf.force, "don't complain, do it"), + OPT_UINTEGER(0, "num-thread-synthesize", &top.nr_threads_synthesize, + "number of thread to run event synthesize"), OPT_END() }; const char * const top_usage[] = { diff --git a/tools/perf/builtin-trace.c b/tools/perf/builtin-trace.c index d5d7fff1c211..f2757d38c7d7 100644 --- a/tools/perf/builtin-trace.c +++ b/tools/perf/builtin-trace.c @@ -578,7 +578,6 @@ static struct syscall_fmt { } syscall_fmts[] = { { .name = "access", .arg = { [1] = { .scnprintf = SCA_ACCMODE, /* mode */ }, }, }, - { .name = "arch_prctl", .alias = 
"prctl", }, { .name = "bpf", .arg = { [0] = STRARRAY(cmd, bpf_cmd), }, }, { .name = "brk", .hexret = true, @@ -634,6 +633,12 @@ static struct syscall_fmt { #else [2] = { .scnprintf = SCA_HEX, /* arg */ }, }, }, #endif + { .name = "kcmp", .nr_args = 5, + .arg = { [0] = { .name = "pid1", .scnprintf = SCA_PID, }, + [1] = { .name = "pid2", .scnprintf = SCA_PID, }, + [2] = { .name = "type", .scnprintf = SCA_KCMP_TYPE, }, + [3] = { .name = "idx1", .scnprintf = SCA_KCMP_IDX, }, + [4] = { .name = "idx2", .scnprintf = SCA_KCMP_IDX, }, }, }, { .name = "keyctl", .arg = { [0] = STRARRAY(option, keyctl_options), }, }, { .name = "kill", @@ -703,6 +708,10 @@ static struct syscall_fmt { [3] = { .scnprintf = SCA_INT, /* pkey */ }, }, }, { .name = "poll", .timeout = true, }, { .name = "ppoll", .timeout = true, }, + { .name = "prctl", .alias = "arch_prctl", + .arg = { [0] = { .scnprintf = SCA_PRCTL_OPTION, /* option */ }, + [1] = { .scnprintf = SCA_PRCTL_ARG2, /* arg2 */ }, + [2] = { .scnprintf = SCA_PRCTL_ARG3, /* arg3 */ }, }, }, { .name = "pread", .alias = "pread64", }, { .name = "preadv", .alias = "pread", }, { .name = "prlimit64", @@ -985,6 +994,23 @@ size_t syscall_arg__scnprintf_fd(char *bf, size_t size, struct syscall_arg *arg) return printed; } +size_t pid__scnprintf_fd(struct trace *trace, pid_t pid, int fd, char *bf, size_t size) +{ + size_t printed = scnprintf(bf, size, "%d", fd); + struct thread *thread = machine__find_thread(trace->host, pid, pid); + + if (thread) { + const char *path = thread__fd_path(thread, fd, trace); + + if (path) + printed += scnprintf(bf + printed, size - printed, "<%s>", path); + + thread__put(thread); + } + + return printed; +} + static size_t syscall_arg__scnprintf_close_fd(char *bf, size_t size, struct syscall_arg *arg) { @@ -1131,7 +1157,7 @@ static int trace__symbols_init(struct trace *trace, struct perf_evlist *evlist) err = __machine__synthesize_threads(trace->host, &trace->tool, &trace->opts.target, evlist->threads, trace__tool_process, false, - trace->opts.proc_map_timeout); + trace->opts.proc_map_timeout, 1); if (err) symbol__exit(); @@ -1836,16 +1862,14 @@ out_dump: goto out_put; } -static void bpf_output__printer(enum binary_printer_ops op, - unsigned int val, void *extra) +static int bpf_output__printer(enum binary_printer_ops op, + unsigned int val, void *extra __maybe_unused, FILE *fp) { - FILE *output = extra; unsigned char ch = (unsigned char)val; switch (op) { case BINARY_PRINT_CHAR_DATA: - fprintf(output, "%c", isprint(ch) ? ch : '.'); - break; + return fprintf(fp, "%c", isprint(ch) ? 
ch : '.'); case BINARY_PRINT_DATA_BEGIN: case BINARY_PRINT_LINE_BEGIN: case BINARY_PRINT_ADDR: @@ -1858,13 +1882,15 @@ static void bpf_output__printer(enum binary_printer_ops op, default: break; } + + return 0; } static void bpf_output__fprintf(struct trace *trace, struct perf_sample *sample) { - print_binary(sample->raw_data, sample->raw_size, 8, - bpf_output__printer, trace->output); + binary__fprintf(sample->raw_data, sample->raw_size, 8, + bpf_output__printer, NULL, trace->output); } static int trace__event_handler(struct trace *trace, struct perf_evsel *evsel, @@ -2086,6 +2112,7 @@ static int trace__record(struct trace *trace, int argc, const char **argv) rec_argv[j++] = "syscalls:sys_enter,syscalls:sys_exit"; else { pr_err("Neither raw_syscalls nor syscalls events exist.\n"); + free(rec_argv); return -1; } } @@ -2538,10 +2565,12 @@ static int trace__replay(struct trace *trace) const struct perf_evsel_str_handler handlers[] = { { "probe:vfs_getname", trace__vfs_getname, }, }; - struct perf_data_file file = { - .path = input_name, - .mode = PERF_DATA_MODE_READ, - .force = trace->force, + struct perf_data data = { + .file = { + .path = input_name, + }, + .mode = PERF_DATA_MODE_READ, + .force = trace->force, }; struct perf_session *session; struct perf_evsel *evsel; @@ -2564,7 +2593,7 @@ static int trace__replay(struct trace *trace) /* add tid to output */ trace->multiple_threads = true; - session = perf_session__new(&file, false, &trace->tool); + session = perf_session__new(&data, false, &trace->tool); if (session == NULL) return -1; @@ -2740,20 +2769,23 @@ DEFINE_RESORT_RB(threads, (thread__nr_events(a->thread->priv) < thread__nr_event static size_t trace__fprintf_thread_summary(struct trace *trace, FILE *fp) { - DECLARE_RESORT_RB_MACHINE_THREADS(threads, trace->host); size_t printed = trace__fprintf_threads_header(fp); struct rb_node *nd; + int i; - if (threads == NULL) { - fprintf(fp, "%s", "Error sorting output by nr_events!\n"); - return 0; - } + for (i = 0; i < THREADS__TABLE_SIZE; i++) { + DECLARE_RESORT_RB_MACHINE_THREADS(threads, trace->host, i); - resort_rb__for_each_entry(nd, threads) - printed += trace__fprintf_thread(fp, threads_entry->thread, trace); + if (threads == NULL) { + fprintf(fp, "%s", "Error sorting output by nr_events!\n"); + return 0; + } - resort_rb__delete(threads); + resort_rb__for_each_entry(nd, threads) + printed += trace__fprintf_thread(fp, threads_entry->thread, trace); + resort_rb__delete(threads); + } return printed; } diff --git a/tools/perf/check-headers.sh b/tools/perf/check-headers.sh index 50cd6228f506..77406d25e521 100755 --- a/tools/perf/check-headers.sh +++ b/tools/perf/check-headers.sh @@ -5,8 +5,10 @@ HEADERS=' include/uapi/drm/drm.h include/uapi/drm/i915_drm.h include/uapi/linux/fcntl.h +include/uapi/linux/kcmp.h include/uapi/linux/kvm.h include/uapi/linux/perf_event.h +include/uapi/linux/prctl.h include/uapi/linux/sched.h include/uapi/linux/stat.h include/uapi/linux/vhost.h @@ -58,6 +60,11 @@ check () { } +# Check if we have the kernel headers (tools/perf/../../include), else +# we're probably on a detached tarball, so no point in trying to check +# differences. 
+test -d ../../include || exit 0 + # simple diff check for i in $HEADERS; do check $i -B diff --git a/tools/perf/perf.h b/tools/perf/perf.h index f75f3dec7485..2357f4ccc9c7 100644 --- a/tools/perf/perf.h +++ b/tools/perf/perf.h @@ -66,6 +66,7 @@ struct record_opts { unsigned int user_freq; u64 branch_stack; u64 sample_intr_regs; + u64 sample_user_regs; u64 default_interval; u64 user_interval; size_t auxtrace_snapshot_size; diff --git a/tools/perf/pmu-events/arch/x86/broadwell/bdw-metrics.json b/tools/perf/pmu-events/arch/x86/broadwell/bdw-metrics.json new file mode 100644 index 000000000000..00bfdb5c5acb --- /dev/null +++ b/tools/perf/pmu-events/arch/x86/broadwell/bdw-metrics.json @@ -0,0 +1,164 @@ +[ + { + "BriefDescription": "Instructions Per Cycle (per logical thread)", + "MetricExpr": "INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD", + "MetricGroup": "TopDownL1", + "MetricName": "IPC" + }, + { + "BriefDescription": "Uops Per Instruction", + "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / INST_RETIRED.ANY", + "MetricGroup": "Pipeline", + "MetricName": "UPI" + }, + { + "BriefDescription": "Rough Estimation of fraction of fetched lines bytes that were likely consumed by program instructions", + "MetricExpr": "min( 1 , IDQ.MITE_UOPS / ( (UOPS_RETIRED.RETIRE_SLOTS / INST_RETIRED.ANY) * 16 * ( ICACHE.HIT + ICACHE.MISSES ) / 4.0 ) )", + "MetricGroup": "Frontend", + "MetricName": "IFetch_Line_Utilization" + }, + { + "BriefDescription": "Fraction of Uops delivered by the DSB (aka Decoded Icache; or Uop Cache)", + "MetricExpr": "IDQ.DSB_UOPS / ( IDQ.DSB_UOPS + LSD.UOPS + IDQ.MITE_UOPS + IDQ.MS_UOPS )", + "MetricGroup": "DSB; Frontend_Bandwidth", + "MetricName": "DSB_Coverage" + }, + { + "BriefDescription": "Cycles Per Instruction (threaded)", + "MetricExpr": "1 / (INST_RETIRED.ANY / cycles)", + "MetricGroup": "Pipeline;Summary", + "MetricName": "CPI" + }, + { + "BriefDescription": "Per-thread actual clocks when the logical processor is active. 
This is called 'Clockticks' in VTune.", + "MetricExpr": "CPU_CLK_UNHALTED.THREAD", + "MetricGroup": "Summary", + "MetricName": "CLKS" + }, + { + "BriefDescription": "Total issue-pipeline slots", + "MetricExpr": "4*(( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else cycles)", + "MetricGroup": "TopDownL1", + "MetricName": "SLOTS" + }, + { + "BriefDescription": "Total number of retired Instructions", + "MetricExpr": "INST_RETIRED.ANY", + "MetricGroup": "Summary", + "MetricName": "Instructions" + }, + { + "BriefDescription": "Instructions Per Cycle (per physical core)", + "MetricExpr": "INST_RETIRED.ANY / (( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else cycles)", + "MetricGroup": "SMT", + "MetricName": "CoreIPC" + }, + { + "BriefDescription": "Instruction-Level-Parallelism (average number of uops executed when there is at least 1 uop executed)", + "MetricExpr": "UOPS_EXECUTED.THREAD / (( cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ / 2) if #SMT_on else UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC)", + "MetricGroup": "Pipeline;Ports_Utilization", + "MetricName": "ILP" + }, + { + "BriefDescription": "Average Branch Address Clear Cost (fraction of cycles)", + "MetricExpr": "2* (( RS_EVENTS.EMPTY_CYCLES - ICACHE.IFDATA_STALL - (( 14 * ITLB_MISSES.STLB_HIT + cpu@ITLB_MISSES.WALK_DURATION\\,cmask\\=1@ + 7* ITLB_MISSES.WALK_COMPLETED )) ) / RS_EVENTS.EMPTY_END)", + "MetricGroup": "Unknown_Branches", + "MetricName": "BAClear_Cost" + }, + { + "BriefDescription": "Core actual clocks when any thread is active on the physical core", + "MetricExpr": "( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else CPU_CLK_UNHALTED.THREAD", + "MetricGroup": "SMT", + "MetricName": "CORE_CLKS" + }, + { + "BriefDescription": "Actual Average Latency for L1 data-cache miss demand loads", + "MetricExpr": "L1D_PEND_MISS.PENDING / ( MEM_LOAD_UOPS_RETIRED.L1_MISS + mem_load_uops_retired.hit_lfb )", + "MetricGroup": "Memory_Bound;Memory_Lat", + "MetricName": "Load_Miss_Real_Latency" + }, + { + "BriefDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least 1 such miss)", + "MetricExpr": "L1D_PEND_MISS.PENDING / (( cpu@l1d_pend_miss.pending_cycles\\,any\\=1@ / 2) if #SMT_on else L1D_PEND_MISS.PENDING_CYCLES)", + "MetricGroup": "Memory_Bound;Memory_BW", + "MetricName": "MLP" + }, + { + "BriefDescription": "Utilization of the core's Page Walker(s) serving STLB misses triggered by instruction/Load/Store accesses", + "MetricExpr": "( cpu@ITLB_MISSES.WALK_DURATION\\,cmask\\=1@ + cpu@DTLB_LOAD_MISSES.WALK_DURATION\\,cmask\\=1@ + cpu@DTLB_STORE_MISSES.WALK_DURATION\\,cmask\\=1@ + 7*(DTLB_STORE_MISSES.WALK_COMPLETED+DTLB_LOAD_MISSES.WALK_COMPLETED+ITLB_MISSES.WALK_COMPLETED)) / (( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else cycles)", + "MetricGroup": "TLB", + "MetricName": "Page_Walks_Utilization" + }, + { + "BriefDescription": "Average CPU Utilization", + "MetricExpr": "CPU_CLK_UNHALTED.REF_TSC / msr@tsc@", + "MetricGroup": "Summary", + "MetricName": "CPU_Utilization" + }, + { + "BriefDescription": "Giga Floating Point Operations Per Second", + "MetricExpr": "(( 1*( FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE ) + 2* FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4*( FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE ) + 8* FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE )) / 1000000000 / duration_time", + "MetricGroup": "FLOPS;Summary", + "MetricName": "GFLOPs" + }, + { + "BriefDescription": "Average Frequency Utilization relative nominal frequency", 
+ "MetricExpr": "CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC", + "MetricGroup": "Power", + "MetricName": "Turbo_Utilization" + }, + { + "BriefDescription": "Fraction of cycles where both hardware threads were active", + "MetricExpr": "1 - CPU_CLK_THREAD_UNHALTED.ONE_THREAD_ACTIVE / ( CPU_CLK_THREAD_UNHALTED.REF_XCLK_ANY / 2 ) if #SMT_on else 0", + "MetricGroup": "SMT;Summary", + "MetricName": "SMT_2T_Utilization" + }, + { + "BriefDescription": "Fraction of cycles spent in Kernel mode", + "MetricExpr": "CPU_CLK_UNHALTED.REF_TSC:u / CPU_CLK_UNHALTED.REF_TSC", + "MetricGroup": "Summary", + "MetricName": "Kernel_Utilization" + }, + { + "BriefDescription": "C3 residency percent per core", + "MetricExpr": "(cstate_core@c3\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C3_Core_Residency" + }, + { + "BriefDescription": "C6 residency percent per core", + "MetricExpr": "(cstate_core@c6\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C6_Core_Residency" + }, + { + "BriefDescription": "C7 residency percent per core", + "MetricExpr": "(cstate_core@c7\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C7_Core_Residency" + }, + { + "BriefDescription": "C2 residency percent per package", + "MetricExpr": "(cstate_pkg@c2\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C2_Pkg_Residency" + }, + { + "BriefDescription": "C3 residency percent per package", + "MetricExpr": "(cstate_pkg@c3\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C3_Pkg_Residency" + }, + { + "BriefDescription": "C6 residency percent per package", + "MetricExpr": "(cstate_pkg@c6\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C6_Pkg_Residency" + }, + { + "BriefDescription": "C7 residency percent per package", + "MetricExpr": "(cstate_pkg@c7\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C7_Pkg_Residency" + } +] diff --git a/tools/perf/pmu-events/arch/x86/broadwellde/bdwde-metrics.json b/tools/perf/pmu-events/arch/x86/broadwellde/bdwde-metrics.json new file mode 100644 index 000000000000..49c5f123d811 --- /dev/null +++ b/tools/perf/pmu-events/arch/x86/broadwellde/bdwde-metrics.json @@ -0,0 +1,164 @@ +[ + { + "BriefDescription": "Instructions Per Cycle (per logical thread)", + "MetricExpr": "INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD", + "MetricGroup": "TopDownL1", + "MetricName": "IPC" + }, + { + "BriefDescription": "Uops Per Instruction", + "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / INST_RETIRED.ANY", + "MetricGroup": "Pipeline", + "MetricName": "UPI" + }, + { + "BriefDescription": "Rough Estimation of fraction of fetched lines bytes that were likely consumed by program instructions", + "MetricExpr": "min( 1 , IDQ.MITE_UOPS / ( UOPS_RETIRED.RETIRE_SLOTS / INST_RETIRED.ANY * 16 * ( ICACHE.HIT + ICACHE.MISSES ) / 4.0 ) )", + "MetricGroup": "Frontend", + "MetricName": "IFetch_Line_Utilization" + }, + { + "BriefDescription": "Fraction of Uops delivered by the DSB (aka Decoded Icache; or Uop Cache)", + "MetricExpr": "IDQ.DSB_UOPS / ( IDQ.DSB_UOPS + LSD.UOPS + IDQ.MITE_UOPS + IDQ.MS_UOPS )", + "MetricGroup": "DSB; Frontend_Bandwidth", + "MetricName": "DSB_Coverage" + }, + { + "BriefDescription": "Cycles Per Instruction (threaded)", + "MetricExpr": "1 / INST_RETIRED.ANY / cycles", + "MetricGroup": "Pipeline;Summary", + "MetricName": "CPI" + }, + { + "BriefDescription": "Per-thread actual clocks when the logical processor is active. 
This is called 'Clockticks' in VTune.", + "MetricExpr": "CPU_CLK_UNHALTED.THREAD", + "MetricGroup": "Summary", + "MetricName": "CLKS" + }, + { + "BriefDescription": "Total issue-pipeline slots", + "MetricExpr": "4*( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else cycles", + "MetricGroup": "TopDownL1", + "MetricName": "SLOTS" + }, + { + "BriefDescription": "Total number of retired Instructions", + "MetricExpr": "INST_RETIRED.ANY", + "MetricGroup": "Summary", + "MetricName": "Instructions" + }, + { + "BriefDescription": "Instructions Per Cycle (per physical core)", + "MetricExpr": "INST_RETIRED.ANY / ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else cycles", + "MetricGroup": "SMT", + "MetricName": "CoreIPC" + }, + { + "BriefDescription": "Instruction-Level-Parallelism (average number of uops executed when there is at least 1 uop executed)", + "MetricExpr": "UOPS_EXECUTED.THREAD / ( cpu@uops_executed.core\\,cmask\\=1@ / 2) if #SMT_on else UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC", + "MetricGroup": "Pipeline;Ports_Utilization", + "MetricName": "ILP" + }, + { + "BriefDescription": "Average Branch Address Clear Cost (fraction of cycles)", + "MetricExpr": "2* ( RS_EVENTS.EMPTY_CYCLES - ICACHE.IFDATA_STALL - ( 14 * ITLB_MISSES.STLB_HIT + cpu@ITLB_MISSES.WALK_DURATION\\,cmask\\=1@ + 7* ITLB_MISSES.WALK_COMPLETED ) ) / RS_EVENTS.EMPTY_END", + "MetricGroup": "Unknown_Branches", + "MetricName": "BAClear_Cost" + }, + { + "BriefDescription": "Core actual clocks when any thread is active on the physical core", + "MetricExpr": "( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else CPU_CLK_UNHALTED.THREAD", + "MetricGroup": "SMT", + "MetricName": "CORE_CLKS" + }, + { + "BriefDescription": "Actual Average Latency for L1 data-cache miss demand loads", + "MetricExpr": "L1D_PEND_MISS.PENDING / ( MEM_LOAD_UOPS_RETIRED.L1_MISS + mem_load_uops_retired.hit_lfb )", + "MetricGroup": "Memory_Bound;Memory_Lat", + "MetricName": "Load_Miss_Real_Latency" + }, + { + "BriefDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least 1 such miss)", + "MetricExpr": "L1D_PEND_MISS.PENDING / ( cpu@l1d_pend_miss.pending_cycles\\,any\\=1@ / 2) if #SMT_on else L1D_PEND_MISS.PENDING_CYCLES", + "MetricGroup": "Memory_Bound;Memory_BW", + "MetricName": "MLP" + }, + { + "BriefDescription": "Utilization of the core's Page Walker(s) serving STLB misses triggered by instruction/Load/Store accesses", + "MetricExpr": "( cpu@ITLB_MISSES.WALK_DURATION\\,cmask\\=1@ + cpu@DTLB_LOAD_MISSES.WALK_DURATION\\,cmask\\=1@ + cpu@DTLB_STORE_MISSES.WALK_DURATION\\,cmask\\=1@ + 7*(DTLB_STORE_MISSES.WALK_COMPLETED+DTLB_LOAD_MISSES.WALK_COMPLETED+ITLB_MISSES.WALK_COMPLETED)) / ( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else cycles", + "MetricGroup": "TLB", + "MetricName": "Page_Walks_Utilization" + }, + { + "BriefDescription": "Average CPU Utilization", + "MetricExpr": "CPU_CLK_UNHALTED.REF_TSC / msr@tsc@", + "MetricGroup": "Summary", + "MetricName": "CPU_Utilization" + }, + { + "BriefDescription": "Giga Floating Point Operations Per Second", + "MetricExpr": "( 1*( FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE ) + 2* FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4*( FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE ) + 8* FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE ) / 1000000000 / duration_time", + "MetricGroup": "FLOPS;Summary", + "MetricName": "GFLOPs" + }, + { + "BriefDescription": "Average Frequency Utilization relative nominal frequency", + "MetricExpr": 
"CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC", + "MetricGroup": "Power", + "MetricName": "Turbo_Utilization" + }, + { + "BriefDescription": "Fraction of cycles where both hardware threads were active", + "MetricExpr": "1 - CPU_CLK_THREAD_UNHALTED.ONE_THREAD_ACTIVE / ( CPU_CLK_THREAD_UNHALTED.REF_XCLK_ANY / 2 ) if #SMT_on else 0", + "MetricGroup": "SMT;Summary", + "MetricName": "SMT_2T_Utilization" + }, + { + "BriefDescription": "Fraction of cycles spent in Kernel mode", + "MetricExpr": "CPU_CLK_UNHALTED.REF_TSC:u / CPU_CLK_UNHALTED.REF_TSC", + "MetricGroup": "Summary", + "MetricName": "Kernel_Utilization" + }, + { + "BriefDescription": "C3 residency percent per core", + "MetricExpr": "(cstate_core@c3\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C3_Core_Residency" + }, + { + "BriefDescription": "C6 residency percent per core", + "MetricExpr": "(cstate_core@c6\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C6_Core_Residency" + }, + { + "BriefDescription": "C7 residency percent per core", + "MetricExpr": "(cstate_core@c7\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C7_Core_Residency" + }, + { + "BriefDescription": "C2 residency percent per package", + "MetricExpr": "(cstate_pkg@c2\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C2_Pkg_Residency" + }, + { + "BriefDescription": "C3 residency percent per package", + "MetricExpr": "(cstate_pkg@c3\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C3_Pkg_Residency" + }, + { + "BriefDescription": "C6 residency percent per package", + "MetricExpr": "(cstate_pkg@c6\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C6_Pkg_Residency" + }, + { + "BriefDescription": "C7 residency percent per package", + "MetricExpr": "(cstate_pkg@c7\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C7_Pkg_Residency" + } +] diff --git a/tools/perf/pmu-events/arch/x86/broadwellx/bdx-metrics.json b/tools/perf/pmu-events/arch/x86/broadwellx/bdx-metrics.json new file mode 100644 index 000000000000..5a7f1ec24200 --- /dev/null +++ b/tools/perf/pmu-events/arch/x86/broadwellx/bdx-metrics.json @@ -0,0 +1,164 @@ +[ + { + "BriefDescription": "Instructions Per Cycle (per logical thread)", + "MetricExpr": "INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD", + "MetricGroup": "TopDownL1", + "MetricName": "IPC" + }, + { + "BriefDescription": "Uops Per Instruction", + "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / INST_RETIRED.ANY", + "MetricGroup": "Pipeline", + "MetricName": "UPI" + }, + { + "BriefDescription": "Rough Estimation of fraction of fetched lines bytes that were likely consumed by program instructions", + "MetricExpr": "min( 1 , IDQ.MITE_UOPS / ( (UOPS_RETIRED.RETIRE_SLOTS / INST_RETIRED.ANY) * 16 * ( ICACHE.HIT + ICACHE.MISSES ) / 4.0 ) )", + "MetricGroup": "Frontend", + "MetricName": "IFetch_Line_Utilization" + }, + { + "BriefDescription": "Fraction of Uops delivered by the DSB (aka Decoded Icache; or Uop Cache)", + "MetricExpr": "IDQ.DSB_UOPS / ( IDQ.DSB_UOPS + LSD.UOPS + IDQ.MITE_UOPS + IDQ.MS_UOPS )", + "MetricGroup": "DSB; Frontend_Bandwidth", + "MetricName": "DSB_Coverage" + }, + { + "BriefDescription": "Cycles Per Instruction (threaded)", + "MetricExpr": "1 / (INST_RETIRED.ANY / cycles)", + "MetricGroup": "Pipeline;Summary", + "MetricName": "CPI" + }, + { + "BriefDescription": "Per-thread actual clocks when the logical processor is active. 
This is called 'Clockticks' in VTune.", + "MetricExpr": "CPU_CLK_UNHALTED.THREAD", + "MetricGroup": "Summary", + "MetricName": "CLKS" + }, + { + "BriefDescription": "Total issue-pipeline slots", + "MetricExpr": "4*(( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else cycles)", + "MetricGroup": "TopDownL1", + "MetricName": "SLOTS" + }, + { + "BriefDescription": "Total number of retired Instructions", + "MetricExpr": "INST_RETIRED.ANY", + "MetricGroup": "Summary", + "MetricName": "Instructions" + }, + { + "BriefDescription": "Instructions Per Cycle (per physical core)", + "MetricExpr": "INST_RETIRED.ANY / (( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else cycles)", + "MetricGroup": "SMT", + "MetricName": "CoreIPC" + }, + { + "BriefDescription": "Instruction-Level-Parallelism (average number of uops executed when there is at least 1 uop executed)", + "MetricExpr": "UOPS_EXECUTED.THREAD / (( cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ / 2) if #SMT_on else UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC)", + "MetricGroup": "Pipeline;Ports_Utilization", + "MetricName": "ILP" + }, + { + "BriefDescription": "Average Branch Address Clear Cost (fraction of cycles)", + "MetricExpr": "2* (( RS_EVENTS.EMPTY_CYCLES - ICACHE.IFDATA_STALL - (( 14 * ITLB_MISSES.STLB_HIT + cpu@ITLB_MISSES.WALK_DURATION\\,cmask\\=1@ + 7* ITLB_MISSES.WALK_COMPLETED )) ) / RS_EVENTS.EMPTY_END)", + "MetricGroup": "Unknown_Branches", + "MetricName": "BAClear_Cost" + }, + { + "BriefDescription": "Core actual clocks when any thread is active on the physical core", + "MetricExpr": "( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else CPU_CLK_UNHALTED.THREAD", + "MetricGroup": "SMT", + "MetricName": "CORE_CLKS" + }, + { + "BriefDescription": "Actual Average Latency for L1 data-cache miss demand loads", + "MetricExpr": "L1D_PEND_MISS.PENDING / ( MEM_LOAD_UOPS_RETIRED.L1_MISS + mem_load_uops_retired.hit_lfb )", + "MetricGroup": "Memory_Bound;Memory_Lat", + "MetricName": "Load_Miss_Real_Latency" + }, + { + "BriefDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least 1 such miss)", + "MetricExpr": "L1D_PEND_MISS.PENDING / (( cpu@l1d_pend_miss.pending_cycles\\,any\\=1@ / 2) if #SMT_on else L1D_PEND_MISS.PENDING_CYCLES)", + "MetricGroup": "Memory_Bound;Memory_BW", + "MetricName": "MLP" + }, + { + "BriefDescription": "Utilization of the core's Page Walker(s) serving STLB misses triggered by instruction/Load/Store accesses", + "MetricExpr": "( ITLB_MISSES.WALK_DURATION + DTLB_LOAD_MISSES.WALK_DURATION + DTLB_STORE_MISSES.WALK_DURATION + 7*(DTLB_STORE_MISSES.WALK_COMPLETED+DTLB_LOAD_MISSES.WALK_COMPLETED+ITLB_MISSES.WALK_COMPLETED) ) / (2*(( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else cycles))", + "MetricGroup": "TLB", + "MetricName": "Page_Walks_Utilization" + }, + { + "BriefDescription": "Average CPU Utilization", + "MetricExpr": "CPU_CLK_UNHALTED.REF_TSC / msr@tsc@", + "MetricGroup": "Summary", + "MetricName": "CPU_Utilization" + }, + { + "BriefDescription": "Giga Floating Point Operations Per Second", + "MetricExpr": "(( 1*( FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE ) + 2* FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4*( FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE ) + 8* FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE )) / 1000000000 / duration_time", + "MetricGroup": "FLOPS;Summary", + "MetricName": "GFLOPs" + }, + { + "BriefDescription": "Average Frequency Utilization relative nominal frequency", + "MetricExpr": "CPU_CLK_UNHALTED.THREAD / 
CPU_CLK_UNHALTED.REF_TSC", + "MetricGroup": "Power", + "MetricName": "Turbo_Utilization" + }, + { + "BriefDescription": "Fraction of cycles where both hardware threads were active", + "MetricExpr": "1 - CPU_CLK_THREAD_UNHALTED.ONE_THREAD_ACTIVE / ( CPU_CLK_THREAD_UNHALTED.REF_XCLK_ANY / 2 ) if #SMT_on else 0", + "MetricGroup": "SMT;Summary", + "MetricName": "SMT_2T_Utilization" + }, + { + "BriefDescription": "Fraction of cycles spent in Kernel mode", + "MetricExpr": "CPU_CLK_UNHALTED.REF_TSC:u / CPU_CLK_UNHALTED.REF_TSC", + "MetricGroup": "Summary", + "MetricName": "Kernel_Utilization" + }, + { + "BriefDescription": "C3 residency percent per core", + "MetricExpr": "(cstate_core@c3\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C3_Core_Residency" + }, + { + "BriefDescription": "C6 residency percent per core", + "MetricExpr": "(cstate_core@c6\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C6_Core_Residency" + }, + { + "BriefDescription": "C7 residency percent per core", + "MetricExpr": "(cstate_core@c7\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C7_Core_Residency" + }, + { + "BriefDescription": "C2 residency percent per package", + "MetricExpr": "(cstate_pkg@c2\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C2_Pkg_Residency" + }, + { + "BriefDescription": "C3 residency percent per package", + "MetricExpr": "(cstate_pkg@c3\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C3_Pkg_Residency" + }, + { + "BriefDescription": "C6 residency percent per package", + "MetricExpr": "(cstate_pkg@c6\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C6_Pkg_Residency" + }, + { + "BriefDescription": "C7 residency percent per package", + "MetricExpr": "(cstate_pkg@c7\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C7_Pkg_Residency" + } +] diff --git a/tools/perf/pmu-events/arch/x86/goldmontplus/cache.json b/tools/perf/pmu-events/arch/x86/goldmontplus/cache.json new file mode 100644 index 000000000000..b4791b443a66 --- /dev/null +++ b/tools/perf/pmu-events/arch/x86/goldmontplus/cache.json @@ -0,0 +1,1453 @@ +[ + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts memory requests originating from the core that miss in the L2 cache.", + "EventCode": "0x2E", + "Counter": "0,1,2,3", + "UMask": "0x41", + "PEBScounters": "0,1,2,3", + "EventName": "LONGEST_LAT_CACHE.MISS", + "PDIR_COUNTER": "na", + "SampleAfterValue": "200003", + "BriefDescription": "L2 cache request misses" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts memory requests originating from the core that reference a cache line in the L2 cache.", + "EventCode": "0x2E", + "Counter": "0,1,2,3", + "UMask": "0x4f", + "PEBScounters": "0,1,2,3", + "EventName": "LONGEST_LAT_CACHE.REFERENCE", + "PDIR_COUNTER": "na", + "SampleAfterValue": "200003", + "BriefDescription": "L2 cache requests" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts the number of demand and prefetch transactions that the L2 XQ rejects due to a full or near full condition which likely indicates back pressure from the intra-die interconnect (IDI) fabric. 
The XQ may reject transactions from the L2Q (non-cacheable requests), L2 misses and L2 write-back victims.", + "EventCode": "0x30", + "Counter": "0,1,2,3", + "UMask": "0x0", + "PEBScounters": "0,1,2,3", + "EventName": "L2_REJECT_XQ.ALL", + "PDIR_COUNTER": "na", + "SampleAfterValue": "200003", + "BriefDescription": "Requests rejected by the XQ" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts the number of demand and L1 prefetcher requests rejected by the L2Q due to a full or nearly full condition which likely indicates back pressure from L2Q. It also counts requests that would have gone directly to the XQ, but are rejected due to a full or nearly full condition, indicating back pressure from the IDI link. The L2Q may also reject transactions from a core to insure fairness between cores, or to delay a core's dirty eviction when the address conflicts with incoming external snoops.", + "EventCode": "0x31", + "Counter": "0,1,2,3", + "UMask": "0x0", + "PEBScounters": "0,1,2,3", + "EventName": "CORE_REJECT_L2Q.ALL", + "PDIR_COUNTER": "na", + "SampleAfterValue": "200003", + "BriefDescription": "Requests rejected by the L2Q" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts when a modified (dirty) cache line is evicted from the data L1 cache and needs to be written back to memory. No count will occur if the evicted line is clean, and hence does not require a writeback.", + "EventCode": "0x51", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "DL1.REPLACEMENT", + "PDIR_COUNTER": "na", + "SampleAfterValue": "200003", + "BriefDescription": "L1 Cache evictions for dirty data" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts cycles that fetch is stalled due to an outstanding ICache miss. That is, the decoder queue is able to accept bytes, but the fetch unit is unable to provide bytes due to an ICache miss. Note: this event is not the same as the total number of cycles spent retrieving instruction cache lines from the memory hierarchy.", + "EventCode": "0x86", + "Counter": "0,1,2,3", + "UMask": "0x2", + "PEBScounters": "0,1,2,3", + "EventName": "FETCH_STALL.ICACHE_FILL_PENDING_CYCLES", + "PDIR_COUNTER": "na", + "SampleAfterValue": "200003", + "BriefDescription": "Cycles code-fetch stalled due to an outstanding ICache miss." + }, + { + "CollectPEBSRecord": "1", + "EventCode": "0xB7", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE", + "PDIR_COUNTER": "na", + "SampleAfterValue": "100007", + "BriefDescription": "Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)" + }, + { + "PEBS": "2", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts locked memory uops retired. This includes regular locks and bus locks. (To specifically count bus locks only, see the Offcore response event.) A locked access is one with a lock prefix, or an exchange to memory. 
See the SDM for a complete description of which memory load accesses are locks.", + "EventCode": "0xD0", + "Counter": "0,1,2,3", + "UMask": "0x21", + "PEBScounters": "0,1,2,3", + "EventName": "MEM_UOPS_RETIRED.LOCK_LOADS", + "SampleAfterValue": "200003", + "BriefDescription": "Locked load uops retired (Precise event capable)" + }, + { + "PEBS": "2", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts load uops retired where the data requested spans a 64 byte cache line boundary.", + "EventCode": "0xD0", + "Counter": "0,1,2,3", + "UMask": "0x41", + "PEBScounters": "0,1,2,3", + "EventName": "MEM_UOPS_RETIRED.SPLIT_LOADS", + "SampleAfterValue": "200003", + "BriefDescription": "Load uops retired that split a cache-line (Precise event capable)" + }, + { + "PEBS": "2", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts store uops retired where the data requested spans a 64 byte cache line boundary.", + "EventCode": "0xD0", + "Counter": "0,1,2,3", + "UMask": "0x42", + "PEBScounters": "0,1,2,3", + "EventName": "MEM_UOPS_RETIRED.SPLIT_STORES", + "SampleAfterValue": "200003", + "BriefDescription": "Stores uops retired that split a cache-line (Precise event capable)" + }, + { + "PEBS": "2", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts memory uops retired where the data requested spans a 64 byte cache line boundary.", + "EventCode": "0xD0", + "Counter": "0,1,2,3", + "UMask": "0x43", + "PEBScounters": "0,1,2,3", + "EventName": "MEM_UOPS_RETIRED.SPLIT", + "SampleAfterValue": "200003", + "BriefDescription": "Memory uops retired that split a cache-line (Precise event capable)" + }, + { + "PEBS": "2", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of load uops retired.", + "EventCode": "0xD0", + "Counter": "0,1,2,3", + "UMask": "0x81", + "PEBScounters": "0,1,2,3", + "EventName": "MEM_UOPS_RETIRED.ALL_LOADS", + "SampleAfterValue": "200003", + "BriefDescription": "Load uops retired (Precise event capable)" + }, + { + "PEBS": "2", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of store uops retired.", + "EventCode": "0xD0", + "Counter": "0,1,2,3", + "UMask": "0x82", + "PEBScounters": "0,1,2,3", + "EventName": "MEM_UOPS_RETIRED.ALL_STORES", + "SampleAfterValue": "200003", + "BriefDescription": "Store uops retired (Precise event capable)" + }, + { + "PEBS": "2", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of memory uops retired that is either a loads or a store or both.", + "EventCode": "0xD0", + "Counter": "0,1,2,3", + "UMask": "0x83", + "PEBScounters": "0,1,2,3", + "EventName": "MEM_UOPS_RETIRED.ALL", + "SampleAfterValue": "200003", + "BriefDescription": "Memory uops retired (Precise event capable)" + }, + { + "PEBS": "2", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts load uops retired that hit the L1 data cache.", + "EventCode": "0xD1", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "MEM_LOAD_UOPS_RETIRED.L1_HIT", + "SampleAfterValue": "200003", + "BriefDescription": "Load uops retired that hit L1 data cache (Precise event capable)" + }, + { + "PEBS": "2", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts load uops retired that hit in the L2 cache.", + "EventCode": "0xD1", + "Counter": "0,1,2,3", + "UMask": "0x2", + "PEBScounters": "0,1,2,3", + "EventName": "MEM_LOAD_UOPS_RETIRED.L2_HIT", + "SampleAfterValue": "200003", + "BriefDescription": "Load uops retired that hit L2 (Precise event capable)" + }, + { + "PEBS": "2", + "CollectPEBSRecord": "2", + 
"PublicDescription": "Counts load uops retired that miss the L1 data cache.", + "EventCode": "0xD1", + "Counter": "0,1,2,3", + "UMask": "0x8", + "PEBScounters": "0,1,2,3", + "EventName": "MEM_LOAD_UOPS_RETIRED.L1_MISS", + "SampleAfterValue": "200003", + "BriefDescription": "Load uops retired that missed L1 data cache (Precise event capable)" + }, + { + "PEBS": "2", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts load uops retired that miss in the L2 cache.", + "EventCode": "0xD1", + "Counter": "0,1,2,3", + "UMask": "0x10", + "PEBScounters": "0,1,2,3", + "EventName": "MEM_LOAD_UOPS_RETIRED.L2_MISS", + "SampleAfterValue": "200003", + "BriefDescription": "Load uops retired that missed L2 (Precise event capable)" + }, + { + "PEBS": "2", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts load uops retired where the cache line containing the data was in the modified state of another core or modules cache (HITM). More specifically, this means that when the load address was checked by other caching agents (typically another processor) in the system, one of those caching agents indicated that they had a dirty copy of the data. Loads that obtain a HITM response incur greater latency than most is typical for a load. In addition, since HITM indicates that some other processor had this data in its cache, it implies that the data was shared between processors, or potentially was a lock or semaphore value. This event is useful for locating sharing, false sharing, and contended locks.", + "EventCode": "0xD1", + "Counter": "0,1,2,3", + "UMask": "0x20", + "PEBScounters": "0,1,2,3", + "EventName": "MEM_LOAD_UOPS_RETIRED.HITM", + "SampleAfterValue": "200003", + "BriefDescription": "Memory uop retired where cross core or cross module HITM occurred (Precise event capable)" + }, + { + "PEBS": "2", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts memory load uops retired where the data is retrieved from the WCB (or fill buffer), indicating that the load found its data while that data was in the process of being brought into the L1 cache. Typically a load will receive this indication when some other load or prefetch missed the L1 cache and was in the process of retrieving the cache line containing the data, but that process had not yet finished (and written the data back to the cache). For example, consider load X and Y, both referencing the same cache line that is not in the L1 cache. If load X misses cache first, it obtains and WCB (or fill buffer) and begins the process of requesting the data. When load Y requests the data, it will either hit the WCB, or the L1 cache, depending on exactly what time the request to Y occurs.", + "EventCode": "0xD1", + "Counter": "0,1,2,3", + "UMask": "0x40", + "PEBScounters": "0,1,2,3", + "EventName": "MEM_LOAD_UOPS_RETIRED.WCB_HIT", + "SampleAfterValue": "200003", + "BriefDescription": "Loads retired that hit WCB (Precise event capable)" + }, + { + "PEBS": "2", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts memory load uops retired where the data is retrieved from DRAM. Event is counted at retirement, so the speculative loads are ignored. 
A memory load can hit (or miss) the L1 cache, hit (or miss) the L2 cache, hit DRAM, hit in the WCB or receive a HITM response.", + "EventCode": "0xD1", + "Counter": "0,1,2,3", + "UMask": "0x80", + "PEBScounters": "0,1,2,3", + "EventName": "MEM_LOAD_UOPS_RETIRED.DRAM_HIT", + "SampleAfterValue": "200003", + "BriefDescription": "Loads retired that came from DRAM (Precise event capable)" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts demand cacheable data reads of full cache lines have any transaction responses from the uncore subsystem. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x0000010001", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.ANY_RESPONSE", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts demand cacheable data reads of full cache lines have any transaction responses from the uncore subsystem.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts demand cacheable data reads of full cache lines hit the L2 cache. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x0000040001", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L2_HIT", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts demand cacheable data reads of full cache lines hit the L2 cache.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts demand cacheable data reads of full cache lines true miss for the L2 cache with a snoop miss in the other processor module. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x0200000001", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L2_MISS.SNOOP_MISS_OR_NO_SNOOP_NEEDED", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts demand cacheable data reads of full cache lines true miss for the L2 cache with a snoop miss in the other processor module. ", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts demand cacheable data reads of full cache lines miss the L2 cache with a snoop hit in the other processor module, data forwarding is required. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x1000000001", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L2_MISS.HITM_OTHER_CORE", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts demand cacheable data reads of full cache lines miss the L2 cache with a snoop hit in the other processor module, data forwarding is required.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts demand cacheable data reads of full cache lines outstanding, per cycle, from the time of the L2 miss to when any response is received. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. 
(duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x4000000001", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.OUTSTANDING", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6", + "SampleAfterValue": "100007", + "BriefDescription": "Counts demand cacheable data reads of full cache lines outstanding, per cycle, from the time of the L2 miss to when any response is received.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts demand reads for ownership (RFO) requests generated by a write to full data cache line have any transaction responses from the uncore subsystem. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x0000010002", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.DEMAND_RFO.ANY_RESPONSE", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts demand reads for ownership (RFO) requests generated by a write to full data cache line have any transaction responses from the uncore subsystem.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts demand reads for ownership (RFO) requests generated by a write to full data cache line hit the L2 cache. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x0000040002", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L2_HIT", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts demand reads for ownership (RFO) requests generated by a write to full data cache line hit the L2 cache.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts demand reads for ownership (RFO) requests generated by a write to full data cache line true miss for the L2 cache with a snoop miss in the other processor module. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x0200000002", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L2_MISS.SNOOP_MISS_OR_NO_SNOOP_NEEDED", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts demand reads for ownership (RFO) requests generated by a write to full data cache line true miss for the L2 cache with a snoop miss in the other processor module. ", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts demand reads for ownership (RFO) requests generated by a write to full data cache line miss the L2 cache with a snoop hit in the other processor module, data forwarding is required. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. 
(duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x1000000002", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L2_MISS.HITM_OTHER_CORE", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts demand reads for ownership (RFO) requests generated by a write to full data cache line miss the L2 cache with a snoop hit in the other processor module, data forwarding is required.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts demand reads for ownership (RFO) requests generated by a write to full data cache line outstanding, per cycle, from the time of the L2 miss to when any response is received. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x4000000002", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.DEMAND_RFO.OUTSTANDING", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6", + "SampleAfterValue": "100007", + "BriefDescription": "Counts demand reads for ownership (RFO) requests generated by a write to full data cache line outstanding, per cycle, from the time of the L2 miss to when any response is received.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts demand instruction cacheline and I-side prefetch requests that miss the instruction cache have any transaction responses from the uncore subsystem. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x0000010004", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.ANY_RESPONSE", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts demand instruction cacheline and I-side prefetch requests that miss the instruction cache have any transaction responses from the uncore subsystem.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts demand instruction cacheline and I-side prefetch requests that miss the instruction cache hit the L2 cache. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x0000040004", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L2_HIT", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts demand instruction cacheline and I-side prefetch requests that miss the instruction cache hit the L2 cache.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts demand instruction cacheline and I-side prefetch requests that miss the instruction cache true miss for the L2 cache with a snoop miss in the other processor module. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. 
(duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x0200000004", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L2_MISS.SNOOP_MISS_OR_NO_SNOOP_NEEDED", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts demand instruction cacheline and I-side prefetch requests that miss the instruction cache true miss for the L2 cache with a snoop miss in the other processor module. ", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts demand instruction cacheline and I-side prefetch requests that miss the instruction cache miss the L2 cache with a snoop hit in the other processor module, data forwarding is required. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x1000000004", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L2_MISS.HITM_OTHER_CORE", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts demand instruction cacheline and I-side prefetch requests that miss the instruction cache miss the L2 cache with a snoop hit in the other processor module, data forwarding is required.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts demand instruction cacheline and I-side prefetch requests that miss the instruction cache outstanding, per cycle, from the time of the L2 miss to when any response is received. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x4000000004", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.OUTSTANDING", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6", + "SampleAfterValue": "100007", + "BriefDescription": "Counts demand instruction cacheline and I-side prefetch requests that miss the instruction cache outstanding, per cycle, from the time of the L2 miss to when any response is received.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts the number of writeback transactions caused by L1 or L2 cache evictions have any transaction responses from the uncore subsystem. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x0000010008", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.COREWB.ANY_RESPONSE", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts the number of writeback transactions caused by L1 or L2 cache evictions have any transaction responses from the uncore subsystem.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts the number of writeback transactions caused by L1 or L2 cache evictions hit the L2 cache. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. 
(duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x0000040008", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.COREWB.L2_HIT", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts the number of writeback transactions caused by L1 or L2 cache evictions hit the L2 cache.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts the number of writeback transactions caused by L1 or L2 cache evictions true miss for the L2 cache with a snoop miss in the other processor module. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x0200000008", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.COREWB.L2_MISS.SNOOP_MISS_OR_NO_SNOOP_NEEDED", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts the number of writeback transactions caused by L1 or L2 cache evictions true miss for the L2 cache with a snoop miss in the other processor module. ", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts the number of writeback transactions caused by L1 or L2 cache evictions miss the L2 cache with a snoop hit in the other processor module, data forwarding is required. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x1000000008", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.COREWB.L2_MISS.HITM_OTHER_CORE", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts the number of writeback transactions caused by L1 or L2 cache evictions miss the L2 cache with a snoop hit in the other processor module, data forwarding is required.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts the number of writeback transactions caused by L1 or L2 cache evictions outstanding, per cycle, from the time of the L2 miss to when any response is received. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x4000000008", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.COREWB.OUTSTANDING", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6", + "SampleAfterValue": "100007", + "BriefDescription": "Counts the number of writeback transactions caused by L1 or L2 cache evictions outstanding, per cycle, from the time of the L2 miss to when any response is received.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts data cacheline reads generated by hardware L2 cache prefetcher have any transaction responses from the uncore subsystem. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. 
(duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x0000010010", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.ANY_RESPONSE", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts data cacheline reads generated by hardware L2 cache prefetcher have any transaction responses from the uncore subsystem.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts data cacheline reads generated by hardware L2 cache prefetcher hit the L2 cache. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x0000040010", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L2_HIT", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts data cacheline reads generated by hardware L2 cache prefetcher hit the L2 cache.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts data cacheline reads generated by hardware L2 cache prefetcher true miss for the L2 cache with a snoop miss in the other processor module. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x0200000010", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L2_MISS.SNOOP_MISS_OR_NO_SNOOP_NEEDED", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts data cacheline reads generated by hardware L2 cache prefetcher true miss for the L2 cache with a snoop miss in the other processor module. ", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts data cacheline reads generated by hardware L2 cache prefetcher miss the L2 cache with a snoop hit in the other processor module, data forwarding is required. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x1000000010", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L2_MISS.HITM_OTHER_CORE", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts data cacheline reads generated by hardware L2 cache prefetcher miss the L2 cache with a snoop hit in the other processor module, data forwarding is required.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts data cacheline reads generated by hardware L2 cache prefetcher outstanding, per cycle, from the time of the L2 miss to when any response is received. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. 
(duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x4000000010", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.OUTSTANDING", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6", + "SampleAfterValue": "100007", + "BriefDescription": "Counts data cacheline reads generated by hardware L2 cache prefetcher outstanding, per cycle, from the time of the L2 miss to when any response is received.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts reads for ownership (RFO) requests generated by L2 prefetcher have any transaction responses from the uncore subsystem. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x0000010020", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.PF_L2_RFO.ANY_RESPONSE", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts reads for ownership (RFO) requests generated by L2 prefetcher have any transaction responses from the uncore subsystem.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts reads for ownership (RFO) requests generated by L2 prefetcher hit the L2 cache. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x0000040020", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L2_HIT", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts reads for ownership (RFO) requests generated by L2 prefetcher hit the L2 cache.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts reads for ownership (RFO) requests generated by L2 prefetcher true miss for the L2 cache with a snoop miss in the other processor module. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x0200000020", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L2_MISS.SNOOP_MISS_OR_NO_SNOOP_NEEDED", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts reads for ownership (RFO) requests generated by L2 prefetcher true miss for the L2 cache with a snoop miss in the other processor module. ", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts reads for ownership (RFO) requests generated by L2 prefetcher miss the L2 cache with a snoop hit in the other processor module, data forwarding is required. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. 
(duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x1000000020", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L2_MISS.HITM_OTHER_CORE", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts reads for ownership (RFO) requests generated by L2 prefetcher miss the L2 cache with a snoop hit in the other processor module, data forwarding is required.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts reads for ownership (RFO) requests generated by L2 prefetcher outstanding, per cycle, from the time of the L2 miss to when any response is received. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x4000000020", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.PF_L2_RFO.OUTSTANDING", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6", + "SampleAfterValue": "100007", + "BriefDescription": "Counts reads for ownership (RFO) requests generated by L2 prefetcher outstanding, per cycle, from the time of the L2 miss to when any response is received.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts bus lock and split lock requests have any transaction responses from the uncore subsystem. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x0000010400", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.BUS_LOCKS.ANY_RESPONSE", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts bus lock and split lock requests have any transaction responses from the uncore subsystem.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts bus lock and split lock requests hit the L2 cache. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x0000040400", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.BUS_LOCKS.L2_HIT", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts bus lock and split lock requests hit the L2 cache.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts bus lock and split lock requests true miss for the L2 cache with a snoop miss in the other processor module. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x0200000400", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.BUS_LOCKS.L2_MISS.SNOOP_MISS_OR_NO_SNOOP_NEEDED", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts bus lock and split lock requests true miss for the L2 cache with a snoop miss in the other processor module. ", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts bus lock and split lock requests miss the L2 cache with a snoop hit in the other processor module, data forwarding is required. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. 
(duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x1000000400", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.BUS_LOCKS.L2_MISS.HITM_OTHER_CORE", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts bus lock and split lock requests miss the L2 cache with a snoop hit in the other processor module, data forwarding is required.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts bus lock and split lock requests outstanding, per cycle, from the time of the L2 miss to when any response is received. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x4000000400", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.BUS_LOCKS.OUTSTANDING", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6", + "SampleAfterValue": "100007", + "BriefDescription": "Counts bus lock and split lock requests outstanding, per cycle, from the time of the L2 miss to when any response is received.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts full cache line data writes to uncacheable write combining (USWC) memory region and full cache-line non-temporal writes have any transaction responses from the uncore subsystem. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x0000010800", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.FULL_STREAMING_STORES.ANY_RESPONSE", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts full cache line data writes to uncacheable write combining (USWC) memory region and full cache-line non-temporal writes have any transaction responses from the uncore subsystem.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts full cache line data writes to uncacheable write combining (USWC) memory region and full cache-line non-temporal writes hit the L2 cache. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x0000040800", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.FULL_STREAMING_STORES.L2_HIT", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts full cache line data writes to uncacheable write combining (USWC) memory region and full cache-line non-temporal writes hit the L2 cache.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts full cache line data writes to uncacheable write combining (USWC) memory region and full cache-line non-temporal writes true miss for the L2 cache with a snoop miss in the other processor module. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. 
(duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x0200000800", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.FULL_STREAMING_STORES.L2_MISS.SNOOP_MISS_OR_NO_SNOOP_NEEDED", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts full cache line data writes to uncacheable write combining (USWC) memory region and full cache-line non-temporal writes true miss for the L2 cache with a snoop miss in the other processor module. ", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts full cache line data writes to uncacheable write combining (USWC) memory region and full cache-line non-temporal writes miss the L2 cache with a snoop hit in the other processor module, data forwarding is required. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x1000000800", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.FULL_STREAMING_STORES.L2_MISS.HITM_OTHER_CORE", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts full cache line data writes to uncacheable write combining (USWC) memory region and full cache-line non-temporal writes miss the L2 cache with a snoop hit in the other processor module, data forwarding is required.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts full cache line data writes to uncacheable write combining (USWC) memory region and full cache-line non-temporal writes outstanding, per cycle, from the time of the L2 miss to when any response is received. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x4000000800", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.FULL_STREAMING_STORES.OUTSTANDING", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6", + "SampleAfterValue": "100007", + "BriefDescription": "Counts full cache line data writes to uncacheable write combining (USWC) memory region and full cache-line non-temporal writes outstanding, per cycle, from the time of the L2 miss to when any response is received.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts data cache lines requests by software prefetch instructions have any transaction responses from the uncore subsystem. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x0000011000", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.SW_PREFETCH.ANY_RESPONSE", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts data cache lines requests by software prefetch instructions have any transaction responses from the uncore subsystem.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts data cache lines requests by software prefetch instructions hit the L2 cache. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. 
(duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x0000041000", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.SW_PREFETCH.L2_HIT", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts data cache lines requests by software prefetch instructions hit the L2 cache.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts data cache lines requests by software prefetch instructions true miss for the L2 cache with a snoop miss in the other processor module. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x0200001000", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.SW_PREFETCH.L2_MISS.SNOOP_MISS_OR_NO_SNOOP_NEEDED", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts data cache lines requests by software prefetch instructions true miss for the L2 cache with a snoop miss in the other processor module. ", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts data cache lines requests by software prefetch instructions miss the L2 cache with a snoop hit in the other processor module, data forwarding is required. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x1000001000", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.SW_PREFETCH.L2_MISS.HITM_OTHER_CORE", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts data cache lines requests by software prefetch instructions miss the L2 cache with a snoop hit in the other processor module, data forwarding is required.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts data cache lines requests by software prefetch instructions outstanding, per cycle, from the time of the L2 miss to when any response is received. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x4000001000", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.SW_PREFETCH.OUTSTANDING", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6", + "SampleAfterValue": "100007", + "BriefDescription": "Counts data cache lines requests by software prefetch instructions outstanding, per cycle, from the time of the L2 miss to when any response is received.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts data cache line reads generated by hardware L1 data cache prefetcher have any transaction responses from the uncore subsystem. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. 
(duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x0000012000", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.PF_L1_DATA_RD.ANY_RESPONSE", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts data cache line reads generated by hardware L1 data cache prefetcher have any transaction responses from the uncore subsystem.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts data cache line reads generated by hardware L1 data cache prefetcher hit the L2 cache. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x0000042000", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.PF_L1_DATA_RD.L2_HIT", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts data cache line reads generated by hardware L1 data cache prefetcher hit the L2 cache.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts data cache line reads generated by hardware L1 data cache prefetcher true miss for the L2 cache with a snoop miss in the other processor module. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x0200002000", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.PF_L1_DATA_RD.L2_MISS.SNOOP_MISS_OR_NO_SNOOP_NEEDED", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts data cache line reads generated by hardware L1 data cache prefetcher true miss for the L2 cache with a snoop miss in the other processor module. ", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts data cache line reads generated by hardware L1 data cache prefetcher miss the L2 cache with a snoop hit in the other processor module, data forwarding is required. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x1000002000", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.PF_L1_DATA_RD.L2_MISS.HITM_OTHER_CORE", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts data cache line reads generated by hardware L1 data cache prefetcher miss the L2 cache with a snoop hit in the other processor module, data forwarding is required.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts data cache line reads generated by hardware L1 data cache prefetcher outstanding, per cycle, from the time of the L2 miss to when any response is received. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. 
(duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x4000002000", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.PF_L1_DATA_RD.OUTSTANDING", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6", + "SampleAfterValue": "100007", + "BriefDescription": "Counts data cache line reads generated by hardware L1 data cache prefetcher outstanding, per cycle, from the time of the L2 miss to when any response is received.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts any data writes to uncacheable write combining (USWC) memory region have any transaction responses from the uncore subsystem. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x0000014800", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.STREAMING_STORES.ANY_RESPONSE", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts any data writes to uncacheable write combining (USWC) memory region have any transaction responses from the uncore subsystem.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts any data writes to uncacheable write combining (USWC) memory region hit the L2 cache. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x0000044800", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.STREAMING_STORES.L2_HIT", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts any data writes to uncacheable write combining (USWC) memory region hit the L2 cache.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts any data writes to uncacheable write combining (USWC) memory region true miss for the L2 cache with a snoop miss in the other processor module. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x0200004800", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.STREAMING_STORES.L2_MISS.SNOOP_MISS_OR_NO_SNOOP_NEEDED", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts any data writes to uncacheable write combining (USWC) memory region true miss for the L2 cache with a snoop miss in the other processor module. ", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts any data writes to uncacheable write combining (USWC) memory region miss the L2 cache with a snoop hit in the other processor module, data forwarding is required. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. 
(duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x1000004800", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.STREAMING_STORES.L2_MISS.HITM_OTHER_CORE", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts any data writes to uncacheable write combining (USWC) memory region miss the L2 cache with a snoop hit in the other processor module, data forwarding is required.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts any data writes to uncacheable write combining (USWC) memory region outstanding, per cycle, from the time of the L2 miss to when any response is received. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x4000004800", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.STREAMING_STORES.OUTSTANDING", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6", + "SampleAfterValue": "100007", + "BriefDescription": "Counts any data writes to uncacheable write combining (USWC) memory region outstanding, per cycle, from the time of the L2 miss to when any response is received.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts requests to the uncore subsystem have any transaction responses from the uncore subsystem. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x0000018000", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.ANY_REQUEST.ANY_RESPONSE", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts requests to the uncore subsystem have any transaction responses from the uncore subsystem.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts requests to the uncore subsystem hit the L2 cache. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x0000048000", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.ANY_REQUEST.L2_HIT", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts requests to the uncore subsystem hit the L2 cache.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts requests to the uncore subsystem true miss for the L2 cache with a snoop miss in the other processor module. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x0200008000", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.ANY_REQUEST.L2_MISS.SNOOP_MISS_OR_NO_SNOOP_NEEDED", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts requests to the uncore subsystem true miss for the L2 cache with a snoop miss in the other processor module. ", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts requests to the uncore subsystem miss the L2 cache with a snoop hit in the other processor module, data forwarding is required. 
Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x1000008000", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.ANY_REQUEST.L2_MISS.HITM_OTHER_CORE", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts requests to the uncore subsystem miss the L2 cache with a snoop hit in the other processor module, data forwarding is required.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts requests to the uncore subsystem outstanding, per cycle, from the time of the L2 miss to when any response is received. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x4000008000", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.ANY_REQUEST.OUTSTANDING", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6", + "SampleAfterValue": "100007", + "BriefDescription": "Counts requests to the uncore subsystem outstanding, per cycle, from the time of the L2 miss to when any response is received.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts data reads generated by L1 or L2 prefetchers have any transaction responses from the uncore subsystem. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x0000013010", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.ANY_PF_DATA_RD.ANY_RESPONSE", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts data reads generated by L1 or L2 prefetchers have any transaction responses from the uncore subsystem.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts data reads generated by L1 or L2 prefetchers hit the L2 cache. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x0000043010", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.ANY_PF_DATA_RD.L2_HIT", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts data reads generated by L1 or L2 prefetchers hit the L2 cache.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts data reads generated by L1 or L2 prefetchers true miss for the L2 cache with a snoop miss in the other processor module. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x0200003010", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.ANY_PF_DATA_RD.L2_MISS.SNOOP_MISS_OR_NO_SNOOP_NEEDED", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts data reads generated by L1 or L2 prefetchers true miss for the L2 cache with a snoop miss in the other processor module. ", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts data reads generated by L1 or L2 prefetchers miss the L2 cache with a snoop hit in the other processor module, data forwarding is required. 
Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x1000003010", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.ANY_PF_DATA_RD.L2_MISS.HITM_OTHER_CORE", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts data reads generated by L1 or L2 prefetchers miss the L2 cache with a snoop hit in the other processor module, data forwarding is required.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts data reads generated by L1 or L2 prefetchers outstanding, per cycle, from the time of the L2 miss to when any response is received. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x4000003010", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.ANY_PF_DATA_RD.OUTSTANDING", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6", + "SampleAfterValue": "100007", + "BriefDescription": "Counts data reads generated by L1 or L2 prefetchers outstanding, per cycle, from the time of the L2 miss to when any response is received.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts data reads (demand & prefetch) have any transaction responses from the uncore subsystem. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x0000013091", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.ANY_DATA_RD.ANY_RESPONSE", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts data reads (demand & prefetch) have any transaction responses from the uncore subsystem.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts data reads (demand & prefetch) hit the L2 cache. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x0000043091", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.ANY_DATA_RD.L2_HIT", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts data reads (demand & prefetch) hit the L2 cache.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts data reads (demand & prefetch) true miss for the L2 cache with a snoop miss in the other processor module. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x0200003091", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.ANY_DATA_RD.L2_MISS.SNOOP_MISS_OR_NO_SNOOP_NEEDED", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts data reads (demand & prefetch) true miss for the L2 cache with a snoop miss in the other processor module. ", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts data reads (demand & prefetch) miss the L2 cache with a snoop hit in the other processor module, data forwarding is required. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. 
(duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x1000003091", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.ANY_DATA_RD.L2_MISS.HITM_OTHER_CORE", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts data reads (demand & prefetch) miss the L2 cache with a snoop hit in the other processor module, data forwarding is required.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts data reads (demand & prefetch) outstanding, per cycle, from the time of the L2 miss to when any response is received. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x4000003091", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.ANY_DATA_RD.OUTSTANDING", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6", + "SampleAfterValue": "100007", + "BriefDescription": "Counts data reads (demand & prefetch) outstanding, per cycle, from the time of the L2 miss to when any response is received.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts reads for ownership (RFO) requests (demand & prefetch) have any transaction responses from the uncore subsystem. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x0000010022", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.ANY_RFO.ANY_RESPONSE", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts reads for ownership (RFO) requests (demand & prefetch) have any transaction responses from the uncore subsystem.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts reads for ownership (RFO) requests (demand & prefetch) hit the L2 cache. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x0000040022", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.ANY_RFO.L2_HIT", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts reads for ownership (RFO) requests (demand & prefetch) hit the L2 cache.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts reads for ownership (RFO) requests (demand & prefetch) true miss for the L2 cache with a snoop miss in the other processor module. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x0200000022", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.ANY_RFO.L2_MISS.SNOOP_MISS_OR_NO_SNOOP_NEEDED", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts reads for ownership (RFO) requests (demand & prefetch) true miss for the L2 cache with a snoop miss in the other processor module. ", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts reads for ownership (RFO) requests (demand & prefetch) miss the L2 cache with a snoop hit in the other processor module, data forwarding is required. 
Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x1000000022", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.ANY_RFO.L2_MISS.HITM_OTHER_CORE", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts reads for ownership (RFO) requests (demand & prefetch) miss the L2 cache with a snoop hit in the other processor module, data forwarding is required.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts reads for ownership (RFO) requests (demand & prefetch) outstanding, per cycle, from the time of the L2 miss to when any response is received. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x4000000022", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.ANY_RFO.OUTSTANDING", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6", + "SampleAfterValue": "100007", + "BriefDescription": "Counts reads for ownership (RFO) requests (demand & prefetch) outstanding, per cycle, from the time of the L2 miss to when any response is received.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts data read, code read, and read for ownership (RFO) requests (demand & prefetch) have any transaction responses from the uncore subsystem. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x00000132b7", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.ANY_READ.ANY_RESPONSE", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts data read, code read, and read for ownership (RFO) requests (demand & prefetch) have any transaction responses from the uncore subsystem.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts data read, code read, and read for ownership (RFO) requests (demand & prefetch) hit the L2 cache. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x00000432b7", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.ANY_READ.L2_HIT", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts data read, code read, and read for ownership (RFO) requests (demand & prefetch) hit the L2 cache.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts data read, code read, and read for ownership (RFO) requests (demand & prefetch) true miss for the L2 cache with a snoop miss in the other processor module. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. 
(duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x02000032b7", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.ANY_READ.L2_MISS.SNOOP_MISS_OR_NO_SNOOP_NEEDED", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts data read, code read, and read for ownership (RFO) requests (demand & prefetch) true miss for the L2 cache with a snoop miss in the other processor module. ", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts data read, code read, and read for ownership (RFO) requests (demand & prefetch) miss the L2 cache with a snoop hit in the other processor module, data forwarding is required. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x10000032b7", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.ANY_READ.L2_MISS.HITM_OTHER_CORE", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6, 0x1a7", + "SampleAfterValue": "100007", + "BriefDescription": "Counts data read, code read, and read for ownership (RFO) requests (demand & prefetch) miss the L2 cache with a snoop hit in the other processor module, data forwarding is required.", + "Offcore": "1" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts data read, code read, and read for ownership (RFO) requests (demand & prefetch) outstanding, per cycle, from the time of the L2 miss to when any response is received. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)", + "EventCode": "0xB7", + "MSRValue": "0x40000032b7", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_RESPONSE.ANY_READ.OUTSTANDING", + "PDIR_COUNTER": "na", + "MSRIndex": "0x1a6", + "SampleAfterValue": "100007", + "BriefDescription": "Counts data read, code read, and read for ownership (RFO) requests (demand & prefetch) outstanding, per cycle, from the time of the L2 miss to when any response is received.", + "Offcore": "1" + } +]
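Note on the OFFCORE_RESPONSE entries above: they all share EventCode 0xB7 and UMask 0x1 and differ only in the MSRValue that must be programmed into MSR_OFFCORE_RESP0/1 (MSRIndex 0x1a6/0x1a7). As a minimal sketch of how one of these encodings could be counted directly, the C program below opens a raw event via perf_event_open(2). The EventCode, UMask and MSRValue are copied from the OFFCORE_RESPONSE.DEMAND_DATA_RD.ANY_RESPONSE entry above; placing the MSRValue in attr.config1 follows the way the Intel core PMU commonly exposes its offcore_rsp field and is an assumption of this sketch, not something stated by the patch. Once the JSON files added here are built into perf, the same event would normally be requested by its symbolic EventName instead.

/* Sketch only: count OFFCORE_RESPONSE.DEMAND_DATA_RD.ANY_RESPONSE
 * using the encoding from the table above. The use of config1 for
 * the offcore response MSR value is an assumption, not part of
 * this patch.
 */
#include <linux/perf_event.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>

static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
                            int cpu, int group_fd, unsigned long flags)
{
        return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
        struct perf_event_attr attr;
        uint64_t count;
        int fd;

        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.type = PERF_TYPE_RAW;
        attr.config = 0x01b7;            /* UMask 0x1, EventCode 0xB7 */
        attr.config1 = 0x0000010001ULL;  /* MSRValue: DEMAND_DATA_RD.ANY_RESPONSE */
        attr.disabled = 1;
        attr.exclude_kernel = 1;

        fd = perf_event_open(&attr, 0, -1, -1, 0);
        if (fd < 0) {
                perror("perf_event_open");
                return 1;
        }

        ioctl(fd, PERF_EVENT_IOC_RESET, 0);
        ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
        /* ... workload to be measured would run here ... */
        ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

        if (read(fd, &count, sizeof(count)) == sizeof(count))
                printf("demand data read responses: %llu\n",
                       (unsigned long long)count);
        close(fd);
        return 0;
}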
\ No newline at end of file diff --git a/tools/perf/pmu-events/arch/x86/goldmontplus/frontend.json b/tools/perf/pmu-events/arch/x86/goldmontplus/frontend.json new file mode 100644 index 000000000000..a7878965ceab --- /dev/null +++ b/tools/perf/pmu-events/arch/x86/goldmontplus/frontend.json @@ -0,0 +1,62 @@ +[ + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts requests to the Instruction Cache (ICache) for one or more bytes in an ICache Line and that cache line is in the ICache (hit). The event strives to count on a cache line basis, so that multiple accesses which hit in a single cache line count as one ICACHE.HIT. Specifically, the event counts when straight line code crosses the cache line boundary, or when a branch target is to a new line, and that cache line is in the ICache. This event counts differently than Intel processors based on Silvermont microarchitecture.", + "EventCode": "0x80", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "ICACHE.HIT", + "PDIR_COUNTER": "na", + "SampleAfterValue": "200003", + "BriefDescription": "References per ICache line that are available in the ICache (hit). This event counts differently than Intel processors based on Silvermont microarchitecture" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts requests to the Instruction Cache (ICache) for one or more bytes in an ICache Line and that cache line is not in the ICache (miss). The event strives to count on a cache line basis, so that multiple accesses which miss in a single cache line count as one ICACHE.MISS. Specifically, the event counts when straight line code crosses the cache line boundary, or when a branch target is to a new line, and that cache line is not in the ICache. This event counts differently than Intel processors based on Silvermont microarchitecture.", + "EventCode": "0x80", + "Counter": "0,1,2,3", + "UMask": "0x2", + "PEBScounters": "0,1,2,3", + "EventName": "ICACHE.MISSES", + "PDIR_COUNTER": "na", + "SampleAfterValue": "200003", + "BriefDescription": "References per ICache line that are not available in the ICache (miss). This event counts differently than Intel processors based on Silvermont microarchitecture" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts requests to the Instruction Cache (ICache) for one or more bytes in an ICache Line. The event strives to count on a cache line basis, so that multiple fetches to a single cache line count as one ICACHE.ACCESS. Specifically, the event counts when accesses from straight line code crosses the cache line boundary, or when a branch target is to a new line.\r\nThis event counts differently than Intel processors based on Silvermont microarchitecture.", + "EventCode": "0x80", + "Counter": "0,1,2,3", + "UMask": "0x3", + "PEBScounters": "0,1,2,3", + "EventName": "ICACHE.ACCESSES", + "PDIR_COUNTER": "na", + "SampleAfterValue": "200003", + "BriefDescription": "References per ICache line. This event counts differently than Intel processors based on Silvermont microarchitecture" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts the number of times the Microcode Sequencer (MS) starts a flow of uops from the MSROM. It does not count every time a uop is read from the MSROM. The most common case that this counts is when a micro-coded instruction is encountered by the front end of the machine. Other cases include when an instruction encounters a fault, trap, or microcode assist of any sort that initiates a flow of uops. 
The event will count MS startups for uops that are speculative, and subsequently cleared by branch mispredict or a machine clear.", + "EventCode": "0xE7", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "MS_DECODED.MS_ENTRY", + "PDIR_COUNTER": "na", + "SampleAfterValue": "200003", + "BriefDescription": "MS decode starts" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts the number of times the prediction (from the predecode cache) for instruction length is incorrect.", + "EventCode": "0xE9", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "DECODE_RESTRICTION.PREDECODE_WRONG", + "PDIR_COUNTER": "na", + "SampleAfterValue": "200003", + "BriefDescription": "Decode restrictions due to predicting wrong instruction length" + } +]
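These event tables are consumed by perf's jevents infrastructure, which turns each EventCode/UMask pair into a raw config value for the x86 "cpu" PMU (event code in config bits 0-7, umask in bits 8-15). As a hedged illustration of what such an entry means at the counter level — not part of the patch itself — here is a minimal C sketch that counts ICACHE.MISSES (EventCode 0x80, UMask 0x2 from frontend.json above) through perf_event_open(2). The raw_config() helper and the workload placeholder are illustrative assumptions, not code from this series.

#include <linux/perf_event.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Raw config for the x86 "cpu" PMU: umask in bits 8-15, event code in bits 0-7. */
static uint64_t raw_config(uint64_t event, uint64_t umask)
{
        return (umask << 8) | event;
}

int main(void)
{
        struct perf_event_attr attr;
        uint64_t count;
        int fd;

        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.type = PERF_TYPE_RAW;
        attr.config = raw_config(0x80, 0x2);    /* ICACHE.MISSES per frontend.json */
        attr.disabled = 1;
        attr.exclude_kernel = 1;

        /* perf_event_open() has no glibc wrapper; go through syscall(2). */
        fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
        if (fd < 0) {
                perror("perf_event_open");
                return 1;
        }

        ioctl(fd, PERF_EVENT_IOC_RESET, 0);
        ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
        /* ... run the code under measurement here ... */
        ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

        if (read(fd, &count, sizeof(count)) == (ssize_t)sizeof(count))
                printf("icache.misses: %llu\n", (unsigned long long)count);
        close(fd);
        return 0;
}

With the JSON merged, the same counter should also become selectable by its symbolic name on Goldmont Plus parts, roughly "perf stat -e icache.misses", which is the point of shipping these tables with the tool.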
\ No newline at end of file diff --git a/tools/perf/pmu-events/arch/x86/goldmontplus/memory.json b/tools/perf/pmu-events/arch/x86/goldmontplus/memory.json new file mode 100644 index 000000000000..91e0815f3ffb --- /dev/null +++ b/tools/perf/pmu-events/arch/x86/goldmontplus/memory.json @@ -0,0 +1,38 @@ +[ + { + "PEBS": "2", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts when a memory load of a uop spans a page boundary (a split) is retired.", + "EventCode": "0x13", + "Counter": "0,1,2,3", + "UMask": "0x2", + "PEBScounters": "0,1,2,3", + "EventName": "MISALIGN_MEM_REF.LOAD_PAGE_SPLIT", + "SampleAfterValue": "200003", + "BriefDescription": "Load uops that split a page (Precise event capable)" + }, + { + "PEBS": "2", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts when a memory store of a uop spans a page boundary (a split) is retired.", + "EventCode": "0x13", + "Counter": "0,1,2,3", + "UMask": "0x4", + "PEBScounters": "0,1,2,3", + "EventName": "MISALIGN_MEM_REF.STORE_PAGE_SPLIT", + "SampleAfterValue": "200003", + "BriefDescription": "Store uops that split a page (Precise event capable)" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts machine clears due to memory ordering issues. This occurs when a snoop request happens and the machine is uncertain if memory ordering will be preserved - as another core is in the process of modifying the data.", + "EventCode": "0xC3", + "Counter": "0,1,2,3", + "UMask": "0x2", + "PEBScounters": "0,1,2,3", + "EventName": "MACHINE_CLEARS.MEMORY_ORDERING", + "PDIR_COUNTER": "na", + "SampleAfterValue": "20003", + "BriefDescription": "Machine clears due to memory ordering issue" + } +]
\ No newline at end of file diff --git a/tools/perf/pmu-events/arch/x86/goldmontplus/other.json b/tools/perf/pmu-events/arch/x86/goldmontplus/other.json new file mode 100644 index 000000000000..b860374418ab --- /dev/null +++ b/tools/perf/pmu-events/arch/x86/goldmontplus/other.json @@ -0,0 +1,98 @@ +[ + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts cycles that fetch is stalled due to any reason. That is, the decoder queue is able to accept bytes, but the fetch unit is unable to provide bytes. This will include cycles due to an ITLB miss, ICache miss and other events.", + "EventCode": "0x86", + "Counter": "0,1,2,3", + "UMask": "0x0", + "PEBScounters": "0,1,2,3", + "EventName": "FETCH_STALL.ALL", + "PDIR_COUNTER": "na", + "SampleAfterValue": "200003", + "BriefDescription": "Cycles code-fetch stalled due to any reason." + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts cycles that fetch is stalled due to an outstanding ITLB miss. That is, the decoder queue is able to accept bytes, but the fetch unit is unable to provide bytes due to an ITLB miss. Note: this event is not the same as page walk cycles to retrieve an instruction translation.", + "EventCode": "0x86", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "FETCH_STALL.ITLB_FILL_PENDING_CYCLES", + "PDIR_COUNTER": "na", + "SampleAfterValue": "200003", + "BriefDescription": "Cycles the code-fetch stalls and an ITLB miss is outstanding." + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts the number of issue slots per core cycle that were not consumed by the backend due to either a full resource in the backend (RESOURCE_FULL) or due to the processor recovering from some event (RECOVERY).", + "EventCode": "0xCA", + "Counter": "0,1,2,3", + "UMask": "0x0", + "PEBScounters": "0,1,2,3", + "EventName": "ISSUE_SLOTS_NOT_CONSUMED.ANY", + "PDIR_COUNTER": "na", + "SampleAfterValue": "200003", + "BriefDescription": "Unfilled issue slots per cycle" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts the number of issue slots per core cycle that were not consumed because of a full resource in the backend. Including but not limited to resources such as the Re-order Buffer (ROB), reservation stations (RS), load/store buffers, physical registers, or any other needed machine resource that is currently unavailable. Note that uops must be available for consumption in order for this event to fire. If a uop is not available (Instruction Queue is empty), this event will not count.", + "EventCode": "0xCA", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "ISSUE_SLOTS_NOT_CONSUMED.RESOURCE_FULL", + "PDIR_COUNTER": "na", + "SampleAfterValue": "200003", + "BriefDescription": "Unfilled issue slots per cycle because of a full resource in the backend" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts the number of issue slots per core cycle that were not consumed by the backend because allocation is stalled waiting for a mispredicted jump to retire or other branch-like conditions (e.g. the event is relevant during certain microcode flows). 
Counts all issue slots blocked while within this window including slots where uops were not available in the Instruction Queue.", + "EventCode": "0xCA", + "Counter": "0,1,2,3", + "UMask": "0x2", + "PEBScounters": "0,1,2,3", + "EventName": "ISSUE_SLOTS_NOT_CONSUMED.RECOVERY", + "PDIR_COUNTER": "na", + "SampleAfterValue": "200003", + "BriefDescription": "Unfilled issue slots per cycle to recover" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts hardware interrupts received by the processor.", + "EventCode": "0xCB", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "HW_INTERRUPTS.RECEIVED", + "PDIR_COUNTER": "na", + "SampleAfterValue": "203", + "BriefDescription": "Hardware interrupts received" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of core cycles during which interrupts are masked (disabled). Increments by 1 each core cycle that EFLAGS.IF is 0, regardless of whether interrupts are pending or not.", + "EventCode": "0xCB", + "Counter": "0,1,2,3", + "UMask": "0x2", + "PEBScounters": "0,1,2,3", + "EventName": "HW_INTERRUPTS.MASKED", + "PDIR_COUNTER": "na", + "SampleAfterValue": "200003", + "BriefDescription": "Cycles hardware interrupts are masked" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts core cycles during which there are pending interrupts, but interrupts are masked (EFLAGS.IF = 0).", + "EventCode": "0xCB", + "Counter": "0,1,2,3", + "UMask": "0x4", + "PEBScounters": "0,1,2,3", + "EventName": "HW_INTERRUPTS.PENDING_AND_MASKED", + "PDIR_COUNTER": "na", + "SampleAfterValue": "200003", + "BriefDescription": "Cycles pending interrupts are masked" + } +]
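The descriptions above define ISSUE_SLOTS_NOT_CONSUMED.ANY as the slots lost either to a full back-end resource (RESOURCE_FULL) or to recovery (RECOVERY), and HW_INTERRUPTS.MASKED as one count per core cycle with EFLAGS.IF clear. A small hedged sketch of the derived ratios one might compute from such counts; the counter values are assumed inputs (collected separately, e.g. with perf stat), and the placeholder numbers are illustrative only.

#include <stdio.h>
#include <stdint.h>

/* Split of lost issue slots, per the ISSUE_SLOTS_NOT_CONSUMED.* descriptions. */
static void report_lost_slots(uint64_t any, uint64_t resource_full, uint64_t recovery)
{
        if (!any)
                return;
        printf("back-end resource-full share: %.1f%%\n", 100.0 * resource_full / any);
        printf("recovery share:               %.1f%%\n", 100.0 * recovery / any);
}

/* Fraction of core cycles spent with interrupts masked (HW_INTERRUPTS.MASKED). */
static double irq_masked_fraction(uint64_t masked_cycles, uint64_t core_cycles)
{
        return core_cycles ? (double)masked_cycles / core_cycles : 0.0;
}

int main(void)
{
        /* Placeholder counts; real values would come from perf_event_open() or perf stat. */
        report_lost_slots(1000000, 700000, 300000);
        printf("irq-masked fraction: %.3f\n", irq_masked_fraction(50000, 2000000));
        return 0;
}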
\ No newline at end of file diff --git a/tools/perf/pmu-events/arch/x86/goldmontplus/pipeline.json b/tools/perf/pmu-events/arch/x86/goldmontplus/pipeline.json new file mode 100644 index 000000000000..ccf1aed69197 --- /dev/null +++ b/tools/perf/pmu-events/arch/x86/goldmontplus/pipeline.json @@ -0,0 +1,544 @@ +[ + { + "PEBS": "2", + "CollectPEBSRecord": "1", + "PublicDescription": "Counts the number of instructions that retire execution. For instructions that consist of multiple uops, this event counts the retirement of the last uop of the instruction. The counter continues counting during hardware interrupts, traps, and inside interrupt handlers. This event uses fixed counter 0. You cannot collect a PEBs record for this event.", + "EventCode": "0x00", + "Counter": "Fixed counter 0", + "UMask": "0x1", + "PEBScounters": "32", + "EventName": "INST_RETIRED.ANY", + "PDIR_COUNTER": "na", + "SampleAfterValue": "2000003", + "BriefDescription": "Instructions retired (Fixed event)" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts the number of core cycles while the core is not in a halt state. The core enters the halt state when it is running the HLT instruction. In mobile systems the core frequency may change from time to time. For this reason this event may have a changing ratio with regards to time. This event uses fixed counter 1. You cannot collect a PEBs record for this event.", + "EventCode": "0x00", + "Counter": "Fixed counter 1", + "UMask": "0x2", + "PEBScounters": "33", + "EventName": "CPU_CLK_UNHALTED.CORE", + "PDIR_COUNTER": "na", + "SampleAfterValue": "2000003", + "BriefDescription": "Core cycles when core is not halted (Fixed event)" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts the number of reference cycles that the core is not in a halt state. The core enters the halt state when it is running the HLT instruction. In mobile systems the core frequency may change from time. This event is not affected by core frequency changes but counts as if the core is running at the maximum frequency all the time. This event uses fixed counter 2. You cannot collect a PEBs record for this event.", + "EventCode": "0x00", + "Counter": "Fixed counter 2", + "UMask": "0x3", + "PEBScounters": "34", + "EventName": "CPU_CLK_UNHALTED.REF_TSC", + "PDIR_COUNTER": "na", + "SampleAfterValue": "2000003", + "BriefDescription": "Reference cycles when core is not halted (Fixed event)" + }, + { + "PEBS": "2", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts a load blocked from using a store forward, but did not occur because the store data was not available at the right time. 
The forward might occur subsequently when the data is available.", + "EventCode": "0x03", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "LD_BLOCKS.DATA_UNKNOWN", + "SampleAfterValue": "200003", + "BriefDescription": "Loads blocked due to store data not ready (Precise event capable)" + }, + { + "PEBS": "2", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts a load blocked from using a store forward because of an address/size mismatch, only one of the loads blocked from each store will be counted.", + "EventCode": "0x03", + "Counter": "0,1,2,3", + "UMask": "0x2", + "PEBScounters": "0,1,2,3", + "EventName": "LD_BLOCKS.STORE_FORWARD", + "SampleAfterValue": "200003", + "BriefDescription": "Loads blocked due to store forward restriction (Precise event capable)" + }, + { + "PEBS": "2", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts loads that block because their address modulo 4K matches a pending store.", + "EventCode": "0x03", + "Counter": "0,1,2,3", + "UMask": "0x4", + "PEBScounters": "0,1,2,3", + "EventName": "LD_BLOCKS.4K_ALIAS", + "SampleAfterValue": "200003", + "BriefDescription": "Loads blocked because address has 4k partial address false dependence (Precise event capable)" + }, + { + "PEBS": "2", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts loads blocked because they are unable to find their physical address in the micro TLB (UTLB).", + "EventCode": "0x03", + "Counter": "0,1,2,3", + "UMask": "0x8", + "PEBScounters": "0,1,2,3", + "EventName": "LD_BLOCKS.UTLB_MISS", + "SampleAfterValue": "200003", + "BriefDescription": "Loads blocked because address in not in the UTLB (Precise event capable)" + }, + { + "PEBS": "2", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts anytime a load that retires is blocked for any reason.", + "EventCode": "0x03", + "Counter": "0,1,2,3", + "UMask": "0x10", + "PEBScounters": "0,1,2,3", + "EventName": "LD_BLOCKS.ALL_BLOCK", + "SampleAfterValue": "200003", + "BriefDescription": "Loads blocked (Precise event capable)" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts uops issued by the front end and allocated into the back end of the machine. This event counts uops that retire as well as uops that were speculatively executed but didn't retire. The sort of speculative uops that might be counted includes, but is not limited to those uops issued in the shadow of a miss-predicted branch, those uops that are inserted during an assist (such as for a denormal floating point result), and (previously allocated) uops that might be canceled during a machine clear.", + "EventCode": "0x0E", + "Counter": "0,1,2,3", + "UMask": "0x0", + "PEBScounters": "0,1,2,3", + "EventName": "UOPS_ISSUED.ANY", + "PDIR_COUNTER": "na", + "SampleAfterValue": "200003", + "BriefDescription": "Uops issued to the back end per cycle" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Core cycles when core is not halted. This event uses a (_P)rogrammable general purpose performance counter.", + "EventCode": "0x3C", + "Counter": "0,1,2,3", + "UMask": "0x0", + "PEBScounters": "0,1,2,3", + "EventName": "CPU_CLK_UNHALTED.CORE_P", + "PDIR_COUNTER": "na", + "SampleAfterValue": "2000003", + "BriefDescription": "Core cycles when core is not halted" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Reference cycles when core is not halted. 
This event uses a (_P)rogrammable general purpose performance counter.", + "EventCode": "0x3C", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "CPU_CLK_UNHALTED.REF", + "PDIR_COUNTER": "na", + "SampleAfterValue": "2000003", + "BriefDescription": "Reference cycles when core is not halted" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "This event used to measure front-end inefficiencies. I.e. when front-end of the machine is not delivering uops to the back-end and the back-end has is not stalled. This event can be used to identify if the machine is truly front-end bound. When this event occurs, it is an indication that the front-end of the machine is operating at less than its theoretical peak performance. Background: We can think of the processor pipeline as being divided into 2 broader parts: Front-end and Back-end. Front-end is responsible for fetching the instruction, decoding into uops in machine understandable format and putting them into a uop queue to be consumed by back end. The back-end then takes these uops, allocates the required resources. When all resources are ready, uops are executed. If the back-end is not ready to accept uops from the front-end, then we do not want to count these as front-end bottlenecks. However, whenever we have bottlenecks in the back-end, we will have allocation unit stalls and eventually forcing the front-end to wait until the back-end is ready to receive more uops. This event counts only when back-end is requesting more uops and front-end is not able to provide them. When 3 uops are requested and no uops are delivered, the event counts 3. When 3 are requested, and only 1 is delivered, the event counts 2. When only 2 are delivered, the event counts 1. Alternatively stated, the event will not count if 3 uops are delivered, or if the back end is stalled and not requesting any uops at all. Counts indicate missed opportunities for the front-end to deliver a uop to the back end. Some examples of conditions that cause front-end efficiencies are: ICache misses, ITLB misses, and decoder restrictions that limit the front-end bandwidth. Known Issues: Some uops require multiple allocation slots. These uops will not be charged as a front end 'not delivered' opportunity, and will be regarded as a back end problem. For example, the INC instruction has one uop that requires 2 issue slots. A stream of INC instructions will not count as UOPS_NOT_DELIVERED, even though only one instruction can be issued per clock. The low uop issue rate for a stream of INC instructions is considered to be a back end issue.", + "EventCode": "0x9C", + "Counter": "0,1,2,3", + "UMask": "0x0", + "PEBScounters": "0,1,2,3", + "EventName": "UOPS_NOT_DELIVERED.ANY", + "PDIR_COUNTER": "na", + "SampleAfterValue": "200003", + "BriefDescription": "Uops requested but not-delivered to the back-end per cycle" + }, + { + "PEBS": "2", + "CollectPEBSRecord": "1", + "PublicDescription": "Counts the number of instructions that retire execution. For instructions that consist of multiple uops, this event counts the retirement of the last uop of the instruction. The event continues counting during hardware interrupts, traps, and inside interrupt handlers. This is an architectural performance event. This event uses a (_P)rogrammable general purpose performance counter. *This event is Precise Event capable: The EventingRIP field in the PEBS record is precise to the address of the instruction which caused the event. 
Note: Because PEBS records can be collected only on IA32_PMC0, only one event can use the PEBS facility at a time.", + "EventCode": "0xC0", + "Counter": "0,1,2,3", + "UMask": "0x0", + "PEBScounters": "0,1,2,3", + "EventName": "INST_RETIRED.ANY_P", + "SampleAfterValue": "2000003", + "BriefDescription": "Instructions retired (Precise event capable)" + }, + { + "PEBS": "2", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts INST_RETIRED.ANY using the Reduced Skid PEBS feature that reduces the shadow in which events aren't counted allowing for a more unbiased distribution of samples across instructions retired.", + "EventCode": "0xC0", + "Counter": "0,1,2,3", + "UMask": "0x0", + "EventName": "INST_RETIRED.PREC_DIST", + "SampleAfterValue": "2000003", + "BriefDescription": "Instructions retired - using Reduced Skid PEBS feature" + }, + { + "PEBS": "2", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts uops which retired.", + "EventCode": "0xC2", + "Counter": "0,1,2,3", + "UMask": "0x0", + "PEBScounters": "0,1,2,3", + "EventName": "UOPS_RETIRED.ANY", + "PDIR_COUNTER": "na", + "SampleAfterValue": "2000003", + "BriefDescription": "Uops retired (Precise event capable)" + }, + { + "PEBS": "2", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts uops retired that are from the complex flows issued by the micro-sequencer (MS). Counts both the uops from a micro-coded instruction, and the uops that might be generated from a micro-coded assist.", + "EventCode": "0xC2", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "UOPS_RETIRED.MS", + "PDIR_COUNTER": "na", + "SampleAfterValue": "2000003", + "BriefDescription": "MS uops retired (Precise event capable)" + }, + { + "PEBS": "2", + "CollectPEBSRecord": "1", + "PublicDescription": "Counts the number of floating point divide uops retired.", + "EventCode": "0xC2", + "Counter": "0,1,2,3", + "UMask": "0x8", + "PEBScounters": "0,1,2,3", + "EventName": "UOPS_RETIRED.FPDIV", + "SampleAfterValue": "2000003", + "BriefDescription": "Floating point divide uops retired (Precise Event Capable)" + }, + { + "PEBS": "2", + "CollectPEBSRecord": "1", + "PublicDescription": "Counts the number of integer divide uops retired.", + "EventCode": "0xC2", + "Counter": "0,1,2,3", + "UMask": "0x10", + "PEBScounters": "0,1,2,3", + "EventName": "UOPS_RETIRED.IDIV", + "SampleAfterValue": "2000003", + "BriefDescription": "Integer divide uops retired (Precise Event Capable)" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts machine clears for any reason.", + "EventCode": "0xC3", + "Counter": "0,1,2,3", + "UMask": "0x0", + "PEBScounters": "0,1,2,3", + "EventName": "MACHINE_CLEARS.ALL", + "PDIR_COUNTER": "na", + "SampleAfterValue": "20003", + "BriefDescription": "All machine clears" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts the number of times that the processor detects that a program is writing to a code section and has to perform a machine clear because of that modification. Self-modifying code (SMC) causes a severe penalty in all Intel architecture processors.", + "EventCode": "0xC3", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "MACHINE_CLEARS.SMC", + "PDIR_COUNTER": "na", + "SampleAfterValue": "20003", + "BriefDescription": "Self-Modifying Code detected" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts machine clears due to floating point (FP) operations needing assists. 
For instance, if the result was a floating point denormal, the hardware clears the pipeline and reissues uops to produce the correct IEEE compliant denormal result.", + "EventCode": "0xC3", + "Counter": "0,1,2,3", + "UMask": "0x4", + "PEBScounters": "0,1,2,3", + "EventName": "MACHINE_CLEARS.FP_ASSIST", + "PDIR_COUNTER": "na", + "SampleAfterValue": "20003", + "BriefDescription": "Machine clears due to FP assists" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts machine clears due to memory disambiguation. Memory disambiguation happens when a load which has been issued conflicts with a previous unretired store in the pipeline whose address was not known at issue time, but is later resolved to be the same as the load address.", + "EventCode": "0xC3", + "Counter": "0,1,2,3", + "UMask": "0x8", + "PEBScounters": "0,1,2,3", + "EventName": "MACHINE_CLEARS.DISAMBIGUATION", + "PDIR_COUNTER": "na", + "SampleAfterValue": "20003", + "BriefDescription": "Machine clears due to memory disambiguation" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts the number of times that the machines clears due to a page fault. Covers both I-side and D-side(Loads/Stores) page faults. A page fault occurs when either page is not present, or an access violation", + "EventCode": "0xC3", + "Counter": "0,1,2,3", + "UMask": "0x20", + "PEBScounters": "0,1,2,3", + "EventName": "MACHINE_CLEARS.PAGE_FAULT", + "PDIR_COUNTER": "na", + "SampleAfterValue": "20003", + "BriefDescription": "Machines clear due to a page fault" + }, + { + "PEBS": "2", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts branch instructions retired for all branch types. This is an architectural performance event.", + "EventCode": "0xC4", + "Counter": "0,1,2,3", + "UMask": "0x0", + "PEBScounters": "0,1,2,3", + "EventName": "BR_INST_RETIRED.ALL_BRANCHES", + "SampleAfterValue": "200003", + "BriefDescription": "Retired branch instructions (Precise event capable)" + }, + { + "PEBS": "2", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts retired Jcc (Jump on Conditional Code/Jump if Condition is Met) branch instructions retired, including both when the branch was taken and when it was not taken.", + "EventCode": "0xC4", + "Counter": "0,1,2,3", + "UMask": "0x7e", + "PEBScounters": "0,1,2,3", + "EventName": "BR_INST_RETIRED.JCC", + "SampleAfterValue": "200003", + "BriefDescription": "Retired conditional branch instructions (Precise event capable)" + }, + { + "PEBS": "2", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of taken branch instructions retired.", + "EventCode": "0xC4", + "Counter": "0,1,2,3", + "UMask": "0x80", + "PEBScounters": "0,1,2,3", + "EventName": "BR_INST_RETIRED.ALL_TAKEN_BRANCHES", + "SampleAfterValue": "200003", + "BriefDescription": "Retired taken branch instructions (Precise event capable)" + }, + { + "PEBS": "2", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts far branch instructions retired. 
This includes far jump, far call and return, and Interrupt call and return.", + "EventCode": "0xC4", + "Counter": "0,1,2,3", + "UMask": "0xbf", + "PEBScounters": "0,1,2,3", + "EventName": "BR_INST_RETIRED.FAR_BRANCH", + "SampleAfterValue": "200003", + "BriefDescription": "Retired far branch instructions (Precise event capable)" + }, + { + "PEBS": "2", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts near indirect call or near indirect jmp branch instructions retired.", + "EventCode": "0xC4", + "Counter": "0,1,2,3", + "UMask": "0xeb", + "PEBScounters": "0,1,2,3", + "EventName": "BR_INST_RETIRED.NON_RETURN_IND", + "SampleAfterValue": "200003", + "BriefDescription": "Retired instructions of near indirect Jmp or call (Precise event capable)" + }, + { + "PEBS": "2", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts near return branch instructions retired.", + "EventCode": "0xC4", + "Counter": "0,1,2,3", + "UMask": "0xf7", + "PEBScounters": "0,1,2,3", + "EventName": "BR_INST_RETIRED.RETURN", + "SampleAfterValue": "200003", + "BriefDescription": "Retired near return instructions (Precise event capable)" + }, + { + "PEBS": "2", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts near CALL branch instructions retired.", + "EventCode": "0xC4", + "Counter": "0,1,2,3", + "UMask": "0xf9", + "PEBScounters": "0,1,2,3", + "EventName": "BR_INST_RETIRED.CALL", + "SampleAfterValue": "200003", + "BriefDescription": "Retired near call instructions (Precise event capable)" + }, + { + "PEBS": "2", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts near indirect CALL branch instructions retired.", + "EventCode": "0xC4", + "Counter": "0,1,2,3", + "UMask": "0xfb", + "PEBScounters": "0,1,2,3", + "EventName": "BR_INST_RETIRED.IND_CALL", + "SampleAfterValue": "200003", + "BriefDescription": "Retired near indirect call instructions (Precise event capable)" + }, + { + "PEBS": "2", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts near relative CALL branch instructions retired.", + "EventCode": "0xC4", + "Counter": "0,1,2,3", + "UMask": "0xfd", + "PEBScounters": "0,1,2,3", + "EventName": "BR_INST_RETIRED.REL_CALL", + "SampleAfterValue": "200003", + "BriefDescription": "Retired near relative call instructions (Precise event capable)" + }, + { + "PEBS": "2", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts Jcc (Jump on Conditional Code/Jump if Condition is Met) branch instructions retired that were taken and does not count when the Jcc branch instruction were not taken.", + "EventCode": "0xC4", + "Counter": "0,1,2,3", + "UMask": "0xfe", + "PEBScounters": "0,1,2,3", + "EventName": "BR_INST_RETIRED.TAKEN_JCC", + "SampleAfterValue": "200003", + "BriefDescription": "Retired conditional branch instructions that were taken (Precise event capable)" + }, + { + "PEBS": "2", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts mispredicted branch instructions retired including all branch types.", + "EventCode": "0xC5", + "Counter": "0,1,2,3", + "UMask": "0x0", + "PEBScounters": "0,1,2,3", + "EventName": "BR_MISP_RETIRED.ALL_BRANCHES", + "SampleAfterValue": "200003", + "BriefDescription": "Retired mispredicted branch instructions (Precise event capable)" + }, + { + "PEBS": "2", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts mispredicted retired Jcc (Jump on Conditional Code/Jump if Condition is Met) branch instructions retired, including both when the branch was supposed to be taken and when it was not supposed to be taken (but the processor predicted the opposite 
condition).", + "EventCode": "0xC5", + "Counter": "0,1,2,3", + "UMask": "0x7e", + "PEBScounters": "0,1,2,3", + "EventName": "BR_MISP_RETIRED.JCC", + "SampleAfterValue": "200003", + "BriefDescription": "Retired mispredicted conditional branch instructions (Precise event capable)" + }, + { + "PEBS": "2", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts mispredicted branch instructions retired that were near indirect call or near indirect jmp, where the target address taken was not what the processor predicted.", + "EventCode": "0xC5", + "Counter": "0,1,2,3", + "UMask": "0xeb", + "PEBScounters": "0,1,2,3", + "EventName": "BR_MISP_RETIRED.NON_RETURN_IND", + "SampleAfterValue": "200003", + "BriefDescription": "Retired mispredicted instructions of near indirect Jmp or near indirect call (Precise event capable)" + }, + { + "PEBS": "2", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts mispredicted near RET branch instructions retired, where the return address taken was not what the processor predicted.", + "EventCode": "0xC5", + "Counter": "0,1,2,3", + "UMask": "0xf7", + "PEBScounters": "0,1,2,3", + "EventName": "BR_MISP_RETIRED.RETURN", + "SampleAfterValue": "200003", + "BriefDescription": "Retired mispredicted near return instructions (Precise event capable)" + }, + { + "PEBS": "2", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts mispredicted near indirect CALL branch instructions retired, where the target address taken was not what the processor predicted.", + "EventCode": "0xC5", + "Counter": "0,1,2,3", + "UMask": "0xfb", + "PEBScounters": "0,1,2,3", + "EventName": "BR_MISP_RETIRED.IND_CALL", + "SampleAfterValue": "200003", + "BriefDescription": "Retired mispredicted near indirect call instructions (Precise event capable)" + }, + { + "PEBS": "2", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts mispredicted retired Jcc (Jump on Conditional Code/Jump if Condition is Met) branch instructions retired that were supposed to be taken but the processor predicted that it would not be taken.", + "EventCode": "0xC5", + "Counter": "0,1,2,3", + "UMask": "0xfe", + "PEBScounters": "0,1,2,3", + "EventName": "BR_MISP_RETIRED.TAKEN_JCC", + "SampleAfterValue": "200003", + "BriefDescription": "Retired mispredicted conditional branch instructions that were taken (Precise event capable)" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts core cycles if either divide unit is busy.", + "EventCode": "0xCD", + "Counter": "0,1,2,3", + "UMask": "0x0", + "PEBScounters": "0,1,2,3", + "EventName": "CYCLES_DIV_BUSY.ALL", + "PDIR_COUNTER": "na", + "SampleAfterValue": "2000003", + "BriefDescription": "Cycles a divider is busy" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts core cycles the integer divide unit is busy.", + "EventCode": "0xCD", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "CYCLES_DIV_BUSY.IDIV", + "PDIR_COUNTER": "na", + "SampleAfterValue": "200003", + "BriefDescription": "Cycles the integer divide unit is busy" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts core cycles the floating point divide unit is busy.", + "EventCode": "0xCD", + "Counter": "0,1,2,3", + "UMask": "0x2", + "PEBScounters": "0,1,2,3", + "EventName": "CYCLES_DIV_BUSY.FPDIV", + "PDIR_COUNTER": "na", + "SampleAfterValue": "200003", + "BriefDescription": "Cycles the FP divide unit is busy" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts the number of times a BACLEAR is signaled for any 
reason, including, but not limited to indirect branch/call, Jcc (Jump on Conditional Code/Jump if Condition is Met) branch, unconditional branch/call, and returns.", + "EventCode": "0xE6", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "BACLEARS.ALL", + "PDIR_COUNTER": "na", + "SampleAfterValue": "200003", + "BriefDescription": "BACLEARs asserted for any branch type" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts BACLEARS on return instructions.", + "EventCode": "0xE6", + "Counter": "0,1,2,3", + "UMask": "0x8", + "PEBScounters": "0,1,2,3", + "EventName": "BACLEARS.RETURN", + "PDIR_COUNTER": "na", + "SampleAfterValue": "200003", + "BriefDescription": "BACLEARs asserted for return branch" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts BACLEARS on Jcc (Jump on Conditional Code/Jump if Condition is Met) branches.", + "EventCode": "0xE6", + "Counter": "0,1,2,3", + "UMask": "0x10", + "PEBScounters": "0,1,2,3", + "EventName": "BACLEARS.COND", + "PDIR_COUNTER": "na", + "SampleAfterValue": "200003", + "BriefDescription": "BACLEARs asserted for conditional branch" + } +]
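Two of the retirement events above pair naturally: BR_INST_RETIRED.ALL_BRANCHES (EventCode 0xC4, UMask 0x0) and BR_MISP_RETIRED.ALL_BRANCHES (EventCode 0xC5, UMask 0x0) give a branch misprediction rate when read together. As a hedged illustration, again not part of the patch, here is a C sketch that opens both as a single perf event group (so they count over exactly the same interval) and prints the ratio; the open_raw() helper, the umask<<8|event encoding for the "cpu" PMU, and the workload placeholder are assumptions of this sketch.

#include <linux/perf_event.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct group_counts {
        uint64_t nr;            /* number of counters in the group */
        uint64_t val[2];        /* leader first, then members, in creation order */
};

static int open_raw(uint64_t config, int group_fd, int leader)
{
        struct perf_event_attr attr;

        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.type = PERF_TYPE_RAW;
        attr.config = config;
        attr.exclude_kernel = 1;
        attr.disabled = leader;                 /* group members follow the leader */
        if (leader)
                attr.read_format = PERF_FORMAT_GROUP;
        return syscall(__NR_perf_event_open, &attr, 0, -1, group_fd, 0);
}

int main(void)
{
        struct group_counts gc;
        int leader, member;

        /* BR_INST_RETIRED.ALL_BRANCHES: EventCode 0xC4, UMask 0x0 (umask<<8 | event). */
        leader = open_raw(0x00C4, -1, 1);
        if (leader < 0) {
                perror("perf_event_open (leader)");
                return 1;
        }
        /* BR_MISP_RETIRED.ALL_BRANCHES: EventCode 0xC5, UMask 0x0. */
        member = open_raw(0x00C5, leader, 0);
        if (member < 0) {
                perror("perf_event_open (member)");
                return 1;
        }

        ioctl(leader, PERF_EVENT_IOC_RESET, PERF_IOC_FLAG_GROUP);
        ioctl(leader, PERF_EVENT_IOC_ENABLE, PERF_IOC_FLAG_GROUP);
        /* ... run the code under measurement here ... */
        ioctl(leader, PERF_EVENT_IOC_DISABLE, PERF_IOC_FLAG_GROUP);

        if (read(leader, &gc, sizeof(gc)) > 0 && gc.nr == 2 && gc.val[0])
                printf("branch mispredict rate: %.2f%%\n",
                       100.0 * gc.val[1] / gc.val[0]);
        close(member);
        close(leader);
        return 0;
}

The generic PERF_TYPE_HARDWARE events (BRANCH_INSTRUCTIONS / BRANCH_MISSES) would give a portable equivalent; the raw encodings are used here only to tie the sketch back to the JSON entries above.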
\ No newline at end of file diff --git a/tools/perf/pmu-events/arch/x86/goldmontplus/virtual-memory.json b/tools/perf/pmu-events/arch/x86/goldmontplus/virtual-memory.json new file mode 100644 index 000000000000..0b53a3b0dfb8 --- /dev/null +++ b/tools/perf/pmu-events/arch/x86/goldmontplus/virtual-memory.json @@ -0,0 +1,218 @@ +[ + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts page walks completed due to demand data loads (including SW prefetches) whose address translations missed in all TLB levels and were mapped to 4K pages. The page walks can end with or without a page fault.", + "EventCode": "0x08", + "Counter": "0,1,2,3", + "UMask": "0x2", + "PEBScounters": "0,1,2,3", + "EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_4K", + "PDIR_COUNTER": "na", + "SampleAfterValue": "200003", + "BriefDescription": "Page walk completed due to a demand load to a 4K page" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts page walks completed due to demand data loads (including SW prefetches) whose address translations missed in all TLB levels and were mapped to 2M or 4M pages. The page walks can end with or without a page fault.", + "EventCode": "0x08", + "Counter": "0,1,2,3", + "UMask": "0x4", + "PEBScounters": "0,1,2,3", + "EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M", + "PDIR_COUNTER": "na", + "SampleAfterValue": "200003", + "BriefDescription": "Page walk completed due to a demand load to a 2M or 4M page" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts page walks completed due to demand data loads (including SW prefetches) whose address translations missed in all TLB levels and were mapped to 1GB pages. The page walks can end with or without a page fault.", + "EventCode": "0x08", + "Counter": "0,1,2,3", + "UMask": "0x8", + "PEBScounters": "0,1,2,3", + "EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_1GB", + "PDIR_COUNTER": "na", + "SampleAfterValue": "200003", + "BriefDescription": "Page walk completed due to a demand load to a 1GB page" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts once per cycle for each page walk occurring due to a load (demand data loads or SW prefetches). Includes cycles spent traversing the Extended Page Table (EPT). Average cycles per walk can be calculated by dividing by the number of walks.", + "EventCode": "0x08", + "Counter": "0,1,2,3", + "UMask": "0x10", + "PEBScounters": "0,1,2,3", + "EventName": "DTLB_LOAD_MISSES.WALK_PENDING", + "PDIR_COUNTER": "na", + "SampleAfterValue": "200003", + "BriefDescription": "Page walks outstanding due to a demand load every cycle." + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts page walks completed due to demand data stores whose address translations missed in the TLB and were mapped to 4K pages. The page walks can end with or without a page fault.", + "EventCode": "0x49", + "Counter": "0,1,2,3", + "UMask": "0x2", + "PEBScounters": "0,1,2,3", + "EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_4K", + "PDIR_COUNTER": "na", + "SampleAfterValue": "2000003", + "BriefDescription": "Page walk completed due to a demand data store to a 4K page" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts page walks completed due to demand data stores whose address translations missed in the TLB and were mapped to 2M or 4M pages. 
The page walks can end with or without a page fault.", + "EventCode": "0x49", + "Counter": "0,1,2,3", + "UMask": "0x4", + "PEBScounters": "0,1,2,3", + "EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_2M_4M", + "PDIR_COUNTER": "na", + "SampleAfterValue": "2000003", + "BriefDescription": "Page walk completed due to a demand data store to a 2M or 4M page" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts page walks completed due to demand data stores whose address translations missed in the TLB and were mapped to 1GB pages. The page walks can end with or without a page fault.", + "EventCode": "0x49", + "Counter": "0,1,2,3", + "UMask": "0x8", + "PEBScounters": "0,1,2,3", + "EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_1GB", + "PDIR_COUNTER": "na", + "SampleAfterValue": "2000003", + "BriefDescription": "Page walk completed due to a demand data store to a 1GB page" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts once per cycle for each page walk occurring due to a demand data store. Includes cycles spent traversing the Extended Page Table (EPT). Average cycles per walk can be calculated by dividing by the number of walks.", + "EventCode": "0x49", + "Counter": "0,1,2,3", + "UMask": "0x10", + "PEBScounters": "0,1,2,3", + "EventName": "DTLB_STORE_MISSES.WALK_PENDING", + "PDIR_COUNTER": "na", + "SampleAfterValue": "200003", + "BriefDescription": "Page walks outstanding due to a demand data store every cycle." + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts once per cycle for each page walk only while traversing the Extended Page Table (EPT), and does not count during the rest of the translation. The EPT is used for translating Guest-Physical Addresses to Physical Addresses for Virtual Machine Monitors (VMMs). Average cycles per walk can be calculated by dividing the count by number of walks.", + "EventCode": "0x4F", + "Counter": "0,1,2,3", + "UMask": "0x10", + "PEBScounters": "0,1,2,3", + "EventName": "EPT.WALK_PENDING", + "PDIR_COUNTER": "na", + "SampleAfterValue": "200003", + "BriefDescription": "Page walks outstanding due to walking the EPT every cycle" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts the number of times the machine was unable to find a translation in the Instruction Translation Lookaside Buffer (ITLB) for a linear address of an instruction fetch. It counts when new translation are filled into the ITLB. The event is speculative in nature, but will not count translations (page walks) that are begun and not finished, or translations that are finished but not filled into the ITLB.", + "EventCode": "0x81", + "Counter": "0,1,2,3", + "UMask": "0x4", + "PEBScounters": "0,1,2,3", + "EventName": "ITLB.MISS", + "PDIR_COUNTER": "na", + "SampleAfterValue": "200003", + "BriefDescription": "ITLB misses" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts page walks completed due to instruction fetches whose address translations missed in the TLB and were mapped to 4K pages. The page walks can end with or without a page fault.", + "EventCode": "0x85", + "Counter": "0,1,2,3", + "UMask": "0x2", + "PEBScounters": "0,1,2,3", + "EventName": "ITLB_MISSES.WALK_COMPLETED_4K", + "PDIR_COUNTER": "na", + "SampleAfterValue": "2000003", + "BriefDescription": "Page walk completed due to an instruction fetch in a 4K page" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts page walks completed due to instruction fetches whose address translations missed in the TLB and were mapped to 2M or 4M pages. 
The page walks can end with or without a page fault.", + "EventCode": "0x85", + "Counter": "0,1,2,3", + "UMask": "0x4", + "PEBScounters": "0,1,2,3", + "EventName": "ITLB_MISSES.WALK_COMPLETED_2M_4M", + "PDIR_COUNTER": "na", + "SampleAfterValue": "2000003", + "BriefDescription": "Page walk completed due to an instruction fetch in a 2M or 4M page" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts page walks completed due to instruction fetches whose address translations missed in the TLB and were mapped to 1GB pages. The page walks can end with or without a page fault.", + "EventCode": "0x85", + "Counter": "0,1,2,3", + "UMask": "0x8", + "PEBScounters": "0,1,2,3", + "EventName": "ITLB_MISSES.WALK_COMPLETED_1GB", + "PDIR_COUNTER": "na", + "SampleAfterValue": "2000003", + "BriefDescription": "Page walk completed due to an instruction fetch in a 1GB page" + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts once per cycle for each page walk occurring due to an instruction fetch. Includes cycles spent traversing the Extended Page Table (EPT). Average cycles per walk can be calculated by dividing by the number of walks.", + "EventCode": "0x85", + "Counter": "0,1,2,3", + "UMask": "0x10", + "PEBScounters": "0,1,2,3", + "EventName": "ITLB_MISSES.WALK_PENDING", + "PDIR_COUNTER": "na", + "SampleAfterValue": "200003", + "BriefDescription": "Page walks outstanding due to an instruction fetch every cycle." + }, + { + "CollectPEBSRecord": "1", + "PublicDescription": "Counts STLB flushes. The TLBs are flushed on instructions like INVLPG and MOV to CR3.", + "EventCode": "0xBD", + "Counter": "0,1,2,3", + "UMask": "0x20", + "PEBScounters": "0,1,2,3", + "EventName": "TLB_FLUSHES.STLB_ANY", + "PDIR_COUNTER": "na", + "SampleAfterValue": "20003", + "BriefDescription": "STLB flushes" + }, + { + "PEBS": "2", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts load uops retired that caused a DTLB miss.", + "EventCode": "0xD0", + "Counter": "0,1,2,3", + "UMask": "0x11", + "PEBScounters": "0,1,2,3", + "EventName": "MEM_UOPS_RETIRED.DTLB_MISS_LOADS", + "SampleAfterValue": "200003", + "BriefDescription": "Load uops retired that missed the DTLB (Precise event capable)" + }, + { + "PEBS": "2", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts store uops retired that caused a DTLB miss.", + "EventCode": "0xD0", + "Counter": "0,1,2,3", + "UMask": "0x12", + "PEBScounters": "0,1,2,3", + "EventName": "MEM_UOPS_RETIRED.DTLB_MISS_STORES", + "SampleAfterValue": "200003", + "BriefDescription": "Store uops retired that missed the DTLB (Precise event capable)" + }, + { + "PEBS": "2", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts uops retired that had a DTLB miss on load, store or either. Note that when two distinct memory operations to the same page miss the DTLB, only one of them will be recorded as a DTLB miss.", + "EventCode": "0xD0", + "Counter": "0,1,2,3", + "UMask": "0x13", + "PEBScounters": "0,1,2,3", + "EventName": "MEM_UOPS_RETIRED.DTLB_MISS", + "SampleAfterValue": "200003", + "BriefDescription": "Memory uops retired that missed the DTLB (Precise event capable)" + } +]
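Several of the *_PENDING descriptions above note that average page-walk duration is the WALK_PENDING cycle count divided by the number of completed walks. A small hedged helper illustrating that arithmetic for the DTLB load-side events; the counter values are assumed to have been collected already (e.g. with perf stat), and the numbers in main() are placeholders.

#include <stdio.h>
#include <stdint.h>

/*
 * Average D-side page-walk duration, per the DTLB_LOAD_MISSES.* descriptions:
 * WALK_PENDING counts one per cycle a walk is outstanding, and the
 * WALK_COMPLETED_* events count finished walks per page size.
 */
static double avg_load_walk_cycles(uint64_t walk_pending,
                                   uint64_t completed_4k,
                                   uint64_t completed_2m_4m,
                                   uint64_t completed_1gb)
{
        uint64_t walks = completed_4k + completed_2m_4m + completed_1gb;

        return walks ? (double)walk_pending / walks : 0.0;
}

int main(void)
{
        /* Placeholder counts for illustration only. */
        printf("avg DTLB load walk: %.1f cycles\n",
               avg_load_walk_cycles(1200000, 8000, 150, 2));
        return 0;
}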
\ No newline at end of file diff --git a/tools/perf/pmu-events/arch/x86/haswell/hsw-metrics.json b/tools/perf/pmu-events/arch/x86/haswell/hsw-metrics.json new file mode 100644 index 000000000000..5ab5c78fe580 --- /dev/null +++ b/tools/perf/pmu-events/arch/x86/haswell/hsw-metrics.json @@ -0,0 +1,158 @@ +[ + { + "BriefDescription": "Instructions Per Cycle (per logical thread)", + "MetricExpr": "INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD", + "MetricGroup": "TopDownL1", + "MetricName": "IPC" + }, + { + "BriefDescription": "Uops Per Instruction", + "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / INST_RETIRED.ANY", + "MetricGroup": "Pipeline", + "MetricName": "UPI" + }, + { + "BriefDescription": "Rough Estimation of fraction of fetched lines bytes that were likely consumed by program instructions", + "MetricExpr": "min( 1 , IDQ.MITE_UOPS / ( (UOPS_RETIRED.RETIRE_SLOTS / INST_RETIRED.ANY) * 16 * ( ICACHE.HIT + ICACHE.MISSES ) / 4.0 ) )", + "MetricGroup": "Frontend", + "MetricName": "IFetch_Line_Utilization" + }, + { + "BriefDescription": "Fraction of Uops delivered by the DSB (aka Decoded Icache; or Uop Cache)", + "MetricExpr": "IDQ.DSB_UOPS / ( IDQ.DSB_UOPS + LSD.UOPS + IDQ.MITE_UOPS + IDQ.MS_UOPS )", + "MetricGroup": "DSB; Frontend_Bandwidth", + "MetricName": "DSB_Coverage" + }, + { + "BriefDescription": "Cycles Per Instruction (threaded)", + "MetricExpr": "1 / (INST_RETIRED.ANY / cycles)", + "MetricGroup": "Pipeline;Summary", + "MetricName": "CPI" + }, + { + "BriefDescription": "Per-thread actual clocks when the logical processor is active. This is called 'Clockticks' in VTune.", + "MetricExpr": "CPU_CLK_UNHALTED.THREAD", + "MetricGroup": "Summary", + "MetricName": "CLKS" + }, + { + "BriefDescription": "Total issue-pipeline slots", + "MetricExpr": "4*(( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else cycles)", + "MetricGroup": "TopDownL1", + "MetricName": "SLOTS" + }, + { + "BriefDescription": "Total number of retired Instructions", + "MetricExpr": "INST_RETIRED.ANY", + "MetricGroup": "Summary", + "MetricName": "Instructions" + }, + { + "BriefDescription": "Instructions Per Cycle (per physical core)", + "MetricExpr": "INST_RETIRED.ANY / (( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else cycles)", + "MetricGroup": "SMT", + "MetricName": "CoreIPC" + }, + { + "BriefDescription": "Instruction-Level-Parallelism (average number of uops executed when there is at least 1 uop executed)", + "MetricExpr": "( UOPS_EXECUTED.CORE / 2 / (( cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ / 2) if #SMT_on else cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@) ) if #SMT_on else UOPS_EXECUTED.CORE / (( cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ / 2) if #SMT_on else cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@)", + "MetricGroup": "Pipeline;Ports_Utilization", + "MetricName": "ILP" + }, + { + "BriefDescription": "Average Branch Address Clear Cost (fraction of cycles)", + "MetricExpr": "2* (( RS_EVENTS.EMPTY_CYCLES - ICACHE.IFDATA_STALL - (( 14 * ITLB_MISSES.STLB_HIT + ITLB_MISSES.WALK_DURATION )) ) / RS_EVENTS.EMPTY_END)", + "MetricGroup": "Unknown_Branches", + "MetricName": "BAClear_Cost" + }, + { + "BriefDescription": "Core actual clocks when any thread is active on the physical core", + "MetricExpr": "( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else CPU_CLK_UNHALTED.THREAD", + "MetricGroup": "SMT", + "MetricName": "CORE_CLKS" + }, + { + "BriefDescription": "Actual Average Latency for L1 data-cache miss demand loads", + "MetricExpr": "L1D_PEND_MISS.PENDING / ( MEM_LOAD_UOPS_RETIRED.L1_MISS + mem_load_uops_retired.hit_lfb )", + "MetricGroup": 
"Memory_Bound;Memory_Lat", + "MetricName": "Load_Miss_Real_Latency" + }, + { + "BriefDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least 1 such miss)", + "MetricExpr": "L1D_PEND_MISS.PENDING / (( cpu@l1d_pend_miss.pending_cycles\\,any\\=1@ / 2) if #SMT_on else L1D_PEND_MISS.PENDING_CYCLES)", + "MetricGroup": "Memory_Bound;Memory_BW", + "MetricName": "MLP" + }, + { + "BriefDescription": "Utilization of the core's Page Walker(s) serving STLB misses triggered by instruction/Load/Store accesses", + "MetricExpr": "( ITLB_MISSES.WALK_DURATION + DTLB_LOAD_MISSES.WALK_DURATION + DTLB_STORE_MISSES.WALK_DURATION ) / (( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else cycles)", + "MetricGroup": "TLB", + "MetricName": "Page_Walks_Utilization" + }, + { + "BriefDescription": "Average CPU Utilization", + "MetricExpr": "CPU_CLK_UNHALTED.REF_TSC / msr@tsc@", + "MetricGroup": "Summary", + "MetricName": "CPU_Utilization" + }, + { + "BriefDescription": "Average Frequency Utilization relative nominal frequency", + "MetricExpr": "CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC", + "MetricGroup": "Power", + "MetricName": "Turbo_Utilization" + }, + { + "BriefDescription": "Fraction of cycles where both hardware threads were active", + "MetricExpr": "1 - CPU_CLK_THREAD_UNHALTED.ONE_THREAD_ACTIVE / ( CPU_CLK_THREAD_UNHALTED.REF_XCLK_ANY / 2 ) if #SMT_on else 0", + "MetricGroup": "SMT;Summary", + "MetricName": "SMT_2T_Utilization" + }, + { + "BriefDescription": "Fraction of cycles spent in Kernel mode", + "MetricExpr": "CPU_CLK_UNHALTED.REF_TSC:u / CPU_CLK_UNHALTED.REF_TSC", + "MetricGroup": "Summary", + "MetricName": "Kernel_Utilization" + }, + { + "BriefDescription": "C3 residency percent per core", + "MetricExpr": "(cstate_core@c3\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C3_Core_Residency" + }, + { + "BriefDescription": "C6 residency percent per core", + "MetricExpr": "(cstate_core@c6\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C6_Core_Residency" + }, + { + "BriefDescription": "C7 residency percent per core", + "MetricExpr": "(cstate_core@c7\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C7_Core_Residency" + }, + { + "BriefDescription": "C2 residency percent per package", + "MetricExpr": "(cstate_pkg@c2\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C2_Pkg_Residency" + }, + { + "BriefDescription": "C3 residency percent per package", + "MetricExpr": "(cstate_pkg@c3\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C3_Pkg_Residency" + }, + { + "BriefDescription": "C6 residency percent per package", + "MetricExpr": "(cstate_pkg@c6\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C6_Pkg_Residency" + }, + { + "BriefDescription": "C7 residency percent per package", + "MetricExpr": "(cstate_pkg@c7\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C7_Pkg_Residency" + } +] diff --git a/tools/perf/pmu-events/arch/x86/haswellx/hsx-metrics.json b/tools/perf/pmu-events/arch/x86/haswellx/hsx-metrics.json new file mode 100644 index 000000000000..5ab5c78fe580 --- /dev/null +++ b/tools/perf/pmu-events/arch/x86/haswellx/hsx-metrics.json @@ -0,0 +1,158 @@ +[ + { + "BriefDescription": "Instructions Per Cycle (per logical thread)", + "MetricExpr": "INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD", + "MetricGroup": "TopDownL1", + "MetricName": "IPC" + }, + { + "BriefDescription": "Uops 
Per Instruction", + "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / INST_RETIRED.ANY", + "MetricGroup": "Pipeline", + "MetricName": "UPI" + }, + { + "BriefDescription": "Rough Estimation of fraction of fetched lines bytes that were likely consumed by program instructions", + "MetricExpr": "min( 1 , IDQ.MITE_UOPS / ( (UOPS_RETIRED.RETIRE_SLOTS / INST_RETIRED.ANY) * 16 * ( ICACHE.HIT + ICACHE.MISSES ) / 4.0 ) )", + "MetricGroup": "Frontend", + "MetricName": "IFetch_Line_Utilization" + }, + { + "BriefDescription": "Fraction of Uops delivered by the DSB (aka Decoded Icache; or Uop Cache)", + "MetricExpr": "IDQ.DSB_UOPS / ( IDQ.DSB_UOPS + LSD.UOPS + IDQ.MITE_UOPS + IDQ.MS_UOPS )", + "MetricGroup": "DSB; Frontend_Bandwidth", + "MetricName": "DSB_Coverage" + }, + { + "BriefDescription": "Cycles Per Instruction (threaded)", + "MetricExpr": "1 / (INST_RETIRED.ANY / cycles)", + "MetricGroup": "Pipeline;Summary", + "MetricName": "CPI" + }, + { + "BriefDescription": "Per-thread actual clocks when the logical processor is active. This is called 'Clockticks' in VTune.", + "MetricExpr": "CPU_CLK_UNHALTED.THREAD", + "MetricGroup": "Summary", + "MetricName": "CLKS" + }, + { + "BriefDescription": "Total issue-pipeline slots", + "MetricExpr": "4*(( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else cycles)", + "MetricGroup": "TopDownL1", + "MetricName": "SLOTS" + }, + { + "BriefDescription": "Total number of retired Instructions", + "MetricExpr": "INST_RETIRED.ANY", + "MetricGroup": "Summary", + "MetricName": "Instructions" + }, + { + "BriefDescription": "Instructions Per Cycle (per physical core)", + "MetricExpr": "INST_RETIRED.ANY / (( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else cycles)", + "MetricGroup": "SMT", + "MetricName": "CoreIPC" + }, + { + "BriefDescription": "Instruction-Level-Parallelism (average number of uops executed when there is at least 1 uop executed)", + "MetricExpr": "( UOPS_EXECUTED.CORE / 2 / (( cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ / 2) if #SMT_on else cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@) ) if #SMT_on else UOPS_EXECUTED.CORE / (( cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ / 2) if #SMT_on else cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@)", + "MetricGroup": "Pipeline;Ports_Utilization", + "MetricName": "ILP" + }, + { + "BriefDescription": "Average Branch Address Clear Cost (fraction of cycles)", + "MetricExpr": "2* (( RS_EVENTS.EMPTY_CYCLES - ICACHE.IFDATA_STALL - (( 14 * ITLB_MISSES.STLB_HIT + ITLB_MISSES.WALK_DURATION )) ) / RS_EVENTS.EMPTY_END)", + "MetricGroup": "Unknown_Branches", + "MetricName": "BAClear_Cost" + }, + { + "BriefDescription": "Core actual clocks when any thread is active on the physical core", + "MetricExpr": "( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else CPU_CLK_UNHALTED.THREAD", + "MetricGroup": "SMT", + "MetricName": "CORE_CLKS" + }, + { + "BriefDescription": "Actual Average Latency for L1 data-cache miss demand loads", + "MetricExpr": "L1D_PEND_MISS.PENDING / ( MEM_LOAD_UOPS_RETIRED.L1_MISS + mem_load_uops_retired.hit_lfb )", + "MetricGroup": "Memory_Bound;Memory_Lat", + "MetricName": "Load_Miss_Real_Latency" + }, + { + "BriefDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least 1 such miss)", + "MetricExpr": "L1D_PEND_MISS.PENDING / (( cpu@l1d_pend_miss.pending_cycles\\,any\\=1@ / 2) if #SMT_on else L1D_PEND_MISS.PENDING_CYCLES)", + "MetricGroup": "Memory_Bound;Memory_BW", + "MetricName": "MLP" + }, + { + "BriefDescription": "Utilization of the core's Page Walker(s) serving STLB misses triggered by instruction/Load/Store 
accesses", + "MetricExpr": "( ITLB_MISSES.WALK_DURATION + DTLB_LOAD_MISSES.WALK_DURATION + DTLB_STORE_MISSES.WALK_DURATION ) / (( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else cycles)", + "MetricGroup": "TLB", + "MetricName": "Page_Walks_Utilization" + }, + { + "BriefDescription": "Average CPU Utilization", + "MetricExpr": "CPU_CLK_UNHALTED.REF_TSC / msr@tsc@", + "MetricGroup": "Summary", + "MetricName": "CPU_Utilization" + }, + { + "BriefDescription": "Average Frequency Utilization relative nominal frequency", + "MetricExpr": "CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC", + "MetricGroup": "Power", + "MetricName": "Turbo_Utilization" + }, + { + "BriefDescription": "Fraction of cycles where both hardware threads were active", + "MetricExpr": "1 - CPU_CLK_THREAD_UNHALTED.ONE_THREAD_ACTIVE / ( CPU_CLK_THREAD_UNHALTED.REF_XCLK_ANY / 2 ) if #SMT_on else 0", + "MetricGroup": "SMT;Summary", + "MetricName": "SMT_2T_Utilization" + }, + { + "BriefDescription": "Fraction of cycles spent in Kernel mode", + "MetricExpr": "CPU_CLK_UNHALTED.REF_TSC:u / CPU_CLK_UNHALTED.REF_TSC", + "MetricGroup": "Summary", + "MetricName": "Kernel_Utilization" + }, + { + "BriefDescription": "C3 residency percent per core", + "MetricExpr": "(cstate_core@c3\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C3_Core_Residency" + }, + { + "BriefDescription": "C6 residency percent per core", + "MetricExpr": "(cstate_core@c6\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C6_Core_Residency" + }, + { + "BriefDescription": "C7 residency percent per core", + "MetricExpr": "(cstate_core@c7\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C7_Core_Residency" + }, + { + "BriefDescription": "C2 residency percent per package", + "MetricExpr": "(cstate_pkg@c2\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C2_Pkg_Residency" + }, + { + "BriefDescription": "C3 residency percent per package", + "MetricExpr": "(cstate_pkg@c3\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C3_Pkg_Residency" + }, + { + "BriefDescription": "C6 residency percent per package", + "MetricExpr": "(cstate_pkg@c6\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C6_Pkg_Residency" + }, + { + "BriefDescription": "C7 residency percent per package", + "MetricExpr": "(cstate_pkg@c7\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C7_Pkg_Residency" + } +] diff --git a/tools/perf/pmu-events/arch/x86/ivybridge/ivb-metrics.json b/tools/perf/pmu-events/arch/x86/ivybridge/ivb-metrics.json new file mode 100644 index 000000000000..7c2679514efb --- /dev/null +++ b/tools/perf/pmu-events/arch/x86/ivybridge/ivb-metrics.json @@ -0,0 +1,164 @@ +[ + { + "BriefDescription": "Instructions Per Cycle (per logical thread)", + "MetricExpr": "INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD", + "MetricGroup": "TopDownL1", + "MetricName": "IPC" + }, + { + "BriefDescription": "Uops Per Instruction", + "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / INST_RETIRED.ANY", + "MetricGroup": "Pipeline", + "MetricName": "UPI" + }, + { + "BriefDescription": "Rough Estimation of fraction of fetched lines bytes that were likely consumed by program instructions", + "MetricExpr": "min( 1 , UOPS_ISSUED.ANY / ( (UOPS_RETIRED.RETIRE_SLOTS / INST_RETIRED.ANY) * 32 * ( ICACHE.HIT + ICACHE.MISSES ) / 4) )", + "MetricGroup": "Frontend", + "MetricName": "IFetch_Line_Utilization" + }, + { + "BriefDescription": "Fraction of Uops 
delivered by the DSB (aka Decoded Icache; or Uop Cache)", + "MetricExpr": "IDQ.DSB_UOPS / ( IDQ.DSB_UOPS + LSD.UOPS + IDQ.MITE_UOPS + IDQ.MS_UOPS )", + "MetricGroup": "DSB; Frontend_Bandwidth", + "MetricName": "DSB_Coverage" + }, + { + "BriefDescription": "Cycles Per Instruction (threaded)", + "MetricExpr": "1 / (INST_RETIRED.ANY / cycles)", + "MetricGroup": "Pipeline;Summary", + "MetricName": "CPI" + }, + { + "BriefDescription": "Per-thread actual clocks when the logical processor is active. This is called 'Clockticks' in VTune.", + "MetricExpr": "CPU_CLK_UNHALTED.THREAD", + "MetricGroup": "Summary", + "MetricName": "CLKS" + }, + { + "BriefDescription": "Total issue-pipeline slots", + "MetricExpr": "4*(( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else cycles)", + "MetricGroup": "TopDownL1", + "MetricName": "SLOTS" + }, + { + "BriefDescription": "Total number of retired Instructions", + "MetricExpr": "INST_RETIRED.ANY", + "MetricGroup": "Summary", + "MetricName": "Instructions" + }, + { + "BriefDescription": "Instructions Per Cycle (per physical core)", + "MetricExpr": "INST_RETIRED.ANY / (( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else cycles)", + "MetricGroup": "SMT", + "MetricName": "CoreIPC" + }, + { + "BriefDescription": "Instruction-Level-Parallelism (average number of uops executed when there is at least 1 uop executed)", + "MetricExpr": "UOPS_EXECUTED.THREAD / (( cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ / 2) if #SMT_on else UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC)", + "MetricGroup": "Pipeline;Ports_Utilization", + "MetricName": "ILP" + }, + { + "BriefDescription": "Average Branch Address Clear Cost (fraction of cycles)", + "MetricExpr": "2* (( RS_EVENTS.EMPTY_CYCLES - ICACHE.IFETCH_STALL ) / RS_EVENTS.EMPTY_END)", + "MetricGroup": "Unknown_Branches", + "MetricName": "BAClear_Cost" + }, + { + "BriefDescription": "Core actual clocks when any thread is active on the physical core", + "MetricExpr": "( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else CPU_CLK_UNHALTED.THREAD", + "MetricGroup": "SMT", + "MetricName": "CORE_CLKS" + }, + { + "BriefDescription": "Actual Average Latency for L1 data-cache miss demand loads", + "MetricExpr": "L1D_PEND_MISS.PENDING / ( MEM_LOAD_UOPS_RETIRED.L1_MISS + mem_load_uops_retired.hit_lfb )", + "MetricGroup": "Memory_Bound;Memory_Lat", + "MetricName": "Load_Miss_Real_Latency" + }, + { + "BriefDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least 1 such miss)", + "MetricExpr": "L1D_PEND_MISS.PENDING / (( cpu@l1d_pend_miss.pending_cycles\\,any\\=1@ / 2) if #SMT_on else L1D_PEND_MISS.PENDING_CYCLES)", + "MetricGroup": "Memory_Bound;Memory_BW", + "MetricName": "MLP" + }, + { + "BriefDescription": "Utilization of the core's Page Walker(s) serving STLB misses triggered by instruction/Load/Store accesses", + "MetricExpr": "( ITLB_MISSES.WALK_DURATION + DTLB_LOAD_MISSES.WALK_DURATION + DTLB_STORE_MISSES.WALK_DURATION ) / (( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else cycles)", + "MetricGroup": "TLB", + "MetricName": "Page_Walks_Utilization" + }, + { + "BriefDescription": "Average CPU Utilization", + "MetricExpr": "CPU_CLK_UNHALTED.REF_TSC / msr@tsc@", + "MetricGroup": "Summary", + "MetricName": "CPU_Utilization" + }, + { + "BriefDescription": "Giga Floating Point Operations Per Second", + "MetricExpr": "(( 1*( FP_COMP_OPS_EXE.SSE_SCALAR_SINGLE + FP_COMP_OPS_EXE.SSE_SCALAR_DOUBLE ) + 2* FP_COMP_OPS_EXE.SSE_PACKED_DOUBLE + 4*( FP_COMP_OPS_EXE.SSE_PACKED_SINGLE + SIMD_FP_256.PACKED_DOUBLE ) + 8* 
SIMD_FP_256.PACKED_SINGLE )) / 1000000000 / duration_time", + "MetricGroup": "FLOPS;Summary", + "MetricName": "GFLOPs" + }, + { + "BriefDescription": "Average Frequency Utilization relative nominal frequency", + "MetricExpr": "CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC", + "MetricGroup": "Power", + "MetricName": "Turbo_Utilization" + }, + { + "BriefDescription": "Fraction of cycles where both hardware threads were active", + "MetricExpr": "1 - CPU_CLK_THREAD_UNHALTED.ONE_THREAD_ACTIVE / ( CPU_CLK_THREAD_UNHALTED.REF_XCLK_ANY / 2 ) if #SMT_on else 0", + "MetricGroup": "SMT;Summary", + "MetricName": "SMT_2T_Utilization" + }, + { + "BriefDescription": "Fraction of cycles spent in Kernel mode", + "MetricExpr": "CPU_CLK_UNHALTED.REF_TSC:u / CPU_CLK_UNHALTED.REF_TSC", + "MetricGroup": "Summary", + "MetricName": "Kernel_Utilization" + }, + { + "BriefDescription": "C3 residency percent per core", + "MetricExpr": "(cstate_core@c3\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C3_Core_Residency" + }, + { + "BriefDescription": "C6 residency percent per core", + "MetricExpr": "(cstate_core@c6\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C6_Core_Residency" + }, + { + "BriefDescription": "C7 residency percent per core", + "MetricExpr": "(cstate_core@c7\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C7_Core_Residency" + }, + { + "BriefDescription": "C2 residency percent per package", + "MetricExpr": "(cstate_pkg@c2\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C2_Pkg_Residency" + }, + { + "BriefDescription": "C3 residency percent per package", + "MetricExpr": "(cstate_pkg@c3\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C3_Pkg_Residency" + }, + { + "BriefDescription": "C6 residency percent per package", + "MetricExpr": "(cstate_pkg@c6\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C6_Pkg_Residency" + }, + { + "BriefDescription": "C7 residency percent per package", + "MetricExpr": "(cstate_pkg@c7\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C7_Pkg_Residency" + } +] diff --git a/tools/perf/pmu-events/arch/x86/ivytown/ivt-metrics.json b/tools/perf/pmu-events/arch/x86/ivytown/ivt-metrics.json new file mode 100644 index 000000000000..7c2679514efb --- /dev/null +++ b/tools/perf/pmu-events/arch/x86/ivytown/ivt-metrics.json @@ -0,0 +1,164 @@ +[ + { + "BriefDescription": "Instructions Per Cycle (per logical thread)", + "MetricExpr": "INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD", + "MetricGroup": "TopDownL1", + "MetricName": "IPC" + }, + { + "BriefDescription": "Uops Per Instruction", + "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / INST_RETIRED.ANY", + "MetricGroup": "Pipeline", + "MetricName": "UPI" + }, + { + "BriefDescription": "Rough Estimation of fraction of fetched lines bytes that were likely consumed by program instructions", + "MetricExpr": "min( 1 , UOPS_ISSUED.ANY / ( (UOPS_RETIRED.RETIRE_SLOTS / INST_RETIRED.ANY) * 32 * ( ICACHE.HIT + ICACHE.MISSES ) / 4) )", + "MetricGroup": "Frontend", + "MetricName": "IFetch_Line_Utilization" + }, + { + "BriefDescription": "Fraction of Uops delivered by the DSB (aka Decoded Icache; or Uop Cache)", + "MetricExpr": "IDQ.DSB_UOPS / ( IDQ.DSB_UOPS + LSD.UOPS + IDQ.MITE_UOPS + IDQ.MS_UOPS )", + "MetricGroup": "DSB; Frontend_Bandwidth", + "MetricName": "DSB_Coverage" + }, + { + "BriefDescription": "Cycles Per Instruction (threaded)", + "MetricExpr": "1 / 
(INST_RETIRED.ANY / cycles)", + "MetricGroup": "Pipeline;Summary", + "MetricName": "CPI" + }, + { + "BriefDescription": "Per-thread actual clocks when the logical processor is active. This is called 'Clockticks' in VTune.", + "MetricExpr": "CPU_CLK_UNHALTED.THREAD", + "MetricGroup": "Summary", + "MetricName": "CLKS" + }, + { + "BriefDescription": "Total issue-pipeline slots", + "MetricExpr": "4*(( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else cycles)", + "MetricGroup": "TopDownL1", + "MetricName": "SLOTS" + }, + { + "BriefDescription": "Total number of retired Instructions", + "MetricExpr": "INST_RETIRED.ANY", + "MetricGroup": "Summary", + "MetricName": "Instructions" + }, + { + "BriefDescription": "Instructions Per Cycle (per physical core)", + "MetricExpr": "INST_RETIRED.ANY / (( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else cycles)", + "MetricGroup": "SMT", + "MetricName": "CoreIPC" + }, + { + "BriefDescription": "Instruction-Level-Parallelism (average number of uops executed when there is at least 1 uop executed)", + "MetricExpr": "UOPS_EXECUTED.THREAD / (( cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ / 2) if #SMT_on else UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC)", + "MetricGroup": "Pipeline;Ports_Utilization", + "MetricName": "ILP" + }, + { + "BriefDescription": "Average Branch Address Clear Cost (fraction of cycles)", + "MetricExpr": "2* (( RS_EVENTS.EMPTY_CYCLES - ICACHE.IFETCH_STALL ) / RS_EVENTS.EMPTY_END)", + "MetricGroup": "Unknown_Branches", + "MetricName": "BAClear_Cost" + }, + { + "BriefDescription": "Core actual clocks when any thread is active on the physical core", + "MetricExpr": "( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else CPU_CLK_UNHALTED.THREAD", + "MetricGroup": "SMT", + "MetricName": "CORE_CLKS" + }, + { + "BriefDescription": "Actual Average Latency for L1 data-cache miss demand loads", + "MetricExpr": "L1D_PEND_MISS.PENDING / ( MEM_LOAD_UOPS_RETIRED.L1_MISS + mem_load_uops_retired.hit_lfb )", + "MetricGroup": "Memory_Bound;Memory_Lat", + "MetricName": "Load_Miss_Real_Latency" + }, + { + "BriefDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least 1 such miss)", + "MetricExpr": "L1D_PEND_MISS.PENDING / (( cpu@l1d_pend_miss.pending_cycles\\,any\\=1@ / 2) if #SMT_on else L1D_PEND_MISS.PENDING_CYCLES)", + "MetricGroup": "Memory_Bound;Memory_BW", + "MetricName": "MLP" + }, + { + "BriefDescription": "Utilization of the core's Page Walker(s) serving STLB misses triggered by instruction/Load/Store accesses", + "MetricExpr": "( ITLB_MISSES.WALK_DURATION + DTLB_LOAD_MISSES.WALK_DURATION + DTLB_STORE_MISSES.WALK_DURATION ) / (( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else cycles)", + "MetricGroup": "TLB", + "MetricName": "Page_Walks_Utilization" + }, + { + "BriefDescription": "Average CPU Utilization", + "MetricExpr": "CPU_CLK_UNHALTED.REF_TSC / msr@tsc@", + "MetricGroup": "Summary", + "MetricName": "CPU_Utilization" + }, + { + "BriefDescription": "Giga Floating Point Operations Per Second", + "MetricExpr": "(( 1*( FP_COMP_OPS_EXE.SSE_SCALAR_SINGLE + FP_COMP_OPS_EXE.SSE_SCALAR_DOUBLE ) + 2* FP_COMP_OPS_EXE.SSE_PACKED_DOUBLE + 4*( FP_COMP_OPS_EXE.SSE_PACKED_SINGLE + SIMD_FP_256.PACKED_DOUBLE ) + 8* SIMD_FP_256.PACKED_SINGLE )) / 1000000000 / duration_time", + "MetricGroup": "FLOPS;Summary", + "MetricName": "GFLOPs" + }, + { + "BriefDescription": "Average Frequency Utilization relative nominal frequency", + "MetricExpr": "CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC", + "MetricGroup": "Power", + "MetricName": 
"Turbo_Utilization" + }, + { + "BriefDescription": "Fraction of cycles where both hardware threads were active", + "MetricExpr": "1 - CPU_CLK_THREAD_UNHALTED.ONE_THREAD_ACTIVE / ( CPU_CLK_THREAD_UNHALTED.REF_XCLK_ANY / 2 ) if #SMT_on else 0", + "MetricGroup": "SMT;Summary", + "MetricName": "SMT_2T_Utilization" + }, + { + "BriefDescription": "Fraction of cycles spent in Kernel mode", + "MetricExpr": "CPU_CLK_UNHALTED.REF_TSC:u / CPU_CLK_UNHALTED.REF_TSC", + "MetricGroup": "Summary", + "MetricName": "Kernel_Utilization" + }, + { + "BriefDescription": "C3 residency percent per core", + "MetricExpr": "(cstate_core@c3\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C3_Core_Residency" + }, + { + "BriefDescription": "C6 residency percent per core", + "MetricExpr": "(cstate_core@c6\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C6_Core_Residency" + }, + { + "BriefDescription": "C7 residency percent per core", + "MetricExpr": "(cstate_core@c7\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C7_Core_Residency" + }, + { + "BriefDescription": "C2 residency percent per package", + "MetricExpr": "(cstate_pkg@c2\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C2_Pkg_Residency" + }, + { + "BriefDescription": "C3 residency percent per package", + "MetricExpr": "(cstate_pkg@c3\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C3_Pkg_Residency" + }, + { + "BriefDescription": "C6 residency percent per package", + "MetricExpr": "(cstate_pkg@c6\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C6_Pkg_Residency" + }, + { + "BriefDescription": "C7 residency percent per package", + "MetricExpr": "(cstate_pkg@c7\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C7_Pkg_Residency" + } +] diff --git a/tools/perf/pmu-events/arch/x86/jaketown/jkt-metrics.json b/tools/perf/pmu-events/arch/x86/jaketown/jkt-metrics.json new file mode 100644 index 000000000000..fd7d7c438226 --- /dev/null +++ b/tools/perf/pmu-events/arch/x86/jaketown/jkt-metrics.json @@ -0,0 +1,140 @@ +[ + { + "BriefDescription": "Instructions Per Cycle (per logical thread)", + "MetricExpr": "INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD", + "MetricGroup": "TopDownL1", + "MetricName": "IPC" + }, + { + "BriefDescription": "Uops Per Instruction", + "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / INST_RETIRED.ANY", + "MetricGroup": "Pipeline", + "MetricName": "UPI" + }, + { + "BriefDescription": "Rough Estimation of fraction of fetched lines bytes that were likely consumed by program instructions", + "MetricExpr": "min( 1 , UOPS_ISSUED.ANY / ( (UOPS_RETIRED.RETIRE_SLOTS / INST_RETIRED.ANY) * 32 * ( ICACHE.HIT + ICACHE.MISSES ) / 4) )", + "MetricGroup": "Frontend", + "MetricName": "IFetch_Line_Utilization" + }, + { + "BriefDescription": "Fraction of Uops delivered by the DSB (aka Decoded Icache; or Uop Cache)", + "MetricExpr": "IDQ.DSB_UOPS / ( IDQ.DSB_UOPS + LSD.UOPS + IDQ.MITE_UOPS + IDQ.MS_UOPS )", + "MetricGroup": "DSB; Frontend_Bandwidth", + "MetricName": "DSB_Coverage" + }, + { + "BriefDescription": "Cycles Per Instruction (threaded)", + "MetricExpr": "1 / (INST_RETIRED.ANY / cycles)", + "MetricGroup": "Pipeline;Summary", + "MetricName": "CPI" + }, + { + "BriefDescription": "Per-thread actual clocks when the logical processor is active. 
This is called 'Clockticks' in VTune.", + "MetricExpr": "CPU_CLK_UNHALTED.THREAD", + "MetricGroup": "Summary", + "MetricName": "CLKS" + }, + { + "BriefDescription": "Total issue-pipeline slots", + "MetricExpr": "4*(( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else cycles)", + "MetricGroup": "TopDownL1", + "MetricName": "SLOTS" + }, + { + "BriefDescription": "Total number of retired Instructions", + "MetricExpr": "INST_RETIRED.ANY", + "MetricGroup": "Summary", + "MetricName": "Instructions" + }, + { + "BriefDescription": "Instructions Per Cycle (per physical core)", + "MetricExpr": "INST_RETIRED.ANY / (( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else cycles)", + "MetricGroup": "SMT", + "MetricName": "CoreIPC" + }, + { + "BriefDescription": "Instruction-Level-Parallelism (average number of uops executed when there is at least 1 uop executed)", + "MetricExpr": "UOPS_DISPATCHED.THREAD / (( cpu@UOPS_DISPATCHED.CORE\\,cmask\\=1@ / 2) if #SMT_on else cpu@UOPS_DISPATCHED.CORE\\,cmask\\=1@)", + "MetricGroup": "Pipeline;Ports_Utilization", + "MetricName": "ILP" + }, + { + "BriefDescription": "Core actual clocks when any thread is active on the physical core", + "MetricExpr": "( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else CPU_CLK_UNHALTED.THREAD", + "MetricGroup": "SMT", + "MetricName": "CORE_CLKS" + }, + { + "BriefDescription": "Average CPU Utilization", + "MetricExpr": "CPU_CLK_UNHALTED.REF_TSC / msr@tsc@", + "MetricGroup": "Summary", + "MetricName": "CPU_Utilization" + }, + { + "BriefDescription": "Giga Floating Point Operations Per Second", + "MetricExpr": "(( 1*( FP_COMP_OPS_EXE.SSE_SCALAR_SINGLE + FP_COMP_OPS_EXE.SSE_SCALAR_DOUBLE ) + 2* FP_COMP_OPS_EXE.SSE_PACKED_DOUBLE + 4*( FP_COMP_OPS_EXE.SSE_PACKED_SINGLE + SIMD_FP_256.PACKED_DOUBLE ) + 8* SIMD_FP_256.PACKED_SINGLE )) / 1000000000 / duration_time", + "MetricGroup": "FLOPS;Summary", + "MetricName": "GFLOPs" + }, + { + "BriefDescription": "Average Frequency Utilization relative nominal frequency", + "MetricExpr": "CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC", + "MetricGroup": "Power", + "MetricName": "Turbo_Utilization" + }, + { + "BriefDescription": "Fraction of cycles where both hardware threads were active", + "MetricExpr": "1 - CPU_CLK_THREAD_UNHALTED.ONE_THREAD_ACTIVE / ( CPU_CLK_THREAD_UNHALTED.REF_XCLK_ANY / 2 ) if #SMT_on else 0", + "MetricGroup": "SMT;Summary", + "MetricName": "SMT_2T_Utilization" + }, + { + "BriefDescription": "Fraction of cycles spent in Kernel mode", + "MetricExpr": "CPU_CLK_UNHALTED.REF_TSC:u / CPU_CLK_UNHALTED.REF_TSC", + "MetricGroup": "Summary", + "MetricName": "Kernel_Utilization" + }, + { + "BriefDescription": "C3 residency percent per core", + "MetricExpr": "(cstate_core@c3\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C3_Core_Residency" + }, + { + "BriefDescription": "C6 residency percent per core", + "MetricExpr": "(cstate_core@c6\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C6_Core_Residency" + }, + { + "BriefDescription": "C7 residency percent per core", + "MetricExpr": "(cstate_core@c7\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C7_Core_Residency" + }, + { + "BriefDescription": "C2 residency percent per package", + "MetricExpr": "(cstate_pkg@c2\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C2_Pkg_Residency" + }, + { + "BriefDescription": "C3 residency percent per package", + "MetricExpr": "(cstate_pkg@c3\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", 
+ "MetricName": "C3_Pkg_Residency" + }, + { + "BriefDescription": "C6 residency percent per package", + "MetricExpr": "(cstate_pkg@c6\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C6_Pkg_Residency" + }, + { + "BriefDescription": "C7 residency percent per package", + "MetricExpr": "(cstate_pkg@c7\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C7_Pkg_Residency" + } +] diff --git a/tools/perf/pmu-events/arch/x86/mapfile.csv b/tools/perf/pmu-events/arch/x86/mapfile.csv index 4ea068366c3e..fe1a2c47cabf 100644 --- a/tools/perf/pmu-events/arch/x86/mapfile.csv +++ b/tools/perf/pmu-events/arch/x86/mapfile.csv @@ -9,6 +9,7 @@ GenuineIntel-6-27,v4,bonnell,core GenuineIntel-6-36,v4,bonnell,core GenuineIntel-6-35,v4,bonnell,core GenuineIntel-6-5C,v8,goldmont,core +GenuineIntel-6-7A,v1,goldmontplus,core GenuineIntel-6-3C,v24,haswell,core GenuineIntel-6-45,v24,haswell,core GenuineIntel-6-46,v24,haswell,core diff --git a/tools/perf/pmu-events/arch/x86/sandybridge/snb-metrics.json b/tools/perf/pmu-events/arch/x86/sandybridge/snb-metrics.json new file mode 100644 index 000000000000..fd7d7c438226 --- /dev/null +++ b/tools/perf/pmu-events/arch/x86/sandybridge/snb-metrics.json @@ -0,0 +1,140 @@ +[ + { + "BriefDescription": "Instructions Per Cycle (per logical thread)", + "MetricExpr": "INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD", + "MetricGroup": "TopDownL1", + "MetricName": "IPC" + }, + { + "BriefDescription": "Uops Per Instruction", + "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / INST_RETIRED.ANY", + "MetricGroup": "Pipeline", + "MetricName": "UPI" + }, + { + "BriefDescription": "Rough Estimation of fraction of fetched lines bytes that were likely consumed by program instructions", + "MetricExpr": "min( 1 , UOPS_ISSUED.ANY / ( (UOPS_RETIRED.RETIRE_SLOTS / INST_RETIRED.ANY) * 32 * ( ICACHE.HIT + ICACHE.MISSES ) / 4) )", + "MetricGroup": "Frontend", + "MetricName": "IFetch_Line_Utilization" + }, + { + "BriefDescription": "Fraction of Uops delivered by the DSB (aka Decoded Icache; or Uop Cache)", + "MetricExpr": "IDQ.DSB_UOPS / ( IDQ.DSB_UOPS + LSD.UOPS + IDQ.MITE_UOPS + IDQ.MS_UOPS )", + "MetricGroup": "DSB; Frontend_Bandwidth", + "MetricName": "DSB_Coverage" + }, + { + "BriefDescription": "Cycles Per Instruction (threaded)", + "MetricExpr": "1 / (INST_RETIRED.ANY / cycles)", + "MetricGroup": "Pipeline;Summary", + "MetricName": "CPI" + }, + { + "BriefDescription": "Per-thread actual clocks when the logical processor is active. 
This is called 'Clockticks' in VTune.", + "MetricExpr": "CPU_CLK_UNHALTED.THREAD", + "MetricGroup": "Summary", + "MetricName": "CLKS" + }, + { + "BriefDescription": "Total issue-pipeline slots", + "MetricExpr": "4*(( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else cycles)", + "MetricGroup": "TopDownL1", + "MetricName": "SLOTS" + }, + { + "BriefDescription": "Total number of retired Instructions", + "MetricExpr": "INST_RETIRED.ANY", + "MetricGroup": "Summary", + "MetricName": "Instructions" + }, + { + "BriefDescription": "Instructions Per Cycle (per physical core)", + "MetricExpr": "INST_RETIRED.ANY / (( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else cycles)", + "MetricGroup": "SMT", + "MetricName": "CoreIPC" + }, + { + "BriefDescription": "Instruction-Level-Parallelism (average number of uops executed when there is at least 1 uop executed)", + "MetricExpr": "UOPS_DISPATCHED.THREAD / (( cpu@UOPS_DISPATCHED.CORE\\,cmask\\=1@ / 2) if #SMT_on else cpu@UOPS_DISPATCHED.CORE\\,cmask\\=1@)", + "MetricGroup": "Pipeline;Ports_Utilization", + "MetricName": "ILP" + }, + { + "BriefDescription": "Core actual clocks when any thread is active on the physical core", + "MetricExpr": "( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else CPU_CLK_UNHALTED.THREAD", + "MetricGroup": "SMT", + "MetricName": "CORE_CLKS" + }, + { + "BriefDescription": "Average CPU Utilization", + "MetricExpr": "CPU_CLK_UNHALTED.REF_TSC / msr@tsc@", + "MetricGroup": "Summary", + "MetricName": "CPU_Utilization" + }, + { + "BriefDescription": "Giga Floating Point Operations Per Second", + "MetricExpr": "(( 1*( FP_COMP_OPS_EXE.SSE_SCALAR_SINGLE + FP_COMP_OPS_EXE.SSE_SCALAR_DOUBLE ) + 2* FP_COMP_OPS_EXE.SSE_PACKED_DOUBLE + 4*( FP_COMP_OPS_EXE.SSE_PACKED_SINGLE + SIMD_FP_256.PACKED_DOUBLE ) + 8* SIMD_FP_256.PACKED_SINGLE )) / 1000000000 / duration_time", + "MetricGroup": "FLOPS;Summary", + "MetricName": "GFLOPs" + }, + { + "BriefDescription": "Average Frequency Utilization relative nominal frequency", + "MetricExpr": "CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC", + "MetricGroup": "Power", + "MetricName": "Turbo_Utilization" + }, + { + "BriefDescription": "Fraction of cycles where both hardware threads were active", + "MetricExpr": "1 - CPU_CLK_THREAD_UNHALTED.ONE_THREAD_ACTIVE / ( CPU_CLK_THREAD_UNHALTED.REF_XCLK_ANY / 2 ) if #SMT_on else 0", + "MetricGroup": "SMT;Summary", + "MetricName": "SMT_2T_Utilization" + }, + { + "BriefDescription": "Fraction of cycles spent in Kernel mode", + "MetricExpr": "CPU_CLK_UNHALTED.REF_TSC:u / CPU_CLK_UNHALTED.REF_TSC", + "MetricGroup": "Summary", + "MetricName": "Kernel_Utilization" + }, + { + "BriefDescription": "C3 residency percent per core", + "MetricExpr": "(cstate_core@c3\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C3_Core_Residency" + }, + { + "BriefDescription": "C6 residency percent per core", + "MetricExpr": "(cstate_core@c6\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C6_Core_Residency" + }, + { + "BriefDescription": "C7 residency percent per core", + "MetricExpr": "(cstate_core@c7\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C7_Core_Residency" + }, + { + "BriefDescription": "C2 residency percent per package", + "MetricExpr": "(cstate_pkg@c2\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C2_Pkg_Residency" + }, + { + "BriefDescription": "C3 residency percent per package", + "MetricExpr": "(cstate_pkg@c3\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", 
+ "MetricName": "C3_Pkg_Residency" + }, + { + "BriefDescription": "C6 residency percent per package", + "MetricExpr": "(cstate_pkg@c6\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C6_Pkg_Residency" + }, + { + "BriefDescription": "C7 residency percent per package", + "MetricExpr": "(cstate_pkg@c7\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C7_Pkg_Residency" + } +] diff --git a/tools/perf/pmu-events/arch/x86/skylake/skl-metrics.json b/tools/perf/pmu-events/arch/x86/skylake/skl-metrics.json new file mode 100644 index 000000000000..36c903faed0b --- /dev/null +++ b/tools/perf/pmu-events/arch/x86/skylake/skl-metrics.json @@ -0,0 +1,164 @@ +[ + { + "BriefDescription": "Instructions Per Cycle (per logical thread)", + "MetricExpr": "INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD", + "MetricGroup": "TopDownL1", + "MetricName": "IPC" + }, + { + "BriefDescription": "Uops Per Instruction", + "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / INST_RETIRED.ANY", + "MetricGroup": "Pipeline", + "MetricName": "UPI" + }, + { + "BriefDescription": "Rough Estimation of fraction of fetched lines bytes that were likely consumed by program instructions", + "MetricExpr": "min( 1 , UOPS_ISSUED.ANY / ((UOPS_RETIRED.RETIRE_SLOTS / INST_RETIRED.ANY) * 64 * ( ICACHE_64B.IFTAG_HIT + ICACHE_64B.IFTAG_MISS ) / 4.1) )", + "MetricGroup": "Frontend", + "MetricName": "IFetch_Line_Utilization" + }, + { + "BriefDescription": "Fraction of Uops delivered by the DSB (aka Decoded Icache; or Uop Cache)", + "MetricExpr": "IDQ.DSB_UOPS / ( IDQ.DSB_UOPS + LSD.UOPS + IDQ.MITE_UOPS + IDQ.MS_UOPS )", + "MetricGroup": "DSB; Frontend_Bandwidth", + "MetricName": "DSB_Coverage" + }, + { + "BriefDescription": "Cycles Per Instruction (threaded)", + "MetricExpr": "1 / (INST_RETIRED.ANY / cycles)", + "MetricGroup": "Pipeline;Summary", + "MetricName": "CPI" + }, + { + "BriefDescription": "Per-thread actual clocks when the logical processor is active. 
This is called 'Clockticks' in VTune.", + "MetricExpr": "CPU_CLK_UNHALTED.THREAD", + "MetricGroup": "Summary", + "MetricName": "CLKS" + }, + { + "BriefDescription": "Total issue-pipeline slots", + "MetricExpr": "4*(( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else cycles)", + "MetricGroup": "TopDownL1", + "MetricName": "SLOTS" + }, + { + "BriefDescription": "Total number of retired Instructions", + "MetricExpr": "INST_RETIRED.ANY", + "MetricGroup": "Summary", + "MetricName": "Instructions" + }, + { + "BriefDescription": "Instructions Per Cycle (per physical core)", + "MetricExpr": "INST_RETIRED.ANY / (( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else cycles)", + "MetricGroup": "SMT", + "MetricName": "CoreIPC" + }, + { + "BriefDescription": "Instruction-Level-Parallelism (average number of uops executed when there is at least 1 uop executed)", + "MetricExpr": "UOPS_EXECUTED.THREAD / (( UOPS_EXECUTED.CORE_CYCLES_GE_1 / 2) if #SMT_on else UOPS_EXECUTED.CORE_CYCLES_GE_1)", + "MetricGroup": "Pipeline;Ports_Utilization", + "MetricName": "ILP" + }, + { + "BriefDescription": "Average Branch Address Clear Cost (fraction of cycles)", + "MetricExpr": "2* (( RS_EVENTS.EMPTY_CYCLES - ICACHE_16B.IFDATA_STALL - ICACHE_64B.IFTAG_STALL ) / RS_EVENTS.EMPTY_END)", + "MetricGroup": "Unknown_Branches", + "MetricName": "BAClear_Cost" + }, + { + "BriefDescription": "Core actual clocks when any thread is active on the physical core", + "MetricExpr": "( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else CPU_CLK_UNHALTED.THREAD", + "MetricGroup": "SMT", + "MetricName": "CORE_CLKS" + }, + { + "BriefDescription": "Actual Average Latency for L1 data-cache miss demand loads", + "MetricExpr": "L1D_PEND_MISS.PENDING / ( MEM_LOAD_RETIRED.L1_MISS_PS + MEM_LOAD_RETIRED.FB_HIT_PS )", + "MetricGroup": "Memory_Bound;Memory_Lat", + "MetricName": "Load_Miss_Real_Latency" + }, + { + "BriefDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least 1 such miss)", + "MetricExpr": "L1D_PEND_MISS.PENDING / (( L1D_PEND_MISS.PENDING_CYCLES_ANY / 2) if #SMT_on else L1D_PEND_MISS.PENDING_CYCLES)", + "MetricGroup": "Memory_Bound;Memory_BW", + "MetricName": "MLP" + }, + { + "BriefDescription": "Utilization of the core's Page Walker(s) serving STLB misses triggered by instruction/Load/Store accesses", + "MetricExpr": "( ITLB_MISSES.WALK_PENDING + DTLB_LOAD_MISSES.WALK_PENDING + DTLB_STORE_MISSES.WALK_PENDING + EPT.WALK_PENDING ) / ( 2 * (( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else cycles) )", + "MetricGroup": "TLB", + "MetricName": "Page_Walks_Utilization" + }, + { + "BriefDescription": "Average CPU Utilization", + "MetricExpr": "CPU_CLK_UNHALTED.REF_TSC / msr@tsc@", + "MetricGroup": "Summary", + "MetricName": "CPU_Utilization" + }, + { + "BriefDescription": "Giga Floating Point Operations Per Second", + "MetricExpr": "(( 1*( FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE ) + 2* FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4*( FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE ) + 8* FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE )) / 1000000000 / duration_time", + "MetricGroup": "FLOPS;Summary", + "MetricName": "GFLOPs" + }, + { + "BriefDescription": "Average Frequency Utilization relative nominal frequency", + "MetricExpr": "CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC", + "MetricGroup": "Power", + "MetricName": "Turbo_Utilization" + }, + { + "BriefDescription": "Fraction of cycles where both hardware threads were active", + 
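
A reading aid for the MetricExpr strings in these metric files (an editorial sketch, not part of the patch): the recurring `( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else cycles` idiom picks the per-physical-core clock when SMT is active, approximated as half of the any-thread unhalted count, and falls back to plain per-thread cycles otherwise; SLOTS is then four such core clocks, since these are 4-wide issue machines. A minimal C illustration with invented names and example numbers:

```c
#include <stdio.h>

/* Hypothetical helper mirroring "(THREAD_ANY / 2) if #SMT_on else cycles" */
static double core_clks(double clk_unhalted_thread_any, double cycles, int smt_on)
{
	return smt_on ? clk_unhalted_thread_any / 2.0 : cycles;
}

int main(void)
{
	double thread_any = 2e6, cycles = 9e5;	/* made-up sample counts */

	/* SLOTS = 4 * CORE_CLKS (4 issue slots per core cycle) */
	printf("SLOTS, SMT on:  %.0f\n", 4.0 * core_clks(thread_any, cycles, 1)); /* 4000000 */
	printf("SLOTS, SMT off: %.0f\n", 4.0 * core_clks(thread_any, cycles, 0)); /* 3600000 */
	return 0;
}
```
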
"MetricExpr": "1 - CPU_CLK_THREAD_UNHALTED.ONE_THREAD_ACTIVE / ( CPU_CLK_THREAD_UNHALTED.REF_XCLK_ANY / 2 ) if #SMT_on else 0", + "MetricGroup": "SMT;Summary", + "MetricName": "SMT_2T_Utilization" + }, + { + "BriefDescription": "Fraction of cycles spent in Kernel mode", + "MetricExpr": "CPU_CLK_UNHALTED.REF_TSC:u / CPU_CLK_UNHALTED.REF_TSC", + "MetricGroup": "Summary", + "MetricName": "Kernel_Utilization" + }, + { + "BriefDescription": "C3 residency percent per core", + "MetricExpr": "(cstate_core@c3\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C3_Core_Residency" + }, + { + "BriefDescription": "C6 residency percent per core", + "MetricExpr": "(cstate_core@c6\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C6_Core_Residency" + }, + { + "BriefDescription": "C7 residency percent per core", + "MetricExpr": "(cstate_core@c7\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C7_Core_Residency" + }, + { + "BriefDescription": "C2 residency percent per package", + "MetricExpr": "(cstate_pkg@c2\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C2_Pkg_Residency" + }, + { + "BriefDescription": "C3 residency percent per package", + "MetricExpr": "(cstate_pkg@c3\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C3_Pkg_Residency" + }, + { + "BriefDescription": "C6 residency percent per package", + "MetricExpr": "(cstate_pkg@c6\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C6_Pkg_Residency" + }, + { + "BriefDescription": "C7 residency percent per package", + "MetricExpr": "(cstate_pkg@c7\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C7_Pkg_Residency" + } +] diff --git a/tools/perf/pmu-events/arch/x86/skylakex/skx-metrics.json b/tools/perf/pmu-events/arch/x86/skylakex/skx-metrics.json new file mode 100644 index 000000000000..36c903faed0b --- /dev/null +++ b/tools/perf/pmu-events/arch/x86/skylakex/skx-metrics.json @@ -0,0 +1,164 @@ +[ + { + "BriefDescription": "Instructions Per Cycle (per logical thread)", + "MetricExpr": "INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD", + "MetricGroup": "TopDownL1", + "MetricName": "IPC" + }, + { + "BriefDescription": "Uops Per Instruction", + "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / INST_RETIRED.ANY", + "MetricGroup": "Pipeline", + "MetricName": "UPI" + }, + { + "BriefDescription": "Rough Estimation of fraction of fetched lines bytes that were likely consumed by program instructions", + "MetricExpr": "min( 1 , UOPS_ISSUED.ANY / ((UOPS_RETIRED.RETIRE_SLOTS / INST_RETIRED.ANY) * 64 * ( ICACHE_64B.IFTAG_HIT + ICACHE_64B.IFTAG_MISS ) / 4.1) )", + "MetricGroup": "Frontend", + "MetricName": "IFetch_Line_Utilization" + }, + { + "BriefDescription": "Fraction of Uops delivered by the DSB (aka Decoded Icache; or Uop Cache)", + "MetricExpr": "IDQ.DSB_UOPS / ( IDQ.DSB_UOPS + LSD.UOPS + IDQ.MITE_UOPS + IDQ.MS_UOPS )", + "MetricGroup": "DSB; Frontend_Bandwidth", + "MetricName": "DSB_Coverage" + }, + { + "BriefDescription": "Cycles Per Instruction (threaded)", + "MetricExpr": "1 / (INST_RETIRED.ANY / cycles)", + "MetricGroup": "Pipeline;Summary", + "MetricName": "CPI" + }, + { + "BriefDescription": "Per-thread actual clocks when the logical processor is active. 
This is called 'Clockticks' in VTune.", + "MetricExpr": "CPU_CLK_UNHALTED.THREAD", + "MetricGroup": "Summary", + "MetricName": "CLKS" + }, + { + "BriefDescription": "Total issue-pipeline slots", + "MetricExpr": "4*(( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else cycles)", + "MetricGroup": "TopDownL1", + "MetricName": "SLOTS" + }, + { + "BriefDescription": "Total number of retired Instructions", + "MetricExpr": "INST_RETIRED.ANY", + "MetricGroup": "Summary", + "MetricName": "Instructions" + }, + { + "BriefDescription": "Instructions Per Cycle (per physical core)", + "MetricExpr": "INST_RETIRED.ANY / (( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else cycles)", + "MetricGroup": "SMT", + "MetricName": "CoreIPC" + }, + { + "BriefDescription": "Instruction-Level-Parallelism (average number of uops executed when there is at least 1 uop executed)", + "MetricExpr": "UOPS_EXECUTED.THREAD / (( UOPS_EXECUTED.CORE_CYCLES_GE_1 / 2) if #SMT_on else UOPS_EXECUTED.CORE_CYCLES_GE_1)", + "MetricGroup": "Pipeline;Ports_Utilization", + "MetricName": "ILP" + }, + { + "BriefDescription": "Average Branch Address Clear Cost (fraction of cycles)", + "MetricExpr": "2* (( RS_EVENTS.EMPTY_CYCLES - ICACHE_16B.IFDATA_STALL - ICACHE_64B.IFTAG_STALL ) / RS_EVENTS.EMPTY_END)", + "MetricGroup": "Unknown_Branches", + "MetricName": "BAClear_Cost" + }, + { + "BriefDescription": "Core actual clocks when any thread is active on the physical core", + "MetricExpr": "( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else CPU_CLK_UNHALTED.THREAD", + "MetricGroup": "SMT", + "MetricName": "CORE_CLKS" + }, + { + "BriefDescription": "Actual Average Latency for L1 data-cache miss demand loads", + "MetricExpr": "L1D_PEND_MISS.PENDING / ( MEM_LOAD_RETIRED.L1_MISS_PS + MEM_LOAD_RETIRED.FB_HIT_PS )", + "MetricGroup": "Memory_Bound;Memory_Lat", + "MetricName": "Load_Miss_Real_Latency" + }, + { + "BriefDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least 1 such miss)", + "MetricExpr": "L1D_PEND_MISS.PENDING / (( L1D_PEND_MISS.PENDING_CYCLES_ANY / 2) if #SMT_on else L1D_PEND_MISS.PENDING_CYCLES)", + "MetricGroup": "Memory_Bound;Memory_BW", + "MetricName": "MLP" + }, + { + "BriefDescription": "Utilization of the core's Page Walker(s) serving STLB misses triggered by instruction/Load/Store accesses", + "MetricExpr": "( ITLB_MISSES.WALK_PENDING + DTLB_LOAD_MISSES.WALK_PENDING + DTLB_STORE_MISSES.WALK_PENDING + EPT.WALK_PENDING ) / ( 2 * (( CPU_CLK_UNHALTED.THREAD_ANY / 2 ) if #SMT_on else cycles) )", + "MetricGroup": "TLB", + "MetricName": "Page_Walks_Utilization" + }, + { + "BriefDescription": "Average CPU Utilization", + "MetricExpr": "CPU_CLK_UNHALTED.REF_TSC / msr@tsc@", + "MetricGroup": "Summary", + "MetricName": "CPU_Utilization" + }, + { + "BriefDescription": "Giga Floating Point Operations Per Second", + "MetricExpr": "(( 1*( FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE ) + 2* FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4*( FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE ) + 8* FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE )) / 1000000000 / duration_time", + "MetricGroup": "FLOPS;Summary", + "MetricName": "GFLOPs" + }, + { + "BriefDescription": "Average Frequency Utilization relative nominal frequency", + "MetricExpr": "CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC", + "MetricGroup": "Power", + "MetricName": "Turbo_Utilization" + }, + { + "BriefDescription": "Fraction of cycles where both hardware threads were active", + 
"MetricExpr": "1 - CPU_CLK_THREAD_UNHALTED.ONE_THREAD_ACTIVE / ( CPU_CLK_THREAD_UNHALTED.REF_XCLK_ANY / 2 ) if #SMT_on else 0", + "MetricGroup": "SMT;Summary", + "MetricName": "SMT_2T_Utilization" + }, + { + "BriefDescription": "Fraction of cycles spent in Kernel mode", + "MetricExpr": "CPU_CLK_UNHALTED.REF_TSC:u / CPU_CLK_UNHALTED.REF_TSC", + "MetricGroup": "Summary", + "MetricName": "Kernel_Utilization" + }, + { + "BriefDescription": "C3 residency percent per core", + "MetricExpr": "(cstate_core@c3\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C3_Core_Residency" + }, + { + "BriefDescription": "C6 residency percent per core", + "MetricExpr": "(cstate_core@c6\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C6_Core_Residency" + }, + { + "BriefDescription": "C7 residency percent per core", + "MetricExpr": "(cstate_core@c7\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C7_Core_Residency" + }, + { + "BriefDescription": "C2 residency percent per package", + "MetricExpr": "(cstate_pkg@c2\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C2_Pkg_Residency" + }, + { + "BriefDescription": "C3 residency percent per package", + "MetricExpr": "(cstate_pkg@c3\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C3_Pkg_Residency" + }, + { + "BriefDescription": "C6 residency percent per package", + "MetricExpr": "(cstate_pkg@c6\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C6_Pkg_Residency" + }, + { + "BriefDescription": "C7 residency percent per package", + "MetricExpr": "(cstate_pkg@c7\\-residency@ / msr@tsc@) * 100", + "MetricGroup": "Power", + "MetricName": "C7_Pkg_Residency" + } +] diff --git a/tools/perf/pmu-events/jevents.c b/tools/perf/pmu-events/jevents.c index d51dc9ca8861..9eb7047bafe4 100644 --- a/tools/perf/pmu-events/jevents.c +++ b/tools/perf/pmu-events/jevents.c @@ -292,7 +292,7 @@ static int print_events_table_entry(void *data, char *name, char *event, char *desc, char *long_desc, char *pmu, char *unit, char *perpkg, char *metric_expr, - char *metric_name) + char *metric_name, char *metric_group) { struct perf_entry_data *pd = data; FILE *outfp = pd->outfp; @@ -304,8 +304,10 @@ static int print_events_table_entry(void *data, char *name, char *event, */ fprintf(outfp, "{\n"); - fprintf(outfp, "\t.name = \"%s\",\n", name); - fprintf(outfp, "\t.event = \"%s\",\n", event); + if (name) + fprintf(outfp, "\t.name = \"%s\",\n", name); + if (event) + fprintf(outfp, "\t.event = \"%s\",\n", event); fprintf(outfp, "\t.desc = \"%s\",\n", desc); fprintf(outfp, "\t.topic = \"%s\",\n", topic); if (long_desc && long_desc[0]) @@ -320,6 +322,8 @@ static int print_events_table_entry(void *data, char *name, char *event, fprintf(outfp, "\t.metric_expr = \"%s\",\n", metric_expr); if (metric_name) fprintf(outfp, "\t.metric_name = \"%s\",\n", metric_name); + if (metric_group) + fprintf(outfp, "\t.metric_group = \"%s\",\n", metric_group); fprintf(outfp, "},\n"); return 0; @@ -357,6 +361,9 @@ static char *real_event(const char *name, char *event) { int i; + if (!name) + return NULL; + for (i = 0; fixed[i].name; i++) if (!strcasecmp(name, fixed[i].name)) return (char *)fixed[i].event; @@ -369,7 +376,7 @@ int json_events(const char *fn, char *long_desc, char *pmu, char *unit, char *perpkg, char *metric_expr, - char *metric_name), + char *metric_name, char *metric_group), void *data) { int err = -EIO; @@ -397,6 +404,7 @@ int json_events(const char *fn, 
char *unit = NULL; char *metric_expr = NULL; char *metric_name = NULL; + char *metric_group = NULL; unsigned long long eventcode = 0; struct msrmap *msr = NULL; jsmntok_t *msrval = NULL; @@ -476,6 +484,8 @@ int json_events(const char *fn, addfield(map, &perpkg, "", "", val); } else if (json_streq(map, field, "MetricName")) { addfield(map, &metric_name, "", "", val); + } else if (json_streq(map, field, "MetricGroup")) { + addfield(map, &metric_group, "", "", val); } else if (json_streq(map, field, "MetricExpr")) { addfield(map, &metric_expr, "", "", val); for (s = metric_expr; *s; s++) @@ -501,10 +511,11 @@ int json_events(const char *fn, addfield(map, &event, ",", filter, NULL); if (msr != NULL) addfield(map, &event, ",", msr->pname, msrval); - fixname(name); + if (name) + fixname(name); err = func(data, name, real_event(name, event), desc, long_desc, - pmu, unit, perpkg, metric_expr, metric_name); + pmu, unit, perpkg, metric_expr, metric_name, metric_group); free(event); free(desc); free(name); @@ -516,6 +527,7 @@ int json_events(const char *fn, free(unit); free(metric_expr); free(metric_name); + free(metric_group); if (err) break; tok += j; diff --git a/tools/perf/pmu-events/jevents.h b/tools/perf/pmu-events/jevents.h index d87efd2685b8..4684c673c445 100644 --- a/tools/perf/pmu-events/jevents.h +++ b/tools/perf/pmu-events/jevents.h @@ -7,7 +7,7 @@ int json_events(const char *fn, char *long_desc, char *pmu, char *unit, char *perpkg, char *metric_expr, - char *metric_name), + char *metric_name, char *metric_group), void *data); char *get_cpu_str(void); diff --git a/tools/perf/pmu-events/pmu-events.h b/tools/perf/pmu-events/pmu-events.h index e08789ddfe6c..92a4d15ee0b9 100644 --- a/tools/perf/pmu-events/pmu-events.h +++ b/tools/perf/pmu-events/pmu-events.h @@ -16,6 +16,7 @@ struct pmu_event { const char *perpkg; const char *metric_expr; const char *metric_name; + const char *metric_group; }; /* diff --git a/tools/perf/tests/attr.c b/tools/perf/tests/attr.c index c180bbcdbef6..0e1367f90af5 100644 --- a/tools/perf/tests/attr.c +++ b/tools/perf/tests/attr.c @@ -167,7 +167,7 @@ static int run_dir(const char *d, const char *perf) snprintf(cmd, 3*PATH_MAX, PYTHON " %s/attr.py -d %s/attr/ -p %s %.*s", d, d, perf, vcnt, v); - return system(cmd); + return system(cmd) ? TEST_FAIL : TEST_OK; } int test__attr(struct test *test __maybe_unused, int subtest __maybe_unused) diff --git a/tools/perf/tests/attr.py b/tools/perf/tests/attr.py index 907b1b2f56ad..ff9b60b99f52 100644 --- a/tools/perf/tests/attr.py +++ b/tools/perf/tests/attr.py @@ -238,6 +238,7 @@ class Test(object): # events in result. Fail if there's not any. 
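
An aside on the jevents.c, jevents.h and pmu-events.h changes above (a sketch, not from the patch itself): with MetricGroup now parsed and passed through, and name/event printed only when present, a metric-only JSON entry such as the IPC metric earlier in this patch lands in the generated pmu-events table roughly as below. The struct here is trimmed for illustration, and the topic string is an assumption, since jevents derives it from the JSON file name.

```c
/* Trimmed, illustrative copy of struct pmu_event; see tools/perf/pmu-events/pmu-events.h */
struct pmu_event_sketch {
	const char *name, *event, *desc, *topic;
	const char *metric_expr, *metric_name, *metric_group;
};

static const struct pmu_event_sketch example_ipc_entry = {
	/* .name and .event stay unset: metric-only entries have no EventName/EventCode */
	.desc         = "Instructions Per Cycle (per logical thread)",
	.topic        = "ivb metrics",	/* illustrative; derived from the file name by jevents */
	.metric_expr  = "INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD",
	.metric_name  = "IPC",
	.metric_group = "TopDownL1",	/* the new field carried by this change */
};
```
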
for exp_name, exp_event in expect.items(): exp_list = [] + res_event = {} log.debug(" matching [%s]" % exp_name) for res_name, res_event in result.items(): log.debug(" to [%s]" % res_name) @@ -254,7 +255,10 @@ class Test(object): if exp_event.optional(): log.debug(" %s does not match, but is optional" % exp_name) else: - exp_event.diff(res_event) + if not res_event: + log.debug(" res_event is empty"); + else: + exp_event.diff(res_event) raise Fail(self, 'match failure'); match[exp_name] = exp_list diff --git a/tools/perf/tests/attr/base-record b/tools/perf/tests/attr/base-record index 31e0b1da830b..37940665f736 100644 --- a/tools/perf/tests/attr/base-record +++ b/tools/perf/tests/attr/base-record @@ -23,7 +23,7 @@ comm=1 freq=1 inherit_stat=0 enable_on_exec=1 -task=0 +task=1 watermark=0 precise_ip=0|1|2|3 mmap_data=0 diff --git a/tools/perf/tests/attr/test-record-group b/tools/perf/tests/attr/test-record-group index 6e7961f6f7a5..618ba1c17474 100644 --- a/tools/perf/tests/attr/test-record-group +++ b/tools/perf/tests/attr/test-record-group @@ -17,5 +17,6 @@ sample_type=327 read_format=4 mmap=0 comm=0 +task=0 enable_on_exec=0 disabled=0 diff --git a/tools/perf/tests/attr/test-record-group-sampling b/tools/perf/tests/attr/test-record-group-sampling index ef59afd6d635..f906b793196f 100644 --- a/tools/perf/tests/attr/test-record-group-sampling +++ b/tools/perf/tests/attr/test-record-group-sampling @@ -23,7 +23,7 @@ sample_type=343 # PERF_FORMAT_ID | PERF_FORMAT_GROUP read_format=12 - +task=0 mmap=0 comm=0 enable_on_exec=0 diff --git a/tools/perf/tests/attr/test-record-group1 b/tools/perf/tests/attr/test-record-group1 index 87a222d014d8..48e8bd12fe46 100644 --- a/tools/perf/tests/attr/test-record-group1 +++ b/tools/perf/tests/attr/test-record-group1 @@ -18,5 +18,6 @@ sample_type=327 read_format=4 mmap=0 comm=0 +task=0 enable_on_exec=0 disabled=0 diff --git a/tools/perf/tests/attr/test-stat-C0 b/tools/perf/tests/attr/test-stat-C0 index 67717fe6a65d..a2c76d10b2bb 100644 --- a/tools/perf/tests/attr/test-stat-C0 +++ b/tools/perf/tests/attr/test-stat-C0 @@ -7,3 +7,4 @@ ret = 1 # events are disabled by default when attached to cpu disabled=1 enable_on_exec=0 +optional=1 diff --git a/tools/perf/tests/attr/test-stat-basic b/tools/perf/tests/attr/test-stat-basic index 74e17881f2ba..69867d049fda 100644 --- a/tools/perf/tests/attr/test-stat-basic +++ b/tools/perf/tests/attr/test-stat-basic @@ -4,3 +4,4 @@ args = -e cycles kill >/dev/null 2>&1 ret = 1 [event:base-stat] +optional=1 diff --git a/tools/perf/tests/attr/test-stat-default b/tools/perf/tests/attr/test-stat-default index e911dbd4eb47..d9e99b3f77e6 100644 --- a/tools/perf/tests/attr/test-stat-default +++ b/tools/perf/tests/attr/test-stat-default @@ -32,6 +32,7 @@ config=2 fd=5 type=0 config=0 +optional=1 # PERF_TYPE_HARDWARE / PERF_COUNT_HW_STALLED_CYCLES_FRONTEND [event6:base-stat] @@ -52,15 +53,18 @@ optional=1 fd=8 type=0 config=1 +optional=1 # PERF_TYPE_HARDWARE / PERF_COUNT_HW_BRANCH_INSTRUCTIONS [event9:base-stat] fd=9 type=0 config=4 +optional=1 # PERF_TYPE_HARDWARE / PERF_COUNT_HW_BRANCH_MISSES [event10:base-stat] fd=10 type=0 config=5 +optional=1 diff --git a/tools/perf/tests/attr/test-stat-detailed-1 b/tools/perf/tests/attr/test-stat-detailed-1 index b39270a08e74..8b04a055d154 100644 --- a/tools/perf/tests/attr/test-stat-detailed-1 +++ b/tools/perf/tests/attr/test-stat-detailed-1 @@ -33,6 +33,7 @@ config=2 fd=5 type=0 config=0 +optional=1 # PERF_TYPE_HARDWARE / PERF_COUNT_HW_STALLED_CYCLES_FRONTEND [event6:base-stat] @@ -53,18 +54,21 @@ 
optional=1 fd=8 type=0 config=1 +optional=1 # PERF_TYPE_HARDWARE / PERF_COUNT_HW_BRANCH_INSTRUCTIONS [event9:base-stat] fd=9 type=0 config=4 +optional=1 # PERF_TYPE_HARDWARE / PERF_COUNT_HW_BRANCH_MISSES [event10:base-stat] fd=10 type=0 config=5 +optional=1 # PERF_TYPE_HW_CACHE / # PERF_COUNT_HW_CACHE_L1D << 0 | @@ -74,6 +78,7 @@ config=5 fd=11 type=3 config=0 +optional=1 # PERF_TYPE_HW_CACHE / # PERF_COUNT_HW_CACHE_L1D << 0 | @@ -83,6 +88,7 @@ config=0 fd=12 type=3 config=65536 +optional=1 # PERF_TYPE_HW_CACHE / # PERF_COUNT_HW_CACHE_LL << 0 | @@ -92,6 +98,7 @@ config=65536 fd=13 type=3 config=2 +optional=1 # PERF_TYPE_HW_CACHE, # PERF_COUNT_HW_CACHE_LL << 0 | @@ -101,3 +108,4 @@ config=2 fd=14 type=3 config=65538 +optional=1 diff --git a/tools/perf/tests/attr/test-stat-detailed-2 b/tools/perf/tests/attr/test-stat-detailed-2 index 45f8e6ea34f8..4fca9f1bfbf8 100644 --- a/tools/perf/tests/attr/test-stat-detailed-2 +++ b/tools/perf/tests/attr/test-stat-detailed-2 @@ -33,6 +33,7 @@ config=2 fd=5 type=0 config=0 +optional=1 # PERF_TYPE_HARDWARE / PERF_COUNT_HW_STALLED_CYCLES_FRONTEND [event6:base-stat] @@ -53,18 +54,21 @@ optional=1 fd=8 type=0 config=1 +optional=1 # PERF_TYPE_HARDWARE / PERF_COUNT_HW_BRANCH_INSTRUCTIONS [event9:base-stat] fd=9 type=0 config=4 +optional=1 # PERF_TYPE_HARDWARE / PERF_COUNT_HW_BRANCH_MISSES [event10:base-stat] fd=10 type=0 config=5 +optional=1 # PERF_TYPE_HW_CACHE / # PERF_COUNT_HW_CACHE_L1D << 0 | @@ -74,6 +78,7 @@ config=5 fd=11 type=3 config=0 +optional=1 # PERF_TYPE_HW_CACHE / # PERF_COUNT_HW_CACHE_L1D << 0 | @@ -83,6 +88,7 @@ config=0 fd=12 type=3 config=65536 +optional=1 # PERF_TYPE_HW_CACHE / # PERF_COUNT_HW_CACHE_LL << 0 | @@ -92,6 +98,7 @@ config=65536 fd=13 type=3 config=2 +optional=1 # PERF_TYPE_HW_CACHE, # PERF_COUNT_HW_CACHE_LL << 0 | @@ -101,6 +108,7 @@ config=2 fd=14 type=3 config=65538 +optional=1 # PERF_TYPE_HW_CACHE, # PERF_COUNT_HW_CACHE_L1I << 0 | @@ -120,6 +128,7 @@ optional=1 fd=16 type=3 config=65537 +optional=1 # PERF_TYPE_HW_CACHE, # PERF_COUNT_HW_CACHE_DTLB << 0 | @@ -129,6 +138,7 @@ config=65537 fd=17 type=3 config=3 +optional=1 # PERF_TYPE_HW_CACHE, # PERF_COUNT_HW_CACHE_DTLB << 0 | @@ -138,6 +148,7 @@ config=3 fd=18 type=3 config=65539 +optional=1 # PERF_TYPE_HW_CACHE, # PERF_COUNT_HW_CACHE_ITLB << 0 | @@ -147,6 +158,7 @@ config=65539 fd=19 type=3 config=4 +optional=1 # PERF_TYPE_HW_CACHE, # PERF_COUNT_HW_CACHE_ITLB << 0 | @@ -156,3 +168,4 @@ config=4 fd=20 type=3 config=65540 +optional=1 diff --git a/tools/perf/tests/attr/test-stat-detailed-3 b/tools/perf/tests/attr/test-stat-detailed-3 index 30ae0fb7a3fd..4bb58e1c82a6 100644 --- a/tools/perf/tests/attr/test-stat-detailed-3 +++ b/tools/perf/tests/attr/test-stat-detailed-3 @@ -33,6 +33,7 @@ config=2 fd=5 type=0 config=0 +optional=1 # PERF_TYPE_HARDWARE / PERF_COUNT_HW_STALLED_CYCLES_FRONTEND [event6:base-stat] @@ -53,18 +54,21 @@ optional=1 fd=8 type=0 config=1 +optional=1 # PERF_TYPE_HARDWARE / PERF_COUNT_HW_BRANCH_INSTRUCTIONS [event9:base-stat] fd=9 type=0 config=4 +optional=1 # PERF_TYPE_HARDWARE / PERF_COUNT_HW_BRANCH_MISSES [event10:base-stat] fd=10 type=0 config=5 +optional=1 # PERF_TYPE_HW_CACHE / # PERF_COUNT_HW_CACHE_L1D << 0 | @@ -74,6 +78,7 @@ config=5 fd=11 type=3 config=0 +optional=1 # PERF_TYPE_HW_CACHE / # PERF_COUNT_HW_CACHE_L1D << 0 | @@ -83,6 +88,7 @@ config=0 fd=12 type=3 config=65536 +optional=1 # PERF_TYPE_HW_CACHE / # PERF_COUNT_HW_CACHE_LL << 0 | @@ -92,6 +98,7 @@ config=65536 fd=13 type=3 config=2 +optional=1 # PERF_TYPE_HW_CACHE, # PERF_COUNT_HW_CACHE_LL << 
0 | @@ -101,6 +108,7 @@ config=2 fd=14 type=3 config=65538 +optional=1 # PERF_TYPE_HW_CACHE, # PERF_COUNT_HW_CACHE_L1I << 0 | @@ -120,6 +128,7 @@ optional=1 fd=16 type=3 config=65537 +optional=1 # PERF_TYPE_HW_CACHE, # PERF_COUNT_HW_CACHE_DTLB << 0 | @@ -129,6 +138,7 @@ config=65537 fd=17 type=3 config=3 +optional=1 # PERF_TYPE_HW_CACHE, # PERF_COUNT_HW_CACHE_DTLB << 0 | @@ -138,6 +148,7 @@ config=3 fd=18 type=3 config=65539 +optional=1 # PERF_TYPE_HW_CACHE, # PERF_COUNT_HW_CACHE_ITLB << 0 | @@ -147,6 +158,7 @@ config=65539 fd=19 type=3 config=4 +optional=1 # PERF_TYPE_HW_CACHE, # PERF_COUNT_HW_CACHE_ITLB << 0 | @@ -156,6 +168,7 @@ config=4 fd=20 type=3 config=65540 +optional=1 # PERF_TYPE_HW_CACHE, # PERF_COUNT_HW_CACHE_L1D << 0 | diff --git a/tools/perf/tests/attr/test-stat-group b/tools/perf/tests/attr/test-stat-group index fdc1596a8862..e15d6946e9b3 100644 --- a/tools/perf/tests/attr/test-stat-group +++ b/tools/perf/tests/attr/test-stat-group @@ -6,6 +6,7 @@ ret = 1 [event-1:base-stat] fd=1 group_fd=-1 +read_format=3|15 [event-2:base-stat] fd=2 @@ -13,3 +14,4 @@ group_fd=1 config=1 disabled=0 enable_on_exec=0 +read_format=3|15 diff --git a/tools/perf/tests/attr/test-stat-group1 b/tools/perf/tests/attr/test-stat-group1 index 2a1f86e4a904..1746751123dc 100644 --- a/tools/perf/tests/attr/test-stat-group1 +++ b/tools/perf/tests/attr/test-stat-group1 @@ -6,6 +6,7 @@ ret = 1 [event-1:base-stat] fd=1 group_fd=-1 +read_format=3|15 [event-2:base-stat] fd=2 @@ -13,3 +14,4 @@ group_fd=1 config=1 disabled=0 enable_on_exec=0 +read_format=3|15 diff --git a/tools/perf/tests/attr/test-stat-no-inherit b/tools/perf/tests/attr/test-stat-no-inherit index d54b2a1e3e28..924fbb9300d1 100644 --- a/tools/perf/tests/attr/test-stat-no-inherit +++ b/tools/perf/tests/attr/test-stat-no-inherit @@ -5,3 +5,4 @@ ret = 1 [event:base-stat] inherit=0 +optional=1 diff --git a/tools/perf/tests/builtin-test.c b/tools/perf/tests/builtin-test.c index 53d06f37406a..766573e236e4 100644 --- a/tools/perf/tests/builtin-test.c +++ b/tools/perf/tests/builtin-test.c @@ -4,6 +4,7 @@ * * Builtin regression testing command: ever growing number of sanity tests */ +#include <fcntl.h> #include <errno.h> #include <unistd.h> #include <string.h> diff --git a/tools/perf/tests/mmap-thread-lookup.c b/tools/perf/tests/mmap-thread-lookup.c index 3c3f3e029e33..868d82b501f4 100644 --- a/tools/perf/tests/mmap-thread-lookup.c +++ b/tools/perf/tests/mmap-thread-lookup.c @@ -132,7 +132,7 @@ static int synth_all(struct machine *machine) { return perf_event__synthesize_threads(NULL, perf_event__process, - machine, 0, 500); + machine, 0, 500, 1); } static int synth_process(struct machine *machine) diff --git a/tools/perf/tests/topology.c b/tools/perf/tests/topology.c index a59db7c45a65..17cb1bb3448c 100644 --- a/tools/perf/tests/topology.c +++ b/tools/perf/tests/topology.c @@ -30,12 +30,14 @@ static int get_temp(char *path) static int session_write_header(char *path) { struct perf_session *session; - struct perf_data_file file = { - .path = path, - .mode = PERF_DATA_MODE_WRITE, + struct perf_data data = { + .file = { + .path = path, + }, + .mode = PERF_DATA_MODE_WRITE, }; - session = perf_session__new(&file, false, NULL); + session = perf_session__new(&data, false, NULL); TEST_ASSERT_VAL("can't get session", session); session->evlist = perf_evlist__new_default(); @@ -47,7 +49,7 @@ static int session_write_header(char *path) session->header.data_size += DATA_SIZE; TEST_ASSERT_VAL("failed to write header", - !perf_session__write_header(session, 
session->evlist, file.fd, true)); + !perf_session__write_header(session, session->evlist, data.file.fd, true)); perf_session__delete(session); @@ -57,13 +59,15 @@ static int session_write_header(char *path) static int check_cpu_topology(char *path, struct cpu_map *map) { struct perf_session *session; - struct perf_data_file file = { - .path = path, - .mode = PERF_DATA_MODE_READ, + struct perf_data data = { + .file = { + .path = path, + }, + .mode = PERF_DATA_MODE_READ, }; int i; - session = perf_session__new(&file, false, NULL); + session = perf_session__new(&data, false, NULL); TEST_ASSERT_VAL("can't get session", session); for (i = 0; i < session->header.env.nr_cpus_avail; i++) { diff --git a/tools/perf/trace/beauty/Build b/tools/perf/trace/beauty/Build index 175d633c6b49..066bbf0f4a74 100644 --- a/tools/perf/trace/beauty/Build +++ b/tools/perf/trace/beauty/Build @@ -3,5 +3,7 @@ libperf-y += fcntl.o ifeq ($(SRCARCH),$(filter $(SRCARCH),x86)) libperf-y += ioctl.o endif +libperf-y += kcmp.o libperf-y += pkey_alloc.o +libperf-y += prctl.o libperf-y += statx.o diff --git a/tools/perf/trace/beauty/beauty.h b/tools/perf/trace/beauty/beauty.h index d80655cd1881..a6dfd04beaee 100644 --- a/tools/perf/trace/beauty/beauty.h +++ b/tools/perf/trace/beauty/beauty.h @@ -4,6 +4,7 @@ #include <linux/kernel.h> #include <linux/types.h> +#include <sys/types.h> struct strarray { int offset; @@ -27,6 +28,8 @@ size_t strarray__scnprintf(struct strarray *sa, char *bf, size_t size, const cha struct trace; struct thread; +size_t pid__scnprintf_fd(struct trace *trace, pid_t pid, int fd, char *bf, size_t size); + /** * @val: value of syscall argument being formatted * @args: All the args, use syscall_args__val(arg, nth) to access one @@ -79,12 +82,27 @@ size_t syscall_arg__scnprintf_fcntl_arg(char *bf, size_t size, struct syscall_ar size_t syscall_arg__scnprintf_ioctl_cmd(char *bf, size_t size, struct syscall_arg *arg); #define SCA_IOCTL_CMD syscall_arg__scnprintf_ioctl_cmd +size_t syscall_arg__scnprintf_kcmp_type(char *bf, size_t size, struct syscall_arg *arg); +#define SCA_KCMP_TYPE syscall_arg__scnprintf_kcmp_type + +size_t syscall_arg__scnprintf_kcmp_idx(char *bf, size_t size, struct syscall_arg *arg); +#define SCA_KCMP_IDX syscall_arg__scnprintf_kcmp_idx + size_t syscall_arg__scnprintf_pkey_alloc_access_rights(char *bf, size_t size, struct syscall_arg *arg); #define SCA_PKEY_ALLOC_ACCESS_RIGHTS syscall_arg__scnprintf_pkey_alloc_access_rights size_t syscall_arg__scnprintf_open_flags(char *bf, size_t size, struct syscall_arg *arg); #define SCA_OPEN_FLAGS syscall_arg__scnprintf_open_flags +size_t syscall_arg__scnprintf_prctl_option(char *bf, size_t size, struct syscall_arg *arg); +#define SCA_PRCTL_OPTION syscall_arg__scnprintf_prctl_option + +size_t syscall_arg__scnprintf_prctl_arg2(char *bf, size_t size, struct syscall_arg *arg); +#define SCA_PRCTL_ARG2 syscall_arg__scnprintf_prctl_arg2 + +size_t syscall_arg__scnprintf_prctl_arg3(char *bf, size_t size, struct syscall_arg *arg); +#define SCA_PRCTL_ARG3 syscall_arg__scnprintf_prctl_arg3 + size_t syscall_arg__scnprintf_statx_flags(char *bf, size_t size, struct syscall_arg *arg); #define SCA_STATX_FLAGS syscall_arg__scnprintf_statx_flags diff --git a/tools/perf/trace/beauty/kcmp.c b/tools/perf/trace/beauty/kcmp.c new file mode 100644 index 000000000000..f62040eb9d5c --- /dev/null +++ b/tools/perf/trace/beauty/kcmp.c @@ -0,0 +1,44 @@ +/* + * trace/beauty/kcmp.c + * + * Copyright (C) 2017, Red Hat Inc, Arnaldo Carvalho de Melo <acme@redhat.com> + * + * Released under 
the GPL v2. (and only v2, not any later version) + */ + +#include "trace/beauty/beauty.h" +#include <linux/kernel.h> +#include <sys/types.h> +#include <machine.h> +#include <uapi/linux/kcmp.h> + +#include "trace/beauty/generated/kcmp_type_array.c" + +size_t syscall_arg__scnprintf_kcmp_idx(char *bf, size_t size, struct syscall_arg *arg) +{ + unsigned long fd = arg->val; + int type = syscall_arg__val(arg, 2); + pid_t pid; + + if (type != KCMP_FILE) + return syscall_arg__scnprintf_long(bf, size, arg); + + pid = syscall_arg__val(arg, arg->idx == 3 ? 0 : 1); /* idx1 -> pid1, idx2 -> pid2 */ + return pid__scnprintf_fd(arg->trace, pid, fd, bf, size); +} + +static size_t kcmp__scnprintf_type(int type, char *bf, size_t size) +{ + static DEFINE_STRARRAY(kcmp_types); + return strarray__scnprintf(&strarray__kcmp_types, bf, size, "%d", type); +} + +size_t syscall_arg__scnprintf_kcmp_type(char *bf, size_t size, struct syscall_arg *arg) +{ + unsigned long type = arg->val; + + if (type != KCMP_FILE) + arg->mask |= (1 << 3) | (1 << 4); /* Ignore idx1 and idx2 */ + + return kcmp__scnprintf_type(type, bf, size); +} diff --git a/tools/perf/trace/beauty/kcmp_type.sh b/tools/perf/trace/beauty/kcmp_type.sh new file mode 100755 index 000000000000..40d063b8c082 --- /dev/null +++ b/tools/perf/trace/beauty/kcmp_type.sh @@ -0,0 +1,10 @@ +#!/bin/sh + +header_dir=$1 + +printf "static const char *kcmp_types[] = {\n" +regex='^[[:space:]]+(KCMP_(\w+)),' +egrep $regex ${header_dir}/kcmp.h | grep -v KCMP_TYPES, | \ + sed -r "s/$regex/\1 \2/g" | \ + xargs printf "\t[%s]\t= \"%s\",\n" +printf "};\n" diff --git a/tools/perf/trace/beauty/madvise_behavior.sh b/tools/perf/trace/beauty/madvise_behavior.sh new file mode 100755 index 000000000000..60ef8640ee70 --- /dev/null +++ b/tools/perf/trace/beauty/madvise_behavior.sh @@ -0,0 +1,10 @@ +#!/bin/sh + +header_dir=$1 + +printf "static const char *madvise_advices[] = {\n" +regex='^[[:space:]]*#[[:space:]]*define[[:space:]]+MADV_([[:alnum:]_]+)[[:space:]]+([[:digit:]]+)[[:space:]]*.*' +egrep $regex ${header_dir}/mman-common.h | \ + sed -r "s/$regex/\2 \1/g" | \ + sort -n | xargs printf "\t[%s] = \"%s\",\n" +printf "};\n" diff --git a/tools/perf/trace/beauty/mmap.c b/tools/perf/trace/beauty/mmap.c index 51f1cea406f5..9e1668b2c5d7 100644 --- a/tools/perf/trace/beauty/mmap.c +++ b/tools/perf/trace/beauty/mmap.c @@ -95,35 +95,21 @@ static size_t syscall_arg__scnprintf_mremap_flags(char *bf, size_t size, #define SCA_MREMAP_FLAGS syscall_arg__scnprintf_mremap_flags +static size_t madvise__scnprintf_behavior(int behavior, char *bf, size_t size) +{ +#include "trace/beauty/generated/madvise_behavior_array.c" + static DEFINE_STRARRAY(madvise_advices); + + if (behavior < strarray__madvise_advices.nr_entries && strarray__madvise_advices.entries[behavior] != NULL) + return scnprintf(bf, size, "MADV_%s", strarray__madvise_advices.entries[behavior]); + + return scnprintf(bf, size, "%#", behavior); +} + static size_t syscall_arg__scnprintf_madvise_behavior(char *bf, size_t size, struct syscall_arg *arg) { - int behavior = arg->val; - - switch (behavior) { -#define P_MADV_BHV(n) case MADV_##n: return scnprintf(bf, size, #n) - P_MADV_BHV(NORMAL); - P_MADV_BHV(RANDOM); - P_MADV_BHV(SEQUENTIAL); - P_MADV_BHV(WILLNEED); - P_MADV_BHV(DONTNEED); - P_MADV_BHV(FREE); - P_MADV_BHV(REMOVE); - P_MADV_BHV(DONTFORK); - P_MADV_BHV(DOFORK); - P_MADV_BHV(HWPOISON); - P_MADV_BHV(SOFT_OFFLINE); - P_MADV_BHV(MERGEABLE); - P_MADV_BHV(UNMERGEABLE); - P_MADV_BHV(HUGEPAGE); - P_MADV_BHV(NOHUGEPAGE); - P_MADV_BHV(DONTDUMP); 
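
For context on the madvise beautifier rewrite in this hunk (an illustrative aside, not part of the patch): madvise_behavior.sh emits a designated-initializer string table indexed by the numeric behavior value, which madvise__scnprintf_behavior() then indexes directly instead of the removed switch. A hand-written excerpt of what the generated array would plausibly contain, assuming the usual MADV_* values from mman-common.h:

```c
/* Excerpt only; the real table is generated into
 * trace/beauty/generated/madvise_behavior_array.c from the kernel header. */
static const char *madvise_advices[] = {
	[0] = "NORMAL",		/* MADV_NORMAL */
	[1] = "RANDOM",		/* MADV_RANDOM */
	[2] = "SEQUENTIAL",	/* MADV_SEQUENTIAL */
	[3] = "WILLNEED",	/* MADV_WILLNEED */
	[4] = "DONTNEED",	/* MADV_DONTNEED */
	/* ... further entries follow the header's numbering ... */
};
```
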
- P_MADV_BHV(DODUMP); -#undef P_MADV_BHV - default: break; - } - - return scnprintf(bf, size, "%#x", behavior); + return madvise__scnprintf_behavior(arg->val, bf, size); } #define SCA_MADV_BHV syscall_arg__scnprintf_madvise_behavior diff --git a/tools/perf/trace/beauty/prctl.c b/tools/perf/trace/beauty/prctl.c new file mode 100644 index 000000000000..246130dad6c4 --- /dev/null +++ b/tools/perf/trace/beauty/prctl.c @@ -0,0 +1,82 @@ +/* + * trace/beauty/prctl.c + * + * Copyright (C) 2017, Red Hat Inc, Arnaldo Carvalho de Melo <acme@redhat.com> + * + * Released under the GPL v2. (and only v2, not any later version) + */ + +#include "trace/beauty/beauty.h" +#include <linux/kernel.h> +#include <uapi/linux/prctl.h> + +#include "trace/beauty/generated/prctl_option_array.c" + +static size_t prctl__scnprintf_option(int option, char *bf, size_t size) +{ + static DEFINE_STRARRAY(prctl_options); + return strarray__scnprintf(&strarray__prctl_options, bf, size, "%d", option); +} + +static size_t prctl__scnprintf_set_mm(int option, char *bf, size_t size) +{ + static DEFINE_STRARRAY(prctl_set_mm_options); + return strarray__scnprintf(&strarray__prctl_set_mm_options, bf, size, "%d", option); +} + +size_t syscall_arg__scnprintf_prctl_arg2(char *bf, size_t size, struct syscall_arg *arg) +{ + int option = syscall_arg__val(arg, 0); + + if (option == PR_SET_MM) + return prctl__scnprintf_set_mm(arg->val, bf, size); + /* + * We still don't grab the contents of pointers on entry or exit, + * so just print them as hex numbers + */ + if (option == PR_SET_NAME) + return syscall_arg__scnprintf_hex(bf, size, arg); + + return syscall_arg__scnprintf_long(bf, size, arg); +} + +size_t syscall_arg__scnprintf_prctl_arg3(char *bf, size_t size, struct syscall_arg *arg) +{ + int option = syscall_arg__val(arg, 0); + + if (option == PR_SET_MM) + return syscall_arg__scnprintf_hex(bf, size, arg); + + return syscall_arg__scnprintf_long(bf, size, arg); +} + +size_t syscall_arg__scnprintf_prctl_option(char *bf, size_t size, struct syscall_arg *arg) +{ + unsigned long option = arg->val; + enum { + SPO_ARG2 = (1 << 1), + SPO_ARG3 = (1 << 2), + SPO_ARG4 = (1 << 3), + SPO_ARG5 = (1 << 4), + SPO_ARG6 = (1 << 5), + }; + const u8 all_but2 = SPO_ARG3 | SPO_ARG4 | SPO_ARG5 | SPO_ARG6; + const u8 all = SPO_ARG2 | all_but2; + const u8 masks[] = { + [PR_GET_DUMPABLE] = all, + [PR_SET_DUMPABLE] = all_but2, + [PR_SET_NAME] = all_but2, + [PR_GET_CHILD_SUBREAPER] = all_but2, + [PR_SET_CHILD_SUBREAPER] = all_but2, + [PR_GET_SECUREBITS] = all, + [PR_SET_SECUREBITS] = all_but2, + [PR_SET_MM] = SPO_ARG4 | SPO_ARG5 | SPO_ARG6, + [PR_GET_PDEATHSIG] = all, + [PR_SET_PDEATHSIG] = all_but2, + }; + + if (option < ARRAY_SIZE(masks)) + arg->mask |= masks[option]; + + return prctl__scnprintf_option(option, bf, size); +} diff --git a/tools/perf/trace/beauty/prctl_option.sh b/tools/perf/trace/beauty/prctl_option.sh new file mode 100755 index 000000000000..0be4138fbe71 --- /dev/null +++ b/tools/perf/trace/beauty/prctl_option.sh @@ -0,0 +1,17 @@ +#!/bin/sh + +header_dir=$1 + +printf "static const char *prctl_options[] = {\n" +regex='^#define[[:space:]]+PR_([GS]ET\w+)[[:space:]]*([[:xdigit:]]+).*' +egrep $regex ${header_dir}/prctl.h | grep -v PR_SET_PTRACER | \ + sed -r "s/$regex/\2 \1/g" | \ + sort -n | xargs printf "\t[%s] = \"%s\",\n" +printf "};\n" + +printf "static const char *prctl_set_mm_options[] = {\n" +regex='^#[[:space:]]+define[[:space:]]+PR_SET_MM_(\w+)[[:space:]]*([[:digit:]]+).*' +egrep $regex ${header_dir}/prctl.h | \ + sed -r "s/$regex/\2 \1/g" | \ + 
sort -n | xargs printf "\t[%s] = \"%s\",\n" +printf "};\n" diff --git a/tools/perf/ui/browsers/hists.c b/tools/perf/ui/browsers/hists.c index 628ad5f7eddb..68146f4620a5 100644 --- a/tools/perf/ui/browsers/hists.c +++ b/tools/perf/ui/browsers/hists.c @@ -155,57 +155,9 @@ static void callchain_list__set_folding(struct callchain_list *cl, bool unfold) cl->unfolded = unfold ? cl->has_children : false; } -static struct inline_node *inline_node__create(struct map *map, u64 ip) -{ - struct dso *dso; - struct inline_node *node; - - if (map == NULL) - return NULL; - - dso = map->dso; - if (dso == NULL) - return NULL; - - node = dso__parse_addr_inlines(dso, - map__rip_2objdump(map, ip)); - - return node; -} - -static int inline__count_rows(struct inline_node *node) -{ - struct inline_list *ilist; - int i = 0; - - if (node == NULL) - return 0; - - list_for_each_entry(ilist, &node->val, list) { - if ((ilist->filename != NULL) || (ilist->funcname != NULL)) - i++; - } - - return i; -} - -static int callchain_list__inline_rows(struct callchain_list *chain) -{ - struct inline_node *node; - int rows; - - node = inline_node__create(chain->ms.map, chain->ip); - if (node == NULL) - return 0; - - rows = inline__count_rows(node); - inline_node__delete(node); - return rows; -} - static int callchain_node__count_rows_rb_tree(struct callchain_node *node) { - int n = 0, inline_rows; + int n = 0; struct rb_node *nd; for (nd = rb_first(&node->rb_root); nd; nd = rb_next(nd)) { @@ -216,12 +168,6 @@ static int callchain_node__count_rows_rb_tree(struct callchain_node *node) list_for_each_entry(chain, &child->val, list) { ++n; - if (symbol_conf.inline_name) { - inline_rows = - callchain_list__inline_rows(chain); - n += inline_rows; - } - /* We need this because we may not have children */ folded_sign = callchain_list__folded(chain); if (folded_sign == '+') @@ -273,7 +219,7 @@ static int callchain_node__count_rows(struct callchain_node *node) { struct callchain_list *chain; bool unfolded = false; - int n = 0, inline_rows; + int n = 0; if (callchain_param.mode == CHAIN_FLAT) return callchain_node__count_flat_rows(node); @@ -282,10 +228,6 @@ static int callchain_node__count_rows(struct callchain_node *node) list_for_each_entry(chain, &node->val, list) { ++n; - if (symbol_conf.inline_name) { - inline_rows = callchain_list__inline_rows(chain); - n += inline_rows; - } unfolded = chain->unfolded; } @@ -433,19 +375,6 @@ static void hist_entry__init_have_children(struct hist_entry *he) he->init_have_children = true; } -static void hist_entry_init_inline_node(struct hist_entry *he) -{ - if (he->inline_node) - return; - - he->inline_node = inline_node__create(he->ms.map, he->ip); - - if (he->inline_node == NULL) - return; - - he->has_children = true; -} - static bool hist_browser__toggle_fold(struct hist_browser *browser) { struct hist_entry *he = browser->he_selection; @@ -477,12 +406,8 @@ static bool hist_browser__toggle_fold(struct hist_browser *browser) if (he->unfolded) { if (he->leaf) - if (he->inline_node) - he->nr_rows = inline__count_rows( - he->inline_node); - else - he->nr_rows = callchain__count_rows( - &he->sorted_chain); + he->nr_rows = callchain__count_rows( + &he->sorted_chain); else he->nr_rows = hierarchy_count_rows(browser, he, false); @@ -842,71 +767,6 @@ static bool hist_browser__check_dump_full(struct hist_browser *browser __maybe_u #define LEVEL_OFFSET_STEP 3 -static int hist_browser__show_inline(struct hist_browser *browser, - struct inline_node *node, - unsigned short row, - int offset) -{ - struct 
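For reference, the prctl_option.sh script above emits value-indexed C string arrays into the generated prctl_option_array.c. Run against a typical uapi prctl.h, the output looks roughly like this (abridged and illustrative; the exact entries depend on the header version it is run against):

static const char *prctl_options[] = {
    [1] = "SET_PDEATHSIG",
    [2] = "GET_PDEATHSIG",
    [3] = "GET_DUMPABLE",
    [4] = "SET_DUMPABLE",
    /* ... */
};

static const char *prctl_set_mm_options[] = {
    [1] = "START_CODE",
    [2] = "END_CODE",
    /* ... */
};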
inline_list *ilist; - char buf[1024]; - int color, width, first_row; - - first_row = row; - width = browser->b.width - (LEVEL_OFFSET_STEP + 2); - list_for_each_entry(ilist, &node->val, list) { - if ((ilist->filename != NULL) || (ilist->funcname != NULL)) { - color = HE_COLORSET_NORMAL; - if (ui_browser__is_current_entry(&browser->b, row)) - color = HE_COLORSET_SELECTED; - - if (callchain_param.key == CCKEY_ADDRESS || - callchain_param.key == CCKEY_SRCLINE) { - if (ilist->filename != NULL) - scnprintf(buf, sizeof(buf), - "%s:%d (inline)", - ilist->filename, - ilist->line_nr); - else - scnprintf(buf, sizeof(buf), "??"); - } else if (ilist->funcname != NULL) - scnprintf(buf, sizeof(buf), "%s (inline)", - ilist->funcname); - else if (ilist->filename != NULL) - scnprintf(buf, sizeof(buf), - "%s:%d (inline)", - ilist->filename, - ilist->line_nr); - else - scnprintf(buf, sizeof(buf), "??"); - - ui_browser__set_color(&browser->b, color); - hist_browser__gotorc(browser, row, 0); - ui_browser__write_nstring(&browser->b, " ", - LEVEL_OFFSET_STEP + offset); - ui_browser__write_nstring(&browser->b, buf, width); - row++; - } - } - - return row - first_row; -} - -static size_t show_inline_list(struct hist_browser *browser, struct map *map, - u64 ip, int row, int offset) -{ - struct inline_node *node; - int ret; - - node = inline_node__create(map, ip); - if (node == NULL) - return 0; - - ret = hist_browser__show_inline(browser, node, row, offset); - - inline_node__delete(node); - return ret; -} - static int hist_browser__show_callchain_list(struct hist_browser *browser, struct callchain_node *node, struct callchain_list *chain, @@ -918,7 +778,7 @@ static int hist_browser__show_callchain_list(struct hist_browser *browser, char bf[1024], *alloc_str; char buf[64], *alloc_str2; const char *str; - int inline_rows = 0, ret = 1; + int ret = 1; if (arg->row_offset != 0) { arg->row_offset--; @@ -955,12 +815,7 @@ static int hist_browser__show_callchain_list(struct hist_browser *browser, free(alloc_str); free(alloc_str2); - if (symbol_conf.inline_name) { - inline_rows = show_inline_list(browser, chain->ms.map, - chain->ip, row + 1, offset); - } - - return ret + inline_rows; + return ret; } static bool check_percent_display(struct rb_node *node, u64 parent_total) @@ -1384,12 +1239,6 @@ static int hist_browser__show_entry(struct hist_browser *browser, folded_sign = hist_entry__folded(entry); } - if (symbol_conf.inline_name && - (!entry->has_children)) { - hist_entry_init_inline_node(entry); - folded_sign = hist_entry__folded(entry); - } - if (row_offset == 0) { struct hpp_arg arg = { .b = &browser->b, @@ -1421,8 +1270,7 @@ static int hist_browser__show_entry(struct hist_browser *browser, } if (first) { - if (symbol_conf.use_callchain || - symbol_conf.inline_name) { + if (symbol_conf.use_callchain) { ui_browser__printf(&browser->b, "%c ", folded_sign); width -= 2; } @@ -1464,15 +1312,11 @@ static int hist_browser__show_entry(struct hist_browser *browser, .is_current_entry = current_entry, }; - if (entry->inline_node) - printed += hist_browser__show_inline(browser, - entry->inline_node, row, 0); - else - printed += hist_browser__show_callchain(browser, - entry, 1, row, - hist_browser__show_callchain_entry, - &arg, - hist_browser__check_output_full); + printed += hist_browser__show_callchain(browser, + entry, 1, row, + hist_browser__show_callchain_entry, + &arg, + hist_browser__check_output_full); } return printed; diff --git a/tools/perf/ui/progress.c b/tools/perf/ui/progress.c index b5a5df14d702..bbfbc91a0fa4 100644 
--- a/tools/perf/ui/progress.c +++ b/tools/perf/ui/progress.c @@ -28,13 +28,17 @@ void ui_progress__update(struct ui_progress *p, u64 adv) } } -void ui_progress__init(struct ui_progress *p, u64 total, const char *title) +void __ui_progress__init(struct ui_progress *p, u64 total, + const char *title, bool size) { p->curr = 0; p->next = p->step = total / 16 ?: 1; p->total = total; p->title = title; + p->size = size; + if (ui_progress__ops->init) + ui_progress__ops->init(p); } void ui_progress__finish(void) diff --git a/tools/perf/ui/progress.h b/tools/perf/ui/progress.h index 594bbe6935dd..4f52c37b2f09 100644 --- a/tools/perf/ui/progress.h +++ b/tools/perf/ui/progress.h @@ -9,12 +9,22 @@ void ui_progress__finish(void); struct ui_progress { const char *title; u64 curr, next, step, total; + bool size; }; -void ui_progress__init(struct ui_progress *p, u64 total, const char *title); +void __ui_progress__init(struct ui_progress *p, u64 total, + const char *title, bool size); + +#define ui_progress__init(p, total, title) \ + __ui_progress__init(p, total, title, false) + +#define ui_progress__init_size(p, total, title) \ + __ui_progress__init(p, total, title, true) + void ui_progress__update(struct ui_progress *p, u64 adv); struct ui_progress_ops { + void (*init)(struct ui_progress *p); void (*update)(struct ui_progress *p); void (*finish)(void); }; diff --git a/tools/perf/ui/stdio/hist.c b/tools/perf/ui/stdio/hist.c index de2810ae16be..25dd1e0ecc58 100644 --- a/tools/perf/ui/stdio/hist.c +++ b/tools/perf/ui/stdio/hist.c @@ -22,64 +22,6 @@ static size_t callchain__fprintf_left_margin(FILE *fp, int left_margin) return ret; } -static size_t inline__fprintf(struct map *map, u64 ip, int left_margin, - int depth, int depth_mask, FILE *fp) -{ - struct dso *dso; - struct inline_node *node; - struct inline_list *ilist; - int ret = 0, i; - - if (map == NULL) - return 0; - - dso = map->dso; - if (dso == NULL) - return 0; - - node = dso__parse_addr_inlines(dso, - map__rip_2objdump(map, ip)); - if (node == NULL) - return 0; - - list_for_each_entry(ilist, &node->val, list) { - if ((ilist->filename != NULL) || (ilist->funcname != NULL)) { - ret += callchain__fprintf_left_margin(fp, left_margin); - - for (i = 0; i < depth; i++) { - if (depth_mask & (1 << i)) - ret += fprintf(fp, "|"); - else - ret += fprintf(fp, " "); - ret += fprintf(fp, " "); - } - - if (callchain_param.key == CCKEY_ADDRESS || - callchain_param.key == CCKEY_SRCLINE) { - if (ilist->filename != NULL) - ret += fprintf(fp, "%s:%d (inline)", - ilist->filename, - ilist->line_nr); - else - ret += fprintf(fp, "??"); - } else if (ilist->funcname != NULL) - ret += fprintf(fp, "%s (inline)", - ilist->funcname); - else if (ilist->filename != NULL) - ret += fprintf(fp, "%s:%d (inline)", - ilist->filename, - ilist->line_nr); - else - ret += fprintf(fp, "??"); - - ret += fprintf(fp, "\n"); - } - } - - inline_node__delete(node); - return ret; -} - static size_t ipchain__fprintf_graph_line(FILE *fp, int depth, int depth_mask, int left_margin) { @@ -138,9 +80,6 @@ static size_t ipchain__fprintf_graph(FILE *fp, struct callchain_node *node, fputc('\n', fp); free(alloc_str); - if (symbol_conf.inline_name) - ret += inline__fprintf(chain->ms.map, chain->ip, - left_margin, depth, depth_mask, fp); return ret; } @@ -315,13 +254,6 @@ static size_t callchain__fprintf_graph(FILE *fp, struct rb_root *root, if (++entries_printed == callchain_param.print_limit) break; - - if (symbol_conf.inline_name) - ret += inline__fprintf(chain->ms.map, - chain->ip, - left_margin, - 0, 0, 
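The progress rework above threads a 'size' flag through __ui_progress__init() and gives backends an init hook, with two convenience macros selecting plain or size-based progress. A reduced sketch of the same wrapper-macro pattern (simplified struct and ops table, not the perf UI code):

#include <stdbool.h>
#include <stdio.h>

struct progress {
    const char *title;
    unsigned long long curr, total;
    bool size;        /* render counters as byte sizes */
};

struct progress_ops {
    void (*init)(struct progress *p);    /* optional backend hook */
    void (*update)(struct progress *p);
};

static void tui_like_init(struct progress *p)
{
    printf("backend init: %s, total=%llu, size=%d\n",
           p->title, p->total, p->size);
}

static struct progress_ops ops = { .init = tui_like_init };

static void __progress_init(struct progress *p, unsigned long long total,
                            const char *title, bool size)
{
    p->curr = 0;
    p->total = total;
    p->title = title;
    p->size = size;
    if (ops.init)
        ops.init(p);
}

/* Same convenience-macro shape as in progress.h above, on the simplified type. */
#define progress_init(p, total, title)      __progress_init(p, total, title, false)
#define progress_init_size(p, total, title) __progress_init(p, total, title, true)

int main(void)
{
    struct progress p;

    progress_init(&p, 100, "Processing events");
    progress_init_size(&p, 1 << 20, "Reading perf.data");
    return 0;
}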
- fp); } root = &cnode->rb_root; } @@ -601,7 +533,6 @@ static int hist_entry__fprintf(struct hist_entry *he, size_t size, { int ret; int callchain_ret = 0; - int inline_ret = 0; struct perf_hpp hpp = { .buf = bf, .size = size, @@ -623,13 +554,7 @@ static int hist_entry__fprintf(struct hist_entry *he, size_t size, callchain_ret = hist_entry_callchain__fprintf(he, total_period, 0, fp); - if (callchain_ret == 0 && symbol_conf.inline_name) { - inline_ret = inline__fprintf(he->ms.map, he->ip, 0, 0, 0, fp); - ret += inline_ret; - if (inline_ret > 0) - ret += fprintf(fp, "\n"); - } else - ret += callchain_ret; + ret += callchain_ret; return ret; } diff --git a/tools/perf/ui/tui/progress.c b/tools/perf/ui/tui/progress.c index 236bcb620ae4..bc134b82829d 100644 --- a/tools/perf/ui/tui/progress.c +++ b/tools/perf/ui/tui/progress.c @@ -1,13 +1,34 @@ // SPDX-License-Identifier: GPL-2.0 +#include <linux/kernel.h> #include "../cache.h" #include "../progress.h" #include "../libslang.h" #include "../ui.h" #include "tui.h" +#include "units.h" #include "../browser.h" +static void __tui_progress__init(struct ui_progress *p) +{ + p->next = p->step = p->total / (SLtt_Screen_Cols - 2) ?: 1; +} + +static int get_title(struct ui_progress *p, char *buf, size_t size) +{ + char buf_cur[20]; + char buf_tot[20]; + int ret; + + ret = unit_number__scnprintf(buf_cur, sizeof(buf_cur), p->curr); + ret += unit_number__scnprintf(buf_tot, sizeof(buf_tot), p->total); + + return ret + scnprintf(buf, size, "%s [%s/%s]", + p->title, buf_cur, buf_tot); +} + static void tui_progress__update(struct ui_progress *p) { + char buf[100], *title = (char *) p->title; int bar, y; /* * FIXME: We should have a per UI backend way of showing progress, @@ -19,13 +40,18 @@ static void tui_progress__update(struct ui_progress *p) if (p->total == 0) return; + if (p->size) { + get_title(p, buf, sizeof(buf)); + title = buf; + } + ui__refresh_dimensions(false); pthread_mutex_lock(&ui__lock); y = SLtt_Screen_Rows / 2 - 2; SLsmg_set_color(0); SLsmg_draw_box(y, 0, 3, SLtt_Screen_Cols); SLsmg_gotorc(y++, 1); - SLsmg_write_string((char *)p->title); + SLsmg_write_string(title); SLsmg_fill_region(y, 1, 1, SLtt_Screen_Cols - 2, ' '); SLsmg_set_color(HE_COLORSET_SELECTED); bar = ((SLtt_Screen_Cols - 2) * p->curr) / p->total; @@ -50,8 +76,8 @@ static void tui_progress__finish(void) pthread_mutex_unlock(&ui__lock); } -static struct ui_progress_ops tui_progress__ops = -{ +static struct ui_progress_ops tui_progress__ops = { + .init = __tui_progress__init, .update = tui_progress__update, .finish = tui_progress__finish, }; diff --git a/tools/perf/util/Build b/tools/perf/util/Build index 94518c1bf8b6..a3de7916fe63 100644 --- a/tools/perf/util/Build +++ b/tools/perf/util/Build @@ -13,6 +13,7 @@ libperf-y += find_bit.o libperf-y += kallsyms.o libperf-y += levenshtein.o libperf-y += llvm-utils.o +libperf-y += mmap.o libperf-y += memswap.o libperf-y += parse-events.o libperf-y += perf_regs.o @@ -34,6 +35,7 @@ libperf-y += dso.o libperf-y += symbol.o libperf-y += symbol_fprintf.o libperf-y += color.o +libperf-y += metricgroup.o libperf-y += header.o libperf-y += callchain.o libperf-y += values.o @@ -78,6 +80,7 @@ libperf-y += data.o libperf-y += tsc.o libperf-y += cloexec.o libperf-y += call-path.o +libperf-y += rwsem.o libperf-y += thread-stack.o libperf-$(CONFIG_AUXTRACE) += auxtrace.o libperf-$(CONFIG_AUXTRACE) += intel-pt-decoder/ diff --git a/tools/perf/util/annotate.c b/tools/perf/util/annotate.c index aa66791b1bfc..da1c4c4a0dd8 100644 --- a/tools/perf/util/annotate.c 
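get_title() in the TUI progress code above appends "[current/total]" in human-readable units to the progress title when the size flag is set. A self-contained approximation of that formatting, using a local unit helper in place of perf's unit_number__scnprintf():

#include <stdio.h>

/* Scale a byte count to K/M/G, loosely mirroring unit_number__scnprintf(). */
static int unit_scnprintf(char *buf, size_t size, unsigned long long n)
{
    const char units[] = { ' ', 'K', 'M', 'G', 'T' };
    size_t i = 0;

    while (n >= 1024 && i < sizeof(units) - 1) {
        n /= 1024;
        i++;
    }
    return snprintf(buf, size, "%llu%c", n, units[i]);
}

static int get_title(char *buf, size_t size, const char *title,
                     unsigned long long curr, unsigned long long total)
{
    char cur[20], tot[20];

    unit_scnprintf(cur, sizeof(cur), curr);
    unit_scnprintf(tot, sizeof(tot), total);
    return snprintf(buf, size, "%s [%s/%s]", title, cur, tot);
}

int main(void)
{
    char buf[100];

    get_title(buf, sizeof(buf), "Processing events",
              48 * 1024 * 1024, 512 * 1024 * 1024);
    puts(buf);    /* e.g. "Processing events [48M/512M]" */
    return 0;
}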
+++ b/tools/perf/util/annotate.c @@ -49,10 +49,9 @@ struct arch { void *priv; unsigned int model; unsigned int family; - int (*init)(struct arch *arch); + int (*init)(struct arch *arch, char *cpuid); bool (*ins_is_fused)(struct arch *arch, const char *ins1, const char *ins2); - int (*cpuid_parse)(struct arch *arch, char *cpuid); struct { char comment_char; char skip_functions_char; @@ -132,10 +131,10 @@ static struct arch architectures[] = { }, { .name = "x86", + .init = x86__annotate_init, .instructions = x86__instructions, .nr_instructions = ARRAY_SIZE(x86__instructions), .ins_is_fused = x86__ins_is_fused, - .cpuid_parse = x86__cpuid_parse, .objdump = { .comment_char = '#', }, @@ -1457,16 +1456,13 @@ int symbol__disassemble(struct symbol *sym, struct map *map, *parch = arch; if (arch->init) { - err = arch->init(arch); + err = arch->init(arch, cpuid); if (err) { pr_err("%s: failed to initialize %s arch priv area\n", __func__, arch->name); return err; } } - if (arch->cpuid_parse && cpuid) - arch->cpuid_parse(arch, cpuid); - pr_debug("%s: filename=%s, sym=%s, start=%#" PRIx64 ", end=%#" PRIx64 "\n", __func__, symfs_filename, sym->name, map->unmap_ip(map, sym->start), map->unmap_ip(map, sym->end)); diff --git a/tools/perf/util/auxtrace.c b/tools/perf/util/auxtrace.c index 5547457566a7..a33491416400 100644 --- a/tools/perf/util/auxtrace.c +++ b/tools/perf/util/auxtrace.c @@ -208,7 +208,7 @@ static int auxtrace_queues__grow(struct auxtrace_queues *queues, static void *auxtrace_copy_data(u64 size, struct perf_session *session) { - int fd = perf_data_file__fd(session->file); + int fd = perf_data__fd(session->data); void *p; ssize_t ret; @@ -305,7 +305,7 @@ static int auxtrace_queues__add_event_buffer(struct auxtrace_queues *queues, if (session->one_mmap) { buffer->data = buffer->data_offset - session->one_mmap_offset + session->one_mmap_addr; - } else if (perf_data_file__is_pipe(session->file)) { + } else if (perf_data__is_pipe(session->data)) { buffer->data = auxtrace_copy_data(buffer->size, session); if (!buffer->data) return -ENOMEM; diff --git a/tools/perf/util/callchain.c b/tools/perf/util/callchain.c index 6031933d811c..082505d08d72 100644 --- a/tools/perf/util/callchain.c +++ b/tools/perf/util/callchain.c @@ -567,6 +567,7 @@ fill_node(struct callchain_node *node, struct callchain_cursor *cursor) call->ip = cursor_node->ip; call->ms.sym = cursor_node->sym; call->ms.map = map__get(cursor_node->map); + call->srcline = cursor_node->srcline; if (cursor_node->branch) { call->branch_count = 1; @@ -645,103 +646,120 @@ enum match_result { MATCH_GT, }; -static enum match_result match_chain_srcline(struct callchain_cursor_node *node, - struct callchain_list *cnode) +static enum match_result match_chain_strings(const char *left, + const char *right) { - char *left = NULL; - char *right = NULL; enum match_result ret = MATCH_EQ; int cmp; - if (cnode->ms.map) - left = get_srcline(cnode->ms.map->dso, - map__rip_2objdump(cnode->ms.map, cnode->ip), - cnode->ms.sym, true, false); - if (node->map) - right = get_srcline(node->map->dso, - map__rip_2objdump(node->map, node->ip), - node->sym, true, false); - if (left && right) cmp = strcmp(left, right); else if (!left && right) cmp = 1; else if (left && !right) cmp = -1; - else if (cnode->ip == node->ip) - cmp = 0; else - cmp = (cnode->ip < node->ip) ? -1 : 1; + return MATCH_ERROR; if (cmp != 0) ret = cmp < 0 ? 
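Folding the old cpuid_parse callback into arch->init() means each architecture now parses the cpuid string in one place, at init time. A schematic version of that callback shape (toy arch table and a hypothetical cpuid format, not the annotate.c internals):

#include <stdio.h>
#include <string.h>

struct arch {
    const char *name;
    unsigned int family, model;
    int (*init)(struct arch *arch, char *cpuid);    /* now also receives cpuid */
};

/* Hypothetical x86-style init: derive family/model from a cpuid string. */
static int x86_init(struct arch *arch, char *cpuid)
{
    if (cpuid && sscanf(cpuid, "GenuineIntel-%u-%x",
                        &arch->family, &arch->model) != 2)
        return -1;
    return 0;
}

static struct arch architectures[] = {
    { .name = "x86", .init = x86_init },
};

int main(void)
{
    struct arch *arch = &architectures[0];
    char cpuid[] = "GenuineIntel-6-3d";

    if (arch->init && arch->init(arch, cpuid) == 0)
        printf("%s: family %u, model %#x\n",
               arch->name, arch->family, arch->model);
    return 0;
}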
MATCH_LT : MATCH_GT; - free_srcline(left); - free_srcline(right); return ret; } +/* + * We need to always use relative addresses because we're aggregating + * callchains from multiple threads, i.e. different address spaces, so + * comparing absolute addresses make no sense as a symbol in a DSO may end up + * in a different address when used in a different binary or even the same + * binary but with some sort of address randomization technique, thus we need + * to compare just relative addresses. -acme + */ +static enum match_result match_chain_dso_addresses(struct map *left_map, u64 left_ip, + struct map *right_map, u64 right_ip) +{ + struct dso *left_dso = left_map ? left_map->dso : NULL; + struct dso *right_dso = right_map ? right_map->dso : NULL; + + if (left_dso != right_dso) + return left_dso < right_dso ? MATCH_LT : MATCH_GT; + + if (left_ip != right_ip) + return left_ip < right_ip ? MATCH_LT : MATCH_GT; + + return MATCH_EQ; +} + static enum match_result match_chain(struct callchain_cursor_node *node, struct callchain_list *cnode) { - struct symbol *sym = node->sym; - u64 left, right; - struct dso *left_dso = NULL; - struct dso *right_dso = NULL; - - if (callchain_param.key == CCKEY_SRCLINE) { - enum match_result match = match_chain_srcline(node, cnode); + enum match_result match = MATCH_ERROR; + switch (callchain_param.key) { + case CCKEY_SRCLINE: + match = match_chain_strings(cnode->srcline, node->srcline); if (match != MATCH_ERROR) - return match; - } - - if (cnode->ms.sym && sym && callchain_param.key == CCKEY_FUNCTION) { - left = cnode->ms.sym->start; - right = sym->start; - left_dso = cnode->ms.map->dso; - right_dso = node->map->dso; - } else { - left = cnode->ip; - right = node->ip; + break; + /* otherwise fall-back to symbol-based comparison below */ + __fallthrough; + case CCKEY_FUNCTION: + if (node->sym && cnode->ms.sym) { + /* + * Compare inlined frames based on their symbol name + * because different inlined frames will have the same + * symbol start. Otherwise do a faster comparison based + * on the symbol start address. 
+ */ + if (cnode->ms.sym->inlined || node->sym->inlined) { + match = match_chain_strings(cnode->ms.sym->name, + node->sym->name); + if (match != MATCH_ERROR) + break; + } else { + match = match_chain_dso_addresses(cnode->ms.map, cnode->ms.sym->start, + node->map, node->sym->start); + break; + } + } + /* otherwise fall-back to IP-based comparison below */ + __fallthrough; + case CCKEY_ADDRESS: + default: + match = match_chain_dso_addresses(cnode->ms.map, cnode->ip, node->map, node->ip); + break; } - if (left == right && left_dso == right_dso) { - if (node->branch) { - cnode->branch_count++; + if (match == MATCH_EQ && node->branch) { + cnode->branch_count++; - if (node->branch_from) { - /* - * It's "to" of a branch - */ - cnode->brtype_stat.branch_to = true; + if (node->branch_from) { + /* + * It's "to" of a branch + */ + cnode->brtype_stat.branch_to = true; - if (node->branch_flags.predicted) - cnode->predicted_count++; + if (node->branch_flags.predicted) + cnode->predicted_count++; - if (node->branch_flags.abort) - cnode->abort_count++; + if (node->branch_flags.abort) + cnode->abort_count++; - branch_type_count(&cnode->brtype_stat, - &node->branch_flags, - node->branch_from, - node->ip); - } else { - /* - * It's "from" of a branch - */ - cnode->brtype_stat.branch_to = false; - cnode->cycles_count += - node->branch_flags.cycles; - cnode->iter_count += node->nr_loop_iter; - cnode->iter_cycles += node->iter_cycles; - } + branch_type_count(&cnode->brtype_stat, + &node->branch_flags, + node->branch_from, + node->ip); + } else { + /* + * It's "from" of a branch + */ + cnode->brtype_stat.branch_to = false; + cnode->cycles_count += node->branch_flags.cycles; + cnode->iter_count += node->nr_loop_iter; + cnode->iter_cycles += node->iter_cycles; } - - return MATCH_EQ; } - return left > right ? 
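The reworked match_chain() above compares entries by srcline when that key is requested, then by function (by symbol name for inlined frames, which share a start address, otherwise by DSO-relative start address), and finally falls back to DSO plus IP. A compact model of that comparison ladder using plain structs instead of the perf types:

#include <stdio.h>
#include <string.h>

enum match { MATCH_ERROR = -1, MATCH_EQ, MATCH_LT, MATCH_GT };

struct frame {
    const char *srcline;        /* may be NULL */
    const char *sym;            /* may be NULL */
    int inlined;
    const void *dso;            /* identity of the mapping */
    unsigned long long addr;    /* DSO-relative, not absolute */
};

static enum match match_strings(const char *l, const char *r)
{
    int cmp;

    if (!l && !r)
        return MATCH_ERROR;    /* nothing to compare, try the next key */
    cmp = (l && r) ? strcmp(l, r) : (l ? -1 : 1);
    return cmp == 0 ? MATCH_EQ : (cmp < 0 ? MATCH_LT : MATCH_GT);
}

static enum match match_addresses(const void *ld, unsigned long long la,
                                  const void *rd, unsigned long long ra)
{
    if (ld != rd)
        return ld < rd ? MATCH_LT : MATCH_GT;
    if (la != ra)
        return la < ra ? MATCH_LT : MATCH_GT;
    return MATCH_EQ;
}

static enum match match_frames(const struct frame *a, const struct frame *b)
{
    enum match m = match_strings(a->srcline, b->srcline);

    if (m != MATCH_ERROR)
        return m;
    if (a->sym && b->sym && (a->inlined || b->inlined))
        return match_strings(a->sym, b->sym);    /* inlined: same start addr */
    return match_addresses(a->dso, a->addr, b->dso, b->addr);
}

int main(void)
{
    struct frame a = { .sym = "foo", .inlined = 1, .dso = "libx", .addr = 0x10 };
    struct frame b = { .sym = "bar", .inlined = 1, .dso = "libx", .addr = 0x10 };

    /* Differ by name despite identical addresses, so they do not merge. */
    printf("%d\n", match_frames(&a, &b));
    return 0;
}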
MATCH_GT : MATCH_LT; + return match; } /* @@ -970,7 +988,7 @@ merge_chain_branch(struct callchain_cursor *cursor, list_for_each_entry_safe(list, next_list, &src->val, list) { callchain_cursor_append(cursor, list->ip, list->ms.map, list->ms.sym, - false, NULL, 0, 0, 0); + false, NULL, 0, 0, 0, list->srcline); list_del(&list->list); map__zput(list->ms.map); free(list); @@ -1010,7 +1028,8 @@ int callchain_merge(struct callchain_cursor *cursor, int callchain_cursor_append(struct callchain_cursor *cursor, u64 ip, struct map *map, struct symbol *sym, bool branch, struct branch_flags *flags, - int nr_loop_iter, u64 iter_cycles, u64 branch_from) + int nr_loop_iter, u64 iter_cycles, u64 branch_from, + const char *srcline) { struct callchain_cursor_node *node = *cursor->last; @@ -1029,6 +1048,7 @@ int callchain_cursor_append(struct callchain_cursor *cursor, node->branch = branch; node->nr_loop_iter = nr_loop_iter; node->iter_cycles = iter_cycles; + node->srcline = srcline; if (flags) memcpy(&node->branch_flags, flags, @@ -1071,10 +1091,8 @@ int fill_callchain_info(struct addr_location *al, struct callchain_cursor_node * { al->map = node->map; al->sym = node->sym; - if (node->map) - al->addr = node->map->map_ip(node->map, node->ip); - else - al->addr = node->ip; + al->srcline = node->srcline; + al->addr = node->ip; if (al->sym == NULL) { if (hide_unresolved) @@ -1116,16 +1134,15 @@ char *callchain_list__sym_name(struct callchain_list *cl, int printed; if (cl->ms.sym) { - if (show_srcline && cl->ms.map && !cl->srcline) - cl->srcline = get_srcline(cl->ms.map->dso, - map__rip_2objdump(cl->ms.map, - cl->ip), - cl->ms.sym, false, show_addr); - if (cl->srcline) - printed = scnprintf(bf, bfsize, "%s %s", - cl->ms.sym->name, cl->srcline); + const char *inlined = cl->ms.sym->inlined ? " (inlined)" : ""; + + if (show_srcline && cl->srcline) + printed = scnprintf(bf, bfsize, "%s %s%s", + cl->ms.sym->name, cl->srcline, + inlined); else - printed = scnprintf(bf, bfsize, "%s", cl->ms.sym->name); + printed = scnprintf(bf, bfsize, "%s%s", + cl->ms.sym->name, inlined); } else printed = scnprintf(bf, bfsize, "%#" PRIx64, cl->ip); @@ -1533,7 +1550,7 @@ int callchain_cursor__copy(struct callchain_cursor *dst, node->branch, &node->branch_flags, node->nr_loop_iter, node->iter_cycles, - node->branch_from); + node->branch_from, node->srcline); if (rc) break; diff --git a/tools/perf/util/callchain.h b/tools/perf/util/callchain.h index f967aa47d0a1..b79ef2478a57 100644 --- a/tools/perf/util/callchain.h +++ b/tools/perf/util/callchain.h @@ -122,7 +122,7 @@ struct callchain_list { u64 iter_count; u64 iter_cycles; struct branch_type_stat brtype_stat; - char *srcline; + const char *srcline; struct list_head list; }; @@ -136,6 +136,7 @@ struct callchain_cursor_node { u64 ip; struct map *map; struct symbol *sym; + const char *srcline; bool branch; struct branch_flags branch_flags; u64 branch_from; @@ -202,7 +203,8 @@ static inline void callchain_cursor_reset(struct callchain_cursor *cursor) int callchain_cursor_append(struct callchain_cursor *cursor, u64 ip, struct map *map, struct symbol *sym, bool branch, struct branch_flags *flags, - int nr_loop_iter, u64 iter_cycles, u64 branch_from); + int nr_loop_iter, u64 iter_cycles, u64 branch_from, + const char *srcline); /* Close a cursor writing session. 
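callchain_list__sym_name() above now appends " (inlined)" to entries whose symbol came from an inline frame, printing the cached srcline when present and falling back to the raw address when there is no symbol. A trivial illustration of that formatting:

#include <stdio.h>

/* Mirrors the shape of callchain_list__sym_name(): symbol plus optional
 * srcline, with an " (inlined)" suffix, falling back to the raw address. */
static int sym_name(char *bf, size_t size, const char *sym,
                    const char *srcline, int inlined, unsigned long long ip)
{
    const char *suffix = inlined ? " (inlined)" : "";

    if (!sym)
        return snprintf(bf, size, "%#llx", ip);
    if (srcline)
        return snprintf(bf, size, "%s %s%s", sym, srcline, suffix);
    return snprintf(bf, size, "%s%s", sym, suffix);
}

int main(void)
{
    char buf[128];

    sym_name(buf, sizeof(buf), "vector_push_back", "vector.h:120", 1, 0);
    puts(buf);    /* vector_push_back vector.h:120 (inlined) */
    sym_name(buf, sizeof(buf), NULL, NULL, 0, 0xffffffff8100c0deULL);
    puts(buf);
    return 0;
}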
Initialize for the reader */ static inline void callchain_cursor_commit(struct callchain_cursor *cursor) diff --git a/tools/perf/util/comm.c b/tools/perf/util/comm.c index 8808570f8e9c..7798a2cc8a86 100644 --- a/tools/perf/util/comm.c +++ b/tools/perf/util/comm.c @@ -6,6 +6,7 @@ #include <stdio.h> #include <string.h> #include <linux/refcount.h> +#include "rwsem.h" struct comm_str { char *str; @@ -15,6 +16,7 @@ struct comm_str { /* Should perhaps be moved to struct machine */ static struct rb_root comm_str_root; +static struct rw_semaphore comm_str_lock = {.lock = PTHREAD_RWLOCK_INITIALIZER,}; static struct comm_str *comm_str__get(struct comm_str *cs) { @@ -26,7 +28,9 @@ static struct comm_str *comm_str__get(struct comm_str *cs) static void comm_str__put(struct comm_str *cs) { if (cs && refcount_dec_and_test(&cs->refcnt)) { + down_write(&comm_str_lock); rb_erase(&cs->rb_node, &comm_str_root); + up_write(&comm_str_lock); zfree(&cs->str); free(cs); } @@ -51,7 +55,8 @@ static struct comm_str *comm_str__alloc(const char *str) return cs; } -static struct comm_str *comm_str__findnew(const char *str, struct rb_root *root) +static +struct comm_str *__comm_str__findnew(const char *str, struct rb_root *root) { struct rb_node **p = &root->rb_node; struct rb_node *parent = NULL; @@ -82,6 +87,17 @@ static struct comm_str *comm_str__findnew(const char *str, struct rb_root *root) return new; } +static struct comm_str *comm_str__findnew(const char *str, struct rb_root *root) +{ + struct comm_str *cs; + + down_write(&comm_str_lock); + cs = __comm_str__findnew(str, root); + up_write(&comm_str_lock); + + return cs; +} + struct comm *comm__new(const char *str, u64 timestamp, bool exec) { struct comm *comm = zalloc(sizeof(*comm)); diff --git a/tools/perf/util/config.c b/tools/perf/util/config.c index 4b893c622236..84eb9393c7db 100644 --- a/tools/perf/util/config.c +++ b/tools/perf/util/config.c @@ -701,10 +701,7 @@ struct perf_config_set *perf_config_set__new(void) if (set) { INIT_LIST_HEAD(&set->sections); - if (perf_config_set__init(set) < 0) { - perf_config_set__delete(set); - set = NULL; - } + perf_config_set__init(set); } return set; diff --git a/tools/perf/util/data-convert-bt.c b/tools/perf/util/data-convert-bt.c index 2346cecb8ea2..5744c12641a5 100644 --- a/tools/perf/util/data-convert-bt.c +++ b/tools/perf/util/data-convert-bt.c @@ -1577,10 +1577,10 @@ int bt_convert__perf2ctf(const char *input, const char *path, struct perf_data_convert_opts *opts) { struct perf_session *session; - struct perf_data_file file = { - .path = input, - .mode = PERF_DATA_MODE_READ, - .force = opts->force, + struct perf_data data = { + .file.path = input, + .mode = PERF_DATA_MODE_READ, + .force = opts->force, }; struct convert c = { .tool = { @@ -1619,7 +1619,7 @@ int bt_convert__perf2ctf(const char *input, const char *path, err = -1; /* perf.data session */ - session = perf_session__new(&file, 0, &c.tool); + session = perf_session__new(&data, 0, &c.tool); if (!session) goto free_writer; @@ -1650,7 +1650,7 @@ int bt_convert__perf2ctf(const char *input, const char *path, fprintf(stderr, "[ perf data convert: Converted '%s' into CTF data '%s' ]\n", - file.path, path); + data.file.path, path); fprintf(stderr, "[ perf data convert: Converted and wrote %.3f MB (%" PRIu64 " samples", diff --git a/tools/perf/util/data.c b/tools/perf/util/data.c index 79192758bdb3..48094fde0a68 100644 --- a/tools/perf/util/data.c +++ b/tools/perf/util/data.c @@ -4,6 +4,7 @@ #include <sys/types.h> #include <sys/stat.h> #include <errno.h> +#include 
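comm.c above serializes the shared comm_str rbtree with the new rw_semaphore helper, which (judging by the PTHREAD_RWLOCK_INITIALIZER initializer) is a thin wrapper over a pthread rwlock. A minimal version of such a wrapper and its use around shared state, the same down_write-around-findnew pattern:

#include <pthread.h>
#include <stdio.h>

/* Thin rw_semaphore-style wrapper over a pthread rwlock. */
struct rw_semaphore {
    pthread_rwlock_t lock;
};

static int down_read(struct rw_semaphore *sem)  { return pthread_rwlock_rdlock(&sem->lock); }
static int down_write(struct rw_semaphore *sem) { return pthread_rwlock_wrlock(&sem->lock); }
static int up_read(struct rw_semaphore *sem)    { return pthread_rwlock_unlock(&sem->lock); }
static int up_write(struct rw_semaphore *sem)   { return pthread_rwlock_unlock(&sem->lock); }

/* Shared state guarded by the semaphore, like the comm_str rbtree root. */
static struct rw_semaphore tree_lock = { .lock = PTHREAD_RWLOCK_INITIALIZER };
static int tree_entries;

static int tree_findnew(void)
{
    int n;

    down_write(&tree_lock);    /* lookups may insert, so take it for writing */
    n = ++tree_entries;
    up_write(&tree_lock);
    return n;
}

int main(void)
{
    printf("entries: %d\n", tree_findnew());
    down_read(&tree_lock);
    printf("entries (read-locked): %d\n", tree_entries);
    up_read(&tree_lock);
    return 0;
}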
<fcntl.h> #include <unistd.h> #include <string.h> @@ -21,56 +22,56 @@ #endif #endif -static bool check_pipe(struct perf_data_file *file) +static bool check_pipe(struct perf_data *data) { struct stat st; bool is_pipe = false; - int fd = perf_data_file__is_read(file) ? + int fd = perf_data__is_read(data) ? STDIN_FILENO : STDOUT_FILENO; - if (!file->path) { + if (!data->file.path) { if (!fstat(fd, &st) && S_ISFIFO(st.st_mode)) is_pipe = true; } else { - if (!strcmp(file->path, "-")) + if (!strcmp(data->file.path, "-")) is_pipe = true; } if (is_pipe) - file->fd = fd; + data->file.fd = fd; - return file->is_pipe = is_pipe; + return data->is_pipe = is_pipe; } -static int check_backup(struct perf_data_file *file) +static int check_backup(struct perf_data *data) { struct stat st; - if (!stat(file->path, &st) && st.st_size) { + if (!stat(data->file.path, &st) && st.st_size) { /* TODO check errors properly */ char oldname[PATH_MAX]; snprintf(oldname, sizeof(oldname), "%s.old", - file->path); + data->file.path); unlink(oldname); - rename(file->path, oldname); + rename(data->file.path, oldname); } return 0; } -static int open_file_read(struct perf_data_file *file) +static int open_file_read(struct perf_data *data) { struct stat st; int fd; char sbuf[STRERR_BUFSIZE]; - fd = open(file->path, O_RDONLY); + fd = open(data->file.path, O_RDONLY); if (fd < 0) { int err = errno; - pr_err("failed to open %s: %s", file->path, + pr_err("failed to open %s: %s", data->file.path, str_error_r(err, sbuf, sizeof(sbuf))); - if (err == ENOENT && !strcmp(file->path, "perf.data")) + if (err == ENOENT && !strcmp(data->file.path, "perf.data")) pr_err(" (try 'perf record' first)"); pr_err("\n"); return -err; @@ -79,19 +80,19 @@ static int open_file_read(struct perf_data_file *file) if (fstat(fd, &st) < 0) goto out_close; - if (!file->force && st.st_uid && (st.st_uid != geteuid())) { + if (!data->force && st.st_uid && (st.st_uid != geteuid())) { pr_err("File %s not owned by current user or root (use -f to override)\n", - file->path); + data->file.path); goto out_close; } if (!st.st_size) { - pr_info("zero-sized file (%s), nothing to do!\n", - file->path); + pr_info("zero-sized data (%s), nothing to do!\n", + data->file.path); goto out_close; } - file->size = st.st_size; + data->size = st.st_size; return fd; out_close: @@ -99,49 +100,49 @@ static int open_file_read(struct perf_data_file *file) return -1; } -static int open_file_write(struct perf_data_file *file) +static int open_file_write(struct perf_data *data) { int fd; char sbuf[STRERR_BUFSIZE]; - if (check_backup(file)) + if (check_backup(data)) return -1; - fd = open(file->path, O_CREAT|O_RDWR|O_TRUNC|O_CLOEXEC, + fd = open(data->file.path, O_CREAT|O_RDWR|O_TRUNC|O_CLOEXEC, S_IRUSR|S_IWUSR); if (fd < 0) - pr_err("failed to open %s : %s\n", file->path, + pr_err("failed to open %s : %s\n", data->file.path, str_error_r(errno, sbuf, sizeof(sbuf))); return fd; } -static int open_file(struct perf_data_file *file) +static int open_file(struct perf_data *data) { int fd; - fd = perf_data_file__is_read(file) ? - open_file_read(file) : open_file_write(file); + fd = perf_data__is_read(data) ? + open_file_read(data) : open_file_write(data); - file->fd = fd; + data->file.fd = fd; return fd < 0 ? 
-1 : 0; } -int perf_data_file__open(struct perf_data_file *file) +int perf_data__open(struct perf_data *data) { - if (check_pipe(file)) + if (check_pipe(data)) return 0; - if (!file->path) - file->path = "perf.data"; + if (!data->file.path) + data->file.path = "perf.data"; - return open_file(file); + return open_file(data); } -void perf_data_file__close(struct perf_data_file *file) +void perf_data__close(struct perf_data *data) { - close(file->fd); + close(data->file.fd); } ssize_t perf_data_file__write(struct perf_data_file *file, @@ -150,42 +151,48 @@ ssize_t perf_data_file__write(struct perf_data_file *file, return writen(file->fd, buf, size); } -int perf_data_file__switch(struct perf_data_file *file, +ssize_t perf_data__write(struct perf_data *data, + void *buf, size_t size) +{ + return perf_data_file__write(&data->file, buf, size); +} + +int perf_data__switch(struct perf_data *data, const char *postfix, size_t pos, bool at_exit) { char *new_filepath; int ret; - if (check_pipe(file)) + if (check_pipe(data)) return -EINVAL; - if (perf_data_file__is_read(file)) + if (perf_data__is_read(data)) return -EINVAL; - if (asprintf(&new_filepath, "%s.%s", file->path, postfix) < 0) + if (asprintf(&new_filepath, "%s.%s", data->file.path, postfix) < 0) return -ENOMEM; /* * Only fire a warning, don't return error, continue fill * original file. */ - if (rename(file->path, new_filepath)) - pr_warning("Failed to rename %s to %s\n", file->path, new_filepath); + if (rename(data->file.path, new_filepath)) + pr_warning("Failed to rename %s to %s\n", data->file.path, new_filepath); if (!at_exit) { - close(file->fd); - ret = perf_data_file__open(file); + close(data->file.fd); + ret = perf_data__open(data); if (ret < 0) goto out; - if (lseek(file->fd, pos, SEEK_SET) == (off_t)-1) { + if (lseek(data->file.fd, pos, SEEK_SET) == (off_t)-1) { ret = -errno; pr_debug("Failed to lseek to %zu: %s", pos, strerror(errno)); goto out; } } - ret = file->fd; + ret = data->file.fd; out: free(new_filepath); return ret; diff --git a/tools/perf/util/data.h b/tools/perf/util/data.h index 80241ba78101..4828f7feea89 100644 --- a/tools/perf/util/data.h +++ b/tools/perf/util/data.h @@ -10,51 +10,57 @@ enum perf_data_mode { }; struct perf_data_file { - const char *path; - int fd; + const char *path; + int fd; +}; + +struct perf_data { + struct perf_data_file file; bool is_pipe; bool force; unsigned long size; enum perf_data_mode mode; }; -static inline bool perf_data_file__is_read(struct perf_data_file *file) +static inline bool perf_data__is_read(struct perf_data *data) { - return file->mode == PERF_DATA_MODE_READ; + return data->mode == PERF_DATA_MODE_READ; } -static inline bool perf_data_file__is_write(struct perf_data_file *file) +static inline bool perf_data__is_write(struct perf_data *data) { - return file->mode == PERF_DATA_MODE_WRITE; + return data->mode == PERF_DATA_MODE_WRITE; } -static inline int perf_data_file__is_pipe(struct perf_data_file *file) +static inline int perf_data__is_pipe(struct perf_data *data) { - return file->is_pipe; + return data->is_pipe; } -static inline int perf_data_file__fd(struct perf_data_file *file) +static inline int perf_data__fd(struct perf_data *data) { - return file->fd; + return data->file.fd; } -static inline unsigned long perf_data_file__size(struct perf_data_file *file) +static inline unsigned long perf_data__size(struct perf_data *data) { - return file->size; + return data->size; } -int perf_data_file__open(struct perf_data_file *file); -void perf_data_file__close(struct perf_data_file 
*file); +int perf_data__open(struct perf_data *data); +void perf_data__close(struct perf_data *data); +ssize_t perf_data__write(struct perf_data *data, + void *buf, size_t size); ssize_t perf_data_file__write(struct perf_data_file *file, void *buf, size_t size); /* * If at_exit is set, only rename current perf.data to - * perf.data.<postfix>, continue write on original file. + * perf.data.<postfix>, continue write on original data. * Set at_exit when flushing the last output. * * Return value is fd of new output. */ -int perf_data_file__switch(struct perf_data_file *file, +int perf_data__switch(struct perf_data *data, const char *postfix, size_t pos, bool at_exit); #endif /* __PERF_DATA_H */ diff --git a/tools/perf/util/debug.c b/tools/perf/util/debug.c index dc8b53b6950e..f3a71db83947 100644 --- a/tools/perf/util/debug.c +++ b/tools/perf/util/debug.c @@ -112,50 +112,53 @@ int dump_printf(const char *fmt, ...) return ret; } -static void trace_event_printer(enum binary_printer_ops op, - unsigned int val, void *extra) +static int trace_event_printer(enum binary_printer_ops op, + unsigned int val, void *extra, FILE *fp) { const char *color = PERF_COLOR_BLUE; union perf_event *event = (union perf_event *)extra; unsigned char ch = (unsigned char)val; + int printed = 0; switch (op) { case BINARY_PRINT_DATA_BEGIN: - printf("."); - color_fprintf(stdout, color, "\n. ... raw event: size %d bytes\n", - event->header.size); + printed += fprintf(fp, "."); + printed += color_fprintf(fp, color, "\n. ... raw event: size %d bytes\n", + event->header.size); break; case BINARY_PRINT_LINE_BEGIN: - printf("."); + printed += fprintf(fp, "."); break; case BINARY_PRINT_ADDR: - color_fprintf(stdout, color, " %04x: ", val); + printed += color_fprintf(fp, color, " %04x: ", val); break; case BINARY_PRINT_NUM_DATA: - color_fprintf(stdout, color, " %02x", val); + printed += color_fprintf(fp, color, " %02x", val); break; case BINARY_PRINT_NUM_PAD: - color_fprintf(stdout, color, " "); + printed += color_fprintf(fp, color, " "); break; case BINARY_PRINT_SEP: - color_fprintf(stdout, color, " "); + printed += color_fprintf(fp, color, " "); break; case BINARY_PRINT_CHAR_DATA: - color_fprintf(stdout, color, "%c", + printed += color_fprintf(fp, color, "%c", isprint(ch) ? 
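The data.c/data.h changes split the old perf_data_file into an outer perf_data that embeds a perf_data_file, so per-file fields (path, fd) move under .file while session-wide state stays on the outer struct, with accessors like perf_data__fd() reaching through the embedding. The shape of that split, simplified to a hypothetical field subset:

#include <stdbool.h>
#include <stdio.h>

enum data_mode { DATA_MODE_WRITE, DATA_MODE_READ };

struct data_file {
    const char *path;
    int fd;
};

struct data {
    struct data_file file;    /* per-file bits live here now */
    bool is_pipe;
    bool force;
    unsigned long size;
    enum data_mode mode;
};

static inline int data__fd(struct data *d)       { return d->file.fd; }
static inline bool data__is_read(struct data *d) { return d->mode == DATA_MODE_READ; }

int main(void)
{
    struct data d = {
        .file = { .path = "perf.data", .fd = -1 },
        .mode = DATA_MODE_READ,
    };

    printf("%s: fd=%d read=%d\n", d.file.path, data__fd(&d), data__is_read(&d));
    return 0;
}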
ch : '.'); break; case BINARY_PRINT_CHAR_PAD: - color_fprintf(stdout, color, " "); + printed += color_fprintf(fp, color, " "); break; case BINARY_PRINT_LINE_END: - color_fprintf(stdout, color, "\n"); + printed += color_fprintf(fp, color, "\n"); break; case BINARY_PRINT_DATA_END: - printf("\n"); + printed += fprintf(fp, "\n"); break; default: break; } + + return printed; } void trace_event(union perf_event *event) diff --git a/tools/perf/util/dso.c b/tools/perf/util/dso.c index 00c98c968cb1..d5b6f7f5baff 100644 --- a/tools/perf/util/dso.c +++ b/tools/perf/util/dso.c @@ -7,9 +7,11 @@ #include <sys/stat.h> #include <unistd.h> #include <errno.h> +#include <fcntl.h> #include "compress.h" #include "path.h" #include "symbol.h" +#include "srcline.h" #include "dso.h" #include "machine.h" #include "auxtrace.h" @@ -1201,6 +1203,8 @@ struct dso *dso__new(const char *name) for (i = 0; i < MAP__NR_TYPES; ++i) dso->symbols[i] = dso->symbol_names[i] = RB_ROOT; dso->data.cache = RB_ROOT; + dso->inlined_nodes = RB_ROOT; + dso->srclines = RB_ROOT; dso->data.fd = -1; dso->data.status = DSO_DATA_STATUS_UNKNOWN; dso->symtab_type = DSO_BINARY_TYPE__NOT_FOUND; @@ -1232,6 +1236,10 @@ void dso__delete(struct dso *dso) if (!RB_EMPTY_NODE(&dso->rb_node)) pr_err("DSO %s is still in rbtree when being deleted!\n", dso->long_name); + + /* free inlines first, as they reference symbols */ + inlines__tree_delete(&dso->inlined_nodes); + srcline__tree_delete(&dso->srclines); for (i = 0; i < MAP__NR_TYPES; ++i) symbols__delete(&dso->symbols[i]); @@ -1366,9 +1374,9 @@ void __dsos__add(struct dsos *dsos, struct dso *dso) void dsos__add(struct dsos *dsos, struct dso *dso) { - pthread_rwlock_wrlock(&dsos->lock); + down_write(&dsos->lock); __dsos__add(dsos, dso); - pthread_rwlock_unlock(&dsos->lock); + up_write(&dsos->lock); } struct dso *__dsos__find(struct dsos *dsos, const char *name, bool cmp_short) @@ -1387,9 +1395,9 @@ struct dso *__dsos__find(struct dsos *dsos, const char *name, bool cmp_short) struct dso *dsos__find(struct dsos *dsos, const char *name, bool cmp_short) { struct dso *dso; - pthread_rwlock_rdlock(&dsos->lock); + down_read(&dsos->lock); dso = __dsos__find(dsos, name, cmp_short); - pthread_rwlock_unlock(&dsos->lock); + up_read(&dsos->lock); return dso; } @@ -1416,9 +1424,9 @@ struct dso *__dsos__findnew(struct dsos *dsos, const char *name) struct dso *dsos__findnew(struct dsos *dsos, const char *name) { struct dso *dso; - pthread_rwlock_wrlock(&dsos->lock); + down_write(&dsos->lock); dso = dso__get(__dsos__findnew(dsos, name)); - pthread_rwlock_unlock(&dsos->lock); + up_write(&dsos->lock); return dso; } diff --git a/tools/perf/util/dso.h b/tools/perf/util/dso.h index 926ff2e7f668..c229dbe0277a 100644 --- a/tools/perf/util/dso.h +++ b/tools/perf/util/dso.h @@ -7,7 +7,7 @@ #include <linux/rbtree.h> #include <sys/types.h> #include <stdbool.h> -#include <pthread.h> +#include "rwsem.h" #include <linux/types.h> #include <linux/bitops.h> #include "map.h" @@ -130,7 +130,7 @@ struct dso_cache { struct dsos { struct list_head head; struct rb_root root; /* rbtree root sorted by long name */ - pthread_rwlock_t lock; + struct rw_semaphore lock; }; struct auxtrace_cache; @@ -142,6 +142,8 @@ struct dso { struct rb_root *root; /* root of rbtree that rb_node is in */ struct rb_root symbols[MAP__NR_TYPES]; struct rb_root symbol_names[MAP__NR_TYPES]; + struct rb_root inlined_nodes; + struct rb_root srclines; struct { u64 addr; struct symbol *symbol; diff --git a/tools/perf/util/event.c b/tools/perf/util/event.c index 
fc690fecbfd6..97a8ef9980db 100644 --- a/tools/perf/util/event.c +++ b/tools/perf/util/event.c @@ -1,6 +1,7 @@ // SPDX-License-Identifier: GPL-2.0 #include <dirent.h> #include <errno.h> +#include <fcntl.h> #include <inttypes.h> #include <linux/kernel.h> #include <linux/types.h> @@ -678,21 +679,21 @@ out: return err; } -int perf_event__synthesize_threads(struct perf_tool *tool, - perf_event__handler_t process, - struct machine *machine, - bool mmap_data, - unsigned int proc_map_timeout) +static int __perf_event__synthesize_threads(struct perf_tool *tool, + perf_event__handler_t process, + struct machine *machine, + bool mmap_data, + unsigned int proc_map_timeout, + struct dirent **dirent, + int start, + int num) { - DIR *proc; - char proc_path[PATH_MAX]; - struct dirent *dirent; union perf_event *comm_event, *mmap_event, *fork_event; union perf_event *namespaces_event; int err = -1; - - if (machine__is_default_guest(machine)) - return 0; + char *end; + pid_t pid; + int i; comm_event = malloc(sizeof(comm_event->comm) + machine->id_hdr_size); if (comm_event == NULL) @@ -712,31 +713,25 @@ int perf_event__synthesize_threads(struct perf_tool *tool, if (namespaces_event == NULL) goto out_free_fork; - snprintf(proc_path, sizeof(proc_path), "%s/proc", machine->root_dir); - proc = opendir(proc_path); - - if (proc == NULL) - goto out_free_namespaces; - - while ((dirent = readdir(proc)) != NULL) { - char *end; - pid_t pid = strtol(dirent->d_name, &end, 10); + for (i = start; i < start + num; i++) { + if (!isdigit(dirent[i]->d_name[0])) + continue; - if (*end) /* only interested in proper numerical dirents */ + pid = (pid_t)strtol(dirent[i]->d_name, &end, 10); + /* only interested in proper numerical dirents */ + if (*end) continue; /* - * We may race with exiting thread, so don't stop just because - * one thread couldn't be synthesized. - */ + * We may race with exiting thread, so don't stop just because + * one thread couldn't be synthesized. 
+ */ __event__synthesize_thread(comm_event, mmap_event, fork_event, namespaces_event, pid, 1, process, tool, machine, mmap_data, proc_map_timeout); } - err = 0; - closedir(proc); -out_free_namespaces: + free(namespaces_event); out_free_fork: free(fork_event); @@ -748,6 +743,118 @@ out: return err; } +struct synthesize_threads_arg { + struct perf_tool *tool; + perf_event__handler_t process; + struct machine *machine; + bool mmap_data; + unsigned int proc_map_timeout; + struct dirent **dirent; + int num; + int start; +}; + +static void *synthesize_threads_worker(void *arg) +{ + struct synthesize_threads_arg *args = arg; + + __perf_event__synthesize_threads(args->tool, args->process, + args->machine, args->mmap_data, + args->proc_map_timeout, args->dirent, + args->start, args->num); + return NULL; +} + +int perf_event__synthesize_threads(struct perf_tool *tool, + perf_event__handler_t process, + struct machine *machine, + bool mmap_data, + unsigned int proc_map_timeout, + unsigned int nr_threads_synthesize) +{ + struct synthesize_threads_arg *args = NULL; + pthread_t *synthesize_threads = NULL; + char proc_path[PATH_MAX]; + struct dirent **dirent; + int num_per_thread; + int m, n, i, j; + int thread_nr; + int base = 0; + int err = -1; + + + if (machine__is_default_guest(machine)) + return 0; + + snprintf(proc_path, sizeof(proc_path), "%s/proc", machine->root_dir); + n = scandir(proc_path, &dirent, 0, alphasort); + if (n < 0) + return err; + + if (nr_threads_synthesize == UINT_MAX) + thread_nr = sysconf(_SC_NPROCESSORS_ONLN); + else + thread_nr = nr_threads_synthesize; + + if (thread_nr <= 1) { + err = __perf_event__synthesize_threads(tool, process, + machine, mmap_data, + proc_map_timeout, + dirent, base, n); + goto free_dirent; + } + if (thread_nr > n) + thread_nr = n; + + synthesize_threads = calloc(sizeof(pthread_t), thread_nr); + if (synthesize_threads == NULL) + goto free_dirent; + + args = calloc(sizeof(*args), thread_nr); + if (args == NULL) + goto free_threads; + + num_per_thread = n / thread_nr; + m = n % thread_nr; + for (i = 0; i < thread_nr; i++) { + args[i].tool = tool; + args[i].process = process; + args[i].machine = machine; + args[i].mmap_data = mmap_data; + args[i].proc_map_timeout = proc_map_timeout; + args[i].dirent = dirent; + } + for (i = 0; i < m; i++) { + args[i].num = num_per_thread + 1; + args[i].start = i * args[i].num; + } + if (i != 0) + base = args[i-1].start + args[i-1].num; + for (j = i; j < thread_nr; j++) { + args[j].num = num_per_thread; + args[j].start = base + (j - i) * args[i].num; + } + + for (i = 0; i < thread_nr; i++) { + if (pthread_create(&synthesize_threads[i], NULL, + synthesize_threads_worker, &args[i])) + goto out_join; + } + err = 0; +out_join: + for (i = 0; i < thread_nr; i++) + pthread_join(synthesize_threads[i], NULL); + free(args); +free_threads: + free(synthesize_threads); +free_dirent: + for (i = 0; i < n; i++) + free(dirent[i]); + free(dirent); + + return err; +} + struct process_symbol_args { const char *name; u64 start; @@ -1498,6 +1605,7 @@ int machine__resolve(struct machine *machine, struct addr_location *al, al->sym = NULL; al->cpu = sample->cpu; al->socket = -1; + al->srcline = NULL; if (al->cpu >= 0) { struct perf_env *env = machine->env; diff --git a/tools/perf/util/event.h b/tools/perf/util/event.h index 5524ee69279c..1ae95efbfb95 100644 --- a/tools/perf/util/event.h +++ b/tools/perf/util/event.h @@ -681,7 +681,8 @@ int perf_event__synthesize_cpu_map(struct perf_tool *tool, int perf_event__synthesize_threads(struct perf_tool 
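The new threaded perf_event__synthesize_threads() above splits the scandir() result across worker threads: n directory entries over thread_nr workers, with the first n % thread_nr workers taking one extra entry, then a plain pthread_create/pthread_join fan-out. A self-contained sketch of that partitioning (dummy work in place of the actual event synthesis):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct worker_arg {
    int start;    /* first entry this worker owns */
    int num;      /* how many entries it processes */
};

static void *worker(void *arg)
{
    struct worker_arg *w = arg;

    printf("worker: entries [%d, %d)\n", w->start, w->start + w->num);
    return NULL;
}

int main(void)
{
    int n = 10;            /* e.g. number of /proc dirents */
    int thread_nr = 4;     /* e.g. nr_threads_synthesize or online CPUs */
    int per = n / thread_nr, extra = n % thread_nr;
    pthread_t *tids = calloc(thread_nr, sizeof(*tids));
    struct worker_arg *args = calloc(thread_nr, sizeof(*args));
    int i, start = 0;

    if (!tids || !args)
        return 1;

    for (i = 0; i < thread_nr; i++) {
        args[i].num = per + (i < extra ? 1 : 0);    /* first 'extra' get one more */
        args[i].start = start;
        start += args[i].num;
        pthread_create(&tids[i], NULL, worker, &args[i]);
    }
    for (i = 0; i < thread_nr; i++)
        pthread_join(tids[i], NULL);

    free(args);
    free(tids);
    return 0;
}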
*tool, perf_event__handler_t process, struct machine *machine, bool mmap_data, - unsigned int proc_map_timeout); + unsigned int proc_map_timeout, + unsigned int nr_threads_synthesize); int perf_event__synthesize_kernel_mmap(struct perf_tool *tool, perf_event__handler_t process, struct machine *machine); diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c index 6a0d7ffbeba0..c6c891e154a6 100644 --- a/tools/perf/util/evlist.c +++ b/tools/perf/util/evlist.c @@ -33,9 +33,6 @@ #include <linux/log2.h> #include <linux/err.h> -static void perf_mmap__munmap(struct perf_mmap *map); -static void perf_mmap__put(struct perf_mmap *map); - #define FD(e, x, y) (*(int *)xyarray__entry(e->fd, x, y)) #define SID(e, x, y) xyarray__entry(e->sample_id, x, y) @@ -704,129 +701,6 @@ static int perf_evlist__resume(struct perf_evlist *evlist) return perf_evlist__set_paused(evlist, false); } -/* When check_messup is true, 'end' must points to a good entry */ -static union perf_event * -perf_mmap__read(struct perf_mmap *md, bool check_messup, u64 start, - u64 end, u64 *prev) -{ - unsigned char *data = md->base + page_size; - union perf_event *event = NULL; - int diff = end - start; - - if (check_messup) { - /* - * If we're further behind than half the buffer, there's a chance - * the writer will bite our tail and mess up the samples under us. - * - * If we somehow ended up ahead of the 'end', we got messed up. - * - * In either case, truncate and restart at 'end'. - */ - if (diff > md->mask / 2 || diff < 0) { - fprintf(stderr, "WARNING: failed to keep up with mmap data.\n"); - - /* - * 'end' points to a known good entry, start there. - */ - start = end; - diff = 0; - } - } - - if (diff >= (int)sizeof(event->header)) { - size_t size; - - event = (union perf_event *)&data[start & md->mask]; - size = event->header.size; - - if (size < sizeof(event->header) || diff < (int)size) { - event = NULL; - goto broken_event; - } - - /* - * Event straddles the mmap boundary -- header should always - * be inside due to u64 alignment of output. - */ - if ((start & md->mask) + size != ((start + size) & md->mask)) { - unsigned int offset = start; - unsigned int len = min(sizeof(*event), size), cpy; - void *dst = md->event_copy; - - do { - cpy = min(md->mask + 1 - (offset & md->mask), len); - memcpy(dst, &data[offset & md->mask], cpy); - offset += cpy; - dst += cpy; - len -= cpy; - } while (len); - - event = (union perf_event *) md->event_copy; - } - - start += size; - } - -broken_event: - if (prev) - *prev = start; - - return event; -} - -union perf_event *perf_mmap__read_forward(struct perf_mmap *md, bool check_messup) -{ - u64 head; - u64 old = md->prev; - - /* - * Check if event was unmapped due to a POLLHUP/POLLERR. - */ - if (!refcount_read(&md->refcnt)) - return NULL; - - head = perf_mmap__read_head(md); - - return perf_mmap__read(md, check_messup, old, head, &md->prev); -} - -union perf_event * -perf_mmap__read_backward(struct perf_mmap *md) -{ - u64 head, end; - u64 start = md->prev; - - /* - * Check if event was unmapped due to a POLLHUP/POLLERR. - */ - if (!refcount_read(&md->refcnt)) - return NULL; - - head = perf_mmap__read_head(md); - if (!head) - return NULL; - - /* - * 'head' pointer starts from 0. Kernel minus sizeof(record) form - * it each time when kernel writes to it, so in fact 'head' is - * negative. 'end' pointer is made manually by adding the size of - * the ring buffer to 'head' pointer, means the validate data can - * read is the whole ring buffer. 
If 'end' is positive, the ring - * buffer has not fully filled, so we must adjust 'end' to 0. - * - * However, since both 'head' and 'end' is unsigned, we can't - * simply compare 'end' against 0. Here we compare '-head' and - * the size of the ring buffer, where -head is the number of bytes - * kernel write to the ring buffer. - */ - if (-head < (u64)(md->mask + 1)) - end = 0; - else - end = head + md->mask + 1; - - return perf_mmap__read(md, false, start, end, &md->prev); -} - union perf_event *perf_evlist__mmap_read_forward(struct perf_evlist *evlist, int idx) { struct perf_mmap *md = &evlist->mmap[idx]; @@ -857,96 +731,16 @@ union perf_event *perf_evlist__mmap_read(struct perf_evlist *evlist, int idx) return perf_evlist__mmap_read_forward(evlist, idx); } -void perf_mmap__read_catchup(struct perf_mmap *md) -{ - u64 head; - - if (!refcount_read(&md->refcnt)) - return; - - head = perf_mmap__read_head(md); - md->prev = head; -} - void perf_evlist__mmap_read_catchup(struct perf_evlist *evlist, int idx) { perf_mmap__read_catchup(&evlist->mmap[idx]); } -static bool perf_mmap__empty(struct perf_mmap *md) -{ - return perf_mmap__read_head(md) == md->prev && !md->auxtrace_mmap.base; -} - -static void perf_mmap__get(struct perf_mmap *map) -{ - refcount_inc(&map->refcnt); -} - -static void perf_mmap__put(struct perf_mmap *md) -{ - BUG_ON(md->base && refcount_read(&md->refcnt) == 0); - - if (refcount_dec_and_test(&md->refcnt)) - perf_mmap__munmap(md); -} - -void perf_mmap__consume(struct perf_mmap *md, bool overwrite) -{ - if (!overwrite) { - u64 old = md->prev; - - perf_mmap__write_tail(md, old); - } - - if (refcount_read(&md->refcnt) == 1 && perf_mmap__empty(md)) - perf_mmap__put(md); -} - void perf_evlist__mmap_consume(struct perf_evlist *evlist, int idx) { perf_mmap__consume(&evlist->mmap[idx], evlist->overwrite); } -int __weak auxtrace_mmap__mmap(struct auxtrace_mmap *mm __maybe_unused, - struct auxtrace_mmap_params *mp __maybe_unused, - void *userpg __maybe_unused, - int fd __maybe_unused) -{ - return 0; -} - -void __weak auxtrace_mmap__munmap(struct auxtrace_mmap *mm __maybe_unused) -{ -} - -void __weak auxtrace_mmap_params__init( - struct auxtrace_mmap_params *mp __maybe_unused, - off_t auxtrace_offset __maybe_unused, - unsigned int auxtrace_pages __maybe_unused, - bool auxtrace_overwrite __maybe_unused) -{ -} - -void __weak auxtrace_mmap_params__set_idx( - struct auxtrace_mmap_params *mp __maybe_unused, - struct perf_evlist *evlist __maybe_unused, - int idx __maybe_unused, - bool per_cpu __maybe_unused) -{ -} - -static void perf_mmap__munmap(struct perf_mmap *map) -{ - if (map->base != NULL) { - munmap(map->base, perf_mmap__mmap_len(map)); - map->base = NULL; - map->fd = -1; - refcount_set(&map->refcnt, 0); - } - auxtrace_mmap__munmap(&map->auxtrace_mmap); -} - static void perf_evlist__munmap_nofree(struct perf_evlist *evlist) { int i; @@ -995,48 +789,6 @@ static struct perf_mmap *perf_evlist__alloc_mmap(struct perf_evlist *evlist) return map; } -struct mmap_params { - int prot; - int mask; - struct auxtrace_mmap_params auxtrace_mp; -}; - -static int perf_mmap__mmap(struct perf_mmap *map, - struct mmap_params *mp, int fd) -{ - /* - * The last one will be done at perf_evlist__mmap_consume(), so that we - * make sure we don't prevent tools from consuming every last event in - * the ring buffer. - * - * I.e. we can get the POLLHUP meaning that the fd doesn't exist - * anymore, but the last events for it are still in the ring buffer, - * waiting to be consumed. 
- * - * Tools can chose to ignore this at their own discretion, but the - * evlist layer can't just drop it when filtering events in - * perf_evlist__filter_pollfd(). - */ - refcount_set(&map->refcnt, 2); - map->prev = 0; - map->mask = mp->mask; - map->base = mmap(NULL, perf_mmap__mmap_len(map), mp->prot, - MAP_SHARED, fd, 0); - if (map->base == MAP_FAILED) { - pr_debug2("failed to mmap perf event ring buffer, error %d\n", - errno); - map->base = NULL; - return -1; - } - map->fd = fd; - - if (auxtrace_mmap__mmap(&map->auxtrace_mmap, - &mp->auxtrace_mp, map->base, fd)) - return -1; - - return 0; -} - static bool perf_evlist__should_poll(struct perf_evlist *evlist __maybe_unused, struct perf_evsel *evsel) diff --git a/tools/perf/util/evlist.h b/tools/perf/util/evlist.h index c1750a400bb7..e72ae64c11ac 100644 --- a/tools/perf/util/evlist.h +++ b/tools/perf/util/evlist.h @@ -7,12 +7,13 @@ #include <linux/refcount.h> #include <linux/list.h> #include <api/fd/array.h> +#include <fcntl.h> #include <stdio.h> #include "../perf.h" #include "event.h" #include "evsel.h" +#include "mmap.h" #include "util.h" -#include "auxtrace.h" #include <signal.h> #include <unistd.h> @@ -24,55 +25,6 @@ struct record_opts; #define PERF_EVLIST__HLIST_BITS 8 #define PERF_EVLIST__HLIST_SIZE (1 << PERF_EVLIST__HLIST_BITS) -/** - * struct perf_mmap - perf's ring buffer mmap details - * - * @refcnt - e.g. code using PERF_EVENT_IOC_SET_OUTPUT to share this - */ -struct perf_mmap { - void *base; - int mask; - int fd; - refcount_t refcnt; - u64 prev; - struct auxtrace_mmap auxtrace_mmap; - char event_copy[PERF_SAMPLE_MAX_SIZE] __aligned(8); -}; - -static inline size_t -perf_mmap__mmap_len(struct perf_mmap *map) -{ - return map->mask + 1 + page_size; -} - -/* - * State machine of bkw_mmap_state: - * - * .________________(forbid)_____________. - * | V - * NOTREADY --(0)--> RUNNING --(1)--> DATA_PENDING --(2)--> EMPTY - * ^ ^ | ^ | - * | |__(forbid)____/ |___(forbid)___/| - * | | - * \_________________(3)_______________/ - * - * NOTREADY : Backward ring buffers are not ready - * RUNNING : Backward ring buffers are recording - * DATA_PENDING : We are required to collect data from backward ring buffers - * EMPTY : We have collected data from backward ring buffers. 
- * - * (0): Setup backward ring buffer - * (1): Pause ring buffers for reading - * (2): Read from ring buffers - * (3): Resume ring buffers for recording - */ -enum bkw_mmap_state { - BKW_MMAP_NOTREADY, - BKW_MMAP_RUNNING, - BKW_MMAP_DATA_PENDING, - BKW_MMAP_EMPTY, -}; - struct perf_evlist { struct list_head entries; struct hlist_head heads[PERF_EVLIST__HLIST_SIZE]; @@ -177,12 +129,6 @@ struct perf_sample_id *perf_evlist__id2sid(struct perf_evlist *evlist, u64 id); void perf_evlist__toggle_bkw_mmap(struct perf_evlist *evlist, enum bkw_mmap_state state); -union perf_event *perf_mmap__read_forward(struct perf_mmap *map, bool check_messup); -union perf_event *perf_mmap__read_backward(struct perf_mmap *map); - -void perf_mmap__read_catchup(struct perf_mmap *md); -void perf_mmap__consume(struct perf_mmap *md, bool overwrite); - union perf_event *perf_evlist__mmap_read(struct perf_evlist *evlist, int idx); union perf_event *perf_evlist__mmap_read_forward(struct perf_evlist *evlist, @@ -286,25 +232,6 @@ size_t perf_evlist__fprintf(struct perf_evlist *evlist, FILE *fp); int perf_evlist__strerror_open(struct perf_evlist *evlist, int err, char *buf, size_t size); int perf_evlist__strerror_mmap(struct perf_evlist *evlist, int err, char *buf, size_t size); -static inline u64 perf_mmap__read_head(struct perf_mmap *mm) -{ - struct perf_event_mmap_page *pc = mm->base; - u64 head = ACCESS_ONCE(pc->data_head); - rmb(); - return head; -} - -static inline void perf_mmap__write_tail(struct perf_mmap *md, u64 tail) -{ - struct perf_event_mmap_page *pc = md->base; - - /* - * ensure all reads are done before we write the tail out. - */ - mb(); - pc->data_tail = tail; -} - bool perf_evlist__can_select_event(struct perf_evlist *evlist, const char *str); void perf_evlist__to_front(struct perf_evlist *evlist, struct perf_evsel *move_evsel); diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c index 0dccdb89572c..f894893c203d 100644 --- a/tools/perf/util/evsel.c +++ b/tools/perf/util/evsel.c @@ -683,7 +683,7 @@ void perf_evsel__config_callchain(struct perf_evsel *evsel, if (!function) { perf_evsel__set_sample_bit(evsel, REGS_USER); perf_evsel__set_sample_bit(evsel, STACK_USER); - attr->sample_regs_user = PERF_REGS_MASK; + attr->sample_regs_user |= PERF_REGS_MASK; attr->sample_stack_user = param->dump_size; attr->exclude_callchain_user = 1; } else { @@ -936,6 +936,11 @@ void perf_evsel__config(struct perf_evsel *evsel, struct record_opts *opts, perf_evsel__set_sample_bit(evsel, REGS_INTR); } + if (opts->sample_user_regs) { + attr->sample_regs_user |= opts->sample_user_regs; + perf_evsel__set_sample_bit(evsel, REGS_USER); + } + if (target__has_cpu(&opts->target) || opts->sample_cpu) perf_evsel__set_sample_bit(evsel, CPU); diff --git a/tools/perf/util/evsel.h b/tools/perf/util/evsel.h index b4df79d72329..9277df96ffda 100644 --- a/tools/perf/util/evsel.h +++ b/tools/perf/util/evsel.h @@ -69,6 +69,8 @@ struct perf_evsel_config_term { } val; }; +struct perf_stat_evsel; + /** struct perf_evsel - event selector * * @evlist - evlist this evsel is in, if it is in one. 
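The head/tail helpers being moved out of evlist.h pair the read of data_head with a read barrier and the update of data_tail with a full barrier, so the consumer never reads records past what the producer has published and never releases ring-buffer space before it has finished reading it. A standalone sketch of that pairing, using C11 acquire/release atomics in place of the kernel-style ACCESS_ONCE()/rmb()/mb() macros:

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-in for struct perf_event_mmap_page's head/tail words. */
struct ring_ctrl {
    _Atomic uint64_t data_head;    /* written by the producer (kernel) */
    _Atomic uint64_t data_tail;    /* written by the consumer (tool) */
};

static uint64_t ring_read_head(struct ring_ctrl *rc)
{
    /* acquire: all record bytes written before data_head are visible */
    return atomic_load_explicit(&rc->data_head, memory_order_acquire);
}

static void ring_write_tail(struct ring_ctrl *rc, uint64_t tail)
{
    /* release: finish all reads of consumed records before freeing the space */
    atomic_store_explicit(&rc->data_tail, tail, memory_order_release);
}

int main(void)
{
    struct ring_ctrl rc = { 0 };
    uint64_t head = ring_read_head(&rc);

    ring_write_tail(&rc, head);    /* consumed everything up to head */
    printf("head=%llu\n", (unsigned long long)head);
    return 0;
}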
@@ -102,6 +104,7 @@ struct perf_evsel { const char *unit; struct event_format *tp_format; off_t id_offset; + struct perf_stat_evsel *stats; void *priv; u64 db_id; struct cgroup_sel *cgrp; @@ -138,6 +141,7 @@ struct perf_evsel { const char * metric_name; struct perf_evsel **metric_events; bool collect_stat; + bool weak_group; }; union u64_swap { diff --git a/tools/perf/util/evsel_fprintf.c b/tools/perf/util/evsel_fprintf.c index 1fd7c2e46db2..06dfb027879d 100644 --- a/tools/perf/util/evsel_fprintf.c +++ b/tools/perf/util/evsel_fprintf.c @@ -158,7 +158,7 @@ int sample__fprintf_callchain(struct perf_sample *sample, int left_alignment, } } - if (print_dso) { + if (print_dso && (!node->sym || !node->sym->inlined)) { printed += fprintf(fp, " ("); printed += map__fprintf_dsoname(node->map, fp); printed += fprintf(fp, ")"); @@ -167,41 +167,12 @@ int sample__fprintf_callchain(struct perf_sample *sample, int left_alignment, if (print_srcline) printed += map__fprintf_srcline(node->map, addr, "\n ", fp); + if (node->sym && node->sym->inlined) + printed += fprintf(fp, " (inlined)"); + if (!print_oneline) printed += fprintf(fp, "\n"); - if (symbol_conf.inline_name && node->map) { - struct inline_node *inode; - - addr = map__rip_2objdump(node->map, node->ip), - inode = dso__parse_addr_inlines(node->map->dso, addr); - - if (inode) { - struct inline_list *ilist; - - list_for_each_entry(ilist, &inode->val, list) { - if (print_arrow) - printed += fprintf(fp, " <-"); - - /* IP is same, just skip it */ - if (print_ip) - printed += fprintf(fp, "%c%16s", - s, ""); - if (print_sym) - printed += fprintf(fp, " %s", - ilist->funcname); - if (print_srcline) - printed += fprintf(fp, "\n %s:%d", - ilist->filename, - ilist->line_nr); - if (!print_oneline) - printed += fprintf(fp, "\n"); - } - - inline_node__delete(inode); - } - } - if (symbol_conf.bt_stop_list && node->sym && strlist__has_entry(symbol_conf.bt_stop_list, diff --git a/tools/perf/util/header.c b/tools/perf/util/header.c index ba0cea8fef72..7c0e9d587bfa 100644 --- a/tools/perf/util/header.c +++ b/tools/perf/util/header.c @@ -1763,7 +1763,7 @@ process_event_desc(struct feat_fd *ff, void *data __maybe_unused) session = container_of(ff->ph, struct perf_session, header); - if (session->file->is_pipe) { + if (session->data->is_pipe) { /* Save events for reading later by print_event_desc, * since they can't be read again in pipe mode. 
*/ ff->events = events; @@ -1772,7 +1772,7 @@ process_event_desc(struct feat_fd *ff, void *data __maybe_unused) for (evsel = events; evsel->attr.size; evsel++) perf_evlist__set_event_name(session->evlist, evsel); - if (!session->file->is_pipe) + if (!session->data->is_pipe) free_event_desc(events); return 0; @@ -2249,7 +2249,7 @@ int perf_header__fprintf_info(struct perf_session *session, FILE *fp, bool full) { struct header_print_data hd; struct perf_header *header = &session->header; - int fd = perf_data_file__fd(session->file); + int fd = perf_data__fd(session->data); struct stat st; int ret, bit; @@ -2265,7 +2265,7 @@ int perf_header__fprintf_info(struct perf_session *session, FILE *fp, bool full) perf_header__process_sections(header, fd, &hd, perf_file_section__fprintf_info); - if (session->file->is_pipe) + if (session->data->is_pipe) return 0; fprintf(fp, "# missing features: "); @@ -2758,7 +2758,7 @@ static int perf_header__read_pipe(struct perf_session *session) struct perf_pipe_file_header f_header; if (perf_file_header__read_pipe(&f_header, header, - perf_data_file__fd(session->file), + perf_data__fd(session->data), session->repipe) < 0) { pr_debug("incompatible file format\n"); return -EINVAL; @@ -2861,13 +2861,13 @@ static int perf_evlist__prepare_tracepoint_events(struct perf_evlist *evlist, int perf_session__read_header(struct perf_session *session) { - struct perf_data_file *file = session->file; + struct perf_data *data = session->data; struct perf_header *header = &session->header; struct perf_file_header f_header; struct perf_file_attr f_attr; u64 f_id; int nr_attrs, nr_ids, i, j; - int fd = perf_data_file__fd(file); + int fd = perf_data__fd(data); session->evlist = perf_evlist__new(); if (session->evlist == NULL) @@ -2875,7 +2875,7 @@ int perf_session__read_header(struct perf_session *session) session->evlist->env = &header->env; session->machines.host.env = &header->env; - if (perf_data_file__is_pipe(file)) + if (perf_data__is_pipe(data)) return perf_header__read_pipe(session); if (perf_file_header__read(&f_header, header, fd) < 0) @@ -2890,7 +2890,7 @@ int perf_session__read_header(struct perf_session *session) if (f_header.data.size == 0) { pr_warning("WARNING: The %s file's data size field is 0 which is unexpected.\n" "Was the 'perf record' command properly terminated?\n", - file->path); + data->file.path); } nr_attrs = f_header.attrs.size / f_header.attr_size; @@ -3398,7 +3398,7 @@ int perf_event__process_tracing_data(struct perf_tool *tool __maybe_unused, struct perf_session *session) { ssize_t size_read, padding, size = event->tracing_data.size; - int fd = perf_data_file__fd(session->file); + int fd = perf_data__fd(session->data); off_t offset = lseek(fd, 0, SEEK_CUR); char buf[BUFSIZ]; diff --git a/tools/perf/util/hist.c b/tools/perf/util/hist.c index 097473600d94..b6140950301e 100644 --- a/tools/perf/util/hist.c +++ b/tools/perf/util/hist.c @@ -597,6 +597,7 @@ __hists__add_entry(struct hists *hists, .map = al->map, .sym = al->sym, }, + .srcline = al->srcline ? strdup(al->srcline) : NULL, .socket = al->socket, .cpu = al->cpu, .cpumode = al->cpumode, @@ -951,6 +952,7 @@ iter_add_next_cumulative_entry(struct hist_entry_iter *iter, .map = al->map, .sym = al->sym, }, + .srcline = al->srcline ? 
strdup(al->srcline) : NULL, .parent = iter->parent, .raw_data = sample->raw_data, .raw_size = sample->raw_size, @@ -1142,11 +1144,6 @@ void hist_entry__delete(struct hist_entry *he) zfree(&he->mem_info); } - if (he->inline_node) { - inline_node__delete(he->inline_node); - he->inline_node = NULL; - } - zfree(&he->stat_acc); free_srcline(he->srcline); if (he->srcfile && he->srcfile[0]) diff --git a/tools/perf/util/intel-bts.c b/tools/perf/util/intel-bts.c index 218ee2bac9a5..5325e65f9711 100644 --- a/tools/perf/util/intel-bts.c +++ b/tools/perf/util/intel-bts.c @@ -500,7 +500,7 @@ static int intel_bts_process_queue(struct intel_bts_queue *btsq, u64 *timestamp) } if (!buffer->data) { - int fd = perf_data_file__fd(btsq->bts->session->file); + int fd = perf_data__fd(btsq->bts->session->data); buffer->data = auxtrace_buffer__get_data(buffer, fd); if (!buffer->data) { @@ -664,10 +664,10 @@ static int intel_bts_process_auxtrace_event(struct perf_session *session, if (!bts->data_queued) { struct auxtrace_buffer *buffer; off_t data_offset; - int fd = perf_data_file__fd(session->file); + int fd = perf_data__fd(session->data); int err; - if (perf_data_file__is_pipe(session->file)) { + if (perf_data__is_pipe(session->data)) { data_offset = 0; } else { data_offset = lseek(fd, 0, SEEK_CUR); diff --git a/tools/perf/util/intel-pt.c b/tools/perf/util/intel-pt.c index b58f9fd1e2ee..23f9ba676df0 100644 --- a/tools/perf/util/intel-pt.c +++ b/tools/perf/util/intel-pt.c @@ -271,7 +271,7 @@ next: ptq->buffer = buffer; if (!buffer->data) { - int fd = perf_data_file__fd(ptq->pt->session->file); + int fd = perf_data__fd(ptq->pt->session->data); buffer->data = auxtrace_buffer__get_data(buffer, fd); if (!buffer->data) @@ -2084,10 +2084,10 @@ static int intel_pt_process_auxtrace_event(struct perf_session *session, if (!pt->data_queued) { struct auxtrace_buffer *buffer; off_t data_offset; - int fd = perf_data_file__fd(session->file); + int fd = perf_data__fd(session->data); int err; - if (perf_data_file__is_pipe(session->file)) { + if (perf_data__is_pipe(session->data)) { data_offset = 0; } else { data_offset = lseek(fd, 0, SEEK_CUR); diff --git a/tools/perf/util/jit.h b/tools/perf/util/jit.h index c2582fa9fe21..6817ffc2a059 100644 --- a/tools/perf/util/jit.h +++ b/tools/perf/util/jit.h @@ -4,7 +4,7 @@ #include <data.h> -int jit_process(struct perf_session *session, struct perf_data_file *output, +int jit_process(struct perf_session *session, struct perf_data *output, struct machine *machine, char *filename, pid_t pid, u64 *nbytes); int jit_inject_record(const char *filename); diff --git a/tools/perf/util/jitdump.c b/tools/perf/util/jitdump.c index 36483db032e8..a1863000e972 100644 --- a/tools/perf/util/jitdump.c +++ b/tools/perf/util/jitdump.c @@ -30,7 +30,7 @@ #include "sane_ctype.h" struct jit_buf_desc { - struct perf_data_file *output; + struct perf_data *output; struct perf_session *session; struct machine *machine; union jr_entry *entry; @@ -61,8 +61,8 @@ struct debug_line_info { struct jit_tool { struct perf_tool tool; - struct perf_data_file output; - struct perf_data_file input; + struct perf_data output; + struct perf_data input; u64 bytes_written; }; @@ -357,7 +357,7 @@ jit_inject_event(struct jit_buf_desc *jd, union perf_event *event) { ssize_t size; - size = perf_data_file__write(jd->output, event, event->header.size); + size = perf_data__write(jd->output, event, event->header.size); if (size < 0) return -1; @@ -752,7 +752,7 @@ jit_detect(char *mmap_name, pid_t pid) int jit_process(struct perf_session 
*session, - struct perf_data_file *output, + struct perf_data *output, struct machine *machine, char *filename, pid_t pid, diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c index bd5d5b5e2218..6a8d03c3d9b7 100644 --- a/tools/perf/util/machine.c +++ b/tools/perf/util/machine.c @@ -31,7 +31,21 @@ static void dsos__init(struct dsos *dsos) { INIT_LIST_HEAD(&dsos->head); dsos->root = RB_ROOT; - pthread_rwlock_init(&dsos->lock, NULL); + init_rwsem(&dsos->lock); +} + +static void machine__threads_init(struct machine *machine) +{ + int i; + + for (i = 0; i < THREADS__TABLE_SIZE; i++) { + struct threads *threads = &machine->threads[i]; + threads->entries = RB_ROOT; + init_rwsem(&threads->lock); + threads->nr = 0; + INIT_LIST_HEAD(&threads->dead); + threads->last_match = NULL; + } } int machine__init(struct machine *machine, const char *root_dir, pid_t pid) @@ -41,11 +55,7 @@ int machine__init(struct machine *machine, const char *root_dir, pid_t pid) RB_CLEAR_NODE(&machine->rb_node); dsos__init(&machine->dsos); - machine->threads = RB_ROOT; - pthread_rwlock_init(&machine->threads_lock, NULL); - machine->nr_threads = 0; - INIT_LIST_HEAD(&machine->dead_threads); - machine->last_match = NULL; + machine__threads_init(machine); machine->vdso_info = NULL; machine->env = NULL; @@ -121,7 +131,7 @@ static void dsos__purge(struct dsos *dsos) { struct dso *pos, *n; - pthread_rwlock_wrlock(&dsos->lock); + down_write(&dsos->lock); list_for_each_entry_safe(pos, n, &dsos->head, node) { RB_CLEAR_NODE(&pos->rb_node); @@ -130,39 +140,49 @@ static void dsos__purge(struct dsos *dsos) dso__put(pos); } - pthread_rwlock_unlock(&dsos->lock); + up_write(&dsos->lock); } static void dsos__exit(struct dsos *dsos) { dsos__purge(dsos); - pthread_rwlock_destroy(&dsos->lock); + exit_rwsem(&dsos->lock); } void machine__delete_threads(struct machine *machine) { struct rb_node *nd; + int i; - pthread_rwlock_wrlock(&machine->threads_lock); - nd = rb_first(&machine->threads); - while (nd) { - struct thread *t = rb_entry(nd, struct thread, rb_node); + for (i = 0; i < THREADS__TABLE_SIZE; i++) { + struct threads *threads = &machine->threads[i]; + down_write(&threads->lock); + nd = rb_first(&threads->entries); + while (nd) { + struct thread *t = rb_entry(nd, struct thread, rb_node); - nd = rb_next(nd); - __machine__remove_thread(machine, t, false); + nd = rb_next(nd); + __machine__remove_thread(machine, t, false); + } + up_write(&threads->lock); } - pthread_rwlock_unlock(&machine->threads_lock); } void machine__exit(struct machine *machine) { + int i; + machine__destroy_kernel_maps(machine); map_groups__exit(&machine->kmaps); dsos__exit(&machine->dsos); machine__exit_vdso(machine); zfree(&machine->root_dir); zfree(&machine->current_tid); - pthread_rwlock_destroy(&machine->threads_lock); + + for (i = 0; i < THREADS__TABLE_SIZE; i++) { + struct threads *threads = &machine->threads[i]; + exit_rwsem(&threads->lock); + } } void machine__delete(struct machine *machine) @@ -380,10 +400,11 @@ out_err: * lookup/new thread inserted. 
*/ static struct thread *____machine__findnew_thread(struct machine *machine, + struct threads *threads, pid_t pid, pid_t tid, bool create) { - struct rb_node **p = &machine->threads.rb_node; + struct rb_node **p = &threads->entries.rb_node; struct rb_node *parent = NULL; struct thread *th; @@ -392,14 +413,14 @@ static struct thread *____machine__findnew_thread(struct machine *machine, * so most of the time we dont have to look up * the full rbtree: */ - th = machine->last_match; + th = threads->last_match; if (th != NULL) { if (th->tid == tid) { machine__update_thread_pid(machine, th, pid); return thread__get(th); } - machine->last_match = NULL; + threads->last_match = NULL; } while (*p != NULL) { @@ -407,7 +428,7 @@ static struct thread *____machine__findnew_thread(struct machine *machine, th = rb_entry(parent, struct thread, rb_node); if (th->tid == tid) { - machine->last_match = th; + threads->last_match = th; machine__update_thread_pid(machine, th, pid); return thread__get(th); } @@ -424,7 +445,7 @@ static struct thread *____machine__findnew_thread(struct machine *machine, th = thread__new(pid, tid); if (th != NULL) { rb_link_node(&th->rb_node, parent, p); - rb_insert_color(&th->rb_node, &machine->threads); + rb_insert_color(&th->rb_node, &threads->entries); /* * We have to initialize map_groups separately @@ -435,7 +456,7 @@ static struct thread *____machine__findnew_thread(struct machine *machine, * leader and that would screwed the rb tree. */ if (thread__init_map_groups(th, machine)) { - rb_erase_init(&th->rb_node, &machine->threads); + rb_erase_init(&th->rb_node, &threads->entries); RB_CLEAR_NODE(&th->rb_node); thread__put(th); return NULL; @@ -444,8 +465,8 @@ static struct thread *____machine__findnew_thread(struct machine *machine, * It is now in the rbtree, get a ref */ thread__get(th); - machine->last_match = th; - ++machine->nr_threads; + threads->last_match = th; + ++threads->nr; } return th; @@ -453,27 +474,30 @@ static struct thread *____machine__findnew_thread(struct machine *machine, struct thread *__machine__findnew_thread(struct machine *machine, pid_t pid, pid_t tid) { - return ____machine__findnew_thread(machine, pid, tid, true); + return ____machine__findnew_thread(machine, machine__threads(machine, tid), pid, tid, true); } struct thread *machine__findnew_thread(struct machine *machine, pid_t pid, pid_t tid) { + struct threads *threads = machine__threads(machine, tid); struct thread *th; - pthread_rwlock_wrlock(&machine->threads_lock); + down_write(&threads->lock); th = __machine__findnew_thread(machine, pid, tid); - pthread_rwlock_unlock(&machine->threads_lock); + up_write(&threads->lock); return th; } struct thread *machine__find_thread(struct machine *machine, pid_t pid, pid_t tid) { + struct threads *threads = machine__threads(machine, tid); struct thread *th; - pthread_rwlock_rdlock(&machine->threads_lock); - th = ____machine__findnew_thread(machine, pid, tid, false); - pthread_rwlock_unlock(&machine->threads_lock); + + down_read(&threads->lock); + th = ____machine__findnew_thread(machine, threads, pid, tid, false); + up_read(&threads->lock); return th; } @@ -565,7 +589,7 @@ static struct dso *machine__findnew_module_dso(struct machine *machine, { struct dso *dso; - pthread_rwlock_wrlock(&machine->dsos.lock); + down_write(&machine->dsos.lock); dso = __dsos__find(&machine->dsos, m->name, true); if (!dso) { @@ -579,7 +603,7 @@ static struct dso *machine__findnew_module_dso(struct machine *machine, dso__get(dso); out_unlock: - 
pthread_rwlock_unlock(&machine->dsos.lock); + up_write(&machine->dsos.lock); return dso; } @@ -720,21 +744,25 @@ size_t machine__fprintf_vmlinux_path(struct machine *machine, FILE *fp) size_t machine__fprintf(struct machine *machine, FILE *fp) { - size_t ret; struct rb_node *nd; + size_t ret; + int i; - pthread_rwlock_rdlock(&machine->threads_lock); + for (i = 0; i < THREADS__TABLE_SIZE; i++) { + struct threads *threads = &machine->threads[i]; - ret = fprintf(fp, "Threads: %u\n", machine->nr_threads); + down_read(&threads->lock); - for (nd = rb_first(&machine->threads); nd; nd = rb_next(nd)) { - struct thread *pos = rb_entry(nd, struct thread, rb_node); + ret = fprintf(fp, "Threads: %u\n", threads->nr); - ret += thread__fprintf(pos, fp); - } + for (nd = rb_first(&threads->entries); nd; nd = rb_next(nd)) { + struct thread *pos = rb_entry(nd, struct thread, rb_node); - pthread_rwlock_unlock(&machine->threads_lock); + ret += thread__fprintf(pos, fp); + } + up_read(&threads->lock); + } return ret; } @@ -1293,7 +1321,7 @@ static int machine__process_kernel_mmap_event(struct machine *machine, struct dso *kernel = NULL; struct dso *dso; - pthread_rwlock_rdlock(&machine->dsos.lock); + down_read(&machine->dsos.lock); list_for_each_entry(dso, &machine->dsos.head, node) { @@ -1323,7 +1351,7 @@ static int machine__process_kernel_mmap_event(struct machine *machine, break; } - pthread_rwlock_unlock(&machine->dsos.lock); + up_read(&machine->dsos.lock); if (kernel == NULL) kernel = machine__findnew_dso(machine, kmmap_prefix); @@ -1480,23 +1508,25 @@ out_problem: static void __machine__remove_thread(struct machine *machine, struct thread *th, bool lock) { - if (machine->last_match == th) - machine->last_match = NULL; + struct threads *threads = machine__threads(machine, th->tid); + + if (threads->last_match == th) + threads->last_match = NULL; BUG_ON(refcount_read(&th->refcnt) == 0); if (lock) - pthread_rwlock_wrlock(&machine->threads_lock); - rb_erase_init(&th->rb_node, &machine->threads); + down_write(&threads->lock); + rb_erase_init(&th->rb_node, &threads->entries); RB_CLEAR_NODE(&th->rb_node); - --machine->nr_threads; + --threads->nr; /* * Move it first to the dead_threads list, then drop the reference, * if this is the last reference, then the thread__delete destructor * will be called and we will remove it from the dead_threads list. 
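The machine.c changes above (and the map.c changes further down) replace the pthread_rwlock_* calls with init_rwsem()/exit_rwsem() and down_read()/up_read()/down_write()/up_write() from the new "rwsem.h" header, which is not itself part of this hunk. A minimal sketch of what such a wrapper could look like, assuming it is a thin layer over pthread_rwlock_t (the implementation below is an assumption; only the function names come from the call sites above):

    #include <pthread.h>

    struct rw_semaphore {
            pthread_rwlock_t lock;
    };

    /* thin wrappers so tools code can use kernel-style naming */
    static inline int init_rwsem(struct rw_semaphore *sem)  { return pthread_rwlock_init(&sem->lock, NULL); }
    static inline int exit_rwsem(struct rw_semaphore *sem)  { return pthread_rwlock_destroy(&sem->lock); }
    static inline int down_read(struct rw_semaphore *sem)   { return pthread_rwlock_rdlock(&sem->lock); }
    static inline int up_read(struct rw_semaphore *sem)     { return pthread_rwlock_unlock(&sem->lock); }
    static inline int down_write(struct rw_semaphore *sem)  { return pthread_rwlock_wrlock(&sem->lock); }
    static inline int up_write(struct rw_semaphore *sem)    { return pthread_rwlock_unlock(&sem->lock); }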
*/ - list_add_tail(&th->node, &machine->dead_threads); + list_add_tail(&th->node, &threads->dead); if (lock) - pthread_rwlock_unlock(&machine->threads_lock); + up_write(&threads->lock); thread__put(th); } @@ -1680,6 +1710,26 @@ struct mem_info *sample__resolve_mem(struct perf_sample *sample, return mi; } +static char *callchain_srcline(struct map *map, struct symbol *sym, u64 ip) +{ + char *srcline = NULL; + + if (!map || callchain_param.key == CCKEY_FUNCTION) + return srcline; + + srcline = srcline__tree_find(&map->dso->srclines, ip); + if (!srcline) { + bool show_sym = false; + bool show_addr = callchain_param.key == CCKEY_ADDRESS; + + srcline = get_srcline(map->dso, map__rip_2objdump(map, ip), + sym, show_sym, show_addr); + srcline__tree_insert(&map->dso->srclines, ip, srcline); + } + + return srcline; +} + struct iterations { int nr_loop_iter; u64 cycles; @@ -1699,6 +1749,7 @@ static int add_callchain_ip(struct thread *thread, struct addr_location al; int nr_loop_iter = 0; u64 iter_cycles = 0; + const char *srcline = NULL; al.filtered = 0; al.sym = NULL; @@ -1754,9 +1805,10 @@ static int add_callchain_ip(struct thread *thread, iter_cycles = iter->cycles; } + srcline = callchain_srcline(al.map, al.sym, al.addr); return callchain_cursor_append(cursor, al.addr, al.map, al.sym, branch, flags, nr_loop_iter, - iter_cycles, branch_from); + iter_cycles, branch_from, srcline); } struct branch_info *sample__resolve_bstack(struct perf_sample *sample, @@ -2069,15 +2121,54 @@ check_calls: return 0; } +static int append_inlines(struct callchain_cursor *cursor, + struct map *map, struct symbol *sym, u64 ip) +{ + struct inline_node *inline_node; + struct inline_list *ilist; + u64 addr; + int ret = 1; + + if (!symbol_conf.inline_name || !map || !sym) + return ret; + + addr = map__rip_2objdump(map, ip); + + inline_node = inlines__tree_find(&map->dso->inlined_nodes, addr); + if (!inline_node) { + inline_node = dso__parse_addr_inlines(map->dso, addr, sym); + if (!inline_node) + return ret; + inlines__tree_insert(&map->dso->inlined_nodes, inline_node); + } + + list_for_each_entry(ilist, &inline_node->val, list) { + ret = callchain_cursor_append(cursor, ip, map, + ilist->symbol, false, + NULL, 0, 0, 0, ilist->srcline); + + if (ret != 0) + return ret; + } + + return ret; +} + static int unwind_entry(struct unwind_entry *entry, void *arg) { struct callchain_cursor *cursor = arg; + const char *srcline = NULL; if (symbol_conf.hide_unresolved && entry->sym == NULL) return 0; + + if (append_inlines(cursor, entry->map, entry->sym, entry->ip) == 0) + return 0; + + srcline = callchain_srcline(entry->map, entry->sym, entry->ip); return callchain_cursor_append(cursor, entry->ip, entry->map, entry->sym, - false, NULL, 0, 0, 0); + false, NULL, 0, 0, 0, srcline); } static int thread__resolve_callchain_unwind(struct thread *thread, @@ -2141,21 +2232,26 @@ int machine__for_each_thread(struct machine *machine, int (*fn)(struct thread *thread, void *p), void *priv) { + struct threads *threads; struct rb_node *nd; struct thread *thread; int rc = 0; + int i; - for (nd = rb_first(&machine->threads); nd; nd = rb_next(nd)) { - thread = rb_entry(nd, struct thread, rb_node); - rc = fn(thread, priv); - if (rc != 0) - return rc; - } + for (i = 0; i < THREADS__TABLE_SIZE; i++) { + threads = &machine->threads[i]; + for (nd = rb_first(&threads->entries); nd; nd = rb_next(nd)) { + thread = rb_entry(nd, struct thread, rb_node); + rc = fn(thread, priv); + if (rc != 0) + return rc; + } - list_for_each_entry(thread, &machine->dead_threads, 
node) { - rc = fn(thread, priv); - if (rc != 0) - return rc; + list_for_each_entry(thread, &threads->dead, node) { + rc = fn(thread, priv); + if (rc != 0) + return rc; + } } return rc; } @@ -2184,12 +2280,16 @@ int machines__for_each_thread(struct machines *machines, int __machine__synthesize_threads(struct machine *machine, struct perf_tool *tool, struct target *target, struct thread_map *threads, perf_event__handler_t process, bool data_mmap, - unsigned int proc_map_timeout) + unsigned int proc_map_timeout, + unsigned int nr_threads_synthesize) { if (target__has_task(target)) return perf_event__synthesize_thread_map(tool, threads, process, machine, data_mmap, proc_map_timeout); else if (target__has_cpu(target)) - return perf_event__synthesize_threads(tool, process, machine, data_mmap, proc_map_timeout); + return perf_event__synthesize_threads(tool, process, + machine, data_mmap, + proc_map_timeout, + nr_threads_synthesize); /* command specified */ return 0; } diff --git a/tools/perf/util/machine.h b/tools/perf/util/machine.h index d551aa80a59b..5ce860b64c74 100644 --- a/tools/perf/util/machine.h +++ b/tools/perf/util/machine.h @@ -7,6 +7,7 @@ #include "map.h" #include "dso.h" #include "event.h" +#include "rwsem.h" struct addr_location; struct branch_stack; @@ -24,6 +25,17 @@ extern const char *ref_reloc_sym_names[]; struct vdso_info; +#define THREADS__TABLE_BITS 8 +#define THREADS__TABLE_SIZE (1 << THREADS__TABLE_BITS) + +struct threads { + struct rb_root entries; + struct rw_semaphore lock; + unsigned int nr; + struct list_head dead; + struct thread *last_match; +}; + struct machine { struct rb_node rb_node; pid_t pid; @@ -31,11 +43,7 @@ struct machine { bool comm_exec; bool kptr_restrict_warned; char *root_dir; - struct rb_root threads; - pthread_rwlock_t threads_lock; - unsigned int nr_threads; - struct list_head dead_threads; - struct thread *last_match; + struct threads threads[THREADS__TABLE_SIZE]; struct vdso_info *vdso_info; struct perf_env *env; struct dsos dsos; @@ -49,6 +57,12 @@ struct machine { }; }; +static inline struct threads *machine__threads(struct machine *machine, pid_t tid) +{ + /* Cast it to handle tid == -1 */ + return &machine->threads[(unsigned int)tid % THREADS__TABLE_SIZE]; +} + static inline struct map *__machine__kernel_map(struct machine *machine, enum map_type type) { @@ -244,15 +258,18 @@ int machines__for_each_thread(struct machines *machines, int __machine__synthesize_threads(struct machine *machine, struct perf_tool *tool, struct target *target, struct thread_map *threads, perf_event__handler_t process, bool data_mmap, - unsigned int proc_map_timeout); + unsigned int proc_map_timeout, + unsigned int nr_threads_synthesize); static inline int machine__synthesize_threads(struct machine *machine, struct target *target, struct thread_map *threads, bool data_mmap, - unsigned int proc_map_timeout) + unsigned int proc_map_timeout, + unsigned int nr_threads_synthesize) { return __machine__synthesize_threads(machine, NULL, target, threads, perf_event__process, data_mmap, - proc_map_timeout); + proc_map_timeout, + nr_threads_synthesize); } pid_t machine__get_current_tid(struct machine *machine, int cpu); diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c index 4e7bd2750122..6d40efd74402 100644 --- a/tools/perf/util/map.c +++ b/tools/perf/util/map.c @@ -489,7 +489,7 @@ u64 map__objdump_2mem(struct map *map, u64 ip) static void maps__init(struct maps *maps) { maps->entries = RB_ROOT; - pthread_rwlock_init(&maps->lock, NULL); + init_rwsem(&maps->lock); } 
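machine__threads() in the machine.h hunk above replaces the single threads rbtree and lock with THREADS__TABLE_SIZE (1 << 8 = 256) independently locked buckets selected by tid; the cast to unsigned int keeps tid == -1 in range instead of producing a negative index. A small self-contained example of the bucket arithmetic (the tid values are made up):

    #include <stdio.h>

    int main(void)
    {
            int tids[] = { 0, 4242, -1 };   /* -1: tid not known */

            for (int i = 0; i < 3; i++)
                    /* same expression as machine__threads() uses */
                    printf("tid %6d -> bucket %u\n", tids[i],
                           (unsigned int)tids[i] % 256);
            return 0;
    }

    /* output: buckets 0, 146 and 255; tid -1 lands in the last bucket rather than out of bounds */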
void map_groups__init(struct map_groups *mg, struct machine *machine) @@ -518,9 +518,9 @@ static void __maps__purge(struct maps *maps) static void maps__exit(struct maps *maps) { - pthread_rwlock_wrlock(&maps->lock); + down_write(&maps->lock); __maps__purge(maps); - pthread_rwlock_unlock(&maps->lock); + up_write(&maps->lock); } void map_groups__exit(struct map_groups *mg) @@ -587,7 +587,7 @@ struct symbol *maps__find_symbol_by_name(struct maps *maps, const char *name, struct symbol *sym; struct rb_node *nd; - pthread_rwlock_rdlock(&maps->lock); + down_read(&maps->lock); for (nd = rb_first(&maps->entries); nd; nd = rb_next(nd)) { struct map *pos = rb_entry(nd, struct map, rb_node); @@ -603,7 +603,7 @@ struct symbol *maps__find_symbol_by_name(struct maps *maps, const char *name, sym = NULL; out: - pthread_rwlock_unlock(&maps->lock); + up_read(&maps->lock); return sym; } @@ -639,7 +639,7 @@ static size_t maps__fprintf(struct maps *maps, FILE *fp) size_t printed = 0; struct rb_node *nd; - pthread_rwlock_rdlock(&maps->lock); + down_read(&maps->lock); for (nd = rb_first(&maps->entries); nd; nd = rb_next(nd)) { struct map *pos = rb_entry(nd, struct map, rb_node); @@ -651,7 +651,7 @@ static size_t maps__fprintf(struct maps *maps, FILE *fp) } } - pthread_rwlock_unlock(&maps->lock); + up_read(&maps->lock); return printed; } @@ -683,7 +683,7 @@ static int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp struct rb_node *next; int err = 0; - pthread_rwlock_wrlock(&maps->lock); + down_write(&maps->lock); root = &maps->entries; next = rb_first(root); @@ -751,7 +751,7 @@ put_map: err = 0; out: - pthread_rwlock_unlock(&maps->lock); + up_write(&maps->lock); return err; } @@ -772,7 +772,7 @@ int map_groups__clone(struct thread *thread, struct map *map; struct maps *maps = &parent->maps[type]; - pthread_rwlock_rdlock(&maps->lock); + down_read(&maps->lock); for (map = maps__first(maps); map; map = map__next(map)) { struct map *new = map__clone(map); @@ -789,7 +789,7 @@ int map_groups__clone(struct thread *thread, err = 0; out_unlock: - pthread_rwlock_unlock(&maps->lock); + up_read(&maps->lock); return err; } @@ -816,9 +816,9 @@ static void __maps__insert(struct maps *maps, struct map *map) void maps__insert(struct maps *maps, struct map *map) { - pthread_rwlock_wrlock(&maps->lock); + down_write(&maps->lock); __maps__insert(maps, map); - pthread_rwlock_unlock(&maps->lock); + up_write(&maps->lock); } static void __maps__remove(struct maps *maps, struct map *map) @@ -829,9 +829,9 @@ static void __maps__remove(struct maps *maps, struct map *map) void maps__remove(struct maps *maps, struct map *map) { - pthread_rwlock_wrlock(&maps->lock); + down_write(&maps->lock); __maps__remove(maps, map); - pthread_rwlock_unlock(&maps->lock); + up_write(&maps->lock); } struct map *maps__find(struct maps *maps, u64 ip) @@ -839,7 +839,7 @@ struct map *maps__find(struct maps *maps, u64 ip) struct rb_node **p, *parent = NULL; struct map *m; - pthread_rwlock_rdlock(&maps->lock); + down_read(&maps->lock); p = &maps->entries.rb_node; while (*p != NULL) { @@ -855,7 +855,7 @@ struct map *maps__find(struct maps *maps, u64 ip) m = NULL; out: - pthread_rwlock_unlock(&maps->lock); + up_read(&maps->lock); return m; } diff --git a/tools/perf/util/map.h b/tools/perf/util/map.h index 1fb9b8589adc..edeb7291c8e1 100644 --- a/tools/perf/util/map.h +++ b/tools/perf/util/map.h @@ -10,6 +10,7 @@ #include <stdio.h> #include <stdbool.h> #include <linux/types.h> +#include "rwsem.h" enum map_type { MAP__FUNCTION = 0, @@ -62,7 +63,7 @@ 
struct kmap { struct maps { struct rb_root entries; - pthread_rwlock_t lock; + struct rw_semaphore lock; }; struct map_groups { diff --git a/tools/perf/util/metricgroup.c b/tools/perf/util/metricgroup.c new file mode 100644 index 000000000000..0ddd9c199227 --- /dev/null +++ b/tools/perf/util/metricgroup.c @@ -0,0 +1,490 @@ +/* + * Copyright (c) 2017, Intel Corporation. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + */ + +/* Manage metrics and groups of metrics from JSON files */ + +#include "metricgroup.h" +#include "evlist.h" +#include "strbuf.h" +#include "pmu.h" +#include "expr.h" +#include "rblist.h" +#include "pmu.h" +#include <string.h> +#include <stdbool.h> +#include <errno.h> +#include "pmu-events/pmu-events.h" +#include "strbuf.h" +#include "strlist.h" +#include <assert.h> +#include <ctype.h> + +struct metric_event *metricgroup__lookup(struct rblist *metric_events, + struct perf_evsel *evsel, + bool create) +{ + struct rb_node *nd; + struct metric_event me = { + .evsel = evsel + }; + nd = rblist__find(metric_events, &me); + if (nd) + return container_of(nd, struct metric_event, nd); + if (create) { + rblist__add_node(metric_events, &me); + nd = rblist__find(metric_events, &me); + if (nd) + return container_of(nd, struct metric_event, nd); + } + return NULL; +} + +static int metric_event_cmp(struct rb_node *rb_node, const void *entry) +{ + struct metric_event *a = container_of(rb_node, + struct metric_event, + nd); + const struct metric_event *b = entry; + + if (a->evsel == b->evsel) + return 0; + if ((char *)a->evsel < (char *)b->evsel) + return -1; + return +1; +} + +static struct rb_node *metric_event_new(struct rblist *rblist __maybe_unused, + const void *entry) +{ + struct metric_event *me = malloc(sizeof(struct metric_event)); + + if (!me) + return NULL; + memcpy(me, entry, sizeof(struct metric_event)); + me->evsel = ((struct metric_event *)entry)->evsel; + INIT_LIST_HEAD(&me->head); + return &me->nd; +} + +static void metricgroup__rblist_init(struct rblist *metric_events) +{ + rblist__init(metric_events); + metric_events->node_cmp = metric_event_cmp; + metric_events->node_new = metric_event_new; +} + +struct egroup { + struct list_head nd; + int idnum; + const char **ids; + const char *metric_name; + const char *metric_expr; +}; + +static struct perf_evsel *find_evsel(struct perf_evlist *perf_evlist, + const char **ids, + int idnum, + struct perf_evsel **metric_events) +{ + struct perf_evsel *ev, *start = NULL; + int ind = 0; + + evlist__for_each_entry (perf_evlist, ev) { + if (!strcmp(ev->name, ids[ind])) { + metric_events[ind] = ev; + if (ind == 0) + start = ev; + if (++ind == idnum) { + metric_events[ind] = NULL; + return start; + } + } else { + ind = 0; + start = NULL; + } + } + /* + * This can happen when an alias expands to multiple + * events, like for uncore events. + * We don't support this case for now. 
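find_evsel() above relies on the events belonging to one metric being consecutive entries in the evlist. That is what metricgroup__add_metric() (further down in this file) arranges: it expands a metric expression into a single "{event_a,event_b,...}:W" string and feeds it to parse_events(), where the trailing ':W' is the weak-group modifier added elsewhere in this series, letting perf stat fall back to ungrouped counters if the group cannot be scheduled. Roughly, usage looks like the following (the metric name and the -M spelling are assumptions, not taken from this diff):

    # metric resolved from the JSON tables, internally expanded into a "{...}:W" group
    perf stat -M SomeMetric -a sleep 1

    # the same weak-group modifier written by hand on an explicit group
    perf stat -e '{cycles,instructions}:W' sleep 1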
+ */ + return NULL; +} + +static int metricgroup__setup_events(struct list_head *groups, + struct perf_evlist *perf_evlist, + struct rblist *metric_events_list) +{ + struct metric_event *me; + struct metric_expr *expr; + int i = 0; + int ret = 0; + struct egroup *eg; + struct perf_evsel *evsel; + + list_for_each_entry (eg, groups, nd) { + struct perf_evsel **metric_events; + + metric_events = calloc(sizeof(void *), eg->idnum + 1); + if (!metric_events) { + ret = -ENOMEM; + break; + } + evsel = find_evsel(perf_evlist, eg->ids, eg->idnum, + metric_events); + if (!evsel) { + pr_debug("Cannot resolve %s: %s\n", + eg->metric_name, eg->metric_expr); + continue; + } + for (i = 0; i < eg->idnum; i++) + metric_events[i]->collect_stat = true; + me = metricgroup__lookup(metric_events_list, evsel, true); + if (!me) { + ret = -ENOMEM; + break; + } + expr = malloc(sizeof(struct metric_expr)); + if (!expr) { + ret = -ENOMEM; + break; + } + expr->metric_expr = eg->metric_expr; + expr->metric_name = eg->metric_name; + expr->metric_events = metric_events; + list_add(&expr->nd, &me->head); + } + return ret; +} + +static bool match_metric(const char *n, const char *list) +{ + int len; + char *m; + + if (!list) + return false; + if (!strcmp(list, "all")) + return true; + if (!n) + return !strcasecmp(list, "No_group"); + len = strlen(list); + m = strcasestr(n, list); + if (!m) + return false; + if ((m == n || m[-1] == ';' || m[-1] == ' ') && + (m[len] == 0 || m[len] == ';')) + return true; + return false; +} + +struct mep { + struct rb_node nd; + const char *name; + struct strlist *metrics; +}; + +static int mep_cmp(struct rb_node *rb_node, const void *entry) +{ + struct mep *a = container_of(rb_node, struct mep, nd); + struct mep *b = (struct mep *)entry; + + return strcmp(a->name, b->name); +} + +static struct rb_node *mep_new(struct rblist *rl __maybe_unused, + const void *entry) +{ + struct mep *me = malloc(sizeof(struct mep)); + + if (!me) + return NULL; + memcpy(me, entry, sizeof(struct mep)); + me->name = strdup(me->name); + if (!me->name) + goto out_me; + me->metrics = strlist__new(NULL, NULL); + if (!me->metrics) + goto out_name; + return &me->nd; +out_name: + free((char *)me->name); +out_me: + free(me); + return NULL; +} + +static struct mep *mep_lookup(struct rblist *groups, const char *name) +{ + struct rb_node *nd; + struct mep me = { + .name = name + }; + nd = rblist__find(groups, &me); + if (nd) + return container_of(nd, struct mep, nd); + rblist__add_node(groups, &me); + nd = rblist__find(groups, &me); + if (nd) + return container_of(nd, struct mep, nd); + return NULL; +} + +static void mep_delete(struct rblist *rl __maybe_unused, + struct rb_node *nd) +{ + struct mep *me = container_of(nd, struct mep, nd); + + strlist__delete(me->metrics); + free((void *)me->name); + free(me); +} + +static void metricgroup__print_strlist(struct strlist *metrics, bool raw) +{ + struct str_node *sn; + int n = 0; + + strlist__for_each_entry (sn, metrics) { + if (raw) + printf("%s%s", n > 0 ? 
" " : "", sn->s); + else + printf(" %s\n", sn->s); + n++; + } + if (raw) + putchar('\n'); +} + +void metricgroup__print(bool metrics, bool metricgroups, char *filter, + bool raw) +{ + struct pmu_events_map *map = perf_pmu__find_map(); + struct pmu_event *pe; + int i; + struct rblist groups; + struct rb_node *node, *next; + struct strlist *metriclist = NULL; + + if (!map) + return; + + if (!metricgroups) { + metriclist = strlist__new(NULL, NULL); + if (!metriclist) + return; + } + + rblist__init(&groups); + groups.node_new = mep_new; + groups.node_cmp = mep_cmp; + groups.node_delete = mep_delete; + for (i = 0; ; i++) { + const char *g; + pe = &map->table[i]; + + if (!pe->name && !pe->metric_group && !pe->metric_name) + break; + if (!pe->metric_expr) + continue; + g = pe->metric_group; + if (!g && pe->metric_name) { + if (pe->name) + continue; + g = "No_group"; + } + if (g) { + char *omg; + char *mg = strdup(g); + + if (!mg) + return; + omg = mg; + while ((g = strsep(&mg, ";")) != NULL) { + struct mep *me; + char *s; + + if (*g == 0) + g = "No_group"; + while (isspace(*g)) + g++; + if (filter && !strstr(g, filter)) + continue; + if (raw) + s = (char *)pe->metric_name; + else { + if (asprintf(&s, "%s\n\t[%s]", + pe->metric_name, pe->desc) < 0) + return; + } + + if (!s) + continue; + + if (!metricgroups) { + strlist__add(metriclist, s); + } else { + me = mep_lookup(&groups, g); + if (!me) + continue; + strlist__add(me->metrics, s); + } + } + free(omg); + } + } + + if (metricgroups && !raw) + printf("\nMetric Groups:\n\n"); + else if (metrics && !raw) + printf("\nMetrics:\n\n"); + + for (node = rb_first(&groups.entries); node; node = next) { + struct mep *me = container_of(node, struct mep, nd); + + if (metricgroups) + printf("%s%s%s", me->name, metrics ? ":" : "", raw ? " " : "\n"); + if (metrics) + metricgroup__print_strlist(me->metrics, raw); + next = rb_next(node); + rblist__remove_node(&groups, node); + } + if (!metricgroups) + metricgroup__print_strlist(metriclist, raw); + strlist__delete(metriclist); +} + +static int metricgroup__add_metric(const char *metric, struct strbuf *events, + struct list_head *group_list) +{ + struct pmu_events_map *map = perf_pmu__find_map(); + struct pmu_event *pe; + int ret = -EINVAL; + int i, j; + + if (!map) + return 0; + + for (i = 0; ; i++) { + pe = &map->table[i]; + + if (!pe->name && !pe->metric_group && !pe->metric_name) + break; + if (!pe->metric_expr) + continue; + if (match_metric(pe->metric_group, metric) || + match_metric(pe->metric_name, metric)) { + const char **ids; + int idnum; + struct egroup *eg; + + pr_debug("metric expr %s for %s\n", pe->metric_expr, pe->metric_name); + + if (expr__find_other(pe->metric_expr, + NULL, &ids, &idnum) < 0) + continue; + if (events->len > 0) + strbuf_addf(events, ","); + for (j = 0; j < idnum; j++) { + pr_debug("found event %s\n", ids[j]); + strbuf_addf(events, "%s%s", + j == 0 ? 
"{" : ",", + ids[j]); + } + strbuf_addf(events, "}:W"); + + eg = malloc(sizeof(struct egroup)); + if (!eg) { + ret = -ENOMEM; + break; + } + eg->ids = ids; + eg->idnum = idnum; + eg->metric_name = pe->metric_name; + eg->metric_expr = pe->metric_expr; + list_add_tail(&eg->nd, group_list); + ret = 0; + } + } + return ret; +} + +static int metricgroup__add_metric_list(const char *list, struct strbuf *events, + struct list_head *group_list) +{ + char *llist, *nlist, *p; + int ret = -EINVAL; + + nlist = strdup(list); + if (!nlist) + return -ENOMEM; + llist = nlist; + + strbuf_init(events, 100); + strbuf_addf(events, "%s", ""); + + while ((p = strsep(&llist, ",")) != NULL) { + ret = metricgroup__add_metric(p, events, group_list); + if (ret == -EINVAL) { + fprintf(stderr, "Cannot find metric or group `%s'\n", + p); + break; + } + } + free(nlist); + return ret; +} + +static void metricgroup__free_egroups(struct list_head *group_list) +{ + struct egroup *eg, *egtmp; + int i; + + list_for_each_entry_safe (eg, egtmp, group_list, nd) { + for (i = 0; i < eg->idnum; i++) + free((char *)eg->ids[i]); + free(eg->ids); + free(eg); + } +} + +int metricgroup__parse_groups(const struct option *opt, + const char *str, + struct rblist *metric_events) +{ + struct parse_events_error parse_error; + struct perf_evlist *perf_evlist = *(struct perf_evlist **)opt->value; + struct strbuf extra_events; + LIST_HEAD(group_list); + int ret; + + if (metric_events->nr_entries == 0) + metricgroup__rblist_init(metric_events); + ret = metricgroup__add_metric_list(str, &extra_events, &group_list); + if (ret) + return ret; + pr_debug("adding %s\n", extra_events.buf); + memset(&parse_error, 0, sizeof(struct parse_events_error)); + ret = parse_events(perf_evlist, extra_events.buf, &parse_error); + if (ret) { + parse_events_print_error(&parse_error, extra_events.buf); + goto out; + } + strbuf_release(&extra_events); + ret = metricgroup__setup_events(&group_list, perf_evlist, + metric_events); +out: + metricgroup__free_egroups(&group_list); + return ret; +} diff --git a/tools/perf/util/metricgroup.h b/tools/perf/util/metricgroup.h new file mode 100644 index 000000000000..06854e125ee7 --- /dev/null +++ b/tools/perf/util/metricgroup.h @@ -0,0 +1,31 @@ +#ifndef METRICGROUP_H +#define METRICGROUP_H 1 + +#include "linux/list.h" +#include "rblist.h" +#include <subcmd/parse-options.h> +#include "evlist.h" +#include "strbuf.h" + +struct metric_event { + struct rb_node nd; + struct perf_evsel *evsel; + struct list_head head; /* list of metric_expr */ +}; + +struct metric_expr { + struct list_head nd; + const char *metric_expr; + const char *metric_name; + struct perf_evsel **metric_events; +}; + +struct metric_event *metricgroup__lookup(struct rblist *metric_events, + struct perf_evsel *evsel, + bool create); +int metricgroup__parse_groups(const struct option *opt, + const char *str, + struct rblist *metric_events); + +void metricgroup__print(bool metrics, bool groups, char *filter, bool raw); +#endif diff --git a/tools/perf/util/mmap.c b/tools/perf/util/mmap.c new file mode 100644 index 000000000000..9fe5f9c7d577 --- /dev/null +++ b/tools/perf/util/mmap.c @@ -0,0 +1,352 @@ +/* + * Copyright (C) 2011-2017, Red Hat Inc, Arnaldo Carvalho de Melo <acme@redhat.com> + * + * Parts came from evlist.c builtin-{top,stat,record}.c, see those files for further + * copyright notes. + * + * Released under the GPL v2. 
(and only v2, not any later version) + */ + +#include <sys/mman.h> +#include <inttypes.h> +#include <asm/bug.h> +#include "debug.h" +#include "event.h" +#include "mmap.h" +#include "util.h" /* page_size */ + +size_t perf_mmap__mmap_len(struct perf_mmap *map) +{ + return map->mask + 1 + page_size; +} + +/* When check_messup is true, 'end' must points to a good entry */ +static union perf_event *perf_mmap__read(struct perf_mmap *map, bool check_messup, + u64 start, u64 end, u64 *prev) +{ + unsigned char *data = map->base + page_size; + union perf_event *event = NULL; + int diff = end - start; + + if (check_messup) { + /* + * If we're further behind than half the buffer, there's a chance + * the writer will bite our tail and mess up the samples under us. + * + * If we somehow ended up ahead of the 'end', we got messed up. + * + * In either case, truncate and restart at 'end'. + */ + if (diff > map->mask / 2 || diff < 0) { + fprintf(stderr, "WARNING: failed to keep up with mmap data.\n"); + + /* + * 'end' points to a known good entry, start there. + */ + start = end; + diff = 0; + } + } + + if (diff >= (int)sizeof(event->header)) { + size_t size; + + event = (union perf_event *)&data[start & map->mask]; + size = event->header.size; + + if (size < sizeof(event->header) || diff < (int)size) { + event = NULL; + goto broken_event; + } + + /* + * Event straddles the mmap boundary -- header should always + * be inside due to u64 alignment of output. + */ + if ((start & map->mask) + size != ((start + size) & map->mask)) { + unsigned int offset = start; + unsigned int len = min(sizeof(*event), size), cpy; + void *dst = map->event_copy; + + do { + cpy = min(map->mask + 1 - (offset & map->mask), len); + memcpy(dst, &data[offset & map->mask], cpy); + offset += cpy; + dst += cpy; + len -= cpy; + } while (len); + + event = (union perf_event *)map->event_copy; + } + + start += size; + } + +broken_event: + if (prev) + *prev = start; + + return event; +} + +union perf_event *perf_mmap__read_forward(struct perf_mmap *map, bool check_messup) +{ + u64 head; + u64 old = map->prev; + + /* + * Check if event was unmapped due to a POLLHUP/POLLERR. + */ + if (!refcount_read(&map->refcnt)) + return NULL; + + head = perf_mmap__read_head(map); + + return perf_mmap__read(map, check_messup, old, head, &map->prev); +} + +union perf_event *perf_mmap__read_backward(struct perf_mmap *map) +{ + u64 head, end; + u64 start = map->prev; + + /* + * Check if event was unmapped due to a POLLHUP/POLLERR. + */ + if (!refcount_read(&map->refcnt)) + return NULL; + + head = perf_mmap__read_head(map); + if (!head) + return NULL; + + /* + * 'head' pointer starts from 0. Kernel minus sizeof(record) form + * it each time when kernel writes to it, so in fact 'head' is + * negative. 'end' pointer is made manually by adding the size of + * the ring buffer to 'head' pointer, means the validate data can + * read is the whole ring buffer. If 'end' is positive, the ring + * buffer has not fully filled, so we must adjust 'end' to 0. + * + * However, since both 'head' and 'end' is unsigned, we can't + * simply compare 'end' against 0. Here we compare '-head' and + * the size of the ring buffer, where -head is the number of bytes + * kernel write to the ring buffer. 
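The 'end' computation that follows this comment is easier to follow with concrete numbers. A worked example, assuming a hypothetically small data area of map->mask + 1 == 4096 bytes:

    /*
     * Kernel has written 128 bytes of records to the backward ring buffer:
     *     head  = (u64)0 - 128
     *     -head = 128,  which is <  4096  ->  buffer not fully filled, end = 0
     *
     * Kernel has written 6400 bytes (the buffer has wrapped):
     *     head  = (u64)0 - 6400
     *     -head = 6400, which is >= 4096  ->  end = head + 4096
     *
     * Either way, at most one buffer's worth of data between 'head' and 'end'
     * is treated as readable.
     */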
+ */ + if (-head < (u64)(map->mask + 1)) + end = 0; + else + end = head + map->mask + 1; + + return perf_mmap__read(map, false, start, end, &map->prev); +} + +void perf_mmap__read_catchup(struct perf_mmap *map) +{ + u64 head; + + if (!refcount_read(&map->refcnt)) + return; + + head = perf_mmap__read_head(map); + map->prev = head; +} + +static bool perf_mmap__empty(struct perf_mmap *map) +{ + return perf_mmap__read_head(map) == map->prev && !map->auxtrace_mmap.base; +} + +void perf_mmap__get(struct perf_mmap *map) +{ + refcount_inc(&map->refcnt); +} + +void perf_mmap__put(struct perf_mmap *map) +{ + BUG_ON(map->base && refcount_read(&map->refcnt) == 0); + + if (refcount_dec_and_test(&map->refcnt)) + perf_mmap__munmap(map); +} + +void perf_mmap__consume(struct perf_mmap *map, bool overwrite) +{ + if (!overwrite) { + u64 old = map->prev; + + perf_mmap__write_tail(map, old); + } + + if (refcount_read(&map->refcnt) == 1 && perf_mmap__empty(map)) + perf_mmap__put(map); +} + +int __weak auxtrace_mmap__mmap(struct auxtrace_mmap *mm __maybe_unused, + struct auxtrace_mmap_params *mp __maybe_unused, + void *userpg __maybe_unused, + int fd __maybe_unused) +{ + return 0; +} + +void __weak auxtrace_mmap__munmap(struct auxtrace_mmap *mm __maybe_unused) +{ +} + +void __weak auxtrace_mmap_params__init(struct auxtrace_mmap_params *mp __maybe_unused, + off_t auxtrace_offset __maybe_unused, + unsigned int auxtrace_pages __maybe_unused, + bool auxtrace_overwrite __maybe_unused) +{ +} + +void __weak auxtrace_mmap_params__set_idx(struct auxtrace_mmap_params *mp __maybe_unused, + struct perf_evlist *evlist __maybe_unused, + int idx __maybe_unused, + bool per_cpu __maybe_unused) +{ +} + +void perf_mmap__munmap(struct perf_mmap *map) +{ + if (map->base != NULL) { + munmap(map->base, perf_mmap__mmap_len(map)); + map->base = NULL; + map->fd = -1; + refcount_set(&map->refcnt, 0); + } + auxtrace_mmap__munmap(&map->auxtrace_mmap); +} + +int perf_mmap__mmap(struct perf_mmap *map, struct mmap_params *mp, int fd) +{ + /* + * The last one will be done at perf_evlist__mmap_consume(), so that we + * make sure we don't prevent tools from consuming every last event in + * the ring buffer. + * + * I.e. we can get the POLLHUP meaning that the fd doesn't exist + * anymore, but the last events for it are still in the ring buffer, + * waiting to be consumed. + * + * Tools can chose to ignore this at their own discretion, but the + * evlist layer can't just drop it when filtering events in + * perf_evlist__filter_pollfd(). 
+ */ + refcount_set(&map->refcnt, 2); + map->prev = 0; + map->mask = mp->mask; + map->base = mmap(NULL, perf_mmap__mmap_len(map), mp->prot, + MAP_SHARED, fd, 0); + if (map->base == MAP_FAILED) { + pr_debug2("failed to mmap perf event ring buffer, error %d\n", + errno); + map->base = NULL; + return -1; + } + map->fd = fd; + + if (auxtrace_mmap__mmap(&map->auxtrace_mmap, + &mp->auxtrace_mp, map->base, fd)) + return -1; + + return 0; +} + +static int backward_rb_find_range(void *buf, int mask, u64 head, u64 *start, u64 *end) +{ + struct perf_event_header *pheader; + u64 evt_head = head; + int size = mask + 1; + + pr_debug2("backward_rb_find_range: buf=%p, head=%"PRIx64"\n", buf, head); + pheader = (struct perf_event_header *)(buf + (head & mask)); + *start = head; + while (true) { + if (evt_head - head >= (unsigned int)size) { + pr_debug("Finished reading backward ring buffer: rewind\n"); + if (evt_head - head > (unsigned int)size) + evt_head -= pheader->size; + *end = evt_head; + return 0; + } + + pheader = (struct perf_event_header *)(buf + (evt_head & mask)); + + if (pheader->size == 0) { + pr_debug("Finished reading backward ring buffer: get start\n"); + *end = evt_head; + return 0; + } + + evt_head += pheader->size; + pr_debug3("move evt_head: %"PRIx64"\n", evt_head); + } + WARN_ONCE(1, "Shouldn't get here\n"); + return -1; +} + +static int rb_find_range(void *data, int mask, u64 head, u64 old, + u64 *start, u64 *end, bool backward) +{ + if (!backward) { + *start = old; + *end = head; + return 0; + } + + return backward_rb_find_range(data, mask, head, start, end); +} + +int perf_mmap__push(struct perf_mmap *md, bool overwrite, bool backward, + void *to, int push(void *to, void *buf, size_t size)) +{ + u64 head = perf_mmap__read_head(md); + u64 old = md->prev; + u64 end = head, start = old; + unsigned char *data = md->base + page_size; + unsigned long size; + void *buf; + int rc = 0; + + if (rb_find_range(data, md->mask, head, old, &start, &end, backward)) + return -1; + + if (start == end) + return 0; + + size = end - start; + if (size > (unsigned long)(md->mask) + 1) { + WARN_ONCE(1, "failed to keep up with mmap data. (warn only once)\n"); + + md->prev = head; + perf_mmap__consume(md, overwrite || backward); + return 0; + } + + if ((start & md->mask) + size != (end & md->mask)) { + buf = &data[start & md->mask]; + size = md->mask + 1 - (start & md->mask); + start += size; + + if (push(to, buf, size) < 0) { + rc = -1; + goto out; + } + } + + buf = &data[start & md->mask]; + size = end - start; + start += size; + + if (push(to, buf, size) < 0) { + rc = -1; + goto out; + } + + md->prev = head; + perf_mmap__consume(md, overwrite || backward); +out: + return rc; +} diff --git a/tools/perf/util/mmap.h b/tools/perf/util/mmap.h new file mode 100644 index 000000000000..efd78b827b05 --- /dev/null +++ b/tools/perf/util/mmap.h @@ -0,0 +1,97 @@ +#ifndef __PERF_MMAP_H +#define __PERF_MMAP_H 1 + +#include <linux/compiler.h> +#include <linux/refcount.h> +#include <linux/types.h> +#include <asm/barrier.h> +#include <stdbool.h> +#include "auxtrace.h" +#include "event.h" + +/** + * struct perf_mmap - perf's ring buffer mmap details + * + * @refcnt - e.g. code using PERF_EVENT_IOC_SET_OUTPUT to share this + */ +struct perf_mmap { + void *base; + int mask; + int fd; + refcount_t refcnt; + u64 prev; + struct auxtrace_mmap auxtrace_mmap; + char event_copy[PERF_SAMPLE_MAX_SIZE] __aligned(8); +}; + +/* + * State machine of bkw_mmap_state: + * + * .________________(forbid)_____________. 
+ * | V + * NOTREADY --(0)--> RUNNING --(1)--> DATA_PENDING --(2)--> EMPTY + * ^ ^ | ^ | + * | |__(forbid)____/ |___(forbid)___/| + * | | + * \_________________(3)_______________/ + * + * NOTREADY : Backward ring buffers are not ready + * RUNNING : Backward ring buffers are recording + * DATA_PENDING : We are required to collect data from backward ring buffers + * EMPTY : We have collected data from backward ring buffers. + * + * (0): Setup backward ring buffer + * (1): Pause ring buffers for reading + * (2): Read from ring buffers + * (3): Resume ring buffers for recording + */ +enum bkw_mmap_state { + BKW_MMAP_NOTREADY, + BKW_MMAP_RUNNING, + BKW_MMAP_DATA_PENDING, + BKW_MMAP_EMPTY, +}; + +struct mmap_params { + int prot, mask; + struct auxtrace_mmap_params auxtrace_mp; +}; + +int perf_mmap__mmap(struct perf_mmap *map, struct mmap_params *mp, int fd); +void perf_mmap__munmap(struct perf_mmap *map); + +void perf_mmap__get(struct perf_mmap *map); +void perf_mmap__put(struct perf_mmap *map); + +void perf_mmap__consume(struct perf_mmap *map, bool overwrite); + +void perf_mmap__read_catchup(struct perf_mmap *md); + +static inline u64 perf_mmap__read_head(struct perf_mmap *mm) +{ + struct perf_event_mmap_page *pc = mm->base; + u64 head = ACCESS_ONCE(pc->data_head); + rmb(); + return head; +} + +static inline void perf_mmap__write_tail(struct perf_mmap *md, u64 tail) +{ + struct perf_event_mmap_page *pc = md->base; + + /* + * ensure all reads are done before we write the tail out. + */ + mb(); + pc->data_tail = tail; +} + +union perf_event *perf_mmap__read_forward(struct perf_mmap *map, bool check_messup); +union perf_event *perf_mmap__read_backward(struct perf_mmap *map); + +int perf_mmap__push(struct perf_mmap *md, bool overwrite, bool backward, + void *to, int push(void *to, void *buf, size_t size)); + +size_t perf_mmap__mmap_len(struct perf_mmap *map); + +#endif /*__PERF_MMAP_H */ diff --git a/tools/perf/util/namespaces.c b/tools/perf/util/namespaces.c index a58e91197729..5be021701f34 100644 --- a/tools/perf/util/namespaces.c +++ b/tools/perf/util/namespaces.c @@ -11,6 +11,7 @@ #include "event.h" #include <sys/types.h> #include <sys/stat.h> +#include <fcntl.h> #include <limits.h> #include <sched.h> #include <stdlib.h> diff --git a/tools/perf/util/namespaces.h b/tools/perf/util/namespaces.h index 05d82601c9a6..760558dcfd18 100644 --- a/tools/perf/util/namespaces.h +++ b/tools/perf/util/namespaces.h @@ -9,9 +9,10 @@ #ifndef __PERF_NAMESPACES_H #define __PERF_NAMESPACES_H -#include "../perf.h" -#include <linux/list.h> +#include <sys/types.h> +#include <linux/perf_event.h> #include <linux/refcount.h> +#include <linux/types.h> struct namespaces_event; diff --git a/tools/perf/util/parse-events.c b/tools/perf/util/parse-events.c index 56694e3409ea..a7fcd95961ef 100644 --- a/tools/perf/util/parse-events.c +++ b/tools/perf/util/parse-events.c @@ -29,6 +29,7 @@ #include "probe-file.h" #include "asm/bug.h" #include "util/parse-branch-options.h" +#include "metricgroup.h" #define MAX_NAME_LEN 100 @@ -1220,11 +1221,17 @@ static int __parse_events_add_pmu(struct parse_events_state *parse_state, struct perf_pmu_info info; struct perf_pmu *pmu; struct perf_evsel *evsel; + struct parse_events_error *err = parse_state->error; LIST_HEAD(config_terms); pmu = perf_pmu__find(name); - if (!pmu) + if (!pmu) { + if (asprintf(&err->str, + "Cannot find PMU `%s'. 
Missing kernel support?", + name) < 0) + err->str = NULL; return -EINVAL; + } if (pmu->default_config) { memcpy(&attr, pmu->default_config, @@ -1368,6 +1375,7 @@ struct event_modifier { int exclude_GH; int sample_read; int pinned; + int weak; }; static int get_event_modifier(struct event_modifier *mod, char *str, @@ -1386,6 +1394,7 @@ static int get_event_modifier(struct event_modifier *mod, char *str, int exclude = eu | ek | eh; int exclude_GH = evsel ? evsel->exclude_GH : 0; + int weak = 0; memset(mod, 0, sizeof(*mod)); @@ -1423,6 +1432,8 @@ static int get_event_modifier(struct event_modifier *mod, char *str, sample_read = 1; } else if (*str == 'D') { pinned = 1; + } else if (*str == 'W') { + weak = 1; } else break; @@ -1453,6 +1464,7 @@ static int get_event_modifier(struct event_modifier *mod, char *str, mod->exclude_GH = exclude_GH; mod->sample_read = sample_read; mod->pinned = pinned; + mod->weak = weak; return 0; } @@ -1466,7 +1478,7 @@ static int check_modifier(char *str) char *p = str; /* The sizeof includes 0 byte as well. */ - if (strlen(str) > (sizeof("ukhGHpppPSDI") - 1)) + if (strlen(str) > (sizeof("ukhGHpppPSDIW") - 1)) return -1; while (*p) { @@ -1506,6 +1518,7 @@ int parse_events__modifier_event(struct list_head *list, char *str, bool add) evsel->exclude_GH = mod.exclude_GH; evsel->sample_read = mod.sample_read; evsel->precise_max = mod.precise_max; + evsel->weak_group = mod.weak; if (perf_evsel__is_group_leader(evsel)) evsel->attr.pinned = mod.pinned; @@ -1728,8 +1741,8 @@ static int get_term_width(void) return ws.ws_col > MAX_WIDTH ? MAX_WIDTH : ws.ws_col; } -static void parse_events_print_error(struct parse_events_error *err, - const char *event) +void parse_events_print_error(struct parse_events_error *err, + const char *event) { const char *str = "invalid or unsupported event: "; char _buf[MAX_WIDTH]; @@ -1784,8 +1797,6 @@ static void parse_events_print_error(struct parse_events_error *err, zfree(&err->str); zfree(&err->help); } - - fprintf(stderr, "Run 'perf list' for a list of valid events\n"); } #undef MAX_WIDTH @@ -1797,8 +1808,10 @@ int parse_events_option(const struct option *opt, const char *str, struct parse_events_error err = { .idx = 0, }; int ret = parse_events(evlist, str, &err); - if (ret) + if (ret) { parse_events_print_error(&err, str); + fprintf(stderr, "Run 'perf list' for a list of valid events\n"); + } return ret; } @@ -2376,6 +2389,8 @@ void print_events(const char *event_glob, bool name_only, bool quiet_flag, print_tracepoint_events(NULL, NULL, name_only); print_sdt_events(NULL, NULL, name_only); + + metricgroup__print(true, true, NULL, name_only); } int parse_events__is_hardcoded_term(struct parse_events_term *term) diff --git a/tools/perf/util/parse-events.h b/tools/perf/util/parse-events.h index eed50b54bab3..be337c266697 100644 --- a/tools/perf/util/parse-events.h +++ b/tools/perf/util/parse-events.h @@ -203,6 +203,9 @@ int is_valid_tracepoint(const char *event_string); int valid_event_mount(const char *eventfs); char *parse_events_formats_error_string(char *additional_terms); +void parse_events_print_error(struct parse_events_error *err, + const char *event); + #ifdef HAVE_LIBELF_SUPPORT /* * If the probe point starts with '%', diff --git a/tools/perf/util/parse-events.l b/tools/perf/util/parse-events.l index 025729510525..655ecff636a8 100644 --- a/tools/perf/util/parse-events.l +++ b/tools/perf/util/parse-events.l @@ -179,7 +179,7 @@ name [a-zA-Z_*?][a-zA-Z0-9_*?.]* name_minus [a-zA-Z_*?][a-zA-Z0-9\-_*?.:]* drv_cfg_term 
[a-zA-Z0-9_\.]+(=[a-zA-Z0-9_*?\.:]+)? /* If you add a modifier you need to update check_modifier() */ -modifier_event [ukhpPGHSDI]+ +modifier_event [ukhpPGHSDIW]+ modifier_bp [rwx]{1,3} %% @@ -306,6 +306,7 @@ cpu-migrations|migrations { return sym(yyscanner, PERF_TYPE_SOFTWARE, PERF_COU alignment-faults { return sym(yyscanner, PERF_TYPE_SOFTWARE, PERF_COUNT_SW_ALIGNMENT_FAULTS); } emulation-faults { return sym(yyscanner, PERF_TYPE_SOFTWARE, PERF_COUNT_SW_EMULATION_FAULTS); } dummy { return sym(yyscanner, PERF_TYPE_SOFTWARE, PERF_COUNT_SW_DUMMY); } +duration_time { return sym(yyscanner, PERF_TYPE_SOFTWARE, PERF_COUNT_SW_DUMMY); } bpf-output { return sym(yyscanner, PERF_TYPE_SOFTWARE, PERF_COUNT_SW_BPF_OUTPUT); } /* diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c index b10b35a63138..07cb2ac041d7 100644 --- a/tools/perf/util/pmu.c +++ b/tools/perf/util/pmu.c @@ -3,6 +3,7 @@ #include <linux/compiler.h> #include <sys/types.h> #include <errno.h> +#include <fcntl.h> #include <sys/stat.h> #include <unistd.h> #include <stdio.h> @@ -541,16 +542,8 @@ char * __weak get_cpuid_str(void) return NULL; } -/* - * From the pmu_events_map, find the table of PMU events that corresponds - * to the current running CPU. Then, add all PMU events from that table - * as aliases. - */ -static void pmu_add_cpu_aliases(struct list_head *head, const char *name) +static char *perf_pmu__getcpuid(void) { - int i; - struct pmu_events_map *map; - struct pmu_event *pe; char *cpuid; static bool printed; @@ -560,22 +553,50 @@ static void pmu_add_cpu_aliases(struct list_head *head, const char *name) if (!cpuid) cpuid = get_cpuid_str(); if (!cpuid) - return; + return NULL; if (!printed) { pr_debug("Using CPUID %s\n", cpuid); printed = true; } + return cpuid; +} + +struct pmu_events_map *perf_pmu__find_map(void) +{ + struct pmu_events_map *map; + char *cpuid = perf_pmu__getcpuid(); + int i; i = 0; - while (1) { + for (;;) { map = &pmu_events_map[i++]; - if (!map->table) - goto out; + if (!map->table) { + map = NULL; + break; + } if (!strcmp(map->cpuid, cpuid)) break; } + free(cpuid); + return map; +} + +/* + * From the pmu_events_map, find the table of PMU events that corresponds + * to the current running CPU. Then, add all PMU events from that table + * as aliases. + */ +static void pmu_add_cpu_aliases(struct list_head *head, const char *name) +{ + int i; + struct pmu_events_map *map; + struct pmu_event *pe; + + map = perf_pmu__find_map(); + if (!map) + return; /* * Found a matching PMU events table. Create aliases @@ -585,8 +606,11 @@ static void pmu_add_cpu_aliases(struct list_head *head, const char *name) const char *pname; pe = &map->table[i++]; - if (!pe->name) + if (!pe->name) { + if (pe->metric_group || pe->metric_name) + continue; break; + } pname = pe->pmu ? 
pe->pmu : "cpu"; if (strncmp(pname, name, strlen(pname))) @@ -600,9 +624,6 @@ static void pmu_add_cpu_aliases(struct list_head *head, const char *name) (char *)pe->metric_expr, (char *)pe->metric_name); } - -out: - free(cpuid); } struct perf_event_attr * __weak diff --git a/tools/perf/util/pmu.h b/tools/perf/util/pmu.h index eca99435f4a0..27c75e635866 100644 --- a/tools/perf/util/pmu.h +++ b/tools/perf/util/pmu.h @@ -92,4 +92,6 @@ int perf_pmu__test(void); struct perf_event_attr *perf_pmu__get_default_config(struct perf_pmu *pmu); +struct pmu_events_map *perf_pmu__find_map(void); + #endif /* __PMU_H */ diff --git a/tools/perf/util/print_binary.c b/tools/perf/util/print_binary.c index 779e35c9e566..23e367063446 100644 --- a/tools/perf/util/print_binary.c +++ b/tools/perf/util/print_binary.c @@ -3,40 +3,42 @@ #include <linux/log2.h> #include "sane_ctype.h" -void print_binary(unsigned char *data, size_t len, - size_t bytes_per_line, print_binary_t printer, - void *extra) +int binary__fprintf(unsigned char *data, size_t len, + size_t bytes_per_line, binary__fprintf_t printer, + void *extra, FILE *fp) { size_t i, j, mask; + int printed = 0; if (!printer) - return; + return 0; bytes_per_line = roundup_pow_of_two(bytes_per_line); mask = bytes_per_line - 1; - printer(BINARY_PRINT_DATA_BEGIN, 0, extra); + printed += printer(BINARY_PRINT_DATA_BEGIN, 0, extra, fp); for (i = 0; i < len; i++) { if ((i & mask) == 0) { - printer(BINARY_PRINT_LINE_BEGIN, -1, extra); - printer(BINARY_PRINT_ADDR, i, extra); + printed += printer(BINARY_PRINT_LINE_BEGIN, -1, extra, fp); + printed += printer(BINARY_PRINT_ADDR, i, extra, fp); } - printer(BINARY_PRINT_NUM_DATA, data[i], extra); + printed += printer(BINARY_PRINT_NUM_DATA, data[i], extra, fp); if (((i & mask) == mask) || i == len - 1) { for (j = 0; j < mask-(i & mask); j++) - printer(BINARY_PRINT_NUM_PAD, -1, extra); + printed += printer(BINARY_PRINT_NUM_PAD, -1, extra, fp); - printer(BINARY_PRINT_SEP, i, extra); + printer(BINARY_PRINT_SEP, i, extra, fp); for (j = i & ~mask; j <= i; j++) - printer(BINARY_PRINT_CHAR_DATA, data[j], extra); + printed += printer(BINARY_PRINT_CHAR_DATA, data[j], extra, fp); for (j = 0; j < mask-(i & mask); j++) - printer(BINARY_PRINT_CHAR_PAD, i, extra); - printer(BINARY_PRINT_LINE_END, -1, extra); + printed += printer(BINARY_PRINT_CHAR_PAD, i, extra, fp); + printed += printer(BINARY_PRINT_LINE_END, -1, extra, fp); } } - printer(BINARY_PRINT_DATA_END, -1, extra); + printed += printer(BINARY_PRINT_DATA_END, -1, extra, fp); + return printed; } int is_printable_array(char *p, unsigned int len) diff --git a/tools/perf/util/print_binary.h b/tools/perf/util/print_binary.h index 2be3075e2b05..2a1554afc957 100644 --- a/tools/perf/util/print_binary.h +++ b/tools/perf/util/print_binary.h @@ -3,6 +3,7 @@ #define PERF_PRINT_BINARY_H #include <stddef.h> +#include <stdio.h> enum binary_printer_ops { BINARY_PRINT_DATA_BEGIN, @@ -17,12 +18,19 @@ enum binary_printer_ops { BINARY_PRINT_DATA_END, }; -typedef void (*print_binary_t)(enum binary_printer_ops op, - unsigned int val, void *extra); +typedef int (*binary__fprintf_t)(enum binary_printer_ops op, + unsigned int val, void *extra, FILE *fp); -void print_binary(unsigned char *data, size_t len, - size_t bytes_per_line, print_binary_t printer, - void *extra); +int binary__fprintf(unsigned char *data, size_t len, + size_t bytes_per_line, binary__fprintf_t printer, + void *extra, FILE *fp); + +static inline void print_binary(unsigned char *data, size_t len, + size_t bytes_per_line, binary__fprintf_t 
printer, + void *extra) +{ + binary__fprintf(data, len, bytes_per_line, printer, extra, stdout); +} int is_printable_array(char *p, unsigned int len); diff --git a/tools/perf/util/probe-file.c b/tools/perf/util/probe-file.c index cdf8d83a484c..4ae1123c6794 100644 --- a/tools/perf/util/probe-file.c +++ b/tools/perf/util/probe-file.c @@ -15,6 +15,7 @@ * */ #include <errno.h> +#include <fcntl.h> #include <sys/stat.h> #include <sys/types.h> #include <sys/uio.h> diff --git a/tools/perf/util/python-ext-sources b/tools/perf/util/python-ext-sources index e66dc495809a..b4f2f06722a7 100644 --- a/tools/perf/util/python-ext-sources +++ b/tools/perf/util/python-ext-sources @@ -10,6 +10,7 @@ util/ctype.c util/evlist.c util/evsel.c util/cpumap.c +util/mmap.c util/namespaces.c ../lib/bitmap.c ../lib/find_bit.c diff --git a/tools/perf/util/rb_resort.h b/tools/perf/util/rb_resort.h index 7d8972b33f6b..a920f702a74d 100644 --- a/tools/perf/util/rb_resort.h +++ b/tools/perf/util/rb_resort.h @@ -144,7 +144,8 @@ struct __name##_sorted *__name = __name##_sorted__new __ilist->rblist.nr_entries) /* For 'struct machine->threads' */ -#define DECLARE_RESORT_RB_MACHINE_THREADS(__name, __machine) \ - DECLARE_RESORT_RB(__name)(&__machine->threads, __machine->nr_threads) +#define DECLARE_RESORT_RB_MACHINE_THREADS(__name, __machine, hash_bucket) \ + DECLARE_RESORT_RB(__name)(&__machine->threads[hash_bucket].entries, \ + __machine->threads[hash_bucket].nr) #endif /* _PERF_RESORT_RB_H_ */ diff --git a/tools/perf/util/rwsem.c b/tools/perf/util/rwsem.c new file mode 100644 index 000000000000..5e52e7baa7b6 --- /dev/null +++ b/tools/perf/util/rwsem.c @@ -0,0 +1,32 @@ +#include "util.h" +#include "rwsem.h" + +int init_rwsem(struct rw_semaphore *sem) +{ + return pthread_rwlock_init(&sem->lock, NULL); +} + +int exit_rwsem(struct rw_semaphore *sem) +{ + return pthread_rwlock_destroy(&sem->lock); +} + +int down_read(struct rw_semaphore *sem) +{ + return perf_singlethreaded ? 0 : pthread_rwlock_rdlock(&sem->lock); +} + +int up_read(struct rw_semaphore *sem) +{ + return perf_singlethreaded ? 0 : pthread_rwlock_unlock(&sem->lock); +} + +int down_write(struct rw_semaphore *sem) +{ + return perf_singlethreaded ? 0 : pthread_rwlock_wrlock(&sem->lock); +} + +int up_write(struct rw_semaphore *sem) +{ + return perf_singlethreaded ? 
0 : pthread_rwlock_unlock(&sem->lock); +} diff --git a/tools/perf/util/rwsem.h b/tools/perf/util/rwsem.h new file mode 100644 index 000000000000..94565ad4d494 --- /dev/null +++ b/tools/perf/util/rwsem.h @@ -0,0 +1,19 @@ +#ifndef _PERF_RWSEM_H +#define _PERF_RWSEM_H + +#include <pthread.h> + +struct rw_semaphore { + pthread_rwlock_t lock; +}; + +int init_rwsem(struct rw_semaphore *sem); +int exit_rwsem(struct rw_semaphore *sem); + +int down_read(struct rw_semaphore *sem); +int up_read(struct rw_semaphore *sem); + +int down_write(struct rw_semaphore *sem); +int up_write(struct rw_semaphore *sem); + +#endif /* _PERF_RWSEM_H */ diff --git a/tools/perf/util/session.c b/tools/perf/util/session.c index da55081aefc6..5c412310f266 100644 --- a/tools/perf/util/session.c +++ b/tools/perf/util/session.c @@ -33,14 +33,14 @@ static int perf_session__deliver_event(struct perf_session *session, static int perf_session__open(struct perf_session *session) { - struct perf_data_file *file = session->file; + struct perf_data *data = session->data; if (perf_session__read_header(session) < 0) { pr_err("incompatible file format (rerun with -v to learn more)\n"); return -1; } - if (perf_data_file__is_pipe(file)) + if (perf_data__is_pipe(data)) return 0; if (perf_header__has_feat(&session->header, HEADER_STAT)) @@ -121,7 +121,7 @@ static int ordered_events__deliver_event(struct ordered_events *oe, session->tool, event->file_offset); } -struct perf_session *perf_session__new(struct perf_data_file *file, +struct perf_session *perf_session__new(struct perf_data *data, bool repipe, struct perf_tool *tool) { struct perf_session *session = zalloc(sizeof(*session)); @@ -135,13 +135,13 @@ struct perf_session *perf_session__new(struct perf_data_file *file, machines__init(&session->machines); ordered_events__init(&session->ordered_events, ordered_events__deliver_event); - if (file) { - if (perf_data_file__open(file)) + if (data) { + if (perf_data__open(data)) goto out_delete; - session->file = file; + session->data = data; - if (perf_data_file__is_read(file)) { + if (perf_data__is_read(data)) { if (perf_session__open(session) < 0) goto out_close; @@ -149,7 +149,7 @@ struct perf_session *perf_session__new(struct perf_data_file *file, * set session attributes that are present in perf.data * but not in pipe-mode. */ - if (!file->is_pipe) { + if (!data->is_pipe) { perf_session__set_id_hdr_size(session); perf_session__set_comm_exec(session); } @@ -158,7 +158,7 @@ struct perf_session *perf_session__new(struct perf_data_file *file, session->machines.host.env = &perf_env; } - if (!file || perf_data_file__is_write(file)) { + if (!data || perf_data__is_write(data)) { /* * In O_RDONLY mode this will be performed when reading the * kernel MMAP event, in perf_event__process_mmap(). @@ -171,7 +171,7 @@ struct perf_session *perf_session__new(struct perf_data_file *file, * In pipe-mode, evlist is empty until PERF_RECORD_HEADER_ATTR is * processed, so perf_evlist__sample_id_all is not meaningful here. 
*/ - if ((!file || !file->is_pipe) && tool && tool->ordering_requires_timestamps && + if ((!data || !data->is_pipe) && tool && tool->ordering_requires_timestamps && tool->ordered_events && !perf_evlist__sample_id_all(session->evlist)) { dump_printf("WARNING: No sample_id_all support, falling back to unordered processing\n"); tool->ordered_events = false; @@ -180,7 +180,7 @@ struct perf_session *perf_session__new(struct perf_data_file *file, return session; out_close: - perf_data_file__close(file); + perf_data__close(data); out_delete: perf_session__delete(session); out: @@ -202,8 +202,8 @@ void perf_session__delete(struct perf_session *session) perf_session__delete_threads(session); perf_env__exit(&session->header.env); machines__exit(&session->machines); - if (session->file) - perf_data_file__close(session->file); + if (session->data) + perf_data__close(session->data); free(session); } @@ -291,8 +291,8 @@ static s64 process_event_auxtrace_stub(struct perf_tool *tool __maybe_unused, __maybe_unused) { dump_printf(": unhandled!\n"); - if (perf_data_file__is_pipe(session->file)) - skipn(perf_data_file__fd(session->file), event->auxtrace.size); + if (perf_data__is_pipe(session->data)) + skipn(perf_data__fd(session->data), event->auxtrace.size); return event->auxtrace.size; } @@ -1350,7 +1350,7 @@ static s64 perf_session__process_user_event(struct perf_session *session, { struct ordered_events *oe = &session->ordered_events; struct perf_tool *tool = session->tool; - int fd = perf_data_file__fd(session->file); + int fd = perf_data__fd(session->data); int err; dump_event(session->evlist, event, file_offset, NULL); @@ -1450,10 +1450,10 @@ int perf_session__peek_event(struct perf_session *session, off_t file_offset, goto out_parse_sample; } - if (perf_data_file__is_pipe(session->file)) + if (perf_data__is_pipe(session->data)) return -1; - fd = perf_data_file__fd(session->file); + fd = perf_data__fd(session->data); hdr_sz = sizeof(struct perf_event_header); if (buf_sz < hdr_sz) @@ -1688,7 +1688,7 @@ static int __perf_session__process_pipe_events(struct perf_session *session) { struct ordered_events *oe = &session->ordered_events; struct perf_tool *tool = session->tool; - int fd = perf_data_file__fd(session->file); + int fd = perf_data__fd(session->data); union perf_event *event; uint32_t size, cur_size = 0; void *buf = NULL; @@ -1829,7 +1829,7 @@ static int __perf_session__process_events(struct perf_session *session, { struct ordered_events *oe = &session->ordered_events; struct perf_tool *tool = session->tool; - int fd = perf_data_file__fd(session->file); + int fd = perf_data__fd(session->data); u64 head, page_offset, file_offset, file_pos, size; int err, mmap_prot, mmap_flags, map_idx = 0; size_t mmap_size; @@ -1850,7 +1850,7 @@ static int __perf_session__process_events(struct perf_session *session, if (data_offset + data_size < file_size) file_size = data_offset + data_size; - ui_progress__init(&prog, file_size, "Processing events..."); + ui_progress__init_size(&prog, file_size, "Processing events..."); mmap_size = MMAP_SIZE; if (mmap_size > file_size) { @@ -1946,13 +1946,13 @@ out_err: int perf_session__process_events(struct perf_session *session) { - u64 size = perf_data_file__size(session->file); + u64 size = perf_data__size(session->data); int err; if (perf_session__register_idle_thread(session) < 0) return -ENOMEM; - if (!perf_data_file__is_pipe(session->file)) + if (!perf_data__is_pipe(session->data)) err = __perf_session__process_events(session, session->header.data_offset, 
session->header.data_size, size); diff --git a/tools/perf/util/session.h b/tools/perf/util/session.h index 3f63ee12471d..da1434a7c120 100644 --- a/tools/perf/util/session.h +++ b/tools/perf/util/session.h @@ -33,13 +33,13 @@ struct perf_session { void *one_mmap_addr; u64 one_mmap_offset; struct ordered_events ordered_events; - struct perf_data_file *file; + struct perf_data *data; struct perf_tool *tool; }; struct perf_tool; -struct perf_session *perf_session__new(struct perf_data_file *file, +struct perf_session *perf_session__new(struct perf_data *data, bool repipe, struct perf_tool *tool); void perf_session__delete(struct perf_session *session); diff --git a/tools/perf/util/sort.c b/tools/perf/util/sort.c index 84a33f1e9ec9..a00eacdf02ed 100644 --- a/tools/perf/util/sort.c +++ b/tools/perf/util/sort.c @@ -226,6 +226,9 @@ static int64_t _sort__sym_cmp(struct symbol *sym_l, struct symbol *sym_r) if (sym_l == sym_r) return 0; + if (sym_l->inlined || sym_r->inlined) + return strcmp(sym_l->name, sym_r->name); + if (sym_l->start != sym_r->start) return (int64_t)(sym_r->start - sym_l->start); @@ -284,6 +287,9 @@ static int _hist_entry__sym_snprintf(struct map *map, struct symbol *sym, ret += repsep_snprintf(bf + ret, size - ret, "%.*s", width - ret, sym->name); + if (sym->inlined) + ret += repsep_snprintf(bf + ret, size - ret, + " (inlined)"); } } else { size_t len = BITS_PER_LONG / 4; diff --git a/tools/perf/util/sort.h b/tools/perf/util/sort.h index b2b55e5149a7..f5901c10a563 100644 --- a/tools/perf/util/sort.h +++ b/tools/perf/util/sort.h @@ -130,7 +130,6 @@ struct hist_entry { }; char *srcline; char *srcfile; - struct inline_node *inline_node; struct symbol *parent; struct branch_info *branch_info; struct hists *hists; diff --git a/tools/perf/util/srcline.c b/tools/perf/util/srcline.c index 4105682afc7a..d19f05c56de6 100644 --- a/tools/perf/util/srcline.c +++ b/tools/perf/util/srcline.c @@ -11,7 +11,7 @@ #include "util/debug.h" #include "util/callchain.h" #include "srcline.h" - +#include "string2.h" #include "symbol.h" bool srcline_full_filename; @@ -34,28 +34,17 @@ static const char *dso__name(struct dso *dso) return dso_name; } -static int inline_list__append(char *filename, char *funcname, int line_nr, - struct inline_node *node, struct dso *dso) +static int inline_list__append(struct symbol *symbol, char *srcline, + struct inline_node *node) { struct inline_list *ilist; - char *demangled; ilist = zalloc(sizeof(*ilist)); if (ilist == NULL) return -1; - ilist->filename = filename; - ilist->line_nr = line_nr; - - if (dso != NULL) { - demangled = dso__demangle_sym(dso, 0, funcname); - if (demangled == NULL) { - ilist->funcname = funcname; - } else { - ilist->funcname = demangled; - free(funcname); - } - } + ilist->symbol = symbol; + ilist->srcline = srcline; if (callchain_param.order == ORDER_CALLEE) list_add_tail(&ilist->list, &node->val); @@ -65,6 +54,65 @@ static int inline_list__append(char *filename, char *funcname, int line_nr, return 0; } +/* basename version that takes a const input string */ +static const char *gnu_basename(const char *path) +{ + const char *base = strrchr(path, '/'); + + return base ? 
base + 1 : path; +} + +static char *srcline_from_fileline(const char *file, unsigned int line) +{ + char *srcline; + + if (!file) + return NULL; + + if (!srcline_full_filename) + file = gnu_basename(file); + + if (asprintf(&srcline, "%s:%u", file, line) < 0) + return NULL; + + return srcline; +} + +static struct symbol *new_inline_sym(struct dso *dso, + struct symbol *base_sym, + const char *funcname) +{ + struct symbol *inline_sym; + char *demangled = NULL; + + if (dso) { + demangled = dso__demangle_sym(dso, 0, funcname); + if (demangled) + funcname = demangled; + } + + if (base_sym && strcmp(funcname, base_sym->name) == 0) { + /* reuse the real, existing symbol */ + inline_sym = base_sym; + /* ensure that we don't alias an inlined symbol, which could + * lead to double frees in inline_node__delete + */ + assert(!base_sym->inlined); + } else { + /* create a fake symbol for the inline frame */ + inline_sym = symbol__new(base_sym ? base_sym->start : 0, + base_sym ? base_sym->end : 0, + base_sym ? base_sym->binding : 0, + funcname); + if (inline_sym) + inline_sym->inlined = 1; + } + + free(demangled); + + return inline_sym; +} + #ifdef HAVE_LIBBFD_SUPPORT /* @@ -208,18 +256,23 @@ static void addr2line_cleanup(struct a2l_data *a2l) #define MAX_INLINE_NEST 1024 static int inline_list__append_dso_a2l(struct dso *dso, - struct inline_node *node) + struct inline_node *node, + struct symbol *sym) { struct a2l_data *a2l = dso->a2l; - char *funcname = a2l->funcname ? strdup(a2l->funcname) : NULL; - char *filename = a2l->filename ? strdup(a2l->filename) : NULL; + struct symbol *inline_sym = new_inline_sym(dso, sym, a2l->funcname); + char *srcline = NULL; - return inline_list__append(filename, funcname, a2l->line, node, dso); + if (a2l->filename) + srcline = srcline_from_fileline(a2l->filename, a2l->line); + + return inline_list__append(inline_sym, srcline, node); } static int addr2line(const char *dso_name, u64 addr, char **file, unsigned int *line, struct dso *dso, - bool unwind_inlines, struct inline_node *node) + bool unwind_inlines, struct inline_node *node, + struct symbol *sym) { int ret = 0; struct a2l_data *a2l = dso->a2l; @@ -245,7 +298,7 @@ static int addr2line(const char *dso_name, u64 addr, if (unwind_inlines) { int cnt = 0; - if (node && inline_list__append_dso_a2l(dso, node)) + if (node && inline_list__append_dso_a2l(dso, node, sym)) return 0; while (bfd_find_inliner_info(a2l->abfd, &a2l->filename, @@ -256,7 +309,7 @@ static int addr2line(const char *dso_name, u64 addr, a2l->filename = NULL; if (node != NULL) { - if (inline_list__append_dso_a2l(dso, node)) + if (inline_list__append_dso_a2l(dso, node, sym)) return 0; // found at least one inline frame ret = 1; @@ -288,7 +341,7 @@ void dso__free_a2l(struct dso *dso) } static struct inline_node *addr2inlines(const char *dso_name, u64 addr, - struct dso *dso) + struct dso *dso, struct symbol *sym) { struct inline_node *node; @@ -301,17 +354,8 @@ static struct inline_node *addr2inlines(const char *dso_name, u64 addr, INIT_LIST_HEAD(&node->val); node->addr = addr; - if (!addr2line(dso_name, addr, NULL, NULL, dso, TRUE, node)) - goto out_free_inline_node; - - if (list_empty(&node->val)) - goto out_free_inline_node; - + addr2line(dso_name, addr, NULL, NULL, dso, true, node, sym); return node; - -out_free_inline_node: - inline_node__delete(node); - return NULL; } #else /* HAVE_LIBBFD_SUPPORT */ @@ -341,7 +385,8 @@ static int addr2line(const char *dso_name, u64 addr, char **file, unsigned int *line_nr, struct dso *dso __maybe_unused, bool 
unwind_inlines __maybe_unused, - struct inline_node *node __maybe_unused) + struct inline_node *node __maybe_unused, + struct symbol *sym __maybe_unused) { FILE *fp; char cmd[PATH_MAX]; @@ -381,16 +426,18 @@ void dso__free_a2l(struct dso *dso __maybe_unused) } static struct inline_node *addr2inlines(const char *dso_name, u64 addr, - struct dso *dso __maybe_unused) + struct dso *dso __maybe_unused, + struct symbol *sym) { FILE *fp; char cmd[PATH_MAX]; struct inline_node *node; char *filename = NULL; - size_t len; + char *funcname = NULL; + size_t filelen, funclen; unsigned int line_nr = 0; - scnprintf(cmd, sizeof(cmd), "addr2line -e %s -i %016"PRIx64, + scnprintf(cmd, sizeof(cmd), "addr2line -e %s -i -f %016"PRIx64, dso_name, addr); fp = popen(cmd, "r"); @@ -408,26 +455,34 @@ static struct inline_node *addr2inlines(const char *dso_name, u64 addr, INIT_LIST_HEAD(&node->val); node->addr = addr; - while (getline(&filename, &len, fp) != -1) { - if (filename_split(filename, &line_nr) != 1) { - free(filename); + /* addr2line -f generates two lines for each inlined functions */ + while (getline(&funcname, &funclen, fp) != -1) { + char *srcline; + struct symbol *inline_sym; + + rtrim(funcname); + + if (getline(&filename, &filelen, fp) == -1) goto out; - } - if (inline_list__append(filename, NULL, line_nr, node, - NULL) != 0) + if (filename_split(filename, &line_nr) != 1) goto out; - filename = NULL; + srcline = srcline_from_fileline(filename, line_nr); + inline_sym = new_inline_sym(dso, sym, funcname); + + if (inline_list__append(inline_sym, srcline, node) != 0) { + free(srcline); + if (inline_sym && inline_sym->inlined) + symbol__delete(inline_sym); + goto out; + } } out: pclose(fp); - - if (list_empty(&node->val)) { - inline_node__delete(node); - return NULL; - } + free(filename); + free(funcname); return node; } @@ -455,19 +510,18 @@ char *__get_srcline(struct dso *dso, u64 addr, struct symbol *sym, if (dso_name == NULL) goto out; - if (!addr2line(dso_name, addr, &file, &line, dso, unwind_inlines, NULL)) + if (!addr2line(dso_name, addr, &file, &line, dso, + unwind_inlines, NULL, sym)) goto out; - if (asprintf(&srcline, "%s:%u", - srcline_full_filename ? 
file : basename(file), - line) < 0) { - free(file); + srcline = srcline_from_fileline(file, line); + free(file); + + if (!srcline) goto out; - } dso->a2l_fails = 0; - free(file); return srcline; out: @@ -501,7 +555,74 @@ char *get_srcline(struct dso *dso, u64 addr, struct symbol *sym, return __get_srcline(dso, addr, sym, show_sym, show_addr, false); } -struct inline_node *dso__parse_addr_inlines(struct dso *dso, u64 addr) +struct srcline_node { + u64 addr; + char *srcline; + struct rb_node rb_node; +}; + +void srcline__tree_insert(struct rb_root *tree, u64 addr, char *srcline) +{ + struct rb_node **p = &tree->rb_node; + struct rb_node *parent = NULL; + struct srcline_node *i, *node; + + node = zalloc(sizeof(struct srcline_node)); + if (!node) { + perror("not enough memory for the srcline node"); + return; + } + + node->addr = addr; + node->srcline = srcline; + + while (*p != NULL) { + parent = *p; + i = rb_entry(parent, struct srcline_node, rb_node); + if (addr < i->addr) + p = &(*p)->rb_left; + else + p = &(*p)->rb_right; + } + rb_link_node(&node->rb_node, parent, p); + rb_insert_color(&node->rb_node, tree); +} + +char *srcline__tree_find(struct rb_root *tree, u64 addr) +{ + struct rb_node *n = tree->rb_node; + + while (n) { + struct srcline_node *i = rb_entry(n, struct srcline_node, + rb_node); + + if (addr < i->addr) + n = n->rb_left; + else if (addr > i->addr) + n = n->rb_right; + else + return i->srcline; + } + + return NULL; +} + +void srcline__tree_delete(struct rb_root *tree) +{ + struct srcline_node *pos; + struct rb_node *next = rb_first(tree); + + while (next) { + pos = rb_entry(next, struct srcline_node, rb_node); + next = rb_next(&pos->rb_node); + rb_erase(&pos->rb_node, tree); + free_srcline(pos->srcline); + zfree(&pos); + } +} + +struct inline_node *dso__parse_addr_inlines(struct dso *dso, u64 addr, + struct symbol *sym) { const char *dso_name; @@ -509,7 +630,7 @@ struct inline_node *dso__parse_addr_inlines(struct dso *dso, u64 addr) if (dso_name == NULL) return NULL; - return addr2inlines(dso_name, addr, dso); + return addr2inlines(dso_name, addr, dso, sym); } void inline_node__delete(struct inline_node *node) @@ -518,10 +639,63 @@ void inline_node__delete(struct inline_node *node) list_for_each_entry_safe(ilist, tmp, &node->val, list) { list_del_init(&ilist->list); - zfree(&ilist->filename); - zfree(&ilist->funcname); + free_srcline(ilist->srcline); + /* only the inlined symbols are owned by the list */ + if (ilist->symbol && ilist->symbol->inlined) + symbol__delete(ilist->symbol); free(ilist); } free(node); } + +void inlines__tree_insert(struct rb_root *tree, struct inline_node *inlines) +{ + struct rb_node **p = &tree->rb_node; + struct rb_node *parent = NULL; + const u64 addr = inlines->addr; + struct inline_node *i; + + while (*p != NULL) { + parent = *p; + i = rb_entry(parent, struct inline_node, rb_node); + if (addr < i->addr) + p = &(*p)->rb_left; + else + p = &(*p)->rb_right; + } + rb_link_node(&inlines->rb_node, parent, p); + rb_insert_color(&inlines->rb_node, tree); +} + +struct inline_node *inlines__tree_find(struct rb_root *tree, u64 addr) +{ + struct rb_node *n = tree->rb_node; + + while (n) { + struct inline_node *i = rb_entry(n, struct inline_node, + rb_node); + + if (addr < i->addr) + n = n->rb_left; + else if (addr > i->addr) + n = n->rb_right; + else + return i; + } + + return NULL; +} + +void inlines__tree_delete(struct rb_root *tree) +{ + struct inline_node *pos; + struct rb_node *next = rb_first(tree); + + while (next) { + pos = rb_entry(next, struct 
inline_node, rb_node); + next = rb_next(&pos->rb_node); + rb_erase(&pos->rb_node, tree); + inline_node__delete(pos); + } +} diff --git a/tools/perf/util/srcline.h b/tools/perf/util/srcline.h index 8e73f607dfa3..847b7086182c 100644 --- a/tools/perf/util/srcline.h +++ b/tools/perf/util/srcline.h @@ -3,6 +3,7 @@ #define PERF_SRCLINE_H #include <linux/list.h> +#include <linux/rbtree.h> #include <linux/types.h> struct dso; @@ -15,21 +16,38 @@ char *__get_srcline(struct dso *dso, u64 addr, struct symbol *sym, bool show_sym, bool show_addr, bool unwind_inlines); void free_srcline(char *srcline); +/* insert the srcline into the DSO, which will take ownership */ +void srcline__tree_insert(struct rb_root *tree, u64 addr, char *srcline); +/* find previously inserted srcline */ +char *srcline__tree_find(struct rb_root *tree, u64 addr); +/* delete all srclines within the tree */ +void srcline__tree_delete(struct rb_root *tree); + #define SRCLINE_UNKNOWN ((char *) "??:0") struct inline_list { - char *filename; - char *funcname; - unsigned int line_nr; + struct symbol *symbol; + char *srcline; struct list_head list; }; struct inline_node { u64 addr; struct list_head val; + struct rb_node rb_node; }; -struct inline_node *dso__parse_addr_inlines(struct dso *dso, u64 addr); +/* parse inlined frames for the given address */ +struct inline_node *dso__parse_addr_inlines(struct dso *dso, u64 addr, + struct symbol *sym); +/* free resources associated to the inline node list */ void inline_node__delete(struct inline_node *node); +/* insert the inline node list into the DSO, which will take ownership */ +void inlines__tree_insert(struct rb_root *tree, struct inline_node *inlines); +/* find previously inserted inline node list */ +struct inline_node *inlines__tree_find(struct rb_root *tree, u64 addr); +/* delete all nodes within the tree of inline_node s */ +void inlines__tree_delete(struct rb_root *tree); + #endif /* PERF_SRCLINE_H */ diff --git a/tools/perf/util/stat-shadow.c b/tools/perf/util/stat-shadow.c index 37363869c9a1..855e35cbb1dc 100644 --- a/tools/perf/util/stat-shadow.c +++ b/tools/perf/util/stat-shadow.c @@ -7,6 +7,7 @@ #include "rblist.h" #include "evlist.h" #include "expr.h" +#include "metricgroup.h" enum { CTX_BIT_USER = 1 << 0, @@ -56,7 +57,6 @@ struct saved_value { struct rb_node rb_node; struct perf_evsel *evsel; int cpu; - int ctx; struct stats stats; }; @@ -67,8 +67,6 @@ static int saved_value_cmp(struct rb_node *rb_node, const void *entry) rb_node); const struct saved_value *b = entry; - if (a->ctx != b->ctx) - return a->ctx - b->ctx; if (a->cpu != b->cpu) return a->cpu - b->cpu; if (a->evsel == b->evsel) @@ -90,13 +88,12 @@ static struct rb_node *saved_value_new(struct rblist *rblist __maybe_unused, } static struct saved_value *saved_value_lookup(struct perf_evsel *evsel, - int cpu, int ctx, + int cpu, bool create) { struct rb_node *nd; struct saved_value dm = { .cpu = cpu, - .ctx = ctx, .evsel = evsel, }; nd = rblist__find(&runtime_saved_values, &dm); @@ -182,59 +179,60 @@ void perf_stat__reset_shadow_stats(void) * more semantic information such as miss/hit ratios, * instruction rates, etc: */ -void perf_stat__update_shadow_stats(struct perf_evsel *counter, u64 *count, +void perf_stat__update_shadow_stats(struct perf_evsel *counter, u64 count, int cpu) { int ctx = evsel_context(counter); + count *= counter->scale; + if (perf_evsel__match(counter, SOFTWARE, SW_TASK_CLOCK) || perf_evsel__match(counter, SOFTWARE, SW_CPU_CLOCK)) - update_stats(&runtime_nsecs_stats[cpu], count[0]); + 
update_stats(&runtime_nsecs_stats[cpu], count); else if (perf_evsel__match(counter, HARDWARE, HW_CPU_CYCLES)) - update_stats(&runtime_cycles_stats[ctx][cpu], count[0]); + update_stats(&runtime_cycles_stats[ctx][cpu], count); else if (perf_stat_evsel__is(counter, CYCLES_IN_TX)) - update_stats(&runtime_cycles_in_tx_stats[ctx][cpu], count[0]); + update_stats(&runtime_cycles_in_tx_stats[ctx][cpu], count); else if (perf_stat_evsel__is(counter, TRANSACTION_START)) - update_stats(&runtime_transaction_stats[ctx][cpu], count[0]); + update_stats(&runtime_transaction_stats[ctx][cpu], count); else if (perf_stat_evsel__is(counter, ELISION_START)) - update_stats(&runtime_elision_stats[ctx][cpu], count[0]); + update_stats(&runtime_elision_stats[ctx][cpu], count); else if (perf_stat_evsel__is(counter, TOPDOWN_TOTAL_SLOTS)) - update_stats(&runtime_topdown_total_slots[ctx][cpu], count[0]); + update_stats(&runtime_topdown_total_slots[ctx][cpu], count); else if (perf_stat_evsel__is(counter, TOPDOWN_SLOTS_ISSUED)) - update_stats(&runtime_topdown_slots_issued[ctx][cpu], count[0]); + update_stats(&runtime_topdown_slots_issued[ctx][cpu], count); else if (perf_stat_evsel__is(counter, TOPDOWN_SLOTS_RETIRED)) - update_stats(&runtime_topdown_slots_retired[ctx][cpu], count[0]); + update_stats(&runtime_topdown_slots_retired[ctx][cpu], count); else if (perf_stat_evsel__is(counter, TOPDOWN_FETCH_BUBBLES)) - update_stats(&runtime_topdown_fetch_bubbles[ctx][cpu],count[0]); + update_stats(&runtime_topdown_fetch_bubbles[ctx][cpu], count); else if (perf_stat_evsel__is(counter, TOPDOWN_RECOVERY_BUBBLES)) - update_stats(&runtime_topdown_recovery_bubbles[ctx][cpu], count[0]); + update_stats(&runtime_topdown_recovery_bubbles[ctx][cpu], count); else if (perf_evsel__match(counter, HARDWARE, HW_STALLED_CYCLES_FRONTEND)) - update_stats(&runtime_stalled_cycles_front_stats[ctx][cpu], count[0]); + update_stats(&runtime_stalled_cycles_front_stats[ctx][cpu], count); else if (perf_evsel__match(counter, HARDWARE, HW_STALLED_CYCLES_BACKEND)) - update_stats(&runtime_stalled_cycles_back_stats[ctx][cpu], count[0]); + update_stats(&runtime_stalled_cycles_back_stats[ctx][cpu], count); else if (perf_evsel__match(counter, HARDWARE, HW_BRANCH_INSTRUCTIONS)) - update_stats(&runtime_branches_stats[ctx][cpu], count[0]); + update_stats(&runtime_branches_stats[ctx][cpu], count); else if (perf_evsel__match(counter, HARDWARE, HW_CACHE_REFERENCES)) - update_stats(&runtime_cacherefs_stats[ctx][cpu], count[0]); + update_stats(&runtime_cacherefs_stats[ctx][cpu], count); else if (perf_evsel__match(counter, HW_CACHE, HW_CACHE_L1D)) - update_stats(&runtime_l1_dcache_stats[ctx][cpu], count[0]); + update_stats(&runtime_l1_dcache_stats[ctx][cpu], count); else if (perf_evsel__match(counter, HW_CACHE, HW_CACHE_L1I)) - update_stats(&runtime_ll_cache_stats[ctx][cpu], count[0]); + update_stats(&runtime_ll_cache_stats[ctx][cpu], count); else if (perf_evsel__match(counter, HW_CACHE, HW_CACHE_LL)) - update_stats(&runtime_ll_cache_stats[ctx][cpu], count[0]); + update_stats(&runtime_ll_cache_stats[ctx][cpu], count); else if (perf_evsel__match(counter, HW_CACHE, HW_CACHE_DTLB)) - update_stats(&runtime_dtlb_cache_stats[ctx][cpu], count[0]); + update_stats(&runtime_dtlb_cache_stats[ctx][cpu], count); else if (perf_evsel__match(counter, HW_CACHE, HW_CACHE_ITLB)) - update_stats(&runtime_itlb_cache_stats[ctx][cpu], count[0]); + update_stats(&runtime_itlb_cache_stats[ctx][cpu], count); else if (perf_stat_evsel__is(counter, SMI_NUM)) - update_stats(&runtime_smi_num_stats[ctx][cpu], 
count[0]); + update_stats(&runtime_smi_num_stats[ctx][cpu], count); else if (perf_stat_evsel__is(counter, APERF)) - update_stats(&runtime_aperf_stats[ctx][cpu], count[0]); + update_stats(&runtime_aperf_stats[ctx][cpu], count); if (counter->collect_stat) { - struct saved_value *v = saved_value_lookup(counter, cpu, ctx, - true); - update_stats(&v->stats, count[0]); + struct saved_value *v = saved_value_lookup(counter, cpu, true); + update_stats(&v->stats, count); } } @@ -628,15 +626,68 @@ static void print_smi_cost(int cpu, struct perf_evsel *evsel, out->print_metric(out->ctx, NULL, "%4.0f", "SMI#", smi_num); } +static void generic_metric(const char *metric_expr, + struct perf_evsel **metric_events, + char *name, + const char *metric_name, + double avg, + int cpu, + struct perf_stat_output_ctx *out) +{ + print_metric_t print_metric = out->print_metric; + struct parse_ctx pctx; + double ratio; + int i; + void *ctxp = out->ctx; + + expr__ctx_init(&pctx); + expr__add_id(&pctx, name, avg); + for (i = 0; metric_events[i]; i++) { + struct saved_value *v; + struct stats *stats; + double scale; + + if (!strcmp(metric_events[i]->name, "duration_time")) { + stats = &walltime_nsecs_stats; + scale = 1e-9; + } else { + v = saved_value_lookup(metric_events[i], cpu, false); + if (!v) + break; + stats = &v->stats; + scale = 1.0; + } + expr__add_id(&pctx, metric_events[i]->name, avg_stats(stats)*scale); + } + if (!metric_events[i]) { + const char *p = metric_expr; + + if (expr__parse(&ratio, &pctx, &p) == 0) + print_metric(ctxp, NULL, "%8.1f", + metric_name ? + metric_name : + out->force_header ? name : "", + ratio); + else + print_metric(ctxp, NULL, NULL, + out->force_header ? + (metric_name ? metric_name : name) : "", 0); + } else + print_metric(ctxp, NULL, NULL, "", 0); +} + void perf_stat__print_shadow_stats(struct perf_evsel *evsel, double avg, int cpu, - struct perf_stat_output_ctx *out) + struct perf_stat_output_ctx *out, + struct rblist *metric_events) { void *ctxp = out->ctx; print_metric_t print_metric = out->print_metric; double total, ratio = 0.0, total2; const char *color = NULL; int ctx = evsel_context(evsel); + struct metric_event *me; + int num = 1; if (perf_evsel__match(evsel, HARDWARE, HW_INSTRUCTIONS)) { total = avg_stats(&runtime_cycles_stats[ctx][cpu]); @@ -820,33 +871,8 @@ void perf_stat__print_shadow_stats(struct perf_evsel *evsel, else print_metric(ctxp, NULL, NULL, name, 0); } else if (evsel->metric_expr) { - struct parse_ctx pctx; - int i; - - expr__ctx_init(&pctx); - expr__add_id(&pctx, evsel->name, avg); - for (i = 0; evsel->metric_events[i]; i++) { - struct saved_value *v; - - v = saved_value_lookup(evsel->metric_events[i], cpu, ctx, false); - if (!v) - break; - expr__add_id(&pctx, evsel->metric_events[i]->name, - avg_stats(&v->stats)); - } - if (!evsel->metric_events[i]) { - const char *p = evsel->metric_expr; - - if (expr__parse(&ratio, &pctx, &p) == 0) - print_metric(ctxp, NULL, "%8.1f", - evsel->metric_name ? - evsel->metric_name : - out->force_header ? 
evsel->name : "", - ratio); - else - print_metric(ctxp, NULL, NULL, "", 0); - } else - print_metric(ctxp, NULL, NULL, "", 0); + generic_metric(evsel->metric_expr, evsel->metric_events, evsel->name, + evsel->metric_name, avg, cpu, out); } else if (runtime_nsecs_stats[cpu].n != 0) { char unit = 'M'; char unit_buf[10]; @@ -864,6 +890,20 @@ void perf_stat__print_shadow_stats(struct perf_evsel *evsel, } else if (perf_stat_evsel__is(evsel, SMI_NUM)) { print_smi_cost(cpu, evsel, out); } else { - print_metric(ctxp, NULL, NULL, NULL, 0); + num = 0; } + + if ((me = metricgroup__lookup(metric_events, evsel, false)) != NULL) { + struct metric_expr *mexp; + + list_for_each_entry (mexp, &me->head, nd) { + if (num++ > 0) + out->new_line(ctxp); + generic_metric(mexp->metric_expr, mexp->metric_events, + evsel->name, mexp->metric_name, + avg, cpu, out); + } + } + if (num == 0) + print_metric(ctxp, NULL, NULL, NULL, 0); } diff --git a/tools/perf/util/stat.c b/tools/perf/util/stat.c index c9bae5fb8b47..151e9efd7286 100644 --- a/tools/perf/util/stat.c +++ b/tools/perf/util/stat.c @@ -70,7 +70,7 @@ double rel_stddev_stats(double stddev, double avg) bool __perf_evsel_stat__is(struct perf_evsel *evsel, enum perf_stat_evsel_id id) { - struct perf_stat_evsel *ps = evsel->priv; + struct perf_stat_evsel *ps = evsel->stats; return ps->id == id; } @@ -94,7 +94,7 @@ static const char *id_str[PERF_STAT_EVSEL_ID__MAX] = { void perf_stat_evsel_id_init(struct perf_evsel *evsel) { - struct perf_stat_evsel *ps = evsel->priv; + struct perf_stat_evsel *ps = evsel->stats; int i; /* ps->id is 0 hence PERF_STAT_EVSEL_ID__NONE by default */ @@ -110,7 +110,7 @@ void perf_stat_evsel_id_init(struct perf_evsel *evsel) static void perf_evsel__reset_stat_priv(struct perf_evsel *evsel) { int i; - struct perf_stat_evsel *ps = evsel->priv; + struct perf_stat_evsel *ps = evsel->stats; for (i = 0; i < 3; i++) init_stats(&ps->res_stats[i]); @@ -120,8 +120,8 @@ static void perf_evsel__reset_stat_priv(struct perf_evsel *evsel) static int perf_evsel__alloc_stat_priv(struct perf_evsel *evsel) { - evsel->priv = zalloc(sizeof(struct perf_stat_evsel)); - if (evsel->priv == NULL) + evsel->stats = zalloc(sizeof(struct perf_stat_evsel)); + if (evsel->stats == NULL) return -ENOMEM; perf_evsel__reset_stat_priv(evsel); return 0; @@ -129,11 +129,11 @@ static int perf_evsel__alloc_stat_priv(struct perf_evsel *evsel) static void perf_evsel__free_stat_priv(struct perf_evsel *evsel) { - struct perf_stat_evsel *ps = evsel->priv; + struct perf_stat_evsel *ps = evsel->stats; if (ps) free(ps->group_data); - zfree(&evsel->priv); + zfree(&evsel->stats); } static int perf_evsel__alloc_prev_raw_counts(struct perf_evsel *evsel, @@ -278,7 +278,9 @@ process_counter_values(struct perf_stat_config *config, struct perf_evsel *evsel perf_evsel__compute_deltas(evsel, cpu, thread, count); perf_counts_values__scale(count, config->scale, NULL); if (config->aggr_mode == AGGR_NONE) - perf_stat__update_shadow_stats(evsel, count->values, cpu); + perf_stat__update_shadow_stats(evsel, count->val, cpu); + if (config->aggr_mode == AGGR_THREAD) + perf_stat__update_shadow_stats(evsel, count->val, 0); break; case AGGR_GLOBAL: aggr->val += count->val; @@ -319,9 +321,8 @@ int perf_stat_process_counter(struct perf_stat_config *config, struct perf_evsel *counter) { struct perf_counts_values *aggr = &counter->counts->aggr; - struct perf_stat_evsel *ps = counter->priv; + struct perf_stat_evsel *ps = counter->stats; u64 *count = counter->counts->aggr.values; - u64 val; int i, ret; aggr->val = 
aggr->ena = aggr->run = 0; @@ -361,8 +362,7 @@ int perf_stat_process_counter(struct perf_stat_config *config, /* * Save the full runtime - to allow normalization during printout: */ - val = counter->scale * *count; - perf_stat__update_shadow_stats(counter, &val, 0); + perf_stat__update_shadow_stats(counter, *count, 0); return 0; } diff --git a/tools/perf/util/stat.h b/tools/perf/util/stat.h index 96326b1f9443..eefca5c981fd 100644 --- a/tools/perf/util/stat.h +++ b/tools/perf/util/stat.h @@ -83,7 +83,7 @@ typedef void (*new_line_t )(void *ctx); void perf_stat__init_shadow_stats(void); void perf_stat__reset_shadow_stats(void); -void perf_stat__update_shadow_stats(struct perf_evsel *counter, u64 *count, +void perf_stat__update_shadow_stats(struct perf_evsel *counter, u64 count, int cpu); struct perf_stat_output_ctx { void *ctx; @@ -92,9 +92,11 @@ struct perf_stat_output_ctx { bool force_header; }; +struct rblist; void perf_stat__print_shadow_stats(struct perf_evsel *evsel, double avg, int cpu, - struct perf_stat_output_ctx *out); + struct perf_stat_output_ctx *out, + struct rblist *metric_events); void perf_stat__collect_metric_expr(struct perf_evlist *); int perf_evlist__alloc_stats(struct perf_evlist *evlist, bool alloc_raw); diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c index 6492ef38b090..1b67a8639dfe 100644 --- a/tools/perf/util/symbol.c +++ b/tools/perf/util/symbol.c @@ -46,6 +46,7 @@ struct symbol_conf symbol_conf = { .show_hist_headers = true, .symfs = "", .event_group = true, + .inline_name = true, }; static enum dso_binary_type binary_type_symtab[] = { @@ -227,7 +228,7 @@ void __map_groups__fixup_end(struct map_groups *mg, enum map_type type) struct maps *maps = &mg->maps[type]; struct map *next, *curr; - pthread_rwlock_wrlock(&maps->lock); + down_write(&maps->lock); curr = maps__first(maps); if (curr == NULL) @@ -247,7 +248,7 @@ void __map_groups__fixup_end(struct map_groups *mg, enum map_type type) curr->end = ~0ULL; out_unlock: - pthread_rwlock_unlock(&maps->lock); + up_write(&maps->lock); } struct symbol *symbol__new(u64 start, u64 len, u8 binding, const char *name) @@ -1672,7 +1673,7 @@ struct map *map_groups__find_by_name(struct map_groups *mg, struct maps *maps = &mg->maps[type]; struct map *map; - pthread_rwlock_rdlock(&maps->lock); + down_read(&maps->lock); for (map = maps__first(maps); map; map = map__next(map)) { if (map->dso && strcmp(map->dso->short_name, name) == 0) @@ -1682,7 +1683,7 @@ struct map *map_groups__find_by_name(struct map_groups *mg, map = NULL; out_unlock: - pthread_rwlock_unlock(&maps->lock); + up_read(&maps->lock); return map; } diff --git a/tools/perf/util/symbol.h b/tools/perf/util/symbol.h index 6352022593c6..a4f0075b4e5c 100644 --- a/tools/perf/util/symbol.h +++ b/tools/perf/util/symbol.h @@ -60,6 +60,7 @@ struct symbol { u8 binding; u8 idle:1; u8 ignore:1; + u8 inlined:1; u8 arch_sym; char name[0]; }; @@ -209,6 +210,7 @@ struct addr_location { struct thread *thread; struct map *map; struct symbol *sym; + const char *srcline; u64 addr; char level; u8 filtered; diff --git a/tools/perf/util/thread.c b/tools/perf/util/thread.c index 1dbcd3c8dee0..68b65b10579b 100644 --- a/tools/perf/util/thread.c +++ b/tools/perf/util/thread.c @@ -46,6 +46,8 @@ struct thread *thread__new(pid_t pid, pid_t tid) thread->cpu = -1; INIT_LIST_HEAD(&thread->namespaces_list); INIT_LIST_HEAD(&thread->comm_list); + init_rwsem(&thread->namespaces_lock); + init_rwsem(&thread->comm_lock); comm_str = malloc(32); if (!comm_str) @@ -84,18 +86,26 @@ void 
thread__delete(struct thread *thread) map_groups__put(thread->mg); thread->mg = NULL; } + down_write(&thread->namespaces_lock); list_for_each_entry_safe(namespaces, tmp_namespaces, &thread->namespaces_list, list) { list_del(&namespaces->list); namespaces__free(namespaces); } + up_write(&thread->namespaces_lock); + + down_write(&thread->comm_lock); list_for_each_entry_safe(comm, tmp_comm, &thread->comm_list, list) { list_del(&comm->list); comm__free(comm); } + up_write(&thread->comm_lock); + unwind__finish_access(thread); nsinfo__zput(thread->nsinfo); + exit_rwsem(&thread->namespaces_lock); + exit_rwsem(&thread->comm_lock); free(thread); } @@ -126,8 +136,8 @@ struct namespaces *thread__namespaces(const struct thread *thread) return list_first_entry(&thread->namespaces_list, struct namespaces, list); } -int thread__set_namespaces(struct thread *thread, u64 timestamp, - struct namespaces_event *event) +static int __thread__set_namespaces(struct thread *thread, u64 timestamp, + struct namespaces_event *event) { struct namespaces *new, *curr = thread__namespaces(thread); @@ -150,6 +160,17 @@ int thread__set_namespaces(struct thread *thread, u64 timestamp, return 0; } +int thread__set_namespaces(struct thread *thread, u64 timestamp, + struct namespaces_event *event) +{ + int ret; + + down_write(&thread->namespaces_lock); + ret = __thread__set_namespaces(thread, timestamp, event); + up_write(&thread->namespaces_lock); + return ret; +} + struct comm *thread__comm(const struct thread *thread) { if (list_empty(&thread->comm_list)) @@ -171,8 +192,8 @@ struct comm *thread__exec_comm(const struct thread *thread) return last; } -int __thread__set_comm(struct thread *thread, const char *str, u64 timestamp, - bool exec) +static int ____thread__set_comm(struct thread *thread, const char *str, + u64 timestamp, bool exec) { struct comm *new, *curr = thread__comm(thread); @@ -196,6 +217,17 @@ int __thread__set_comm(struct thread *thread, const char *str, u64 timestamp, return 0; } +int __thread__set_comm(struct thread *thread, const char *str, u64 timestamp, + bool exec) +{ + int ret; + + down_write(&thread->comm_lock); + ret = ____thread__set_comm(thread, str, timestamp, exec); + up_write(&thread->comm_lock); + return ret; +} + int thread__set_comm_from_proc(struct thread *thread) { char path[64]; @@ -213,7 +245,7 @@ int thread__set_comm_from_proc(struct thread *thread) return err; } -const char *thread__comm_str(const struct thread *thread) +static const char *__thread__comm_str(const struct thread *thread) { const struct comm *comm = thread__comm(thread); @@ -223,6 +255,17 @@ const char *thread__comm_str(const struct thread *thread) return comm__str(comm); } +const char *thread__comm_str(const struct thread *thread) +{ + const char *str; + + down_read((struct rw_semaphore *)&thread->comm_lock); + str = __thread__comm_str(thread); + up_read((struct rw_semaphore *)&thread->comm_lock); + + return str; +} + /* CHECKME: it should probably better return the max comm len from its comm list */ int thread__comm_len(struct thread *thread) { @@ -265,7 +308,7 @@ static int __thread__prepare_access(struct thread *thread) struct maps *maps = &thread->mg->maps[i]; struct map *map; - pthread_rwlock_rdlock(&maps->lock); + down_read(&maps->lock); for (map = maps__first(maps); map; map = map__next(map)) { err = unwind__prepare_access(thread, map, &initialized); @@ -273,7 +316,7 @@ static int __thread__prepare_access(struct thread *thread) break; } - pthread_rwlock_unlock(&maps->lock); + up_read(&maps->lock); } return err; 
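[Editorial sketch] The thread.c hunks above (like the symbol.c and vdso.c hunks elsewhere in this series) replace direct pthread_rwlock_* calls with the new down_read()/up_read()/down_write()/up_write() wrappers from tools/perf/util/rwsem.c, which short-circuit to 0 while perf_singlethreaded is set. A minimal standalone sketch of that pattern follows; the names my_rwsem, my_down_read and the singlethreaded flag are illustrative stand-ins, not the perf sources themselves:

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

/* illustrative stand-in for perf's struct rw_semaphore wrapper */
struct my_rwsem {
	pthread_rwlock_t lock;
};

/* mirrors perf_singlethreaded: tools start single-threaded */
static bool singlethreaded = true;

static int my_down_read(struct my_rwsem *sem)
{
	return singlethreaded ? 0 : pthread_rwlock_rdlock(&sem->lock);
}

static int my_up_read(struct my_rwsem *sem)
{
	return singlethreaded ? 0 : pthread_rwlock_unlock(&sem->lock);
}

int main(void)
{
	struct my_rwsem sem;

	pthread_rwlock_init(&sem.lock, NULL);

	/* single-threaded phase: the "lock" calls return immediately */
	my_down_read(&sem);
	printf("read section, no real locking taken\n");
	my_up_read(&sem);

	/* a tool that spawns threads would clear the flag first */
	singlethreaded = false;
	my_down_read(&sem);
	printf("read section, pthread rwlock actually held\n");
	my_up_read(&sem);

	pthread_rwlock_destroy(&sem.lock);
	return 0;
}

The wrappers keep the int return type of the pthread calls they wrap, so callers can still check for errors once real locking is in effect.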
diff --git a/tools/perf/util/thread.h b/tools/perf/util/thread.h index fdcea7c0cac1..40cfa36c022a 100644 --- a/tools/perf/util/thread.h +++ b/tools/perf/util/thread.h @@ -10,6 +10,7 @@ #include "symbol.h" #include <strlist.h> #include <intlist.h> +#include "rwsem.h" struct thread_stack; struct unwind_libunwind_ops; @@ -30,7 +31,9 @@ struct thread { int comm_len; bool dead; /* if set thread has exited */ struct list_head namespaces_list; + struct rw_semaphore namespaces_lock; struct list_head comm_list; + struct rw_semaphore comm_lock; u64 db_id; void *priv; diff --git a/tools/perf/util/top.h b/tools/perf/util/top.h index 506150a75bd0..9892323cdd7c 100644 --- a/tools/perf/util/top.h +++ b/tools/perf/util/top.h @@ -38,6 +38,7 @@ struct perf_top { int sym_pcnt_filter; const char *sym_filter; float min_percent; + unsigned int nr_threads_synthesize; }; #define CONSOLE_CLEAR "[H[2J" diff --git a/tools/perf/util/trace-event-info.c b/tools/perf/util/trace-event-info.c index e7d60d05596d..d7f2113462fb 100644 --- a/tools/perf/util/trace-event-info.c +++ b/tools/perf/util/trace-event-info.c @@ -28,7 +28,6 @@ #include <sys/types.h> #include <sys/stat.h> #include <sys/wait.h> -#include <pthread.h> #include <fcntl.h> #include <unistd.h> #include <errno.h> diff --git a/tools/perf/util/trace-event-read.c b/tools/perf/util/trace-event-read.c index 8a9a677f7576..40b425949aa3 100644 --- a/tools/perf/util/trace-event-read.c +++ b/tools/perf/util/trace-event-read.c @@ -27,7 +27,6 @@ #include <sys/stat.h> #include <sys/wait.h> #include <sys/mman.h> -#include <pthread.h> #include <fcntl.h> #include <unistd.h> #include <errno.h> diff --git a/tools/perf/util/util.c b/tools/perf/util/util.c index 3687b720327a..a789f952b3e9 100644 --- a/tools/perf/util/util.c +++ b/tools/perf/util/util.c @@ -7,6 +7,7 @@ #include <sys/stat.h> #include <sys/utsname.h> #include <dirent.h> +#include <fcntl.h> #include <inttypes.h> #include <signal.h> #include <stdio.h> @@ -23,6 +24,19 @@ /* * XXX We need to find a better place for these things... 
*/ + +bool perf_singlethreaded = true; + +void perf_set_singlethreaded(void) +{ + perf_singlethreaded = true; +} + +void perf_set_multithreaded(void) +{ + perf_singlethreaded = false; +} + unsigned int page_size; int cacheline_size; @@ -175,7 +189,7 @@ out: return err; } -int copyfile_offset(int ifd, loff_t off_in, int ofd, loff_t off_out, u64 size) +static int copyfile_offset(int ifd, loff_t off_in, int ofd, loff_t off_out, u64 size) { void *ptr; loff_t pgoff; diff --git a/tools/perf/util/util.h b/tools/perf/util/util.h index b52765e6d7b4..01434509c2e9 100644 --- a/tools/perf/util/util.h +++ b/tools/perf/util/util.h @@ -6,7 +6,6 @@ /* glibc 2.20 deprecates _BSD_SOURCE in favour of _DEFAULT_SOURCE */ #define _DEFAULT_SOURCE 1 -#include <fcntl.h> #include <stdbool.h> #include <stddef.h> #include <stdlib.h> @@ -36,7 +35,6 @@ bool lsdir_no_dot_filter(const char *name, struct dirent *d); int copyfile(const char *from, const char *to); int copyfile_mode(const char *from, const char *to, mode_t mode); int copyfile_ns(const char *from, const char *to, struct nsinfo *nsi); -int copyfile_offset(int fromfd, loff_t from_ofs, int tofd, loff_t to_ofs, u64 size); ssize_t readn(int fd, void *buf, size_t n); ssize_t writen(int fd, const void *buf, size_t n); @@ -65,4 +63,9 @@ int sched_getcpu(void); int setns(int fd, int nstype); #endif +extern bool perf_singlethreaded; + +void perf_set_singlethreaded(void); +void perf_set_multithreaded(void); + #endif /* GIT_COMPAT_UTIL_H */ diff --git a/tools/perf/util/vdso.c b/tools/perf/util/vdso.c index cffcda448c28..0acb1ec0e2f0 100644 --- a/tools/perf/util/vdso.c +++ b/tools/perf/util/vdso.c @@ -320,7 +320,7 @@ struct dso *machine__findnew_vdso(struct machine *machine, struct vdso_info *vdso_info; struct dso *dso = NULL; - pthread_rwlock_wrlock(&machine->dsos.lock); + down_write(&machine->dsos.lock); if (!machine->vdso_info) machine->vdso_info = vdso_info__new(); @@ -348,7 +348,7 @@ struct dso *machine__findnew_vdso(struct machine *machine, out_unlock: dso__get(dso); - pthread_rwlock_unlock(&machine->dsos.lock); + up_write(&machine->dsos.lock); return dso; } diff --git a/tools/perf/util/zlib.c b/tools/perf/util/zlib.c index 008fe68d7b76..a725b958cf31 100644 --- a/tools/perf/util/zlib.c +++ b/tools/perf/util/zlib.c @@ -1,4 +1,5 @@ // SPDX-License-Identifier: GPL-2.0 +#include <fcntl.h> #include <stdio.h> #include <unistd.h> #include <sys/stat.h> |
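[Editorial sketch] The perf_mmap__read_head()/perf_mmap__write_tail() helpers in the mmap.h hunk at the top of this section follow the usual perf ring-buffer discipline: load data_head and order that load before reading the records (ACCESS_ONCE plus rmb()), consume the records, then publish data_tail only after all reads are done (mb() before the store). A rough user-space sketch of the same ordering, expressed with C11 acquire/release atomics instead of the kernel barrier macros; the struct and field names below are illustrative, not the real perf_event_mmap_page layout:

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* illustrative stand-in for the head/tail control page */
struct ring_ctrl {
	_Atomic uint64_t data_head;	/* advanced by the producer */
	_Atomic uint64_t data_tail;	/* advanced by the consumer */
};

static unsigned char data[4096];	/* stand-in for the mmap'ed data area */

/* consumer side: plays the role of perf_mmap__read_head()'s load + rmb() */
static uint64_t read_head(struct ring_ctrl *rc)
{
	return atomic_load_explicit(&rc->data_head, memory_order_acquire);
}

/* consumer side: plays the role of perf_mmap__write_tail()'s mb() + store */
static void write_tail(struct ring_ctrl *rc, uint64_t tail)
{
	atomic_store_explicit(&rc->data_tail, tail, memory_order_release);
}

int main(void)
{
	struct ring_ctrl rc = { 0 };
	uint64_t head, tail = 0;

	/* pretend the producer wrote 16 bytes and published the new head */
	memset(data, 0xab, 16);
	atomic_store_explicit(&rc.data_head, 16, memory_order_release);

	head = read_head(&rc);
	while (tail < head) {
		/* consume data[tail & (sizeof(data) - 1)] here */
		tail++;
	}
	write_tail(&rc, tail);	/* tell the producer the space is reusable */

	printf("consumed %llu bytes\n", (unsigned long long)head);
	return 0;
}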
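[Editorial sketch] The bkw_mmap_state enum and its state-machine comment at the top of this section describe how backward (overwrite) ring buffers move between states: NOTREADY to RUNNING on setup (0), RUNNING to DATA_PENDING when the buffers are paused for reading (1), DATA_PENDING to EMPTY once the data has been collected (2), and EMPTY back to RUNNING when recording resumes (3), with the reverse hops forbidden. A small self-contained validator for those transitions, written as a toy example rather than perf code:

#include <stdbool.h>
#include <stdio.h>

enum bkw_mmap_state {
	BKW_MMAP_NOTREADY,
	BKW_MMAP_RUNNING,
	BKW_MMAP_DATA_PENDING,
	BKW_MMAP_EMPTY,
};

/* returns true when the transition matches the diagram in mmap.h */
static bool bkw_transition_ok(enum bkw_mmap_state from, enum bkw_mmap_state to)
{
	switch (from) {
	case BKW_MMAP_NOTREADY:
		return to == BKW_MMAP_RUNNING;		/* (0) setup ring buffer   */
	case BKW_MMAP_RUNNING:
		return to == BKW_MMAP_DATA_PENDING;	/* (1) pause for reading   */
	case BKW_MMAP_DATA_PENDING:
		return to == BKW_MMAP_EMPTY;		/* (2) read from buffers   */
	case BKW_MMAP_EMPTY:
		return to == BKW_MMAP_RUNNING;		/* (3) resume recording    */
	}
	return false;
}

int main(void)
{
	printf("RUNNING -> DATA_PENDING: %s\n",
	       bkw_transition_ok(BKW_MMAP_RUNNING, BKW_MMAP_DATA_PENDING) ?
	       "ok" : "forbidden");
	printf("DATA_PENDING -> RUNNING: %s\n",
	       bkw_transition_ok(BKW_MMAP_DATA_PENDING, BKW_MMAP_RUNNING) ?
	       "ok" : "forbidden");
	return 0;
}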