Monte Carlo Benchmarking Engine
High-performance SIMD Monte Carlo engine (AVX2/NEON) with custom memory allocators and perf logging.
 
parse_perf_metrics.py File Reference

CLI flag generator from perf CSV output (perf stat -x,)


Namespaces

namespace  pipeline
 
namespace  pipeline.parse_perf_metrics
 

Functions

 pipeline.parse_perf_metrics.debug_print (dict values)
 

Variables

 pipeline.parse_perf_metrics.trials = int(sys.argv[2])
 
 pipeline.parse_perf_metrics.df = pl.read_csv(sys.argv[1], has_header=False)
 
 pipeline.parse_perf_metrics.columns
 
dict pipeline.parse_perf_metrics.field_map
 
dict pipeline.parse_perf_metrics.event_to_key = {v: k for k, v in field_map.items() if v != "NA"}
 
 pipeline.parse_perf_metrics.filtered = df.filter(pl.col("event").is_in(event_to_key.keys()))
 
tuple pipeline.parse_perf_metrics.to_clean
 
dict pipeline.parse_perf_metrics.values = {key: "NA" for key in field_map}
 
 pipeline.parse_perf_metrics.named
 
dict pipeline.parse_perf_metrics.cli_key = event_to_key[row["event"]]
 
list pipeline.parse_perf_metrics.ordered_keys
 

Detailed Description

CLI flag generator from perf CSV output (perf stat -x,)

Parses Linux perf stat logs in CSV format and extracts a fixed set of performance metrics. Outputs these metrics as KEY=value shell assignments for downstream use in pipeline scripts or shell evaluation.

Metrics are mapped from perf event names (e.g., "cycles:u") into canonical CLI keys (e.g., CYCLES, IPC). Unsupported metrics are filled as "NA". Derived values like IPC and misses per trial are computed inline using safe arithmetic fallbacks.
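The mapping and guarded derivation described above can be sketched as follows. This is a minimal illustration, not the script itself; the counter values and the "NA" placeholder for unmapped events are assumptions based on the documented behavior.

```python
# Map canonical CLI keys to perf event names; "NA" marks unsupported metrics.
field_map = {"CYCLES": "cycles:u", "INSTR": "instructions:u", "L2_MISS": "NA"}
event_to_key = {v: k for k, v in field_map.items() if v != "NA"}

# Hypothetical counts as parsed from a perf stat -x, CSV log.
counts = {"cycles:u": 123456789, "instructions:u": 98765432}

# Fill every key with "NA" first, then overwrite the ones perf reported.
values = {key: "NA" for key in field_map}
for event, count in counts.items():
    if event in event_to_key:
        values[event_to_key[event]] = count

# Derived metric (IPC) with a safe fallback when a counter is missing or zero.
cycles = values.get("CYCLES")
instr = values.get("INSTR")
ipc = round(instr / cycles, 2) if isinstance(cycles, int) and cycles else "NA"
```

Keys whose event stays "NA" in field_map (here L2_MISS) never receive a value, matching the note below about L2/L3 metrics.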

Output Format
CYCLES=123456789 INSTR=123456 IPC=1.23 ...
Printed as a single line, space-separated key-value flags, suitable for eval.
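Assembling that single output line is a one-liner; the sketch below uses placeholder values to show the shape:

```python
# Join the collected metrics into one space-separated KEY=value line.
values = {"CYCLES": 123456789, "INSTR": 123456, "IPC": 1.23}
line = " ".join(f"{k}={v}" for k, v in values.items())
print(line)  # CYCLES=123456789 INSTR=123456 IPC=1.23
```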
Example
$ eval $(python3 parse_perf_metrics.py perf_SIMD.csv 1000000)
$ echo $CYCLES
Usage
python3 parse_perf_metrics.py <perf_log.csv> <num_trials>
Arguments
  • <perf_log.csv> — Path to perf CSV file (from perf stat -x,)
  • <num_trials> — Number of simulation trials (used for normalization)
Note
  • L2/L3 metrics are set to "NA" unless enabled manually via raw PMU events.
  • Designed for use via eval in shell pipelines or programmatically via subprocess.
  • Will safely skip unsupported perf fields and calculate derived metrics (e.g., IPC, miss rate).
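For the programmatic route mentioned above, a caller might capture the output line via subprocess and split it back into a dict. The invocation below is hypothetical (paths and trial count are placeholders); only the parsing helper is exercised here:

```python
import shlex
import subprocess  # used by the commented-out invocation below

def parse_flags(line):
    """Split the space-separated KEY=value output line into a dict."""
    return dict(pair.split("=", 1) for pair in shlex.split(line))

# Hypothetical invocation (file names are placeholders):
# out = subprocess.run(
#     ["python3", "parse_perf_metrics.py", "perf_SIMD.csv", "1000000"],
#     capture_output=True, text=True, check=True,
# ).stdout.strip()
out = "CYCLES=123456789 INSTR=123456 IPC=1.23"  # sample output line
metrics = parse_flags(out)
```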

Definition in file parse_perf_metrics.py.