Here is a list of all files with brief descriptions:
▼ pipeline
    combine_batch_parquets.py | Combines multiple per-method parquet log files into a single file
    gen_perf_parquet_logs.py | Generates perf benchmarking parquet logs from command-line arguments
    insert_to_clickhouse.py | Inserts filtered benchmarking logs into a ClickHouse database
    parse_perf_metrics.py | Generates CLI flags from perf CSV output (perf stat -x,); see the sketch after this list
    schema.py | Defines the canonical schema used across ETL, validation, and ClickHouse ingestion
    schema_to_clickhouse.py | Converts the Polars schema to ClickHouse-compatible SQL; see the sketch after this list
    utils.py | Shared utilities for safe casting, schema enforcement, and arithmetic fallback logic; see the sketch after this list
▼ scripts
    config.py | Loads environment-based configuration for pipeline and services
    run_perf.sh | Dockerized perf benchmarker for Monte Carlo simulation engine
    setup.py | CLI utility to initialize ClickHouse + Grafana for benchmark pipeline
benchmark.hpp | Wall-clock + cycle-accurate benchmarking for performance profiling
main.cpp | CLI runner for benchmarking Monte Carlo simulation methods
montecarlo.hpp | High-performance Monte Carlo π estimation engine: SIMD-accelerated, memory-optimized
pool.hpp | Fixed-size aligned pool allocator for high-performance simulations
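
A few minimal Python sketches of the pipeline pieces above follow. They are illustrative only: function names, column names, and CLI flag names are assumptions, not the actual interfaces in this repository.

For parse_perf_metrics.py: `perf stat -x,` prints one counter per line, with the value, unit, and event name in the first three comma-separated fields. A sketch of that parsing and of turning the result into CLI flags (the flag names here are hypothetical):

```python
import csv
import sys

def parse_perf_csv(path):
    """Parse `perf stat -x,` output into {event_name: value}.

    For a plain `perf stat -x,` run (no -I/-A/-r) the field order is:
    value, unit, event, counter run time, percent of time running, ...
    """
    metrics = {}
    with open(path, newline="") as fh:
        for row in csv.reader(fh):
            if len(row) < 3 or row[0].startswith("#"):
                continue  # skip blank lines and the comment lines perf emits
            value, _unit, event = row[0], row[1], row[2]
            if value in ("<not counted>", "<not supported>"):
                continue  # counter was unavailable during this run
            metrics[event] = float(value)
    return metrics

def to_cli_flags(metrics):
    """Render {'cycles': 1.2e9, ...} as hypothetical flags like --cycles 1200000000.0."""
    flags = []
    for event, value in metrics.items():
        flags += [f"--{event.replace(':', '-')}", str(value)]
    return flags

if __name__ == "__main__":
    print(" ".join(to_cli_flags(parse_perf_csv(sys.argv[1]))))
```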
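
For schema.py and schema_to_clickhouse.py: the conversion amounts to mapping each Polars dtype to a ClickHouse column type and rendering a CREATE TABLE statement. A sketch under assumed names (PERF_SCHEMA, its columns, and the dtype mapping are hypothetical stand-ins for the canonical schema defined in schema.py):

```python
import polars as pl

# Hypothetical stand-in for the canonical schema defined in schema.py.
PERF_SCHEMA = {
    "method": pl.Utf8,
    "iterations": pl.UInt64,
    "cycles": pl.UInt64,
    "wall_time_ns": pl.Float64,
}

# Assumed Polars-dtype -> ClickHouse-type mapping.
PL_TO_CH = {
    pl.Utf8: "String",
    pl.UInt64: "UInt64",
    pl.Int64: "Int64",
    pl.Float64: "Float64",
    pl.Boolean: "UInt8",
}

def schema_to_create_table(schema, table, order_by):
    """Render a Polars schema as a ClickHouse CREATE TABLE statement."""
    cols = ",\n    ".join(f"`{name}` {PL_TO_CH[dtype]}" for name, dtype in schema.items())
    return (
        f"CREATE TABLE IF NOT EXISTS {table} (\n    {cols}\n)\n"
        f"ENGINE = MergeTree ORDER BY ({', '.join(order_by)})"
    )

print(schema_to_create_table(PERF_SCHEMA, "perf_logs", ["method"]))
```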
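
For utils.py: the "safe casting" fallback can be expressed with Polars' non-strict casts, which null out values that cannot be represented in the target dtype instead of raising. A sketch (enforce_schema and the sample columns are assumed names, not the repository's actual API):

```python
import polars as pl

def enforce_schema(df: pl.DataFrame, schema: dict) -> pl.DataFrame:
    """Cast every column to its canonical dtype, nulling anything that does not fit.

    strict=False makes Polars emit null instead of raising on bad values;
    columns missing from the frame are added as all-null so downstream
    ClickHouse inserts always see the full column set, in schema order.
    """
    exprs = []
    for name, dtype in schema.items():
        if name in df.columns:
            exprs.append(pl.col(name).cast(dtype, strict=False))
        else:
            exprs.append(pl.lit(None, dtype=dtype).alias(name))
    return df.with_columns(exprs).select(list(schema))

# Usage: a malformed numeric field becomes null instead of raising an exception.
raw = pl.DataFrame({"method": ["simd"], "cycles": ["not_a_number"]})
print(enforce_schema(raw, {"method": pl.Utf8, "cycles": pl.UInt64, "wall_time_ns": pl.Float64}))
```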