
SharpCompress Performance Benchmarks

This project contains performance benchmarks for SharpCompress using BenchmarkDotNet.

Overview

The benchmarks test all major archive formats supported by SharpCompress (the Archive and Reader read APIs are sketched after the list):

  • Zip: Read (Archive & Reader API) and Write operations
  • Tar: Read (Archive & Reader API) and Write operations, including Tar.GZip
  • Rar: Read operations (Archive & Reader API)
  • 7Zip: Read operations for LZMA and LZMA2 compression
  • GZip: Compression and decompression
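
The Archive API provides random access to entries over a seekable source, while the Reader API streams entries forward-only. A rough sketch of both read paths (sample.zip is a placeholder file name, not one of the benchmark inputs):

// Sketch of the two SharpCompress read APIs; sample.zip is a placeholder.
using System.IO;
using SharpCompress.Archives;
using SharpCompress.Readers;

// Archive API: random access over a seekable source.
using (var archive = ArchiveFactory.Open("sample.zip"))
{
    foreach (var entry in archive.Entries)
    {
        if (entry.IsDirectory)
            continue;
        using var entryStream = entry.OpenEntryStream();
        entryStream.CopyTo(Stream.Null); // decompress and discard
    }
}

// Reader API: forward-only streaming, usable on non-seekable streams.
using (var fileStream = File.OpenRead("sample.zip"))
using (var reader = ReaderFactory.Open(fileStream))
{
    while (reader.MoveToNextEntry())
    {
        if (reader.Entry.IsDirectory)
            continue;
        using var entryStream = reader.OpenEntryStream();
        entryStream.CopyTo(Stream.Null);
    }
}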

Running Benchmarks

Run all benchmarks

dotnet run --project tests/SharpCompress.Performance/SharpCompress.Performance.csproj --configuration Release

Run specific benchmark class

dotnet run --project tests/SharpCompress.Performance/SharpCompress.Performance.csproj --configuration Release -- --filter "*ZipBenchmarks*"

Run with specific job configuration

# Quick run for testing (1 warmup, 1 iteration)
dotnet run --project tests/SharpCompress.Performance/SharpCompress.Performance.csproj --configuration Release -- --job Dry

# Short run (3 warmup, 3 iterations)
dotnet run --project tests/SharpCompress.Performance/SharpCompress.Performance.csproj --configuration Release -- --job Short

# Medium run (more warmup and measurement iterations than Short)
dotnet run --project tests/SharpCompress.Performance/SharpCompress.Performance.csproj --configuration Release -- --job Medium

Export results

# Export to JSON
dotnet run --project tests/SharpCompress.Performance/SharpCompress.Performance.csproj --configuration Release -- --exporters json

# Export to multiple formats
dotnet run --project tests/SharpCompress.Performance/SharpCompress.Performance.csproj --configuration Release -- --exporters json markdown html

List available benchmarks

dotnet run --project tests/SharpCompress.Performance/SharpCompress.Performance.csproj --configuration Release -- --list flat

Baseline Results

The baseline results are stored in baseline-results.md and represent the expected performance characteristics of the library. These results are used in CI to detect significant performance regressions.

Generate Baseline (Automated)

Use the build target to generate baseline results:

dotnet run --project build/build.csproj -- generate-baseline

This will:

  1. Build the performance project
  2. Run all benchmarks
  3. Combine the markdown reports into baseline-results.md
  4. Clean up temporary artifacts

Generate Baseline (Manual)

To manually update the baseline:

  1. Run the benchmarks:
     dotnet run --project tests/SharpCompress.Performance/SharpCompress.Performance.csproj --configuration Release -- --exporters markdown --artifacts baseline-output
  2. Combine the results:
     cat baseline-output/results/*-report-github.md > baseline-results.md
  3. Review the changes and commit if appropriate

JetBrains Profiler Integration

The performance project supports the JetBrains profilers for detailed CPU and memory profiling during local development.

Prerequisites

Install the JetBrains profiler tools from https://www.jetbrains.com/profiler/

Run with CPU Profiling

dotnet run --project tests/SharpCompress.Performance/SharpCompress.Performance.csproj --configuration Release -- --profile --type cpu --output ./my-cpu-snapshots

Run with Memory Profiling

dotnet run --project tests/SharpCompress.Performance/SharpCompress.Performance.csproj --configuration Release -- --profile --type memory --output ./my-memory-snapshots

Profiler Options

  • --profile: Enable profiler mode
  • --type cpu|memory: Choose profiling type (default: cpu)
  • --output <path>: Specify snapshot output directory (default: ./profiler-snapshots)
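
For example, passing only --profile falls back to the defaults listed above (CPU profiling, snapshots written to ./profiler-snapshots):

dotnet run --project tests/SharpCompress.Performance/SharpCompress.Performance.csproj --configuration Release -- --profile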

The profiler runs a sample benchmark and saves snapshots that can be opened in the JetBrains profiler tools for detailed analysis.

CI Integration

The performance benchmarks run automatically in GitHub Actions on:

  • Push to master or release branches
  • Pull requests to master or release branches
  • Manual workflow dispatch

Results are displayed in the GitHub Actions summary and uploaded as artifacts.

Benchmark Configuration

The benchmarks are configured with minimal iterations for CI efficiency:

  • Warmup Count: 1 iteration
  • Iteration Count: 3 iterations
  • Invocation Count: 1
  • Unroll Factor: 1
  • Toolchain: InProcessEmitToolchain (for fast execution)

These settings provide a good balance between speed and accuracy for CI purposes. For more accurate results, use the Short, Medium, or Long job configurations.
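
As a rough sketch, these settings map onto a BenchmarkDotNet configuration like the one below; the project's actual config class may be named and wired differently:

// Hedged sketch of the CI settings listed above, not the project's real config.
using BenchmarkDotNet.Configs;
using BenchmarkDotNet.Jobs;
using BenchmarkDotNet.Toolchains.InProcess.Emit;

public class CiConfig : ManualConfig
{
    public CiConfig()
    {
        AddJob(Job.Default
            .WithWarmupCount(1)       // 1 warmup iteration
            .WithIterationCount(3)    // 3 measured iterations
            .WithInvocationCount(1)   // one invocation per iteration
            .WithUnrollFactor(1)      // must be 1 when InvocationCount is 1
            .WithToolchain(InProcessEmitToolchain.Instance)); // in-process for fast execution
    }
}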

Memory Diagnostics

All benchmarks include memory diagnostics using [MemoryDiagnoser], which provides:

  • Total allocated memory per operation
  • Gen 0/1/2 collection counts

Understanding Results

Key metrics in the benchmark results:

  • Mean: Average execution time
  • Error: Half of 99.9% confidence interval
  • StdDev: Standard deviation
  • Allocated: Total managed memory allocated per operation

Contributing

When adding new benchmarks (see the sketch after these steps):

  1. Create a new class in the Benchmarks/ directory
  2. Inherit from ArchiveBenchmarkBase for archive-related benchmarks
  3. Add [MemoryDiagnoser] attribute to the class
  4. Use [Benchmark(Description = "...")] for each benchmark method
  5. Add [GlobalSetup] for one-time initialization
  6. Update this README if needed
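
A minimal skeleton following these conventions might look like the sketch below; MyFormatBenchmarks, the field, and the archive path are placeholders, and ArchiveBenchmarkBase's members are not shown:

// Hypothetical skeleton for a new benchmark class in Benchmarks/.
using System.IO;
using BenchmarkDotNet.Attributes;

[MemoryDiagnoser]
public class MyFormatBenchmarks : ArchiveBenchmarkBase
{
    private byte[] _archiveBytes = null!;

    [GlobalSetup]
    public void Setup()
    {
        // One-time initialization, e.g. load a test archive into memory.
        _archiveBytes = File.ReadAllBytes("Archives/sample.bin"); // placeholder path
    }

    [Benchmark(Description = "MyFormat: read all entries")]
    public void ReadAllEntries()
    {
        // Exercise the code path under test using _archiveBytes.
    }
}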