Compare commits


3 Commits

7eed908ace (copilot-swe-agent[bot], 2026-01-31 11:04:17 +00:00)
Improve infinite loop detection logic based on code review
- Remove unreachable condition check
- Add proper check for consecutive zero-length streams
- Verify both old and new streams to detect invalid state
Co-authored-by: adamhathcock <527620+adamhathcock@users.noreply.github.com>

8cd667d0c3 (copilot-swe-agent[bot], 2026-01-31 11:02:16 +00:00)
Fix infinite loop in SourceStream.Seek for malformed archives
- Add detection for when SetStream fails during Seek operation
- Throw InvalidOperationException with clear error message instead of looping infinitely
- Add test case Rar_MalformedArchive_NoInfiniteLoop to validate fix
- All 74 RAR archive tests pass
Co-authored-by: adamhathcock <527620+adamhathcock@users.noreply.github.com>

e8f04e22ba (copilot-swe-agent[bot], 2026-01-31 10:56:12 +00:00)
Initial plan
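For context, a hedged sketch of the guard that 8cd667d0c3 describes; all member names below are hypothetical, since the real `SourceStream` internals are not shown in this diff:

```csharp
using System;
using System.Collections.Generic;
using System.IO;

// Hedged sketch only: illustrates the fix described in commit 8cd667d0c3,
// not the actual SourceStream code. Member names here are assumptions.
internal sealed class MultiPartSeekSketch
{
    private readonly IReadOnlyList<Stream> _parts;
    private int _index;

    internal MultiPartSeekSketch(IReadOnlyList<Stream> parts) => _parts = parts;

    // Called while seeking forward past the end of the current part.
    private void AdvancePart()
    {
        if (_index + 1 >= _parts.Count)
        {
            // Before the fix, a malformed archive could spin here forever;
            // the fix detects the failed advance and throws instead.
            throw new InvalidOperationException(
                "SetStream failed during Seek: the archive is malformed or truncated.");
        }
        _index++;
    }
}
```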
278 changed files with 9088 additions and 15533 deletions

.copilot-agent.yml (new file; +7)

@@ -0,0 +1,7 @@
enabled: true
agent:
  name: copilot-coding-agent
  allow:
    - paths: ["src/**/*", "tests/**/*", "README.md", "AGENTS.md"]
      actions: ["create", "modify"]
  require_review_before_merge: true

.github/agents/copilot-agent.yml (vendored; new file; +17)

@@ -0,0 +1,17 @@
enabled: true
agent:
  name: copilot-coding-agent
  allow:
    - paths: ["src/**/*", "tests/**/*", "README.md", "AGENTS.md"]
      actions: ["create", "modify", "delete"]
  require_review_before_merge: true
  required_approvals: 1
  allowed_merge_strategies:
    - squash
    - merge
  auto_merge_on_green: false
  run_workflows: true
notes: |
  - This manifest expresses the policy for the Copilot coding agent in this repository.
  - It does NOT install or authorize the agent; a repository admin must install the Copilot coding agent app and grant the repository the necessary permissions (contents: write, pull_requests: write, checks: write, actions: write/read, issues: write) to allow the agent to act.
  - Keep allow paths narrow and prefer require_review_before_merge during initial rollout.
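For illustration only, the permissions listed in the notes, written as a GitHub Actions-style `permissions` block. This is a sketch; in practice the admin grants these when installing the agent app, not via a workflow file:

```yaml
permissions:
  contents: write
  pull-requests: write
  checks: write
  actions: write   # the notes allow write or read
  issues: write
```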

.github/prompts/plan-async.prompt.md (vendored; new file; +25)

@@ -0,0 +1,25 @@
# Plan: Implement Missing Async Functionality in SharpCompress
SharpCompress has async support for low-level stream operations and Reader/Writer APIs, but critical entry points (Archive.Open, factory methods, initialization) remain synchronous. This plan adds async overloads for all user-facing I/O operations and fixes existing async bugs, enabling full end-to-end async workflows.
## Steps
1. **Add async factory methods** to [ArchiveFactory.cs](src/SharpCompress/Factories/ArchiveFactory.cs), [ReaderFactory.cs](src/SharpCompress/Factories/ReaderFactory.cs), and [WriterFactory.cs](src/SharpCompress/Factories/WriterFactory.cs) with `OpenAsync` and `CreateAsync` overloads accepting `CancellationToken`
2. **Implement async Open methods** on concrete archive types ([ZipArchive.cs](src/SharpCompress/Archives/Zip/ZipArchive.cs), [TarArchive.cs](src/SharpCompress/Archives/Tar/TarArchive.cs), [RarArchive.cs](src/SharpCompress/Archives/Rar/RarArchive.cs), [GZipArchive.cs](src/SharpCompress/Archives/GZip/GZipArchive.cs), [SevenZipArchive.cs](src/SharpCompress/Archives/SevenZip/SevenZipArchive.cs)) and reader types ([ZipReader.cs](src/SharpCompress/Readers/Zip/ZipReader.cs), [TarReader.cs](src/SharpCompress/Readers/Tar/TarReader.cs), etc.)
3. **Convert archive initialization logic to async** including header reading, volume loading, and format signature detection across archive constructors and internal initialization methods
4. **Fix LZMA decoder async bugs** in [LzmaStream.cs](src/SharpCompress/Compressors/LZMA/LzmaStream.cs), [Decoder.cs](src/SharpCompress/Compressors/LZMA/Decoder.cs), and [OutWindow.cs](src/SharpCompress/Compressors/LZMA/OutWindow.cs) to enable true async 7Zip support and remove `NonDisposingStream` workaround
5. **Complete Rar async implementation** by converting `UnpackV2017` methods to async in [UnpackV2017.cs](src/SharpCompress/Compressors/Rar/UnpackV2017.cs) and updating Rar20 decompression
6. **Add comprehensive async tests** covering all new async entry points, cancellation scenarios, and concurrent operations across all archive formats in test files
## Further Considerations
1. **Breaking changes** - Should new async methods be added alongside existing sync methods (non-breaking), or should sync methods eventually be deprecated? Recommend additive approach for backward compatibility.
2. **Performance impact** - Headers for formats like Zip/Tar are often small; consider whether truly async parsing adds value over sync parsing wrapped in a Task, or make it conditional on stream type (network vs file).
3. **7Zip complexity** - The LZMA async bug fix (Step 4) may be challenging due to state management in the decoder; consider whether to scope it separately or implement a simpler workaround that maintains correctness.
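For reference, a minimal sketch of the Step 1 shape. It mirrors the `ArchiveFactory.Async.cs` hunk later in this diff, but the exact signature and the `IAsyncArchive` return type are assumptions, not the shipped API:

```csharp
public static partial class ArchiveFactory
{
    // Sketch: async open accepting a CancellationToken (Step 1).
    // FindFactoryAsync/OpenAsyncArchive appear elsewhere in this diff;
    // the signature and return type here are assumed.
    public static async ValueTask<IAsyncArchive> OpenArchiveAsync(
        Stream stream,
        ReaderOptions? readerOptions = null,
        CancellationToken cancellationToken = default)
    {
        readerOptions ??= new ReaderOptions();
        stream = SharpCompressStream.Create(stream, bufferSize: readerOptions.BufferSize);
        var factory = await FindFactoryAsync<IArchiveFactory>(stream, cancellationToken);
        return factory.OpenAsyncArchive(stream, readerOptions);
    }
}
```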

.github/prompts/plan-for-next.prompt.md (vendored; new file; +123)

@@ -0,0 +1,123 @@
# Plan: Modernize SharpCompress Public API
Based on comprehensive analysis, the API has several inconsistencies around factory patterns, async support, format capabilities, and options classes. Most improvements can be done incrementally without breaking changes.
## Steps
1. **Standardize factory patterns** by deprecating format-specific static `Open` methods in [Archives/Zip/ZipArchive.cs](src/SharpCompress/Archives/Zip/ZipArchive.cs), [Archives/Tar/TarArchive.cs](src/SharpCompress/Archives/Tar/TarArchive.cs), etc. in favor of centralized [Factories/ArchiveFactory.cs](src/SharpCompress/Factories/ArchiveFactory.cs)
2. **Complete async implementation** in [Writers/Zip/ZipWriter.cs](src/SharpCompress/Writers/Zip/ZipWriter.cs) and other writers that currently use sync-over-async, implementing true async I/O throughout the writer hierarchy
3. **Unify options classes** by making [Common/ExtractionOptions.cs](src/SharpCompress/Common/ExtractionOptions.cs) inherit from `OptionsBase` and adding progress reporting to extraction methods consistently
4. **Clarify GZip semantics** in [Archives/GZip/GZipArchive.cs](src/SharpCompress/Archives/GZip/GZipArchive.cs) by adding XML documentation explaining single-entry limitation and relationship to GZip compression used in Tar.gz
## Further Considerations
1. **Breaking changes roadmap** - Should we plan a major version (2.0) to remove deprecated factory methods, clean up `ArchiveType` enum (remove Arc/Arj or add full support), and consolidate naming patterns?
2. **Progress reporting consistency** - Should `IProgress<ArchiveExtractionProgress<IEntry>>` be added to all extraction extension methods or consolidated into options classes?
## Detailed Analysis
### Factory Pattern Issues
Three different factory patterns exist with overlapping functionality:
1. **Static Factories**: ArchiveFactory, ReaderFactory, WriterFactory
2. **Instance Factories**: IArchiveFactory, IReaderFactory, IWriterFactory
3. **Format-specific static methods**: Each archive class has static `Open` methods
**Example confusion:**
```csharp
// Three ways to open a Zip archive - which is recommended?
var archive1 = ArchiveFactory.Open("file.zip");
var archive2 = ZipArchive.Open("file.zip");
var archive3 = ArchiveFactory.AutoFactory.Open(fileInfo, options);
```
### Async Support Gaps
Base `IWriter` interface has async methods, but writer implementations provide minimal async support. Most writers just call synchronous methods:
```csharp
public virtual async Task WriteAsync(...)
{
    // Default implementation calls synchronous version
    Write(filename, source, modificationTime);
    await Task.CompletedTask.ConfigureAwait(false);
}
```
Real async implementations exist only in:
- `TarWriter` - proper async implementation

Most other writers use sync-over-async.
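For contrast, a hedged sketch of what a truly async override (as in `TarWriter`) might look like; `WriteHeaderAsync` and `_outputStream` are hypothetical names, not the actual implementation:

```csharp
public override async Task WriteAsync(
    string filename,
    Stream source,
    DateTime? modificationTime,
    CancellationToken cancellationToken = default)
{
    // Hypothetical members; real writers differ. The point is that both the
    // header and the payload are written with true async I/O.
    await WriteHeaderAsync(filename, modificationTime, cancellationToken)
        .ConfigureAwait(false);
    await source.CopyToAsync(_outputStream, 81920, cancellationToken)
        .ConfigureAwait(false);
}
```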
### GZip Archive Special Case
GZip is treated as both a compression format and an archive format, but only supports single-entry archives:
```csharp
protected override GZipArchiveEntry CreateEntryInternal(...)
{
if (Entries.Any())
{
throw new InvalidFormatException("Only one entry is allowed in a GZip Archive");
}
// ...
}
```
### Options Class Hierarchy
```
OptionsBase (LeaveStreamOpen, ArchiveEncoding)
├─ ReaderOptions (LookForHeader, Password, DisableCheckIncomplete, BufferSize, ExtensionHint, Progress)
├─ WriterOptions (CompressionType, CompressionLevel, Progress)
│ ├─ ZipWriterOptions (ArchiveComment, UseZip64)
│ ├─ TarWriterOptions (FinalizeArchiveOnClose, HeaderFormat)
│ └─ GZipWriterOptions (no additional properties)
└─ ExtractionOptions (standalone - Overwrite, ExtractFullPath, PreserveFileTime, PreserveAttributes)
```
**Issues:**
- `ExtractionOptions` doesn't inherit from `OptionsBase` - no encoding support during extraction
- Progress reporting inconsistency between readers and extraction
- Obsolete properties (`ChecksumIsValid`, `Version`) with unclear migration path
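A minimal sketch of the proposed unification, assuming the property sets shown in the hierarchy above:

```csharp
// Sketch only: ExtractionOptions inheriting OptionsBase, so extraction gains
// LeaveStreamOpen and ArchiveEncoding. Property names match the hierarchy above.
public class ExtractionOptions : OptionsBase
{
    public bool Overwrite { get; set; }
    public bool ExtractFullPath { get; set; }
    public bool PreserveFileTime { get; set; }
    public bool PreserveAttributes { get; set; }
}
```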
### Implementation Priorities
**High Priority (Non-Breaking):**
1. Add API usage guide (Archive vs Reader, factory recommendations, async best practices)
2. Fix progress reporting consistency
3. Complete async implementation in writers
**Medium Priority (Next Major Version):**
1. Unify factory pattern - deprecate format-specific static `Open` methods
2. Clean up options classes - make `ExtractionOptions` inherit from `OptionsBase`
3. Clarify archive types - remove Arc/Arj from `ArchiveType` enum or add full support
4. Standardize naming across archive types
**Low Priority:**
1. Add BZip2 archive support similar to GZipArchive
2. Complete obsolete property cleanup with migration guide
### Backward Compatibility Strategy
**Safe (Non-Breaking) Changes:**
- Add new methods to interfaces (use default implementations)
- Add new options properties (with defaults)
- Add new factory methods
- Improve async implementations
- Add progress reporting support
**Breaking Changes to Avoid:**
- ❌ Removing format-specific `Open` methods (deprecate instead)
- ❌ Changing `LeaveStreamOpen` default (currently `true`)
- ❌ Removing obsolete properties before major version bump
- ❌ Changing return types or signatures of existing methods
**Deprecation Pattern:**
- Use `[Obsolete]` for one major version
- Use `[EditorBrowsable(EditorBrowsableState.Never)]` in next major version
- Remove in following major version
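Expressed with the standard .NET attributes, the pattern looks like this (the member shown is illustrative, not an actual SharpCompress signature):

```csharp
using System;
using System.ComponentModel;

// Major version N: warn, but keep the member working.
[Obsolete("Use ArchiveFactory.Open instead.")]
public static IArchive Open(string filePath) => ArchiveFactory.Open(filePath);

// Major version N+1: additionally hide the member from IntelliSense.
[Obsolete("Use ArchiveFactory.Open instead.")]
[EditorBrowsable(EditorBrowsableState.Never)]
public static IArchive Open(string filePath) => ArchiveFactory.Open(filePath);

// Major version N+2: remove the member entirely.
```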

(file name not shown: deleted file)

@@ -1,50 +0,0 @@
name: Performance Benchmarks

on:
  push:
    branches:
      - 'master'
      - 'release'
  pull_request:
    branches:
      - 'master'
      - 'release'
  workflow_dispatch:

permissions:
  contents: read

jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v6
        with:
          fetch-depth: 0
      - uses: actions/setup-dotnet@v5
        with:
          dotnet-version: 10.0.x
      - name: Build Performance Project
        run: dotnet build tests/SharpCompress.Performance/SharpCompress.Performance.csproj --configuration Release
      - name: Run Benchmarks
        run: dotnet run --project tests/SharpCompress.Performance/SharpCompress.Performance.csproj --configuration Release --no-build -- --filter "*" --exporters json markdown --artifacts benchmark-results
        continue-on-error: true
      - name: Display Benchmark Results
        if: always()
        run: dotnet run --project build/build.csproj -- display-benchmark-results
      - name: Compare with Baseline
        if: always()
        run: dotnet run --project build/build.csproj -- compare-benchmark-results
      - name: Upload Benchmark Results
        if: always()
        uses: actions/upload-artifact@v6
        with:
          name: benchmark-results
          path: benchmark-results/

.gitignore (vendored; 4 lines changed)

@@ -17,10 +17,6 @@ tests/TestArchives/*/Scratch2
tools
.idea/
artifacts/
BenchmarkDotNet.Artifacts/
baseline-artifacts/
profiler-snapshots/
.DS_Store
*.snupkg
benchmark-results/

(file name not shown)

@@ -1,20 +1,20 @@
<Project>
<ItemGroup>
<PackageVersion Include="BenchmarkDotNet" Version="0.15.8" />
<PackageVersion Include="Bullseye" Version="6.1.0" />
<PackageVersion Include="AwesomeAssertions" Version="9.3.0" />
<PackageVersion Include="Glob" Version="1.1.9" />
<PackageVersion Include="JetBrains.Profiler.SelfApi" Version="2.5.16" />
<PackageVersion Include="JetBrains.Profiler.SelfApi" Version="2.5.15" />
<PackageVersion Include="Microsoft.Bcl.AsyncInterfaces" Version="10.0.0" />
<PackageVersion Include="Microsoft.NET.ILLink.Task" Version="10.0.0" />
<PackageVersion Include="Microsoft.NET.Test.Sdk" Version="18.0.1" />
<PackageVersion Include="Mono.Posix.NETStandard" Version="1.0.0" />
<PackageVersion Include="SimpleExec" Version="13.0.0" />
<PackageVersion Include="System.Text.Encoding.CodePages" Version="10.0.0" />
<PackageVersion Include="System.Buffers" Version="4.6.1" />
<PackageVersion Include="System.Memory" Version="4.6.3" />
<PackageVersion Include="xunit.v3" Version="3.2.2" />
<PackageVersion Include="xunit.v3" Version="3.2.1" />
<PackageVersion Include="xunit.runner.visualstudio" Version="3.1.5" />
<GlobalPackageReference Include="Microsoft.SourceLink.GitHub" Version="10.0.102" />
<GlobalPackageReference Include="Microsoft.SourceLink.GitHub" Version="8.0.0" />
<GlobalPackageReference Include="Microsoft.NETFramework.ReferenceAssemblies" Version="1.0.3" />
<GlobalPackageReference
Include="Microsoft.VisualStudio.Threading.Analyzers"

(file name not shown)

@@ -19,9 +19,6 @@ const string Publish = "publish";
const string DetermineVersion = "determine-version";
const string UpdateVersion = "update-version";
const string PushToNuGet = "push-to-nuget";
const string DisplayBenchmarkResults = "display-benchmark-results";
const string CompareBenchmarkResults = "compare-benchmark-results";
const string GenerateBaseline = "generate-baseline";
Target(
Clean,
@@ -213,249 +210,6 @@ Target(
}
);
Target(
DisplayBenchmarkResults,
() =>
{
var githubStepSummary = Environment.GetEnvironmentVariable("GITHUB_STEP_SUMMARY");
var resultsDir = "benchmark-results/results";
if (!Directory.Exists(resultsDir))
{
Console.WriteLine("No benchmark results found.");
return;
}
var markdownFiles = Directory
.GetFiles(resultsDir, "*-report-github.md")
.OrderBy(f => f)
.ToList();
if (markdownFiles.Count == 0)
{
Console.WriteLine("No benchmark markdown reports found.");
return;
}
var output = new List<string> { "## Benchmark Results", "" };
foreach (var file in markdownFiles)
{
Console.WriteLine($"Processing {Path.GetFileName(file)}");
var content = File.ReadAllText(file);
output.Add(content);
output.Add("");
}
// Write to GitHub Step Summary if available
if (!string.IsNullOrEmpty(githubStepSummary))
{
File.AppendAllLines(githubStepSummary, output);
Console.WriteLine($"Benchmark results written to GitHub Step Summary");
}
else
{
// Write to console if not in GitHub Actions
foreach (var line in output)
{
Console.WriteLine(line);
}
}
}
);
Target(
CompareBenchmarkResults,
() =>
{
var githubStepSummary = Environment.GetEnvironmentVariable("GITHUB_STEP_SUMMARY");
var baselinePath = "tests/SharpCompress.Performance/baseline-results.md";
var resultsDir = "benchmark-results/results";
var output = new List<string> { "## Comparison with Baseline", "" };
if (!File.Exists(baselinePath))
{
Console.WriteLine("Baseline file not found");
output.Add("⚠️ Baseline file not found. Run `generate-baseline` to create it.");
WriteOutput(output, githubStepSummary);
return;
}
if (!Directory.Exists(resultsDir))
{
Console.WriteLine("No current benchmark results found.");
output.Add("⚠️ No current benchmark results found. Showing baseline only.");
output.Add("");
output.Add("### Baseline Results");
output.AddRange(File.ReadAllLines(baselinePath));
WriteOutput(output, githubStepSummary);
return;
}
var markdownFiles = Directory
.GetFiles(resultsDir, "*-report-github.md")
.OrderBy(f => f)
.ToList();
if (markdownFiles.Count == 0)
{
Console.WriteLine("No current benchmark markdown reports found.");
output.Add("⚠️ No current benchmark results found. Showing baseline only.");
output.Add("");
output.Add("### Baseline Results");
output.AddRange(File.ReadAllLines(baselinePath));
WriteOutput(output, githubStepSummary);
return;
}
Console.WriteLine("Parsing baseline results...");
var baselineMetrics = ParseBenchmarkResults(File.ReadAllText(baselinePath));
Console.WriteLine("Parsing current results...");
var currentText = string.Join("\n", markdownFiles.Select(f => File.ReadAllText(f)));
var currentMetrics = ParseBenchmarkResults(currentText);
Console.WriteLine("Comparing results...");
output.Add("### Performance Comparison");
output.Add("");
output.Add(
"| Benchmark | Baseline Mean | Current Mean | Change | Baseline Memory | Current Memory | Change |"
);
output.Add(
"|-----------|---------------|--------------|--------|-----------------|----------------|--------|"
);
var hasRegressions = false;
var hasImprovements = false;
foreach (var method in currentMetrics.Keys.Union(baselineMetrics.Keys).OrderBy(k => k))
{
var hasCurrent = currentMetrics.TryGetValue(method, out var current);
var hasBaseline = baselineMetrics.TryGetValue(method, out var baseline);
if (!hasCurrent)
{
output.Add(
$"| {method} | {baseline!.Mean} | ❌ Missing | N/A | {baseline.Memory} | N/A | N/A |"
);
continue;
}
if (!hasBaseline)
{
output.Add(
$"| {method} | ❌ New | {current!.Mean} | N/A | N/A | {current.Memory} | N/A |"
);
continue;
}
var timeChange = CalculateChange(baseline!.MeanValue, current!.MeanValue);
var memChange = CalculateChange(baseline.MemoryValue, current.MemoryValue);
var timeIcon =
timeChange > 25 ? "🔴"
: timeChange < -25 ? "🟢"
: "⚪";
var memIcon =
memChange > 25 ? "🔴"
: memChange < -25 ? "🟢"
: "⚪";
if (timeChange > 25 || memChange > 25)
hasRegressions = true;
if (timeChange < -25 || memChange < -25)
hasImprovements = true;
output.Add(
$"| {method} | {baseline.Mean} | {current.Mean} | {timeIcon} {timeChange:+0.0;-0.0;0}% | {baseline.Memory} | {current.Memory} | {memIcon} {memChange:+0.0;-0.0;0}% |"
);
}
output.Add("");
output.Add("**Legend:**");
output.Add("- 🔴 Regression (>25% slower/more memory)");
output.Add("- 🟢 Improvement (>25% faster/less memory)");
output.Add("- ⚪ No significant change");
if (hasRegressions)
{
output.Add("");
output.Add(
"⚠️ **Warning**: Performance regressions detected. Review the changes carefully."
);
}
else if (hasImprovements)
{
output.Add("");
output.Add("✅ Performance improvements detected!");
}
else
{
output.Add("");
output.Add("✅ Performance is stable compared to baseline.");
}
WriteOutput(output, githubStepSummary);
}
);
Target(
GenerateBaseline,
() =>
{
var perfProject = "tests/SharpCompress.Performance/SharpCompress.Performance.csproj";
var baselinePath = "tests/SharpCompress.Performance/baseline-results.md";
var artifactsDir = "baseline-artifacts";
Console.WriteLine("Building performance project...");
Run("dotnet", $"build {perfProject} --configuration Release");
Console.WriteLine("Running benchmarks to generate baseline...");
Run(
"dotnet",
$"run --project {perfProject} --configuration Release --no-build -- --filter \"*\" --exporters markdown --artifacts {artifactsDir}"
);
var resultsDir = Path.Combine(artifactsDir, "results");
if (!Directory.Exists(resultsDir))
{
Console.WriteLine("ERROR: No benchmark results generated.");
return;
}
var markdownFiles = Directory
.GetFiles(resultsDir, "*-report-github.md")
.OrderBy(f => f)
.ToList();
if (markdownFiles.Count == 0)
{
Console.WriteLine("ERROR: No markdown reports found.");
return;
}
Console.WriteLine($"Combining {markdownFiles.Count} benchmark reports...");
var baselineContent = new List<string>();
foreach (var file in markdownFiles)
{
var lines = File.ReadAllLines(file);
baselineContent.AddRange(lines.Select(l => l.Trim()).Where(l => l.StartsWith('|')));
}
File.WriteAllText(baselinePath, string.Join(Environment.NewLine, baselineContent));
Console.WriteLine($"Baseline written to {baselinePath}");
// Clean up artifacts directory
if (Directory.Exists(artifactsDir))
{
Directory.Delete(artifactsDir, true);
Console.WriteLine("Cleaned up artifacts directory.");
}
}
);
Target("default", [Publish], () => Console.WriteLine("Done!"));
await RunTargetsAndExitAsync(args);
@@ -476,7 +230,7 @@ static async Task<(string version, bool isPrerelease)> GetVersion()
}
else
{
// Not tagged - create prerelease version
// Not tagged - create prerelease version based on next minor version
var allTags = (await GetGitOutput("tag", "--list"))
.Split('\n', StringSplitOptions.RemoveEmptyEntries)
.Where(tag => Regex.IsMatch(tag.Trim(), @"^\d+\.\d+\.\d+$"))
@@ -486,22 +240,8 @@ static async Task<(string version, bool isPrerelease)> GetVersion()
var lastTag = allTags.OrderBy(tag => Version.Parse(tag)).LastOrDefault() ?? "0.0.0";
var lastVersion = Version.Parse(lastTag);
// Determine version increment based on branch
var currentBranch = await GetCurrentBranch();
Version nextVersion;
if (currentBranch == "release")
{
// Release branch: increment patch version
nextVersion = new Version(lastVersion.Major, lastVersion.Minor, lastVersion.Build + 1);
Console.WriteLine($"Building prerelease for release branch (patch increment)");
}
else
{
// Master or other branches: increment minor version
nextVersion = new Version(lastVersion.Major, lastVersion.Minor + 1, 0);
Console.WriteLine($"Building prerelease for {currentBranch} branch (minor increment)");
}
// Increment minor version for next release
var nextVersion = new Version(lastVersion.Major, lastVersion.Minor + 1, 0);
// Use commit count since the last version tag if available; otherwise, fall back to total count
var revListArgs = allTags.Any() ? $"--count {lastTag}..HEAD" : "--count HEAD";
@@ -513,28 +253,6 @@ static async Task<(string version, bool isPrerelease)> GetVersion()
}
}
static async Task<string> GetCurrentBranch()
{
// In GitHub Actions, GITHUB_REF_NAME contains the branch name
var githubRefName = Environment.GetEnvironmentVariable("GITHUB_REF_NAME");
if (!string.IsNullOrEmpty(githubRefName))
{
return githubRefName;
}
// Fallback to git command for local builds
try
{
var (output, _) = await ReadAsync("git", "branch --show-current");
return output.Trim();
}
catch (Exception ex)
{
Console.WriteLine($"Warning: Could not determine current branch: {ex.Message}");
return "unknown";
}
}
static async Task<string> GetGitOutput(string command, string args)
{
try
@@ -548,142 +266,3 @@ static async Task<string> GetGitOutput(string command, string args)
throw new Exception($"Git command failed: git {command} {args}\n{ex.Message}", ex);
}
}
static void WriteOutput(List<string> output, string? githubStepSummary)
{
if (!string.IsNullOrEmpty(githubStepSummary))
{
File.AppendAllLines(githubStepSummary, output);
Console.WriteLine("Comparison written to GitHub Step Summary");
}
else
{
foreach (var line in output)
{
Console.WriteLine(line);
}
}
}
static Dictionary<string, BenchmarkMetric> ParseBenchmarkResults(string markdown)
{
var metrics = new Dictionary<string, BenchmarkMetric>();
var lines = markdown.Split('\n');
for (int i = 0; i < lines.Length; i++)
{
var line = lines[i].Trim();
// Look for table rows with benchmark data
if (line.StartsWith("|") && line.Contains("&#39;") && i > 0)
{
var parts = line.Split('|', StringSplitOptions.TrimEntries);
if (parts.Length >= 5)
{
var method = parts[1].Replace("&#39;", "'");
var meanStr = parts[2];
// Find Allocated column - it's usually the last column or labeled "Allocated"
string memoryStr = "N/A";
for (int j = parts.Length - 2; j >= 2; j--)
{
if (
parts[j].Contains("KB")
|| parts[j].Contains("MB")
|| parts[j].Contains("GB")
|| parts[j].Contains("B")
)
{
memoryStr = parts[j];
break;
}
}
if (
!method.Equals("Method", StringComparison.OrdinalIgnoreCase)
&& !string.IsNullOrWhiteSpace(method)
)
{
var metric = new BenchmarkMetric
{
Method = method,
Mean = meanStr,
MeanValue = ParseTimeValue(meanStr),
Memory = memoryStr,
MemoryValue = ParseMemoryValue(memoryStr),
};
metrics[method] = metric;
}
}
}
}
return metrics;
}
static double ParseTimeValue(string timeStr)
{
if (string.IsNullOrWhiteSpace(timeStr) || timeStr == "N/A" || timeStr == "NA")
return 0;
// Remove thousands separators and parse
timeStr = timeStr.Replace(",", "").Trim();
var match = Regex.Match(timeStr, @"([\d.]+)\s*(\w+)");
if (!match.Success)
return 0;
var value = double.Parse(match.Groups[1].Value);
var unit = match.Groups[2].Value.ToLower();
// Convert to microseconds for comparison
return unit switch
{
"s" => value * 1_000_000,
"ms" => value * 1_000,
"μs" or "us" => value,
"ns" => value / 1_000,
_ => value,
};
}
static double ParseMemoryValue(string memStr)
{
if (string.IsNullOrWhiteSpace(memStr) || memStr == "N/A" || memStr == "NA")
return 0;
memStr = memStr.Replace(",", "").Trim();
var match = Regex.Match(memStr, @"([\d.]+)\s*(\w+)");
if (!match.Success)
return 0;
var value = double.Parse(match.Groups[1].Value);
var unit = match.Groups[2].Value.ToUpper();
// Convert to KB for comparison
return unit switch
{
"GB" => value * 1_024 * 1_024,
"MB" => value * 1_024,
"KB" => value,
"B" => value / 1_024,
_ => value,
};
}
static double CalculateChange(double baseline, double current)
{
if (baseline == 0)
return 0;
return ((current - baseline) / baseline) * 100;
}
record BenchmarkMetric
{
public string Method { get; init; } = "";
public string Mean { get; init; } = "";
public double MeanValue { get; init; }
public string Memory { get; init; } = "";
public double MemoryValue { get; init; }
}

(file name not shown)

@@ -25,12 +25,12 @@
},
"Microsoft.SourceLink.GitHub": {
"type": "Direct",
"requested": "[10.0.102, )",
"resolved": "10.0.102",
"contentHash": "Oxq3RCIJSdtpIU4hLqO7XaDe/Ra3HS9Wi8rJl838SAg6Zu1iQjerA0+xXWBgUFYbgknUGCLOU0T+lzMLkvY9Qg==",
"requested": "[8.0.0, )",
"resolved": "8.0.0",
"contentHash": "G5q7OqtwIyGTkeIOAc3u2ZuV/kicQaec5EaRnc0pIeSnh9LUjj+PYQrJYBURvDt7twGl2PKA7nSN0kz1Zw5bnQ==",
"dependencies": {
"Microsoft.Build.Tasks.Git": "10.0.102",
"Microsoft.SourceLink.Common": "10.0.102"
"Microsoft.Build.Tasks.Git": "8.0.0",
"Microsoft.SourceLink.Common": "8.0.0"
}
},
"Microsoft.VisualStudio.Threading.Analyzers": {
@@ -47,8 +47,8 @@
},
"Microsoft.Build.Tasks.Git": {
"type": "Transitive",
"resolved": "10.0.102",
"contentHash": "0i81LYX31U6UiXz4NOLbvc++u+/mVDmOt+PskrM/MygpDxkv9THKQyRUmavBpLK6iBV0abNWnn+CQgSRz//Pwg=="
"resolved": "8.0.0",
"contentHash": "bZKfSIKJRXLTuSzLudMFte/8CempWjVamNUR5eHJizsy+iuOuO/k2gnh7W0dHJmYY0tBf+gUErfluCv5mySAOQ=="
},
"Microsoft.NETFramework.ReferenceAssemblies.net461": {
"type": "Transitive",
@@ -57,8 +57,8 @@
},
"Microsoft.SourceLink.Common": {
"type": "Transitive",
"resolved": "10.0.102",
"contentHash": "Mk1IMb9q5tahC2NltxYXFkLBtuBvfBoCQ3pIxYQWfzbCE9o1OB9SsHe0hnNGo7lWgTA/ePbFAJLWu6nLL9K17A=="
"resolved": "8.0.0",
"contentHash": "dk9JPxTCIevS75HyEQ0E4OVAFhB2N+V9ShCXf8Q6FkUQZDkgLI12y679Nym1YqsiSysuQskT7Z+6nUf3yab6Vw=="
}
}
}

(file name not shown)

@@ -20,6 +20,7 @@ public static partial class ArchiveFactory
)
{
readerOptions ??= new ReaderOptions();
stream = SharpCompressStream.Create(stream, bufferSize: readerOptions.BufferSize);
var factory = await FindFactoryAsync<IArchiveFactory>(stream, cancellationToken);
return factory.OpenAsyncArchive(stream, readerOptions);
}

(file name not shown)

@@ -16,6 +16,7 @@ public static partial class ArchiveFactory
public static IArchive OpenArchive(Stream stream, ReaderOptions? readerOptions = null)
{
readerOptions ??= new ReaderOptions();
stream = SharpCompressStream.Create(stream, bufferSize: readerOptions.BufferSize);
return FindFactory<IArchiveFactory>(stream).OpenArchive(stream, readerOptions);
}
@@ -149,14 +150,24 @@ public static partial class ArchiveFactory
);
}
public static bool IsArchive(string filePath, out ArchiveType? type)
// Async methods moved to ArchiveFactory.Async.cs
public static bool IsArchive(
string filePath,
out ArchiveType? type,
int bufferSize = ReaderOptions.DefaultBufferSize
)
{
filePath.NotNullOrEmpty(nameof(filePath));
using Stream s = File.OpenRead(filePath);
return IsArchive(s, out type);
return IsArchive(s, out type, bufferSize);
}
public static bool IsArchive(Stream stream, out ArchiveType? type)
public static bool IsArchive(
Stream stream,
out ArchiveType? type,
int bufferSize = ReaderOptions.DefaultBufferSize
)
{
type = null;
stream.NotNull(nameof(stream));

(file name not shown)

@@ -58,7 +58,7 @@ internal sealed class GZipWritableArchiveEntry : GZipArchiveEntry, IWritableArch
{
//ensure new stream is at the start, this could be reset
stream.Seek(0, SeekOrigin.Begin);
return SharpCompressStream.CreateNonDisposing(stream);
return SharpCompressStream.Create(stream, leaveOpen: true);
}
internal override void Close()

(file name not shown)

@@ -9,6 +9,8 @@ namespace SharpCompress.Archives;
public static class IArchiveEntryExtensions
{
private const int BufferSize = 81920;
/// <param name="archiveEntry">The archive entry to extract.</param>
extension(IArchiveEntry archiveEntry)
{
@@ -26,7 +28,7 @@ public static class IArchiveEntryExtensions
using var entryStream = archiveEntry.OpenEntryStream();
var sourceStream = WrapWithProgress(entryStream, archiveEntry, progress);
sourceStream.CopyTo(streamToWriteTo, Constants.BufferSize);
sourceStream.CopyTo(streamToWriteTo, BufferSize);
}
/// <summary>
@@ -46,16 +48,10 @@ public static class IArchiveEntryExtensions
throw new ExtractionException("Entry is a file directory and cannot be extracted.");
}
#if LEGACY_DOTNET
using var entryStream = await archiveEntry.OpenEntryStreamAsync(cancellationToken);
#else
await using var entryStream = await archiveEntry.OpenEntryStreamAsync(
cancellationToken
);
#endif
var sourceStream = WrapWithProgress(entryStream, archiveEntry, progress);
await sourceStream
.CopyToAsync(streamToWriteTo, Constants.BufferSize, cancellationToken)
.CopyToAsync(streamToWriteTo, BufferSize, cancellationToken)
.ConfigureAwait(false);
}
}

(file name not shown)

@@ -16,22 +16,18 @@ public partial class SevenZipArchive : AbstractArchive<SevenZipArchiveEntry, Sev
{
private ArchiveDatabase? _database;
/// <summary>
/// Constructor with a SourceStream able to handle FileInfo and Streams.
/// </summary>
/// <param name="sourceStream"></param>
private SevenZipArchive(SourceStream sourceStream)
: base(ArchiveType.SevenZip, sourceStream) { }
protected override IEnumerable<SevenZipVolume> LoadVolumes(SourceStream sourceStream)
{
sourceStream.NotNull("SourceStream is null").LoadAllParts(); //request all streams
return new SevenZipVolume(sourceStream, ReaderOptions, 0).AsEnumerable(); //simple single volume or split, multivolume not supported
}
internal SevenZipArchive()
: base(ArchiveType.SevenZip) { }
protected override IEnumerable<SevenZipVolume> LoadVolumes(SourceStream sourceStream)
{
sourceStream.NotNull("SourceStream is null").LoadAllParts();
return new SevenZipVolume(sourceStream, ReaderOptions, 0).AsEnumerable();
}
protected override IEnumerable<SevenZipArchiveEntry> LoadEntries(
IEnumerable<SevenZipVolume> volumes
)
@@ -102,34 +98,13 @@ public partial class SevenZipArchive : AbstractArchive<SevenZipArchiveEntry, Sev
public override long TotalSize =>
_database?._packSizes.Aggregate(0L, (total, packSize) => total + packSize) ?? 0;
internal sealed class SevenZipReader : AbstractReader<SevenZipEntry, SevenZipVolume>
private sealed class SevenZipReader : AbstractReader<SevenZipEntry, SevenZipVolume>
{
private readonly SevenZipArchive _archive;
private SevenZipEntry? _currentEntry;
private Stream? _currentFolderStream;
private CFolder? _currentFolder;
/// <summary>
/// Enables internal diagnostics for tests.
/// When disabled (default), diagnostics properties return null to avoid exposing internal state.
/// </summary>
internal bool DiagnosticsEnabled { get; set; }
/// <summary>
/// Current folder instance used to decide whether the solid folder stream should be reused.
/// Only available when <see cref="DiagnosticsEnabled"/> is true.
/// </summary>
internal object? DiagnosticsCurrentFolder => DiagnosticsEnabled ? _currentFolder : null;
/// <summary>
/// Current shared folder stream instance.
/// Only available when <see cref="DiagnosticsEnabled"/> is true.
/// </summary>
internal Stream? DiagnosticsCurrentFolderStream =>
DiagnosticsEnabled ? _currentFolderStream : null;
internal SevenZipReader(ReaderOptions readerOptions, SevenZipArchive archive)
: base(readerOptions, ArchiveType.SevenZip, false) => this._archive = archive;
: base(readerOptions, ArchiveType.SevenZip) => this._archive = archive;
public override SevenZipVolume Volume => _archive.Volumes.Single();
@@ -142,10 +117,6 @@ public partial class SevenZipArchive : AbstractArchive<SevenZipArchiveEntry, Sev
_currentEntry = dir;
yield return dir;
}
// For solid archives (entries in the same folder share a compressed stream),
// we must iterate entries sequentially and maintain the folder stream state
// across entries in the same folder to avoid recreating the decompression
// stream for each file, which breaks contiguous streaming.
foreach (var entry in entries.Where(x => !x.IsDirectory))
{
_currentEntry = entry;
@@ -160,53 +131,10 @@ public partial class SevenZipArchive : AbstractArchive<SevenZipArchiveEntry, Sev
{
return CreateEntryStream(Stream.Null);
}
var folder = entry.FilePart.Folder;
// Check if we're starting a new folder - dispose old folder stream if needed
if (folder != _currentFolder)
{
_currentFolderStream?.Dispose();
_currentFolderStream = null;
_currentFolder = folder;
}
// Create the folder stream once per folder
if (_currentFolderStream is null)
{
_currentFolderStream = _archive._database!.GetFolderStream(
_archive.Volumes.Single().Stream,
folder!,
_archive._database.PasswordProvider
);
}
// Wrap with SyncOnlyStream to work around LZMA async bugs
// Return a ReadOnlySubStream that reads from the shared folder stream
return CreateEntryStream(
new SyncOnlyStream(
new ReadOnlySubStream(_currentFolderStream, entry.Size, leaveOpen: true)
)
);
}
public override void Dispose()
{
_currentFolderStream?.Dispose();
_currentFolderStream = null;
base.Dispose();
return CreateEntryStream(new SyncOnlyStream(entry.FilePart.GetCompressedStream()));
}
}
/// <summary>
/// WORKAROUND: Forces async operations to use synchronous equivalents.
/// This is necessary because the LZMA decoder has bugs in its async implementation
/// that cause state corruption (IndexOutOfRangeException, DataErrorException).
///
/// The proper fix would be to repair the LZMA decoder's async methods
/// (LzmaStream.ReadAsync, Decoder.CodeAsync, OutWindow async operations),
/// but that requires deep changes to the decoder state machine.
/// </summary>
private sealed class SyncOnlyStream : Stream
{
private readonly Stream _baseStream;
@@ -236,7 +164,6 @@ public partial class SevenZipArchive : AbstractArchive<SevenZipArchiveEntry, Sev
public override void Write(byte[] buffer, int offset, int count) =>
_baseStream.Write(buffer, offset, count);
// Force async operations to use sync equivalents to avoid LZMA decoder bugs
public override Task<int> ReadAsync(
byte[] buffer,
int offset,

(file name not shown)

@@ -12,9 +12,8 @@ public class SevenZipArchiveEntry : SevenZipEntry, IArchiveEntry
public Stream OpenEntryStream() => FilePart.GetCompressedStream();
public async ValueTask<Stream> OpenEntryStreamAsync(
CancellationToken cancellationToken = default
) => (await FilePart.GetCompressedStreamAsync(cancellationToken)).NotNull();
public ValueTask<Stream> OpenEntryStreamAsync(CancellationToken cancellationToken = default) =>
new(OpenEntryStream());
public IArchive Archive { get; }

(file name not shown)

@@ -176,11 +176,7 @@ public partial class TarArchive
try
{
var tarHeader = new TarHeader(new ArchiveEncoding());
#if NET8_0_OR_GREATER
await using var reader = new AsyncBinaryReader(stream, leaveOpen: true);
#else
using var reader = new AsyncBinaryReader(stream, leaveOpen: true);
#endif
var reader = new AsyncBinaryReader(stream, false);
var readSucceeded = await tarHeader.ReadAsync(reader);
var isEmptyArchive =
tarHeader.Name?.Length == 0

(file name not shown)

@@ -66,7 +66,7 @@ public partial class TarArchive : AbstractWritableArchive<TarArchiveEntry, TarVo
using (var entryStream = entry.OpenEntryStream())
{
using var memoryStream = new MemoryStream();
entryStream.CopyTo(memoryStream, Constants.BufferSize);
entryStream.CopyTo(memoryStream);
memoryStream.Position = 0;
var bytes = memoryStream.ToArray();

(file name not shown)

@@ -14,9 +14,8 @@ public class TarArchiveEntry : TarEntry, IArchiveEntry
public virtual Stream OpenEntryStream() => Parts.Single().GetCompressedStream().NotNull();
public async ValueTask<Stream> OpenEntryStreamAsync(
CancellationToken cancellationToken = default
) => (await Parts.Single().GetCompressedStreamAsync(cancellationToken)).NotNull();
public ValueTask<Stream> OpenEntryStreamAsync(CancellationToken cancellationToken = default) =>
new(OpenEntryStream());
#region IArchiveEntry Members

(file name not shown)

@@ -79,7 +79,7 @@ internal sealed class TarWritableArchiveEntry : TarArchiveEntry, IWritableArchiv
}
//ensure new stream is at the start, this could be reset
stream.Seek(0, SeekOrigin.Begin);
return SharpCompressStream.CreateNonDisposing(stream);
return SharpCompressStream.Create(stream, leaveOpen: true);
}
internal override void Close()

(file name not shown)

@@ -135,24 +135,40 @@ public partial class ZipArchive
return (IWritableAsyncArchive)OpenArchive(fileInfos, readerOptions);
}
public static bool IsZipFile(string filePath, string? password = null) =>
IsZipFile(new FileInfo(filePath), password);
public static bool IsZipFile(
string filePath,
string? password = null,
int bufferSize = ReaderOptions.DefaultBufferSize
) => IsZipFile(new FileInfo(filePath), password, bufferSize);
public static bool IsZipFile(FileInfo fileInfo, string? password = null)
public static bool IsZipFile(
FileInfo fileInfo,
string? password = null,
int bufferSize = ReaderOptions.DefaultBufferSize
)
{
if (!fileInfo.Exists)
{
return false;
}
using Stream stream = fileInfo.OpenRead();
return IsZipFile(stream, password);
return IsZipFile(stream, password, bufferSize);
}
public static bool IsZipFile(Stream stream, string? password = null)
public static bool IsZipFile(
Stream stream,
string? password = null,
int bufferSize = ReaderOptions.DefaultBufferSize
)
{
var headerFactory = new StreamingZipHeaderFactory(password, new ArchiveEncoding(), null);
try
{
if (stream is not SharpCompressStream)
{
stream = new SharpCompressStream(stream, bufferSize: bufferSize);
}
var header = headerFactory
.ReadStreamHeader(stream)
.FirstOrDefault(x => x.ZipHeaderType != ZipHeaderType.Split);
@@ -172,11 +188,20 @@ public partial class ZipArchive
}
}
public static bool IsZipMulti(Stream stream, string? password = null)
public static bool IsZipMulti(
Stream stream,
string? password = null,
int bufferSize = ReaderOptions.DefaultBufferSize
)
{
var headerFactory = new StreamingZipHeaderFactory(password, new ArchiveEncoding(), null);
try
{
if (stream is not SharpCompressStream)
{
stream = new SharpCompressStream(stream, bufferSize: bufferSize);
}
var header = headerFactory
.ReadStreamHeader(stream)
.FirstOrDefault(x => x.ZipHeaderType != ZipHeaderType.Split);
@@ -208,6 +233,7 @@ public partial class ZipArchive
public static async ValueTask<bool> IsZipFileAsync(
Stream stream,
string? password = null,
int bufferSize = ReaderOptions.DefaultBufferSize,
CancellationToken cancellationToken = default
)
{
@@ -215,6 +241,11 @@ public partial class ZipArchive
var headerFactory = new StreamingZipHeaderFactory(password, new ArchiveEncoding(), null);
try
{
if (stream is not SharpCompressStream)
{
stream = new SharpCompressStream(stream, bufferSize: bufferSize);
}
var header = await headerFactory
.ReadStreamHeaderAsync(stream)
.Where(x => x.ZipHeaderType != ZipHeaderType.Split)
@@ -242,6 +273,7 @@ public partial class ZipArchive
public static async ValueTask<bool> IsZipMultiAsync(
Stream stream,
string? password = null,
int bufferSize = ReaderOptions.DefaultBufferSize,
CancellationToken cancellationToken = default
)
{
@@ -249,6 +281,11 @@ public partial class ZipArchive
var headerFactory = new StreamingZipHeaderFactory(password, new ArchiveEncoding(), null);
try
{
if (stream is not SharpCompressStream)
{
stream = new SharpCompressStream(stream, bufferSize: bufferSize);
}
var header = headerFactory
.ReadStreamHeader(stream)
.FirstOrDefault(x => x.ZipHeaderType != ZipHeaderType.Split);

(file name not shown)

@@ -35,7 +35,7 @@ public partial class ZipArchive : AbstractWritableArchive<ZipArchiveEntry, ZipVo
protected override IEnumerable<ZipVolume> LoadVolumes(SourceStream stream)
{
stream.LoadAllParts();
//stream.Position = 0;
stream.Position = 0;
var streams = stream.Streams.ToList();
var idx = 0;
@@ -45,7 +45,11 @@ public partial class ZipArchive : AbstractWritableArchive<ZipArchiveEntry, ZipVo
var headerProbeStream = streams[1];
var startPosition = headerProbeStream.Position;
headerProbeStream.Position = startPosition + 4;
var isZip = IsZipFile(headerProbeStream, ReaderOptions.Password);
var isZip = IsZipFile(
headerProbeStream,
ReaderOptions.Password,
ReaderOptions.BufferSize
);
headerProbeStream.Position = startPosition;
if (isZip)
{
@@ -156,7 +160,7 @@ public partial class ZipArchive : AbstractWritableArchive<ZipArchiveEntry, ZipVo
protected override IReader CreateReaderForSolidExtraction()
{
var stream = Volumes.Single().Stream;
//stream.Position = 0;
((IStreamStack)stream).StackSeek(0);
return ZipReader.OpenReader(stream, ReaderOptions, Entries);
}

(file name not shown)

@@ -80,7 +80,7 @@ internal class ZipWritableArchiveEntry : ZipArchiveEntry, IWritableArchiveEntry
}
//ensure new stream is at the start, this could be reset
stream.Seek(0, SeekOrigin.Begin);
return SharpCompressStream.CreateNonDisposing(stream);
return SharpCompressStream.Create(stream, leaveOpen: true);
}
internal override void Close()

(file name not shown)

@@ -4,61 +4,62 @@ using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace SharpCompress.Common.Ace;
public class AceCrc
namespace SharpCompress.Common.Ace
{
// CRC-32 lookup table (standard polynomial 0xEDB88320, reflected)
private static readonly uint[] Crc32Table = GenerateTable();
private static uint[] GenerateTable()
public class AceCrc
{
var table = new uint[256];
// CRC-32 lookup table (standard polynomial 0xEDB88320, reflected)
private static readonly uint[] Crc32Table = GenerateTable();
for (int i = 0; i < 256; i++)
private static uint[] GenerateTable()
{
uint crc = (uint)i;
var table = new uint[256];
for (int j = 0; j < 8; j++)
for (int i = 0; i < 256; i++)
{
if ((crc & 1) != 0)
uint crc = (uint)i;
for (int j = 0; j < 8; j++)
{
crc = (crc >> 1) ^ 0xEDB88320u;
}
else
{
crc >>= 1;
if ((crc & 1) != 0)
{
crc = (crc >> 1) ^ 0xEDB88320u;
}
else
{
crc >>= 1;
}
}
table[i] = crc;
}
table[i] = crc;
return table;
}
return table;
}
/// <summary>
/// Calculate ACE CRC-32 checksum.
/// ACE CRC-32 uses standard CRC-32 polynomial (0xEDB88320, reflected)
/// with init=0xFFFFFFFF but NO final XOR.
/// </summary>
public static uint AceCrc32(ReadOnlySpan<byte> data)
{
uint crc = 0xFFFFFFFFu;
foreach (byte b in data)
/// <summary>
/// Calculate ACE CRC-32 checksum.
/// ACE CRC-32 uses standard CRC-32 polynomial (0xEDB88320, reflected)
/// with init=0xFFFFFFFF but NO final XOR.
/// </summary>
public static uint AceCrc32(ReadOnlySpan<byte> data)
{
crc = (crc >> 8) ^ Crc32Table[(crc ^ b) & 0xFF];
uint crc = 0xFFFFFFFFu;
foreach (byte b in data)
{
crc = (crc >> 8) ^ Crc32Table[(crc ^ b) & 0xFF];
}
return crc; // No final XOR for ACE
}
return crc; // No final XOR for ACE
}
/// <summary>
/// ACE CRC-16 is the lower 16 bits of the ACE CRC-32.
/// </summary>
public static ushort AceCrc16(ReadOnlySpan<byte> data)
{
return (ushort)(AceCrc32(data) & 0xFFFF);
/// <summary>
/// ACE CRC-16 is the lower 16 bits of the ACE CRC-32.
/// </summary>
public static ushort AceCrc16(ReadOnlySpan<byte> data)
{
return (ushort)(AceCrc32(data) & 0xFFFF);
}
}
}
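A small usage sketch of the helpers above (the class itself is unchanged by this diff; only the namespace style differs):

```csharp
using System.Text;
using SharpCompress.Common.Ace;

byte[] data = Encoding.ASCII.GetBytes("example");
uint crc32 = AceCrc.AceCrc32(data);   // init 0xFFFFFFFF, no final XOR
ushort crc16 = AceCrc.AceCrc16(data); // low 16 bits of the ACE CRC-32
```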

(file name not shown)

@@ -6,62 +6,63 @@ using System.Text;
using System.Threading.Tasks;
using SharpCompress.Common.Ace.Headers;
namespace SharpCompress.Common.Ace;
public class AceEntry : Entry
namespace SharpCompress.Common.Ace
{
private readonly AceFilePart _filePart;
internal AceEntry(AceFilePart filePart)
public class AceEntry : Entry
{
_filePart = filePart;
}
private readonly AceFilePart _filePart;
public override long Crc
{
get
internal AceEntry(AceFilePart filePart)
{
if (_filePart == null)
{
return 0;
}
return _filePart.Header.Crc32;
_filePart = filePart;
}
}
public override string? Key => _filePart?.Header.Filename;
public override string? LinkTarget => null;
public override long CompressedSize => _filePart?.Header.PackedSize ?? 0;
public override CompressionType CompressionType
{
get
public override long Crc
{
if (_filePart.Header.CompressionType == Headers.CompressionType.Stored)
get
{
return CompressionType.None;
if (_filePart == null)
{
return 0;
}
return _filePart.Header.Crc32;
}
return CompressionType.AceLZ77;
}
public override string? Key => _filePart?.Header.Filename;
public override string? LinkTarget => null;
public override long CompressedSize => _filePart?.Header.PackedSize ?? 0;
public override CompressionType CompressionType
{
get
{
if (_filePart.Header.CompressionType == Headers.CompressionType.Stored)
{
return CompressionType.None;
}
return CompressionType.AceLZ77;
}
}
public override long Size => _filePart?.Header.OriginalSize ?? 0;
public override DateTime? LastModifiedTime => _filePart.Header.DateTime;
public override DateTime? CreatedTime => null;
public override DateTime? LastAccessedTime => null;
public override DateTime? ArchivedTime => null;
public override bool IsEncrypted => _filePart.Header.IsFileEncrypted;
public override bool IsDirectory => _filePart.Header.IsDirectory;
public override bool IsSplitAfter => false;
internal override IEnumerable<FilePart> Parts => _filePart.Empty();
}
public override long Size => _filePart?.Header.OriginalSize ?? 0;
public override DateTime? LastModifiedTime => _filePart.Header.DateTime;
public override DateTime? CreatedTime => null;
public override DateTime? LastAccessedTime => null;
public override DateTime? ArchivedTime => null;
public override bool IsEncrypted => _filePart.Header.IsFileEncrypted;
public override bool IsDirectory => _filePart.Header.IsDirectory;
public override bool IsSplitAfter => false;
internal override IEnumerable<FilePart> Parts => _filePart.Empty();
}

(file name not shown)

@@ -7,45 +7,46 @@ using System.Threading.Tasks;
using SharpCompress.Common.Ace.Headers;
using SharpCompress.IO;
namespace SharpCompress.Common.Ace;
public class AceFilePart : FilePart
namespace SharpCompress.Common.Ace
{
private readonly Stream _stream;
internal AceFileHeader Header { get; set; }
internal AceFilePart(AceFileHeader localAceHeader, Stream seekableStream)
: base(localAceHeader.ArchiveEncoding)
public class AceFilePart : FilePart
{
_stream = seekableStream;
Header = localAceHeader;
}
private readonly Stream _stream;
internal AceFileHeader Header { get; set; }
internal override string? FilePartName => Header.Filename;
internal override Stream GetCompressedStream()
{
if (_stream != null)
internal AceFilePart(AceFileHeader localAceHeader, Stream seekableStream)
: base(localAceHeader.ArchiveEncoding)
{
Stream compressedStream;
switch (Header.CompressionType)
{
case Headers.CompressionType.Stored:
compressedStream = new ReadOnlySubStream(
_stream,
Header.DataStartPosition,
Header.PackedSize
);
break;
default:
throw new NotSupportedException(
"CompressionMethod: " + Header.CompressionQuality
);
}
return compressedStream;
_stream = seekableStream;
Header = localAceHeader;
}
return _stream.NotNull();
}
internal override Stream? GetRawStream() => _stream;
internal override string? FilePartName => Header.Filename;
internal override Stream GetCompressedStream()
{
if (_stream != null)
{
Stream compressedStream;
switch (Header.CompressionType)
{
case Headers.CompressionType.Stored:
compressedStream = new ReadOnlySubStream(
_stream,
Header.DataStartPosition,
Header.PackedSize
);
break;
default:
throw new NotSupportedException(
"CompressionMethod: " + Header.CompressionQuality
);
}
return compressedStream;
}
return _stream.NotNull();
}
internal override Stream? GetRawStream() => _stream;
}
}

(file name not shown)

@@ -7,28 +7,29 @@ using System.Threading.Tasks;
using SharpCompress.Common.Arj;
using SharpCompress.Readers;
namespace SharpCompress.Common.Ace;
public class AceVolume : Volume
namespace SharpCompress.Common.Ace
{
public AceVolume(Stream stream, ReaderOptions readerOptions, int index = 0)
: base(stream, readerOptions, index) { }
public override bool IsFirstVolume
public class AceVolume : Volume
{
get { return true; }
}
public AceVolume(Stream stream, ReaderOptions readerOptions, int index = 0)
: base(stream, readerOptions, index) { }
/// <summary>
/// ArjArchive is part of a multi-part archive.
/// </summary>
public override bool IsMultiVolume
{
get { return false; }
}
public override bool IsFirstVolume
{
get { return true; }
}
internal IEnumerable<AceFilePart> GetVolumeFileParts()
{
return new List<AceFilePart>();
/// <summary>
/// ArjArchive is part of a multi-part archive.
/// </summary>
public override bool IsMultiVolume
{
get { return false; }
}
internal IEnumerable<AceFilePart> GetVolumeFileParts()
{
return new List<AceFilePart>();
}
}
}

(file name not shown)

@@ -7,168 +7,169 @@ using System.Threading.Tasks;
using System.Xml.Linq;
using SharpCompress.Common.Arc;
namespace SharpCompress.Common.Ace.Headers;
/// <summary>
/// ACE file entry header
/// </summary>
public sealed partial class AceFileHeader : AceHeader
namespace SharpCompress.Common.Ace.Headers
{
public long DataStartPosition { get; private set; }
public long PackedSize { get; set; }
public long OriginalSize { get; set; }
public DateTime DateTime { get; set; }
public int Attributes { get; set; }
public uint Crc32 { get; set; }
public CompressionType CompressionType { get; set; }
public CompressionQuality CompressionQuality { get; set; }
public ushort Parameters { get; set; }
public string Filename { get; set; } = string.Empty;
public List<byte> Comment { get; set; } = new();
/// <summary>
/// File data offset in the archive
/// ACE file entry header
/// </summary>
public ulong DataOffset { get; set; }
public bool IsDirectory => (Attributes & 0x10) != 0;
public bool IsContinuedFromPrev =>
(HeaderFlags & SharpCompress.Common.Ace.Headers.HeaderFlags.CONTINUED_PREV) != 0;
public bool IsContinuedToNext =>
(HeaderFlags & SharpCompress.Common.Ace.Headers.HeaderFlags.CONTINUED_NEXT) != 0;
public int DictionarySize
public sealed partial class AceFileHeader : AceHeader
{
get
public long DataStartPosition { get; private set; }
public long PackedSize { get; set; }
public long OriginalSize { get; set; }
public DateTime DateTime { get; set; }
public int Attributes { get; set; }
public uint Crc32 { get; set; }
public CompressionType CompressionType { get; set; }
public CompressionQuality CompressionQuality { get; set; }
public ushort Parameters { get; set; }
public string Filename { get; set; } = string.Empty;
public List<byte> Comment { get; set; } = new();
/// <summary>
/// File data offset in the archive
/// </summary>
public ulong DataOffset { get; set; }
public bool IsDirectory => (Attributes & 0x10) != 0;
public bool IsContinuedFromPrev =>
(HeaderFlags & SharpCompress.Common.Ace.Headers.HeaderFlags.CONTINUED_PREV) != 0;
public bool IsContinuedToNext =>
(HeaderFlags & SharpCompress.Common.Ace.Headers.HeaderFlags.CONTINUED_NEXT) != 0;
public int DictionarySize
{
int bits = Parameters & 0x0F;
return bits < 10 ? 1024 : 1 << bits;
}
}
public AceFileHeader(IArchiveEncoding archiveEncoding)
: base(archiveEncoding, AceHeaderType.FILE) { }
/// <summary>
/// Reads the next file entry header from the stream.
/// Returns null if no more entries or end of archive.
/// Supports both ACE 1.0 and ACE 2.0 formats.
/// </summary>
public override AceHeader? Read(Stream stream)
{
var headerData = ReadHeader(stream);
if (headerData.Length == 0)
{
return null;
}
int offset = 0;
// Header type (1 byte)
HeaderType = headerData[offset++];
// Skip recovery record headers (ACE 2.0 feature)
if (HeaderType == (byte)SharpCompress.Common.Ace.Headers.AceHeaderType.RECOVERY32)
{
// Skip to next header
return null;
}
if (HeaderType != (byte)SharpCompress.Common.Ace.Headers.AceHeaderType.FILE)
{
// Unknown header type - skip
return null;
}
// Header flags (2 bytes)
HeaderFlags = BitConverter.ToUInt16(headerData, offset);
offset += 2;
// Packed size (4 bytes)
PackedSize = BitConverter.ToUInt32(headerData, offset);
offset += 4;
// Original size (4 bytes)
OriginalSize = BitConverter.ToUInt32(headerData, offset);
offset += 4;
// File date/time in DOS format (4 bytes)
var dosDateTime = BitConverter.ToUInt32(headerData, offset);
DateTime = ConvertDosDateTime(dosDateTime);
offset += 4;
// File attributes (4 bytes)
Attributes = (int)BitConverter.ToUInt32(headerData, offset);
offset += 4;
// CRC32 (4 bytes)
Crc32 = BitConverter.ToUInt32(headerData, offset);
offset += 4;
// Compression type (1 byte)
byte compressionType = headerData[offset++];
CompressionType = GetCompressionType(compressionType);
// Compression quality/parameter (1 byte)
byte compressionQuality = headerData[offset++];
CompressionQuality = GetCompressionQuality(compressionQuality);
// Parameters (2 bytes)
Parameters = BitConverter.ToUInt16(headerData, offset);
offset += 2;
// Reserved (2 bytes) - skip
offset += 2;
// Filename length (2 bytes)
var filenameLength = BitConverter.ToUInt16(headerData, offset);
offset += 2;
// Filename
if (offset + filenameLength <= headerData.Length)
{
Filename = ArchiveEncoding.Decode(headerData, offset, filenameLength);
offset += filenameLength;
}
// Handle comment if present
if ((HeaderFlags & SharpCompress.Common.Ace.Headers.HeaderFlags.COMMENT) != 0)
{
// Comment length (2 bytes)
if (offset + 2 <= headerData.Length)
get
{
ushort commentLength = BitConverter.ToUInt16(headerData, offset);
offset += 2 + commentLength; // Skip comment
int bits = Parameters & 0x0F;
return bits < 10 ? 1024 : 1 << bits;
}
}
// Store the data start position
DataStartPosition = stream.Position;
public AceFileHeader(IArchiveEncoding archiveEncoding)
: base(archiveEncoding, AceHeaderType.FILE) { }
return this;
/// <summary>
/// Reads the next file entry header from the stream.
/// Returns null if no more entries or end of archive.
/// Supports both ACE 1.0 and ACE 2.0 formats.
/// </summary>
public override AceHeader? Read(Stream stream)
{
var headerData = ReadHeader(stream);
if (headerData.Length == 0)
{
return null;
}
int offset = 0;
// Header type (1 byte)
HeaderType = headerData[offset++];
// Skip recovery record headers (ACE 2.0 feature)
if (HeaderType == (byte)SharpCompress.Common.Ace.Headers.AceHeaderType.RECOVERY32)
{
// Skip to next header
return null;
}
if (HeaderType != (byte)SharpCompress.Common.Ace.Headers.AceHeaderType.FILE)
{
// Unknown header type - skip
return null;
}
// Header flags (2 bytes)
HeaderFlags = BitConverter.ToUInt16(headerData, offset);
offset += 2;
// Packed size (4 bytes)
PackedSize = BitConverter.ToUInt32(headerData, offset);
offset += 4;
// Original size (4 bytes)
OriginalSize = BitConverter.ToUInt32(headerData, offset);
offset += 4;
// File date/time in DOS format (4 bytes)
var dosDateTime = BitConverter.ToUInt32(headerData, offset);
DateTime = ConvertDosDateTime(dosDateTime);
offset += 4;
// File attributes (4 bytes)
Attributes = (int)BitConverter.ToUInt32(headerData, offset);
offset += 4;
// CRC32 (4 bytes)
Crc32 = BitConverter.ToUInt32(headerData, offset);
offset += 4;
// Compression type (1 byte)
byte compressionType = headerData[offset++];
CompressionType = GetCompressionType(compressionType);
// Compression quality/parameter (1 byte)
byte compressionQuality = headerData[offset++];
CompressionQuality = GetCompressionQuality(compressionQuality);
// Parameters (2 bytes)
Parameters = BitConverter.ToUInt16(headerData, offset);
offset += 2;
// Reserved (2 bytes) - skip
offset += 2;
// Filename length (2 bytes)
var filenameLength = BitConverter.ToUInt16(headerData, offset);
offset += 2;
// Filename
if (offset + filenameLength <= headerData.Length)
{
Filename = ArchiveEncoding.Decode(headerData, offset, filenameLength);
offset += filenameLength;
}
// Handle comment if present
if ((HeaderFlags & SharpCompress.Common.Ace.Headers.HeaderFlags.COMMENT) != 0)
{
// Comment length (2 bytes)
if (offset + 2 <= headerData.Length)
{
ushort commentLength = BitConverter.ToUInt16(headerData, offset);
offset += 2 + commentLength; // Skip comment
}
}
// Store the data start position
DataStartPosition = stream.Position;
return this;
}
// ReadAsync moved to AceFileHeader.Async.cs
public CompressionType GetCompressionType(byte value) =>
value switch
{
0 => CompressionType.Stored,
1 => CompressionType.Lz77,
2 => CompressionType.Blocked,
_ => CompressionType.Unknown,
};
public CompressionQuality GetCompressionQuality(byte value) =>
value switch
{
0 => CompressionQuality.None,
1 => CompressionQuality.Fastest,
2 => CompressionQuality.Fast,
3 => CompressionQuality.Normal,
4 => CompressionQuality.Good,
5 => CompressionQuality.Best,
_ => CompressionQuality.Unknown,
};
}
// ReadAsync moved to AceFileHeader.Async.cs
public CompressionType GetCompressionType(byte value) =>
value switch
{
0 => CompressionType.Stored,
1 => CompressionType.Lz77,
2 => CompressionType.Blocked,
_ => CompressionType.Unknown,
};
public CompressionQuality GetCompressionQuality(byte value) =>
value switch
{
0 => CompressionQuality.None,
1 => CompressionQuality.Fastest,
2 => CompressionQuality.Fast,
3 => CompressionQuality.Normal,
4 => CompressionQuality.Good,
5 => CompressionQuality.Best,
_ => CompressionQuality.Unknown,
};
}
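The two helpers above are pure byte-to-enum translations with no side effects. A minimal usage sketch, assuming the default ArchiveEncoding satisfies IArchiveEncoding (construction here is illustrative, not the library's entry point):

var header = new AceFileHeader(new ArchiveEncoding());
Console.WriteLine(header.GetCompressionType(1));    // Lz77
Console.WriteLine(header.GetCompressionQuality(5)); // Best
Console.WriteLine(header.GetCompressionType(7));    // Unknown: unmapped values fall through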

View File

@@ -19,7 +19,7 @@ public abstract partial class AceHeader
{
// Read header CRC (2 bytes) and header size (2 bytes)
var headerBytes = new byte[4];
if (!await stream.ReadFullyAsync(headerBytes, 0, 4, cancellationToken))
if (await stream.ReadAsync(headerBytes, 0, 4, cancellationToken) != 4)
{
return Array.Empty<byte>();
}
@@ -33,7 +33,7 @@ public abstract partial class AceHeader
// Read the header data
var body = new byte[HeaderSize];
if (!await stream.ReadFullyAsync(body, 0, HeaderSize, cancellationToken))
if (await stream.ReadAsync(body, 0, HeaderSize, cancellationToken) != HeaderSize)
{
return Array.Empty<byte>();
}
@@ -59,7 +59,7 @@ public abstract partial class AceHeader
)
{
var bytes = new byte[14];
if (!await stream.ReadFullyAsync(bytes, 0, 14, cancellationToken))
if (await stream.ReadAsync(bytes, 0, 14, cancellationToken) != 14)
{
return false;
}
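One caveat worth noting around this hunk: Stream.ReadAsync may legally return fewer than the requested bytes even when more data is coming, so a single-call length comparison can reject valid input on slow or chunked streams. A minimal read-to-length helper, as a hedged sketch (the name TryReadExactAsync is hypothetical):

private static async ValueTask<bool> TryReadExactAsync(
    Stream stream,
    byte[] buffer,
    int offset,
    int count,
    CancellationToken cancellationToken
)
{
    var total = 0;
    while (total < count)
    {
        var read = await stream
            .ReadAsync(buffer, offset + total, count - total, cancellationToken)
            .ConfigureAwait(false);
        if (read == 0)
        {
            return false; // stream ended before count bytes arrived
        }
        total += read;
    }
    return true;
}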

View File

@@ -5,152 +5,153 @@ using System.Threading.Tasks;
using SharpCompress.Common.Arj.Headers;
using SharpCompress.Crypto;
namespace SharpCompress.Common.Ace.Headers;
/// <summary>
/// Header type constants
/// </summary>
public enum AceHeaderType
namespace SharpCompress.Common.Ace.Headers
{
MAIN = 0,
FILE = 1,
RECOVERY32 = 2,
RECOVERY64A = 3,
RECOVERY64B = 4,
}
public abstract partial class AceHeader
{
// ACE signature: bytes at offset 7 should be "**ACE**"
private static readonly byte[] AceSignature =
[
(byte)'*',
(byte)'*',
(byte)'A',
(byte)'C',
(byte)'E',
(byte)'*',
(byte)'*',
];
public AceHeader(IArchiveEncoding archiveEncoding, AceHeaderType type)
/// <summary>
/// Header type constants
/// </summary>
public enum AceHeaderType
{
AceHeaderType = type;
ArchiveEncoding = archiveEncoding;
MAIN = 0,
FILE = 1,
RECOVERY32 = 2,
RECOVERY64A = 3,
RECOVERY64B = 4,
}
public IArchiveEncoding ArchiveEncoding { get; }
public AceHeaderType AceHeaderType { get; }
public ushort HeaderFlags { get; set; }
public ushort HeaderCrc { get; set; }
public ushort HeaderSize { get; set; }
public byte HeaderType { get; set; }
public bool IsFileEncrypted =>
(HeaderFlags & SharpCompress.Common.Ace.Headers.HeaderFlags.FILE_ENCRYPTED) != 0;
public bool Is64Bit =>
(HeaderFlags & SharpCompress.Common.Ace.Headers.HeaderFlags.MEMORY_64BIT) != 0;
public bool IsSolid =>
(HeaderFlags & SharpCompress.Common.Ace.Headers.HeaderFlags.SOLID_MAIN) != 0;
public bool IsMultiVolume =>
(HeaderFlags & SharpCompress.Common.Ace.Headers.HeaderFlags.MULTIVOLUME) != 0;
public abstract AceHeader? Read(Stream reader);
// Async methods moved to AceHeader.Async.cs
public byte[] ReadHeader(Stream stream)
public abstract partial class AceHeader
{
// Read header CRC (2 bytes) and header size (2 bytes)
var headerBytes = new byte[4];
if (!stream.ReadFully(headerBytes))
// ACE signature: bytes at offset 7 should be "**ACE**"
private static readonly byte[] AceSignature =
[
(byte)'*',
(byte)'*',
(byte)'A',
(byte)'C',
(byte)'E',
(byte)'*',
(byte)'*',
];
public AceHeader(IArchiveEncoding archiveEncoding, AceHeaderType type)
{
return Array.Empty<byte>();
AceHeaderType = type;
ArchiveEncoding = archiveEncoding;
}
HeaderCrc = BitConverter.ToUInt16(headerBytes, 0); // CRC for validation
HeaderSize = BitConverter.ToUInt16(headerBytes, 2);
if (HeaderSize == 0)
public IArchiveEncoding ArchiveEncoding { get; }
public AceHeaderType AceHeaderType { get; }
public ushort HeaderFlags { get; set; }
public ushort HeaderCrc { get; set; }
public ushort HeaderSize { get; set; }
public byte HeaderType { get; set; }
public bool IsFileEncrypted =>
(HeaderFlags & SharpCompress.Common.Ace.Headers.HeaderFlags.FILE_ENCRYPTED) != 0;
public bool Is64Bit =>
(HeaderFlags & SharpCompress.Common.Ace.Headers.HeaderFlags.MEMORY_64BIT) != 0;
public bool IsSolid =>
(HeaderFlags & SharpCompress.Common.Ace.Headers.HeaderFlags.SOLID_MAIN) != 0;
public bool IsMultiVolume =>
(HeaderFlags & SharpCompress.Common.Ace.Headers.HeaderFlags.MULTIVOLUME) != 0;
public abstract AceHeader? Read(Stream reader);
// Async methods moved to AceHeader.Async.cs
public byte[] ReadHeader(Stream stream)
{
return Array.Empty<byte>();
// Read header CRC (2 bytes) and header size (2 bytes)
var headerBytes = new byte[4];
if (stream.Read(headerBytes, 0, 4) != 4)
{
return Array.Empty<byte>();
}
HeaderCrc = BitConverter.ToUInt16(headerBytes, 0); // CRC for validation
HeaderSize = BitConverter.ToUInt16(headerBytes, 2);
if (HeaderSize == 0)
{
return Array.Empty<byte>();
}
// Read the header data
var body = new byte[HeaderSize];
if (stream.Read(body, 0, HeaderSize) != HeaderSize)
{
return Array.Empty<byte>();
}
// Verify CRC
var checksum = AceCrc.AceCrc16(body);
if (checksum != HeaderCrc)
{
throw new InvalidDataException("Header checksum is invalid");
}
return body;
}
// Read the header data
var body = new byte[HeaderSize];
if (!stream.ReadFully(body))
public static bool IsArchive(Stream stream)
{
return Array.Empty<byte>();
}
// Verify CRC
var checksum = AceCrc.AceCrc16(body);
if (checksum != HeaderCrc)
{
throw new InvalidDataException("Header checksum is invalid");
}
return body;
}
public static bool IsArchive(Stream stream)
{
// ACE files have a specific signature
// First two bytes are typically 0x60 0xEA (signature bytes)
// At offset 7, there should be "**ACE**" (7 bytes)
var bytes = new byte[14];
if (stream.Read(bytes, 0, 14) != 14)
{
return false;
}
// Check for "**ACE**" at offset 7
return CheckMagicBytes(bytes, 7);
}
protected static bool CheckMagicBytes(byte[] headerBytes, int offset)
{
// Check for "**ACE**" at specified offset
for (int i = 0; i < AceSignature.Length; i++)
{
if (headerBytes[offset + i] != AceSignature[i])
// ACE files have a specific signature
// First two bytes are typically 0x60 0xEA (signature bytes)
// At offset 7, there should be "**ACE**" (7 bytes)
var bytes = new byte[14];
if (stream.Read(bytes, 0, 14) != 14)
{
return false;
}
// Check for "**ACE**" at offset 7
return CheckMagicBytes(bytes, 7);
}
return true;
}
protected DateTime ConvertDosDateTime(uint dosDateTime)
{
try
protected static bool CheckMagicBytes(byte[] headerBytes, int offset)
{
int second = (int)(dosDateTime & 0x1F) * 2;
int minute = (int)((dosDateTime >> 5) & 0x3F);
int hour = (int)((dosDateTime >> 11) & 0x1F);
int day = (int)((dosDateTime >> 16) & 0x1F);
int month = (int)((dosDateTime >> 21) & 0x0F);
int year = (int)((dosDateTime >> 25) & 0x7F) + 1980;
// Check for "**ACE**" at specified offset
for (int i = 0; i < AceSignature.Length; i++)
{
if (headerBytes[offset + i] != AceSignature[i])
{
return false;
}
}
return true;
}
if (
day < 1
|| day > 31
|| month < 1
|| month > 12
|| hour > 23
|| minute > 59
|| second > 59
)
protected DateTime ConvertDosDateTime(uint dosDateTime)
{
try
{
int second = (int)(dosDateTime & 0x1F) * 2;
int minute = (int)((dosDateTime >> 5) & 0x3F);
int hour = (int)((dosDateTime >> 11) & 0x1F);
int day = (int)((dosDateTime >> 16) & 0x1F);
int month = (int)((dosDateTime >> 21) & 0x0F);
int year = (int)((dosDateTime >> 25) & 0x7F) + 1980;
if (
day < 1
|| day > 31
|| month < 1
|| month > 12
|| hour > 23
|| minute > 59
|| second > 59
)
{
return DateTime.MinValue;
}
return new DateTime(year, month, day, hour, minute, second);
}
catch
{
return DateTime.MinValue;
}
return new DateTime(year, month, day, hour, minute, second);
}
catch
{
return DateTime.MinValue;
}
}
}
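To make the bit layout decoded by ConvertDosDateTime concrete, here is a hedged worked example; the timestamp is hand-built for illustration and round-trips through the shifts above:

// Pack 2024-06-15 12:30:42 into the DOS layout decoded above.
uint dos =
    ((2024u - 1980u) << 25) // year offset from 1980, 7 bits
    | (6u << 21)            // month, 4 bits
    | (15u << 16)           // day, 5 bits
    | (12u << 11)           // hour, 5 bits
    | (30u << 5)            // minute, 6 bits
    | (42u / 2);            // seconds stored in 2-second units, 5 bits
// ConvertDosDateTime(dos) == new DateTime(2024, 6, 15, 12, 30, 42)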

View File

@@ -8,93 +8,94 @@ using SharpCompress.Common.Ace.Headers;
using SharpCompress.Common.Zip.Headers;
using SharpCompress.Crypto;
namespace SharpCompress.Common.Ace.Headers;
/// <summary>
/// ACE main archive header
/// </summary>
public sealed partial class AceMainHeader : AceHeader
namespace SharpCompress.Common.Ace.Headers
{
public byte ExtractVersion { get; set; }
public byte CreatorVersion { get; set; }
public HostOS HostOS { get; set; }
public byte VolumeNumber { get; set; }
public DateTime DateTime { get; set; }
public string Advert { get; set; } = string.Empty;
public List<byte> Comment { get; set; } = new();
public byte AceVersion { get; private set; }
public AceMainHeader(IArchiveEncoding archiveEncoding)
: base(archiveEncoding, AceHeaderType.MAIN) { }
/// <summary>
/// Reads the main archive header from the stream.
/// Returns the header if this is a valid ACE archive.
/// Supports both ACE 1.0 and ACE 2.0 formats.
/// </summary>
public override AceHeader? Read(Stream stream)
/// <summary>
/// ACE main archive header
/// </summary>
public sealed partial class AceMainHeader : AceHeader
{
var headerData = ReadHeader(stream);
if (headerData.Length == 0)
public byte ExtractVersion { get; set; }
public byte CreatorVersion { get; set; }
public HostOS HostOS { get; set; }
public byte VolumeNumber { get; set; }
public DateTime DateTime { get; set; }
public string Advert { get; set; } = string.Empty;
public List<byte> Comment { get; set; } = new();
public byte AceVersion { get; private set; }
public AceMainHeader(IArchiveEncoding archiveEncoding)
: base(archiveEncoding, AceHeaderType.MAIN) { }
/// <summary>
/// Reads the main archive header from the stream.
/// Returns the header if this is a valid ACE archive.
/// Supports both ACE 1.0 and ACE 2.0 formats.
/// </summary>
public override AceHeader? Read(Stream stream)
{
return null;
}
int offset = 0;
// Header type should be 0 for main header
if (headerData[offset++] != HeaderType)
{
return null;
}
// Header flags (2 bytes)
HeaderFlags = BitConverter.ToUInt16(headerData, offset);
offset += 2;
// Skip signature "**ACE**" (7 bytes)
if (!CheckMagicBytes(headerData, offset))
{
throw new InvalidDataException("Invalid ACE archive signature.");
}
offset += 7;
// ACE version (1 byte) - 10 for ACE 1.0, 20 for ACE 2.0
AceVersion = headerData[offset++];
ExtractVersion = headerData[offset++];
// Host OS (1 byte)
if (offset < headerData.Length)
{
var hostOsByte = headerData[offset++];
HostOS = hostOsByte <= 11 ? (HostOS)hostOsByte : HostOS.Unknown;
}
// Volume number (1 byte)
VolumeNumber = headerData[offset++];
// Creation date/time (4 bytes)
var dosDateTime = BitConverter.ToUInt32(headerData, offset);
DateTime = ConvertDosDateTime(dosDateTime);
offset += 4;
// Reserved fields (8 bytes)
if (offset + 8 <= headerData.Length)
{
offset += 8;
}
// Skip additional fields based on flags
// Handle comment if present
if ((HeaderFlags & SharpCompress.Common.Ace.Headers.HeaderFlags.COMMENT) != 0)
{
if (offset + 2 <= headerData.Length)
var headerData = ReadHeader(stream);
if (headerData.Length == 0)
{
ushort commentLength = BitConverter.ToUInt16(headerData, offset);
offset += 2 + commentLength;
return null;
}
int offset = 0;
// Header type should be 0 for main header
if (headerData[offset++] != HeaderType)
{
return null;
}
// Header flags (2 bytes)
HeaderFlags = BitConverter.ToUInt16(headerData, offset);
offset += 2;
// Skip signature "**ACE**" (7 bytes)
if (!CheckMagicBytes(headerData, offset))
{
throw new InvalidDataException("Invalid ACE archive signature.");
}
offset += 7;
// ACE version (1 byte) - 10 for ACE 1.0, 20 for ACE 2.0
AceVersion = headerData[offset++];
ExtractVersion = headerData[offset++];
// Host OS (1 byte)
if (offset < headerData.Length)
{
var hostOsByte = headerData[offset++];
HostOS = hostOsByte <= 11 ? (HostOS)hostOsByte : HostOS.Unknown;
}
// Volume number (1 byte)
VolumeNumber = headerData[offset++];
// Creation date/time (4 bytes)
var dosDateTime = BitConverter.ToUInt32(headerData, offset);
DateTime = ConvertDosDateTime(dosDateTime);
offset += 4;
// Reserved fields (8 bytes)
if (offset + 8 <= headerData.Length)
{
offset += 8;
}
// Skip additional fields based on flags
// Handle comment if present
if ((HeaderFlags & SharpCompress.Common.Ace.Headers.HeaderFlags.COMMENT) != 0)
{
if (offset + 2 <= headerData.Length)
{
ushort commentLength = BitConverter.ToUInt16(headerData, offset);
offset += 2 + commentLength;
}
}
return this;
}
return this;
// ReadAsync moved to AceMainHeader.Async.cs
}
// ReadAsync moved to AceMainHeader.Async.cs
}
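A hedged end-to-end sketch of using this header type (file path and encoding construction are illustrative; the stream is assumed to be positioned at the start of the archive):

using var stream = File.OpenRead("example.ace");
if (new AceMainHeader(new ArchiveEncoding()).Read(stream) is AceMainHeader main)
{
    Console.WriteLine($"ACE version {main.AceVersion}, volume {main.VolumeNumber}");
}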

View File

@@ -1,15 +1,16 @@
namespace SharpCompress.Common.Ace.Headers;
/// <summary>
/// Compression quality
/// </summary>
public enum CompressionQuality
namespace SharpCompress.Common.Ace.Headers
{
None,
Fastest,
Fast,
Normal,
Good,
Best,
Unknown,
/// <summary>
/// Compression quality
/// </summary>
public enum CompressionQuality
{
None,
Fastest,
Fast,
Normal,
Good,
Best,
Unknown,
}
}

View File

@@ -1,12 +1,13 @@
namespace SharpCompress.Common.Ace.Headers;
/// <summary>
/// Compression types
/// </summary>
public enum CompressionType
namespace SharpCompress.Common.Ace.Headers
{
Stored,
Lz77,
Blocked,
Unknown,
/// <summary>
/// Compression types
/// </summary>
public enum CompressionType
{
Stored,
Lz77,
Blocked,
Unknown,
}
}

View File

@@ -1,32 +1,33 @@
namespace SharpCompress.Common.Ace.Headers;
/// <summary>
/// Header flags (main + file, overlapping meanings)
/// </summary>
public static class HeaderFlags
namespace SharpCompress.Common.Ace.Headers
{
// Shared / low bits
public const ushort ADDSIZE = 0x0001; // extra size field present
public const ushort COMMENT = 0x0002; // comment present
public const ushort MEMORY_64BIT = 0x0004;
public const ushort AV_STRING = 0x0008; // AV string present
public const ushort SOLID = 0x0010; // solid file
public const ushort LOCKED = 0x0020;
public const ushort PROTECTED = 0x0040;
/// <summary>
/// Header flags (main + file, overlapping meanings)
/// </summary>
public static class HeaderFlags
{
// Shared / low bits
public const ushort ADDSIZE = 0x0001; // extra size field present
public const ushort COMMENT = 0x0002; // comment present
public const ushort MEMORY_64BIT = 0x0004;
public const ushort AV_STRING = 0x0008; // AV string present
public const ushort SOLID = 0x0010; // solid file
public const ushort LOCKED = 0x0020;
public const ushort PROTECTED = 0x0040;
// Main header specific
public const ushort V20FORMAT = 0x0100;
public const ushort SFX = 0x0200;
public const ushort LIMITSFXJR = 0x0400;
public const ushort MULTIVOLUME = 0x0800;
public const ushort ADVERT = 0x1000;
public const ushort RECOVERY = 0x2000;
public const ushort LOCKED_MAIN = 0x4000;
public const ushort SOLID_MAIN = 0x8000;
// Main header specific
public const ushort V20FORMAT = 0x0100;
public const ushort SFX = 0x0200;
public const ushort LIMITSFXJR = 0x0400;
public const ushort MULTIVOLUME = 0x0800;
public const ushort ADVERT = 0x1000;
public const ushort RECOVERY = 0x2000;
public const ushort LOCKED_MAIN = 0x4000;
public const ushort SOLID_MAIN = 0x8000;
// File header specific (same bits, different meaning)
public const ushort NTSECURITY = 0x0400;
public const ushort CONTINUED_PREV = 0x1000;
public const ushort CONTINUED_NEXT = 0x2000;
public const ushort FILE_ENCRYPTED = 0x4000; // file encrypted (file header)
// File header specific (same bits, different meaning)
public const ushort NTSECURITY = 0x0400;
public const ushort CONTINUED_PREV = 0x1000;
public const ushort CONTINUED_NEXT = 0x2000;
public const ushort FILE_ENCRYPTED = 0x4000; // file encrypted (file header)
}
}
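Because the same bit carries different meanings in main and file headers (0x1000 is ADVERT in a main header but CONTINUED_PREV in a file header), flags must be interpreted against the header type being read. A hedged sketch:

ushort mainFlags = HeaderFlags.SOLID_MAIN | HeaderFlags.MULTIVOLUME;
bool isSolidArchive = (mainFlags & HeaderFlags.SOLID_MAIN) != 0; // true
ushort fileFlags = HeaderFlags.FILE_ENCRYPTED | HeaderFlags.COMMENT;
bool isEncrypted = (fileFlags & HeaderFlags.FILE_ENCRYPTED) != 0; // true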

View File

@@ -1,21 +1,22 @@
namespace SharpCompress.Common.Ace.Headers;
/// <summary>
/// Host OS type
/// </summary>
public enum HostOS
namespace SharpCompress.Common.Ace.Headers
{
MsDos = 0,
Os2,
Windows,
Unix,
MacOs,
WinNt,
Primos,
AppleGs,
Atari,
Vax,
Amiga,
Next,
Unknown,
/// <summary>
/// Host OS type
/// </summary>
public enum HostOS
{
MsDos = 0,
Os2,
Windows,
Unix,
MacOs,
WinNt,
Primos,
AppleGs,
Atari,
Vax,
Amiga,
Next,
Unknown,
}
}

View File

@@ -7,53 +7,54 @@ using System.Threading.Tasks;
using SharpCompress.Common.GZip;
using SharpCompress.Common.Tar;
namespace SharpCompress.Common.Arc;
public class ArcEntry : Entry
namespace SharpCompress.Common.Arc
{
private readonly ArcFilePart? _filePart;
internal ArcEntry(ArcFilePart? filePart)
public class ArcEntry : Entry
{
_filePart = filePart;
}
private readonly ArcFilePart? _filePart;
public override long Crc
{
get
internal ArcEntry(ArcFilePart? filePart)
{
if (_filePart == null)
{
return 0;
}
return _filePart.Header.Crc16;
_filePart = filePart;
}
public override long Crc
{
get
{
if (_filePart == null)
{
return 0;
}
return _filePart.Header.Crc16;
}
}
public override string? Key => _filePart?.Header.Name;
public override string? LinkTarget => null;
public override long CompressedSize => _filePart?.Header.CompressedSize ?? 0;
public override CompressionType CompressionType =>
_filePart?.Header.CompressionMethod ?? CompressionType.Unknown;
public override long Size => throw new NotImplementedException();
public override DateTime? LastModifiedTime => null;
public override DateTime? CreatedTime => null;
public override DateTime? LastAccessedTime => null;
public override DateTime? ArchivedTime => null;
public override bool IsEncrypted => false;
public override bool IsDirectory => false;
public override bool IsSplitAfter => false;
internal override IEnumerable<FilePart> Parts => _filePart.Empty();
}
public override string? Key => _filePart?.Header.Name;
public override string? LinkTarget => null;
public override long CompressedSize => _filePart?.Header.CompressedSize ?? 0;
public override CompressionType CompressionType =>
_filePart?.Header.CompressionMethod ?? CompressionType.Unknown;
public override long Size => throw new NotImplementedException();
public override DateTime? LastModifiedTime => null;
public override DateTime? CreatedTime => null;
public override DateTime? LastAccessedTime => null;
public override DateTime? ArchivedTime => null;
public override bool IsEncrypted => false;
public override bool IsDirectory => false;
public override bool IsSplitAfter => false;
internal override IEnumerable<FilePart> Parts => _filePart.Empty();
}

View File

@@ -2,93 +2,75 @@ using System;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
namespace SharpCompress.Common.Arc;
public class ArcEntryHeader
namespace SharpCompress.Common.Arc
{
public IArchiveEncoding ArchiveEncoding { get; }
public CompressionType CompressionMethod { get; private set; }
public string? Name { get; private set; }
public long CompressedSize { get; private set; }
public DateTime DateTime { get; private set; }
public int Crc16 { get; private set; }
public long OriginalSize { get; private set; }
public long DataStartPosition { get; private set; }
public ArcEntryHeader(IArchiveEncoding archiveEncoding)
public class ArcEntryHeader
{
this.ArchiveEncoding = archiveEncoding;
}
public IArchiveEncoding ArchiveEncoding { get; }
public CompressionType CompressionMethod { get; private set; }
public string? Name { get; private set; }
public long CompressedSize { get; private set; }
public DateTime DateTime { get; private set; }
public int Crc16 { get; private set; }
public long OriginalSize { get; private set; }
public long DataStartPosition { get; private set; }
public ArcEntryHeader? ReadHeader(Stream stream)
{
byte[] headerBytes = new byte[29];
if (stream.Read(headerBytes, 0, headerBytes.Length) != headerBytes.Length)
public ArcEntryHeader(IArchiveEncoding archiveEncoding)
{
return null;
this.ArchiveEncoding = archiveEncoding;
}
DataStartPosition = stream.Position;
return LoadFrom(headerBytes);
}
public async ValueTask<ArcEntryHeader?> ReadHeaderAsync(
Stream stream,
CancellationToken cancellationToken = default
)
{
byte[] headerBytes = new byte[29];
if (
await stream.ReadAsync(headerBytes, 0, headerBytes.Length, cancellationToken)
!= headerBytes.Length
)
public ArcEntryHeader? ReadHeader(Stream stream)
{
return null;
byte[] headerBytes = new byte[29];
if (stream.Read(headerBytes, 0, headerBytes.Length) != headerBytes.Length)
{
return null;
}
DataStartPosition = stream.Position;
return LoadFrom(headerBytes);
}
DataStartPosition = stream.Position;
return LoadFrom(headerBytes);
}
public ArcEntryHeader LoadFrom(byte[] headerBytes)
{
CompressionMethod = GetCompressionType(headerBytes[1]);
// Read name
int nameEnd = Array.IndexOf(headerBytes, (byte)0, 1); // Find null terminator
Name = Encoding.UTF8.GetString(headerBytes, 2, nameEnd > 0 ? nameEnd - 2 : 12);
int offset = 15;
CompressedSize = BitConverter.ToUInt32(headerBytes, offset);
offset += 4;
uint rawDateTime = BitConverter.ToUInt32(headerBytes, offset);
DateTime = ConvertToDateTime(rawDateTime);
offset += 4;
Crc16 = BitConverter.ToUInt16(headerBytes, offset);
offset += 2;
OriginalSize = BitConverter.ToUInt32(headerBytes, offset);
return this;
}
private CompressionType GetCompressionType(byte value)
{
return value switch
public ArcEntryHeader LoadFrom(byte[] headerBytes)
{
1 or 2 => CompressionType.None,
3 => CompressionType.Packed,
4 => CompressionType.Squeezed,
5 or 6 or 7 or 8 => CompressionType.Crunched,
9 => CompressionType.Squashed,
10 => CompressionType.Crushed,
11 => CompressionType.Distilled,
_ => CompressionType.Unknown,
};
}
CompressionMethod = GetCompressionType(headerBytes[1]);
public static DateTime ConvertToDateTime(long rawDateTime)
{
// Convert Unix timestamp to DateTime (UTC)
return DateTimeOffset.FromUnixTimeSeconds(rawDateTime).UtcDateTime;
// Read name
int nameEnd = Array.IndexOf(headerBytes, (byte)0, 1); // Find null terminator
Name = Encoding.UTF8.GetString(headerBytes, 2, nameEnd > 0 ? nameEnd - 2 : 12);
int offset = 15;
CompressedSize = BitConverter.ToUInt32(headerBytes, offset);
offset += 4;
uint rawDateTime = BitConverter.ToUInt32(headerBytes, offset);
DateTime = ConvertToDateTime(rawDateTime);
offset += 4;
Crc16 = BitConverter.ToUInt16(headerBytes, offset);
offset += 2;
OriginalSize = BitConverter.ToUInt32(headerBytes, offset);
return this;
}
private CompressionType GetCompressionType(byte value)
{
return value switch
{
1 or 2 => CompressionType.None,
3 => CompressionType.Packed,
4 => CompressionType.Squeezed,
5 or 6 or 7 or 8 => CompressionType.Crunched,
9 => CompressionType.Squashed,
10 => CompressionType.Crushed,
11 => CompressionType.Distilled,
_ => CompressionType.Unknown,
};
}
public static DateTime ConvertToDateTime(long rawDateTime)
{
// Convert Unix timestamp to DateTime (UTC)
return DateTimeOffset.FromUnixTimeSeconds(rawDateTime).UtcDateTime;
}
}
}
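A hedged example of the timestamp helper above; the epoch value is arbitrary:

var modified = ArcEntryHeader.ConvertToDateTime(946_684_800); // 2000-01-01T00:00:00Z
Console.WriteLine(modified.Kind); // Utc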

View File

@@ -1,58 +0,0 @@
using System;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.Compressors.Lzw;
using SharpCompress.Compressors.RLE90;
using SharpCompress.Compressors.Squeezed;
using SharpCompress.IO;
namespace SharpCompress.Common.Arc;
public partial class ArcFilePart
{
internal override async ValueTask<Stream?> GetCompressedStreamAsync(
CancellationToken cancellationToken = default
)
{
if (_stream != null)
{
Stream compressedStream;
switch (Header.CompressionMethod)
{
case CompressionType.None:
compressedStream = new ReadOnlySubStream(
_stream,
Header.DataStartPosition,
Header.CompressedSize
);
break;
case CompressionType.Packed:
compressedStream = new RunLength90Stream(_stream, (int)Header.CompressedSize);
break;
case CompressionType.Squeezed:
compressedStream = await SqueezeStream.CreateAsync(
_stream,
(int)Header.CompressedSize,
cancellationToken
);
break;
case CompressionType.Crunched:
if (Header.OriginalSize > 128 * 1024)
{
throw new NotSupportedException(
"CompressionMethod: " + Header.CompressionMethod + " with size > 128KB"
);
}
compressedStream = new ArcLzwStream(_stream, (int)Header.CompressedSize, true);
break;
default:
throw new NotSupportedException(
"CompressionMethod: " + Header.CompressionMethod
);
}
return compressedStream;
}
return _stream;
}
}

View File

@@ -13,61 +13,71 @@ using SharpCompress.Compressors.RLE90;
using SharpCompress.Compressors.Squeezed;
using SharpCompress.IO;
namespace SharpCompress.Common.Arc;
public partial class ArcFilePart : FilePart
namespace SharpCompress.Common.Arc
{
private readonly Stream? _stream;
internal ArcFilePart(ArcEntryHeader localArcHeader, Stream? seekableStream)
: base(localArcHeader.ArchiveEncoding)
public class ArcFilePart : FilePart
{
_stream = seekableStream;
Header = localArcHeader;
}
private readonly Stream? _stream;
internal ArcEntryHeader Header { get; set; }
internal override string? FilePartName => Header.Name;
internal override Stream GetCompressedStream()
{
if (_stream != null)
internal ArcFilePart(ArcEntryHeader localArcHeader, Stream? seekableStream)
: base(localArcHeader.ArchiveEncoding)
{
Stream compressedStream;
switch (Header.CompressionMethod)
{
case CompressionType.None:
compressedStream = new ReadOnlySubStream(
_stream,
Header.DataStartPosition,
Header.CompressedSize
);
break;
case CompressionType.Packed:
compressedStream = new RunLength90Stream(_stream, (int)Header.CompressedSize);
break;
case CompressionType.Squeezed:
compressedStream = SqueezeStream.Create(_stream, (int)Header.CompressedSize);
break;
case CompressionType.Crunched:
if (Header.OriginalSize > 128 * 1024)
{
throw new NotSupportedException(
"CompressionMethod: " + Header.CompressionMethod + " with size > 128KB"
);
}
compressedStream = new ArcLzwStream(_stream, (int)Header.CompressedSize, true);
break;
default:
throw new NotSupportedException(
"CompressionMethod: " + Header.CompressionMethod
);
}
return compressedStream;
_stream = seekableStream;
Header = localArcHeader;
}
return _stream.NotNull();
}
internal override Stream? GetRawStream() => _stream;
internal ArcEntryHeader Header { get; set; }
internal override string? FilePartName => Header.Name;
internal override Stream GetCompressedStream()
{
if (_stream != null)
{
Stream compressedStream;
switch (Header.CompressionMethod)
{
case CompressionType.None:
compressedStream = new ReadOnlySubStream(
_stream,
Header.DataStartPosition,
Header.CompressedSize
);
break;
case CompressionType.Packed:
compressedStream = new RunLength90Stream(
_stream,
(int)Header.CompressedSize
);
break;
case CompressionType.Squeezed:
compressedStream = new SqueezeStream(_stream, (int)Header.CompressedSize);
break;
case CompressionType.Crunched:
if (Header.OriginalSize > 128 * 1024)
{
throw new NotSupportedException(
"CompressionMethod: "
+ Header.CompressionMethod
+ " with size > 128KB"
);
}
compressedStream = new ArcLzwStream(
_stream,
(int)Header.CompressedSize,
true
);
break;
default:
throw new NotSupportedException(
"CompressionMethod: " + Header.CompressionMethod
);
}
return compressedStream;
}
return _stream.NotNull();
}
internal override Stream? GetRawStream() => _stream;
}
}
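The dispatch above could equally be written as a switch expression; a hedged sketch (the Crunched size guard is omitted for brevity, and this is not the library's code):

Stream OpenEntryStream(Stream s, ArcEntryHeader h) =>
    h.CompressionMethod switch
    {
        CompressionType.None => new ReadOnlySubStream(s, h.DataStartPosition, h.CompressedSize),
        CompressionType.Packed => new RunLength90Stream(s, (int)h.CompressedSize),
        CompressionType.Squeezed => new SqueezeStream(s, (int)h.CompressedSize),
        CompressionType.Crunched => new ArcLzwStream(s, (int)h.CompressedSize, true),
        _ => throw new NotSupportedException("CompressionMethod: " + h.CompressionMethod),
    };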

View File

@@ -6,10 +6,11 @@ using System.Text;
using System.Threading.Tasks;
using SharpCompress.Readers;
namespace SharpCompress.Common.Arc;
public class ArcVolume : Volume
namespace SharpCompress.Common.Arc
{
public ArcVolume(Stream stream, ReaderOptions readerOptions, int index = 0)
: base(stream, readerOptions, index) { }
public class ArcVolume : Volume
{
public ArcVolume(Stream stream, ReaderOptions readerOptions, int index = 0)
: base(stream, readerOptions, index) { }
}
}

View File

@@ -6,52 +6,53 @@ using System.Threading.Tasks;
using SharpCompress.Common.Arc;
using SharpCompress.Common.Arj.Headers;
namespace SharpCompress.Common.Arj;
public class ArjEntry : Entry
namespace SharpCompress.Common.Arj
{
private readonly ArjFilePart _filePart;
internal ArjEntry(ArjFilePart filePart)
public class ArjEntry : Entry
{
_filePart = filePart;
}
private readonly ArjFilePart _filePart;
public override long Crc => _filePart.Header.OriginalCrc32;
public override string? Key => _filePart?.Header.Name;
public override string? LinkTarget => null;
public override long CompressedSize => _filePart?.Header.CompressedSize ?? 0;
public override CompressionType CompressionType
{
get
internal ArjEntry(ArjFilePart filePart)
{
if (_filePart.Header.CompressionMethod == CompressionMethod.Stored)
{
return CompressionType.None;
}
return CompressionType.ArjLZ77;
_filePart = filePart;
}
public override long Crc => _filePart.Header.OriginalCrc32;
public override string? Key => _filePart?.Header.Name;
public override string? LinkTarget => null;
public override long CompressedSize => _filePart?.Header.CompressedSize ?? 0;
public override CompressionType CompressionType
{
get
{
if (_filePart.Header.CompressionMethod == CompressionMethod.Stored)
{
return CompressionType.None;
}
return CompressionType.ArjLZ77;
}
}
public override long Size => _filePart?.Header.OriginalSize ?? 0;
public override DateTime? LastModifiedTime => _filePart.Header.DateTimeModified.DateTime;
public override DateTime? CreatedTime => _filePart.Header.DateTimeCreated.DateTime;
public override DateTime? LastAccessedTime => _filePart.Header.DateTimeAccessed.DateTime;
public override DateTime? ArchivedTime => null;
public override bool IsEncrypted => false;
public override bool IsDirectory => _filePart.Header.FileType == FileType.Directory;
public override bool IsSplitAfter => false;
internal override IEnumerable<FilePart> Parts => _filePart.Empty();
}
public override long Size => _filePart?.Header.OriginalSize ?? 0;
public override DateTime? LastModifiedTime => _filePart.Header.DateTimeModified.DateTime;
public override DateTime? CreatedTime => _filePart.Header.DateTimeCreated.DateTime;
public override DateTime? LastAccessedTime => _filePart.Header.DateTimeAccessed.DateTime;
public override DateTime? ArchivedTime => null;
public override bool IsEncrypted => false;
public override bool IsDirectory => _filePart.Header.FileType == FileType.Directory;
public override bool IsSplitAfter => false;
internal override IEnumerable<FilePart> Parts => _filePart.Empty();
}

View File

@@ -8,62 +8,65 @@ using SharpCompress.Common.Arj.Headers;
using SharpCompress.Compressors.Arj;
using SharpCompress.IO;
namespace SharpCompress.Common.Arj;
public class ArjFilePart : FilePart
namespace SharpCompress.Common.Arj
{
private readonly Stream _stream;
internal ArjLocalHeader Header { get; set; }
internal ArjFilePart(ArjLocalHeader localArjHeader, Stream seekableStream)
: base(localArjHeader.ArchiveEncoding)
public class ArjFilePart : FilePart
{
_stream = seekableStream;
Header = localArjHeader;
}
private readonly Stream _stream;
internal ArjLocalHeader Header { get; set; }
internal override string? FilePartName => Header.Name;
internal override Stream GetCompressedStream()
{
if (_stream != null)
internal ArjFilePart(ArjLocalHeader localArjHeader, Stream seekableStream)
: base(localArjHeader.ArchiveEncoding)
{
Stream compressedStream;
switch (Header.CompressionMethod)
{
case CompressionMethod.Stored:
compressedStream = new ReadOnlySubStream(
_stream,
Header.DataStartPosition,
Header.CompressedSize
);
break;
case CompressionMethod.CompressedMost:
case CompressionMethod.Compressed:
case CompressionMethod.CompressedFaster:
if (Header.OriginalSize > 128 * 1024)
{
throw new NotSupportedException(
"CompressionMethod: " + Header.CompressionMethod + " with size > 128KB"
);
}
compressedStream = new LhaStream<Lh7DecoderCfg>(
_stream,
(int)Header.OriginalSize
);
break;
case CompressionMethod.CompressedFastest:
compressedStream = new LHDecoderStream(_stream, (int)Header.OriginalSize);
break;
default:
throw new NotSupportedException(
"CompressionMethod: " + Header.CompressionMethod
);
}
return compressedStream;
_stream = seekableStream;
Header = localArjHeader;
}
return _stream.NotNull();
}
internal override Stream GetRawStream() => _stream;
internal override string? FilePartName => Header.Name;
internal override Stream GetCompressedStream()
{
if (_stream != null)
{
Stream compressedStream;
switch (Header.CompressionMethod)
{
case CompressionMethod.Stored:
compressedStream = new ReadOnlySubStream(
_stream,
Header.DataStartPosition,
Header.CompressedSize
);
break;
case CompressionMethod.CompressedMost:
case CompressionMethod.Compressed:
case CompressionMethod.CompressedFaster:
if (Header.OriginalSize > 128 * 1024)
{
throw new NotSupportedException(
"CompressionMethod: "
+ Header.CompressionMethod
+ " with size > 128KB"
);
}
compressedStream = new LhaStream<Lh7DecoderCfg>(
_stream,
(int)Header.OriginalSize
);
break;
case CompressionMethod.CompressedFastest:
compressedStream = new LHDecoderStream(_stream, (int)Header.OriginalSize);
break;
default:
throw new NotSupportedException(
"CompressionMethod: " + Header.CompressionMethod
);
}
return compressedStream;
}
return _stream.NotNull();
}
internal override Stream GetRawStream() => _stream;
}
}

View File

@@ -8,28 +8,29 @@ using SharpCompress.Common.Rar;
using SharpCompress.Common.Rar.Headers;
using SharpCompress.Readers;
namespace SharpCompress.Common.Arj;
public class ArjVolume : Volume
namespace SharpCompress.Common.Arj
{
public ArjVolume(Stream stream, ReaderOptions readerOptions, int index = 0)
: base(stream, readerOptions, index) { }
public override bool IsFirstVolume
public class ArjVolume : Volume
{
get { return true; }
}
public ArjVolume(Stream stream, ReaderOptions readerOptions, int index = 0)
: base(stream, readerOptions, index) { }
/// <summary>
/// Whether this ArjArchive spans multiple volumes (always false; multi-volume ARJ is not supported).
/// </summary>
public override bool IsMultiVolume
{
get { return false; }
}
public override bool IsFirstVolume
{
get { return true; }
}
internal IEnumerable<ArjFilePart> GetVolumeFileParts()
{
return new List<ArjFilePart>();
/// <summary>
/// Whether this ArjArchive spans multiple volumes (always false; multi-volume ARJ is not supported).
/// </summary>
public override bool IsMultiVolume
{
get { return false; }
}
internal IEnumerable<ArjFilePart> GetVolumeFileParts()
{
return new List<ArjFilePart>();
}
}
}

View File

@@ -8,153 +8,156 @@ using System.Threading.Tasks;
using SharpCompress.Common.Zip.Headers;
using SharpCompress.Crypto;
namespace SharpCompress.Common.Arj.Headers;
public enum ArjHeaderType
namespace SharpCompress.Common.Arj.Headers
{
MainHeader,
LocalHeader,
}
public abstract partial class ArjHeader
{
private const int FIRST_HDR_SIZE = 34;
private const ushort ARJ_MAGIC = 0xEA60;
public ArjHeader(ArjHeaderType type)
public enum ArjHeaderType
{
ArjHeaderType = type;
MainHeader,
LocalHeader,
}
public ArjHeaderType ArjHeaderType { get; }
public byte Flags { get; set; }
public FileType FileType { get; set; }
public abstract ArjHeader? Read(Stream reader);
// Async methods moved to ArjHeader.Async.cs
public byte[] ReadHeader(Stream stream)
public abstract partial class ArjHeader
{
// check for magic bytes
var magic = new byte[2];
if (stream.Read(magic) != 2)
private const int FIRST_HDR_SIZE = 34;
private const ushort ARJ_MAGIC = 0xEA60;
public ArjHeader(ArjHeaderType type)
{
return Array.Empty<byte>();
ArjHeaderType = type;
}
if (!CheckMagicBytes(magic))
public ArjHeaderType ArjHeaderType { get; }
public byte Flags { get; set; }
public FileType FileType { get; set; }
public abstract ArjHeader? Read(Stream reader);
// Async methods moved to ArjHeader.Async.cs
public byte[] ReadHeader(Stream stream)
{
throw new InvalidDataException("Not an ARJ file (wrong magic bytes)");
}
// read header_size
byte[] headerBytes = new byte[2];
if (stream.Read(headerBytes, 0, 2) != 2)
{
return Array.Empty<byte>();
}
var headerSize = (ushort)(headerBytes[0] | headerBytes[1] << 8);
if (headerSize < 1)
{
return Array.Empty<byte>();
}
var body = new byte[headerSize];
var read = stream.Read(body, 0, headerSize);
if (read < headerSize)
{
return Array.Empty<byte>();
}
byte[] crc = new byte[4];
read = stream.Read(crc, 0, 4);
var checksum = Crc32Stream.Compute(body);
// Validate the header CRC
if (checksum != BitConverter.ToUInt32(crc, 0))
{
throw new InvalidDataException("Header checksum is invalid");
}
return body;
}
// ReadHeaderAsync moved to ArjHeader.Async.cs
protected List<byte[]> ReadExtendedHeaders(Stream reader)
{
List<byte[]> extendedHeader = new List<byte[]>();
byte[] buffer = new byte[2];
while (true)
{
int bytesRead = reader.Read(buffer, 0, 2);
if (bytesRead < 2)
// check for magic bytes
var magic = new byte[2];
if (stream.Read(magic) != 2)
{
throw new EndOfStreamException(
"Unexpected end of stream while reading extended header size."
);
return Array.Empty<byte>();
}
var extHeaderSize = (ushort)(buffer[0] | (buffer[1] << 8));
if (extHeaderSize == 0)
if (!CheckMagicBytes(magic))
{
return extendedHeader;
throw new InvalidDataException("Not an ARJ file (wrong magic bytes)");
}
byte[] header = new byte[extHeaderSize];
bytesRead = reader.Read(header, 0, extHeaderSize);
if (bytesRead < extHeaderSize)
// read header_size
byte[] headerBytes = new byte[2];
if (stream.Read(headerBytes, 0, 2) != 2)
{
return Array.Empty<byte>();
}
var headerSize = (ushort)(headerBytes[0] | headerBytes[1] << 8);
if (headerSize < 1)
{
throw new EndOfStreamException(
"Unexpected end of stream while reading extended header data."
);
return Array.Empty<byte>();
}
var body = new byte[headerSize];
var read = stream.Read(body, 0, headerSize);
if (read < headerSize)
{
return Array.Empty<byte>();
}
byte[] crc = new byte[4];
bytesRead = reader.Read(crc, 0, 4);
if (bytesRead < 4)
{
throw new EndOfStreamException(
"Unexpected end of stream while reading extended header CRC."
);
}
var checksum = Crc32Stream.Compute(header);
read = stream.Read(crc, 0, 4);
var checksum = Crc32Stream.Compute(body);
// Validate the header CRC
if (checksum != BitConverter.ToUInt32(crc, 0))
{
throw new InvalidDataException("Extended header checksum is invalid");
throw new InvalidDataException("Header checksum is invalid");
}
return body;
}
// ReadHeaderAsync moved to ArjHeader.Async.cs
protected List<byte[]> ReadExtendedHeaders(Stream reader)
{
List<byte[]> extendedHeader = new List<byte[]>();
byte[] buffer = new byte[2];
while (true)
{
int bytesRead = reader.Read(buffer, 0, 2);
if (bytesRead < 2)
{
throw new EndOfStreamException(
"Unexpected end of stream while reading extended header size."
);
}
var extHeaderSize = (ushort)(buffer[0] | (buffer[1] << 8));
if (extHeaderSize == 0)
{
return extendedHeader;
}
byte[] header = new byte[extHeaderSize];
bytesRead = reader.Read(header, 0, extHeaderSize);
if (bytesRead < extHeaderSize)
{
throw new EndOfStreamException(
"Unexpected end of stream while reading extended header data."
);
}
byte[] crc = new byte[4];
bytesRead = reader.Read(crc, 0, 4);
if (bytesRead < 4)
{
throw new EndOfStreamException(
"Unexpected end of stream while reading extended header CRC."
);
}
var checksum = Crc32Stream.Compute(header);
if (checksum != BitConverter.ToUInt32(crc, 0))
{
throw new InvalidDataException("Extended header checksum is invalid");
}
extendedHeader.Add(header);
}
}
// Flag helpers
public bool IsGabled => (Flags & 0x01) != 0;
public bool IsAnsiPage => (Flags & 0x02) != 0;
public bool IsVolume => (Flags & 0x04) != 0;
public bool IsArjProtected => (Flags & 0x08) != 0;
public bool IsPathSym => (Flags & 0x10) != 0;
public bool IsBackup => (Flags & 0x20) != 0;
public bool IsSecured => (Flags & 0x40) != 0;
public bool IsAltName => (Flags & 0x80) != 0;
public static FileType FileTypeFromByte(byte value)
{
return Enum.IsDefined(typeof(FileType), value)
? (FileType)value
: Headers.FileType.Unknown;
}
public static bool IsArchive(Stream stream)
{
var bytes = new byte[2];
if (stream.Read(bytes, 0, 2) != 2)
{
return false;
}
extendedHeader.Add(header);
return CheckMagicBytes(bytes);
}
}
// Flag helpers
public bool IsGabled => (Flags & 0x01) != 0;
public bool IsAnsiPage => (Flags & 0x02) != 0;
public bool IsVolume => (Flags & 0x04) != 0;
public bool IsArjProtected => (Flags & 0x08) != 0;
public bool IsPathSym => (Flags & 0x10) != 0;
public bool IsBackup => (Flags & 0x20) != 0;
public bool IsSecured => (Flags & 0x40) != 0;
public bool IsAltName => (Flags & 0x80) != 0;
public static FileType FileTypeFromByte(byte value)
{
return Enum.IsDefined(typeof(FileType), value) ? (FileType)value : Headers.FileType.Unknown;
}
public static bool IsArchive(Stream stream)
{
var bytes = new byte[2];
if (stream.Read(bytes, 0, 2) != 2)
protected static bool CheckMagicBytes(byte[] headerBytes)
{
return false;
var magicValue = (ushort)(headerBytes[0] | headerBytes[1] << 8);
return magicValue == ARJ_MAGIC;
}
return CheckMagicBytes(bytes);
}
protected static bool CheckMagicBytes(byte[] headerBytes)
{
var magicValue = (ushort)(headerBytes[0] | headerBytes[1] << 8);
return magicValue == ARJ_MAGIC;
}
}
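The ARJ magic is stored little-endian on disk; a hedged worked example of the check above:

var bytes = new byte[] { 0x60, 0xEA };          // on-disk byte order
var magic = (ushort)(bytes[0] | bytes[1] << 8); // 0xEA60
Console.WriteLine(magic == 0xEA60);             // true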

View File

@@ -7,156 +7,158 @@ using System.Text;
using System.Threading;
using System.Threading.Tasks;
namespace SharpCompress.Common.Arj.Headers;
public partial class ArjLocalHeader : ArjHeader
namespace SharpCompress.Common.Arj.Headers
{
public ArchiveEncoding ArchiveEncoding { get; }
public long DataStartPosition { get; protected set; }
public byte ArchiverVersionNumber { get; set; }
public byte MinVersionToExtract { get; set; }
public HostOS HostOS { get; set; }
public CompressionMethod CompressionMethod { get; set; }
public DosDateTime DateTimeModified { get; set; } = new DosDateTime(0);
public long CompressedSize { get; set; }
public long OriginalSize { get; set; }
public long OriginalCrc32 { get; set; }
public int FileSpecPosition { get; set; }
public int FileAccessMode { get; set; }
public byte FirstChapter { get; set; }
public byte LastChapter { get; set; }
public long ExtendedFilePosition { get; set; }
public DosDateTime DateTimeAccessed { get; set; } = new DosDateTime(0);
public DosDateTime DateTimeCreated { get; set; } = new DosDateTime(0);
public long OriginalSizeEvenForVolumes { get; set; }
public string Name { get; set; } = string.Empty;
public string Comment { get; set; } = string.Empty;
private const byte StdHdrSize = 30;
private const byte R9HdrSize = 46;
public ArjLocalHeader(ArchiveEncoding archiveEncoding)
: base(ArjHeaderType.LocalHeader)
public partial class ArjLocalHeader : ArjHeader
{
ArchiveEncoding =
archiveEncoding ?? throw new ArgumentNullException(nameof(archiveEncoding));
}
public ArchiveEncoding ArchiveEncoding { get; }
public long DataStartPosition { get; protected set; }
public override ArjHeader? Read(Stream stream)
{
var body = ReadHeader(stream);
if (body.Length > 0)
public byte ArchiverVersionNumber { get; set; }
public byte MinVersionToExtract { get; set; }
public HostOS HostOS { get; set; }
public CompressionMethod CompressionMethod { get; set; }
public DosDateTime DateTimeModified { get; set; } = new DosDateTime(0);
public long CompressedSize { get; set; }
public long OriginalSize { get; set; }
public long OriginalCrc32 { get; set; }
public int FileSpecPosition { get; set; }
public int FileAccessMode { get; set; }
public byte FirstChapter { get; set; }
public byte LastChapter { get; set; }
public long ExtendedFilePosition { get; set; }
public DosDateTime DateTimeAccessed { get; set; } = new DosDateTime(0);
public DosDateTime DateTimeCreated { get; set; } = new DosDateTime(0);
public long OriginalSizeEvenForVolumes { get; set; }
public string Name { get; set; } = string.Empty;
public string Comment { get; set; } = string.Empty;
private const byte StdHdrSize = 30;
private const byte R9HdrSize = 46;
public ArjLocalHeader(ArchiveEncoding archiveEncoding)
: base(ArjHeaderType.LocalHeader)
{
ReadExtendedHeaders(stream);
var header = LoadFrom(body);
header.DataStartPosition = stream.Position;
return header;
ArchiveEncoding =
archiveEncoding ?? throw new ArgumentNullException(nameof(archiveEncoding));
}
return null;
}
// ReadAsync moved to ArjLocalHeader.Async.cs
public ArjLocalHeader LoadFrom(byte[] headerBytes)
{
int offset = 0;
int ReadInt16()
public override ArjHeader? Read(Stream stream)
{
if (offset + 1 >= headerBytes.Length)
var body = ReadHeader(stream);
if (body.Length > 0)
{
throw new EndOfStreamException();
ReadExtendedHeaders(stream);
var header = LoadFrom(body);
header.DataStartPosition = stream.Position;
return header;
}
var v = headerBytes[offset] & 0xFF | (headerBytes[offset + 1] & 0xFF) << 8;
offset += 2;
return v;
return null;
}
long ReadInt32()
// ReadAsync moved to ArjLocalHeader.Async.cs
public ArjLocalHeader LoadFrom(byte[] headerBytes)
{
if (offset + 3 >= headerBytes.Length)
int offset = 0;
int ReadInt16()
{
throw new EndOfStreamException();
if (offset + 1 >= headerBytes.Length)
{
throw new EndOfStreamException();
}
var v = headerBytes[offset] & 0xFF | (headerBytes[offset + 1] & 0xFF) << 8;
offset += 2;
return v;
}
long v =
headerBytes[offset] & 0xFF
| (headerBytes[offset + 1] & 0xFF) << 8
| (headerBytes[offset + 2] & 0xFF) << 16
| (headerBytes[offset + 3] & 0xFF) << 24;
offset += 4;
return v;
}
byte headerSize = headerBytes[offset++];
ArchiverVersionNumber = headerBytes[offset++];
MinVersionToExtract = headerBytes[offset++];
HostOS hostOS = (HostOS)headerBytes[offset++];
Flags = headerBytes[offset++];
CompressionMethod = CompressionMethodFromByte(headerBytes[offset++]);
FileType = FileTypeFromByte(headerBytes[offset++]);
offset++; // Skip 1 byte
var rawTimestamp = ReadInt32();
DateTimeModified = rawTimestamp != 0 ? new DosDateTime(rawTimestamp) : new DosDateTime(0);
CompressedSize = ReadInt32();
OriginalSize = ReadInt32();
OriginalCrc32 = ReadInt32();
FileSpecPosition = ReadInt16();
FileAccessMode = ReadInt16();
FirstChapter = headerBytes[offset++];
LastChapter = headerBytes[offset++];
ExtendedFilePosition = 0;
OriginalSizeEvenForVolumes = 0;
if (headerSize > StdHdrSize)
{
ExtendedFilePosition = ReadInt32();
if (headerSize >= R9HdrSize)
long ReadInt32()
{
rawTimestamp = ReadInt32();
DateTimeAccessed =
rawTimestamp != 0 ? new DosDateTime(rawTimestamp) : new DosDateTime(0);
rawTimestamp = ReadInt32();
DateTimeCreated =
rawTimestamp != 0 ? new DosDateTime(rawTimestamp) : new DosDateTime(0);
OriginalSizeEvenForVolumes = ReadInt32();
if (offset + 3 >= headerBytes.Length)
{
throw new EndOfStreamException();
}
long v =
headerBytes[offset] & 0xFF
| (headerBytes[offset + 1] & 0xFF) << 8
| (headerBytes[offset + 2] & 0xFF) << 16
| (headerBytes[offset + 3] & 0xFF) << 24;
offset += 4;
return v;
}
byte headerSize = headerBytes[offset++];
ArchiverVersionNumber = headerBytes[offset++];
MinVersionToExtract = headerBytes[offset++];
HostOS hostOS = (HostOS)headerBytes[offset++];
Flags = headerBytes[offset++];
CompressionMethod = CompressionMethodFromByte(headerBytes[offset++]);
FileType = FileTypeFromByte(headerBytes[offset++]);
offset++; // Skip 1 byte
var rawTimestamp = ReadInt32();
DateTimeModified =
rawTimestamp != 0 ? new DosDateTime(rawTimestamp) : new DosDateTime(0);
CompressedSize = ReadInt32();
OriginalSize = ReadInt32();
OriginalCrc32 = ReadInt32();
FileSpecPosition = ReadInt16();
FileAccessMode = ReadInt16();
FirstChapter = headerBytes[offset++];
LastChapter = headerBytes[offset++];
ExtendedFilePosition = 0;
OriginalSizeEvenForVolumes = 0;
if (headerSize > StdHdrSize)
{
ExtendedFilePosition = ReadInt32();
if (headerSize >= R9HdrSize)
{
rawTimestamp = ReadInt32();
DateTimeAccessed =
rawTimestamp != 0 ? new DosDateTime(rawTimestamp) : new DosDateTime(0);
rawTimestamp = ReadInt32();
DateTimeCreated =
rawTimestamp != 0 ? new DosDateTime(rawTimestamp) : new DosDateTime(0);
OriginalSizeEvenForVolumes = ReadInt32();
}
}
Name = Encoding.ASCII.GetString(
headerBytes,
offset,
Array.IndexOf(headerBytes, (byte)0, offset) - offset
);
offset += Name.Length + 1;
Comment = Encoding.ASCII.GetString(
headerBytes,
offset,
Array.IndexOf(headerBytes, (byte)0, offset) - offset
);
offset += Comment.Length + 1;
return this;
}
Name = Encoding.ASCII.GetString(
headerBytes,
offset,
Array.IndexOf(headerBytes, (byte)0, offset) - offset
);
offset += Name.Length + 1;
Comment = Encoding.ASCII.GetString(
headerBytes,
offset,
Array.IndexOf(headerBytes, (byte)0, offset) - offset
);
offset += Comment.Length + 1;
return this;
}
public static CompressionMethod CompressionMethodFromByte(byte value)
{
return value switch
public static CompressionMethod CompressionMethodFromByte(byte value)
{
0 => CompressionMethod.Stored,
1 => CompressionMethod.CompressedMost,
2 => CompressionMethod.Compressed,
3 => CompressionMethod.CompressedFaster,
4 => CompressionMethod.CompressedFastest,
8 => CompressionMethod.NoDataNoCrc,
9 => CompressionMethod.NoData,
_ => CompressionMethod.Unknown,
};
return value switch
{
0 => CompressionMethod.Stored,
1 => CompressionMethod.CompressedMost,
2 => CompressionMethod.Compressed,
3 => CompressionMethod.CompressedFaster,
4 => CompressionMethod.CompressedFastest,
8 => CompressionMethod.NoDataNoCrc,
9 => CompressionMethod.NoData,
_ => CompressionMethod.Unknown,
};
}
}
}
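Name and Comment above are parsed as null-terminated ASCII via Array.IndexOf. The same pattern as a standalone, hedged sketch (helper name hypothetical; assumes the usual System and System.Text imports):

static string ReadCString(byte[] buffer, ref int offset)
{
    var end = Array.IndexOf(buffer, (byte)0, offset);
    if (end < 0)
    {
        end = buffer.Length; // tolerate a missing terminator
    }
    var value = Encoding.ASCII.GetString(buffer, offset, end - offset);
    offset = Math.Min(end + 1, buffer.Length); // step past the terminator
    return value;
}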

View File

@@ -6,136 +6,137 @@ using System.Threading.Tasks;
using SharpCompress.Compressors.Deflate;
using SharpCompress.Crypto;
namespace SharpCompress.Common.Arj.Headers;
public partial class ArjMainHeader : ArjHeader
namespace SharpCompress.Common.Arj.Headers
{
private const int FIRST_HDR_SIZE = 34;
private const ushort ARJ_MAGIC = 0xEA60;
public ArchiveEncoding ArchiveEncoding { get; }
public int ArchiverVersionNumber { get; private set; }
public int MinVersionToExtract { get; private set; }
public HostOS HostOs { get; private set; }
public int SecurityVersion { get; private set; }
public DosDateTime CreationDateTime { get; private set; } = new DosDateTime(0);
public long CompressedSize { get; private set; }
public long ArchiveSize { get; private set; }
public long SecurityEnvelope { get; private set; }
public int FileSpecPosition { get; private set; }
public int SecurityEnvelopeLength { get; private set; }
public int EncryptionVersion { get; private set; }
public int LastChapter { get; private set; }
public int ArjProtectionFactor { get; private set; }
public int Flags2 { get; private set; }
public string Name { get; private set; } = string.Empty;
public string Comment { get; private set; } = string.Empty;
public ArjMainHeader(ArchiveEncoding archiveEncoding)
: base(ArjHeaderType.MainHeader)
public partial class ArjMainHeader : ArjHeader
{
ArchiveEncoding =
archiveEncoding ?? throw new ArgumentNullException(nameof(archiveEncoding));
}
private const int FIRST_HDR_SIZE = 34;
private const ushort ARJ_MAGIC = 0xEA60;
public override ArjHeader? Read(Stream stream)
{
var body = ReadHeader(stream);
ReadExtendedHeaders(stream);
return LoadFrom(body);
}
public ArchiveEncoding ArchiveEncoding { get; }
// ReadAsync moved to ArjMainHeader.Async.cs
public int ArchiverVersionNumber { get; private set; }
public int MinVersionToExtract { get; private set; }
public HostOS HostOs { get; private set; }
public int SecurityVersion { get; private set; }
public DosDateTime CreationDateTime { get; private set; } = new DosDateTime(0);
public long CompressedSize { get; private set; }
public long ArchiveSize { get; private set; }
public long SecurityEnvelope { get; private set; }
public int FileSpecPosition { get; private set; }
public int SecurityEnvelopeLength { get; private set; }
public int EncryptionVersion { get; private set; }
public int LastChapter { get; private set; }
public ArjMainHeader LoadFrom(byte[] headerBytes)
{
var offset = 1;
public int ArjProtectionFactor { get; private set; }
public int Flags2 { get; private set; }
public string Name { get; private set; } = string.Empty;
public string Comment { get; private set; } = string.Empty;
byte ReadByte()
public ArjMainHeader(ArchiveEncoding archiveEncoding)
: base(ArjHeaderType.MainHeader)
{
if (offset >= headerBytes.Length)
{
throw new EndOfStreamException();
}
return (byte)(headerBytes[offset++] & 0xFF);
ArchiveEncoding =
archiveEncoding ?? throw new ArgumentNullException(nameof(archiveEncoding));
}
int ReadInt16()
public override ArjHeader? Read(Stream stream)
{
if (offset + 1 >= headerBytes.Length)
{
throw new EndOfStreamException();
}
var v = headerBytes[offset] & 0xFF | (headerBytes[offset + 1] & 0xFF) << 8;
offset += 2;
return v;
var body = ReadHeader(stream);
ReadExtendedHeaders(stream);
return LoadFrom(body);
}
long ReadInt32()
{
if (offset + 3 >= headerBytes.Length)
{
throw new EndOfStreamException();
}
long v =
headerBytes[offset] & 0xFF
| (headerBytes[offset + 1] & 0xFF) << 8
| (headerBytes[offset + 2] & 0xFF) << 16
| (headerBytes[offset + 3] & 0xFF) << 24;
offset += 4;
return v;
}
string ReadNullTerminatedString(byte[] x, int startIndex)
{
var result = new StringBuilder();
int i = startIndex;
// ReadAsync moved to ArjMainHeader.Async.cs
while (i < x.Length && x[i] != 0)
public ArjMainHeader LoadFrom(byte[] headerBytes)
{
var offset = 1;
byte ReadByte()
{
result.Append((char)x[i]);
if (offset >= headerBytes.Length)
{
throw new EndOfStreamException();
}
return (byte)(headerBytes[offset++] & 0xFF);
}
int ReadInt16()
{
if (offset + 1 >= headerBytes.Length)
{
throw new EndOfStreamException();
}
var v = headerBytes[offset] & 0xFF | (headerBytes[offset + 1] & 0xFF) << 8;
offset += 2;
return v;
}
long ReadInt32()
{
if (offset + 3 >= headerBytes.Length)
{
throw new EndOfStreamException();
}
long v =
headerBytes[offset] & 0xFF
| (headerBytes[offset + 1] & 0xFF) << 8
| (headerBytes[offset + 2] & 0xFF) << 16
| (headerBytes[offset + 3] & 0xFF) << 24;
offset += 4;
return v;
}
string ReadNullTerminatedString(byte[] x, int startIndex)
{
var result = new StringBuilder();
int i = startIndex;
while (i < x.Length && x[i] != 0)
{
result.Append((char)x[i]);
i++;
}
// Skip the null terminator
i++;
if (i < x.Length)
{
byte[] remainder = new byte[x.Length - i];
Array.Copy(x, i, remainder, 0, remainder.Length);
x = remainder;
}
return result.ToString();
}
// Skip the null terminator
i++;
if (i < x.Length)
{
byte[] remainder = new byte[x.Length - i];
Array.Copy(x, i, remainder, 0, remainder.Length);
x = remainder;
}
return result.ToString();
ArchiverVersionNumber = ReadByte();
MinVersionToExtract = ReadByte();
var hostOsByte = ReadByte();
HostOs = hostOsByte <= 11 ? (HostOS)hostOsByte : HostOS.Unknown;
Flags = ReadByte();
SecurityVersion = ReadByte();
FileType = FileTypeFromByte(ReadByte());
offset++; // skip reserved
CreationDateTime = new DosDateTime((int)ReadInt32());
CompressedSize = ReadInt32();
ArchiveSize = ReadInt32();
SecurityEnvelope = ReadInt32();
FileSpecPosition = ReadInt16();
SecurityEnvelopeLength = ReadInt16();
EncryptionVersion = ReadByte();
LastChapter = ReadByte();
Name = ReadNullTerminatedString(headerBytes, offset);
Comment = ReadNullTerminatedString(headerBytes, offset + 1 + Name.Length);
return this;
}
ArchiverVersionNumber = ReadByte();
MinVersionToExtract = ReadByte();
var hostOsByte = ReadByte();
HostOs = hostOsByte <= 11 ? (HostOS)hostOsByte : HostOS.Unknown;
Flags = ReadByte();
SecurityVersion = ReadByte();
FileType = FileTypeFromByte(ReadByte());
offset++; // skip reserved
CreationDateTime = new DosDateTime((int)ReadInt32());
CompressedSize = ReadInt32();
ArchiveSize = ReadInt32();
SecurityEnvelope = ReadInt32();
FileSpecPosition = ReadInt16();
SecurityEnvelopeLength = ReadInt16();
EncryptionVersion = ReadByte();
LastChapter = ReadByte();
Name = ReadNullTerminatedString(headerBytes, offset);
Comment = ReadNullTerminatedString(headerBytes, offset + 1 + Name.Length);
return this;
}
}

View File

@@ -4,16 +4,17 @@ using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace SharpCompress.Common.Arj.Headers;
public enum CompressionMethod
namespace SharpCompress.Common.Arj.Headers
{
Stored = 0,
CompressedMost = 1,
Compressed = 2,
CompressedFaster = 3,
CompressedFastest = 4,
NoDataNoCrc = 8,
NoData = 9,
Unknown,
public enum CompressionMethod
{
Stored = 0,
CompressedMost = 1,
Compressed = 2,
CompressedFaster = 3,
CompressedFastest = 4,
NoDataNoCrc = 8,
NoData = 9,
Unknown,
}
}

View File

@@ -1,36 +1,37 @@
using System;
namespace SharpCompress.Common.Arj.Headers;
public class DosDateTime
namespace SharpCompress.Common.Arj.Headers
{
public DateTime DateTime { get; }
public DosDateTime(long dosValue)
public class DosDateTime
{
public DateTime DateTime { get; }
// Ensure only the lower 32 bits are used
int value = unchecked((int)(dosValue & 0xFFFFFFFF));
var date = (value >> 16) & 0xFFFF;
var time = value & 0xFFFF;
var day = date & 0x1F;
var month = (date >> 5) & 0x0F;
var year = ((date >> 9) & 0x7F) + 1980;
var second = (time & 0x1F) * 2;
var minute = (time >> 5) & 0x3F;
var hour = (time >> 11) & 0x1F;
try
public DosDateTime(long dosValue)
{
DateTime = new DateTime(year, month, day, hour, minute, second);
}
catch
{
DateTime = DateTime.MinValue;
// Ensure only the lower 32 bits are used
int value = unchecked((int)(dosValue & 0xFFFFFFFF));
var date = (value >> 16) & 0xFFFF;
var time = value & 0xFFFF;
var day = date & 0x1F;
var month = (date >> 5) & 0x0F;
var year = ((date >> 9) & 0x7F) + 1980;
var second = (time & 0x1F) * 2;
var minute = (time >> 5) & 0x3F;
var hour = (time >> 11) & 0x1F;
try
{
DateTime = new DateTime(year, month, day, hour, minute, second);
}
catch
{
DateTime = DateTime.MinValue;
}
}
public override string ToString() => DateTime.ToString("yyyy-MM-dd HH:mm:ss");
}
public override string ToString() => DateTime.ToString("yyyy-MM-dd HH:mm:ss");
}

View File

@@ -1,12 +1,13 @@
namespace SharpCompress.Common.Arj.Headers;
public enum FileType : byte
namespace SharpCompress.Common.Arj.Headers
{
Binary = 0,
Text7Bit = 1,
CommentHeader = 2,
Directory = 3,
VolumeLabel = 4,
ChapterLabel = 5,
Unknown = 255,
public enum FileType : byte
{
Binary = 0,
Text7Bit = 1,
CommentHeader = 2,
Directory = 3,
VolumeLabel = 4,
ChapterLabel = 5,
Unknown = 255,
}
}

View File

@@ -1,18 +1,19 @@
namespace SharpCompress.Common.Arj.Headers;
public enum HostOS
namespace SharpCompress.Common.Arj.Headers
{
MsDos = 0,
PrimOS = 1,
Unix = 2,
Amiga = 3,
MacOs = 4,
OS2 = 5,
AppleGS = 6,
AtariST = 7,
NeXT = 8,
VaxVMS = 9,
Win95 = 10,
Win32 = 11,
Unknown = 255,
public enum HostOS
{
MsDos = 0,
PrimOS = 1,
Unix = 2,
Amiga = 3,
MacOs = 4,
OS2 = 5,
AppleGS = 6,
AtariST = 7,
NeXT = 8,
VaxVMS = 9,
Win95 = 10,
Win32 = 11,
Unknown = 255,
}
}

View File

@@ -0,0 +1,108 @@
using System;
using System.Buffers.Binary;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
namespace SharpCompress.Common
{
public sealed class AsyncBinaryReader : IDisposable
{
private readonly Stream _stream;
private readonly Stream _originalStream;
private readonly bool _leaveOpen;
private readonly byte[] _buffer = new byte[8];
private bool _disposed;
public AsyncBinaryReader(Stream stream, bool leaveOpen = false, int bufferSize = 4096)
{
_originalStream = stream ?? throw new ArgumentNullException(nameof(stream));
if (!stream.CanRead)
{
throw new ArgumentException("Stream must be readable.", nameof(stream));
}
_leaveOpen = leaveOpen;
// Use the stream directly without wrapping in BufferedStream
// BufferedStream uses synchronous Read internally which doesn't work with async-only streams
// SharpCompress uses SharpCompressStream for buffering which supports true async reads
_stream = stream;
}
public Stream BaseStream => _stream;
public async ValueTask<byte> ReadByteAsync(CancellationToken ct = default)
{
await _stream.ReadExactAsync(_buffer, 0, 1, ct).ConfigureAwait(false);
return _buffer[0];
}
public async ValueTask<ushort> ReadUInt16Async(CancellationToken ct = default)
{
await _stream.ReadExactAsync(_buffer, 0, 2, ct).ConfigureAwait(false);
return BinaryPrimitives.ReadUInt16LittleEndian(_buffer);
}
public async ValueTask<uint> ReadUInt32Async(CancellationToken ct = default)
{
await _stream.ReadExactAsync(_buffer, 0, 4, ct).ConfigureAwait(false);
return BinaryPrimitives.ReadUInt32LittleEndian(_buffer);
}
public async ValueTask<ulong> ReadUInt64Async(CancellationToken ct = default)
{
await _stream.ReadExactAsync(_buffer, 0, 8, ct).ConfigureAwait(false);
return BinaryPrimitives.ReadUInt64LittleEndian(_buffer);
}
public async ValueTask ReadBytesAsync(
byte[] bytes,
int offset,
int count,
CancellationToken ct = default
)
{
await _stream.ReadExactAsync(bytes, offset, count, ct).ConfigureAwait(false);
}
public async ValueTask SkipAsync(int count, CancellationToken ct = default)
{
await _stream.SkipAsync(count, ct).ConfigureAwait(false);
}
public void Dispose()
{
if (_disposed)
{
return;
}
_disposed = true;
// Dispose the original stream if we own it
if (!_leaveOpen)
{
_originalStream.Dispose();
}
}
#if NET8_0_OR_GREATER
public async ValueTask DisposeAsync()
{
if (_disposed)
{
return;
}
_disposed = true;
// Dispose the original stream if we own it
if (!_leaveOpen)
{
await _originalStream.DisposeAsync().ConfigureAwait(false);
}
}
#endif
}
}
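A minimal usage sketch for the new reader; `stream` and `ct` are assumed to come from the caller:

// Read a little-endian signature and flags, then skip a reserved field.
using var reader = new AsyncBinaryReader(stream, leaveOpen: true);
var signature = await reader.ReadUInt32Async(ct); // 4 bytes, little-endian
var flags = await reader.ReadUInt16Async(ct);     // 2 bytes, little-endian
await reader.SkipAsync(2, ct);                    // reserved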

View File

@@ -1,41 +0,0 @@
namespace SharpCompress.Common;
public static class Constants
{
/// <summary>
/// The default buffer size for stream operations, matching .NET's Stream.CopyTo default of 81920 bytes.
/// This can be modified globally at runtime.
/// </summary>
public static int BufferSize { get; set; } = 81920;
/// <summary>
/// The default size for rewindable buffers in SharpCompressStream.
/// Used for format detection on non-seekable streams.
/// </summary>
/// <remarks>
/// <para>
/// When opening archives from non-seekable streams (network streams, pipes,
/// compressed streams), SharpCompress uses a ring buffer to enable format
/// auto-detection. This buffer allows the library to try multiple decoders
/// by rewinding and re-reading the same data.
/// </para>
/// <para>
/// <b>Default:</b> 81920 bytes (80 KB) - sufficient for typical format detection.
/// </para>
/// <para>
/// <b>Typical usage:</b> 500-1000 bytes for most archives
/// </para>
/// <para>
/// <b>Can be overridden per-stream via ReaderOptions.RewindableBufferSize.</b>
/// </para>
/// <para>
/// <b>Increase if:</b>
/// <list type="bullet">
/// <item>Handling self-extracting archives (may need 512KB+)</item>
/// <item>Format detection fails with buffer overflow errors</item>
/// <item>Using custom formats with large headers</item>
/// </list>
/// </para>
/// </remarks>
public static int RewindableBufferSize { get; set; } = 81920;
}
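For context, these knobs were tuned either globally or per stream before the class was removed; a sketch (ReaderOptions.RewindableBufferSize is the per-stream override the remarks above name):

// Global default, e.g. for self-extracting archives with large stubs.
Constants.RewindableBufferSize = 512 * 1024;
// Per-stream override, leaving the global untouched.
var options = new ReaderOptions { RewindableBufferSize = 512 * 1024 };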

View File

@@ -42,6 +42,9 @@ public partial class EntryStream
await lzmaStream.FlushAsync().ConfigureAwait(false);
}
}
#if DEBUG_STREAMS
this.DebugDispose(typeof(EntryStream));
#endif
await base.DisposeAsync().ConfigureAwait(false);
await _stream.DisposeAsync().ConfigureAwait(false);
}

View File

@@ -8,8 +8,28 @@ using SharpCompress.Readers;
namespace SharpCompress.Common;
public partial class EntryStream : Stream
public partial class EntryStream : Stream, IStreamStack
{
#if DEBUG_STREAMS
long IStreamStack.InstanceId { get; set; }
#endif
int IStreamStack.DefaultBufferSize { get; set; }
Stream IStreamStack.BaseStream() => _stream;
int IStreamStack.BufferSize
{
get => 0;
set { }
}
int IStreamStack.BufferPosition
{
get => 0;
set { }
}
void IStreamStack.SetPosition(long position) { }
private readonly IReader _reader;
private readonly Stream _stream;
private bool _completed;
@@ -19,6 +39,9 @@ public partial class EntryStream : Stream
{
_reader = reader;
_stream = stream;
#if DEBUG_STREAMS
this.DebugConstruct(typeof(EntryStream));
#endif
}
/// <summary>
@@ -39,34 +62,24 @@ public partial class EntryStream : Stream
_isDisposed = true;
if (!(_completed || _reader.Cancelled))
{
if (Utility.UseSyncOverAsyncDispose())
{
SkipEntryAsync().GetAwaiter().GetResult();
}
else
{
SkipEntry();
}
SkipEntry();
}
//Need a safe standard approach to this - it's okay for compression to over-read. Handling needs to be standardised
if (_stream is IStreamStack ss)
{
if (
ss.GetStream<SharpCompress.Compressors.Deflate.DeflateStream>()
is SharpCompress.Compressors.Deflate.DeflateStream deflateStream
)
if (ss.BaseStream() is SharpCompress.Compressors.Deflate.DeflateStream deflateStream)
{
deflateStream.Flush(); //Deflate over-reads. Knock it back
}
else if (
ss.GetStream<SharpCompress.Compressors.LZMA.LzmaStream>()
is SharpCompress.Compressors.LZMA.LzmaStream lzmaStream
)
else if (ss.BaseStream() is SharpCompress.Compressors.LZMA.LzmaStream lzmaStream)
{
lzmaStream.Flush(); //Lzma over-reads. Knock it back
}
}
#if DEBUG_STREAMS
this.DebugDispose(typeof(EntryStream));
#endif
base.Dispose(disposing);
_stream.Dispose();
}

View File

@@ -4,7 +4,6 @@ using System.IO;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.Common;
using SharpCompress.IO;
namespace SharpCompress.Common.Rar;
@@ -48,12 +47,14 @@ internal class AsyncMarkingBinaryReader
public async ValueTask<ushort> ReadUInt16Async(CancellationToken cancellationToken = default)
{
CurrentReadByteCount += 2;
var bytes = await ReadBytesAsync(2, cancellationToken).ConfigureAwait(false);
return BinaryPrimitives.ReadUInt16LittleEndian(bytes);
}
public async ValueTask<uint> ReadUInt32Async(CancellationToken cancellationToken = default)
{
CurrentReadByteCount += 4;
var bytes = await ReadBytesAsync(4, cancellationToken).ConfigureAwait(false);
return BinaryPrimitives.ReadUInt32LittleEndian(bytes);
}
@@ -62,6 +63,7 @@ internal class AsyncMarkingBinaryReader
CancellationToken cancellationToken = default
)
{
CurrentReadByteCount += 8;
var bytes = await ReadBytesAsync(8, cancellationToken).ConfigureAwait(false);
return BinaryPrimitives.ReadUInt64LittleEndian(bytes);
}
@@ -70,6 +72,7 @@ internal class AsyncMarkingBinaryReader
CancellationToken cancellationToken = default
)
{
CurrentReadByteCount += 2;
var bytes = await ReadBytesAsync(2, cancellationToken).ConfigureAwait(false);
return BinaryPrimitives.ReadInt16LittleEndian(bytes);
}
@@ -78,6 +81,7 @@ internal class AsyncMarkingBinaryReader
CancellationToken cancellationToken = default
)
{
CurrentReadByteCount += 4;
var bytes = await ReadBytesAsync(4, cancellationToken).ConfigureAwait(false);
return BinaryPrimitives.ReadInt32LittleEndian(bytes);
}
@@ -86,6 +90,7 @@ internal class AsyncMarkingBinaryReader
CancellationToken cancellationToken = default
)
{
CurrentReadByteCount += 8;
var bytes = await ReadBytesAsync(8, cancellationToken).ConfigureAwait(false);
return BinaryPrimitives.ReadInt64LittleEndian(bytes);
}

View File

@@ -6,7 +6,13 @@ using System.Threading;
using System.Threading.Tasks;
using SharpCompress.Common.Rar;
using SharpCompress.IO;
#if !Rar2017_64bit
using size_t = System.UInt32;
#else
using nint = System.Int64;
using nuint = System.UInt64;
using size_t = System.UInt64;
#endif
namespace SharpCompress.Common.Rar.Headers;
@@ -113,9 +119,7 @@ internal partial class FileHeader
{
case FHEXTRA_CRYPT:
{
Rar5CryptoInfo = await Rar5CryptoInfo
.CreateAsync(reader, true)
.ConfigureAwait(false);
Rar5CryptoInfo = await Rar5CryptoInfo.CreateAsync(reader, true);
if (Rar5CryptoInfo.PswCheck.All(singleByte => singleByte == 0))
{
Rar5CryptoInfo = null;

View File

@@ -6,7 +6,13 @@ using System.Threading;
using System.Threading.Tasks;
using SharpCompress.Common.Rar;
using SharpCompress.IO;
#if !Rar2017_64bit
using size_t = System.UInt32;
#else
using nint = System.Int64;
using nuint = System.UInt64;
using size_t = System.UInt64;
#endif
namespace SharpCompress.Common.Rar.Headers;

View File

@@ -1,9 +1,6 @@
using System;
using System.Collections.Generic;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.Common;
using SharpCompress.Crypto;
namespace SharpCompress.Common.Rar;
@@ -58,83 +55,6 @@ internal sealed class RarCryptoWrapper : Stream
return count;
}
public override Task<int> ReadAsync(
byte[] buffer,
int offset,
int count,
CancellationToken cancellationToken
) => ReadAndDecryptAsync(buffer, offset, count, cancellationToken);
private async Task<int> ReadAndDecryptAsync(
byte[] buffer,
int offset,
int count,
CancellationToken cancellationToken
)
{
cancellationToken.ThrowIfCancellationRequested();
var queueSize = _data.Count;
var sizeToRead = count - queueSize;
if (sizeToRead > 0)
{
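// (~sizeToRead + 1) is two's-complement negation, so this rounds
// sizeToRead up to the next multiple of 16 (the AES block size).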
var alignedSize = sizeToRead + ((~sizeToRead + 1) & 0xf);
byte[] cipherText = new byte[16];
try
{
for (var i = 0; i < alignedSize / 16; i++)
{
await _actualStream
.ReadExactAsync(cipherText, 0, 16, cancellationToken)
.ConfigureAwait(false);
var readBytes = _rijndael.ProcessBlock(cipherText);
foreach (var readByte in readBytes)
{
_data.Enqueue(readByte);
}
}
}
catch (EndOfStreamException e)
{
throw new InvalidFormatException("Unexpected end of encrypted stream", e);
}
}
var bytesToReturn = Math.Min(count, _data.Count);
for (var i = 0; i < bytesToReturn; i++)
{
buffer[offset + i] = _data.Dequeue();
}
return bytesToReturn;
}
#if NETCOREAPP2_1_OR_GREATER || NETSTANDARD2_1_OR_GREATER
public override async ValueTask<int> ReadAsync(
Memory<byte> buffer,
CancellationToken cancellationToken = default
)
{
cancellationToken.ThrowIfCancellationRequested();
var array = System.Buffers.ArrayPool<byte>.Shared.Rent(buffer.Length);
try
{
var bytesRead = await ReadAndDecryptAsync(array, 0, buffer.Length, cancellationToken)
.ConfigureAwait(false);
new ReadOnlySpan<byte>(array, 0, bytesRead).CopyTo(buffer.Span);
return bytesRead;
}
finally
{
System.Buffers.ArrayPool<byte>.Shared.Return(array);
}
}
#endif
public override void Write(byte[] buffer, int offset, int count) =>
throw new NotSupportedException();

View File

@@ -116,9 +116,7 @@ public abstract class RarVolume : Volume
if (fh.FileName == "CMT")
{
var buffer = new byte[fh.CompressedSize];
await fh
.PackedStream.NotNull()
.ReadFullyAsync(buffer, cancellationToken);
fh.PackedStream.NotNull().ReadFully(buffer);
Comment = Encoding.UTF8.GetString(buffer, 0, buffer.Length - 1);
}
}

View File

@@ -1,40 +0,0 @@
#nullable disable
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.Compressors.LZMA;
using SharpCompress.Compressors.LZMA.Utilites;
namespace SharpCompress.Common.SevenZip;
internal sealed partial class ArchiveDatabase
{
internal async ValueTask<Stream> GetFolderStreamAsync(
Stream stream,
CFolder folder,
IPasswordProvider pw,
CancellationToken cancellationToken
)
{
var packStreamIndex = folder._firstPackStreamId;
var folderStartPackPos = GetFolderStreamPos(folder, 0);
var count = folder._packStreams.Count;
var packSizes = new long[count];
for (var j = 0; j < count; j++)
{
packSizes[j] = _packSizes[packStreamIndex + j];
}
return await DecoderStreamHelper
.CreateDecoderStreamAsync(
stream,
folderStartPackPos,
packSizes,
folder,
pw,
cancellationToken
)
.ConfigureAwait(false);
}
}

View File

@@ -1,4 +1,4 @@
#nullable disable
#nullable disable
using System;
using System.Collections.Generic;
@@ -8,7 +8,7 @@ using SharpCompress.Compressors.LZMA.Utilites;
namespace SharpCompress.Common.SevenZip;
internal partial class ArchiveDatabase
internal class ArchiveDatabase
{
internal byte _majorVersion;
internal byte _minorVersion;

View File

@@ -1,7 +1,6 @@
#nullable disable
using System;
using System.Collections.Generic;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
@@ -126,12 +125,10 @@ internal sealed partial class ArchiveReader
throw new InvalidOperationException();
}
var dataVector = await ReadAndDecodePackedStreamsAsync(
db._startPositionAfterHeader,
db.PasswordProvider,
cancellationToken
)
.ConfigureAwait(false);
var dataVector = ReadAndDecodePackedStreams(
db._startPositionAfterHeader,
db.PasswordProvider
);
// compressed header without content is odd but ok
if (dataVector.Count == 0)
@@ -153,428 +150,9 @@ internal sealed partial class ArchiveReader
}
}
await ReadHeaderAsync(db, db.PasswordProvider, cancellationToken).ConfigureAwait(false);
ReadHeader(db, db.PasswordProvider);
}
db.Fill();
return db;
}
private async ValueTask<List<byte[]>> ReadAndDecodePackedStreamsAsync(
long baseOffset,
IPasswordProvider pass,
CancellationToken cancellationToken
)
{
#if DEBUG
Log.WriteLine("-- ReadAndDecodePackedStreamsAsync --");
Log.PushIndent();
#endif
try
{
ReadStreamsInfo(
null,
out var dataStartPos,
out var packSizes,
out var packCrCs,
out var folders,
out var numUnpackStreamsInFolders,
out var unpackSizes,
out var digests
);
dataStartPos += baseOffset;
var dataVector = new List<byte[]>(folders.Count);
var packIndex = 0;
foreach (var folder in folders)
{
var oldDataStartPos = dataStartPos;
var myPackSizes = new long[folder._packStreams.Count];
for (var i = 0; i < myPackSizes.Length; i++)
{
var packSize = packSizes[packIndex + i];
myPackSizes[i] = packSize;
dataStartPos += packSize;
}
var outStream = await DecoderStreamHelper
.CreateDecoderStreamAsync(
_stream,
oldDataStartPos,
myPackSizes,
folder,
pass,
cancellationToken
)
.ConfigureAwait(false);
var unpackSize = checked((int)folder.GetUnpackSize());
var data = new byte[unpackSize];
await outStream
.ReadExactAsync(data, 0, data.Length, cancellationToken)
.ConfigureAwait(false);
if (outStream.ReadByte() >= 0)
{
throw new InvalidFormatException("Decoded stream is longer than expected.");
}
dataVector.Add(data);
if (folder.UnpackCrcDefined)
{
if (
Crc.Finish(Crc.Update(Crc.INIT_CRC, data, 0, unpackSize))
!= folder._unpackCrc
)
{
throw new InvalidFormatException(
"Decoded stream does not match expected CRC."
);
}
}
}
return dataVector;
}
finally
{
#if DEBUG
Log.PopIndent();
#endif
}
}
private async ValueTask ReadHeaderAsync(
ArchiveDatabase db,
IPasswordProvider getTextPassword,
CancellationToken cancellationToken
)
{
#if DEBUG
Log.WriteLine("-- ReadHeaderAsync --");
Log.PushIndent();
#endif
try
{
var type = ReadId();
if (type == BlockType.ArchiveProperties)
{
ReadArchiveProperties();
type = ReadId();
}
List<byte[]> dataVector = null;
if (type == BlockType.AdditionalStreamsInfo)
{
dataVector = await ReadAndDecodePackedStreamsAsync(
db._startPositionAfterHeader,
getTextPassword,
cancellationToken
)
.ConfigureAwait(false);
type = ReadId();
}
List<long> unpackSizes;
List<uint?> digests;
if (type == BlockType.MainStreamsInfo)
{
ReadStreamsInfo(
dataVector,
out db._dataStartPosition,
out db._packSizes,
out db._packCrCs,
out db._folders,
out db._numUnpackStreamsVector,
out unpackSizes,
out digests
);
db._dataStartPosition += db._startPositionAfterHeader;
type = ReadId();
}
else
{
unpackSizes = new List<long>(db._folders.Count);
digests = new List<uint?>(db._folders.Count);
db._numUnpackStreamsVector = new List<int>(db._folders.Count);
for (var i = 0; i < db._folders.Count; i++)
{
var folder = db._folders[i];
unpackSizes.Add(folder.GetUnpackSize());
digests.Add(folder._unpackCrc);
db._numUnpackStreamsVector.Add(1);
}
}
db._files.Clear();
if (type == BlockType.End)
{
return;
}
if (type != BlockType.FilesInfo)
{
throw new InvalidOperationException();
}
var numFiles = ReadNum();
#if DEBUG
Log.WriteLine("NumFiles: " + numFiles);
#endif
db._files = new List<CFileItem>(numFiles);
for (var i = 0; i < numFiles; i++)
{
db._files.Add(new CFileItem());
}
var emptyStreamVector = new BitVector(numFiles);
BitVector emptyFileVector = null;
BitVector antiFileVector = null;
var numEmptyStreams = 0;
for (; ; )
{
type = ReadId();
if (type == BlockType.End)
{
break;
}
var size = checked((long)ReadNumber());
var oldPos = _currentReader.Offset;
switch (type)
{
case BlockType.Name:
using (var streamSwitch = new CStreamSwitch())
{
streamSwitch.Set(this, dataVector);
#if DEBUG
Log.Write("FileNames:");
#endif
for (var i = 0; i < db._files.Count; i++)
{
db._files[i].Name = _currentReader.ReadString();
#if DEBUG
Log.Write(" " + db._files[i].Name);
#endif
}
#if DEBUG
Log.WriteLine();
#endif
}
break;
case BlockType.WinAttributes:
#if DEBUG
Log.Write("WinAttributes:");
#endif
ReadAttributeVector(
dataVector,
numFiles,
delegate(int i, uint? attr)
{
db._files[i].ExtendedAttrib = attr;
if (attr.HasValue && (attr.Value >> 16) != 0)
{
attr = attr.Value & 0x7FFFu;
}
db._files[i].Attrib = attr;
#if DEBUG
Log.Write(
" " + (attr.HasValue ? attr.Value.ToString("x8") : "n/a")
);
#endif
}
);
#if DEBUG
Log.WriteLine();
#endif
break;
case BlockType.EmptyStream:
emptyStreamVector = ReadBitVector(numFiles);
#if DEBUG
Log.Write("EmptyStream: ");
#endif
for (var i = 0; i < emptyStreamVector.Length; i++)
{
if (emptyStreamVector[i])
{
#if DEBUG
Log.Write("x");
#endif
numEmptyStreams++;
}
else
{
#if DEBUG
Log.Write(".");
#endif
}
}
#if DEBUG
Log.WriteLine();
#endif
emptyFileVector = new BitVector(numEmptyStreams);
antiFileVector = new BitVector(numEmptyStreams);
break;
case BlockType.EmptyFile:
emptyFileVector = ReadBitVector(numEmptyStreams);
#if DEBUG
Log.Write("EmptyFile: ");
for (var i = 0; i < numEmptyStreams; i++)
{
Log.Write(emptyFileVector[i] ? "x" : ".");
}
Log.WriteLine();
#endif
break;
case BlockType.Anti:
antiFileVector = ReadBitVector(numEmptyStreams);
#if DEBUG
Log.Write("Anti: ");
for (var i = 0; i < numEmptyStreams; i++)
{
Log.Write(antiFileVector[i] ? "x" : ".");
}
Log.WriteLine();
#endif
break;
case BlockType.StartPos:
#if DEBUG
Log.Write("StartPos:");
#endif
ReadNumberVector(
dataVector,
numFiles,
delegate(int i, long? startPos)
{
db._files[i].StartPos = startPos;
#if DEBUG
Log.Write(
" " + (startPos.HasValue ? startPos.Value.ToString() : "n/a")
);
#endif
}
);
#if DEBUG
Log.WriteLine();
#endif
break;
case BlockType.CTime:
#if DEBUG
Log.Write("CTime:");
#endif
ReadDateTimeVector(
dataVector,
numFiles,
delegate(int i, DateTime? time)
{
db._files[i].CTime = time;
#if DEBUG
Log.Write(" " + (time.HasValue ? time.Value.ToString() : "n/a"));
#endif
}
);
#if DEBUG
Log.WriteLine();
#endif
break;
case BlockType.ATime:
#if DEBUG
Log.Write("ATime:");
#endif
ReadDateTimeVector(
dataVector,
numFiles,
delegate(int i, DateTime? time)
{
db._files[i].ATime = time;
#if DEBUG
Log.Write(" " + (time.HasValue ? time.Value.ToString() : "n/a"));
#endif
}
);
#if DEBUG
Log.WriteLine();
#endif
break;
case BlockType.MTime:
#if DEBUG
Log.Write("MTime:");
#endif
ReadDateTimeVector(
dataVector,
numFiles,
delegate(int i, DateTime? time)
{
db._files[i].MTime = time;
#if DEBUG
Log.Write(" " + (time.HasValue ? time.Value.ToString() : "n/a"));
#endif
}
);
#if DEBUG
Log.WriteLine();
#endif
break;
case BlockType.Dummy:
#if DEBUG
Log.Write("Dummy: " + size);
#endif
for (long j = 0; j < size; j++)
{
if (ReadByte() != 0)
{
throw new InvalidOperationException();
}
}
break;
default:
SkipData(size);
break;
}
var checkRecordsSize = (db._majorVersion > 0 || db._minorVersion > 2);
if (checkRecordsSize && _currentReader.Offset - oldPos != size)
{
throw new InvalidOperationException();
}
}
var emptyFileIndex = 0;
var sizeIndex = 0;
for (var i = 0; i < numFiles; i++)
{
var file = db._files[i];
file.HasStream = !emptyStreamVector[i];
if (file.HasStream)
{
file.IsDir = false;
file.IsAnti = false;
file.Size = unpackSizes[sizeIndex];
file.Crc = digests[sizeIndex];
sizeIndex++;
}
else
{
file.IsDir = !emptyFileVector[emptyFileIndex];
file.IsAnti = antiFileVector[emptyFileIndex];
emptyFileIndex++;
file.Size = 0;
file.Crc = null;
}
}
}
finally
{
#if DEBUG
Log.PopIndent();
#endif
}
}
}
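The tail of the loop above maps the three bit vectors onto file kinds; a recap of that mapping:

// emptyStreamVector[i] == false -> regular file: the next unpackSizes/digests
//                                  entry supplies Size and Crc.
// emptyStreamVector[i] == true  -> no stream; the per-empty-stream vectors
//                                  decide what it is:
//   emptyFileVector[j] == false -> directory
//   emptyFileVector[j] == true  -> zero-byte file
//   antiFileVector[j]  == true  -> "anti" item (deletion marker)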

View File

@@ -1,7 +1,5 @@
using System.IO;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.IO;
namespace SharpCompress.Common.SevenZip;
@@ -60,35 +58,6 @@ internal class SevenZipFilePart : FilePart
return new ReadOnlySubStream(folderStream, Header.Size, leaveOpen: false);
}
internal override async ValueTask<Stream?> GetCompressedStreamAsync(
CancellationToken cancellationToken = default
)
{
if (!Header.HasStream)
{
return Stream.Null;
}
var folderStream = await _database.GetFolderStreamAsync(
_stream,
Folder!,
_database.PasswordProvider,
cancellationToken
);
var firstFileIndex = _database._folderStartFileIndex[_database._folders.IndexOf(Folder!)];
var skipCount = Index - firstFileIndex;
long skipSize = 0;
for (var i = 0; i < skipCount; i++)
{
skipSize += _database._files[firstFileIndex + i].Size;
}
if (skipSize > 0)
{
await folderStream.SkipAsync(skipSize, cancellationToken);
}
return new ReadOnlySubStream(folderStream, Header.Size, leaveOpen: false);
}
public CompressionType CompressionType
{
get

View File

@@ -6,7 +6,6 @@ using System.IO;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.IO;
namespace SharpCompress.Common.Tar.Headers;

View File

@@ -1,6 +1,4 @@
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.Common.Tar.Headers;
namespace SharpCompress.Common.Tar;
@@ -25,28 +23,10 @@ internal sealed class TarFilePart : FilePart
if (_seekableStream != null)
{
_seekableStream.Position = Header.DataStartPosition ?? 0;
return new TarReadOnlySubStream(_seekableStream, Header.Size, false);
return new TarReadOnlySubStream(_seekableStream, Header.Size);
}
return Header.PackedStream.NotNull();
}
internal override ValueTask<Stream?> GetCompressedStreamAsync(
CancellationToken cancellationToken = default
)
{
if (_seekableStream != null)
{
var useSyncOverAsync = false;
#if LEGACY_DOTNET
useSyncOverAsync = true;
#endif
_seekableStream.Position = Header.DataStartPosition ?? 0;
return new ValueTask<Stream?>(
new TarReadOnlySubStream(_seekableStream, Header.Size, useSyncOverAsync)
);
}
return new ValueTask<Stream?>(Header.PackedStream.NotNull());
}
internal override Stream? GetRawStream() => null;
}

View File

@@ -13,17 +13,12 @@ internal static partial class TarHeaderFactory
IArchiveEncoding archiveEncoding
)
{
#if NET8_0_OR_GREATER
await using var reader = new AsyncBinaryReader(stream, leaveOpen: true);
#else
using var reader = new AsyncBinaryReader(stream, leaveOpen: true);
#endif
while (true)
{
TarHeader? header = null;
try
{
var reader = new AsyncBinaryReader(stream, false);
header = new TarHeader(archiveEncoding);
if (!await header.ReadAsync(reader))
{
@@ -41,15 +36,7 @@ internal static partial class TarHeaderFactory
break;
case StreamingMode.Streaming:
{
var useSyncOverAsync = false;
#if LEGACY_DOTNET
useSyncOverAsync = true;
#endif
header.PackedStream = new TarReadOnlySubStream(
stream,
header.Size,
useSyncOverAsync
);
header.PackedStream = new TarReadOnlySubStream(stream, header.Size);
}
break;
default:

View File

@@ -37,11 +37,7 @@ internal static partial class TarHeaderFactory
break;
case StreamingMode.Streaming:
{
header.PackedStream = new TarReadOnlySubStream(
stream,
header.Size,
false
);
header.PackedStream = new TarReadOnlySubStream(stream, header.Size);
}
break;
default:

View File

@@ -1,21 +1,40 @@
using System;
using System.IO;
using SharpCompress.IO;
namespace SharpCompress.Common.Tar;
internal class TarReadOnlySubStream : Stream
internal class TarReadOnlySubStream : SharpCompressStream, IStreamStack
{
private readonly Stream _stream;
private readonly bool _useSyncOverAsyncDispose;
#if DEBUG_STREAMS
long IStreamStack.InstanceId { get; set; }
#endif
Stream IStreamStack.BaseStream() => base.Stream;
int IStreamStack.BufferSize
{
get => 0;
set { }
}
int IStreamStack.BufferPosition
{
get => 0;
set { }
}
void IStreamStack.SetPosition(long position) { }
private bool _isDisposed;
private long _amountRead;
public TarReadOnlySubStream(Stream stream, long bytesToRead, bool useSyncOverAsyncDispose)
public TarReadOnlySubStream(Stream stream, long bytesToRead)
: base(stream, leaveOpen: true, throwOnDispose: false)
{
_stream = stream;
_useSyncOverAsyncDispose = useSyncOverAsyncDispose;
BytesLeftToRead = bytesToRead;
#if DEBUG_STREAMS
this.DebugConstruct(typeof(TarReadOnlySubStream));
#endif
}
protected override void Dispose(bool disposing)
@@ -26,10 +45,13 @@ internal class TarReadOnlySubStream : Stream
}
_isDisposed = true;
#if DEBUG_STREAMS
this.DebugDispose(typeof(TarReadOnlySubStream));
#endif
if (disposing)
{
// Ensure we read all remaining blocks for this entry.
_stream.Skip(BytesLeftToRead);
Stream.Skip(BytesLeftToRead);
_amountRead += BytesLeftToRead;
// If the last block wasn't a full 512 bytes, skip the remaining padding bytes.
@@ -37,16 +59,11 @@ internal class TarReadOnlySubStream : Stream
if (bytesInLastBlock != 0)
{
if (Utility.UseSyncOverAsyncDispose())
{
_stream.SkipAsync(512 - bytesInLastBlock).GetAwaiter().GetResult();
}
else
{
_stream.Skip(512 - bytesInLastBlock);
}
Stream.Skip(512 - bytesInLastBlock);
}
}
base.Dispose(disposing);
}
#if !LEGACY_DOTNET
@@ -58,8 +75,11 @@ internal class TarReadOnlySubStream : Stream
}
_isDisposed = true;
#if DEBUG_STREAMS
this.DebugDispose(typeof(TarReadOnlySubStream));
#endif
// Ensure we read all remaining blocks for this entry.
await _stream.SkipAsync(BytesLeftToRead).ConfigureAwait(false);
await Stream.SkipAsync(BytesLeftToRead).ConfigureAwait(false);
_amountRead += BytesLeftToRead;
// If the last block wasn't a full 512 bytes, skip the remaining padding bytes.
@@ -67,9 +87,11 @@ internal class TarReadOnlySubStream : Stream
if (bytesInLastBlock != 0)
{
await _stream.SkipAsync(512 - bytesInLastBlock).ConfigureAwait(false);
await Stream.SkipAsync(512 - bytesInLastBlock).ConfigureAwait(false);
}
// Call base Dispose instead of base DisposeAsync to avoid double disposal
base.Dispose(true);
GC.SuppressFinalize(this);
}
#endif
@@ -102,7 +124,7 @@ internal class TarReadOnlySubStream : Stream
{
count = (int)BytesLeftToRead;
}
var read = _stream.Read(buffer, offset, count);
var read = Stream.Read(buffer, offset, count);
if (read > 0)
{
BytesLeftToRead -= read;
@@ -117,7 +139,7 @@ internal class TarReadOnlySubStream : Stream
{
return -1;
}
var value = _stream.ReadByte();
var value = Stream.ReadByte();
if (value != -1)
{
--BytesLeftToRead;
@@ -137,7 +159,7 @@ internal class TarReadOnlySubStream : Stream
{
count = (int)BytesLeftToRead;
}
var read = await _stream
var read = await Stream
.ReadAsync(buffer, offset, count, cancellationToken)
.ConfigureAwait(false);
if (read > 0)
@@ -158,7 +180,7 @@ internal class TarReadOnlySubStream : Stream
{
buffer = buffer.Slice(0, (int)BytesLeftToRead);
}
var read = await _stream.ReadAsync(buffer, cancellationToken).ConfigureAwait(false);
var read = await Stream.ReadAsync(buffer, cancellationToken).ConfigureAwait(false);
if (read > 0)
{
BytesLeftToRead -= read;

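Tar rounds every entry up to a whole number of 512-byte blocks, which is what both dispose paths above skip past; the padding math in isolation (sketch):

// Bytes a tar entry of `size` bytes occupies on disk, including padding.
static long PaddedTarSize(long size)
{
    var remainder = size % 512;
    return remainder == 0 ? size : size + (512 - remainder);
}
// PaddedTarSize(1) == 512, PaddedTarSize(512) == 512, PaddedTarSize(513) == 1024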
View File

@@ -16,15 +16,14 @@ public abstract partial class Volume : IVolume, IAsyncDisposable
Index = index;
ReaderOptions = readerOptions ?? new ReaderOptions();
_baseStream = stream;
// Only rewind if it's a buffered SharpCompressStream (not passthrough)
if (stream is SharpCompressStream ss && !ss.IsPassthrough)
{
ss.Rewind();
}
if (ReaderOptions.LeaveStreamOpen)
{
stream = SharpCompressStream.CreateNonDisposing(stream);
stream = SharpCompressStream.Create(stream, leaveOpen: true);
}
if (stream is IStreamStack ss)
{
ss.SetBuffer(ReaderOptions.BufferSize, true);
}
_actualStream = stream;

View File

@@ -1,6 +1,5 @@
using System.IO;
using System.Threading.Tasks;
using SharpCompress.IO;
namespace SharpCompress.Common.Zip.Headers;

View File

@@ -2,7 +2,6 @@ using System.IO;
using System.Linq;
using System.Threading.Tasks;
using SharpCompress.Common;
using SharpCompress.IO;
namespace SharpCompress.Common.Zip.Headers;

View File

@@ -1,6 +1,5 @@
using System.IO;
using System.Threading.Tasks;
using SharpCompress.IO;
namespace SharpCompress.Common.Zip.Headers;

View File

@@ -1,7 +1,6 @@
using System.IO;
using System.Linq;
using System.Threading.Tasks;
using SharpCompress.IO;
namespace SharpCompress.Common.Zip.Headers;

View File

@@ -1,7 +1,6 @@
using System;
using System.IO;
using System.Threading.Tasks;
using SharpCompress.IO;
namespace SharpCompress.Common.Zip.Headers;

View File

@@ -1,6 +1,5 @@
using System.IO;
using System.Threading.Tasks;
using SharpCompress.IO;
namespace SharpCompress.Common.Zip.Headers;

View File

@@ -1,6 +1,5 @@
using System.IO;
using System.Threading.Tasks;
using SharpCompress.IO;
namespace SharpCompress.Common.Zip.Headers;

View File

@@ -1,6 +1,5 @@
using System.IO;
using System.Threading.Tasks;
using SharpCompress.IO;
namespace SharpCompress.Common.Zip.Headers;

View File

@@ -1,132 +0,0 @@
using System;
using System.Buffers;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
namespace SharpCompress.Common.Zip;
internal partial class PkwareTraditionalCryptoStream
{
public override async Task<int> ReadAsync(
byte[] buffer,
int offset,
int count,
CancellationToken cancellationToken
)
{
if (_mode == CryptoMode.Encrypt)
{
throw new NotSupportedException("This stream does not encrypt via Read()");
}
if (buffer is null)
{
throw new ArgumentNullException(nameof(buffer));
}
var temp = new byte[count];
var readBytes = await _stream
.ReadAsync(temp, 0, count, cancellationToken)
.ConfigureAwait(false);
var decrypted = _encryptor.Decrypt(temp, readBytes);
Buffer.BlockCopy(decrypted, 0, buffer, offset, readBytes);
return readBytes;
}
#if !LEGACY_DOTNET
public override async ValueTask<int> ReadAsync(
Memory<byte> buffer,
CancellationToken cancellationToken = default
)
{
if (_mode == CryptoMode.Encrypt)
{
throw new NotSupportedException("This stream does not encrypt via Read()");
}
byte[] temp = ArrayPool<byte>.Shared.Rent(buffer.Length);
try
{
int readBytes = await _stream
.ReadAsync(temp.AsMemory(0, buffer.Length), cancellationToken)
.ConfigureAwait(false);
var decrypted = _encryptor.Decrypt(temp, readBytes);
decrypted.AsMemory(0, readBytes).CopyTo(buffer);
return readBytes;
}
finally
{
ArrayPool<byte>.Shared.Return(temp);
}
}
#endif
public override async Task WriteAsync(
byte[] buffer,
int offset,
int count,
CancellationToken cancellationToken
)
{
if (_mode == CryptoMode.Decrypt)
{
throw new NotSupportedException("This stream does not Decrypt via Write()");
}
if (count == 0)
{
return;
}
byte[] plaintext;
if (offset != 0)
{
plaintext = new byte[count];
Buffer.BlockCopy(buffer, offset, plaintext, 0, count);
}
else
{
plaintext = buffer;
}
var encrypted = _encryptor.Encrypt(plaintext, count);
await _stream
.WriteAsync(encrypted, 0, encrypted.Length, cancellationToken)
.ConfigureAwait(false);
}
#if !LEGACY_DOTNET
public override async ValueTask WriteAsync(
ReadOnlyMemory<byte> buffer,
CancellationToken cancellationToken = default
)
{
if (_mode == CryptoMode.Decrypt)
{
throw new NotSupportedException("This stream does not Decrypt via Write()");
}
if (buffer.Length == 0)
{
return;
}
// buffer.Span always overlaps itself, so copy unconditionally; the
// encryptor must not mutate the caller's buffer.
var plaintext = buffer.ToArray();
var encrypted = _encryptor.Encrypt(plaintext, buffer.Length);
await _stream
.WriteAsync(encrypted.AsMemory(0, encrypted.Length), cancellationToken)
.ConfigureAwait(false);
}
#endif
}

View File

@@ -1,5 +1,6 @@
using System;
using System.IO;
using SharpCompress.IO;
namespace SharpCompress.Common.Zip;
@@ -9,8 +10,28 @@ internal enum CryptoMode
Decrypt,
}
internal partial class PkwareTraditionalCryptoStream : Stream
internal class PkwareTraditionalCryptoStream : Stream, IStreamStack
{
#if DEBUG_STREAMS
long IStreamStack.InstanceId { get; set; }
#endif
int IStreamStack.DefaultBufferSize { get; set; }
Stream IStreamStack.BaseStream() => _stream;
int IStreamStack.BufferSize
{
get => 0;
set { }
}
int IStreamStack.BufferPosition
{
get => 0;
set { }
}
void IStreamStack.SetPosition(long position) { }
private readonly PkwareTraditionalEncryptionData _encryptor;
private readonly CryptoMode _mode;
private readonly Stream _stream;
@@ -25,6 +46,10 @@ internal partial class PkwareTraditionalCryptoStream : Stream
_encryptor = encryptor;
_stream = stream;
_mode = mode;
#if DEBUG_STREAMS
this.DebugConstruct(typeof(PkwareTraditionalCryptoStream));
#endif
}
public override bool CanRead => (_mode == CryptoMode.Decrypt);
@@ -100,6 +125,9 @@ internal partial class PkwareTraditionalCryptoStream : Stream
return;
}
_isDisposed = true;
#if DEBUG_STREAMS
this.DebugDispose(typeof(PkwareTraditionalCryptoStream));
#endif
base.Dispose(disposing);
_stream.Dispose();
}

View File

@@ -4,7 +4,6 @@ using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;
using SharpCompress.Common.Zip.Headers;
using SharpCompress.IO;
namespace SharpCompress.Common.Zip;
@@ -12,11 +11,7 @@ internal sealed partial class SeekableZipHeaderFactory
{
internal async IAsyncEnumerable<ZipHeader> ReadSeekableHeaderAsync(Stream stream)
{
#if NET8_0_OR_GREATER
await using var reader = new AsyncBinaryReader(stream, leaveOpen: true);
#else
using var reader = new AsyncBinaryReader(stream, leaveOpen: true);
#endif
var reader = new AsyncBinaryReader(stream);
await SeekBackToHeaderAsync(stream, reader);
@@ -132,11 +127,7 @@ internal sealed partial class SeekableZipHeaderFactory
)
{
stream.Seek(directoryEntryHeader.RelativeOffsetOfEntryHeader, SeekOrigin.Begin);
#if NET8_0_OR_GREATER
await using var reader = new AsyncBinaryReader(stream, leaveOpen: true);
#else
using var reader = new AsyncBinaryReader(stream, leaveOpen: true);
#endif
var reader = new AsyncBinaryReader(stream);
var signature = await reader.ReadUInt32Async();
if (await ReadHeader(signature, reader, _zip64) is not LocalEntryHeader localEntryHeader)
{

View File

@@ -24,7 +24,7 @@ internal sealed partial class StreamingZipFilePart
.ConfigureAwait(false);
if (LeaveStreamOpen)
{
return SharpCompressStream.CreateNonDisposing(_decompressionStream);
return SharpCompressStream.Create(_decompressionStream, leaveOpen: true);
}
return _decompressionStream;
}

View File

@@ -26,16 +26,16 @@ internal sealed partial class StreamingZipFilePart : ZipFilePart
);
if (LeaveStreamOpen)
{
return SharpCompressStream.CreateNonDisposing(_decompressionStream);
return SharpCompressStream.Create(_decompressionStream, leaveOpen: true);
}
return _decompressionStream;
}
internal BinaryReader FixStreamedFileLocation(ref Stream stream)
internal BinaryReader FixStreamedFileLocation(ref SharpCompressStream rewindableStream)
{
if (Header.IsDirectory)
{
return new BinaryReader(stream, System.Text.Encoding.Default, leaveOpen: true);
return new BinaryReader(rewindableStream);
}
if (Header.HasData && !Skipped)
@@ -49,12 +49,12 @@ internal sealed partial class StreamingZipFilePart : ZipFilePart
if (_decompressionStream is DeflateStream deflateStream)
{
stream.Position = 0;
((IStreamStack)rewindableStream).StackSeek(0);
}
Skipped = true;
}
var reader = new BinaryReader(stream, System.Text.Encoding.Default, leaveOpen: true);
var reader = new BinaryReader(rewindableStream);
_decompressionStream = null;
return reader;
}

View File

@@ -60,7 +60,7 @@ internal sealed partial class StreamingZipHeaderFactory
private sealed class StreamHeaderAsyncEnumerator : IAsyncEnumerator<ZipHeader>, IDisposable
{
private readonly StreamingZipHeaderFactory _headerFactory;
private readonly SharpCompressStream _sharpCompressStream;
private readonly SharpCompressStream _rewindableStream;
private readonly AsyncBinaryReader _reader;
private readonly CancellationToken _cancellationToken;
private bool _completed;
@@ -72,10 +72,8 @@ internal sealed partial class StreamingZipHeaderFactory
)
{
_headerFactory = headerFactory;
// Use Create to avoid double-wrapping if stream is already a SharpCompressStream,
// and to preserve seekability for DataDescriptorStream which needs to seek backward
_sharpCompressStream = SharpCompressStream.Create(stream);
_reader = new AsyncBinaryReader(_sharpCompressStream, leaveOpen: true);
_rewindableStream = EnsureSharpCompressStream(stream);
_reader = new AsyncBinaryReader(_rewindableStream, leaveOpen: true);
_cancellationToken = cancellationToken;
}
@@ -110,9 +108,7 @@ internal sealed partial class StreamingZipHeaderFactory
continue;
}
var pos = _sharpCompressStream.CanSeek
? (long?)_sharpCompressStream.Position
: null;
var pos = _rewindableStream.CanSeek ? (long?)_rewindableStream.Position : null;
var crc = await _reader
.ReadUInt32Async(_cancellationToken)
@@ -180,9 +176,7 @@ internal sealed partial class StreamingZipHeaderFactory
continue;
}
var pos = _sharpCompressStream.CanSeek
? (long?)_sharpCompressStream.Position
: null;
var pos = _rewindableStream.CanSeek ? (long?)_rewindableStream.Position : null;
headerBytes = await _reader
.ReadUInt32Async(_cancellationToken)
@@ -240,13 +234,8 @@ internal sealed partial class StreamingZipHeaderFactory
{
lastEntryHeader.DataStartPosition = pos - lastEntryHeader.CompressedSize;
// For SeekableSharpCompressStream, seek back to just after the local header signature.
// Plain SharpCompressStream cannot seek to arbitrary positions, so we skip this.
// 4 = First 4 bytes of the entry header (i.e. 50 4B 03 04)
if (_sharpCompressStream is SeekableSharpCompressStream)
{
_sharpCompressStream.Position = pos.Value + 4;
}
_rewindableStream.Position = pos.Value + 4;
}
}
else
@@ -292,12 +281,10 @@ internal sealed partial class StreamingZipHeaderFactory
} // Check if zip is streaming ( Length is 0 and is declared in PostDataDescriptor )
else if (localHeader.Flags.HasFlag(HeaderFlags.UsePostDataDescriptor))
{
// Peek ahead to check if next data is a header or file data.
// Use the IStreamStack.Rewind mechanism to give back the peeked bytes.
var nextHeaderBytes = await _reader
.ReadUInt32Async(_cancellationToken)
.ConfigureAwait(false);
_sharpCompressStream.Rewind(sizeof(uint));
((IStreamStack)_rewindableStream).Rewind(sizeof(uint));
// Check if next data is PostDataDescriptor, streamed file with 0 length
header.HasData = !IsHeader(nextHeaderBytes);
@@ -326,5 +313,29 @@ internal sealed partial class StreamingZipHeaderFactory
{
_reader.Dispose();
}
/// <summary>
/// Ensures the stream is a <see cref="SharpCompressStream"/> so header parsing can use rewind/buffer helpers.
/// </summary>
private static SharpCompressStream EnsureSharpCompressStream(Stream stream)
{
if (stream is SharpCompressStream sharpCompressStream)
{
return sharpCompressStream;
}
// Ensure the stream is already a SharpCompressStream so the buffer/size is set.
// The original code wrapped this with RewindableStream; use SharpCompressStream so we can get the buffer size.
if (stream is SourceStream src)
{
return new SharpCompressStream(
stream,
src.ReaderOptions.LeaveStreamOpen,
bufferSize: src.ReaderOptions.BufferSize
);
}
throw new ArgumentException("Stream must be a SharpCompressStream", nameof(stream));
}
}
}

View File

@@ -2,12 +2,15 @@ using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.Common;
using SharpCompress.Common.Zip.Headers;
using SharpCompress.IO;
namespace SharpCompress.Common.Zip;
internal partial class StreamingZipHeaderFactory : ZipHeaderFactory
internal sealed partial class StreamingZipHeaderFactory : ZipHeaderFactory
{
private IEnumerable<ZipEntry>? _entries;
@@ -20,192 +23,207 @@ internal partial class StreamingZipHeaderFactory : ZipHeaderFactory
internal IEnumerable<ZipHeader> ReadStreamHeader(Stream stream)
{
// Use Create to avoid double-wrapping if stream is already a SharpCompressStream,
// and to preserve seekability for DataDescriptorStream which needs to seek backward
var sharpCompressStream = SharpCompressStream.Create(stream);
var reader = new BinaryReader(
sharpCompressStream,
System.Text.Encoding.Default,
leaveOpen: true
);
try
if (stream is not SharpCompressStream) //ensure the stream is a SharpCompressStream so the buffer/size is already set
{
while (true)
//the original code wrapped this with RewindableStream; wrap with SharpCompressStream so we can pick up the buffer size
if (stream is SourceStream src)
{
uint headerBytes = 0;
if (
_lastEntryHeader != null
&& FlagUtility.HasFlag(
_lastEntryHeader.Flags,
HeaderFlags.UsePostDataDescriptor
)
)
stream = new SharpCompressStream(
stream,
src.ReaderOptions.LeaveStreamOpen,
bufferSize: src.ReaderOptions.BufferSize
);
}
else
{
throw new ArgumentException("Stream must be a SharpCompressStream", nameof(stream));
}
}
var rewindableStream = (SharpCompressStream)stream;
while (true)
{
var reader = new BinaryReader(rewindableStream);
uint headerBytes = 0;
if (
_lastEntryHeader != null
&& FlagUtility.HasFlag(_lastEntryHeader.Flags, HeaderFlags.UsePostDataDescriptor)
)
{
if (_lastEntryHeader.Part is null)
{
if (_lastEntryHeader.Part is null)
{
continue;
}
// removed requirement for FixStreamedFileLocation()
var pos = sharpCompressStream.CanSeek
? (long?)sharpCompressStream.Position
: null;
var crc = reader.ReadUInt32();
if (crc == POST_DATA_DESCRIPTOR)
{
crc = reader.ReadUInt32();
}
_lastEntryHeader.Crc = crc;
//attempt 32bit read
ulong compSize = reader.ReadUInt32();
ulong uncompSize = reader.ReadUInt32();
headerBytes = reader.ReadUInt32();
//check for zip64 sentinel or unexpected header
bool isSentinel = compSize == 0xFFFFFFFF || uncompSize == 0xFFFFFFFF;
bool isHeader = headerBytes == 0x04034b50 || headerBytes == 0x02014b50;
if (!isHeader && !isSentinel)
{
//reshuffle into 64-bit values
compSize = (uncompSize << 32) | compSize;
uncompSize = ((ulong)headerBytes << 32) | reader.ReadUInt32();
headerBytes = reader.ReadUInt32();
}
else if (isSentinel)
{
//standards-compliant zip64 descriptor
compSize = reader.ReadUInt64();
uncompSize = reader.ReadUInt64();
}
_lastEntryHeader.CompressedSize = (long)compSize;
_lastEntryHeader.UncompressedSize = (long)uncompSize;
if (pos.HasValue)
{
_lastEntryHeader.DataStartPosition = pos - _lastEntryHeader.CompressedSize;
}
continue;
}
else if (_lastEntryHeader != null && _lastEntryHeader.IsZip64)
// removed requirement for FixStreamedFileLocation()
var pos = rewindableStream.CanSeek ? (long?)rewindableStream.Position : null;
var crc = reader.ReadUInt32();
if (crc == POST_DATA_DESCRIPTOR)
{
if (_lastEntryHeader.Part is null)
{
continue;
}
crc = reader.ReadUInt32();
}
_lastEntryHeader.Crc = crc;
//reader = ((StreamingZipFilePart)_lastEntryHeader.Part).FixStreamedFileLocation(
// ref sharpCompressStream
//);
//attempt 32bit read
ulong compSize = reader.ReadUInt32();
ulong uncompSize = reader.ReadUInt32();
headerBytes = reader.ReadUInt32();
var pos = sharpCompressStream.CanSeek
? (long?)sharpCompressStream.Position
: null;
//check for zip64 sentinel or unexpected header
bool isSentinel = compSize == 0xFFFFFFFF || uncompSize == 0xFFFFFFFF;
bool isHeader = headerBytes == 0x04034b50 || headerBytes == 0x02014b50;
if (!isHeader && !isSentinel)
{
//reshuffle into 64-bit values
compSize = (uncompSize << 32) | compSize;
uncompSize = ((ulong)headerBytes << 32) | reader.ReadUInt32();
headerBytes = reader.ReadUInt32();
}
else if (isSentinel)
{
//standards-compliant zip64 descriptor
compSize = reader.ReadUInt64();
uncompSize = reader.ReadUInt64();
}
var version = reader.ReadUInt16();
var flags = (HeaderFlags)reader.ReadUInt16();
var compressionMethod = (ZipCompressionMethod)reader.ReadUInt16();
var lastModifiedDate = reader.ReadUInt16();
var lastModifiedTime = reader.ReadUInt16();
_lastEntryHeader.CompressedSize = (long)compSize;
_lastEntryHeader.UncompressedSize = (long)uncompSize;
var crc = reader.ReadUInt32();
if (pos.HasValue)
{
_lastEntryHeader.DataStartPosition = pos - _lastEntryHeader.CompressedSize;
}
}
else if (_lastEntryHeader != null && _lastEntryHeader.IsZip64)
{
if (_lastEntryHeader.Part is null)
{
continue;
}
if (crc == POST_DATA_DESCRIPTOR)
{
crc = reader.ReadUInt32();
}
_lastEntryHeader.Crc = crc;
//reader = ((StreamingZipFilePart)_lastEntryHeader.Part).FixStreamedFileLocation(
// ref rewindableStream
//);
// The DataDescriptor can be either 64bit or 32bit
var compressed_size = reader.ReadUInt32();
var uncompressed_size = reader.ReadUInt32();
var pos = rewindableStream.CanSeek ? (long?)rewindableStream.Position : null;
// Check if we have header or 64bit DataDescriptor
var test_header = !(headerBytes == 0x04034b50 || headerBytes == 0x02014b50);
headerBytes = reader.ReadUInt32();
var test_64bit = ((long)uncompressed_size << 32) | compressed_size;
if (test_64bit == _lastEntryHeader.CompressedSize && test_header)
{
_lastEntryHeader.UncompressedSize =
((long)reader.ReadUInt32() << 32) | headerBytes;
headerBytes = reader.ReadUInt32();
}
else
{
_lastEntryHeader.UncompressedSize = uncompressed_size;
}
var version = reader.ReadUInt16();
var flags = (HeaderFlags)reader.ReadUInt16();
var compressionMethod = (ZipCompressionMethod)reader.ReadUInt16();
var lastModifiedDate = reader.ReadUInt16();
var lastModifiedTime = reader.ReadUInt16();
if (pos.HasValue)
{
_lastEntryHeader.DataStartPosition = pos - _lastEntryHeader.CompressedSize;
var crc = reader.ReadUInt32();
// 4 = First 4 bytes of the entry header (i.e. 50 4B 03 04)
sharpCompressStream.Position = pos.Value + 4;
}
if (crc == POST_DATA_DESCRIPTOR)
{
crc = reader.ReadUInt32();
}
_lastEntryHeader.Crc = crc;
// The DataDescriptor can be either 64bit or 32bit
var compressed_size = reader.ReadUInt32();
var uncompressed_size = reader.ReadUInt32();
// Check if we have header or 64bit DataDescriptor
var test_header = !(headerBytes == 0x04034b50 || headerBytes == 0x02014b50);
var test_64bit = ((long)uncompressed_size << 32) | compressed_size;
if (test_64bit == _lastEntryHeader.CompressedSize && test_header)
{
_lastEntryHeader.UncompressedSize =
((long)reader.ReadUInt32() << 32) | headerBytes;
headerBytes = reader.ReadUInt32();
}
else
{
headerBytes = reader.ReadUInt32();
_lastEntryHeader.UncompressedSize = uncompressed_size;
}
_lastEntryHeader = null;
var header = ReadHeader(headerBytes, reader);
if (header is null)
if (pos.HasValue)
{
yield break;
_lastEntryHeader.DataStartPosition = pos - _lastEntryHeader.CompressedSize;
// 4 = First 4 bytes of the entry header (i.e. 50 4B 03 04)
rewindableStream.Position = pos.Value + 4;
}
//entry could be zero bytes so we need to know that.
if (header.ZipHeaderType == ZipHeaderType.LocalEntry)
{
var local_header = ((LocalEntryHeader)header);
var dir_header = _entries?.FirstOrDefault(entry =>
entry.Key == local_header.Name
&& local_header.CompressedSize == 0
&& local_header.UncompressedSize == 0
&& local_header.Crc == 0
&& local_header.IsDirectory == false
);
if (dir_header != null)
{
local_header.UncompressedSize = dir_header.Size;
local_header.CompressedSize = dir_header.CompressedSize;
local_header.Crc = (uint)dir_header.Crc;
}
// If we have CompressedSize, there is data to be read
if (local_header.CompressedSize > 0)
{
header.HasData = true;
} // Check if zip is streaming ( Length is 0 and is declared in PostDataDescriptor )
else if (local_header.Flags.HasFlag(HeaderFlags.UsePostDataDescriptor))
{
// Peek ahead to check if next data is a header or file data.
// Use the IStreamStack.Rewind mechanism to give back the peeked bytes.
var nextHeaderBytes = reader.ReadUInt32();
sharpCompressStream.Rewind(sizeof(uint));
// Check if next data is PostDataDescriptor, streamed file with 0 length
header.HasData = !IsHeader(nextHeaderBytes);
}
else // We are not streaming and compressed size is 0, we have no data
{
header.HasData = false;
}
}
yield return header;
}
}
finally
{
reader.Dispose();
else
{
headerBytes = reader.ReadUInt32();
}
_lastEntryHeader = null;
var header = ReadHeader(headerBytes, reader);
if (header is null)
{
yield break;
}
//entry could be zero bytes so we need to know that.
if (header.ZipHeaderType == ZipHeaderType.LocalEntry)
{
var local_header = ((LocalEntryHeader)header);
var dir_header = _entries?.FirstOrDefault(entry =>
entry.Key == local_header.Name
&& local_header.CompressedSize == 0
&& local_header.UncompressedSize == 0
&& local_header.Crc == 0
&& local_header.IsDirectory == false
);
if (dir_header != null)
{
local_header.UncompressedSize = dir_header.Size;
local_header.CompressedSize = dir_header.CompressedSize;
local_header.Crc = (uint)dir_header.Crc;
}
// If we have CompressedSize, there is data to be read
if (local_header.CompressedSize > 0)
{
header.HasData = true;
} // Check if zip is streaming ( Length is 0 and is declared in PostDataDescriptor )
else if (local_header.Flags.HasFlag(HeaderFlags.UsePostDataDescriptor))
{
var nextHeaderBytes = reader.ReadUInt32();
((IStreamStack)rewindableStream).Rewind(sizeof(uint));
// Check if next data is PostDataDescriptor, streamed file with 0 length
header.HasData = !IsHeader(nextHeaderBytes);
}
else // We are not streaming and compressed size is 0, we have no data
{
header.HasData = false;
}
}
yield return header;
}
}
private static SharpCompressStream EnsureSharpCompressStream(Stream stream)
{
if (stream is SharpCompressStream sharpCompressStream)
{
return sharpCompressStream;
}
// Ensure the stream is already a SharpCompressStream so the buffer/size is set.
// The original code wrapped this with RewindableStream; use SharpCompressStream so we can get the buffer size.
if (stream is SourceStream src)
{
return new SharpCompressStream(
stream,
src.ReaderOptions.LeaveStreamOpen,
bufferSize: src.ReaderOptions.BufferSize
);
}
throw new ArgumentException("Stream must be a SharpCompressStream", nameof(stream));
}
}
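The descriptor handling above distinguishes three on-disk shapes after reading the CRC plus three more 32-bit words; a recap:

// next word is a header signature (0x04034b50 / 0x02014b50)
//   -> plain 32-bit descriptor: comp/uncomp are final.
// comp or uncomp == 0xFFFFFFFF
//   -> zip64 sentinel form: the real sizes follow as two 64-bit words.
// otherwise
//   -> unmarked 64-bit descriptor: (uncomp << 32) | comp is the compressed
//      size and (next << 32) | following-u32 the uncompressed size.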

View File

@@ -1,139 +0,0 @@
using System;
using System.Buffers;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
namespace SharpCompress.Common.Zip;
internal partial class WinzipAesCryptoStream
{
#if !LEGACY_DOTNET
public override async ValueTask DisposeAsync()
{
if (_isDisposed)
{
return;
}
_isDisposed = true;
// Read out last 10 auth bytes asynchronously
byte[] authBytes = ArrayPool<byte>.Shared.Rent(10);
try
{
await _stream.ReadFullyAsync(authBytes, 0, 10).ConfigureAwait(false);
}
finally
{
ArrayPool<byte>.Shared.Return(authBytes);
await _stream.DisposeAsync().ConfigureAwait(false);
}
}
#endif
public override async Task<int> ReadAsync(
byte[] buffer,
int offset,
int count,
CancellationToken cancellationToken
)
{
if (_totalBytesLeftToRead == 0)
{
return 0;
}
var bytesToRead = count;
if (count > _totalBytesLeftToRead)
{
bytesToRead = (int)_totalBytesLeftToRead;
}
var read = await _stream
.ReadAsync(buffer, offset, bytesToRead, cancellationToken)
.ConfigureAwait(false);
_totalBytesLeftToRead -= read;
ReadTransformBlocks(buffer, offset, read);
return read;
}
#if !LEGACY_DOTNET
public override async ValueTask<int> ReadAsync(
Memory<byte> buffer,
CancellationToken cancellationToken = default
)
{
if (_totalBytesLeftToRead == 0)
{
return 0;
}
var bytesToRead = buffer.Length;
if (buffer.Length > _totalBytesLeftToRead)
{
bytesToRead = (int)_totalBytesLeftToRead;
}
var read = await _stream
.ReadAsync(buffer.Slice(0, bytesToRead), cancellationToken)
.ConfigureAwait(false);
_totalBytesLeftToRead -= read;
ReadTransformBlocks(buffer.Span, read);
return read;
}
private void ReadTransformBlocks(Span<byte> buffer, int count)
{
var posn = 0;
var last = count;
while (posn < buffer.Length && posn < last)
{
var n = ReadTransformOneBlock(buffer, posn, last);
posn += n;
}
}
private int ReadTransformOneBlock(Span<byte> buffer, int offset, int last)
{
if (_isFinalBlock)
{
throw new InvalidOperationException();
}
var bytesRemaining = last - offset;
var bytesToRead =
(bytesRemaining > BLOCK_SIZE_IN_BYTES) ? BLOCK_SIZE_IN_BYTES : bytesRemaining;
// update the counter
System.Buffers.Binary.BinaryPrimitives.WriteInt32LittleEndian(_counter, _nonce++);
// Determine if this is the final block
if ((bytesToRead == bytesRemaining) && (_totalBytesLeftToRead == 0))
{
_counterOut = _transform.TransformFinalBlock(_counter, 0, BLOCK_SIZE_IN_BYTES);
_isFinalBlock = true;
}
else
{
_transform.TransformBlock(
_counter,
0, // offset
BLOCK_SIZE_IN_BYTES,
_counterOut,
0
); // offset
}
XorInPlace(buffer, offset, bytesToRead);
return bytesToRead;
}
private void XorInPlace(Span<byte> buffer, int offset, int count)
{
for (var i = 0; i < count; i++)
{
buffer[offset + i] = (byte)(_counterOut[i] ^ buffer[offset + i]);
}
}
#endif
}
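The block transform above is AES in CTR mode: encrypt a little-endian counter block, then XOR the result with the ciphertext. A self-contained sketch (key and starting nonce are assumptions; WinZip AES normally starts its counter at 1):

using System;
using System.Buffers.Binary;
using System.Security.Cryptography;

static void CtrXorInPlace(byte[] key, byte[] data) // key: 16/24/32 bytes
{
    using var aes = Aes.Create();
    aes.Key = key;
    aes.Mode = CipherMode.ECB; // CTR keystream = ECB-encrypted counter blocks
    aes.Padding = PaddingMode.None;
    using var enc = aes.CreateEncryptor();
    var counter = new byte[16];
    var keystream = new byte[16];
    var nonce = 1;
    for (var pos = 0; pos < data.Length; pos += 16)
    {
        BinaryPrimitives.WriteInt32LittleEndian(counter, nonce++);
        enc.TransformBlock(counter, 0, 16, keystream, 0);
        for (var i = 0; i < Math.Min(16, data.Length - pos); i++)
        {
            data[pos + i] ^= keystream[i];
        }
    }
}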

View File

@@ -1,14 +1,33 @@
using System;
using System.Buffers;
using System.Buffers.Binary;
using System.IO;
using System.Security.Cryptography;
using System.Threading.Tasks;
using SharpCompress.IO;
namespace SharpCompress.Common.Zip;
internal partial class WinzipAesCryptoStream : Stream
internal class WinzipAesCryptoStream : Stream, IStreamStack
{
#if DEBUG_STREAMS
long IStreamStack.InstanceId { get; set; }
#endif
int IStreamStack.DefaultBufferSize { get; set; }
Stream IStreamStack.BaseStream() => _stream;
int IStreamStack.BufferSize
{
get => 0;
set { }
}
int IStreamStack.BufferPosition
{
get => 0;
set { }
}
void IStreamStack.SetPosition(long position) { }
private const int BLOCK_SIZE_IN_BYTES = 16;
private readonly SymmetricAlgorithm _cipher;
private readonly byte[] _counter = new byte[BLOCK_SIZE_IN_BYTES];
@@ -29,6 +48,10 @@ internal partial class WinzipAesCryptoStream : Stream
_stream = stream;
_totalBytesLeftToRead = length;
#if DEBUG_STREAMS
this.DebugConstruct(typeof(WinzipAesCryptoStream));
#endif
_cipher = CreateCipher(winzipAesEncryptionData);
var iv = new byte[BLOCK_SIZE_IN_BYTES];
@@ -66,36 +89,18 @@ internal partial class WinzipAesCryptoStream : Stream
return;
}
_isDisposed = true;
#if DEBUG_STREAMS
this.DebugDispose(typeof(WinzipAesCryptoStream));
#endif
if (disposing)
{
// Read out last 10 auth bytes - catch exceptions for async-only streams
if (Utility.UseSyncOverAsyncDispose())
{
var ten = ArrayPool<byte>.Shared.Rent(10);
try
{
_stream.ReadFullyAsync(ten, 0, 10).GetAwaiter().GetResult();
}
finally
{
ArrayPool<byte>.Shared.Return(ten);
}
}
else
{
Span<byte> ten = stackalloc byte[10];
_stream.ReadFully(ten);
}
//read out last 10 auth bytes
Span<byte> ten = stackalloc byte[10];
_stream.ReadFully(ten);
_stream.Dispose();
}
}
private async Task ReadAuthBytesAsync()
{
byte[] authBytes = new byte[10];
await _stream.ReadFullyAsync(authBytes, 0, 10).ConfigureAwait(false);
}
public override void Flush() { }
public override int Read(byte[] buffer, int offset, int count)

View File

@@ -39,7 +39,7 @@ internal abstract partial class ZipFilePart
.ConfigureAwait(false);
if (LeaveStreamOpen)
{
return SharpCompressStream.CreateNonDisposing(decompressionStream);
return SharpCompressStream.Create(decompressionStream, leaveOpen: true);
}
return decompressionStream;
}
@@ -63,7 +63,7 @@ internal abstract partial class ZipFilePart
) || Header.IsZip64
)
{
plainStream = SharpCompressStream.CreateNonDisposing(plainStream); //make sure AES doesn't close
plainStream = SharpCompressStream.Create(plainStream, leaveOpen: true); //make sure AES doesn't close
}
else
{
@@ -136,55 +136,28 @@ internal abstract partial class ZipFilePart
}
case ZipCompressionMethod.Shrink:
{
return await ShrinkStream
.CreateAsync(
stream,
CompressionMode.Decompress,
Header.CompressedSize,
Header.UncompressedSize,
cancellationToken
)
.ConfigureAwait(false);
return new ShrinkStream(
stream,
CompressionMode.Decompress,
Header.CompressedSize,
Header.UncompressedSize
);
}
case ZipCompressionMethod.Reduce1:
{
return await ReduceStream.CreateAsync(
stream,
Header.CompressedSize,
Header.UncompressedSize,
1,
cancellationToken
);
return new ReduceStream(stream, Header.CompressedSize, Header.UncompressedSize, 1);
}
case ZipCompressionMethod.Reduce2:
{
return await ReduceStream.CreateAsync(
stream,
Header.CompressedSize,
Header.UncompressedSize,
2,
cancellationToken
);
return new ReduceStream(stream, Header.CompressedSize, Header.UncompressedSize, 2);
}
case ZipCompressionMethod.Reduce3:
{
return await ReduceStream.CreateAsync(
stream,
Header.CompressedSize,
Header.UncompressedSize,
3,
cancellationToken
);
return new ReduceStream(stream, Header.CompressedSize, Header.UncompressedSize, 3);
}
case ZipCompressionMethod.Reduce4:
{
return await ReduceStream.CreateAsync(
stream,
Header.CompressedSize,
Header.UncompressedSize,
4,
cancellationToken
);
return new ReduceStream(stream, Header.CompressedSize, Header.UncompressedSize, 4);
}
case ZipCompressionMethod.Explode:
{
@@ -228,7 +201,7 @@ internal abstract partial class ZipFilePart
await stream
.ReadFullyAsync(props, 0, propsSize, cancellationToken)
.ConfigureAwait(false);
return await LzmaStream.CreateAsync(
return LzmaStream.Create(
props,
stream,
Header.CompressedSize > 0 ? Header.CompressedSize - 4 - props.Length : -1,
@@ -249,9 +222,7 @@ internal abstract partial class ZipFilePart
{
var props = new byte[2];
await stream.ReadFullyAsync(props, 0, 2, cancellationToken).ConfigureAwait(false);
return await PpmdStream
.CreateAsync(new PpmdProperties(props), stream, false, cancellationToken)
.ConfigureAwait(false);
return new PpmdStream(new PpmdProperties(props), stream, false);
}
case ZipCompressionMethod.WinzipAes:
{
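The dispatch above is internal; from the public API the decoder choice is invisible. A typical consumer path, assuming a file named example.zip (hypothetical):

using SharpCompress.Archives.Zip;

using var archive = ZipArchive.Open("example.zip"); // hypothetical path
foreach (var entry in archive.Entries)
{
    if (entry.IsDirectory) continue;
    using var entryStream = entry.OpenEntryStream();
    // entryStream is the decompression stream picked by the switch above
    // (Deflate, Shrink, ReduceN, LZMA, PPMd, ...), wrapped as needed.
}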

View File

@@ -45,7 +45,7 @@ internal abstract partial class ZipFilePart : FilePart
);
if (LeaveStreamOpen)
{
return SharpCompressStream.CreateNonDisposing(decompressionStream);
return SharpCompressStream.Create(decompressionStream, leaveOpen: true);
}
return decompressionStream;
}
@@ -88,39 +88,19 @@ internal abstract partial class ZipFilePart : FilePart
}
case ZipCompressionMethod.Reduce1:
{
return ReduceStream.Create(
stream,
Header.CompressedSize,
Header.UncompressedSize,
1
);
return new ReduceStream(stream, Header.CompressedSize, Header.UncompressedSize, 1);
}
case ZipCompressionMethod.Reduce2:
{
return ReduceStream.Create(
stream,
Header.CompressedSize,
Header.UncompressedSize,
2
);
return new ReduceStream(stream, Header.CompressedSize, Header.UncompressedSize, 2);
}
case ZipCompressionMethod.Reduce3:
{
return ReduceStream.Create(
stream,
Header.CompressedSize,
Header.UncompressedSize,
3
);
return new ReduceStream(stream, Header.CompressedSize, Header.UncompressedSize, 3);
}
case ZipCompressionMethod.Reduce4:
{
return ReduceStream.Create(
stream,
Header.CompressedSize,
Header.UncompressedSize,
4
);
return new ReduceStream(stream, Header.CompressedSize, Header.UncompressedSize, 4);
}
case ZipCompressionMethod.Explode:
{
@@ -150,26 +130,18 @@ internal abstract partial class ZipFilePart : FilePart
{
throw new NotSupportedException("LZMA with pkware encryption.");
}
using (
var reader = new BinaryReader(
stream,
System.Text.Encoding.Default,
leaveOpen: true
)
)
{
reader.ReadUInt16(); //LZMA version
var props = new byte[reader.ReadUInt16()];
reader.Read(props, 0, props.Length);
return LzmaStream.Create(
props,
stream,
Header.CompressedSize > 0 ? Header.CompressedSize - 4 - props.Length : -1,
FlagUtility.HasFlag(Header.Flags, HeaderFlags.Bit1)
? -1
: Header.UncompressedSize
);
}
var reader = new BinaryReader(stream);
reader.ReadUInt16(); //LZMA version
var props = new byte[reader.ReadUInt16()];
reader.Read(props, 0, props.Length);
return LzmaStream.Create(
props,
stream,
Header.CompressedSize > 0 ? Header.CompressedSize - 4 - props.Length : -1,
FlagUtility.HasFlag(Header.Flags, HeaderFlags.Bit1)
? -1
: Header.UncompressedSize
);
}
case ZipCompressionMethod.Xz:
{
@@ -183,7 +155,7 @@ internal abstract partial class ZipFilePart : FilePart
{
Span<byte> props = stackalloc byte[2];
stream.ReadFully(props);
return PpmdStream.Create(new PpmdProperties(props), stream, false);
return new PpmdStream(new PpmdProperties(props), stream, false);
}
case ZipCompressionMethod.WinzipAes:
{
@@ -241,7 +213,7 @@ internal abstract partial class ZipFilePart : FilePart
) || Header.IsZip64
)
{
plainStream = SharpCompressStream.CreateNonDisposing(plainStream); //make sure AES doesn't close
plainStream = SharpCompressStream.Create(plainStream, leaveOpen: true); //make sure AES doesn't close
}
else
{
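As the reader code above implies, a zip LZMA (method 14) entry body starts with a 4-byte sub-header before the raw coder properties: a 2-byte version and a 2-byte property length, which is also why the compressed size is adjusted by 4 + props.Length. A standalone sketch of that parse (hypothetical helper, little-endian per the zip format):

using System.IO;

static byte[] ReadZipLzmaProps(Stream stream)
{
    var header = new byte[4];
    ReadExactly(stream, header, 4);
    // header[0..1] = LZMA SDK version (ignored), header[2..3] = props length
    int propsSize = header[2] | (header[3] << 8);
    var props = new byte[propsSize];
    ReadExactly(stream, props, propsSize);
    return props;
}

static void ReadExactly(Stream stream, byte[] buffer, int count)
{
    int read = 0;
    while (read < count)
    {
        int n = stream.Read(buffer, read, count - read);
        if (n == 0) throw new EndOfStreamException();
        read += n;
    }
}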

View File

@@ -30,14 +30,35 @@ using System;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.IO;
namespace SharpCompress.Compressors.ADC;
/// <summary>
/// Provides a forward readable only stream that decompresses ADC data
/// </summary>
public sealed partial class ADCStream : Stream
public sealed partial class ADCStream : Stream, IStreamStack
{
#if DEBUG_STREAMS
long IStreamStack.InstanceId { get; set; }
#endif
int IStreamStack.DefaultBufferSize { get; set; }
Stream IStreamStack.BaseStream() => _stream;
int IStreamStack.BufferSize
{
get => 0;
set { }
}
int IStreamStack.BufferPosition
{
get => 0;
set { }
}
void IStreamStack.SetPosition(long position) { }
/// <summary>
/// This stream holds the compressed data
/// </summary>
@@ -76,6 +97,9 @@ public sealed partial class ADCStream : Stream
}
_stream = stream;
#if DEBUG_STREAMS
this.DebugConstruct(typeof(ADCStream));
#endif
}
public override bool CanRead => _stream.CanRead;
@@ -101,6 +125,9 @@ public sealed partial class ADCStream : Stream
return;
}
_isDisposed = true;
#if DEBUG_STREAMS
this.DebugDispose(typeof(ADCStream));
#endif
base.Dispose(disposing);
}

View File

@@ -1,75 +0,0 @@
using System;
using System.IO;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.Compressors.RLE90;
public partial class ArcLzwStream
{
public override async Task<int> ReadAsync(
byte[] buffer,
int offset,
int count,
CancellationToken cancellationToken
)
{
if (_processed)
{
return 0;
}
_processed = true;
var data = new byte[_compressedSize];
int totalRead = 0;
while (totalRead < _compressedSize)
{
int read = await _stream
.ReadAsync(data, totalRead, _compressedSize - totalRead, cancellationToken)
.ConfigureAwait(false);
if (read == 0)
{
break;
}
totalRead += read;
}
var decoded = Decompress(data, _useCrunched);
var result = decoded.Count();
if (_useCrunched)
{
var unpacked = RLE.UnpackRLE(decoded.ToArray());
unpacked.CopyTo(buffer, 0);
result = unpacked.Count;
}
else
{
decoded.CopyTo(buffer, 0);
}
return result;
}
#if !LEGACY_DOTNET
public override async ValueTask<int> ReadAsync(
Memory<byte> buffer,
CancellationToken cancellationToken = default
)
{
if (buffer.IsEmpty)
{
return 0;
}
byte[] array = System.Buffers.ArrayPool<byte>.Shared.Rent(buffer.Length);
try
{
int read = await ReadAsync(array, 0, buffer.Length, cancellationToken)
.ConfigureAwait(false);
array.AsSpan(0, read).CopyTo(buffer.Span);
return read;
}
finally
{
System.Buffers.ArrayPool<byte>.Shared.Return(array);
}
}
#endif
}

View File

@@ -4,9 +4,30 @@ using System.IO;
using System.Linq;
using SharpCompress.Compressors.RLE90;
using SharpCompress.Compressors.Squeezed;
using SharpCompress.IO;
public partial class ArcLzwStream : Stream
public partial class ArcLzwStream : Stream, IStreamStack
{
#if DEBUG_STREAMS
long IStreamStack.InstanceId { get; set; }
#endif
int IStreamStack.DefaultBufferSize { get; set; }
Stream IStreamStack.BaseStream() => _stream;
int IStreamStack.BufferSize
{
get => 0;
set { }
}
int IStreamStack.BufferPosition
{
get => 0;
set { }
}
void IStreamStack.SetPosition(long position) { }
private Stream _stream;
private bool _processed;
private bool _useCrunched;
@@ -33,6 +54,9 @@ public partial class ArcLzwStream : Stream
public ArcLzwStream(Stream stream, int compressedSize, bool useCrunched = true)
{
_stream = stream;
#if DEBUG_STREAMS
this.DebugConstruct(typeof(ArcLzwStream));
#endif
_useCrunched = useCrunched;
_compressedSize = compressedSize;
@@ -199,6 +223,9 @@ public partial class ArcLzwStream : Stream
protected override void Dispose(bool disposing)
{
#if DEBUG_STREAMS
this.DebugDispose(typeof(ArcLzwStream));
#endif
base.Dispose(disposing);
}
}
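The crunched path above decompresses LZW output and then runs it through RLE.UnpackRLE. A sketch of the classic ARC RLE90 convention that pass is understood to implement, assuming the usual rules (0x90 is an escape; 0x90 0x00 is a literal 0x90; 0x90 n expands the previous byte to a run of n):

using System.Collections.Generic;

static List<byte> UnpackRle90(byte[] src)
{
    var dst = new List<byte>();
    byte last = 0;
    for (int i = 0; i < src.Length; i++)
    {
        if (src[i] != 0x90)
        {
            last = src[i];
            dst.Add(last);
        }
        else if (++i < src.Length && src[i] == 0x00)
        {
            last = 0x90;           // escaped literal DLE byte
            dst.Add(last);
        }
        else if (i < src.Length)
        {
            for (int k = 1; k < src[i]; k++)
            {
                dst.Add(last);     // previous byte was already emitted once
            }
        }
    }
    return dst;
}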

View File

@@ -1,68 +1,72 @@
using System;
using System.IO;
namespace SharpCompress.Compressors.Arj;
[CLSCompliant(true)]
public partial class BitReader
namespace SharpCompress.Compressors.Arj
{
private readonly Stream _input;
private int _bitBuffer; // currently buffered bits
private int _bitCount; // number of bits in buffer
public BitReader(Stream input)
[CLSCompliant(true)]
public partial class BitReader
{
_input = input ?? throw new ArgumentNullException(nameof(input));
_bitBuffer = 0;
_bitCount = 0;
}
private readonly Stream _input;
private int _bitBuffer; // currently buffered bits
private int _bitCount; // number of bits in buffer
/// <summary>
/// Reads a single bit from the stream. Returns 0 or 1.
/// </summary>
public int ReadBit()
{
if (_bitCount == 0)
public BitReader(Stream input)
{
int nextByte = _input.ReadByte();
if (nextByte < 0)
_input = input ?? throw new ArgumentNullException(nameof(input));
_bitBuffer = 0;
_bitCount = 0;
}
/// <summary>
/// Reads a single bit from the stream. Returns 0 or 1.
/// </summary>
public int ReadBit()
{
if (_bitCount == 0)
{
throw new EndOfStreamException("No more data available in BitReader.");
int nextByte = _input.ReadByte();
if (nextByte < 0)
{
throw new EndOfStreamException("No more data available in BitReader.");
}
_bitBuffer = nextByte;
_bitCount = 8;
}
_bitBuffer = nextByte;
_bitCount = 8;
int bit = (_bitBuffer >> (_bitCount - 1)) & 1;
_bitCount--;
return bit;
}
int bit = (_bitBuffer >> (_bitCount - 1)) & 1;
_bitCount--;
return bit;
}
/// <summary>
/// Reads n bits (up to 32) from the stream.
/// </summary>
public int ReadBits(int count)
{
if (count < 0 || count > 32)
/// <summary>
/// Reads n bits (up to 32) from the stream.
/// </summary>
public int ReadBits(int count)
{
throw new ArgumentOutOfRangeException(nameof(count), "Count must be between 0 and 32.");
if (count < 0 || count > 32)
{
throw new ArgumentOutOfRangeException(
nameof(count),
"Count must be between 0 and 32."
);
}
int result = 0;
for (int i = 0; i < count; i++)
{
result = (result << 1) | ReadBit();
}
return result;
}
int result = 0;
for (int i = 0; i < count; i++)
/// <summary>
/// Resets any buffered bits.
/// </summary>
public void AlignToByte()
{
result = (result << 1) | ReadBit();
_bitCount = 0;
_bitBuffer = 0;
}
return result;
}
/// <summary>
/// Resets any buffered bits.
/// </summary>
public void AlignToByte()
{
_bitCount = 0;
_bitBuffer = 0;
}
}
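ReadBit consumes the most significant bit of each input byte first, and ReadBits builds values by shifting bits in from the left. A quick usage check against a single byte:

using System.IO;

var reader = new BitReader(new MemoryStream(new byte[] { 0b1011_0000 }));
int first = reader.ReadBit();    // 1 (bit 7 of the byte)
int next3 = reader.ReadBits(3);  // bits 6..4 = 0b011 = 3
reader.AlignToByte();            // drops the 4 buffered bits that remain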

View File

@@ -2,41 +2,42 @@ using System;
using System.Collections;
using System.Collections.Generic;
namespace SharpCompress.Compressors.Arj;
/// <summary>
/// Iterator that reads & pushes values back into the ring buffer.
/// </summary>
public class HistoryIterator : IEnumerator<byte>
namespace SharpCompress.Compressors.Arj
{
private int _index;
private readonly IRingBuffer _ring;
public HistoryIterator(IRingBuffer ring, int startIndex)
/// <summary>
/// Iterator that reads & pushes values back into the ring buffer.
/// </summary>
public class HistoryIterator : IEnumerator<byte>
{
_ring = ring;
_index = startIndex;
private int _index;
private readonly IRingBuffer _ring;
public HistoryIterator(IRingBuffer ring, int startIndex)
{
_ring = ring;
_index = startIndex;
}
public bool MoveNext()
{
Current = _ring[_index];
_index = unchecked(_index + 1);
// Push value back into the ring buffer
_ring.Push(Current);
return true; // iterator is infinite
}
public void Reset()
{
throw new NotSupportedException();
}
public byte Current { get; private set; }
object IEnumerator.Current => Current;
public void Dispose() { }
}
public bool MoveNext()
{
Current = _ring[_index];
_index = unchecked(_index + 1);
// Push value back into the ring buffer
_ring.Push(Current);
return true; // iterator is infinite
}
public void Reset()
{
throw new NotSupportedException();
}
public byte Current { get; private set; }
object IEnumerator.Current => Current;
public void Dispose() { }
}
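MoveNext re-pushes every byte it reads, which is what makes overlapped LZ copies work: a back-reference whose length exceeds its offset keeps consuming bytes the copy itself just produced. The effect, shown with a plain list standing in for the ring:

using System.Collections.Generic;

// Offset 0 (the most recent byte) with copy length 5: one 'A' fans out to six.
var history = new List<byte> { (byte)'A' };
int src = history.Count - 1;
for (int i = 0; i < 5; i++)
{
    history.Add(history[src]);  // reads a byte the loop appended last round
    src++;
}
// history is now "AAAAAA"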

View File

@@ -1,38 +0,0 @@
using System;
using System.Threading;
using System.Threading.Tasks;
namespace SharpCompress.Compressors.Arj;
public sealed partial class HuffTree
{
public async ValueTask<int> ReadEntryAsync(
BitReader reader,
CancellationToken cancellationToken
)
{
if (_tree.Count == 0)
{
throw new InvalidOperationException("Tree not initialized");
}
TreeEntry node = _tree[0];
while (true)
{
if (node.Type == NodeType.Leaf)
{
return node.LeafValue;
}
int bit = await reader.ReadBitAsync(cancellationToken).ConfigureAwait(false);
int index = node.BranchIndex + bit;
if (index >= _tree.Count)
{
throw new InvalidOperationException("Invalid branch index during read");
}
node = _tree[index];
}
}
}

View File

@@ -3,212 +3,216 @@ using System.Collections.Generic;
using System.IO;
using System.Text;
namespace SharpCompress.Compressors.Arj;
[CLSCompliant(true)]
public enum NodeType
namespace SharpCompress.Compressors.Arj
{
Leaf,
Branch,
}
[CLSCompliant(true)]
public sealed class TreeEntry
{
public readonly NodeType Type;
public readonly int LeafValue;
public readonly int BranchIndex;
public const int MAX_INDEX = 4096;
private TreeEntry(NodeType type, int leafValue, int branchIndex)
[CLSCompliant(true)]
public enum NodeType
{
Type = type;
LeafValue = leafValue;
BranchIndex = branchIndex;
Leaf,
Branch,
}
public static TreeEntry Leaf(int value)
[CLSCompliant(true)]
public sealed class TreeEntry
{
return new TreeEntry(NodeType.Leaf, value, -1);
}
public readonly NodeType Type;
public readonly int LeafValue;
public readonly int BranchIndex;
public static TreeEntry Branch(int index)
{
if (index >= MAX_INDEX)
public const int MAX_INDEX = 4096;
private TreeEntry(NodeType type, int leafValue, int branchIndex)
{
throw new ArgumentOutOfRangeException(nameof(index), "Branch index exceeds MAX_INDEX");
Type = type;
LeafValue = leafValue;
BranchIndex = branchIndex;
}
public static TreeEntry Leaf(int value)
{
return new TreeEntry(NodeType.Leaf, value, -1);
}
public static TreeEntry Branch(int index)
{
if (index >= MAX_INDEX)
{
throw new ArgumentOutOfRangeException(
nameof(index),
"Branch index exceeds MAX_INDEX"
);
}
return new TreeEntry(NodeType.Branch, 0, index);
}
}
[CLSCompliant(true)]
public sealed class HuffTree
{
private readonly List<TreeEntry> _tree;
public HuffTree(int capacity = 0)
{
_tree = new List<TreeEntry>(capacity);
}
public void SetSingle(int value)
{
_tree.Clear();
_tree.Add(TreeEntry.Leaf(value));
}
public void BuildTree(byte[] lengths, int count)
{
if (lengths == null)
{
throw new ArgumentNullException(nameof(lengths));
}
if (count < 0 || count > lengths.Length)
{
throw new ArgumentOutOfRangeException(nameof(count));
}
if (count > TreeEntry.MAX_INDEX / 2)
{
throw new ArgumentException(
$"Count exceeds maximum allowed: {TreeEntry.MAX_INDEX / 2}"
);
}
byte[] slice = new byte[count];
Array.Copy(lengths, slice, count);
BuildTree(slice);
}
public void BuildTree(byte[] valueLengths)
{
if (valueLengths == null)
{
throw new ArgumentNullException(nameof(valueLengths));
}
if (valueLengths.Length > TreeEntry.MAX_INDEX / 2)
{
throw new InvalidOperationException("Too many code lengths");
}
_tree.Clear();
int maxAllocated = 1; // start with a single (root) node
for (byte currentLen = 1; ; currentLen++)
{
// add missing branches up to current limit
int maxLimit = maxAllocated;
for (int i = _tree.Count; i < maxLimit; i++)
{
// TreeEntry.Branch may throw if index too large
try
{
_tree.Add(TreeEntry.Branch(maxAllocated));
}
catch (ArgumentOutOfRangeException e)
{
_tree.Clear();
throw new InvalidOperationException("Branch index exceeds limit", e);
}
// each branch node allocates two children
maxAllocated += 2;
}
// fill tree with leaves found in the lengths table at the current length
bool moreLeaves = false;
for (int value = 0; value < valueLengths.Length; value++)
{
byte len = valueLengths[value];
if (len == currentLen)
{
_tree.Add(TreeEntry.Leaf(value));
}
else if (len > currentLen)
{
moreLeaves = true; // there are more leaves to process
}
}
// sanity check (too many leaves)
if (_tree.Count > maxAllocated)
{
throw new InvalidOperationException("Too many leaves");
}
// stop when no longer finding longer codes
if (!moreLeaves)
{
break;
}
}
// ensure tree is complete
if (_tree.Count != maxAllocated)
{
throw new InvalidOperationException(
$"Missing some leaves: tree count = {_tree.Count}, expected = {maxAllocated}"
);
}
}
public int ReadEntry(BitReader reader)
{
if (_tree.Count == 0)
{
throw new InvalidOperationException("Tree not initialized");
}
TreeEntry node = _tree[0];
while (true)
{
if (node.Type == NodeType.Leaf)
{
return node.LeafValue;
}
int bit = reader.ReadBit();
int index = node.BranchIndex + bit;
if (index >= _tree.Count)
{
throw new InvalidOperationException("Invalid branch index during read");
}
node = _tree[index];
}
}
public override string ToString()
{
var result = new StringBuilder();
void FormatStep(int index, string prefix)
{
var node = _tree[index];
if (node.Type == NodeType.Leaf)
{
result.AppendLine($"{prefix} -> {node.LeafValue}");
}
else
{
FormatStep(node.BranchIndex, prefix + "0");
FormatStep(node.BranchIndex + 1, prefix + "1");
}
}
if (_tree.Count > 0)
{
FormatStep(0, "");
}
return result.ToString();
}
return new TreeEntry(NodeType.Branch, 0, index);
}
}
[CLSCompliant(true)]
public sealed partial class HuffTree
{
private readonly List<TreeEntry> _tree;
public HuffTree(int capacity = 0)
{
_tree = new List<TreeEntry>(capacity);
}
public void SetSingle(int value)
{
_tree.Clear();
_tree.Add(TreeEntry.Leaf(value));
}
public void BuildTree(byte[] lengths, int count)
{
if (lengths == null)
{
throw new ArgumentNullException(nameof(lengths));
}
if (count < 0 || count > lengths.Length)
{
throw new ArgumentOutOfRangeException(nameof(count));
}
if (count > TreeEntry.MAX_INDEX / 2)
{
throw new ArgumentException(
$"Count exceeds maximum allowed: {TreeEntry.MAX_INDEX / 2}"
);
}
byte[] slice = new byte[count];
Array.Copy(lengths, slice, count);
BuildTree(slice);
}
public void BuildTree(byte[] valueLengths)
{
if (valueLengths == null)
{
throw new ArgumentNullException(nameof(valueLengths));
}
if (valueLengths.Length > TreeEntry.MAX_INDEX / 2)
{
throw new InvalidOperationException("Too many code lengths");
}
_tree.Clear();
int maxAllocated = 1; // start with a single (root) node
for (byte currentLen = 1; ; currentLen++)
{
// add missing branches up to current limit
int maxLimit = maxAllocated;
for (int i = _tree.Count; i < maxLimit; i++)
{
// TreeEntry.Branch may throw if index too large
try
{
_tree.Add(TreeEntry.Branch(maxAllocated));
}
catch (ArgumentOutOfRangeException e)
{
_tree.Clear();
throw new InvalidOperationException("Branch index exceeds limit", e);
}
// each branch node allocates two children
maxAllocated += 2;
}
// fill tree with leaves found in the lengths table at the current length
bool moreLeaves = false;
for (int value = 0; value < valueLengths.Length; value++)
{
byte len = valueLengths[value];
if (len == currentLen)
{
_tree.Add(TreeEntry.Leaf(value));
}
else if (len > currentLen)
{
moreLeaves = true; // there are more leaves to process
}
}
// sanity check (too many leaves)
if (_tree.Count > maxAllocated)
{
throw new InvalidOperationException("Too many leaves");
}
// stop when no longer finding longer codes
if (!moreLeaves)
{
break;
}
}
// ensure tree is complete
if (_tree.Count != maxAllocated)
{
throw new InvalidOperationException(
$"Missing some leaves: tree count = {_tree.Count}, expected = {maxAllocated}"
);
}
}
public int ReadEntry(BitReader reader)
{
if (_tree.Count == 0)
{
throw new InvalidOperationException("Tree not initialized");
}
TreeEntry node = _tree[0];
while (true)
{
if (node.Type == NodeType.Leaf)
{
return node.LeafValue;
}
int bit = reader.ReadBit();
int index = node.BranchIndex + bit;
if (index >= _tree.Count)
{
throw new InvalidOperationException("Invalid branch index during read");
}
node = _tree[index];
}
}
public override string ToString()
{
var result = new StringBuilder();
void FormatStep(int index, string prefix)
{
var node = _tree[index];
if (node.Type == NodeType.Leaf)
{
result.AppendLine($"{prefix} -> {node.LeafValue}");
}
else
{
FormatStep(node.BranchIndex, prefix + "0");
FormatStep(node.BranchIndex + 1, prefix + "1");
}
}
if (_tree.Count > 0)
{
FormatStep(0, "");
}
return result.ToString();
}
}
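BuildTree lays branches and leaves out breadth-first by code length, so canonical codes fall out of the order symbols are appended. For code lengths {1, 2, 2} the codes are 0, 10 and 11, which this round-trip exercises (bits packed MSB-first, matching BitReader):

using System.IO;

var tree = new HuffTree();
tree.BuildTree(new byte[] { 1, 2, 2 });   // symbol 0 -> "0", 1 -> "10", 2 -> "11"
var reader = new BitReader(new MemoryStream(new byte[] { 0b0101_1000 }));
int a = tree.ReadEntry(reader); // 0  (code "0")
int b = tree.ReadEntry(reader); // 1  (code "10")
int c = tree.ReadEntry(reader); // 2  (code "11")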

View File

@@ -1,8 +1,9 @@
namespace SharpCompress.Compressors.Arj;
public interface ILhaDecoderConfig
namespace SharpCompress.Compressors.Arj
{
int HistoryBits { get; }
int OffsetBits { get; }
RingBuffer RingBuffer { get; }
public interface ILhaDecoderConfig
{
int HistoryBits { get; }
int OffsetBits { get; }
RingBuffer RingBuffer { get; }
}
}

View File

@@ -1,16 +1,17 @@
namespace SharpCompress.Compressors.Arj;
public interface IRingBuffer
namespace SharpCompress.Compressors.Arj
{
int BufferSize { get; }
public interface IRingBuffer
{
int BufferSize { get; }
int Cursor { get; }
void SetCursor(int pos);
int Cursor { get; }
void SetCursor(int pos);
void Push(byte value);
void Push(byte value);
HistoryIterator IterFromOffset(int offset);
HistoryIterator IterFromPos(int pos);
HistoryIterator IterFromOffset(int offset);
HistoryIterator IterFromPos(int pos);
byte this[int index] { get; }
byte this[int index] { get; }
}
}

View File

@@ -1,182 +1,205 @@
using System;
using System.Collections.Generic;
using System.IO;
using SharpCompress.IO;
namespace SharpCompress.Compressors.Arj;
[CLSCompliant(true)]
public sealed partial class LHDecoderStream : Stream
namespace SharpCompress.Compressors.Arj
{
private readonly BitReader _bitReader;
private readonly Stream _stream;
// Buffer containing *all* bytes decoded so far.
private readonly List<byte> _buffer = new();
private long _readPosition;
private readonly int _originalSize;
private bool _finishedDecoding;
private bool _disposed;
private const int THRESHOLD = 3;
public LHDecoderStream(Stream compressedStream, int originalSize)
[CLSCompliant(true)]
public sealed partial class LHDecoderStream : Stream, IStreamStack
{
_stream = compressedStream ?? throw new ArgumentNullException(nameof(compressedStream));
if (!compressedStream.CanRead)
#if DEBUG_STREAMS
long IStreamStack.InstanceId { get; set; }
#endif
int IStreamStack.DefaultBufferSize { get; set; }
Stream IStreamStack.BaseStream() => _stream;
int IStreamStack.BufferSize
{
throw new ArgumentException(
"compressedStream must be readable.",
nameof(compressedStream)
);
get => 0;
set { }
}
int IStreamStack.BufferPosition
{
get => 0;
set { }
}
_bitReader = new BitReader(compressedStream);
_originalSize = originalSize;
_readPosition = 0;
_finishedDecoding = (originalSize == 0);
}
void IStreamStack.SetPosition(long position) { }
public Stream BaseStream => _stream;
private readonly BitReader _bitReader;
private readonly Stream _stream;
public override bool CanRead => true;
public override bool CanSeek => false;
public override bool CanWrite => false;
// Buffer containing *all* bytes decoded so far.
private readonly List<byte> _buffer = new();
public override long Length => _originalSize;
private long _readPosition;
private readonly int _originalSize;
private bool _finishedDecoding;
private bool _disposed;
public override long Position
{
get => _readPosition;
set => throw new NotSupportedException();
}
private const int THRESHOLD = 3;
/// <summary>
/// Decodes a single element (literal or back-reference) and appends it to _buffer.
/// Returns true if data was added, or false if all input has already been decoded.
/// </summary>
private bool DecodeNext()
{
if (_buffer.Count >= _originalSize)
public LHDecoderStream(Stream compressedStream, int originalSize)
{
_finishedDecoding = true;
return false;
}
int len = DecodeVal(0, 7);
if (len == 0)
{
byte nextChar = (byte)_bitReader.ReadBits(8);
_buffer.Add(nextChar);
}
else
{
int repCount = len + THRESHOLD - 1;
int backPtr = DecodeVal(9, 13);
if (backPtr >= _buffer.Count)
_stream = compressedStream ?? throw new ArgumentNullException(nameof(compressedStream));
if (!compressedStream.CanRead)
{
throw new InvalidDataException("Invalid back_ptr in LH stream");
throw new ArgumentException(
"compressedStream must be readable.",
nameof(compressedStream)
);
}
int srcIndex = _buffer.Count - 1 - backPtr;
for (int j = 0; j < repCount && _buffer.Count < _originalSize; j++)
_bitReader = new BitReader(compressedStream);
_originalSize = originalSize;
_readPosition = 0;
_finishedDecoding = (originalSize == 0);
}
public Stream BaseStream => _stream;
public override bool CanRead => true;
public override bool CanSeek => false;
public override bool CanWrite => false;
public override long Length => _originalSize;
public override long Position
{
get => _readPosition;
set => throw new NotSupportedException();
}
/// <summary>
/// Decodes a single element (literal or back-reference) and appends it to _buffer.
/// Returns true if data was added, or false if all input has already been decoded.
/// </summary>
private bool DecodeNext()
{
if (_buffer.Count >= _originalSize)
{
byte b = _buffer[srcIndex];
_buffer.Add(b);
srcIndex++;
// srcIndex may grow; it's allowed (source region can overlap destination)
_finishedDecoding = true;
return false;
}
}
if (_buffer.Count >= _originalSize)
{
_finishedDecoding = true;
}
return true;
}
private int DecodeVal(int from, int to)
{
int add = 0;
int bit = from;
while (bit < to && _bitReader.ReadBits(1) == 1)
{
add |= 1 << bit;
bit++;
}
int res = bit > 0 ? _bitReader.ReadBits(bit) : 0;
return res + add;
}
/// <summary>
/// Reads decompressed bytes into buffer[offset..offset+count].
/// The method decodes additional data on demand when needed.
/// </summary>
public override int Read(byte[] buffer, int offset, int count)
{
if (_disposed)
{
throw new ObjectDisposedException(nameof(LHDecoderStream));
}
if (buffer == null)
{
throw new ArgumentNullException(nameof(buffer));
}
if (offset < 0 || count < 0 || offset + count > buffer.Length)
{
throw new ArgumentOutOfRangeException("offset/count");
}
if (_readPosition >= _originalSize)
{
return 0; // EOF
}
int totalRead = 0;
while (totalRead < count && _readPosition < _originalSize)
{
if (_readPosition >= _buffer.Count)
int len = DecodeVal(0, 7);
if (len == 0)
{
bool had = DecodeNext();
if (!had)
byte nextChar = (byte)_bitReader.ReadBits(8);
_buffer.Add(nextChar);
}
else
{
int repCount = len + THRESHOLD - 1;
int backPtr = DecodeVal(9, 13);
if (backPtr >= _buffer.Count)
{
throw new InvalidDataException("Invalid back_ptr in LH stream");
}
int srcIndex = _buffer.Count - 1 - backPtr;
for (int j = 0; j < repCount && _buffer.Count < _originalSize; j++)
{
byte b = _buffer[srcIndex];
_buffer.Add(b);
srcIndex++;
// srcIndex may grow; it's allowed (source region can overlap destination)
}
}
if (_buffer.Count >= _originalSize)
{
_finishedDecoding = true;
}
return true;
}
private int DecodeVal(int from, int to)
{
int add = 0;
int bit = from;
while (bit < to && _bitReader.ReadBits(1) == 1)
{
add |= 1 << bit;
bit++;
}
int res = bit > 0 ? _bitReader.ReadBits(bit) : 0;
return res + add;
}
/// <summary>
/// Reads decompressed bytes into buffer[offset..offset+count].
/// The method decodes additional data on demand when needed.
/// </summary>
public override int Read(byte[] buffer, int offset, int count)
{
if (_disposed)
{
throw new ObjectDisposedException(nameof(LHDecoderStream));
}
if (buffer == null)
{
throw new ArgumentNullException(nameof(buffer));
}
if (offset < 0 || count < 0 || offset + count > buffer.Length)
{
throw new ArgumentOutOfRangeException("offset/count");
}
if (_readPosition >= _originalSize)
{
return 0; // EOF
}
int totalRead = 0;
while (totalRead < count && _readPosition < _originalSize)
{
if (_readPosition >= _buffer.Count)
{
bool had = DecodeNext();
if (!had)
{
break;
}
}
int available = _buffer.Count - (int)_readPosition;
if (available <= 0)
{
if (!_finishedDecoding)
{
continue;
}
break;
}
int toCopy = Math.Min(available, count - totalRead);
_buffer.CopyTo((int)_readPosition, buffer, offset + totalRead, toCopy);
_readPosition += toCopy;
totalRead += toCopy;
}
int available = _buffer.Count - (int)_readPosition;
if (available <= 0)
{
if (!_finishedDecoding)
{
continue;
}
break;
}
int toCopy = Math.Min(available, count - totalRead);
_buffer.CopyTo((int)_readPosition, buffer, offset + totalRead, toCopy);
_readPosition += toCopy;
totalRead += toCopy;
return totalRead;
}
return totalRead;
public override void Flush() => throw new NotSupportedException();
public override long Seek(long offset, SeekOrigin origin) =>
throw new NotSupportedException();
public override void SetLength(long value) => throw new NotSupportedException();
public override void Write(byte[] buffer, int offset, int count) =>
throw new NotSupportedException();
}
public override void Flush() => throw new NotSupportedException();
public override long Seek(long offset, SeekOrigin origin) => throw new NotSupportedException();
public override void SetLength(long value) => throw new NotSupportedException();
public override void Write(byte[] buffer, int offset, int count) =>
throw new NotSupportedException();
}
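DecodeVal implements a unary-prefixed variable-width integer: each leading 1-bit both widens the payload by one bit and adds 1 &lt;&lt; bit to the result, and the payload is then read at the final width. The same logic standalone, with a worked input:

using System.IO;

static int DecodeVal(BitReader r, int from, int to)
{
    int add = 0;
    int bit = from;
    while (bit < to && r.ReadBits(1) == 1)
    {
        add |= 1 << bit;
        bit++;
    }
    return (bit > 0 ? r.ReadBits(bit) : 0) + add;
}

// bits 1,1,0 then payload "10": add = 0b11 = 3, payload = 2, value = 5
var r = new BitReader(new MemoryStream(new byte[] { 0b1101_0000 }));
int value = DecodeVal(r, 0, 7); // 5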

View File

@@ -1,8 +1,9 @@
namespace SharpCompress.Compressors.Arj;
public class Lh5DecoderCfg : ILhaDecoderConfig
namespace SharpCompress.Compressors.Arj
{
public int HistoryBits => 14;
public int OffsetBits => 4;
public RingBuffer RingBuffer { get; } = new RingBuffer(1 << 14);
public class Lh5DecoderCfg : ILhaDecoderConfig
{
public int HistoryBits => 14;
public int OffsetBits => 4;
public RingBuffer RingBuffer { get; } = new RingBuffer(1 << 14);
}
}

View File

@@ -1,8 +1,9 @@
namespace SharpCompress.Compressors.Arj;
public class Lh7DecoderCfg : ILhaDecoderConfig
namespace SharpCompress.Compressors.Arj
{
public int HistoryBits => 17;
public int OffsetBits => 5;
public RingBuffer RingBuffer { get; } = new RingBuffer(1 << 17);
public class Lh7DecoderCfg : ILhaDecoderConfig
{
public int HistoryBits => 17;
public int OffsetBits => 5;
public RingBuffer RingBuffer { get; } = new RingBuffer(1 << 17);
}
}

View File

@@ -1,401 +0,0 @@
using System;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
namespace SharpCompress.Compressors.Arj;
public sealed partial class LhaStream<C>
{
public override async Task<int> ReadAsync(
byte[] buffer,
int offset,
int count,
CancellationToken cancellationToken
)
{
if (buffer is null)
{
throw new ArgumentNullException(nameof(buffer));
}
if (offset < 0 || count < 0 || (offset + count) > buffer.Length)
{
throw new ArgumentOutOfRangeException();
}
if (_producedBytes >= _originalSize)
{
return 0; // EOF
}
if (count == 0)
{
return 0;
}
int bytesRead = await FillBufferAsync(buffer, cancellationToken).ConfigureAwait(false);
return bytesRead;
}
#if !LEGACY_DOTNET
public override async ValueTask<int> ReadAsync(
Memory<byte> buffer,
CancellationToken cancellationToken = default
)
{
if (_producedBytes >= _originalSize)
{
return 0; // EOF
}
if (buffer.Length == 0)
{
return 0;
}
int bytesRead = await FillBufferAsync(buffer, cancellationToken).ConfigureAwait(false);
return bytesRead;
}
#endif
private async ValueTask<byte> ReadCodeLengthAsync(CancellationToken cancellationToken)
{
byte len = (byte)await _bitReader.ReadBitsAsync(3, cancellationToken).ConfigureAwait(false);
if (len == 7)
{
while (await _bitReader.ReadBitAsync(cancellationToken).ConfigureAwait(false) != 0)
{
len++;
if (len > 255)
{
throw new InvalidOperationException("Code length overflow");
}
}
}
return len;
}
private async ValueTask<int> ReadCodeSkipAsync(
int skipRange,
CancellationToken cancellationToken
)
{
int bits;
int increment;
switch (skipRange)
{
case 0:
return 1;
case 1:
bits = 4;
increment = 3; // 3..=18
break;
default:
bits = 9;
increment = 20; // 20..=531
break;
}
int skip = await _bitReader.ReadBitsAsync(bits, cancellationToken).ConfigureAwait(false);
return skip + increment;
}
private async ValueTask ReadTempTreeAsync(CancellationToken cancellationToken)
{
byte[] codeLengths = new byte[NUM_TEMP_CODELEN];
// number of codes to read (5 bits)
int numCodes = await _bitReader.ReadBitsAsync(5, cancellationToken).ConfigureAwait(false);
// single code only
if (numCodes == 0)
{
int code = await _bitReader.ReadBitsAsync(5, cancellationToken).ConfigureAwait(false);
_offsetTree.SetSingle((byte)code);
return;
}
if (numCodes > NUM_TEMP_CODELEN)
{
throw new Exception("temporary codelen table has invalid size");
}
// read actual lengths
int count = Math.Min(3, numCodes);
for (int i = 0; i < count; i++)
{
codeLengths[i] = (byte)
await ReadCodeLengthAsync(cancellationToken).ConfigureAwait(false);
}
// 2-bit skip value follows
int skip = await _bitReader.ReadBitsAsync(2, cancellationToken).ConfigureAwait(false);
if (3 + skip > numCodes)
{
throw new Exception("temporary codelen table has invalid size");
}
for (int i = 3 + skip; i < numCodes; i++)
{
codeLengths[i] = (byte)
await ReadCodeLengthAsync(cancellationToken).ConfigureAwait(false);
}
_offsetTree.BuildTree(codeLengths, numCodes);
}
private async ValueTask ReadCommandTreeAsync(CancellationToken cancellationToken)
{
byte[] codeLengths = new byte[NUM_COMMANDS];
// number of codes to read (9 bits)
int numCodes = await _bitReader.ReadBitsAsync(9, cancellationToken).ConfigureAwait(false);
// single code only
if (numCodes == 0)
{
int code = await _bitReader.ReadBitsAsync(9, cancellationToken).ConfigureAwait(false);
_commandTree.SetSingle((ushort)code);
return;
}
if (numCodes > NUM_COMMANDS)
{
throw new Exception("commands codelen table has invalid size");
}
int index = 0;
while (index < numCodes)
{
for (int n = 0; n < numCodes - index; n++)
{
int code = await _offsetTree
.ReadEntryAsync(_bitReader, cancellationToken)
.ConfigureAwait(false);
if (code >= 0 && code <= 2) // skip range
{
int skipCount = await ReadCodeSkipAsync(code, cancellationToken)
.ConfigureAwait(false);
index += n + skipCount;
goto outerLoop;
}
else
{
codeLengths[index + n] = (byte)(code - 2);
}
}
break;
outerLoop:
;
}
_commandTree.BuildTree(codeLengths, numCodes);
}
private async ValueTask ReadOffsetTreeAsync(CancellationToken cancellationToken)
{
int numCodes = await _bitReader
.ReadBitsAsync(_config.OffsetBits, cancellationToken)
.ConfigureAwait(false);
if (numCodes == 0)
{
int code = await _bitReader
.ReadBitsAsync(_config.OffsetBits, cancellationToken)
.ConfigureAwait(false);
_offsetTree.SetSingle(code);
return;
}
if (numCodes > _config.HistoryBits)
{
throw new InvalidDataException("Offset code table too large");
}
byte[] codeLengths = new byte[NUM_TEMP_CODELEN];
for (int i = 0; i < numCodes; i++)
{
codeLengths[i] = (byte)
await ReadCodeLengthAsync(cancellationToken).ConfigureAwait(false);
}
_offsetTree.BuildTree(codeLengths, numCodes);
}
private async ValueTask BeginNewBlockAsync(CancellationToken cancellationToken)
{
await ReadTempTreeAsync(cancellationToken).ConfigureAwait(false);
await ReadCommandTreeAsync(cancellationToken).ConfigureAwait(false);
await ReadOffsetTreeAsync(cancellationToken).ConfigureAwait(false);
}
private ValueTask<int> ReadCommandAsync(CancellationToken cancellationToken) =>
_commandTree.ReadEntryAsync(_bitReader, cancellationToken);
private async ValueTask<int> ReadOffsetAsync(CancellationToken cancellationToken)
{
int bits = await _offsetTree
.ReadEntryAsync(_bitReader, cancellationToken)
.ConfigureAwait(false);
if (bits <= 1)
{
return bits;
}
int res = await _bitReader.ReadBitsAsync(bits - 1, cancellationToken).ConfigureAwait(false);
return res | (1 << (bits - 1));
}
public async ValueTask<int> FillBufferAsync(byte[] buffer, CancellationToken cancellationToken)
{
int bufLen = buffer.Length;
int bufIndex = 0;
// stop when we reached original size
if (_producedBytes >= _originalSize)
{
return 0;
}
// calculate limit, so that we don't go over the original size
int remaining = (int)Math.Min(bufLen, _originalSize - _producedBytes);
while (bufIndex < remaining)
{
if (_copyProgress.HasValue)
{
var (offset, count) = _copyProgress.Value;
int copied = CopyFromHistory(
buffer,
bufIndex,
offset,
(int)Math.Min(count, remaining - bufIndex)
);
bufIndex += copied;
_copyProgress = null;
}
if (_remainingCommands == 0)
{
_remainingCommands = await _bitReader
.ReadBitsAsync(16, cancellationToken)
.ConfigureAwait(false);
if (bufIndex + _remainingCommands > remaining)
{
break;
}
await BeginNewBlockAsync(cancellationToken).ConfigureAwait(false);
}
_remainingCommands--;
int command = await ReadCommandAsync(cancellationToken).ConfigureAwait(false);
if (command >= 0 && command <= 0xFF)
{
byte value = (byte)command;
buffer[bufIndex++] = value;
_ringBuffer.Push(value);
}
else
{
int count = command - 0x100 + 3;
int offset = await ReadOffsetAsync(cancellationToken).ConfigureAwait(false);
int copyCount = (int)Math.Min(count, remaining - bufIndex);
bufIndex += CopyFromHistory(buffer, bufIndex, offset, copyCount);
}
}
_producedBytes += bufIndex;
return bufIndex;
}
#if !LEGACY_DOTNET
public async ValueTask<int> FillBufferAsync(
Memory<byte> buffer,
CancellationToken cancellationToken
)
{
int bufLen = buffer.Length;
int bufIndex = 0;
// stop when we reached original size
if (_producedBytes >= _originalSize)
{
return 0;
}
// calculate limit, so that we don't go over the original size
int remaining = (int)Math.Min(bufLen, _originalSize - _producedBytes);
while (bufIndex < remaining)
{
if (_copyProgress.HasValue)
{
var (offset, count) = _copyProgress.Value;
int copied = CopyFromHistory(
buffer.Span,
bufIndex,
offset,
(int)Math.Min(count, remaining - bufIndex)
);
bufIndex += copied;
_copyProgress = null;
}
if (_remainingCommands == 0)
{
_remainingCommands = await _bitReader
.ReadBitsAsync(16, cancellationToken)
.ConfigureAwait(false);
if (bufIndex + _remainingCommands > remaining)
{
break;
}
await BeginNewBlockAsync(cancellationToken).ConfigureAwait(false);
}
_remainingCommands--;
int command = await ReadCommandAsync(cancellationToken).ConfigureAwait(false);
if (command >= 0 && command <= 0xFF)
{
byte value = (byte)command;
buffer.Span[bufIndex++] = value;
_ringBuffer.Push(value);
}
else
{
int count = command - 0x100 + 3;
int offset = await ReadOffsetAsync(cancellationToken).ConfigureAwait(false);
int copyCount = (int)Math.Min(count, remaining - bufIndex);
bufIndex += CopyFromHistory(buffer.Span, bufIndex, offset, copyCount);
}
}
_producedBytes += bufIndex;
return bufIndex;
}
private int CopyFromHistory(Span<byte> target, int targetIndex, int offset, int count)
{
var historyIter = _ringBuffer.IterFromOffset(offset);
int copied = 0;
while (copied < count && historyIter.MoveNext() && (targetIndex + copied) < target.Length)
{
target[targetIndex + copied] = historyIter.Current;
copied++;
}
if (copied < count)
{
_copyProgress = (offset, count - copied);
}
return copied;
}
#endif
}
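The deleted async half mirrors the sync decoder; its offset scheme is worth a note. The offset tree yields a bit-width n: widths 0 and 1 are the offsets themselves, while any larger n means an (n - 1)-bit payload with an implicit leading 1 bit. A sync equivalent of ReadOffsetAsync above:

static int ReadOffset(HuffTree offsetTree, BitReader reader)
{
    int bits = offsetTree.ReadEntry(reader);
    if (bits <= 1)
    {
        return bits;                 // widths 0 and 1 encode themselves
    }
    return reader.ReadBits(bits - 1) | (1 << (bits - 1));
}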

Some files were not shown because too many files have changed in this diff.