Compare commits

..

3 Commits

| Author | SHA1 | Message | Date |
|---|---|---|---|
| copilot-swe-agent[bot] | 238ed748fc | Revert "Document Copilot instructions setup status" (reverts be6aefc8c4) | 2025-10-29 13:22:55 +00:00 |
| copilot-swe-agent[bot] | be6aefc8c4 | Document Copilot instructions setup status (Co-authored-by: adamhathcock <527620+adamhathcock@users.noreply.github.com>) | 2025-10-29 13:20:10 +00:00 |
| copilot-swe-agent[bot] | b8867e7e54 | Initial plan | 2025-10-29 13:14:08 +00:00 |
463 changed files with 2481 additions and 84392 deletions

View File

@@ -3,11 +3,11 @@
"isRoot": true,
"tools": {
"csharpier": {
"version": "1.2.4",
"version": "1.1.2",
"commands": [
"csharpier"
],
"rollForward": false
}
}
}

15
.github/COPILOT_AGENT_README.md vendored Normal file
View File

@@ -0,0 +1,15 @@
# Copilot Coding Agent Configuration
This repository includes a minimal opt-in configuration and CI workflow to allow the GitHub Copilot coding agent to open and validate PRs.
- .copilot-agent.yml: opt-in config for automated agents
- .github/agents/copilot-agent.yml: detailed agent policy configuration
- .github/workflows/dotnetcore.yml: CI runs on PRs touching the solution, source, or tests to validate changes
- AGENTS.md: general instructions for Copilot coding agent with project-specific guidelines
Maintainers can adjust the allowed paths or disable the agent by editing or removing .copilot-agent.yml.
Notes:
- The agent can create, modify, and delete files within the allowed paths (src, tests, README.md, AGENTS.md)
- All changes require review before merge
- If build/test paths are different, update the workflow accordingly; this workflow targets SharpCompress.sln and the SharpCompress.Test test project.

View File

@@ -1,25 +0,0 @@
# Plan: Implement Missing Async Functionality in SharpCompress
SharpCompress has async support for low-level stream operations and Reader/Writer APIs, but critical entry points (Archive.Open, factory methods, initialization) remain synchronous. This plan adds async overloads for all user-facing I/O operations and fixes existing async bugs, enabling full end-to-end async workflows.
## Steps
1. **Add async factory methods** to [ArchiveFactory.cs](src/SharpCompress/Factories/ArchiveFactory.cs), [ReaderFactory.cs](src/SharpCompress/Factories/ReaderFactory.cs), and [WriterFactory.cs](src/SharpCompress/Factories/WriterFactory.cs) with `OpenAsync` and `CreateAsync` overloads accepting `CancellationToken`
2. **Implement async Open methods** on concrete archive types ([ZipArchive.cs](src/SharpCompress/Archives/Zip/ZipArchive.cs), [TarArchive.cs](src/SharpCompress/Archives/Tar/TarArchive.cs), [RarArchive.cs](src/SharpCompress/Archives/Rar/RarArchive.cs), [GZipArchive.cs](src/SharpCompress/Archives/GZip/GZipArchive.cs), [SevenZipArchive.cs](src/SharpCompress/Archives/SevenZip/SevenZipArchive.cs)) and reader types ([ZipReader.cs](src/SharpCompress/Readers/Zip/ZipReader.cs), [TarReader.cs](src/SharpCompress/Readers/Tar/TarReader.cs), etc.)
3. **Convert archive initialization logic to async** including header reading, volume loading, and format signature detection across archive constructors and internal initialization methods
4. **Fix LZMA decoder async bugs** in [LzmaStream.cs](src/SharpCompress/Compressors/LZMA/LzmaStream.cs), [Decoder.cs](src/SharpCompress/Compressors/LZMA/Decoder.cs), and [OutWindow.cs](src/SharpCompress/Compressors/LZMA/OutWindow.cs) to enable true async 7Zip support and remove `NonDisposingStream` workaround
5. **Complete Rar async implementation** by converting `UnpackV2017` methods to async in [UnpackV2017.cs](src/SharpCompress/Compressors/Rar/UnpackV2017.cs) and updating Rar20 decompression
6. **Add comprehensive async tests** covering all new async entry points, cancellation scenarios, and concurrent operations across all archive formats in test files
## Further Considerations
1. **Breaking changes** - Should new async methods be added alongside existing sync methods (non-breaking), or should sync methods eventually be deprecated? The additive approach is recommended for backward compatibility (see the sketch after this list).
2. **Performance impact** - Header parsing for formats like Zip/Tar is often small; consider whether truly async parsing adds value vs sync parsing wrapped in Task, or make it conditional based on stream type (network vs file).
3. **7Zip complexity** - The LZMA async bug fix (Step 4) may be challenging due to state management in the decoder; consider whether to scope it separately or implement a simpler workaround that maintains correctness.
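The additive approach from consideration 1, sketched: a new `OpenAsync` overload sits alongside the untouched sync method. This is an illustration only; the `Task.Run` delegation is a placeholder assumption, not the planned true-async header reading:

```csharp
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.Archives;
using SharpCompress.Readers;

public static class ArchiveFactorySketch
{
    // Existing synchronous entry point stays unchanged (non-breaking).
    public static IArchive Open(Stream stream, ReaderOptions? options = null) =>
        ArchiveFactory.Open(stream, options);

    // Additive async overload accepting a CancellationToken, as proposed in Step 1.
    public static Task<IArchive> OpenAsync(
        Stream stream,
        ReaderOptions? options = null,
        CancellationToken cancellationToken = default
    ) =>
        // Placeholder: the real implementation would read headers asynchronously.
        Task.Run(() => ArchiveFactory.Open(stream, options), cancellationToken);
}
```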

View File

@@ -1,123 +0,0 @@
# Plan: Modernize SharpCompress Public API
Based on comprehensive analysis, the API has several inconsistencies around factory patterns, async support, format capabilities, and options classes. Most improvements can be done incrementally without breaking changes.
## Steps
1. **Standardize factory patterns** by deprecating format-specific static `Open` methods in [Archives/Zip/ZipArchive.cs](src/SharpCompress/Archives/Zip/ZipArchive.cs), [Archives/Tar/TarArchive.cs](src/SharpCompress/Archives/Tar/TarArchive.cs), etc. in favor of centralized [Factories/ArchiveFactory.cs](src/SharpCompress/Factories/ArchiveFactory.cs)
2. **Complete async implementation** in [Writers/Zip/ZipWriter.cs](src/SharpCompress/Writers/Zip/ZipWriter.cs) and other writers that currently use sync-over-async, implementing true async I/O throughout the writer hierarchy
3. **Unify options classes** by making [Common/ExtractionOptions.cs](src/SharpCompress/Common/ExtractionOptions.cs) inherit from `OptionsBase` and adding progress reporting to extraction methods consistently
4. **Clarify GZip semantics** in [Archives/GZip/GZipArchive.cs](src/SharpCompress/Archives/GZip/GZipArchive.cs) by adding XML documentation explaining single-entry limitation and relationship to GZip compression used in Tar.gz
## Further Considerations
1. **Breaking changes roadmap** - Should we plan a major version (2.0) to remove deprecated factory methods, clean up `ArchiveType` enum (remove Arc/Arj or add full support), and consolidate naming patterns?
2. **Progress reporting consistency** - Should `IProgress<ArchiveExtractionProgress<IEntry>>` be added to all extraction extension methods or consolidated into options classes?
## Detailed Analysis
### Factory Pattern Issues
Three different factory patterns exist with overlapping functionality:
1. **Static Factories**: ArchiveFactory, ReaderFactory, WriterFactory
2. **Instance Factories**: IArchiveFactory, IReaderFactory, IWriterFactory
3. **Format-specific static methods**: Each archive class has static `Open` methods
**Example confusion:**
```csharp
// Three ways to open a Zip archive - which is recommended?
var archive1 = ArchiveFactory.Open("file.zip");
var archive2 = ZipArchive.Open("file.zip");
var archive3 = ArchiveFactory.AutoFactory.Open(fileInfo, options);
```
### Async Support Gaps
Base `IWriter` interface has async methods, but writer implementations provide minimal async support. Most writers just call synchronous methods:
```csharp
public virtual async Task WriteAsync(...)
{
// Default implementation calls synchronous version
Write(filename, source, modificationTime);
await Task.CompletedTask.ConfigureAwait(false);
}
```
Real async implementations exist only in:
- `TarWriter` - proper async implementation

All other writers fall back to sync-over-async.
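As an illustration of the gap (not the library's code), a genuinely asynchronous `WriteAsync` streams the payload with `CopyToAsync` instead of blocking, assuming the writer exposes its output stream:

```csharp
using System;
using System.IO;
using System.Threading;
using System.Threading.Tasks;

public abstract class TrueAsyncWriterSketch
{
    protected abstract Stream OutputStream { get; }

    public virtual async Task WriteAsync(
        string filename,
        Stream source,
        DateTime? modificationTime,
        CancellationToken cancellationToken = default
    )
    {
        // Emit format-specific entry headers for filename/modificationTime here...
        // ...then stream the payload without blocking a thread pool thread:
        await source
            .CopyToAsync(OutputStream, 81920, cancellationToken)
            .ConfigureAwait(false);
    }
}
```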
### GZip Archive Special Case
GZip is treated as both a compression format and an archive format, but only supports single-entry archives:
```csharp
protected override GZipArchiveEntry CreateEntryInternal(...)
{
if (Entries.Any())
{
throw new InvalidFormatException("Only one entry is allowed in a GZip Archive");
}
// ...
}
```
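From the caller's side, the guard above means a writable GZip archive accepts exactly one entry. A sketch (the `AddEntry`/`SaveTo` shapes are assumptions based on the writable-archive API):

```csharp
using System.IO;
using SharpCompress.Archives;
using SharpCompress.Archives.GZip;
using SharpCompress.Common;

using var archive = GZipArchive.Create();
using var source = File.OpenRead("data.bin"); // placeholder path
archive.AddEntry("data.bin", source, closeStream: false);
// A second AddEntry would trip the guard above and throw InvalidFormatException.
archive.SaveTo("data.bin.gz", CompressionType.GZip);
```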
### Options Class Hierarchy
```
OptionsBase (LeaveStreamOpen, ArchiveEncoding)
├─ ReaderOptions (LookForHeader, Password, DisableCheckIncomplete, BufferSize, ExtensionHint, Progress)
├─ WriterOptions (CompressionType, CompressionLevel, Progress)
│ ├─ ZipWriterOptions (ArchiveComment, UseZip64)
│ ├─ TarWriterOptions (FinalizeArchiveOnClose, HeaderFormat)
│ └─ GZipWriterOptions (no additional properties)
└─ ExtractionOptions (standalone - Overwrite, ExtractFullPath, PreserveFileTime, PreserveAttributes)
```
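To make the hierarchy concrete, here is how inherited and format-specific properties combine for a Zip writer (property names come from the tree above; the constructor shape is an assumption):

```csharp
using SharpCompress.Common;
using SharpCompress.Writers.Zip;

var options = new ZipWriterOptions(CompressionType.Deflate) // CompressionType via WriterOptions
{
    LeaveStreamOpen = false, // inherited from OptionsBase
    ArchiveComment = "nightly build", // ZipWriterOptions-specific
    UseZip64 = false, // ZipWriterOptions-specific
};
```

Extraction, by contrast, takes a standalone `ExtractionOptions` instance with no `OptionsBase` members - exactly the inconsistency Step 3 targets.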
**Issues:**
- `ExtractionOptions` doesn't inherit from `OptionsBase` - no encoding support during extraction
- Progress reporting inconsistency between readers and extraction
- Obsolete properties (`ChecksumIsValid`, `Version`) with unclear migration path
### Implementation Priorities
**High Priority (Non-Breaking):**
1. Add API usage guide (Archive vs Reader, factory recommendations, async best practices)
2. Fix progress reporting consistency
3. Complete async implementation in writers
**Medium Priority (Next Major Version):**
1. Unify factory pattern - deprecate format-specific static `Open` methods
2. Clean up options classes - make `ExtractionOptions` inherit from `OptionsBase`
3. Clarify archive types - remove Arc/Arj from `ArchiveType` enum or add full support
4. Standardize naming across archive types
**Low Priority:**
1. Add BZip2 archive support similar to GZipArchive
2. Complete obsolete property cleanup with migration guide
### Backward Compatibility Strategy
**Safe (Non-Breaking) Changes:**
- Add new methods to interfaces (use default implementations)
- Add new options properties (with defaults)
- Add new factory methods
- Improve async implementations
- Add progress reporting support
**Breaking Changes to Avoid:**
- ❌ Removing format-specific `Open` methods (deprecate instead)
- ❌ Changing `LeaveStreamOpen` default (currently `true`)
- ❌ Removing obsolete properties before major version bump
- ❌ Changing return types or signatures of existing methods
**Deprecation Pattern:**
- Use `[Obsolete]` for one major version
- Use `[EditorBrowsable(EditorBrowsableState.Never)]` in next major version
- Remove in following major version
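In code, the three stages map onto standard .NET attributes (a generic illustration, not taken from this repository):

```csharp
using System;
using System.ComponentModel;

public static class ZipArchiveSketch
{
    // Version N: compiler warning, still functional.
    [Obsolete("Use ArchiveFactory.Open instead.")]
    // Version N+1: additionally hidden from IntelliSense.
    [EditorBrowsable(EditorBrowsableState.Never)]
    public static void Open(string path)
    {
        // Delegates to the replacement API during the deprecation window.
    }
    // Version N+2: the member is deleted outright.
}
```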

View File

@@ -14,12 +14,12 @@ jobs:
os: [windows-latest, ubuntu-latest]
steps:
- uses: actions/checkout@v6
- uses: actions/checkout@v5
- uses: actions/setup-dotnet@v5
with:
dotnet-version: 10.0.x
dotnet-version: 8.0.x
- run: dotnet run --project build/build.csproj
- uses: actions/upload-artifact@v6
- uses: actions/upload-artifact@v5
with:
name: ${{ matrix.os }}-sharpcompress.nupkg
path: artifacts/*

4
.gitignore vendored
View File

@@ -11,11 +11,11 @@ TestResults/
packages/*/
project.lock.json
tests/TestArchives/Scratch
tests/TestArchives/*/Scratch
tests/TestArchives/*/Scratch2
.vs
tools
.vscode
.idea/
.DS_Store
*.snupkg
/tests/TestArchives/6d23a38c-f064-4ef1-ad89-b942396f53b9/Scratch

View File

@@ -1,9 +0,0 @@
{
"recommendations": [
"ms-dotnettools.csdevkit",
"ms-dotnettools.csharp",
"ms-dotnettools.vscode-dotnet-runtime",
"csharpier.csharpier-vscode",
"formulahendry.dotnet-test-explorer"
]
}

97
.vscode/launch.json vendored
View File

@@ -1,97 +0,0 @@
{
"version": "0.2.0",
"configurations": [
{
"name": "Debug Tests (net10.0)",
"type": "coreclr",
"request": "launch",
"preLaunchTask": "build",
"program": "dotnet",
"args": [
"test",
"${workspaceFolder}/tests/SharpCompress.Test/SharpCompress.Test.csproj",
"-f",
"net10.0",
"--no-build",
"--verbosity=normal"
],
"cwd": "${workspaceFolder}",
"console": "internalConsole",
"stopAtEntry": false
},
{
"name": "Debug Specific Test (net10.0)",
"type": "coreclr",
"request": "launch",
"preLaunchTask": "build",
"program": "dotnet",
"args": [
"test",
"${workspaceFolder}/tests/SharpCompress.Test/SharpCompress.Test.csproj",
"-f",
"net10.0",
"--no-build",
"--filter",
"FullyQualifiedName~${input:testName}"
],
"cwd": "${workspaceFolder}",
"console": "internalConsole",
"stopAtEntry": false
},
{
"name": "Debug Performance Tests",
"type": "coreclr",
"request": "launch",
"preLaunchTask": "build",
"program": "dotnet",
"args": [
"run",
"--project",
"${workspaceFolder}/tests/SharpCompress.Performance/SharpCompress.Performance.csproj",
"--no-build"
],
"cwd": "${workspaceFolder}",
"console": "internalConsole",
"stopAtEntry": false
},
{
"name": "Debug Build Script",
"type": "coreclr",
"request": "launch",
"program": "dotnet",
"args": [
"run",
"--project",
"${workspaceFolder}/build/build.csproj",
"--",
"${input:buildTarget}"
],
"cwd": "${workspaceFolder}",
"console": "internalConsole",
"stopAtEntry": false
}
],
"inputs": [
{
"id": "testName",
"type": "promptString",
"description": "Enter test name or pattern (e.g., TestMethodName or ClassName)",
"default": ""
},
{
"id": "buildTarget",
"type": "pickString",
"description": "Select build target",
"options": [
"clean",
"restore",
"build",
"test",
"format",
"publish",
"default"
],
"default": "build"
}
]
}

29
.vscode/settings.json vendored
View File

@@ -1,29 +0,0 @@
{
"dotnet.defaultSolution": "SharpCompress.sln",
"files.exclude": {
"**/bin": true,
"**/obj": true
},
"files.watcherExclude": {
"**/bin/**": true,
"**/obj/**": true,
"**/artifacts/**": true
},
"search.exclude": {
"**/bin": true,
"**/obj": true,
"**/artifacts": true
},
"editor.formatOnSave": false,
"[csharp]": {
"editor.defaultFormatter": "csharpier.csharpier-vscode",
"editor.formatOnSave": true,
"editor.codeActionsOnSave": {
"source.fixAll": "explicit"
}
},
"csharpier.enableDebugLogs": false,
"omnisharp.enableRoslynAnalyzers": true,
"omnisharp.enableEditorConfigSupport": true,
"dotnet-test-explorer.testProjectPath": "tests/**/*.csproj"
}

178
.vscode/tasks.json vendored
View File

@@ -1,178 +0,0 @@
{
"version": "2.0.0",
"tasks": [
{
"label": "build",
"command": "dotnet",
"type": "process",
"args": [
"build",
"${workspaceFolder}/SharpCompress.sln",
"/property:GenerateFullPaths=true",
"/consoleloggerparameters:NoSummary;ForceNoAlign"
],
"problemMatcher": "$msCompile",
"group": {
"kind": "build",
"isDefault": true
}
},
{
"label": "build-release",
"command": "dotnet",
"type": "process",
"args": [
"build",
"${workspaceFolder}/SharpCompress.sln",
"-c",
"Release",
"/property:GenerateFullPaths=true",
"/consoleloggerparameters:NoSummary;ForceNoAlign"
],
"problemMatcher": "$msCompile",
"group": "build"
},
{
"label": "build-library",
"command": "dotnet",
"type": "process",
"args": [
"build",
"${workspaceFolder}/src/SharpCompress/SharpCompress.csproj",
"/property:GenerateFullPaths=true",
"/consoleloggerparameters:NoSummary;ForceNoAlign"
],
"problemMatcher": "$msCompile",
"group": "build"
},
{
"label": "restore",
"command": "dotnet",
"type": "process",
"args": [
"restore",
"${workspaceFolder}/SharpCompress.sln"
],
"problemMatcher": "$msCompile"
},
{
"label": "clean",
"command": "dotnet",
"type": "process",
"args": [
"clean",
"${workspaceFolder}/SharpCompress.sln"
],
"problemMatcher": "$msCompile"
},
{
"label": "test",
"command": "dotnet",
"type": "process",
"args": [
"test",
"${workspaceFolder}/tests/SharpCompress.Test/SharpCompress.Test.csproj",
"--no-build",
"--verbosity=normal"
],
"problemMatcher": "$msCompile",
"group": {
"kind": "test",
"isDefault": true
},
"dependsOn": "build"
},
{
"label": "test-net10",
"command": "dotnet",
"type": "process",
"args": [
"test",
"${workspaceFolder}/tests/SharpCompress.Test/SharpCompress.Test.csproj",
"-f",
"net10.0",
"--no-build",
"--verbosity=normal"
],
"problemMatcher": "$msCompile",
"group": "test",
"dependsOn": "build"
},
{
"label": "test-net48",
"command": "dotnet",
"type": "process",
"args": [
"test",
"${workspaceFolder}/tests/SharpCompress.Test/SharpCompress.Test.csproj",
"-f",
"net48",
"--no-build",
"--verbosity=normal"
],
"problemMatcher": "$msCompile",
"group": "test",
"dependsOn": "build"
},
{
"label": "format",
"command": "dotnet",
"type": "process",
"args": [
"csharpier",
"."
],
"problemMatcher": []
},
{
"label": "format-check",
"command": "dotnet",
"type": "process",
"args": [
"csharpier",
"check",
"."
],
"problemMatcher": []
},
{
"label": "run-build-script",
"command": "dotnet",
"type": "process",
"args": [
"run",
"--project",
"${workspaceFolder}/build/build.csproj"
],
"problemMatcher": "$msCompile"
},
{
"label": "pack",
"command": "dotnet",
"type": "process",
"args": [
"pack",
"${workspaceFolder}/src/SharpCompress/SharpCompress.csproj",
"-c",
"Release",
"-o",
"${workspaceFolder}/artifacts/"
],
"problemMatcher": "$msCompile",
"dependsOn": "build-release"
},
{
"label": "performance-tests",
"command": "dotnet",
"type": "process",
"args": [
"run",
"--project",
"${workspaceFolder}/tests/SharpCompress.Performance/SharpCompress.Performance.csproj",
"-c",
"Release"
],
"problemMatcher": "$msCompile"
}
]
}

View File

@@ -28,38 +28,14 @@ SharpCompress is a pure C# compression library supporting multiple archive forma
## Code Formatting
**Copilot agents: You MUST run the `format` task after making code changes to ensure consistency.**
- Use CSharpier for code formatting to ensure consistent style across the project
- CSharpier is configured as a local tool in `.config/dotnet-tools.json`
### Commands
1. **Restore tools** (first time only):
```bash
dotnet tool restore
```
2. **Check if files are formatted correctly** (doesn't modify files):
```bash
dotnet csharpier check .
```
- Exit code 0: All files are properly formatted
- Exit code 1: Some files need formatting (will show which files and differences)
3. **Format files** (modifies files):
```bash
dotnet csharpier format .
```
- Formats all files in the project to match CSharpier style
- Run from project root directory
4. **Configure your IDE** to format on save using CSharpier for the best experience
### Additional Notes
- Restore tools with: `dotnet tool restore`
- Format files from the project root with: `dotnet csharpier .`
- **Run `dotnet csharpier .` from the project root after making code changes before committing**
- Configure your IDE to format on save using CSharpier for the best experience
- The project also uses `.editorconfig` for editor settings (indentation, encoding, etc.)
- Let CSharpier handle code style while `.editorconfig` handles editor behavior
- Always run `dotnet csharpier check .` before committing to verify formatting
## Project Setup and Structure
@@ -73,30 +49,6 @@ SharpCompress is a pure C# compression library supporting multiple archive forma
- Use `dotnet test` to run tests
- Solution file: `SharpCompress.sln`
### Directory Structure
```
src/SharpCompress/
├── Archives/ # IArchive implementations (Zip, Tar, Rar, 7Zip, GZip)
├── Readers/ # IReader implementations (forward-only)
├── Writers/ # IWriter implementations (forward-only)
├── Compressors/ # Low-level compression streams (BZip2, Deflate, LZMA, etc.)
├── Factories/ # Format detection and factory pattern
├── Common/ # Shared types (ArchiveType, Entry, Options)
├── Crypto/ # Encryption implementations
└── IO/ # Stream utilities and wrappers
tests/SharpCompress.Test/
├── Zip/, Tar/, Rar/, SevenZip/, GZip/, BZip2/ # Format-specific tests
├── TestBase.cs # Base test class with helper methods
└── TestArchives/ # Test data (not checked into main test project)
```
### Factory Pattern
All format types implement factory interfaces (`IArchiveFactory`, `IReaderFactory`, `IWriterFactory`) for auto-detection:
- `ReaderFactory.Open()` - Auto-detects format by probing stream
- `WriterFactory.Open()` - Creates writer for specified `ArchiveType`
- Factories located in: `src/SharpCompress/Factories/`
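The auto-detection flow described above typically looks like this (paths are placeholders):

```csharp
using System.IO;
using SharpCompress.Common;
using SharpCompress.Readers;

using var stream = File.OpenRead("unknown.archive"); // placeholder path
using var reader = ReaderFactory.Open(stream); // probes the stream to detect the format
while (reader.MoveToNextEntry())
{
    if (!reader.Entry.IsDirectory)
    {
        reader.WriteEntryToDirectory(
            "output",
            new ExtractionOptions() { ExtractFullPath = true, Overwrite = true }
        );
    }
}
```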
## Nullable Reference Types
- Declare variables non-nullable, and check for `null` at entry points.
@@ -164,18 +116,3 @@ SharpCompress supports multiple archive and compression formats:
- Use test archives from `tests/TestArchives` directory for consistency.
- Test stream disposal and `LeaveStreamOpen` behavior.
- Test edge cases: empty archives, large files, corrupted archives, encrypted archives.
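A sketch of the `LeaveStreamOpen` guidance in xUnit form (the archive path is hypothetical):

```csharp
using System.IO;
using SharpCompress.Archives.Zip;
using SharpCompress.Readers;
using Xunit;

public class LeaveStreamOpenTests
{
    [Fact]
    public void Open_WithLeaveStreamOpen_DoesNotDisposeUnderlyingStream()
    {
        using var stream = File.OpenRead("tests/TestArchives/Zip/sample.zip"); // hypothetical path
        using (var archive = ZipArchive.Open(stream, new ReaderOptions { LeaveStreamOpen = true }))
        {
            Assert.NotEmpty(archive.Entries);
        }
        // With LeaveStreamOpen = true, the stream must survive archive disposal.
        Assert.True(stream.CanRead);
    }
}
```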
### Test Organization
- Base class: `TestBase` - Provides `TEST_ARCHIVES_PATH`, `SCRATCH_FILES_PATH`, temp directory management
- Framework: xUnit with AwesomeAssertions
- Test archives: `tests/TestArchives/` - Use existing archives, don't create new ones unnecessarily
- Match naming style of nearby test files
## Common Pitfalls
1. **Don't mix Archive and Reader APIs** - Archive needs a seekable stream, Reader doesn't (see the sketch after this list)
2. **Solid archives (Rar, 7Zip)** - Use `ExtractAllEntries()` for best performance, not individual entry extraction
3. **Stream disposal** - Always set `LeaveStreamOpen` explicitly when needed (default is to close)
4. **Tar + non-seekable stream** - Must provide file size or it will throw
5. **Multi-framework differences** - Some features differ between .NET Framework and modern .NET (e.g., Mono.Posix)
6. **Format detection** - Use `ReaderFactory.Open()` for auto-detection, test with actual archive files
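A sketch of pitfall 1: the Archive API random-accesses a seekable stream, while the Reader API streams forward-only; stdin stands in for any non-seekable source:

```csharp
using System;
using System.IO;
using System.Linq;
using SharpCompress.Archives;
using SharpCompress.Readers;

// Archive API: requires a seekable, readable stream.
using (var file = File.OpenRead("backup.zip")) // placeholder path
using (var archive = ArchiveFactory.Open(file))
{
    Console.WriteLine($"{archive.Entries.Count()} entries"); // random access over entries
}

// Reader API: works on forward-only streams such as stdin or a download.
using (var stdin = Console.OpenStandardInput()) // non-seekable
using (var reader = ReaderFactory.Open(stdin))
{
    while (reader.MoveToNextEntry()) { /* handle entries as they stream past */ }
}
```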

View File

@@ -1,19 +1,19 @@
<Project>
<ItemGroup>
<PackageVersion Include="Bullseye" Version="6.1.0" />
<PackageVersion Include="AwesomeAssertions" Version="9.3.0" />
<PackageVersion Include="Bullseye" Version="6.0.0" />
<PackageVersion Include="AwesomeAssertions" Version="9.2.1" />
<PackageVersion Include="Glob" Version="1.1.9" />
<PackageVersion Include="JetBrains.Profiler.SelfApi" Version="2.5.15" />
<PackageVersion Include="Microsoft.Bcl.AsyncInterfaces" Version="10.0.0" />
<PackageVersion Include="Microsoft.NET.Test.Sdk" Version="18.0.1" />
<PackageVersion Include="JetBrains.Profiler.SelfApi" Version="2.5.14" />
<PackageVersion Include="Microsoft.Bcl.AsyncInterfaces" Version="8.0.0" />
<PackageVersion Include="Microsoft.NET.Test.Sdk" Version="18.0.0" />
<PackageVersion Include="Mono.Posix.NETStandard" Version="1.0.0" />
<PackageVersion Include="SimpleExec" Version="12.1.0" />
<PackageVersion Include="System.Text.Encoding.CodePages" Version="10.0.0" />
<PackageVersion Include="SimpleExec" Version="12.0.0" />
<PackageVersion Include="System.Buffers" Version="4.6.1" />
<PackageVersion Include="System.Memory" Version="4.6.3" />
<PackageVersion Include="System.Text.Encoding.CodePages" Version="8.0.0" />
<PackageVersion Include="xunit" Version="2.9.3" />
<PackageVersion Include="xunit.runner.visualstudio" Version="3.1.5" />
<PackageVersion Include="Microsoft.NET.ILLink.Tasks" Version="10.0.0" />
<PackageVersion Include="ZstdSharp.Port" Version="0.8.6" />
<PackageVersion Include="Microsoft.SourceLink.GitHub" Version="8.0.0" />
<PackageVersion Include="Microsoft.NETFramework.ReferenceAssemblies" Version="1.0.3" />
</ItemGroup>

View File

@@ -22,28 +22,11 @@
| 7Zip (4) | LZMA, LZMA2, BZip2, PPMd, BCJ, BCJ2, Deflate | Decompress | SevenZipArchive | N/A | N/A |
1. SOLID Rars are only supported in the RarReader API.
2. Zip format supports pkware and WinzipAES encryption. However, encrypted LZMA is not supported. Zip64 reading/writing is supported but only with seekable streams as the Zip spec doesn't support Zip64 data in post data descriptors. Deflate64 is only supported for reading. See [Zip Format Notes](#zip-format-notes) for details on multi-volume archives and streaming behavior.
2. Zip format supports pkware and WinzipAES encryption. However, encrypted LZMA is not supported. Zip64 reading/writing is supported but only with seekable streams as the Zip spec doesn't support Zip64 data in post data descriptors. Deflate64 is only supported for reading.
3. The Tar format requires a file size in the header. If no size is specified to the TarWriter and the stream is not seekable, then an exception will be thrown.
4. The 7Zip format doesn't allow for reading as a forward-only stream so 7Zip is only supported through the Archive API. See [7Zip Format Notes](#7zip-format-notes) for details on async extraction behavior.
4. The 7Zip format doesn't allow for reading as a forward-only stream so 7Zip is only supported through the Archive API
5. LZip has no support for extra data like the file name or timestamp. There is a default filename used when looking at the entry Key on the archive.
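Note 3 in code form (a sketch; the `Write` overload with an explicit size parameter is my reading of the TarWriter API and should be verified):

```csharp
using System;
using System.IO;
using SharpCompress.Common;
using SharpCompress.Writers.Tar;

using var output = Console.OpenStandardOutput(); // non-seekable destination
using var writer = new TarWriter(
    output,
    new TarWriterOptions(CompressionType.None, finalizeArchiveOnClose: true)
);
using var source = File.OpenRead("payload.bin"); // placeholder path
// Supplying the size up front lets the header be written without seeking back;
// omitting it on a non-seekable stream is what throws.
writer.Write("payload.bin", source, modificationTime: DateTime.UtcNow, size: source.Length);
```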
### Zip Format Notes
- Multi-volume/split ZIP archives require ZipArchive (seekable streams) as ZipReader cannot seek across volume files.
- ZipReader processes entries from LocalEntry headers (which include directory entries ending with `/`) and intentionally skips DirectoryEntry headers from the central directory, as they are redundant in streaming mode - all entry data comes from LocalEntry headers which ZipReader has already processed.
### 7Zip Format Notes
- **Async Extraction Performance**: When using async extraction methods (e.g., `ExtractAllEntries()` with `MoveToNextEntryAsync()`), each file creates its own decompression stream to avoid state corruption in the LZMA decoder. This is less efficient than synchronous extraction, which can reuse a single decompression stream for multiple files in the same folder.
**Performance Impact**: For archives with many small files in the same compression folder, async extraction will be slower than synchronous extraction because it must:
1. Create a new LZMA decoder for each file
2. Skip through the decompressed data to reach each file's starting position
**Recommendation**: For best performance with 7Zip archives, use synchronous extraction methods (`MoveToNextEntry()` and `WriteEntryToDirectory()`) when possible. Use async methods only when you need to avoid blocking the thread (e.g., in UI applications or async-only contexts).
**Technical Details**: 7Zip archives group files into "folders" (compression units), where all files in a folder share one continuous LZMA-compressed stream. The LZMA decoder maintains internal state (dictionary window, decoder positions) that assumes sequential, non-interruptible processing. Async operations can yield control during awaits, which would corrupt this shared state. To avoid this, async extraction creates a fresh decoder stream for each file.
## Compression Streams
For those who want to directly compress/decompress bits. The single file formats are represented here as well. However, BZip2, LZip and XZ have no metadata (GZip has a little) so using them without something like a Tar file makes little sense.

View File

@@ -1,6 +1,6 @@
# SharpCompress
SharpCompress is a compression library in pure C# for .NET Framework 4.8, .NET 8.0 and .NET 10.0 that can unrar, un7zip, unzip, untar, unbzip2, ungzip, unlzip, unzstd, unarc and unarj with forward-only reading and file random access APIs. Write support for zip/tar/bzip2/gzip/lzip is implemented.
SharpCompress is a compression library in pure C# for .NET Framework 4.6.2, .NET Standard 2.1, .NET 6.0 and .NET 8.0 that can unrar, un7zip, unzip, untar, unbzip2, ungzip, unlzip and unzstd with forward-only reading and file random access APIs. Write support for zip/tar/bzip2/gzip/lzip is implemented.
The major feature is support for non-seekable streams so large files can be processed on the fly (i.e. download stream).

View File

@@ -87,17 +87,20 @@ memoryStream.Position = 0;
### Extract all files from a rar file to a directory using RarArchive
Note: Extracting a solid rar or 7z file needs to be done in sequential order to get acceptable decompression speed.
`ExtractAllEntries` is primarily intended for solid archives (like solid Rar) or 7Zip archives, where sequential extraction provides the best performance. For general/simple extraction with any supported archive type, use `archive.WriteToDirectory()` instead.
It is explicitly recommended to use `ExtractAllEntries` when extracting an entire `IArchive` instead of iterating over all its `Entries`.
Alternatively, use `IArchive.WriteToDirectory`.
```C#
using (var archive = RarArchive.Open("Test.rar"))
{
    // Removed by this revert (direct WriteToDirectory pattern):
    // Simple extraction with RarArchive; this WriteToDirectory pattern works for all archive types
    archive.WriteToDirectory(@"D:\temp", new ExtractionOptions()
    {
        ExtractFullPath = true,
        Overwrite = true
    });

    // Restored by this revert (reader-based extraction):
    using (var reader = archive.ExtractAllEntries())
    {
        reader.WriteAllToDirectory(@"D:\temp", new ExtractionOptions()
        {
            ExtractFullPath = true,
            Overwrite = true
        });
    }
}
```
@@ -113,41 +116,6 @@ using (var archive = RarArchive.Open("Test.rar"))
}
```
### Extract solid Rar or 7Zip archives with manual progress reporting
`ExtractAllEntries` only works for solid archives (Rar) or 7Zip archives. For optimal performance with these archive types, use this method:
```C#
using (var archive = RarArchive.Open("archive.rar")) // Must be solid Rar or 7Zip
{
if (archive.IsSolid || archive.Type == ArchiveType.SevenZip)
{
// Calculate total size for progress reporting
double totalSize = archive.Entries.Where(e => !e.IsDirectory).Sum(e => e.Size);
long completed = 0;
using (var reader = archive.ExtractAllEntries())
{
while (reader.MoveToNextEntry())
{
if (!reader.Entry.IsDirectory)
{
reader.WriteEntryToDirectory(@"D:\output", new ExtractionOptions()
{
ExtractFullPath = true,
Overwrite = true
});
completed += reader.Entry.Size;
double progress = completed / totalSize;
Console.WriteLine($"Progress: {progress:P}");
}
}
}
}
}
```
### Use ReaderFactory to autodetect archive type and Open the entry stream
```C#
@@ -330,12 +298,14 @@ using (var writer = WriterFactory.Open(stream, ArchiveType.Zip, CompressionType.
```C#
using (var archive = ZipArchive.Open("archive.zip"))
{
    // Removed by this revert (WriteToDirectoryAsync extension):
    // Simple async extraction - works for all archive types
    await archive.WriteToDirectoryAsync(
        @"C:\output",
        new ExtractionOptions() { ExtractFullPath = true, Overwrite = true },
        cancellationToken
    );

    // Restored by this revert (reader-based async extraction):
    using (var reader = archive.ExtractAllEntries())
    {
        await reader.WriteAllToDirectoryAsync(
            @"C:\output",
            new ExtractionOptions() { ExtractFullPath = true, Overwrite = true },
            cancellationToken
        );
    }
}
```

View File

@@ -11,7 +11,6 @@ const string Restore = "restore";
const string Build = "build";
const string Test = "test";
const string Format = "format";
const string CheckFormat = "check-format";
const string Publish = "publish";
Target(
@@ -43,20 +42,12 @@ Target(
Target(
Format,
() =>
{
Run("dotnet", "tool restore");
Run("dotnet", "csharpier format .");
}
);
Target(
CheckFormat,
() =>
{
Run("dotnet", "tool restore");
Run("dotnet", "csharpier check .");
}
);
Target(Restore, [CheckFormat], () => Run("dotnet", "restore"));
Target(Restore, [Format], () => Run("dotnet", "restore"));
Target(
Build,
@@ -70,7 +61,7 @@ Target(
Target(
Test,
[Build],
["net10.0", "net48"],
["net8.0", "net48"],
framework =>
{
IEnumerable<string> GetFiles(string d)

View File

@@ -1,7 +1,7 @@
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>net10.0</TargetFramework>
<TargetFramework>net8.0</TargetFramework>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="Bullseye" />

View File

@@ -1,12 +1,12 @@
{
"version": 2,
"dependencies": {
"net10.0": {
"net8.0": {
"Bullseye": {
"type": "Direct",
"requested": "[6.1.0, )",
"resolved": "6.1.0",
"contentHash": "fltnAJDe0BEX5eymXGUq+il2rSUA0pHqUonNDRH2TrvRu8SkU17mYG0IVpdmG2ibtfhdjNrv4CuTCxHOwcozCA=="
"requested": "[6.0.0, )",
"resolved": "6.0.0",
"contentHash": "vgwwXfzs7jJrskWH7saHRMgPzziq/e86QZNWY1MnMxd7e+De7E7EX4K3C7yrvaK9y02SJoLxNxcLG/q5qUAghw=="
},
"Glob": {
"type": "Direct",
@@ -16,9 +16,9 @@
},
"SimpleExec": {
"type": "Direct",
"requested": "[12.1.0, )",
"resolved": "12.1.0",
"contentHash": "PcCSAlMcKr5yTd571MgEMoGmoSr+omwziq2crB47lKP740lrmjuBocAUXHj+Q6LR6aUDFyhszot2wbtFJTClkA=="
"requested": "[12.0.0, )",
"resolved": "12.0.0",
"contentHash": "ptxlWtxC8vM6Y6e3h9ZTxBBkOWnWrm/Sa1HT+2i1xcXY3Hx2hmKDZP5RShPf8Xr9D+ivlrXNy57ktzyH8kyt+Q=="
}
}
}

View File

@@ -1,6 +1,6 @@
{
"sdk": {
"version": "10.0.100",
"version": "8.0.100",
"rollForward": "latestFeature"
}
}

View File

@@ -8,7 +8,7 @@ using SharpCompress.Readers;
namespace SharpCompress.Archives;
public abstract class AbstractArchive<TEntry, TVolume> : IArchive
public abstract class AbstractArchive<TEntry, TVolume> : IArchive, IArchiveExtractionListener
where TEntry : IArchiveEntry
where TVolume : IVolume
{
@@ -17,6 +17,11 @@ public abstract class AbstractArchive<TEntry, TVolume> : IArchive
private bool _disposed;
private readonly SourceStream? _sourceStream;
public event EventHandler<ArchiveExtractionEventArgs<IArchiveEntry>>? EntryExtractionBegin;
public event EventHandler<ArchiveExtractionEventArgs<IArchiveEntry>>? EntryExtractionEnd;
public event EventHandler<CompressedBytesReadEventArgs>? CompressedBytesRead;
public event EventHandler<FilePartExtractionBeginEventArgs>? FilePartExtractionBegin;
protected ReaderOptions ReaderOptions { get; }
internal AbstractArchive(ArchiveType type, SourceStream sourceStream)
@@ -38,6 +43,12 @@ public abstract class AbstractArchive<TEntry, TVolume> : IArchive
public ArchiveType Type { get; }
void IArchiveExtractionListener.FireEntryExtractionBegin(IArchiveEntry entry) =>
EntryExtractionBegin?.Invoke(this, new ArchiveExtractionEventArgs<IArchiveEntry>(entry));
void IArchiveExtractionListener.FireEntryExtractionEnd(IArchiveEntry entry) =>
EntryExtractionEnd?.Invoke(this, new ArchiveExtractionEventArgs<IArchiveEntry>(entry));
private static Stream CheckStreams(Stream stream)
{
if (!stream.CanSeek || !stream.CanRead)
@@ -88,12 +99,38 @@ public abstract class AbstractArchive<TEntry, TVolume> : IArchive
}
}
private void EnsureEntriesLoaded()
void IArchiveExtractionListener.EnsureEntriesLoaded()
{
_lazyEntries.EnsureFullyLoaded();
_lazyVolumes.EnsureFullyLoaded();
}
void IExtractionListener.FireCompressedBytesRead(
long currentPartCompressedBytes,
long compressedReadBytes
) =>
CompressedBytesRead?.Invoke(
this,
new CompressedBytesReadEventArgs(
currentFilePartCompressedBytesRead: currentPartCompressedBytes,
compressedBytesRead: compressedReadBytes
)
);
void IExtractionListener.FireFilePartExtractionBegin(
string name,
long size,
long compressedSize
) =>
FilePartExtractionBegin?.Invoke(
this,
new FilePartExtractionBeginEventArgs(
compressedSize: compressedSize,
size: size,
name: name
)
);
/// <summary>
/// Use this method to extract all entries in an archive in order.
/// This is primarily for SOLID Rar Archives or 7Zip Archives as they need to be
@@ -109,11 +146,11 @@ public abstract class AbstractArchive<TEntry, TVolume> : IArchive
{
if (!IsSolid && Type != ArchiveType.SevenZip)
{
throw new SharpCompressException(
throw new InvalidOperationException(
"ExtractAllEntries can only be used on solid archives or 7Zip archives (which require random access)."
);
}
EnsureEntriesLoaded();
((IArchiveExtractionListener)this).EnsureEntriesLoaded();
return CreateReaderForSolidExtraction();
}
@@ -124,11 +161,6 @@ public abstract class AbstractArchive<TEntry, TVolume> : IArchive
/// </summary>
public virtual bool IsSolid => false;
/// <summary>
/// Archive is ENCRYPTED (this means the Archive has password-protected files).
/// </summary>
public virtual bool IsEncrypted => false;
/// <summary>
/// The archive can find all the parts of the archive needed to fully extract the archive. This forces the parsing of the entire archive.
/// </summary>
@@ -136,7 +168,7 @@ public abstract class AbstractArchive<TEntry, TVolume> : IArchive
{
get
{
EnsureEntriesLoaded();
((IArchiveExtractionListener)this).EnsureEntriesLoaded();
return Entries.All(x => x.IsComplete);
}
}

View File

@@ -2,8 +2,6 @@ using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.Common;
using SharpCompress.Factories;
using SharpCompress.IO;
@@ -22,31 +20,10 @@ public static class ArchiveFactory
public static IArchive Open(Stream stream, ReaderOptions? readerOptions = null)
{
readerOptions ??= new ReaderOptions();
stream = SharpCompressStream.Create(stream, bufferSize: readerOptions.BufferSize);
stream = new SharpCompressStream(stream, bufferSize: readerOptions.BufferSize);
return FindFactory<IArchiveFactory>(stream).Open(stream, readerOptions);
}
/// <summary>
/// Opens an Archive for random access asynchronously
/// </summary>
/// <param name="stream"></param>
/// <param name="readerOptions"></param>
/// <param name="cancellationToken"></param>
/// <returns></returns>
public static async Task<IArchive> OpenAsync(
Stream stream,
ReaderOptions? readerOptions = null,
CancellationToken cancellationToken = default
)
{
readerOptions ??= new ReaderOptions();
stream = SharpCompressStream.Create(stream, bufferSize: readerOptions.BufferSize);
var factory = FindFactory<IArchiveFactory>(stream);
return await factory
.OpenAsync(stream, readerOptions, cancellationToken)
.ConfigureAwait(false);
}
public static IWritableArchive Create(ArchiveType type)
{
var factory = Factory
@@ -72,22 +49,6 @@ public static class ArchiveFactory
return Open(new FileInfo(filePath), options);
}
/// <summary>
/// Opens an Archive from a filepath asynchronously.
/// </summary>
/// <param name="filePath"></param>
/// <param name="options"></param>
/// <param name="cancellationToken"></param>
public static Task<IArchive> OpenAsync(
string filePath,
ReaderOptions? options = null,
CancellationToken cancellationToken = default
)
{
filePath.NotNullOrEmpty(nameof(filePath));
return OpenAsync(new FileInfo(filePath), options, cancellationToken);
}
/// <summary>
/// Constructor with a FileInfo object to an existing file.
/// </summary>
@@ -100,24 +61,6 @@ public static class ArchiveFactory
return FindFactory<IArchiveFactory>(fileInfo).Open(fileInfo, options);
}
/// <summary>
/// Opens an Archive from a FileInfo object asynchronously.
/// </summary>
/// <param name="fileInfo"></param>
/// <param name="options"></param>
/// <param name="cancellationToken"></param>
public static async Task<IArchive> OpenAsync(
FileInfo fileInfo,
ReaderOptions? options = null,
CancellationToken cancellationToken = default
)
{
options ??= new ReaderOptions { LeaveStreamOpen = false };
var factory = FindFactory<IArchiveFactory>(fileInfo);
return await factory.OpenAsync(fileInfo, options, cancellationToken).ConfigureAwait(false);
}
/// <summary>
/// Constructor with IEnumerable FileInfo objects, multi and split support.
/// </summary>
@@ -144,40 +87,6 @@ public static class ArchiveFactory
return FindFactory<IMultiArchiveFactory>(fileInfo).Open(filesArray, options);
}
/// <summary>
/// Opens a multi-part archive from files asynchronously.
/// </summary>
/// <param name="fileInfos"></param>
/// <param name="options"></param>
/// <param name="cancellationToken"></param>
public static async Task<IArchive> OpenAsync(
IEnumerable<FileInfo> fileInfos,
ReaderOptions? options = null,
CancellationToken cancellationToken = default
)
{
fileInfos.NotNull(nameof(fileInfos));
var filesArray = fileInfos.ToArray();
if (filesArray.Length == 0)
{
throw new InvalidOperationException("No files to open");
}
var fileInfo = filesArray[0];
if (filesArray.Length == 1)
{
return await OpenAsync(fileInfo, options, cancellationToken).ConfigureAwait(false);
}
fileInfo.NotNull(nameof(fileInfo));
options ??= new ReaderOptions { LeaveStreamOpen = false };
var factory = FindFactory<IMultiArchiveFactory>(fileInfo);
return await factory
.OpenAsync(filesArray, options, cancellationToken)
.ConfigureAwait(false);
}
/// <summary>
/// Constructor with IEnumerable FileInfo objects, multi and split support.
/// </summary>
@@ -204,41 +113,6 @@ public static class ArchiveFactory
return FindFactory<IMultiArchiveFactory>(firstStream).Open(streamsArray, options);
}
/// <summary>
/// Opens a multi-part archive from streams asynchronously.
/// </summary>
/// <param name="streams"></param>
/// <param name="options"></param>
/// <param name="cancellationToken"></param>
public static async Task<IArchive> OpenAsync(
IEnumerable<Stream> streams,
ReaderOptions? options = null,
CancellationToken cancellationToken = default
)
{
cancellationToken.ThrowIfCancellationRequested();
streams.NotNull(nameof(streams));
var streamsArray = streams.ToArray();
if (streamsArray.Length == 0)
{
throw new InvalidOperationException("No streams");
}
var firstStream = streamsArray[0];
if (streamsArray.Length == 1)
{
return await OpenAsync(firstStream, options, cancellationToken).ConfigureAwait(false);
}
firstStream.NotNull(nameof(firstStream));
options ??= new ReaderOptions();
var factory = FindFactory<IMultiArchiveFactory>(firstStream);
return await factory
.OpenAsync(streamsArray, options, cancellationToken)
.ConfigureAwait(false);
}
/// <summary>
/// Extract to specific directory, retaining filename
/// </summary>

View File

@@ -1,8 +1,6 @@
using System;
using System.Collections.Generic;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.Common;
using SharpCompress.Readers;
@@ -22,30 +20,11 @@ class AutoArchiveFactory : IArchiveFactory
int bufferSize = ReaderOptions.DefaultBufferSize
) => throw new NotSupportedException();
public Task<bool> IsArchiveAsync(
Stream stream,
string? password = null,
int bufferSize = ReaderOptions.DefaultBufferSize,
CancellationToken cancellationToken = default
) => throw new NotSupportedException();
public FileInfo? GetFilePart(int index, FileInfo part1) => throw new NotSupportedException();
public IArchive Open(Stream stream, ReaderOptions? readerOptions = null) =>
ArchiveFactory.Open(stream, readerOptions);
public Task<IArchive> OpenAsync(
Stream stream,
ReaderOptions? readerOptions = null,
CancellationToken cancellationToken = default
) => ArchiveFactory.OpenAsync(stream, readerOptions, cancellationToken);
public IArchive Open(FileInfo fileInfo, ReaderOptions? readerOptions = null) =>
ArchiveFactory.Open(fileInfo, readerOptions);
public Task<IArchive> OpenAsync(
FileInfo fileInfo,
ReaderOptions? readerOptions = null,
CancellationToken cancellationToken = default
) => ArchiveFactory.OpenAsync(fileInfo, readerOptions, cancellationToken);
}

View File

@@ -102,70 +102,6 @@ public class GZipArchive : AbstractWritableArchive<GZipArchiveEntry, GZipVolume>
);
}
/// <summary>
/// Opens a GZipArchive asynchronously from a stream.
/// </summary>
/// <param name="stream"></param>
/// <param name="readerOptions"></param>
/// <param name="cancellationToken"></param>
public static async Task<IArchive> OpenAsync(
Stream stream,
ReaderOptions? readerOptions = null,
CancellationToken cancellationToken = default
)
{
cancellationToken.ThrowIfCancellationRequested();
return await Task.FromResult(Open(stream, readerOptions)).ConfigureAwait(false);
}
/// <summary>
/// Opens a GZipArchive asynchronously from a FileInfo.
/// </summary>
/// <param name="fileInfo"></param>
/// <param name="readerOptions"></param>
/// <param name="cancellationToken"></param>
public static async Task<IArchive> OpenAsync(
FileInfo fileInfo,
ReaderOptions? readerOptions = null,
CancellationToken cancellationToken = default
)
{
cancellationToken.ThrowIfCancellationRequested();
return await Task.FromResult(Open(fileInfo, readerOptions)).ConfigureAwait(false);
}
/// <summary>
/// Opens a GZipArchive asynchronously from multiple streams.
/// </summary>
/// <param name="streams"></param>
/// <param name="readerOptions"></param>
/// <param name="cancellationToken"></param>
public static async Task<IArchive> OpenAsync(
IReadOnlyList<Stream> streams,
ReaderOptions? readerOptions = null,
CancellationToken cancellationToken = default
)
{
cancellationToken.ThrowIfCancellationRequested();
return await Task.FromResult(Open(streams, readerOptions)).ConfigureAwait(false);
}
/// <summary>
/// Opens a GZipArchive asynchronously from multiple FileInfo objects.
/// </summary>
/// <param name="fileInfos"></param>
/// <param name="readerOptions"></param>
/// <param name="cancellationToken"></param>
public static async Task<IArchive> OpenAsync(
IReadOnlyList<FileInfo> fileInfos,
ReaderOptions? readerOptions = null,
CancellationToken cancellationToken = default
)
{
cancellationToken.ThrowIfCancellationRequested();
return await Task.FromResult(Open(fileInfos, readerOptions)).ConfigureAwait(false);
}
public static GZipArchive Create() => new();
/// <summary>
@@ -231,28 +167,6 @@ public class GZipArchive : AbstractWritableArchive<GZipArchiveEntry, GZipVolume>
return true;
}
public static async Task<bool> IsGZipFileAsync(
Stream stream,
CancellationToken cancellationToken = default
)
{
// read the header on the first read
byte[] header = new byte[10];
// workitem 8501: handle edge case (decompress empty stream)
if (!await stream.ReadFullyAsync(header, cancellationToken).ConfigureAwait(false))
{
return false;
}
if (header[0] != 0x1F || header[1] != 0x8B || header[2] != 8)
{
return false;
}
return true;
}
internal GZipArchive()
: base(ArchiveType.GZip) { }

View File

@@ -7,6 +7,12 @@ namespace SharpCompress.Archives;
public interface IArchive : IDisposable
{
event EventHandler<ArchiveExtractionEventArgs<IArchiveEntry>> EntryExtractionBegin;
event EventHandler<ArchiveExtractionEventArgs<IArchiveEntry>> EntryExtractionEnd;
event EventHandler<CompressedBytesReadEventArgs> CompressedBytesRead;
event EventHandler<FilePartExtractionBeginEventArgs> FilePartExtractionBegin;
IEnumerable<IArchiveEntry> Entries { get; }
IEnumerable<IVolume> Volumes { get; }
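For orientation, the restored events can be consumed roughly as below. This is a sketch: the `Item` and `CompressedBytesRead` property names are assumptions inferred from the event-args constructor arguments shown in the `AbstractArchive` diff above.

```csharp
using System;
using SharpCompress.Archives;

using var archive = ArchiveFactory.Open("archive.zip"); // placeholder path
archive.EntryExtractionBegin += (_, e) =>
    Console.WriteLine($"Extracting {e.Item.Key}..."); // Item assumed to expose the entry
archive.CompressedBytesRead += (_, e) =>
    Console.WriteLine($"{e.CompressedBytesRead} compressed bytes read"); // assumed property
archive.WriteToDirectory(@"C:\output");
```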

View File

@@ -1,4 +1,3 @@
using System;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
@@ -9,153 +8,126 @@ namespace SharpCompress.Archives;
public static class IArchiveEntryExtensions
{
private const int BufferSize = 81920;
/// <param name="archiveEntry">The archive entry to extract.</param>
extension(IArchiveEntry archiveEntry)
public static void WriteTo(this IArchiveEntry archiveEntry, Stream streamToWriteTo)
{
/// <summary>
/// Extract entry to the specified stream.
/// </summary>
/// <param name="streamToWriteTo">The stream to write the entry content to.</param>
/// <param name="progress">Optional progress reporter for tracking extraction progress.</param>
public void WriteTo(Stream streamToWriteTo, IProgress<ProgressReport>? progress = null)
if (archiveEntry.IsDirectory)
{
if (archiveEntry.IsDirectory)
{
throw new ExtractionException("Entry is a file directory and cannot be extracted.");
}
using var entryStream = archiveEntry.OpenEntryStream();
var sourceStream = WrapWithProgress(entryStream, archiveEntry, progress);
sourceStream.CopyTo(streamToWriteTo, BufferSize);
throw new ExtractionException("Entry is a file directory and cannot be extracted.");
}
/// <summary>
/// Extract entry to the specified stream asynchronously.
/// </summary>
/// <param name="streamToWriteTo">The stream to write the entry content to.</param>
/// <param name="cancellationToken">Cancellation token.</param>
/// <param name="progress">Optional progress reporter for tracking extraction progress.</param>
public async Task WriteToAsync(
Stream streamToWriteTo,
IProgress<ProgressReport>? progress = null,
CancellationToken cancellationToken = default
)
var streamListener = (IArchiveExtractionListener)archiveEntry.Archive;
streamListener.EnsureEntriesLoaded();
streamListener.FireEntryExtractionBegin(archiveEntry);
streamListener.FireFilePartExtractionBegin(
archiveEntry.Key ?? "Key",
archiveEntry.Size,
archiveEntry.CompressedSize
);
var entryStream = archiveEntry.OpenEntryStream();
using (entryStream)
{
if (archiveEntry.IsDirectory)
{
throw new ExtractionException("Entry is a file directory and cannot be extracted.");
}
using var entryStream = await archiveEntry.OpenEntryStreamAsync(cancellationToken);
var sourceStream = WrapWithProgress(entryStream, archiveEntry, progress);
await sourceStream
.CopyToAsync(streamToWriteTo, BufferSize, cancellationToken)
.ConfigureAwait(false);
using Stream s = new ListeningStream(streamListener, entryStream);
s.CopyTo(streamToWriteTo);
}
streamListener.FireEntryExtractionEnd(archiveEntry);
}
private static Stream WrapWithProgress(
Stream source,
IArchiveEntry entry,
IProgress<ProgressReport>? progress
public static async Task WriteToAsync(
this IArchiveEntry archiveEntry,
Stream streamToWriteTo,
CancellationToken cancellationToken = default
)
{
if (progress is null)
if (archiveEntry.IsDirectory)
{
return source;
throw new ExtractionException("Entry is a file directory and cannot be extracted.");
}
var entryPath = entry.Key ?? string.Empty;
var totalBytes = GetEntrySizeSafe(entry);
return new ProgressReportingStream(
source,
progress,
entryPath,
totalBytes,
leaveOpen: true
var streamListener = (IArchiveExtractionListener)archiveEntry.Archive;
streamListener.EnsureEntriesLoaded();
streamListener.FireEntryExtractionBegin(archiveEntry);
streamListener.FireFilePartExtractionBegin(
archiveEntry.Key ?? "Key",
archiveEntry.Size,
archiveEntry.CompressedSize
);
}
private static long? GetEntrySizeSafe(IArchiveEntry entry)
{
try
var entryStream = archiveEntry.OpenEntryStream();
using (entryStream)
{
var size = entry.Size;
return size >= 0 ? size : null;
}
catch (NotImplementedException)
{
return null;
using Stream s = new ListeningStream(streamListener, entryStream);
await s.CopyToAsync(streamToWriteTo, 81920, cancellationToken).ConfigureAwait(false);
}
streamListener.FireEntryExtractionEnd(archiveEntry);
}
extension(IArchiveEntry entry)
{
/// <summary>
/// Extract to specific directory, retaining filename
/// </summary>
public void WriteToDirectory(
string destinationDirectory,
ExtractionOptions? options = null
) =>
ExtractionMethods.WriteEntryToDirectory(
entry,
destinationDirectory,
options,
entry.WriteToFile
);
/// <summary>
/// Extract to specific directory, retaining filename
/// </summary>
public static void WriteToDirectory(
this IArchiveEntry entry,
string destinationDirectory,
ExtractionOptions? options = null
) =>
ExtractionMethods.WriteEntryToDirectory(
entry,
destinationDirectory,
options,
entry.WriteToFile
);
/// <summary>
/// Extract to specific directory asynchronously, retaining filename
/// </summary>
public Task WriteToDirectoryAsync(
string destinationDirectory,
ExtractionOptions? options = null,
CancellationToken cancellationToken = default
) =>
ExtractionMethods.WriteEntryToDirectoryAsync(
entry,
destinationDirectory,
options,
entry.WriteToFileAsync,
cancellationToken
);
/// <summary>
/// Extract to specific directory asynchronously, retaining filename
/// </summary>
public static Task WriteToDirectoryAsync(
this IArchiveEntry entry,
string destinationDirectory,
ExtractionOptions? options = null,
CancellationToken cancellationToken = default
) =>
ExtractionMethods.WriteEntryToDirectoryAsync(
entry,
destinationDirectory,
options,
(x, opt) => entry.WriteToFileAsync(x, opt, cancellationToken),
cancellationToken
);
/// <summary>
/// Extract to specific file
/// </summary>
public void WriteToFile(string destinationFileName, ExtractionOptions? options = null) =>
ExtractionMethods.WriteEntryToFile(
entry,
destinationFileName,
options,
(x, fm) =>
{
using var fs = File.Open(destinationFileName, fm);
entry.WriteTo(fs);
}
);
/// <summary>
/// Extract to specific file
/// </summary>
public static void WriteToFile(
this IArchiveEntry entry,
string destinationFileName,
ExtractionOptions? options = null
) =>
ExtractionMethods.WriteEntryToFile(
entry,
destinationFileName,
options,
(x, fm) =>
{
using var fs = File.Open(destinationFileName, fm);
entry.WriteTo(fs);
}
);
/// <summary>
/// Extract to specific file asynchronously
/// </summary>
public Task WriteToFileAsync(
string destinationFileName,
ExtractionOptions? options = null,
CancellationToken cancellationToken = default
) =>
ExtractionMethods.WriteEntryToFileAsync(
entry,
destinationFileName,
options,
async (x, fm, ct) =>
{
using var fs = File.Open(destinationFileName, fm);
await entry.WriteToAsync(fs, null, ct).ConfigureAwait(false);
},
cancellationToken
);
}
/// <summary>
/// Extract to specific file asynchronously
/// </summary>
public static Task WriteToFileAsync(
this IArchiveEntry entry,
string destinationFileName,
ExtractionOptions? options = null,
CancellationToken cancellationToken = default
) =>
ExtractionMethods.WriteEntryToFileAsync(
entry,
destinationFileName,
options,
async (x, fm) =>
{
using var fs = File.Open(destinationFileName, fm);
await entry.WriteToAsync(fs, cancellationToken).ConfigureAwait(false);
}
);
}

View File

@@ -1,8 +1,8 @@
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.Common;
using SharpCompress.Readers;
@@ -10,159 +10,76 @@ namespace SharpCompress.Archives;
public static class IArchiveExtensions
{
/// <param name="archive">The archive to extract.</param>
extension(IArchive archive)
/// <summary>
/// Extract to specific directory, retaining filename
/// </summary>
public static void WriteToDirectory(
this IArchive archive,
string destinationDirectory,
ExtractionOptions? options = null
)
{
/// <summary>
/// Extract to specific directory with progress reporting
/// </summary>
/// <param name="destinationDirectory">The folder to extract into.</param>
/// <param name="options">Extraction options.</param>
/// <param name="progress">Optional progress reporter for tracking extraction progress.</param>
public void WriteToDirectory(
string destinationDirectory,
ExtractionOptions? options = null,
IProgress<ProgressReport>? progress = null
)
using var reader = archive.ExtractAllEntries();
reader.WriteAllToDirectory(destinationDirectory, options);
}
/// <summary>
/// Extracts the archive to the destination directory. Directories will be created as needed.
/// </summary>
/// <param name="archive">The archive to extract.</param>
/// <param name="destination">The folder to extract into.</param>
/// <param name="progressReport">Optional progress report callback.</param>
/// <param name="cancellationToken">Optional cancellation token.</param>
public static void ExtractToDirectory(
this IArchive archive,
string destination,
Action<double>? progressReport = null,
CancellationToken cancellationToken = default
)
{
// Prepare for progress reporting
var totalBytes = archive.TotalUncompressSize;
var bytesRead = 0L;
// Tracking for created directories.
var seenDirectories = new HashSet<string>();
// Extract
foreach (var entry in archive.Entries)
{
// For solid archives (Rar, 7Zip), use the optimized reader-based approach
if (archive.IsSolid || archive.Type == ArchiveType.SevenZip)
{
using var reader = archive.ExtractAllEntries();
reader.WriteAllToDirectory(destinationDirectory, options);
}
else
{
// For non-solid archives, extract entries directly
archive.WriteToDirectoryInternal(destinationDirectory, options, progress);
}
}
cancellationToken.ThrowIfCancellationRequested();
private void WriteToDirectoryInternal(
string destinationDirectory,
ExtractionOptions? options,
IProgress<ProgressReport>? progress
)
{
// Prepare for progress reporting
var totalBytes = archive.TotalUncompressSize;
var bytesRead = 0L;
// Tracking for created directories.
var seenDirectories = new HashSet<string>();
// Extract
foreach (var entry in archive.Entries)
if (entry.IsDirectory)
{
if (entry.IsDirectory)
var dirPath = Path.Combine(destination, entry.Key.NotNull("Entry Key is null"));
if (
Path.GetDirectoryName(dirPath + "/") is { } emptyDirectory
&& seenDirectories.Add(dirPath)
)
{
var dirPath = Path.Combine(
destinationDirectory,
entry.Key.NotNull("Entry Key is null")
);
if (
Path.GetDirectoryName(dirPath + "/") is { } parentDirectory
&& seenDirectories.Add(dirPath)
)
{
Directory.CreateDirectory(parentDirectory);
}
continue;
Directory.CreateDirectory(emptyDirectory);
}
// Use the entry's WriteToDirectory method which respects ExtractionOptions
entry.WriteToDirectory(destinationDirectory, options);
// Update progress
bytesRead += entry.Size;
progress?.Report(
new ProgressReport(entry.Key ?? string.Empty, bytesRead, totalBytes)
);
continue;
}
}
/// <summary>
/// Extract to specific directory asynchronously with progress reporting and cancellation support
/// </summary>
/// <param name="destinationDirectory">The folder to extract into.</param>
/// <param name="options">Extraction options.</param>
/// <param name="progress">Optional progress reporter for tracking extraction progress.</param>
/// <param name="cancellationToken">Optional cancellation token.</param>
public async Task WriteToDirectoryAsync(
string destinationDirectory,
ExtractionOptions? options = null,
IProgress<ProgressReport>? progress = null,
CancellationToken cancellationToken = default
)
{
// For solid archives (Rar, 7Zip), use the optimized reader-based approach
if (archive.IsSolid || archive.Type == ArchiveType.SevenZip)
// Create each directory if not already created
var path = Path.Combine(destination, entry.Key.NotNull("Entry Key is null"));
if (Path.GetDirectoryName(path) is { } directory)
{
using var reader = archive.ExtractAllEntries();
await reader.WriteAllToDirectoryAsync(
destinationDirectory,
options,
cancellationToken
);
}
else
{
// For non-solid archives, extract entries directly
await archive.WriteToDirectoryAsyncInternal(
destinationDirectory,
options,
progress,
cancellationToken
);
}
}
private async Task WriteToDirectoryAsyncInternal(
string destinationDirectory,
ExtractionOptions? options,
IProgress<ProgressReport>? progress,
CancellationToken cancellationToken
)
{
// Prepare for progress reporting
var totalBytes = archive.TotalUncompressSize;
var bytesRead = 0L;
// Tracking for created directories.
var seenDirectories = new HashSet<string>();
// Extract
foreach (var entry in archive.Entries)
{
cancellationToken.ThrowIfCancellationRequested();
if (entry.IsDirectory)
{
var dirPath = Path.Combine(
destinationDirectory,
entry.Key.NotNull("Entry Key is null")
);
if (
Path.GetDirectoryName(dirPath + "/") is { } parentDirectory
&& seenDirectories.Add(dirPath)
)
{
Directory.CreateDirectory(parentDirectory);
}
continue;
}
// Use the entry's WriteToDirectoryAsync method which respects ExtractionOptions
await entry
.WriteToDirectoryAsync(destinationDirectory, options, cancellationToken)
.ConfigureAwait(false);
// Update progress
bytesRead += entry.Size;
progress?.Report(
new ProgressReport(entry.Key ?? string.Empty, bytesRead, totalBytes)
);
}
}
}
}

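For context, a hedged sketch of how the progress-reporting overload above would be driven from caller code; the paths, the option values, and the availability of WriteToDirectoryAsync on an opened IArchive are assumptions (this API is touched elsewhere in this compare, and ProgressReport is deleted further down):

// Hypothetical caller for the WriteToDirectoryAsync shown above.
using System;
using System.Threading;
using SharpCompress.Archives;
using SharpCompress.Common;

var progress = new Progress<ProgressReport>(r =>
    Console.WriteLine($"{r.EntryPath}: {r.PercentComplete?.ToString("F1") ?? "?"}%"));

using var archive = ArchiveFactory.Open("example.zip"); // illustrative path
await archive.WriteToDirectoryAsync(
    "output",
    new ExtractionOptions { ExtractFullPath = true, Overwrite = true },
    progress,
    CancellationToken.None
);
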
View File

@@ -0,0 +1,10 @@
using SharpCompress.Common;
namespace SharpCompress.Archives;
internal interface IArchiveExtractionListener : IExtractionListener
{
void EnsureEntriesLoaded();
void FireEntryExtractionBegin(IArchiveEntry entry);
void FireEntryExtractionEnd(IArchiveEntry entry);
}

View File

@@ -1,6 +1,4 @@
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.Factories;
using SharpCompress.Readers;
@@ -28,34 +26,10 @@ public interface IArchiveFactory : IFactory
/// <param name="readerOptions">reading options.</param>
IArchive Open(Stream stream, ReaderOptions? readerOptions = null);
/// <summary>
/// Opens an Archive for random access asynchronously.
/// </summary>
/// <param name="stream">An open, readable and seekable stream.</param>
/// <param name="readerOptions">reading options.</param>
/// <param name="cancellationToken">Cancellation token.</param>
Task<IArchive> OpenAsync(
Stream stream,
ReaderOptions? readerOptions = null,
CancellationToken cancellationToken = default
);
/// <summary>
/// Constructor with a FileInfo object to an existing file.
/// </summary>
/// <param name="fileInfo">the file to open.</param>
/// <param name="readerOptions">reading options.</param>
IArchive Open(FileInfo fileInfo, ReaderOptions? readerOptions = null);
/// <summary>
/// Opens an Archive from a FileInfo object asynchronously.
/// </summary>
/// <param name="fileInfo">the file to open.</param>
/// <param name="readerOptions">reading options.</param>
/// <param name="cancellationToken">Cancellation token.</param>
Task<IArchive> OpenAsync(
FileInfo fileInfo,
ReaderOptions? readerOptions = null,
CancellationToken cancellationToken = default
);
}

View File

@@ -1,7 +1,5 @@
using System.Collections.Generic;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.Factories;
using SharpCompress.Readers;
@@ -29,34 +27,10 @@ public interface IMultiArchiveFactory : IFactory
/// <param name="readerOptions">reading options.</param>
IArchive Open(IReadOnlyList<Stream> streams, ReaderOptions? readerOptions = null);
/// <summary>
/// Opens a multi-part archive from streams asynchronously.
/// </summary>
/// <param name="streams"></param>
/// <param name="readerOptions">reading options.</param>
/// <param name="cancellationToken">Cancellation token.</param>
Task<IArchive> OpenAsync(
IReadOnlyList<Stream> streams,
ReaderOptions? readerOptions = null,
CancellationToken cancellationToken = default
);
/// <summary>
/// Constructor with IEnumerable Stream objects, multi and split support.
/// </summary>
/// <param name="fileInfos"></param>
/// <param name="readerOptions">reading options.</param>
IArchive Open(IReadOnlyList<FileInfo> fileInfos, ReaderOptions? readerOptions = null);
/// <summary>
/// Opens a multi-part archive from files asynchronously.
/// </summary>
/// <param name="fileInfos"></param>
/// <param name="readerOptions">reading options.</param>
/// <param name="cancellationToken">Cancellation token.</param>
Task<IArchive> OpenAsync(
IReadOnlyList<FileInfo> fileInfos,
ReaderOptions? readerOptions = null,
CancellationToken cancellationToken = default
);
}

View File

@@ -2,8 +2,6 @@ using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.Common;
using SharpCompress.Common.Rar;
using SharpCompress.Common.Rar.Headers;
@@ -86,8 +84,6 @@ public class RarArchive : AbstractArchive<RarArchiveEntry, RarVolume>
public override bool IsSolid => Volumes.First().IsSolidArchive;
public override bool IsEncrypted => Entries.First(x => !x.IsDirectory).IsEncrypted;
public virtual int MinVersion => Volumes.First().MinVersion;
public virtual int MaxVersion => Volumes.First().MaxVersion;
@@ -183,70 +179,6 @@ public class RarArchive : AbstractArchive<RarArchiveEntry, RarVolume>
);
}
/// <summary>
/// Opens a RarArchive asynchronously from a stream.
/// </summary>
/// <param name="stream"></param>
/// <param name="readerOptions"></param>
/// <param name="cancellationToken"></param>
public static async Task<IArchive> OpenAsync(
Stream stream,
ReaderOptions? readerOptions = null,
CancellationToken cancellationToken = default
)
{
cancellationToken.ThrowIfCancellationRequested();
return await Task.FromResult(Open(stream, readerOptions)).ConfigureAwait(false);
}
/// <summary>
/// Opens a RarArchive asynchronously from a FileInfo.
/// </summary>
/// <param name="fileInfo"></param>
/// <param name="readerOptions"></param>
/// <param name="cancellationToken"></param>
public static async Task<IArchive> OpenAsync(
FileInfo fileInfo,
ReaderOptions? readerOptions = null,
CancellationToken cancellationToken = default
)
{
cancellationToken.ThrowIfCancellationRequested();
return await Task.FromResult(Open(fileInfo, readerOptions)).ConfigureAwait(false);
}
/// <summary>
/// Opens a RarArchive asynchronously from multiple streams.
/// </summary>
/// <param name="streams"></param>
/// <param name="readerOptions"></param>
/// <param name="cancellationToken"></param>
public static async Task<IArchive> OpenAsync(
IReadOnlyList<Stream> streams,
ReaderOptions? readerOptions = null,
CancellationToken cancellationToken = default
)
{
cancellationToken.ThrowIfCancellationRequested();
return await Task.FromResult(Open(streams, readerOptions)).ConfigureAwait(false);
}
/// <summary>
/// Opens a RarArchive asynchronously from multiple FileInfo objects.
/// </summary>
/// <param name="fileInfos"></param>
/// <param name="readerOptions"></param>
/// <param name="cancellationToken"></param>
public static async Task<IArchive> OpenAsync(
IReadOnlyList<FileInfo> fileInfos,
ReaderOptions? readerOptions = null,
CancellationToken cancellationToken = default
)
{
cancellationToken.ThrowIfCancellationRequested();
return await Task.FromResult(Open(fileInfos, readerOptions)).ConfigureAwait(false);
}
public static bool IsRarFile(string filePath) => IsRarFile(new FileInfo(filePath));
public static bool IsRarFile(FileInfo fileInfo)

View File

@@ -70,51 +70,24 @@ public class RarArchiveEntry : RarEntry, IArchiveEntry
public Stream OpenEntryStream()
{
if (IsRarV3)
{
return new RarStream(
archive.UnpackV1.Value,
FileHeader,
new MultiVolumeReadOnlyStream(Parts.Cast<RarFilePart>())
);
}
return new RarStream(
archive.UnpackV2017.Value,
FileHeader,
new MultiVolumeReadOnlyStream(Parts.Cast<RarFilePart>(), archive)
);
}
public Task<Stream> OpenEntryStreamAsync(CancellationToken cancellationToken = default) =>
Task.FromResult(OpenEntryStream());
public bool IsComplete
{

View File

@@ -2,8 +2,6 @@ using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.Common;
using SharpCompress.Common.SevenZip;
using SharpCompress.Compressors.LZMA.Utilites;
@@ -105,70 +103,6 @@ public class SevenZipArchive : AbstractArchive<SevenZipArchiveEntry, SevenZipVol
);
}
/// <summary>
/// Opens a SevenZipArchive asynchronously from a stream.
/// </summary>
/// <param name="stream"></param>
/// <param name="readerOptions"></param>
/// <param name="cancellationToken"></param>
public static async Task<IArchive> OpenAsync(
Stream stream,
ReaderOptions? readerOptions = null,
CancellationToken cancellationToken = default
)
{
cancellationToken.ThrowIfCancellationRequested();
return await Task.FromResult(Open(stream, readerOptions)).ConfigureAwait(false);
}
/// <summary>
/// Opens a SevenZipArchive asynchronously from a FileInfo.
/// </summary>
/// <param name="fileInfo"></param>
/// <param name="readerOptions"></param>
/// <param name="cancellationToken"></param>
public static async Task<IArchive> OpenAsync(
FileInfo fileInfo,
ReaderOptions? readerOptions = null,
CancellationToken cancellationToken = default
)
{
cancellationToken.ThrowIfCancellationRequested();
return await Task.FromResult(Open(fileInfo, readerOptions)).ConfigureAwait(false);
}
/// <summary>
/// Opens a SevenZipArchive asynchronously from multiple streams.
/// </summary>
/// <param name="streams"></param>
/// <param name="readerOptions"></param>
/// <param name="cancellationToken"></param>
public static async Task<IArchive> OpenAsync(
IReadOnlyList<Stream> streams,
ReaderOptions? readerOptions = null,
CancellationToken cancellationToken = default
)
{
cancellationToken.ThrowIfCancellationRequested();
return await Task.FromResult(Open(streams, readerOptions)).ConfigureAwait(false);
}
/// <summary>
/// Opens a SevenZipArchive asynchronously from multiple FileInfo objects.
/// </summary>
/// <param name="fileInfos"></param>
/// <param name="readerOptions"></param>
/// <param name="cancellationToken"></param>
public static async Task<IArchive> OpenAsync(
IReadOnlyList<FileInfo> fileInfos,
ReaderOptions? readerOptions = null,
CancellationToken cancellationToken = default
)
{
cancellationToken.ThrowIfCancellationRequested();
return await Task.FromResult(Open(fileInfos, readerOptions)).ConfigureAwait(false);
}
/// <summary>
/// Constructor with a SourceStream able to handle FileInfo and Streams.
/// </summary>
@@ -271,15 +205,15 @@ public class SevenZipArchive : AbstractArchive<SevenZipArchiveEntry, SevenZipVol
.GroupBy(x => x.FilePart.Folder)
.Any(folder => folder.Count() > 1);
public override bool IsEncrypted => Entries.First(x => !x.IsDirectory).IsEncrypted;
public override long TotalSize =>
_database?._packSizes.Aggregate(0L, (total, packSize) => total + packSize) ?? 0;
private sealed class SevenZipReader : AbstractReader<SevenZipEntry, SevenZipVolume>
{
private readonly SevenZipArchive _archive;
private SevenZipEntry? _currentEntry;
private CFolder? _currentFolder;
private Stream? _currentStream;
private CFileItem? _currentItem;
internal SevenZipReader(ReaderOptions readerOptions, SevenZipArchive archive)
: base(readerOptions, ArchiveType.SevenZip) => this._archive = archive;
@@ -292,135 +226,40 @@ public class SevenZipArchive : AbstractArchive<SevenZipArchiveEntry, SevenZipVol
stream.Position = 0;
foreach (var dir in entries.Where(x => x.IsDirectory))
{
_currentEntry = dir;
yield return dir;
}
foreach (
var group in entries.Where(x => !x.IsDirectory).GroupBy(x => x.FilePart.Folder)
)
{
_currentFolder = group.Key;
if (group.Key is null)
{
_currentStream = Stream.Null;
}
else
{
_currentStream = _archive._database?.GetFolderStream(
stream,
_currentFolder,
new PasswordProvider(Options.Password)
);
}
foreach (var entry in group)
{
_currentItem = entry.FilePart.Header;
yield return entry;
}
}
}
protected override EntryStream GetEntryStream() =>
CreateEntryStream(
new ReadOnlySubStream(
_currentStream.NotNull("currentStream is not null"),
_currentItem?.Size ?? 0
)
);
}
/// <summary>
/// WORKAROUND: Forces async operations to use synchronous equivalents.
/// This is necessary because the LZMA decoder has bugs in its async implementation
/// that cause state corruption (IndexOutOfRangeException, DataErrorException).
///
/// The proper fix would be to repair the LZMA decoder's async methods
/// (LzmaStream.ReadAsync, Decoder.CodeAsync, OutWindow async operations),
/// but that requires deep changes to the decoder state machine.
/// </summary>
private sealed class SyncOnlyStream : Stream
{
private readonly Stream _baseStream;
public SyncOnlyStream(Stream baseStream) => _baseStream = baseStream;
public override bool CanRead => _baseStream.CanRead;
public override bool CanSeek => _baseStream.CanSeek;
public override bool CanWrite => _baseStream.CanWrite;
public override long Length => _baseStream.Length;
public override long Position
{
get => _baseStream.Position;
set => _baseStream.Position = value;
}
public override void Flush() => _baseStream.Flush();
public override int Read(byte[] buffer, int offset, int count) =>
_baseStream.Read(buffer, offset, count);
public override long Seek(long offset, SeekOrigin origin) =>
_baseStream.Seek(offset, origin);
public override void SetLength(long value) => _baseStream.SetLength(value);
public override void Write(byte[] buffer, int offset, int count) =>
_baseStream.Write(buffer, offset, count);
// Force async operations to use sync equivalents to avoid LZMA decoder bugs
public override Task<int> ReadAsync(
byte[] buffer,
int offset,
int count,
CancellationToken cancellationToken
)
{
cancellationToken.ThrowIfCancellationRequested();
return Task.FromResult(_baseStream.Read(buffer, offset, count));
}
public override Task WriteAsync(
byte[] buffer,
int offset,
int count,
CancellationToken cancellationToken
)
{
cancellationToken.ThrowIfCancellationRequested();
_baseStream.Write(buffer, offset, count);
return Task.CompletedTask;
}
public override Task FlushAsync(CancellationToken cancellationToken)
{
cancellationToken.ThrowIfCancellationRequested();
_baseStream.Flush();
return Task.CompletedTask;
}
#if !NETFRAMEWORK && !NETSTANDARD2_0
public override ValueTask<int> ReadAsync(
Memory<byte> buffer,
CancellationToken cancellationToken = default
)
{
cancellationToken.ThrowIfCancellationRequested();
return new ValueTask<int>(_baseStream.Read(buffer.Span));
}
public override ValueTask WriteAsync(
ReadOnlyMemory<byte> buffer,
CancellationToken cancellationToken = default
)
{
cancellationToken.ThrowIfCancellationRequested();
_baseStream.Write(buffer.Span);
return ValueTask.CompletedTask;
}
#endif
protected override void Dispose(bool disposing)
{
if (disposing)
{
_baseStream.Dispose();
}
base.Dispose(disposing);
}
}
private class PasswordProvider : IPasswordProvider

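The SyncOnlyStream above exists solely to route the async surface back through synchronous reads; a minimal sketch of the effect at a call site (names and wiring assumed, since the wrapper is private to the archive class):

// Conceptual effect of the workaround above (sketch only).
// CopyToAsync on the wrapper funnels into the decoder's synchronous Read,
// sidestepping the LZMA decoder's buggy async state machine.
Stream decoder = entry.FilePart.GetCompressedStream(); // entry: a 7z entry (assumed)
Stream destination = File.Create("out.bin");           // illustrative target
using var safe = new SyncOnlyStream(decoder);
await safe.CopyToAsync(destination);                   // completes, but reads run synchronously
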
View File

@@ -103,70 +103,6 @@ public class TarArchive : AbstractWritableArchive<TarArchiveEntry, TarVolume>
);
}
/// <summary>
/// Opens a TarArchive asynchronously from a stream.
/// </summary>
/// <param name="stream"></param>
/// <param name="readerOptions"></param>
/// <param name="cancellationToken"></param>
public static async Task<IArchive> OpenAsync(
Stream stream,
ReaderOptions? readerOptions = null,
CancellationToken cancellationToken = default
)
{
cancellationToken.ThrowIfCancellationRequested();
return await Task.FromResult(Open(stream, readerOptions)).ConfigureAwait(false);
}
/// <summary>
/// Opens a TarArchive asynchronously from a FileInfo.
/// </summary>
/// <param name="fileInfo"></param>
/// <param name="readerOptions"></param>
/// <param name="cancellationToken"></param>
public static async Task<IArchive> OpenAsync(
FileInfo fileInfo,
ReaderOptions? readerOptions = null,
CancellationToken cancellationToken = default
)
{
cancellationToken.ThrowIfCancellationRequested();
return await Task.FromResult(Open(fileInfo, readerOptions)).ConfigureAwait(false);
}
/// <summary>
/// Opens a TarArchive asynchronously from multiple streams.
/// </summary>
/// <param name="streams"></param>
/// <param name="readerOptions"></param>
/// <param name="cancellationToken"></param>
public static async Task<IArchive> OpenAsync(
IReadOnlyList<Stream> streams,
ReaderOptions? readerOptions = null,
CancellationToken cancellationToken = default
)
{
cancellationToken.ThrowIfCancellationRequested();
return await Task.FromResult(Open(streams, readerOptions)).ConfigureAwait(false);
}
/// <summary>
/// Opens a TarArchive asynchronously from multiple FileInfo objects.
/// </summary>
/// <param name="fileInfos"></param>
/// <param name="readerOptions"></param>
/// <param name="cancellationToken"></param>
public static async Task<IArchive> OpenAsync(
IReadOnlyList<FileInfo> fileInfos,
ReaderOptions? readerOptions = null,
CancellationToken cancellationToken = default
)
{
cancellationToken.ThrowIfCancellationRequested();
return await Task.FromResult(Open(fileInfos, readerOptions)).ConfigureAwait(false);
}
public static bool IsTarFile(string filePath) => IsTarFile(new FileInfo(filePath));
public static bool IsTarFile(FileInfo fileInfo)

View File

@@ -124,70 +124,6 @@ public class ZipArchive : AbstractWritableArchive<ZipArchiveEntry, ZipVolume>
);
}
/// <summary>
/// Opens a ZipArchive asynchronously from a stream.
/// </summary>
/// <param name="stream"></param>
/// <param name="readerOptions"></param>
/// <param name="cancellationToken"></param>
public static async Task<IArchive> OpenAsync(
Stream stream,
ReaderOptions? readerOptions = null,
CancellationToken cancellationToken = default
)
{
cancellationToken.ThrowIfCancellationRequested();
return await Task.FromResult(Open(stream, readerOptions)).ConfigureAwait(false);
}
/// <summary>
/// Opens a ZipArchive asynchronously from a FileInfo.
/// </summary>
/// <param name="fileInfo"></param>
/// <param name="readerOptions"></param>
/// <param name="cancellationToken"></param>
public static async Task<IArchive> OpenAsync(
FileInfo fileInfo,
ReaderOptions? readerOptions = null,
CancellationToken cancellationToken = default
)
{
cancellationToken.ThrowIfCancellationRequested();
return await Task.FromResult(Open(fileInfo, readerOptions)).ConfigureAwait(false);
}
/// <summary>
/// Opens a ZipArchive asynchronously from multiple streams.
/// </summary>
/// <param name="streams"></param>
/// <param name="readerOptions"></param>
/// <param name="cancellationToken"></param>
public static async Task<IArchive> OpenAsync(
IReadOnlyList<Stream> streams,
ReaderOptions? readerOptions = null,
CancellationToken cancellationToken = default
)
{
cancellationToken.ThrowIfCancellationRequested();
return await Task.FromResult(Open(streams, readerOptions)).ConfigureAwait(false);
}
/// <summary>
/// Opens a ZipArchive asynchronously from multiple FileInfo objects.
/// </summary>
/// <param name="fileInfos"></param>
/// <param name="readerOptions"></param>
/// <param name="cancellationToken"></param>
public static async Task<IArchive> OpenAsync(
IReadOnlyList<FileInfo> fileInfos,
ReaderOptions? readerOptions = null,
CancellationToken cancellationToken = default
)
{
cancellationToken.ThrowIfCancellationRequested();
return await Task.FromResult(Open(fileInfos, readerOptions)).ConfigureAwait(false);
}
public static bool IsZipFile(
string filePath,
string? password = null,
@@ -263,93 +199,7 @@ public class ZipArchive : AbstractWritableArchive<ZipArchiveEntry, ZipVolume>
if (stream.CanSeek) //could be multipart. Test for central directory - might not be z64 safe
{
var z = new SeekableZipHeaderFactory(password, new ArchiveEncoding());
var x = z.ReadSeekableHeader(stream, useSync: true).FirstOrDefault();
return x?.ZipHeaderType == ZipHeaderType.DirectoryEntry;
}
else
{
return false;
}
}
return Enum.IsDefined(typeof(ZipHeaderType), header.ZipHeaderType);
}
catch (CryptographicException)
{
return true;
}
catch
{
return false;
}
}
public static async Task<bool> IsZipFileAsync(
Stream stream,
string? password = null,
int bufferSize = ReaderOptions.DefaultBufferSize,
CancellationToken cancellationToken = default
)
{
cancellationToken.ThrowIfCancellationRequested();
var headerFactory = new StreamingZipHeaderFactory(password, new ArchiveEncoding(), null);
try
{
if (stream is not SharpCompressStream)
{
stream = new SharpCompressStream(stream, bufferSize: bufferSize);
}
var header = headerFactory
.ReadStreamHeader(stream)
.FirstOrDefault(x => x.ZipHeaderType != ZipHeaderType.Split);
if (header is null)
{
return false;
}
return Enum.IsDefined(typeof(ZipHeaderType), header.ZipHeaderType);
}
catch (CryptographicException)
{
return true;
}
catch
{
return false;
}
}
public static async Task<bool> IsZipMultiAsync(
Stream stream,
string? password = null,
int bufferSize = ReaderOptions.DefaultBufferSize,
CancellationToken cancellationToken = default
)
{
cancellationToken.ThrowIfCancellationRequested();
var headerFactory = new StreamingZipHeaderFactory(password, new ArchiveEncoding(), null);
try
{
if (stream is not SharpCompressStream)
{
stream = new SharpCompressStream(stream, bufferSize: bufferSize);
}
var header = headerFactory
.ReadStreamHeader(stream)
.FirstOrDefault(x => x.ZipHeaderType != ZipHeaderType.Split);
if (header is null)
{
if (stream.CanSeek) //could be multipart. Test for central directory - might not be z64 safe
{
var z = new SeekableZipHeaderFactory(password, new ArchiveEncoding());
ZipHeader? x = null;
await foreach (
var h in z.ReadSeekableHeader(stream).WithCancellation(cancellationToken)
)
{
x = h;
break;
}
return x?.ZipHeaderType == ZipHeaderType.DirectoryEntry;
}
else
@@ -404,9 +254,7 @@ public class ZipArchive : AbstractWritableArchive<ZipArchiveEntry, ZipVolume>
protected override IEnumerable<ZipArchiveEntry> LoadEntries(IEnumerable<ZipVolume> volumes)
{
var vols = volumes.ToArray();
foreach (var h in headerFactory.NotNull().ReadSeekableHeader(vols.Last().Stream))
{
if (h != null)
{

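One behavior of the probing code above that is easy to misread: a CryptographicException is swallowed and mapped to true, on the reasoning that hitting crypto while parsing headers means the stream is a zip whose entries are encrypted, while any other failure means the stream is not recognizably a zip. A hedged usage sketch:

// Probe semantics of IsZipFile as implemented above (sketch):
//   true  -> headers parsed, or parsing failed only because entries are encrypted
//   false -> anything else went wrong
using var fs = File.OpenRead("maybe.zip"); // illustrative path
var looksLikeZip = ZipArchive.IsZipFile(fs);
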
View File

@@ -1,8 +1,7 @@
using System;
using System.Runtime.CompilerServices;
[assembly: CLSCompliant(true)]
[assembly: InternalsVisibleTo(
"SharpCompress.Test,PublicKey=0024000004800000940000000602000000240000525341310004000001000100158bebf1433f76dffc356733c138babea7a47536c65ed8009b16372c6f4edbb20554db74a62687f56b97c20a6ce8c4b123280279e33c894e7b3aa93ab3c573656fde4db576cfe07dba09619ead26375b25d2c4a8e43f7be257d712b0dd2eb546f67adb09281338618a58ac834fc038dd7e2740a7ab3591826252e4f4516306dc"
)]

View File

@@ -57,7 +57,7 @@ namespace SharpCompress.Common.Arc
return value switch
{
1 or 2 => CompressionType.None,
3 => CompressionType.RLE90,
4 => CompressionType.Squeezed,
5 or 6 or 7 or 8 => CompressionType.Crunched,
9 => CompressionType.Squashed,

View File

@@ -44,7 +44,7 @@ namespace SharpCompress.Common.Arc
Header.CompressedSize
);
break;
case CompressionType.RLE90:
compressedStream = new RunLength90Stream(
_stream,
(int)Header.CompressedSize
@@ -54,14 +54,6 @@ namespace SharpCompress.Common.Arc
compressedStream = new SqueezeStream(_stream, (int)Header.CompressedSize);
break;
case CompressionType.Crunched:
if (Header.OriginalSize > 128 * 1024)
{
throw new NotSupportedException(
"CompressionMethod: "
+ Header.CompressionMethod
+ " with size > 128KB"
);
}
compressedStream = new ArcLzwStream(
_stream,
(int)Header.CompressedSize,

View File

@@ -0,0 +1,10 @@
using System;
namespace SharpCompress.Common;
public class ArchiveExtractionEventArgs<T> : EventArgs
{
internal ArchiveExtractionEventArgs(T entry) => Item = entry;
public T Item { get; }
}

View File

@@ -8,5 +8,4 @@ public enum ArchiveType
SevenZip,
GZip,
Arc,
Arj,
}

View File

@@ -1,58 +0,0 @@
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using SharpCompress.Common.Arc;
using SharpCompress.Common.Arj.Headers;
namespace SharpCompress.Common.Arj
{
public class ArjEntry : Entry
{
private readonly ArjFilePart _filePart;
internal ArjEntry(ArjFilePart filePart)
{
_filePart = filePart;
}
public override long Crc => _filePart.Header.OriginalCrc32;
public override string? Key => _filePart?.Header.Name;
public override string? LinkTarget => null;
public override long CompressedSize => _filePart?.Header.CompressedSize ?? 0;
public override CompressionType CompressionType
{
get
{
if (_filePart.Header.CompressionMethod == CompressionMethod.Stored)
{
return CompressionType.None;
}
return CompressionType.ArjLZ77;
}
}
public override long Size => _filePart?.Header.OriginalSize ?? 0;
public override DateTime? LastModifiedTime => _filePart.Header.DateTimeModified.DateTime;
public override DateTime? CreatedTime => _filePart.Header.DateTimeCreated.DateTime;
public override DateTime? LastAccessedTime => _filePart.Header.DateTimeAccessed.DateTime;
public override DateTime? ArchivedTime => null;
public override bool IsEncrypted => false;
public override bool IsDirectory => _filePart.Header.FileType == FileType.Directory;
public override bool IsSplitAfter => false;
internal override IEnumerable<FilePart> Parts => _filePart.Empty();
}
}

View File

@@ -1,72 +0,0 @@
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using SharpCompress.Common.Arj.Headers;
using SharpCompress.Compressors.Arj;
using SharpCompress.IO;
namespace SharpCompress.Common.Arj
{
public class ArjFilePart : FilePart
{
private readonly Stream _stream;
internal ArjLocalHeader Header { get; set; }
internal ArjFilePart(ArjLocalHeader localArjHeader, Stream seekableStream)
: base(localArjHeader.ArchiveEncoding)
{
_stream = seekableStream;
Header = localArjHeader;
}
internal override string? FilePartName => Header.Name;
internal override Stream GetCompressedStream()
{
if (_stream != null)
{
Stream compressedStream;
switch (Header.CompressionMethod)
{
case CompressionMethod.Stored:
compressedStream = new ReadOnlySubStream(
_stream,
Header.DataStartPosition,
Header.CompressedSize
);
break;
case CompressionMethod.CompressedMost:
case CompressionMethod.Compressed:
case CompressionMethod.CompressedFaster:
if (Header.OriginalSize > 128 * 1024)
{
throw new NotSupportedException(
"CompressionMethod: "
+ Header.CompressionMethod
+ " with size > 128KB"
);
}
compressedStream = new LhaStream<Lh7DecoderCfg>(
_stream,
(int)Header.OriginalSize
);
break;
case CompressionMethod.CompressedFastest:
compressedStream = new LHDecoderStream(_stream, (int)Header.OriginalSize);
break;
default:
throw new NotSupportedException(
"CompressionMethod: " + Header.CompressionMethod
);
}
return compressedStream;
}
return _stream.NotNull();
}
internal override Stream GetRawStream() => _stream;
}
}

View File

@@ -1,36 +0,0 @@
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using SharpCompress.Common.Rar;
using SharpCompress.Common.Rar.Headers;
using SharpCompress.Readers;
namespace SharpCompress.Common.Arj
{
public class ArjVolume : Volume
{
public ArjVolume(Stream stream, ReaderOptions readerOptions, int index = 0)
: base(stream, readerOptions, index) { }
public override bool IsFirstVolume
{
get { return true; }
}
/// <summary>
/// ArjArchive is part of a multi-part archive.
/// </summary>
public override bool IsMultiVolume
{
get { return false; }
}
internal IEnumerable<ArjFilePart> GetVolumeFileParts()
{
return new List<ArjFilePart>();
}
}
}

View File

@@ -1,142 +0,0 @@
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using SharpCompress.Common.Zip.Headers;
using SharpCompress.Crypto;
namespace SharpCompress.Common.Arj.Headers
{
public enum ArjHeaderType
{
MainHeader,
LocalHeader,
}
public abstract class ArjHeader
{
private const int FIRST_HDR_SIZE = 34;
private const ushort ARJ_MAGIC = 0xEA60;
public ArjHeader(ArjHeaderType type)
{
ArjHeaderType = type;
}
public ArjHeaderType ArjHeaderType { get; }
public byte Flags { get; set; }
public FileType FileType { get; set; }
public abstract ArjHeader? Read(Stream reader);
public byte[] ReadHeader(Stream stream)
{
// check for magic bytes
Span<byte> magic = stackalloc byte[2];
if (stream.Read(magic) != 2)
{
return Array.Empty<byte>();
}
var magicValue = (ushort)(magic[0] | magic[1] << 8);
if (magicValue != ARJ_MAGIC)
{
throw new InvalidDataException("Not an ARJ file (wrong magic bytes)");
}
// read header_size
byte[] headerBytes = new byte[2];
stream.Read(headerBytes, 0, 2);
var headerSize = (ushort)(headerBytes[0] | headerBytes[1] << 8);
if (headerSize < 1)
{
return Array.Empty<byte>();
}
var body = new byte[headerSize];
var read = stream.Read(body, 0, headerSize);
if (read < headerSize)
{
return Array.Empty<byte>();
}
byte[] crc = new byte[4];
read = stream.Read(crc, 0, 4);
var checksum = Crc32Stream.Compute(body);
// Compute the hash value
if (checksum != BitConverter.ToUInt32(crc, 0))
{
throw new InvalidDataException("Header checksum is invalid");
}
return body;
}
protected List<byte[]> ReadExtendedHeaders(Stream reader)
{
List<byte[]> extendedHeader = new List<byte[]>();
byte[] buffer = new byte[2];
while (true)
{
int bytesRead = reader.Read(buffer, 0, 2);
if (bytesRead < 2)
{
throw new EndOfStreamException(
"Unexpected end of stream while reading extended header size."
);
}
var extHeaderSize = (ushort)(buffer[0] | (buffer[1] << 8));
if (extHeaderSize == 0)
{
return extendedHeader;
}
byte[] header = new byte[extHeaderSize];
bytesRead = reader.Read(header, 0, extHeaderSize);
if (bytesRead < extHeaderSize)
{
throw new EndOfStreamException(
"Unexpected end of stream while reading extended header data."
);
}
byte[] crc = new byte[4];
bytesRead = reader.Read(crc, 0, 4);
if (bytesRead < 4)
{
throw new EndOfStreamException(
"Unexpected end of stream while reading extended header CRC."
);
}
var checksum = Crc32Stream.Compute(header);
if (checksum != BitConverter.ToUInt32(crc, 0))
{
throw new InvalidDataException("Extended header checksum is invalid");
}
extendedHeader.Add(header);
}
}
// Flag helpers
public bool IsGarbled => (Flags & 0x01) != 0;
public bool IsAnsiPage => (Flags & 0x02) != 0;
public bool IsVolume => (Flags & 0x04) != 0;
public bool IsArjProtected => (Flags & 0x08) != 0;
public bool IsPathSym => (Flags & 0x10) != 0;
public bool IsBackup => (Flags & 0x20) != 0;
public bool IsSecured => (Flags & 0x40) != 0;
public bool IsAltName => (Flags & 0x80) != 0;
public static FileType FileTypeFromByte(byte value)
{
return Enum.IsDefined(typeof(FileType), value)
? (FileType)value
: Headers.FileType.Unknown;
}
}
}

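A quick decode of the flag bits above with a sample value of my own (not from the diff); the 0x01 bit is ARJ's "garble" flag, which marks password-protected entries:

// Sample value 0x05 = 0b0000_0101, decoded with the helpers above:
byte flags = 0x05;
var garbled = (flags & 0x01) != 0; // true  -> password-protected ("garbled")
var ansi    = (flags & 0x02) != 0; // false
var volume  = (flags & 0x04) != 0; // true  -> part of a multi-volume set
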
View File

@@ -1,161 +0,0 @@
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Runtime.CompilerServices;
using System.Text;
using System.Threading.Tasks;
namespace SharpCompress.Common.Arj.Headers
{
public class ArjLocalHeader : ArjHeader
{
public ArchiveEncoding ArchiveEncoding { get; }
public long DataStartPosition { get; protected set; }
public byte ArchiverVersionNumber { get; set; }
public byte MinVersionToExtract { get; set; }
public HostOS HostOS { get; set; }
public CompressionMethod CompressionMethod { get; set; }
public DosDateTime DateTimeModified { get; set; } = new DosDateTime(0);
public long CompressedSize { get; set; }
public long OriginalSize { get; set; }
public long OriginalCrc32 { get; set; }
public int FileSpecPosition { get; set; }
public int FileAccessMode { get; set; }
public byte FirstChapter { get; set; }
public byte LastChapter { get; set; }
public long ExtendedFilePosition { get; set; }
public DosDateTime DateTimeAccessed { get; set; } = new DosDateTime(0);
public DosDateTime DateTimeCreated { get; set; } = new DosDateTime(0);
public long OriginalSizeEvenForVolumes { get; set; }
public string Name { get; set; } = string.Empty;
public string Comment { get; set; } = string.Empty;
private const byte StdHdrSize = 30;
private const byte R9HdrSize = 46;
public ArjLocalHeader(ArchiveEncoding archiveEncoding)
: base(ArjHeaderType.LocalHeader)
{
ArchiveEncoding =
archiveEncoding ?? throw new ArgumentNullException(nameof(archiveEncoding));
}
public override ArjHeader? Read(Stream stream)
{
var body = ReadHeader(stream);
if (body.Length > 0)
{
ReadExtendedHeaders(stream);
var header = LoadFrom(body);
header.DataStartPosition = stream.Position;
return header;
}
return null;
}
public ArjLocalHeader LoadFrom(byte[] headerBytes)
{
int offset = 0;
int ReadInt16()
{
if (offset + 1 >= headerBytes.Length)
{
throw new EndOfStreamException();
}
var v = headerBytes[offset] & 0xFF | (headerBytes[offset + 1] & 0xFF) << 8;
offset += 2;
return v;
}
long ReadInt32()
{
if (offset + 3 >= headerBytes.Length)
{
throw new EndOfStreamException();
}
long v =
headerBytes[offset] & 0xFF
| (headerBytes[offset + 1] & 0xFF) << 8
| (headerBytes[offset + 2] & 0xFF) << 16
| (headerBytes[offset + 3] & 0xFF) << 24;
offset += 4;
return v;
}
byte headerSize = headerBytes[offset++];
ArchiverVersionNumber = headerBytes[offset++];
MinVersionToExtract = headerBytes[offset++];
HostOS hostOS = (HostOS)headerBytes[offset++];
Flags = headerBytes[offset++];
CompressionMethod = CompressionMethodFromByte(headerBytes[offset++]);
FileType = FileTypeFromByte(headerBytes[offset++]);
offset++; // Skip 1 byte
var rawTimestamp = ReadInt32();
DateTimeModified =
rawTimestamp != 0 ? new DosDateTime(rawTimestamp) : new DosDateTime(0);
CompressedSize = ReadInt32();
OriginalSize = ReadInt32();
OriginalCrc32 = ReadInt32();
FileSpecPosition = ReadInt16();
FileAccessMode = ReadInt16();
FirstChapter = headerBytes[offset++];
LastChapter = headerBytes[offset++];
ExtendedFilePosition = 0;
OriginalSizeEvenForVolumes = 0;
if (headerSize > StdHdrSize)
{
ExtendedFilePosition = ReadInt32();
if (headerSize >= R9HdrSize)
{
rawTimestamp = ReadInt32();
DateTimeAccessed =
rawTimestamp != 0 ? new DosDateTime(rawTimestamp) : new DosDateTime(0);
rawTimestamp = ReadInt32();
DateTimeCreated =
rawTimestamp != 0 ? new DosDateTime(rawTimestamp) : new DosDateTime(0);
OriginalSizeEvenForVolumes = ReadInt32();
}
}
Name = Encoding.ASCII.GetString(
headerBytes,
offset,
Array.IndexOf(headerBytes, (byte)0, offset) - offset
);
offset += Name.Length + 1;
Comment = Encoding.ASCII.GetString(
headerBytes,
offset,
Array.IndexOf(headerBytes, (byte)0, offset) - offset
);
offset += Comment.Length + 1;
return this;
}
public static CompressionMethod CompressionMethodFromByte(byte value)
{
return value switch
{
0 => CompressionMethod.Stored,
1 => CompressionMethod.CompressedMost,
2 => CompressionMethod.Compressed,
3 => CompressionMethod.CompressedFaster,
4 => CompressionMethod.CompressedFastest,
8 => CompressionMethod.NoDataNoCrc,
9 => CompressionMethod.NoData,
_ => CompressionMethod.Unknown,
};
}
}
}

View File

@@ -1,138 +0,0 @@
using System;
using System.IO;
using System.Text;
using SharpCompress.Compressors.Deflate;
using SharpCompress.Crypto;
namespace SharpCompress.Common.Arj.Headers
{
public class ArjMainHeader : ArjHeader
{
private const int FIRST_HDR_SIZE = 34;
private const ushort ARJ_MAGIC = 0xEA60;
public ArchiveEncoding ArchiveEncoding { get; }
public int ArchiverVersionNumber { get; private set; }
public int MinVersionToExtract { get; private set; }
public HostOS HostOs { get; private set; }
public int SecurityVersion { get; private set; }
public DosDateTime CreationDateTime { get; private set; } = new DosDateTime(0);
public long CompressedSize { get; private set; }
public long ArchiveSize { get; private set; }
public long SecurityEnvelope { get; private set; }
public int FileSpecPosition { get; private set; }
public int SecurityEnvelopeLength { get; private set; }
public int EncryptionVersion { get; private set; }
public int LastChapter { get; private set; }
public int ArjProtectionFactor { get; private set; }
public int Flags2 { get; private set; }
public string Name { get; private set; } = string.Empty;
public string Comment { get; private set; } = string.Empty;
public ArjMainHeader(ArchiveEncoding archiveEncoding)
: base(ArjHeaderType.MainHeader)
{
ArchiveEncoding =
archiveEncoding ?? throw new ArgumentNullException(nameof(archiveEncoding));
}
public override ArjHeader? Read(Stream stream)
{
var body = ReadHeader(stream);
ReadExtendedHeaders(stream);
return LoadFrom(body);
}
public ArjMainHeader LoadFrom(byte[] headerBytes)
{
var offset = 1;
byte ReadByte()
{
if (offset >= headerBytes.Length)
{
throw new EndOfStreamException();
}
return (byte)(headerBytes[offset++] & 0xFF);
}
int ReadInt16()
{
if (offset + 1 >= headerBytes.Length)
{
throw new EndOfStreamException();
}
var v = headerBytes[offset] & 0xFF | (headerBytes[offset + 1] & 0xFF) << 8;
offset += 2;
return v;
}
long ReadInt32()
{
if (offset + 3 >= headerBytes.Length)
{
throw new EndOfStreamException();
}
long v =
headerBytes[offset] & 0xFF
| (headerBytes[offset + 1] & 0xFF) << 8
| (headerBytes[offset + 2] & 0xFF) << 16
| (headerBytes[offset + 3] & 0xFF) << 24;
offset += 4;
return v;
}
string ReadNullTerminatedString(byte[] x, int startIndex)
{
var result = new StringBuilder();
int i = startIndex;
while (i < x.Length && x[i] != 0)
{
result.Append((char)x[i]);
i++;
}
// Skip the null terminator
i++;
if (i < x.Length)
{
byte[] remainder = new byte[x.Length - i];
Array.Copy(x, i, remainder, 0, remainder.Length);
x = remainder;
}
return result.ToString();
}
ArchiverVersionNumber = ReadByte();
MinVersionToExtract = ReadByte();
var hostOsByte = ReadByte();
HostOs = hostOsByte <= 11 ? (HostOS)hostOsByte : HostOS.Unknown;
Flags = ReadByte();
SecurityVersion = ReadByte();
FileType = FileTypeFromByte(ReadByte());
offset++; // skip reserved
CreationDateTime = new DosDateTime((int)ReadInt32());
CompressedSize = ReadInt32();
ArchiveSize = ReadInt32();
SecurityEnvelope = ReadInt32();
FileSpecPosition = ReadInt16();
SecurityEnvelopeLength = ReadInt16();
EncryptionVersion = ReadByte();
LastChapter = ReadByte();
Name = ReadNullTerminatedString(headerBytes, offset);
Comment = ReadNullTerminatedString(headerBytes, offset + 1 + Name.Length);
return this;
}
}
}

View File

@@ -1,20 +0,0 @@
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace SharpCompress.Common.Arj.Headers
{
public enum CompressionMethod
{
Stored = 0,
CompressedMost = 1,
Compressed = 2,
CompressedFaster = 3,
CompressedFastest = 4,
NoDataNoCrc = 8,
NoData = 9,
Unknown,
}
}

View File

@@ -1,37 +0,0 @@
using System;
namespace SharpCompress.Common.Arj.Headers
{
public class DosDateTime
{
public DateTime DateTime { get; }
public DosDateTime(long dosValue)
{
// Ensure only the lower 32 bits are used
int value = unchecked((int)(dosValue & 0xFFFFFFFF));
var date = (value >> 16) & 0xFFFF;
var time = value & 0xFFFF;
var day = date & 0x1F;
var month = (date >> 5) & 0x0F;
var year = ((date >> 9) & 0x7F) + 1980;
var second = (time & 0x1F) * 2;
var minute = (time >> 5) & 0x3F;
var hour = (time >> 11) & 0x1F;
try
{
DateTime = new DateTime(year, month, day, hour, minute, second);
}
catch
{
DateTime = DateTime.MinValue;
}
}
public override string ToString() => DateTime.ToString("yyyy-MM-dd HH:mm:ss");
}
}

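A worked decode of the bit layout above, using a sample value of my own (0x2AA6645C), which unpacks to 2001-05-06 12:34:56:

// Decoding 0x2AA6645C with the bit layout above (sample value, not from the diff):
int value = 0x2AA6645C;
var date = (value >> 16) & 0xFFFF; // 0x2AA6
var time = value & 0xFFFF;         // 0x645C
var day    = date & 0x1F;                 // 6
var month  = (date >> 5) & 0x0F;          // 5
var year   = ((date >> 9) & 0x7F) + 1980; // 2001
var second = (time & 0x1F) * 2;           // 56 (stored at two-second resolution)
var minute = (time >> 5) & 0x3F;          // 34
var hour   = (time >> 11) & 0x1F;         // 12
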
View File

@@ -1,13 +0,0 @@
namespace SharpCompress.Common.Arj.Headers
{
public enum FileType : byte
{
Binary = 0,
Text7Bit = 1,
CommentHeader = 2,
Directory = 3,
VolumeLabel = 4,
ChapterLabel = 5,
Unknown = 255,
}
}

View File

@@ -1,19 +0,0 @@
namespace SharpCompress.Common.Arj.Headers
{
public enum HostOS
{
MsDos = 0,
PrimOS = 1,
Unix = 2,
Amiga = 3,
MacOs = 4,
OS2 = 5,
AppleGS = 6,
AtariST = 7,
NeXT = 8,
VaxVMS = 9,
Win95 = 10,
Win32 = 11,
Unknown = 255,
}
}

View File

@@ -1,117 +0,0 @@
using System;
using System.Buffers.Binary;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
namespace SharpCompress.Common
{
public sealed class AsyncBinaryReader : IDisposable
{
private readonly Stream _stream;
private readonly Stream _originalStream;
private readonly bool _leaveOpen;
private readonly byte[] _buffer = new byte[8];
private bool _disposed;
public AsyncBinaryReader(Stream stream, bool leaveOpen = false, int bufferSize = 4096)
{
_originalStream = stream ?? throw new ArgumentNullException(nameof(stream));
_leaveOpen = leaveOpen;
// Use the stream directly without wrapping in BufferedStream
// BufferedStream uses synchronous Read internally which doesn't work with async-only streams
// SharpCompress uses SharpCompressStream for buffering which supports true async reads
_stream = stream;
}
public Stream BaseStream => _stream;
public async ValueTask<byte> ReadByteAsync(CancellationToken ct = default)
{
await ReadExactAsync(_buffer, 0, 1, ct).ConfigureAwait(false);
return _buffer[0];
}
public async ValueTask<ushort> ReadUInt16Async(CancellationToken ct = default)
{
await ReadExactAsync(_buffer, 0, 2, ct).ConfigureAwait(false);
return BinaryPrimitives.ReadUInt16LittleEndian(_buffer);
}
public async ValueTask<uint> ReadUInt32Async(CancellationToken ct = default)
{
await ReadExactAsync(_buffer, 0, 4, ct).ConfigureAwait(false);
return BinaryPrimitives.ReadUInt32LittleEndian(_buffer);
}
public async ValueTask<ulong> ReadUInt64Async(CancellationToken ct = default)
{
await ReadExactAsync(_buffer, 0, 8, ct).ConfigureAwait(false);
return BinaryPrimitives.ReadUInt64LittleEndian(_buffer);
}
public async ValueTask<byte[]> ReadBytesAsync(int count, CancellationToken ct = default)
{
var result = new byte[count];
await ReadExactAsync(result, 0, count, ct).ConfigureAwait(false);
return result;
}
private async ValueTask ReadExactAsync(
byte[] destination,
int offset,
int length,
CancellationToken ct
)
{
var read = 0;
while (read < length)
{
var n = await _stream
.ReadAsync(destination, offset + read, length - read, ct)
.ConfigureAwait(false);
if (n == 0)
{
throw new EndOfStreamException();
}
read += n;
}
}
public void Dispose()
{
if (_disposed)
{
return;
}
_disposed = true;
// Dispose the original stream if we own it
if (!_leaveOpen)
{
_originalStream.Dispose();
}
}
#if NET6_0_OR_GREATER
public async ValueTask DisposeAsync()
{
if (_disposed)
{
return;
}
_disposed = true;
// Dispose the original stream if we own it
if (!_leaveOpen)
{
await _originalStream.DisposeAsync().ConfigureAwait(false);
}
}
#endif
}
}

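A short usage sketch for the reader above, assuming a hypothetical little-endian record of a u16 length followed by that many payload bytes:

// Hypothetical record layout: [u16 length][payload bytes], little-endian.
using var reader = new AsyncBinaryReader(stream, leaveOpen: true); // stream: any readable Stream
var length = await reader.ReadUInt16Async(ct);   // ct: a CancellationToken
var payload = await reader.ReadBytesAsync(length, ct);
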
View File

@@ -0,0 +1,25 @@
using System;
namespace SharpCompress.Common;
public sealed class CompressedBytesReadEventArgs : EventArgs
{
public CompressedBytesReadEventArgs(
long compressedBytesRead,
long currentFilePartCompressedBytesRead
)
{
CompressedBytesRead = compressedBytesRead;
CurrentFilePartCompressedBytesRead = currentFilePartCompressedBytesRead;
}
/// <summary>
/// Compressed bytes read for the current entry
/// </summary>
public long CompressedBytesRead { get; }
/// <summary>
/// Current file part read for Multipart files (e.g. Rar)
/// </summary>
public long CurrentFilePartCompressedBytesRead { get; }
}

View File

@@ -23,11 +23,10 @@ public enum CompressionType
Reduce4,
Explode,
Squeezed,
RLE90,
Crunched,
Squashed,
Crushed,
Distilled,
ZStandard,
ArjLZ77,
}

View File

@@ -64,11 +64,6 @@ public class EntryStream : Stream, IStreamStack
protected override void Dispose(bool disposing)
{
if (_isDisposed)
{
return;
}
_isDisposed = true;
if (!(_completed || _reader.Cancelled))
{
SkipEntry();
@@ -86,6 +81,12 @@ public class EntryStream : Stream, IStreamStack
lzmaStream.Flush(); //Lzma over reads. Knock it back
}
}
if (_isDisposed)
{
return;
}
_isDisposed = true;
#if DEBUG_STREAMS
this.DebugDispose(typeof(EntryStream));
#endif
@@ -96,11 +97,6 @@ public class EntryStream : Stream, IStreamStack
#if !NETFRAMEWORK && !NETSTANDARD2_0
public override async ValueTask DisposeAsync()
{
if (_isDisposed)
{
return;
}
_isDisposed = true;
if (!(_completed || _reader.Cancelled))
{
await SkipEntryAsync().ConfigureAwait(false);
@@ -118,6 +114,12 @@ public class EntryStream : Stream, IStreamStack
await lzmaStream.FlushAsync().ConfigureAwait(false);
}
}
if (_isDisposed)
{
return;
}
_isDisposed = true;
#if DEBUG_STREAMS
this.DebugDispose(typeof(EntryStream));
#endif
@@ -202,11 +204,4 @@ public class EntryStream : Stream, IStreamStack
public override void Write(byte[] buffer, int offset, int count) =>
throw new NotSupportedException();
public override Task WriteAsync(
byte[] buffer,
int offset,
int count,
CancellationToken cancellationToken
) => throw new NotSupportedException();
}

View File

@@ -128,7 +128,7 @@ internal static class ExtractionMethods
IEntry entry,
string destinationDirectory,
ExtractionOptions? options,
Func<string, ExtractionOptions?, Task> writeAsync,
CancellationToken cancellationToken = default
)
{
@@ -189,7 +189,7 @@ internal static class ExtractionMethods
"Entry is trying to write a file outside of the destination directory."
);
}
await writeAsync(destinationFileName, options).ConfigureAwait(false);
}
else if (options.ExtractFullPath && !Directory.Exists(destinationFileName))
{
@@ -201,7 +201,7 @@ internal static class ExtractionMethods
IEntry entry,
string destinationFileName,
ExtractionOptions? options,
Func<string, FileMode, Task> openAndWriteAsync,
CancellationToken cancellationToken = default
)
{
@@ -225,8 +225,7 @@ internal static class ExtractionMethods
fm = FileMode.CreateNew;
}
await openAndWriteAsync(destinationFileName, fm).ConfigureAwait(false);
entry.PreserveExtractionOptions(destinationFileName, options);
}
}

View File

@@ -1,6 +1,4 @@
using System.IO;
using System.Threading;
using System.Threading.Tasks;
namespace SharpCompress.Common;
@@ -16,8 +14,4 @@ public abstract class FilePart
internal abstract Stream? GetCompressedStream();
internal abstract Stream? GetRawStream();
internal bool Skipped { get; set; }
internal virtual Task<Stream?> GetCompressedStreamAsync(
CancellationToken cancellationToken = default
) => Task.FromResult(GetCompressedStream());
}

View File

@@ -0,0 +1,28 @@
using System;
namespace SharpCompress.Common;
public sealed class FilePartExtractionBeginEventArgs : EventArgs
{
public FilePartExtractionBeginEventArgs(string name, long size, long compressedSize)
{
Name = name;
Size = size;
CompressedSize = compressedSize;
}
/// <summary>
/// File name for the part for the current entry
/// </summary>
public string Name { get; }
/// <summary>
/// Uncompressed size of the current entry in the part
/// </summary>
public long Size { get; }
/// <summary>
/// Compressed size of the current entry in the part
/// </summary>
public long CompressedSize { get; }
}

View File

@@ -0,0 +1,7 @@
namespace SharpCompress.Common;
public interface IExtractionListener
{
void FireFilePartExtractionBegin(string name, long size, long compressedSize);
void FireCompressedBytesRead(long currentPartCompressedBytes, long compressedReadBytes);
}

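A minimal sketch of a listener implementation wiring the two callbacks above into a percentage; the call sites that raise these are not in this hunk, so the wiring here is assumed:

using System;

// Hypothetical listener: FireFilePartExtractionBegin supplies the part's
// compressed size, and FireCompressedBytesRead reports progress against it.
internal sealed class ConsoleExtractionListener : IExtractionListener
{
    private long _partCompressedSize;

    public void FireFilePartExtractionBegin(string name, long size, long compressedSize)
    {
        _partCompressedSize = compressedSize;
        Console.WriteLine($"extracting {name} ({size} bytes uncompressed)");
    }

    public void FireCompressedBytesRead(long currentPartCompressedBytes, long compressedReadBytes)
    {
        if (_partCompressedSize > 0)
        {
            var pct = 100.0 * currentPartCompressedBytes / _partCompressedSize;
            Console.WriteLine($"{pct:F0}% of current part");
        }
    }
}
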
View File

@@ -1,43 +0,0 @@
namespace SharpCompress.Common;
/// <summary>
/// Represents progress information for compression or extraction operations.
/// </summary>
public sealed class ProgressReport
{
/// <summary>
/// Initializes a new instance of the <see cref="ProgressReport"/> class.
/// </summary>
/// <param name="entryPath">The path of the entry being processed.</param>
/// <param name="bytesTransferred">Number of bytes transferred so far.</param>
/// <param name="totalBytes">Total bytes to be transferred, or null if unknown.</param>
public ProgressReport(string entryPath, long bytesTransferred, long? totalBytes)
{
EntryPath = entryPath;
BytesTransferred = bytesTransferred;
TotalBytes = totalBytes;
}
/// <summary>
/// Gets the path of the entry being processed.
/// </summary>
public string EntryPath { get; }
/// <summary>
/// Gets the number of bytes transferred so far.
/// </summary>
public long BytesTransferred { get; }
/// <summary>
/// Gets the total number of bytes to be transferred, or null if unknown.
/// </summary>
public long? TotalBytes { get; }
/// <summary>
/// Gets the progress percentage (0-100), or null if total bytes is unknown.
/// </summary>
public double? PercentComplete =>
TotalBytes.HasValue && TotalBytes.Value > 0
? (double)BytesTransferred / TotalBytes.Value * 100
: null;
}

View File

@@ -0,0 +1,17 @@
using System;
using SharpCompress.Readers;
namespace SharpCompress.Common;
public sealed class ReaderExtractionEventArgs<T> : EventArgs
{
internal ReaderExtractionEventArgs(T entry, ReaderProgress? readerProgress = null)
{
Item = entry;
ReaderProgress = readerProgress;
}
public T Item { get; }
public ReaderProgress? ReaderProgress { get; }
}

View File

@@ -1,6 +1,5 @@
using System;
using System.Buffers.Binary;
using System.Collections.Generic;
using System.IO;
using System.Text;
@@ -10,16 +9,8 @@ internal sealed class TarHeader
{
internal static readonly DateTime EPOCH = new(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);
public TarHeader(
ArchiveEncoding archiveEncoding,
TarHeaderWriteFormat writeFormat = TarHeaderWriteFormat.GNU_TAR_LONG_LINK
)
{
ArchiveEncoding = archiveEncoding;
WriteFormat = writeFormat;
}
internal TarHeaderWriteFormat WriteFormat { get; set; }
internal string? Name { get; set; }
internal string? LinkName { get; set; }
@@ -34,119 +25,7 @@ internal sealed class TarHeader
internal const int BLOCK_SIZE = 512;
// Maximum size for long name/link headers to prevent memory exhaustion attacks
// This is generous enough for most real-world scenarios (32KB)
private const int MAX_LONG_NAME_SIZE = 32768;
internal void Write(Stream output)
{
switch (WriteFormat)
{
case TarHeaderWriteFormat.GNU_TAR_LONG_LINK:
WriteGnuTarLongLink(output);
break;
case TarHeaderWriteFormat.USTAR:
WriteUstar(output);
break;
default:
throw new Exception("This should be impossible...");
}
}
internal void WriteUstar(Stream output)
{
var buffer = new byte[BLOCK_SIZE];
WriteOctalBytes(511, buffer, 100, 8); // file mode
WriteOctalBytes(0, buffer, 108, 8); // owner ID
WriteOctalBytes(0, buffer, 116, 8); // group ID
//ArchiveEncoding.UTF8.GetBytes("magic").CopyTo(buffer, 257);
var nameByteCount = ArchiveEncoding
.GetEncoding()
.GetByteCount(Name.NotNull("Name is null"));
if (nameByteCount > 100)
{
// if name is longer, try to split it into name and namePrefix
string fullName = Name.NotNull("Name is null");
// find all directory separators
List<int> dirSeps = new List<int>();
for (int i = 0; i < fullName.Length; i++)
{
if (fullName[i] == Path.DirectorySeparatorChar)
{
dirSeps.Add(i);
}
}
// find the right place to split the name
int splitIndex = -1;
for (int i = 0; i < dirSeps.Count; i++)
{
int count = ArchiveEncoding
.GetEncoding()
.GetByteCount(fullName.Substring(0, dirSeps[i]));
if (count < 155)
{
splitIndex = dirSeps[i];
}
else
{
break;
}
}
if (splitIndex == -1)
{
throw new Exception(
$"Tar header USTAR format can not fit file name \"{fullName}\" of length {nameByteCount}! Directory separator not found! Try using GNU Tar format instead!"
);
}
string namePrefix = fullName.Substring(0, splitIndex);
string name = fullName.Substring(splitIndex + 1);
if (this.ArchiveEncoding.GetEncoding().GetByteCount(namePrefix) >= 155)
throw new Exception(
$"Tar header USTAR format can not fit file name \"{fullName}\" of length {nameByteCount}! Try using GNU Tar format instead!"
);
if (this.ArchiveEncoding.GetEncoding().GetByteCount(name) >= 100)
throw new Exception(
$"Tar header USTAR format can not fit file name \"{fullName}\" of length {nameByteCount}! Try using GNU Tar format instead!"
);
// write name prefix
WriteStringBytes(ArchiveEncoding.Encode(namePrefix), buffer, 345, 100);
// write partial name
WriteStringBytes(ArchiveEncoding.Encode(name), buffer, 100);
}
else
{
WriteStringBytes(ArchiveEncoding.Encode(Name.NotNull("Name is null")), buffer, 100);
}
WriteOctalBytes(Size, buffer, 124, 12);
var time = (long)(LastModifiedTime.ToUniversalTime() - EPOCH).TotalSeconds;
WriteOctalBytes(time, buffer, 136, 12);
buffer[156] = (byte)EntryType;
// write ustar magic field
WriteStringBytes(Encoding.ASCII.GetBytes("ustar"), buffer, 257, 6);
// write ustar version "00"
buffer[263] = 0x30;
buffer[264] = 0x30;
var crc = RecalculateChecksum(buffer);
WriteOctalBytes(crc, buffer, 148, 8);
output.Write(buffer, 0, buffer.Length);
}
internal void WriteGnuTarLongLink(Stream output)
{
var buffer = new byte[BLOCK_SIZE];
@@ -202,7 +81,7 @@ internal sealed class TarHeader
0,
100 - ArchiveEncoding.GetEncoding().GetMaxByteCount(1)
);
WriteGnuTarLongLink(output);
}
}
@@ -307,15 +186,6 @@ internal sealed class TarHeader
private string ReadLongName(BinaryReader reader, byte[] buffer)
{
var size = ReadSize(buffer);
// Validate size to prevent memory exhaustion from malformed headers
if (size < 0 || size > MAX_LONG_NAME_SIZE)
{
throw new InvalidFormatException(
$"Long name size {size} is invalid or exceeds maximum allowed size of {MAX_LONG_NAME_SIZE} bytes"
);
}
var nameLength = (int)size;
var nameBytes = reader.ReadBytes(nameLength);
var remainingBytesToRead = BLOCK_SIZE - (nameLength % BLOCK_SIZE);
@@ -358,18 +228,6 @@ internal sealed class TarHeader
buffer.Slice(i, length - i).Clear();
}
private static void WriteStringBytes(
ReadOnlySpan<byte> name,
Span<byte> buffer,
int offset,
int length
)
{
name.CopyTo(buffer.Slice(offset));
var i = Math.Min(length, name.Length);
buffer.Slice(offset + i, length - i).Clear();
}
private static void WriteStringBytes(string name, byte[] buffer, int offset, int length)
{
int i;

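The USTAR path above splits a long name at a directory separator so the prefix fits the 155-byte prefix field (offset 345) and the remainder fits the 100-byte name field; a compact sketch of that search, with the separator character and error type as assumptions:

using System;
using System.Text;

// Sketch of the prefix/name split performed by WriteUstar above.
static (string Prefix, string Name) SplitUstar(string fullName, Encoding enc)
{
    var split = -1;
    for (var i = 0; i < fullName.Length; i++)
    {
        // keep the last separator whose prefix still fits in 155 bytes
        if (fullName[i] == '/' && enc.GetByteCount(fullName.Substring(0, i)) < 155)
        {
            split = i;
        }
    }
    if (split < 0 || enc.GetByteCount(fullName.Substring(split + 1)) >= 100)
    {
        throw new NotSupportedException("Name does not fit USTAR name/prefix fields.");
    }
    return (fullName.Substring(0, split), fullName.Substring(split + 1));
}
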
View File

@@ -1,7 +0,0 @@
namespace SharpCompress.Common.Tar.Headers;
public enum TarHeaderWriteFormat
{
GNU_TAR_LONG_LINK,
USTAR,
}

View File

@@ -1,5 +1,4 @@
using System.IO;
using System.Threading.Tasks;
namespace SharpCompress.Common.Zip.Headers;
@@ -20,18 +19,6 @@ internal class DirectoryEndHeader : ZipHeader
Comment = reader.ReadBytes(CommentLength);
}
internal override async ValueTask Read(AsyncBinaryReader reader)
{
VolumeNumber = await reader.ReadUInt16Async();
FirstVolumeWithDirectory = await reader.ReadUInt16Async();
TotalNumberOfEntriesInDisk = await reader.ReadUInt16Async();
TotalNumberOfEntries = await reader.ReadUInt16Async();
DirectorySize = await reader.ReadUInt32Async();
DirectoryStartOffsetRelativeToDisk = await reader.ReadUInt32Async();
CommentLength = await reader.ReadUInt16Async();
Comment = await reader.ReadBytesAsync(CommentLength);
}
public ushort VolumeNumber { get; private set; }
public ushort FirstVolumeWithDirectory { get; private set; }

View File

@@ -1,6 +1,5 @@
using System.IO;
using System.Linq;
using System.Threading.Tasks;
namespace SharpCompress.Common.Zip.Headers;
@@ -32,37 +31,7 @@ internal class DirectoryEntryHeader : ZipFileEntry
var extra = reader.ReadBytes(extraLength);
var comment = reader.ReadBytes(commentLength);
ProcessReadData(name, extra, comment);
}
internal override async ValueTask Read(AsyncBinaryReader reader)
{
Version = await reader.ReadUInt16Async();
VersionNeededToExtract = await reader.ReadUInt16Async();
Flags = (HeaderFlags)await reader.ReadUInt16Async();
CompressionMethod = (ZipCompressionMethod)await reader.ReadUInt16Async();
OriginalLastModifiedTime = LastModifiedTime = await reader.ReadUInt16Async();
OriginalLastModifiedDate = LastModifiedDate = await reader.ReadUInt16Async();
Crc = await reader.ReadUInt32Async();
CompressedSize = await reader.ReadUInt32Async();
UncompressedSize = await reader.ReadUInt32Async();
var nameLength = await reader.ReadUInt16Async();
var extraLength = await reader.ReadUInt16Async();
var commentLength = await reader.ReadUInt16Async();
DiskNumberStart = await reader.ReadUInt16Async();
InternalFileAttributes = await reader.ReadUInt16Async();
ExternalFileAttributes = await reader.ReadUInt32Async();
RelativeOffsetOfEntryHeader = await reader.ReadUInt32Async();
var name = await reader.ReadBytesAsync(nameLength);
var extra = await reader.ReadBytesAsync(extraLength);
var comment = await reader.ReadBytesAsync(commentLength);
ProcessReadData(name, extra, comment);
}
private void ProcessReadData(byte[] name, byte[] extra, byte[] comment)
{
// According to .ZIP File Format Specification
//
// For example: https://pkware.cachefly.net/webdocs/casestudies/APPNOTE.TXT
//

View File

@@ -1,5 +1,4 @@
using System.IO;
using System.Threading.Tasks;
namespace SharpCompress.Common.Zip.Headers;
@@ -9,6 +8,4 @@ internal class IgnoreHeader : ZipHeader
: base(type) { }
internal override void Read(BinaryReader reader) { }
internal override ValueTask Read(AsyncBinaryReader reader) => default;
}

View File

@@ -1,12 +1,13 @@
using System.IO;
using System.Linq;
using System.Threading.Tasks;
namespace SharpCompress.Common.Zip.Headers;
internal class LocalEntryHeader(ArchiveEncoding archiveEncoding)
: ZipFileEntry(ZipHeaderType.LocalEntry, archiveEncoding)
internal class LocalEntryHeader : ZipFileEntry
{
public LocalEntryHeader(ArchiveEncoding archiveEncoding)
: base(ZipHeaderType.LocalEntry, archiveEncoding) { }
internal override void Read(BinaryReader reader)
{
Version = reader.ReadUInt16();
@@ -22,29 +23,7 @@ internal class LocalEntryHeader(ArchiveEncoding archiveEncoding)
var name = reader.ReadBytes(nameLength);
var extra = reader.ReadBytes(extraLength);
ProcessReadData(name, extra);
}
internal override async ValueTask Read(AsyncBinaryReader reader)
{
Version = await reader.ReadUInt16Async();
Flags = (HeaderFlags)await reader.ReadUInt16Async();
CompressionMethod = (ZipCompressionMethod)await reader.ReadUInt16Async();
OriginalLastModifiedTime = LastModifiedTime = await reader.ReadUInt16Async();
OriginalLastModifiedDate = LastModifiedDate = await reader.ReadUInt16Async();
Crc = await reader.ReadUInt32Async();
CompressedSize = await reader.ReadUInt32Async();
UncompressedSize = await reader.ReadUInt32Async();
var nameLength = await reader.ReadUInt16Async();
var extraLength = await reader.ReadUInt16Async();
var name = await reader.ReadBytesAsync(nameLength);
var extra = await reader.ReadBytesAsync(extraLength);
ProcessReadData(name, extra);
}
private void ProcessReadData(byte[] name, byte[] extra)
{
// According to .ZIP File Format Specification
//
// For example: https://pkware.cachefly.net/webdocs/casestudies/APPNOTE.TXT
//

View File

@@ -1,6 +1,5 @@
using System;
using System.IO;
using System.Threading.Tasks;
namespace SharpCompress.Common.Zip.Headers;
@@ -10,7 +9,4 @@ internal class SplitHeader : ZipHeader
: base(ZipHeaderType.Split) { }
internal override void Read(BinaryReader reader) => throw new NotImplementedException();
internal override ValueTask Read(AsyncBinaryReader reader) =>
throw new NotImplementedException();
}

View File

@@ -1,5 +1,4 @@
using System.IO;
using System.Threading.Tasks;
namespace SharpCompress.Common.Zip.Headers;
@@ -27,25 +26,6 @@ internal class Zip64DirectoryEndHeader : ZipHeader
);
}
internal override async ValueTask Read(AsyncBinaryReader reader)
{
SizeOfDirectoryEndRecord = (long)await reader.ReadUInt64Async();
VersionMadeBy = await reader.ReadUInt16Async();
VersionNeededToExtract = await reader.ReadUInt16Async();
VolumeNumber = await reader.ReadUInt32Async();
FirstVolumeWithDirectory = await reader.ReadUInt32Async();
TotalNumberOfEntriesInDisk = (long)await reader.ReadUInt64Async();
TotalNumberOfEntries = (long)await reader.ReadUInt64Async();
DirectorySize = (long)await reader.ReadUInt64Async();
DirectoryStartOffsetRelativeToDisk = (long)await reader.ReadUInt64Async();
DataSector = await reader.ReadBytesAsync(
(int)(
SizeOfDirectoryEndRecord
- SIZE_OF_FIXED_HEADER_DATA_EXCEPT_SIGNATURE_AND_SIZE_FIELDS
)
);
}
private const int SIZE_OF_FIXED_HEADER_DATA_EXCEPT_SIGNATURE_AND_SIZE_FIELDS = 44;
public long SizeOfDirectoryEndRecord { get; private set; }

View File

@@ -1,10 +1,12 @@
using System.IO;
using System.Threading.Tasks;
namespace SharpCompress.Common.Zip.Headers;
internal class Zip64DirectoryEndLocatorHeader() : ZipHeader(ZipHeaderType.Zip64DirectoryEndLocator)
internal class Zip64DirectoryEndLocatorHeader : ZipHeader
{
public Zip64DirectoryEndLocatorHeader()
: base(ZipHeaderType.Zip64DirectoryEndLocator) { }
internal override void Read(BinaryReader reader)
{
FirstVolumeWithDirectory = reader.ReadUInt32();
@@ -12,13 +14,6 @@ internal class Zip64DirectoryEndLocatorHeader() : ZipHeader(ZipHeaderType.Zip64D
TotalNumberOfVolumes = reader.ReadUInt32();
}
internal override async ValueTask Read(AsyncBinaryReader reader)
{
FirstVolumeWithDirectory = await reader.ReadUInt32Async();
RelativeOffsetOfTheEndOfDirectoryRecord = (long)await reader.ReadUInt64Async();
TotalNumberOfVolumes = await reader.ReadUInt32Async();
}
public uint FirstVolumeWithDirectory { get; private set; }
public long RelativeOffsetOfTheEndOfDirectoryRecord { get; private set; }

View File

@@ -2,14 +2,18 @@ using System;
using System.Buffers.Binary;
using System.Collections.Generic;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
namespace SharpCompress.Common.Zip.Headers;
internal abstract class ZipFileEntry(ZipHeaderType type, ArchiveEncoding archiveEncoding)
: ZipHeader(type)
internal abstract class ZipFileEntry : ZipHeader
{
protected ZipFileEntry(ZipHeaderType type, ArchiveEncoding archiveEncoding)
: base(type)
{
Extra = new List<ExtraData>();
ArchiveEncoding = archiveEncoding;
}
internal bool IsDirectory
{
get
@@ -26,7 +30,7 @@ internal abstract class ZipFileEntry(ZipHeaderType type, ArchiveEncoding archive
internal Stream? PackedStream { get; set; }
internal ArchiveEncoding ArchiveEncoding { get; } = archiveEncoding;
internal ArchiveEncoding ArchiveEncoding { get; }
internal string? Name { get; set; }
@@ -40,7 +44,7 @@ internal abstract class ZipFileEntry(ZipHeaderType type, ArchiveEncoding archive
internal long UncompressedSize { get; set; }
internal List<ExtraData> Extra { get; set; } = new();
internal List<ExtraData> Extra { get; set; }
public string? Password { get; set; }
@@ -59,24 +63,6 @@ internal abstract class ZipFileEntry(ZipHeaderType type, ArchiveEncoding archive
return encryptionData;
}
internal async Task<PkwareTraditionalEncryptionData> ComposeEncryptionDataAsync(
Stream archiveStream,
CancellationToken cancellationToken = default
)
{
if (archiveStream is null)
{
throw new ArgumentNullException(nameof(archiveStream));
}
var buffer = new byte[12];
await archiveStream.ReadFullyAsync(buffer, 0, 12, cancellationToken).ConfigureAwait(false);
var encryptionData = PkwareTraditionalEncryptionData.ForRead(Password!, this, buffer);
return encryptionData;
}
internal WinzipAesEncryptionData? WinzipAesEncryptionData { get; set; }
/// <summary>

View File

@@ -1,14 +1,18 @@
using System.IO;
using System.Threading.Tasks;
namespace SharpCompress.Common.Zip.Headers;
internal abstract class ZipHeader(ZipHeaderType type)
internal abstract class ZipHeader
{
internal ZipHeaderType ZipHeaderType { get; } = type;
protected ZipHeader(ZipHeaderType type)
{
ZipHeaderType = type;
HasData = true;
}
internal ZipHeaderType ZipHeaderType { get; }
internal abstract void Read(BinaryReader reader);
internal abstract ValueTask Read(AsyncBinaryReader reader);
internal bool HasData { get; set; } = true;
internal bool HasData { get; set; }
}

View File

@@ -1,7 +1,6 @@
using System;
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;
using SharpCompress.Common.Zip.Headers;
using SharpCompress.IO;
@@ -19,74 +18,7 @@ internal sealed class SeekableZipHeaderFactory : ZipHeaderFactory
internal SeekableZipHeaderFactory(string? password, ArchiveEncoding archiveEncoding)
: base(StreamingMode.Seekable, password, archiveEncoding) { }
internal async IAsyncEnumerable<ZipHeader> ReadSeekableHeader(Stream stream)
{
var reader = new AsyncBinaryReader(stream);
await SeekBackToHeader(stream, reader);
var eocd_location = stream.Position;
var entry = new DirectoryEndHeader();
await entry.Read(reader);
if (entry.IsZip64)
{
_zip64 = true;
// ZIP64_END_OF_CENTRAL_DIRECTORY_LOCATOR should be before the EOCD
stream.Seek(eocd_location - ZIP64_EOCD_LENGTH - 4, SeekOrigin.Begin);
uint zip64_locator = await reader.ReadUInt32Async();
if (zip64_locator != ZIP64_END_OF_CENTRAL_DIRECTORY_LOCATOR)
{
throw new ArchiveException("Failed to locate the Zip64 Directory Locator");
}
var zip64Locator = new Zip64DirectoryEndLocatorHeader();
await zip64Locator.Read(reader);
stream.Seek(zip64Locator.RelativeOffsetOfTheEndOfDirectoryRecord, SeekOrigin.Begin);
var zip64Signature = await reader.ReadUInt32Async();
if (zip64Signature != ZIP64_END_OF_CENTRAL_DIRECTORY)
{
throw new ArchiveException("Failed to locate the Zip64 Header");
}
var zip64Entry = new Zip64DirectoryEndHeader();
await zip64Entry.Read(reader);
stream.Seek(zip64Entry.DirectoryStartOffsetRelativeToDisk, SeekOrigin.Begin);
}
else
{
stream.Seek(entry.DirectoryStartOffsetRelativeToDisk, SeekOrigin.Begin);
}
var position = stream.Position;
while (true)
{
stream.Position = position;
var signature = await reader.ReadUInt32Async();
var nextHeader = await ReadHeader(signature, reader, _zip64);
position = stream.Position;
if (nextHeader is null)
{
yield break;
}
if (nextHeader is DirectoryEntryHeader entryHeader)
{
// entry could be zero bytes, so we need to record that here
entryHeader.HasData = entryHeader.CompressedSize != 0;
yield return entryHeader;
}
else if (nextHeader is DirectoryEndHeader endHeader)
{
yield return endHeader;
}
}
}
internal IEnumerable<ZipHeader> ReadSeekableHeader(Stream stream, bool useSync)
internal IEnumerable<ZipHeader> ReadSeekableHeader(Stream stream)
{
var reader = new BinaryReader(stream);
@@ -166,45 +98,6 @@ internal sealed class SeekableZipHeaderFactory : ZipHeaderFactory
return true;
}
private static async ValueTask SeekBackToHeader(Stream stream, AsyncBinaryReader reader)
{
// Minimum EOCD length
if (stream.Length < MINIMUM_EOCD_LENGTH)
{
throw new ArchiveException(
"Could not find Zip file Directory at the end of the file. File may be corrupted."
);
}
var len =
stream.Length < MAX_SEARCH_LENGTH_FOR_EOCD
? (int)stream.Length
: MAX_SEARCH_LENGTH_FOR_EOCD;
// We search for the marker in reverse to find the first occurrence from the end of the file
byte[] needle = { 0x06, 0x05, 0x4b, 0x50 };
stream.Seek(-len, SeekOrigin.End);
var seek = await reader.ReadBytesAsync(len);
// Search in reverse
Array.Reverse(seek);
// don't exclude the minimum eocd region, otherwise you fail to locate the header in empty zip files
var max_search_area = len; // - MINIMUM_EOCD_LENGTH;
for (var pos_from_end = 0; pos_from_end < max_search_area; ++pos_from_end)
{
if (IsMatch(seek, pos_from_end, needle))
{
stream.Seek(-pos_from_end, SeekOrigin.End);
return;
}
}
throw new ArchiveException("Failed to locate the Zip Header");
}
private static void SeekBackToHeader(Stream stream, BinaryReader reader)
{
// Minimum EOCD length

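For reference, the backwards EOCD scan that both the async and sync variants perform can be sketched standalone (a sketch, assuming a seekable stream and no Zip64 handling):
static long FindEocd(Stream stream)
{
    var len = (int)Math.Min(stream.Length, 65557); // 22-byte EOCD + 65535-byte max comment
    stream.Seek(-len, SeekOrigin.End);
    var buf = new byte[len];
    var read = 0;
    while (read < len)
    {
        var n = stream.Read(buf, read, len - read);
        if (n == 0)
        {
            throw new EndOfStreamException();
        }
        read += n;
    }
    for (var i = len - 4; i >= 0; i--) // scan backwards for PK\x05\x06
    {
        if (buf[i] == 0x50 && buf[i + 1] == 0x4b && buf[i + 2] == 0x05 && buf[i + 3] == 0x06)
        {
            return stream.Length - len + i;
        }
    }
    throw new InvalidDataException("Failed to locate the Zip Header");
}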
View File

@@ -1,6 +1,4 @@
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.Common.Zip.Headers;
using SharpCompress.Compressors.Deflate;
using SharpCompress.IO;
@@ -33,28 +31,6 @@ internal sealed class StreamingZipFilePart : ZipFilePart
return _decompressionStream;
}
internal override async Task<Stream?> GetCompressedStreamAsync(
CancellationToken cancellationToken = default
)
{
if (!Header.HasData)
{
return Stream.Null;
}
_decompressionStream = await CreateDecompressionStreamAsync(
await GetCryptoStreamAsync(CreateBaseStream(), cancellationToken)
.ConfigureAwait(false),
Header.CompressionMethod,
cancellationToken
)
.ConfigureAwait(false);
if (LeaveStreamOpen)
{
return SharpCompressStream.Create(_decompressionStream, leaveOpen: true);
}
return _decompressionStream;
}
internal BinaryReader FixStreamedFileLocation(ref SharpCompressStream rewindableStream)
{
if (Header.IsDirectory)

View File

@@ -1,7 +1,6 @@
using System;
using System.Buffers.Binary;
using System.Security.Cryptography;
using System.Text;
namespace SharpCompress.Common.Zip;
@@ -20,24 +19,8 @@ internal class WinzipAesEncryptionData
{
_keySize = keySize;
#if NETFRAMEWORK
#if NETFRAMEWORK || NETSTANDARD2_0
var rfc2898 = new Rfc2898DeriveBytes(password, salt, RFC2898_ITERATIONS);
KeyBytes = rfc2898.GetBytes(KeySizeInBytes);
IvBytes = rfc2898.GetBytes(KeySizeInBytes);
var generatedVerifyValue = rfc2898.GetBytes(2);
#elif NET10_0_OR_GREATER
var derivedKeySize = (KeySizeInBytes * 2) + 2;
var passwordBytes = Encoding.UTF8.GetBytes(password);
var derivedKey = Rfc2898DeriveBytes.Pbkdf2(
passwordBytes,
salt,
RFC2898_ITERATIONS,
HashAlgorithmName.SHA1,
derivedKeySize
);
KeyBytes = derivedKey.AsSpan(0, KeySizeInBytes).ToArray();
IvBytes = derivedKey.AsSpan(KeySizeInBytes, KeySizeInBytes).ToArray();
var generatedVerifyValue = derivedKey.AsSpan((KeySizeInBytes * 2), 2).ToArray();
#else
var rfc2898 = new Rfc2898DeriveBytes(
password,
@@ -45,10 +28,11 @@ internal class WinzipAesEncryptionData
RFC2898_ITERATIONS,
HashAlgorithmName.SHA1
);
KeyBytes = rfc2898.GetBytes(KeySizeInBytes);
#endif
KeyBytes = rfc2898.GetBytes(KeySizeInBytes); // 16, 24, or 32 bytes depending on the AES key size
IvBytes = rfc2898.GetBytes(KeySizeInBytes);
var generatedVerifyValue = rfc2898.GetBytes(2);
#endif
var verify = BinaryPrimitives.ReadInt16LittleEndian(passwordVerifyValue);
var generated = BinaryPrimitives.ReadInt16LittleEndian(generatedVerifyValue);

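A minimal sketch of the derivation the NET10_0_OR_GREATER branch performs, assuming AES-256 (KeySizeInBytes = 32) and WinZip's 1000 PBKDF2-SHA1 iterations (GetSaltFromEntry is hypothetical):
byte[] salt = GetSaltFromEntry(); // hypothetical: read from the entry's AES extra data
byte[] derived = Rfc2898DeriveBytes.Pbkdf2(
    Encoding.UTF8.GetBytes("password"),
    salt,
    1000,
    HashAlgorithmName.SHA1,
    (32 * 2) + 2
);
byte[] key = derived[..32];       // AES key
byte[] hmacKey = derived[32..64]; // stored as IvBytes above
byte[] verify = derived[64..];    // 2-byte password verification value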
View File

@@ -2,8 +2,6 @@ using System;
using System.Buffers.Binary;
using System.IO;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.Common.Zip.Headers;
using SharpCompress.Compressors;
using SharpCompress.Compressors.BZip2;
@@ -15,8 +13,8 @@ using SharpCompress.Compressors.PPMd;
using SharpCompress.Compressors.Reduce;
using SharpCompress.Compressors.Shrink;
using SharpCompress.Compressors.Xz;
using SharpCompress.Compressors.ZStandard;
using SharpCompress.IO;
using ZstdSharp;
namespace SharpCompress.Common.Zip;
@@ -266,220 +264,4 @@ internal abstract class ZipFilePart : FilePart
}
return plainStream;
}
internal override async Task<Stream?> GetCompressedStreamAsync(
CancellationToken cancellationToken = default
)
{
if (!Header.HasData)
{
return Stream.Null;
}
var decompressionStream = await CreateDecompressionStreamAsync(
await GetCryptoStreamAsync(CreateBaseStream(), cancellationToken)
.ConfigureAwait(false),
Header.CompressionMethod,
cancellationToken
)
.ConfigureAwait(false);
if (LeaveStreamOpen)
{
return SharpCompressStream.Create(decompressionStream, leaveOpen: true);
}
return decompressionStream;
}
protected async Task<Stream> GetCryptoStreamAsync(
Stream plainStream,
CancellationToken cancellationToken = default
)
{
var isFileEncrypted = FlagUtility.HasFlag(Header.Flags, HeaderFlags.Encrypted);
if (Header.CompressedSize == 0 && isFileEncrypted)
{
throw new NotSupportedException("Cannot encrypt file with unknown size at start.");
}
if (
(
Header.CompressedSize == 0
&& FlagUtility.HasFlag(Header.Flags, HeaderFlags.UsePostDataDescriptor)
) || Header.IsZip64
)
{
plainStream = SharpCompressStream.Create(plainStream, leaveOpen: true); //make sure AES doesn't close
}
else
{
plainStream = new ReadOnlySubStream(plainStream, Header.CompressedSize); //make sure AES doesn't close
}
if (isFileEncrypted)
{
switch (Header.CompressionMethod)
{
case ZipCompressionMethod.None:
case ZipCompressionMethod.Shrink:
case ZipCompressionMethod.Reduce1:
case ZipCompressionMethod.Reduce2:
case ZipCompressionMethod.Reduce3:
case ZipCompressionMethod.Reduce4:
case ZipCompressionMethod.Deflate:
case ZipCompressionMethod.Deflate64:
case ZipCompressionMethod.BZip2:
case ZipCompressionMethod.LZMA:
case ZipCompressionMethod.PPMd:
{
return new PkwareTraditionalCryptoStream(
plainStream,
await Header
.ComposeEncryptionDataAsync(plainStream, cancellationToken)
.ConfigureAwait(false),
CryptoMode.Decrypt
);
}
case ZipCompressionMethod.WinzipAes:
{
if (Header.WinzipAesEncryptionData != null)
{
return new WinzipAesCryptoStream(
plainStream,
Header.WinzipAesEncryptionData,
Header.CompressedSize - 10
);
}
return plainStream;
}
default:
{
throw new InvalidOperationException("Header.CompressionMethod is invalid");
}
}
}
return plainStream;
}
protected async Task<Stream> CreateDecompressionStreamAsync(
Stream stream,
ZipCompressionMethod method,
CancellationToken cancellationToken = default
)
{
switch (method)
{
case ZipCompressionMethod.None:
{
if (Header.CompressedSize is 0)
{
return new DataDescriptorStream(stream);
}
return stream;
}
case ZipCompressionMethod.Shrink:
{
return new ShrinkStream(
stream,
CompressionMode.Decompress,
Header.CompressedSize,
Header.UncompressedSize
);
}
case ZipCompressionMethod.Reduce1:
{
return new ReduceStream(stream, Header.CompressedSize, Header.UncompressedSize, 1);
}
case ZipCompressionMethod.Reduce2:
{
return new ReduceStream(stream, Header.CompressedSize, Header.UncompressedSize, 2);
}
case ZipCompressionMethod.Reduce3:
{
return new ReduceStream(stream, Header.CompressedSize, Header.UncompressedSize, 3);
}
case ZipCompressionMethod.Reduce4:
{
return new ReduceStream(stream, Header.CompressedSize, Header.UncompressedSize, 4);
}
case ZipCompressionMethod.Explode:
{
return new ExplodeStream(
stream,
Header.CompressedSize,
Header.UncompressedSize,
Header.Flags
);
}
case ZipCompressionMethod.Deflate:
{
return new DeflateStream(stream, CompressionMode.Decompress);
}
case ZipCompressionMethod.Deflate64:
{
return new Deflate64Stream(stream, CompressionMode.Decompress);
}
case ZipCompressionMethod.BZip2:
{
return new BZip2Stream(stream, CompressionMode.Decompress, false);
}
case ZipCompressionMethod.LZMA:
{
if (FlagUtility.HasFlag(Header.Flags, HeaderFlags.Encrypted))
{
throw new NotSupportedException("LZMA with pkware encryption.");
}
var buffer = new byte[4];
await stream.ReadFullyAsync(buffer, 0, 4, cancellationToken).ConfigureAwait(false);
var version = BinaryPrimitives.ReadUInt16LittleEndian(buffer.AsSpan(0, 2));
var propsSize = BinaryPrimitives.ReadUInt16LittleEndian(buffer.AsSpan(2, 2));
var props = new byte[propsSize];
await stream
.ReadFullyAsync(props, 0, propsSize, cancellationToken)
.ConfigureAwait(false);
return new LzmaStream(
props,
stream,
Header.CompressedSize > 0 ? Header.CompressedSize - 4 - props.Length : -1,
FlagUtility.HasFlag(Header.Flags, HeaderFlags.Bit1)
? -1
: Header.UncompressedSize
);
}
case ZipCompressionMethod.Xz:
{
return new XZStream(stream);
}
case ZipCompressionMethod.ZStandard:
{
return new DecompressionStream(stream);
}
case ZipCompressionMethod.PPMd:
{
var props = new byte[2];
await stream.ReadFullyAsync(props, 0, 2, cancellationToken).ConfigureAwait(false);
return new PpmdStream(new PpmdProperties(props), stream, false);
}
case ZipCompressionMethod.WinzipAes:
{
var data = Header.Extra.SingleOrDefault(x => x.Type == ExtraDataType.WinZipAes);
if (data is null)
{
throw new InvalidFormatException("No Winzip AES extra data found.");
}
if (data.Length != 7)
{
throw new InvalidFormatException("Winzip data length is not 7.");
}
throw new NotSupportedException("WinzipAes isn't supported for streaming");
}
default:
{
throw new NotSupportedException("CompressionMethod: " + Header.CompressionMethod);
}
}
}
}

View File

@@ -1,7 +1,6 @@
using System;
using System.IO;
using System.Linq;
using System.Threading.Tasks;
using SharpCompress.Common.Zip.Headers;
using SharpCompress.IO;
@@ -35,82 +34,6 @@ internal class ZipHeaderFactory
_archiveEncoding = archiveEncoding;
}
protected async ValueTask<ZipHeader?> ReadHeader(
uint headerBytes,
AsyncBinaryReader reader,
bool zip64 = false
)
{
switch (headerBytes)
{
case ENTRY_HEADER_BYTES:
{
var entryHeader = new LocalEntryHeader(_archiveEncoding);
await entryHeader.Read(reader);
LoadHeader(entryHeader, reader.BaseStream);
_lastEntryHeader = entryHeader;
return entryHeader;
}
case DIRECTORY_START_HEADER_BYTES:
{
var entry = new DirectoryEntryHeader(_archiveEncoding);
await entry.Read(reader);
return entry;
}
case POST_DATA_DESCRIPTOR:
{
if (
_lastEntryHeader != null
&& FlagUtility.HasFlag(
_lastEntryHeader.NotNull().Flags,
HeaderFlags.UsePostDataDescriptor
)
)
{
_lastEntryHeader.Crc = await reader.ReadUInt32Async();
_lastEntryHeader.CompressedSize = zip64
? (long)await reader.ReadUInt64Async()
: await reader.ReadUInt32Async();
_lastEntryHeader.UncompressedSize = zip64
? (long)await reader.ReadUInt64Async()
: await reader.ReadUInt32Async();
}
else
{
await reader.ReadBytesAsync(zip64 ? 20 : 12);
}
return null;
}
case DIGITAL_SIGNATURE:
return null;
case DIRECTORY_END_HEADER_BYTES:
{
var entry = new DirectoryEndHeader();
await entry.Read(reader);
return entry;
}
case SPLIT_ARCHIVE_HEADER_BYTES:
{
return new SplitHeader();
}
case ZIP64_END_OF_CENTRAL_DIRECTORY:
{
var entry = new Zip64DirectoryEndHeader();
await entry.Read(reader);
return entry;
}
case ZIP64_END_OF_CENTRAL_DIRECTORY_LOCATOR:
{
var entry = new Zip64DirectoryEndLocatorHeader();
await entry.Read(reader);
return entry;
}
default:
return null;
}
}
protected ZipHeader? ReadHeader(uint headerBytes, BinaryReader reader, bool zip64 = false)
{
switch (headerBytes)

View File

@@ -24,29 +24,10 @@
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
using System;
using System.Buffers;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
namespace SharpCompress.Compressors.ADC;
/// <summary>
/// Result of an ADC decompression operation
/// </summary>
public class AdcDecompressResult
{
/// <summary>
/// Number of bytes read from input
/// </summary>
public int BytesRead { get; set; }
/// <summary>
/// Decompressed output buffer
/// </summary>
public byte[]? Output { get; set; }
}
/// <summary>
/// Provides static methods for decompressing Apple Data Compression data
/// </summary>
@@ -97,173 +78,6 @@ public static class ADCBase
public static int Decompress(byte[] input, out byte[]? output, int bufferSize = 262144) =>
Decompress(new MemoryStream(input), out output, bufferSize);
/// <summary>
/// Decompresses a byte buffer asynchronously that's compressed with ADC
/// </summary>
/// <param name="input">Compressed buffer</param>
/// <param name="bufferSize">Max size for decompressed data</param>
/// <param name="cancellationToken">Cancellation token</param>
/// <returns>Result containing bytes read and decompressed data</returns>
public static async Task<AdcDecompressResult> DecompressAsync(
byte[] input,
int bufferSize = 262144,
CancellationToken cancellationToken = default
) => await DecompressAsync(new MemoryStream(input), bufferSize, cancellationToken);
/// <summary>
/// Decompresses a stream asynchronously that's compressed with ADC
/// </summary>
/// <param name="input">Stream containing compressed data</param>
/// <param name="bufferSize">Max size for decompressed data</param>
/// <param name="cancellationToken">Cancellation token</param>
/// <returns>Result containing bytes read and decompressed data</returns>
public static async Task<AdcDecompressResult> DecompressAsync(
Stream input,
int bufferSize = 262144,
CancellationToken cancellationToken = default
)
{
var result = new AdcDecompressResult();
if (input is null || input.Length == 0)
{
result.BytesRead = 0;
result.Output = null;
return result;
}
var start = (int)input.Position;
var position = (int)input.Position;
int chunkSize;
int offset;
int chunkType;
var buffer = ArrayPool<byte>.Shared.Rent(bufferSize);
var outPosition = 0;
var full = false;
byte[] temp = ArrayPool<byte>.Shared.Rent(3);
try
{
while (position < input.Length)
{
cancellationToken.ThrowIfCancellationRequested();
var readByte = input.ReadByte();
if (readByte == -1)
{
break;
}
chunkType = GetChunkType((byte)readByte);
switch (chunkType)
{
case PLAIN:
chunkSize = GetChunkSize((byte)readByte);
if (outPosition + chunkSize > bufferSize)
{
full = true;
break;
}
var readCount = await input.ReadAsync(
buffer,
outPosition,
chunkSize,
cancellationToken
);
outPosition += readCount;
position += readCount + 1;
break;
case TWO_BYTE:
chunkSize = GetChunkSize((byte)readByte);
temp[0] = (byte)readByte;
temp[1] = (byte)input.ReadByte();
offset = GetOffset(temp.AsSpan(0, 2));
if (outPosition + chunkSize > bufferSize)
{
full = true;
break;
}
if (offset == 0)
{
var lastByte = buffer[outPosition - 1];
for (var i = 0; i < chunkSize; i++)
{
buffer[outPosition] = lastByte;
outPosition++;
}
position += 2;
}
else
{
for (var i = 0; i < chunkSize; i++)
{
buffer[outPosition] = buffer[outPosition - offset - 1];
outPosition++;
}
position += 2;
}
break;
case THREE_BYTE:
chunkSize = GetChunkSize((byte)readByte);
temp[0] = (byte)readByte;
temp[1] = (byte)input.ReadByte();
temp[2] = (byte)input.ReadByte();
offset = GetOffset(temp.AsSpan(0, 3));
if (outPosition + chunkSize > bufferSize)
{
full = true;
break;
}
if (offset == 0)
{
var lastByte = buffer[outPosition - 1];
for (var i = 0; i < chunkSize; i++)
{
buffer[outPosition] = lastByte;
outPosition++;
}
position += 3;
}
else
{
for (var i = 0; i < chunkSize; i++)
{
buffer[outPosition] = buffer[outPosition - offset - 1];
outPosition++;
}
position += 3;
}
break;
}
if (full)
{
break;
}
}
var output = new byte[outPosition];
Array.Copy(buffer, output, outPosition);
result.BytesRead = position - start;
result.Output = output;
return result;
}
finally
{
ArrayPool<byte>.Shared.Return(buffer);
ArrayPool<byte>.Shared.Return(temp);
}
}
/// <summary>
/// Decompresses a stream that's compressed with ADC
/// </summary>

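With the async variants removed, callers use the remaining synchronous entry points; a minimal usage sketch, assuming adcBytes holds ADC-compressed data:
var bytesRead = ADCBase.Decompress(adcBytes, out byte[]? output);
// bytesRead: input bytes consumed; output: decompressed bytes (null for empty input)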
View File

@@ -28,8 +28,6 @@
using System;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.IO;
namespace SharpCompress.Compressors.ADC;
@@ -189,76 +187,6 @@ public sealed class ADCStream : Stream, IStreamStack
return copied;
}
public override async Task<int> ReadAsync(
byte[] buffer,
int offset,
int count,
CancellationToken cancellationToken = default
)
{
if (count == 0)
{
return 0;
}
if (buffer is null)
{
throw new ArgumentNullException(nameof(buffer));
}
if (count < 0)
{
throw new ArgumentOutOfRangeException(nameof(count));
}
if (offset < buffer.GetLowerBound(0))
{
throw new ArgumentOutOfRangeException(nameof(offset));
}
if ((offset + count) > buffer.GetLength(0))
{
throw new ArgumentOutOfRangeException(nameof(count));
}
if (_outBuffer is null)
{
var result = await ADCBase.DecompressAsync(
_stream,
cancellationToken: cancellationToken
);
_outBuffer = result.Output;
_outPosition = 0;
}
var inPosition = offset;
var toCopy = count;
var copied = 0;
while (_outPosition + toCopy >= _outBuffer.Length)
{
cancellationToken.ThrowIfCancellationRequested();
var piece = _outBuffer.Length - _outPosition;
Array.Copy(_outBuffer, _outPosition, buffer, inPosition, piece);
inPosition += piece;
copied += piece;
_position += piece;
toCopy -= piece;
var result = await ADCBase.DecompressAsync(
_stream,
cancellationToken: cancellationToken
);
_outBuffer = result.Output;
_outPosition = 0;
if (result.BytesRead == 0 || _outBuffer is null || _outBuffer.Length == 0)
{
return copied;
}
}
Array.Copy(_outBuffer, _outPosition, buffer, inPosition, toCopy);
_outPosition += toCopy;
_position += toCopy;
copied += toCopy;
return copied;
}
public override long Seek(long offset, SeekOrigin origin) => throw new NotSupportedException();
public override void SetLength(long value) => throw new NotSupportedException();

View File

@@ -1,72 +0,0 @@
using System;
using System.IO;
namespace SharpCompress.Compressors.Arj
{
[CLSCompliant(true)]
public class BitReader
{
private readonly Stream _input;
private int _bitBuffer; // currently buffered bits
private int _bitCount; // number of bits in buffer
public BitReader(Stream input)
{
_input = input ?? throw new ArgumentNullException(nameof(input));
_bitBuffer = 0;
_bitCount = 0;
}
/// <summary>
/// Reads a single bit from the stream. Returns 0 or 1.
/// </summary>
public int ReadBit()
{
if (_bitCount == 0)
{
int nextByte = _input.ReadByte();
if (nextByte < 0)
{
throw new EndOfStreamException("No more data available in BitReader.");
}
_bitBuffer = nextByte;
_bitCount = 8;
}
int bit = (_bitBuffer >> (_bitCount - 1)) & 1;
_bitCount--;
return bit;
}
/// <summary>
/// Reads n bits (up to 32) from the stream.
/// </summary>
public int ReadBits(int count)
{
if (count < 0 || count > 32)
{
throw new ArgumentOutOfRangeException(
nameof(count),
"Count must be between 0 and 32."
);
}
int result = 0;
for (int i = 0; i < count; i++)
{
result = (result << 1) | ReadBit();
}
return result;
}
/// <summary>
/// Resets any buffered bits.
/// </summary>
public void AlignToByte()
{
_bitCount = 0;
_bitBuffer = 0;
}
}
}

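A minimal sketch of the MSB-first bit order BitReader implements (stream contents hypothetical):
using var ms = new MemoryStream(new byte[] { 0b1010_0000 });
var bits = new BitReader(ms);
Console.WriteLine(bits.ReadBit());   // 1 (most significant bit first)
Console.WriteLine(bits.ReadBits(3)); // 2 (the next three bits, 0b010)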
View File

@@ -1,43 +0,0 @@
using System;
using System.Collections;
using System.Collections.Generic;
namespace SharpCompress.Compressors.Arj
{
/// <summary>
/// Iterator that reads & pushes values back into the ring buffer.
/// </summary>
public class HistoryIterator : IEnumerator<byte>
{
private int _index;
private readonly IRingBuffer _ring;
public HistoryIterator(IRingBuffer ring, int startIndex)
{
_ring = ring;
_index = startIndex;
}
public bool MoveNext()
{
Current = _ring[_index];
_index = unchecked(_index + 1);
// Push value back into the ring buffer
_ring.Push(Current);
return true; // iterator is infinite
}
public void Reset()
{
throw new NotSupportedException();
}
public byte Current { get; private set; }
object IEnumerator.Current => Current;
public void Dispose() { }
}
}

View File

@@ -1,218 +0,0 @@
using System;
using System.Collections.Generic;
using System.IO;
using System.Text;
namespace SharpCompress.Compressors.Arj
{
[CLSCompliant(true)]
public enum NodeType
{
Leaf,
Branch,
}
[CLSCompliant(true)]
public sealed class TreeEntry
{
public readonly NodeType Type;
public readonly int LeafValue;
public readonly int BranchIndex;
public const int MAX_INDEX = 4096;
private TreeEntry(NodeType type, int leafValue, int branchIndex)
{
Type = type;
LeafValue = leafValue;
BranchIndex = branchIndex;
}
public static TreeEntry Leaf(int value)
{
return new TreeEntry(NodeType.Leaf, value, -1);
}
public static TreeEntry Branch(int index)
{
if (index >= MAX_INDEX)
{
throw new ArgumentOutOfRangeException(
nameof(index),
"Branch index exceeds MAX_INDEX"
);
}
return new TreeEntry(NodeType.Branch, 0, index);
}
}
[CLSCompliant(true)]
public sealed class HuffTree
{
private readonly List<TreeEntry> _tree;
public HuffTree(int capacity = 0)
{
_tree = new List<TreeEntry>(capacity);
}
public void SetSingle(int value)
{
_tree.Clear();
_tree.Add(TreeEntry.Leaf(value));
}
public void BuildTree(byte[] lengths, int count)
{
if (lengths == null)
{
throw new ArgumentNullException(nameof(lengths));
}
if (count < 0 || count > lengths.Length)
{
throw new ArgumentOutOfRangeException(nameof(count));
}
if (count > TreeEntry.MAX_INDEX / 2)
{
throw new ArgumentException(
$"Count exceeds maximum allowed: {TreeEntry.MAX_INDEX / 2}"
);
}
byte[] slice = new byte[count];
Array.Copy(lengths, slice, count);
BuildTree(slice);
}
public void BuildTree(byte[] valueLengths)
{
if (valueLengths == null)
{
throw new ArgumentNullException(nameof(valueLengths));
}
if (valueLengths.Length > TreeEntry.MAX_INDEX / 2)
{
throw new InvalidOperationException("Too many code lengths");
}
_tree.Clear();
int maxAllocated = 1; // start with a single (root) node
for (byte currentLen = 1; ; currentLen++)
{
// add missing branches up to current limit
int maxLimit = maxAllocated;
for (int i = _tree.Count; i < maxLimit; i++)
{
// TreeEntry.Branch may throw if index too large
try
{
_tree.Add(TreeEntry.Branch(maxAllocated));
}
catch (ArgumentOutOfRangeException e)
{
_tree.Clear();
throw new InvalidOperationException("Branch index exceeds limit", e);
}
// each branch node allocates two children
maxAllocated += 2;
}
// fill tree with leaves found in the lengths table at the current length
bool moreLeaves = false;
for (int value = 0; value < valueLengths.Length; value++)
{
byte len = valueLengths[value];
if (len == currentLen)
{
_tree.Add(TreeEntry.Leaf(value));
}
else if (len > currentLen)
{
moreLeaves = true; // there are more leaves to process
}
}
// sanity check (too many leaves)
if (_tree.Count > maxAllocated)
{
throw new InvalidOperationException("Too many leaves");
}
// stop once no codes longer than the current length remain
if (!moreLeaves)
{
break;
}
}
// ensure tree is complete
if (_tree.Count != maxAllocated)
{
throw new InvalidOperationException(
$"Missing some leaves: tree count = {_tree.Count}, expected = {maxAllocated}"
);
}
}
public int ReadEntry(BitReader reader)
{
if (_tree.Count == 0)
{
throw new InvalidOperationException("Tree not initialized");
}
TreeEntry node = _tree[0];
while (true)
{
if (node.Type == NodeType.Leaf)
{
return node.LeafValue;
}
int bit = reader.ReadBit();
int index = node.BranchIndex + bit;
if (index >= _tree.Count)
{
throw new InvalidOperationException("Invalid branch index during read");
}
node = _tree[index];
}
}
public override string ToString()
{
var result = new StringBuilder();
void FormatStep(int index, string prefix)
{
var node = _tree[index];
if (node.Type == NodeType.Leaf)
{
result.AppendLine($"{prefix} -> {node.LeafValue}");
}
else
{
FormatStep(node.BranchIndex, prefix + "0");
FormatStep(node.BranchIndex + 1, prefix + "1");
}
}
if (_tree.Count > 0)
{
FormatStep(0, "");
}
return result.ToString();
}
}
}

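A minimal decode sketch, assuming the canonical code 0 -> "0", 1 -> "10", 2 -> "11" that BuildTree derives from the lengths { 1, 2, 2 }:
var tree = new HuffTree();
tree.BuildTree(new byte[] { 1, 2, 2 }); // one code length per symbol value
var bits = new BitReader(new MemoryStream(new byte[] { 0b0101_1000 })); // 0 | 10 | 11 | padding
Console.WriteLine(tree.ReadEntry(bits)); // 0
Console.WriteLine(tree.ReadEntry(bits)); // 1
Console.WriteLine(tree.ReadEntry(bits)); // 2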
View File

@@ -1,9 +0,0 @@
namespace SharpCompress.Compressors.Arj
{
public interface ILhaDecoderConfig
{
int HistoryBits { get; }
int OffsetBits { get; }
RingBuffer RingBuffer { get; }
}
}

View File

@@ -1,17 +0,0 @@
namespace SharpCompress.Compressors.Arj
{
public interface IRingBuffer
{
int BufferSize { get; }
int Cursor { get; }
void SetCursor(int pos);
void Push(byte value);
HistoryIterator IterFromOffset(int offset);
HistoryIterator IterFromPos(int pos);
byte this[int index] { get; }
}
}

View File

@@ -1,191 +0,0 @@
using System;
using System.Collections.Generic;
using System.IO;
using SharpCompress.IO;
namespace SharpCompress.Compressors.Arj
{
[CLSCompliant(true)]
public sealed class LHDecoderStream : Stream, IStreamStack
{
#if DEBUG_STREAMS
long IStreamStack.InstanceId { get; set; }
#endif
int IStreamStack.DefaultBufferSize { get; set; }
Stream IStreamStack.BaseStream() => _stream;
int IStreamStack.BufferSize
{
get => 0;
set { }
}
int IStreamStack.BufferPosition
{
get => 0;
set { }
}
void IStreamStack.SetPosition(long position) { }
private readonly BitReader _bitReader;
private readonly Stream _stream;
// Buffer containing *all* bytes decoded so far.
private readonly List<byte> _buffer = new();
private long _readPosition;
private readonly int _originalSize;
private bool _finishedDecoding;
private bool _disposed;
private const int THRESHOLD = 3;
public LHDecoderStream(Stream compressedStream, int originalSize)
{
_stream = compressedStream ?? throw new ArgumentNullException(nameof(compressedStream));
if (!compressedStream.CanRead)
throw new ArgumentException(
"compressedStream must be readable.",
nameof(compressedStream)
);
_bitReader = new BitReader(compressedStream);
_originalSize = originalSize;
_readPosition = 0;
_finishedDecoding = (originalSize == 0);
}
public Stream BaseStream => _stream;
public override bool CanRead => true;
public override bool CanSeek => false;
public override bool CanWrite => false;
public override long Length => _originalSize;
public override long Position
{
get => _readPosition;
set => throw new NotSupportedException();
}
/// <summary>
/// Decodes a single element (literal or back-reference) and appends it to _buffer.
/// Returns true if data was added, or false if all input has already been decoded.
/// </summary>
private bool DecodeNext()
{
if (_buffer.Count >= _originalSize)
{
_finishedDecoding = true;
return false;
}
int len = DecodeVal(0, 7);
if (len == 0)
{
byte nextChar = (byte)_bitReader.ReadBits(8);
_buffer.Add(nextChar);
}
else
{
int repCount = len + THRESHOLD - 1;
int backPtr = DecodeVal(9, 13);
if (backPtr >= _buffer.Count)
throw new InvalidDataException("Invalid back_ptr in LH stream");
int srcIndex = _buffer.Count - 1 - backPtr;
for (int j = 0; j < repCount && _buffer.Count < _originalSize; j++)
{
byte b = _buffer[srcIndex];
_buffer.Add(b);
srcIndex++;
// srcIndex may grow; it's allowed (source region can overlap destination)
}
}
if (_buffer.Count >= _originalSize)
{
_finishedDecoding = true;
}
return true;
}
private int DecodeVal(int from, int to)
{
int add = 0;
int bit = from;
while (bit < to && _bitReader.ReadBits(1) == 1)
{
add |= 1 << bit;
bit++;
}
int res = bit > 0 ? _bitReader.ReadBits(bit) : 0;
return res + add;
}
/// <summary>
/// Reads decompressed bytes into buffer[offset..offset+count].
/// The method decodes additional data on demand when needed.
/// </summary>
public override int Read(byte[] buffer, int offset, int count)
{
if (_disposed)
throw new ObjectDisposedException(nameof(LHDecoderStream));
if (buffer == null)
throw new ArgumentNullException(nameof(buffer));
if (offset < 0 || count < 0 || offset + count > buffer.Length)
throw new ArgumentOutOfRangeException("offset/count");
if (_readPosition >= _originalSize)
return 0; // EOF
int totalRead = 0;
while (totalRead < count && _readPosition < _originalSize)
{
if (_readPosition >= _buffer.Count)
{
bool had = DecodeNext();
if (!had)
{
break;
}
}
int available = _buffer.Count - (int)_readPosition;
if (available <= 0)
{
if (!_finishedDecoding)
{
continue;
}
break;
}
int toCopy = Math.Min(available, count - totalRead);
_buffer.CopyTo((int)_readPosition, buffer, offset + totalRead, toCopy);
_readPosition += toCopy;
totalRead += toCopy;
}
return totalRead;
}
public override void Flush() => throw new NotSupportedException();
public override long Seek(long offset, SeekOrigin origin) =>
throw new NotSupportedException();
public override void SetLength(long value) => throw new NotSupportedException();
public override void Write(byte[] buffer, int offset, int count) =>
throw new NotSupportedException();
}
}

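A minimal usage sketch, assuming compressed is positioned at the packed payload and originalSize comes from the ARJ local header:
using var decoder = new LHDecoderStream(compressed, originalSize);
using var output = new MemoryStream();
decoder.CopyTo(output); // output now holds the originalSize decompressed bytes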
View File

@@ -1,9 +0,0 @@
namespace SharpCompress.Compressors.Arj
{
public class Lh5DecoderCfg : ILhaDecoderConfig
{
public int HistoryBits => 14;
public int OffsetBits => 4;
public RingBuffer RingBuffer { get; } = new RingBuffer(1 << 14);
}
}

View File

@@ -1,9 +0,0 @@
namespace SharpCompress.Compressors.Arj
{
public class Lh7DecoderCfg : ILhaDecoderConfig
{
public int HistoryBits => 17;
public int OffsetBits => 5;
public RingBuffer RingBuffer { get; } = new RingBuffer(1 << 17);
}
}

View File

@@ -1,363 +0,0 @@
using System;
using System.Data;
using System.IO;
using System.Linq;
using SharpCompress.IO;
namespace SharpCompress.Compressors.Arj
{
[CLSCompliant(true)]
public sealed class LhaStream<C> : Stream, IStreamStack
where C : ILhaDecoderConfig, new()
{
private readonly BitReader _bitReader;
private readonly Stream _stream;
private readonly HuffTree _commandTree;
private readonly HuffTree _offsetTree;
private int _remainingCommands;
private (int offset, int count)? _copyProgress;
private readonly RingBuffer _ringBuffer;
private readonly C _config = new C();
private const int NUM_COMMANDS = 510;
private const int NUM_TEMP_CODELEN = 20;
private readonly int _originalSize;
private int _producedBytes = 0;
#if DEBUG_STREAMS
long IStreamStack.InstanceId { get; set; }
#endif
int IStreamStack.DefaultBufferSize { get; set; }
Stream IStreamStack.BaseStream() => _stream;
int IStreamStack.BufferSize
{
get => 0;
set { }
}
int IStreamStack.BufferPosition
{
get => 0;
set { }
}
void IStreamStack.SetPosition(long position) { }
public LhaStream(Stream compressedStream, int originalSize)
{
_stream = compressedStream ?? throw new ArgumentNullException(nameof(compressedStream));
_bitReader = new BitReader(compressedStream);
_ringBuffer = _config.RingBuffer;
_commandTree = new HuffTree(NUM_COMMANDS * 2);
_offsetTree = new HuffTree(NUM_TEMP_CODELEN * 2);
_remainingCommands = 0;
_copyProgress = null;
_originalSize = originalSize;
}
public override bool CanRead => true;
public override bool CanSeek => false;
public override bool CanWrite => false;
public override long Length => throw new NotSupportedException();
public override long Position
{
get => throw new NotSupportedException();
set => throw new NotSupportedException();
}
public override void Flush() { }
public override long Seek(long offset, SeekOrigin origin) =>
throw new NotSupportedException();
public override void SetLength(long value) => throw new NotSupportedException();
public override void Write(byte[] buffer, int offset, int count) =>
throw new NotSupportedException();
public override int Read(byte[] buffer, int offset, int count)
{
if (buffer == null)
{
throw new ArgumentNullException(nameof(buffer));
}
if (offset < 0 || count < 0 || (offset + count) > buffer.Length)
{
throw new ArgumentOutOfRangeException();
}
if (_producedBytes >= _originalSize)
{
return 0; // EOF
}
if (count == 0)
{
return 0;
}
// FillBuffer always writes from index 0, so honor offset/count with a bounce buffer when needed
if (offset == 0 && count == buffer.Length)
{
return FillBuffer(buffer);
}
var temp = new byte[count];
var bytesRead = FillBuffer(temp);
Array.Copy(temp, 0, buffer, offset, bytesRead);
return bytesRead;
}
private byte ReadCodeLength()
{
int len = _bitReader.ReadBits(3); // int, so the overflow guard below can actually trigger
if (len == 7)
{
while (_bitReader.ReadBit() != 0)
{
len++;
if (len > 255)
{
throw new InvalidOperationException("Code length overflow");
}
}
}
return (byte)len;
}
private int ReadCodeSkip(int skipRange)
{
int bits;
int increment;
switch (skipRange)
{
case 0:
return 1;
case 1:
bits = 4;
increment = 3; // 3..=18
break;
default:
bits = 9;
increment = 20; // 20..=531
break;
}
int skip = _bitReader.ReadBits(bits);
return skip + increment;
}
private void ReadTempTree()
{
byte[] codeLengths = new byte[NUM_TEMP_CODELEN];
// number of codes to read (5 bits)
int numCodes = _bitReader.ReadBits(5);
// single code only
if (numCodes == 0)
{
int code = _bitReader.ReadBits(5);
_offsetTree.SetSingle((byte)code);
return;
}
if (numCodes > NUM_TEMP_CODELEN)
{
throw new Exception("temporary codelen table has invalid size");
}
// read actual lengths
int count = Math.Min(3, numCodes);
for (int i = 0; i < count; i++)
{
codeLengths[i] = (byte)ReadCodeLength();
}
// 2-bit skip value follows
int skip = _bitReader.ReadBits(2);
if (3 + skip > numCodes)
{
throw new Exception("temporary codelen table has invalid size");
}
for (int i = 3 + skip; i < numCodes; i++)
{
codeLengths[i] = (byte)ReadCodeLength();
}
_offsetTree.BuildTree(codeLengths, numCodes);
}
private void ReadCommandTree()
{
byte[] codeLengths = new byte[NUM_COMMANDS];
// number of codes to read (9 bits)
int numCodes = _bitReader.ReadBits(9);
// single code only
if (numCodes == 0)
{
int code = _bitReader.ReadBits(9);
_commandTree.SetSingle((ushort)code);
return;
}
if (numCodes > NUM_COMMANDS)
{
throw new Exception("commands codelen table has invalid size");
}
int index = 0;
while (index < numCodes)
{
for (int n = 0; n < numCodes - index; n++)
{
int code = _offsetTree.ReadEntry(_bitReader);
if (code >= 0 && code <= 2) // skip range
{
int skipCount = ReadCodeSkip(code);
index += n + skipCount;
goto outerLoop;
}
else
{
codeLengths[index + n] = (byte)(code - 2);
}
}
break;
outerLoop:
;
}
_commandTree.BuildTree(codeLengths, numCodes);
}
private void ReadOffsetTree()
{
int numCodes = _bitReader.ReadBits(_config.OffsetBits);
if (numCodes == 0)
{
int code = _bitReader.ReadBits(_config.OffsetBits);
_offsetTree.SetSingle(code);
return;
}
if (numCodes > _config.HistoryBits)
{
throw new InvalidDataException("Offset code table too large");
}
byte[] codeLengths = new byte[NUM_TEMP_CODELEN];
for (int i = 0; i < numCodes; i++)
{
codeLengths[i] = (byte)ReadCodeLength();
}
_offsetTree.BuildTree(codeLengths, numCodes);
}
private void BeginNewBlock()
{
ReadTempTree();
ReadCommandTree();
ReadOffsetTree();
}
private int ReadCommand() => _commandTree.ReadEntry(_bitReader);
private int ReadOffset()
{
int bits = _offsetTree.ReadEntry(_bitReader);
if (bits <= 1)
{
return bits;
}
int res = _bitReader.ReadBits(bits - 1);
return res | (1 << (bits - 1));
}
private int CopyFromHistory(byte[] target, int targetIndex, int offset, int count)
{
var historyIter = _ringBuffer.IterFromOffset(offset);
int copied = 0;
while (
copied < count && historyIter.MoveNext() && (targetIndex + copied) < target.Length
)
{
target[targetIndex + copied] = historyIter.Current;
copied++;
}
if (copied < count)
{
_copyProgress = (offset, count - copied);
}
return copied;
}
public int FillBuffer(byte[] buffer)
{
int bufLen = buffer.Length;
int bufIndex = 0;
// stop once we have produced the original (uncompressed) size
if (_producedBytes >= _originalSize)
{
return 0;
}
// calculate limit, so that we don't go over the original size
int remaining = (int)Math.Min(bufLen, _originalSize - _producedBytes);
while (bufIndex < remaining)
{
if (_copyProgress.HasValue)
{
var (offset, count) = _copyProgress.Value;
_copyProgress = null;
int requested = Math.Min(count, remaining - bufIndex);
int copied = CopyFromHistory(buffer, bufIndex, offset, requested);
bufIndex += copied;
if (count - copied > 0)
{
// keep the uncopied remainder so the next call can resume the back-reference
_copyProgress = (offset, count - copied);
}
}
if (_remainingCommands == 0)
{
_remainingCommands = _bitReader.ReadBits(16);
if (bufIndex + _remainingCommands > remaining)
{
break;
}
BeginNewBlock();
}
_remainingCommands--;
int command = ReadCommand();
if (command >= 0 && command <= 0xFF)
{
byte value = (byte)command;
buffer[bufIndex++] = value;
_ringBuffer.Push(value);
}
else
{
int count = command - 0x100 + 3;
int offset = ReadOffset();
int copyCount = Math.Min(count, remaining - bufIndex);
int copied = CopyFromHistory(buffer, bufIndex, offset, copyCount);
bufIndex += copied;
if (count - copied > 0)
{
// record any remainder (truncated or buffer-full) so it resumes on the next fill
_copyProgress = (offset, count - copied);
}
}
}
_producedBytes += bufIndex;
return bufIndex;
}
}
}

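Usage mirrors LHDecoderStream; a sketch with the LH5 configuration (compressed and originalSize hypothetical):
using var lha = new LhaStream<Lh5DecoderCfg>(compressed, originalSize);
var data = new byte[originalSize];
var read = 0;
while (read < data.Length)
{
    var n = lha.Read(data, read, data.Length - read);
    if (n == 0)
    {
        break; // truncated input
    }
    read += n;
}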
View File

@@ -1,67 +0,0 @@
using System;
using System.Collections;
using System.Collections.Generic;
namespace SharpCompress.Compressors.Arj
{
/// <summary>
/// A fixed-size ring buffer where N must be a power of two.
/// </summary>
public class RingBuffer : IRingBuffer
{
private readonly byte[] _buffer;
private int _cursor;
public int BufferSize { get; }
public int Cursor => _cursor;
private readonly int _mask;
public RingBuffer(int size)
{
if ((size & (size - 1)) != 0)
{
throw new ArgumentException("RingArrayBuffer size must be a power of two");
}
BufferSize = size;
_buffer = new byte[size];
_cursor = 0;
_mask = size - 1;
// Fill with spaces
for (int i = 0; i < size; i++)
{
_buffer[i] = (byte)' ';
}
}
public void SetCursor(int pos)
{
_cursor = pos & _mask;
}
public void Push(byte value)
{
int index = _cursor;
_buffer[index & _mask] = value;
_cursor = (index + 1) & _mask;
}
public byte this[int index] => _buffer[index & _mask];
public HistoryIterator IterFromOffset(int offset)
{
int masked = (offset & _mask) + 1;
int startIndex = _cursor + BufferSize - masked;
return new HistoryIterator(this, startIndex);
}
public HistoryIterator IterFromPos(int pos)
{
int startIndex = pos & _mask;
return new HistoryIterator(this, startIndex);
}
}
}

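A minimal sketch of the offset semantics: IterFromOffset(0) replays the most recently pushed byte, and each MoveNext re-pushes the byte it yields at the cursor:
var ring = new RingBuffer(16); // size must be a power of two
ring.Push((byte)'a');
ring.Push((byte)'b');
var it = ring.IterFromOffset(1); // distance 1 => two bytes back from the cursor
it.MoveNext();
Console.WriteLine((char)it.Current); // 'a', which is also pushed back at the cursor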
View File

@@ -1,7 +1,5 @@
using System;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.IO;
namespace SharpCompress.Compressors.BZip2;
@@ -98,37 +96,13 @@ public sealed class BZip2Stream : Stream, IStreamStack
public override void SetLength(long value) => stream.SetLength(value);
#if !NETFRAMEWORK && !NETSTANDARD2_0
#if !NETFRAMEWORK&& !NETSTANDARD2_0
public override int Read(Span<byte> buffer) => stream.Read(buffer);
public override void Write(ReadOnlySpan<byte> buffer) => stream.Write(buffer);
public override async ValueTask<int> ReadAsync(
Memory<byte> buffer,
CancellationToken cancellationToken = default
) => await stream.ReadAsync(buffer, cancellationToken).ConfigureAwait(false);
public override async ValueTask WriteAsync(
ReadOnlyMemory<byte> buffer,
CancellationToken cancellationToken = default
) => await stream.WriteAsync(buffer, cancellationToken).ConfigureAwait(false);
#endif
public override async Task<int> ReadAsync(
byte[] buffer,
int offset,
int count,
CancellationToken cancellationToken = default
) => await stream.ReadAsync(buffer, offset, count, cancellationToken).ConfigureAwait(false);
public override async Task WriteAsync(
byte[] buffer,
int offset,
int count,
CancellationToken cancellationToken = default
) => await stream.WriteAsync(buffer, offset, count, cancellationToken).ConfigureAwait(false);
public override void Write(byte[] buffer, int offset, int count) =>
stream.Write(buffer, offset, count);

View File

@@ -2,8 +2,6 @@
using System;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.IO;
/*
@@ -1129,28 +1127,6 @@ internal class CBZip2InputStream : Stream, IStreamStack
return k;
}
public override Task<int> ReadAsync(
byte[] buffer,
int offset,
int count,
CancellationToken cancellationToken = default
)
{
var c = -1;
int k;
for (k = 0; k < count; ++k)
{
cancellationToken.ThrowIfCancellationRequested();
c = ReadByte();
if (c == -1)
{
break;
}
buffer[k + offset] = (byte)c;
}
return Task.FromResult(k);
}
public override long Seek(long offset, SeekOrigin origin) => 0;
public override void SetLength(long value) { }

View File

@@ -1,7 +1,5 @@
using System;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.IO;
/*
@@ -544,12 +542,6 @@ internal sealed class CBZip2OutputStream : Stream, IStreamStack
private void EndBlock()
{
// Skip block processing for empty input (no data written)
if (last < 0)
{
return;
}
blockCRC = mCrc.GetFinalCRC();
combinedCRC = (combinedCRC << 1) | (int)(((uint)combinedCRC) >> 31);
combinedCRC ^= blockCRC;
@@ -2030,21 +2022,6 @@ internal sealed class CBZip2OutputStream : Stream, IStreamStack
}
}
public override Task WriteAsync(
byte[] buffer,
int offset,
int count,
CancellationToken cancellationToken = default
)
{
for (var k = 0; k < count; ++k)
{
cancellationToken.ThrowIfCancellationRequested();
WriteByte(buffer[k + offset]);
}
return Task.CompletedTask;
}
public override bool CanRead => false;
public override bool CanSeek => false;

View File

@@ -2,8 +2,6 @@ using System;
using System.IO;
using System.Security.Cryptography;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.Compressors.LZMA.Utilites;
using SharpCompress.IO;
@@ -285,70 +283,5 @@ internal sealed class AesDecoderStream : DecoderStream2, IStreamStack
return count;
}
public override async Task<int> ReadAsync(
byte[] buffer,
int offset,
int count,
CancellationToken cancellationToken = default
)
{
if (count == 0 || mWritten == mLimit)
{
return 0;
}
if (mUnderflow > 0)
{
return HandleUnderflow(buffer, offset, count);
}
// Need at least 16 bytes to proceed.
if (mEnding - mOffset < 16)
{
Buffer.BlockCopy(mBuffer, mOffset, mBuffer, 0, mEnding - mOffset);
mEnding -= mOffset;
mOffset = 0;
do
{
cancellationToken.ThrowIfCancellationRequested();
var read = await mStream
.ReadAsync(mBuffer, mEnding, mBuffer.Length - mEnding, cancellationToken)
.ConfigureAwait(false);
if (read == 0)
{
// We are not done decoding and have less than 16 bytes.
throw new EndOfStreamException();
}
mEnding += read;
} while (mEnding - mOffset < 16);
}
// We shouldn't return more data than we are limited to.
if (count > mLimit - mWritten)
{
count = (int)(mLimit - mWritten);
}
// We cannot transform less than 16 bytes into the target buffer,
// but we also cannot return zero, so we need to handle this.
if (count < 16)
{
return HandleUnderflow(buffer, offset, count);
}
if (count > mEnding - mOffset)
{
count = mEnding - mOffset;
}
// Otherwise we transform directly into the target buffer.
var processed = mDecoder.TransformBlock(mBuffer, mOffset, count & ~15, buffer, offset);
mOffset += processed;
mWritten += processed;
return processed;
}
#endregion
}

View File

@@ -1,8 +1,6 @@
using System;
using System.Collections.Generic;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.IO;
namespace SharpCompress.Compressors.LZMA;
@@ -193,18 +191,6 @@ internal class Bcj2DecoderStream : DecoderStream2, IStreamStack
return count;
}
public override Task<int> ReadAsync(
byte[] buffer,
int offset,
int count,
CancellationToken cancellationToken = default
)
{
cancellationToken.ThrowIfCancellationRequested();
// Bcj2DecoderStream uses complex state machine with multiple streams
return Task.FromResult(Read(buffer, offset, count));
}
public override int ReadByte()
{
if (_mFinished)

View File

@@ -3,8 +3,6 @@
using System;
using System.Buffers;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
namespace SharpCompress.Compressors.LZMA.LZ;
@@ -87,12 +85,6 @@ internal class OutWindow : IDisposable
_stream = null;
}
public async Task ReleaseStreamAsync(CancellationToken cancellationToken = default)
{
await FlushAsync(cancellationToken).ConfigureAwait(false);
_stream = null;
}
private void Flush()
{
if (_stream is null)
@@ -112,27 +104,6 @@ internal class OutWindow : IDisposable
_streamPos = _pos;
}
private async Task FlushAsync(CancellationToken cancellationToken = default)
{
if (_stream is null)
{
return;
}
var size = _pos - _streamPos;
if (size == 0)
{
return;
}
await _stream
.WriteAsync(_buffer, _streamPos, size, cancellationToken)
.ConfigureAwait(false);
if (_pos >= _windowSize)
{
_pos = 0;
}
_streamPos = _pos;
}
public void CopyPending()
{
if (_pendingLen < 1)
@@ -153,26 +124,6 @@ internal class OutWindow : IDisposable
_pendingLen = rem;
}
public async Task CopyPendingAsync(CancellationToken cancellationToken = default)
{
if (_pendingLen < 1)
{
return;
}
var rem = _pendingLen;
var pos = (_pendingDist < _pos ? _pos : _pos + _windowSize) - _pendingDist - 1;
while (rem > 0 && HasSpace)
{
if (pos >= _windowSize)
{
pos = 0;
}
await PutByteAsync(_buffer[pos++], cancellationToken).ConfigureAwait(false);
rem--;
}
_pendingLen = rem;
}
public void CopyBlock(int distance, int len)
{
var rem = len;
@@ -206,43 +157,6 @@ internal class OutWindow : IDisposable
_pendingDist = distance;
}
public async Task CopyBlockAsync(
int distance,
int len,
CancellationToken cancellationToken = default
)
{
var rem = len;
var pos = (distance < _pos ? _pos : _pos + _windowSize) - distance - 1;
var targetSize = HasSpace ? (int)Math.Min(rem, _limit - _total) : 0;
var sizeUntilWindowEnd = Math.Min(_windowSize - _pos, _windowSize - pos);
var sizeUntilOverlap = Math.Abs(pos - _pos);
var fastSize = Math.Min(Math.Min(sizeUntilWindowEnd, sizeUntilOverlap), targetSize);
if (fastSize >= 2)
{
_buffer.AsSpan(pos, fastSize).CopyTo(_buffer.AsSpan(_pos, fastSize));
_pos += fastSize;
pos += fastSize;
_total += fastSize;
if (_pos >= _windowSize)
{
await FlushAsync(cancellationToken).ConfigureAwait(false);
}
rem -= fastSize;
}
while (rem > 0 && HasSpace)
{
if (pos >= _windowSize)
{
pos = 0;
}
await PutByteAsync(_buffer[pos++], cancellationToken).ConfigureAwait(false);
rem--;
}
_pendingLen = rem;
_pendingDist = distance;
}
public void PutByte(byte b)
{
_buffer[_pos++] = b;
@@ -253,16 +167,6 @@ internal class OutWindow : IDisposable
}
}
public async Task PutByteAsync(byte b, CancellationToken cancellationToken = default)
{
_buffer[_pos++] = b;
_total++;
if (_pos >= _windowSize)
{
await FlushAsync(cancellationToken).ConfigureAwait(false);
}
}
public byte GetByte(int distance)
{
var pos = _pos - distance - 1;
@@ -303,44 +207,6 @@ internal class OutWindow : IDisposable
return len - size;
}
public async Task<int> CopyStreamAsync(
Stream stream,
int len,
CancellationToken cancellationToken = default
)
{
var size = len;
while (size > 0 && _pos < _windowSize && _total < _limit)
{
cancellationToken.ThrowIfCancellationRequested();
var curSize = _windowSize - _pos;
if (curSize > _limit - _total)
{
curSize = (int)(_limit - _total);
}
if (curSize > size)
{
curSize = size;
}
var numReadBytes = await stream
.ReadAsync(_buffer, _pos, curSize, cancellationToken)
.ConfigureAwait(false);
if (numReadBytes == 0)
{
throw new DataErrorException();
}
size -= numReadBytes;
_pos += numReadBytes;
_total += numReadBytes;
if (_pos >= _windowSize)
{
await FlushAsync(cancellationToken).ConfigureAwait(false);
}
}
return len - size;
}
public void SetLimit(long size) => _limit = _total + size;
public bool HasSpace => _pos < _windowSize && _total < _limit;

View File

@@ -1,8 +1,6 @@
using System;
using System.Buffers.Binary;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.Common;
using SharpCompress.Crypto;
using SharpCompress.IO;
@@ -159,11 +157,6 @@ public sealed class LZipStream : Stream, IStreamStack
#if !NETFRAMEWORK && !NETSTANDARD2_0
public override ValueTask<int> ReadAsync(
Memory<byte> buffer,
CancellationToken cancellationToken = default
) => _stream.ReadAsync(buffer, cancellationToken);
public override int Read(Span<byte> buffer) => _stream.Read(buffer);
public override void Write(ReadOnlySpan<byte> buffer)
@@ -186,25 +179,6 @@ public sealed class LZipStream : Stream, IStreamStack
++_writeCount;
}
public override Task<int> ReadAsync(
byte[] buffer,
int offset,
int count,
CancellationToken cancellationToken = default
) => _stream.ReadAsync(buffer, offset, count, cancellationToken);
public override async Task WriteAsync(
byte[] buffer,
int offset,
int count,
CancellationToken cancellationToken
)
{
cancellationToken.ThrowIfCancellationRequested();
await _stream.WriteAsync(buffer, offset, count, cancellationToken);
_writeCount += count;
}
#endregion
/// <summary>

View File

@@ -1,7 +1,6 @@
#nullable disable
using System;
using System.Diagnostics.CodeAnalysis;
using System.IO;
using SharpCompress.Compressors.LZMA.LZ;
using SharpCompress.Compressors.LZMA.RangeCoder;
@@ -200,9 +199,6 @@ public class Decoder : ICoder, ISetDecoderProperties // ,System.IO.Stream
}
}
#if !NETFRAMEWORK && !NETSTANDARD2_0
[MemberNotNull(nameof(_outWindow))]
#endif
private void CreateDictionary()
{
if (_dictionarySize < 0)
@@ -313,42 +309,6 @@ public class Decoder : ICoder, ISetDecoderProperties // ,System.IO.Stream
_outWindow = null;
}
public async System.Threading.Tasks.Task CodeAsync(
Stream inStream,
Stream outStream,
long inSize,
long outSize,
ICodeProgress progress,
System.Threading.CancellationToken cancellationToken = default
)
{
if (_outWindow is null)
{
CreateDictionary();
}
_outWindow.Init(outStream);
if (outSize > 0)
{
_outWindow.SetLimit(outSize);
}
else
{
_outWindow.SetLimit(long.MaxValue - _outWindow.Total);
}
var rangeDecoder = new RangeCoder.Decoder();
rangeDecoder.Init(inStream);
await CodeAsync(_dictionarySize, _outWindow, rangeDecoder, cancellationToken)
.ConfigureAwait(false);
await _outWindow.ReleaseStreamAsync(cancellationToken).ConfigureAwait(false);
rangeDecoder.ReleaseStream();
_outWindow.Dispose();
_outWindow = null;
}
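For reference, this removed one-shot entry point would have been driven roughly as follows. The `Decoder`, `SetDecoderProperties`, and `CodeAsync` names come from the diff itself; the surrounding streams and sizes are assumptions:

```csharp
// Hedged usage sketch of the removed async decode path.
var decoder = new Decoder();
decoder.SetDecoderProperties(props); // props: standard 5-byte LZMA header
await decoder.CodeAsync(
    inStream,
    outStream,
    inSize: -1,                      // compressed length unknown
    outSize: uncompressedSize,       // or <= 0 to decode until the end marker
    progress: null,
    cancellationToken: ct);
```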
internal bool Code(int dictionarySize, OutWindow outWindow, RangeCoder.Decoder rangeDecoder)
{
var dictionarySizeCheck = Math.Max(dictionarySize, 1);
@@ -475,143 +435,6 @@ public class Decoder : ICoder, ISetDecoderProperties // ,System.IO.Stream
return false;
}
internal async System.Threading.Tasks.Task<bool> CodeAsync(
int dictionarySize,
OutWindow outWindow,
RangeCoder.Decoder rangeDecoder,
System.Threading.CancellationToken cancellationToken = default
)
{
var dictionarySizeCheck = Math.Max(dictionarySize, 1);
await outWindow.CopyPendingAsync(cancellationToken).ConfigureAwait(false);
while (outWindow.HasSpace)
{
cancellationToken.ThrowIfCancellationRequested();
var posState = (uint)outWindow.Total & _posStateMask;
if (
_isMatchDecoders[(_state._index << Base.K_NUM_POS_STATES_BITS_MAX) + posState]
.Decode(rangeDecoder) == 0
)
{
byte b;
var prevByte = outWindow.GetByte(0);
if (!_state.IsCharState())
{
b = _literalDecoder.DecodeWithMatchByte(
rangeDecoder,
(uint)outWindow.Total,
prevByte,
outWindow.GetByte((int)_rep0)
);
}
else
{
b = _literalDecoder.DecodeNormal(rangeDecoder, (uint)outWindow.Total, prevByte);
}
await outWindow.PutByteAsync(b, cancellationToken).ConfigureAwait(false);
_state.UpdateChar();
}
else
{
uint len;
if (_isRepDecoders[_state._index].Decode(rangeDecoder) == 1)
{
if (_isRepG0Decoders[_state._index].Decode(rangeDecoder) == 0)
{
if (
_isRep0LongDecoders[
(_state._index << Base.K_NUM_POS_STATES_BITS_MAX) + posState
]
.Decode(rangeDecoder) == 0
)
{
_state.UpdateShortRep();
await outWindow
.PutByteAsync(outWindow.GetByte((int)_rep0), cancellationToken)
.ConfigureAwait(false);
continue;
}
}
else
{
uint distance;
if (_isRepG1Decoders[_state._index].Decode(rangeDecoder) == 0)
{
distance = _rep1;
}
else
{
if (_isRepG2Decoders[_state._index].Decode(rangeDecoder) == 0)
{
distance = _rep2;
}
else
{
distance = _rep3;
_rep3 = _rep2;
}
_rep2 = _rep1;
}
_rep1 = _rep0;
_rep0 = distance;
}
len = _repLenDecoder.Decode(rangeDecoder, posState) + Base.K_MATCH_MIN_LEN;
_state.UpdateRep();
}
else
{
_rep3 = _rep2;
_rep2 = _rep1;
_rep1 = _rep0;
len = Base.K_MATCH_MIN_LEN + _lenDecoder.Decode(rangeDecoder, posState);
_state.UpdateMatch();
var posSlot = _posSlotDecoder[Base.GetLenToPosState(len)].Decode(rangeDecoder);
if (posSlot >= Base.K_START_POS_MODEL_INDEX)
{
var numDirectBits = (int)((posSlot >> 1) - 1);
_rep0 = ((2 | (posSlot & 1)) << numDirectBits);
if (posSlot < Base.K_END_POS_MODEL_INDEX)
{
_rep0 += BitTreeDecoder.ReverseDecode(
_posDecoders,
_rep0 - posSlot - 1,
rangeDecoder,
numDirectBits
);
}
else
{
_rep0 += (
rangeDecoder.DecodeDirectBits(numDirectBits - Base.K_NUM_ALIGN_BITS)
<< Base.K_NUM_ALIGN_BITS
);
_rep0 += _posAlignDecoder.ReverseDecode(rangeDecoder);
}
}
else
{
_rep0 = posSlot;
}
}
if (_rep0 >= outWindow.Total || _rep0 >= dictionarySizeCheck)
{
if (_rep0 == 0xFFFFFFFF)
{
return true;
}
throw new DataErrorException();
}
await outWindow
.CopyBlockAsync((int)_rep0, (int)len, cancellationToken)
.ConfigureAwait(false);
}
}
return false;
}
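The `_rep0.._rep3` shuffling in the branches above maintains LZMA's history of the four most recent match distances: reusing slot 1, 2, or 3 rotates that distance to the front, and a brand-new match shifts everything down. A compact sketch of the promotion rule, with the history modeled as a plain array (an illustration, not the decoder's representation):

```csharp
// reps = { rep0, rep1, rep2, rep3 }; slot is the reused distance's index.
static void PromoteRep(uint[] reps, int slot)
{
    uint d = reps[slot];
    for (int i = slot; i > 0; i--) reps[i] = reps[i - 1]; // shift down
    reps[0] = d;                                          // promoted distance
}
```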
public void SetDecoderProperties(byte[] properties)
{
if (properties.Length < 1)
@@ -647,4 +470,29 @@ public class Decoder : ICoder, ISetDecoderProperties // ,System.IO.Stream
}
_outWindow.Train(stream);
}
/*
public override bool CanRead { get { return true; }}
public override bool CanWrite { get { return true; }}
public override bool CanSeek { get { return true; }}
public override long Length { get { return 0; }}
public override long Position
{
get { return 0; }
set { }
}
public override void Flush() { }
public override int Read(byte[] buffer, int offset, int count)
{
return 0;
}
public override void Write(byte[] buffer, int offset, int count)
{
}
public override long Seek(long offset, System.IO.SeekOrigin origin)
{
return 0;
}
public override void SetLength(long value) {}
*/
}

View File

@@ -3,8 +3,6 @@
using System;
using System.Buffers.Binary;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.Compressors.LZMA.LZ;
using SharpCompress.IO;
@@ -425,90 +423,6 @@ public class LzmaStream : Stream, IStreamStack
}
}
private async Task DecodeChunkHeaderAsync(CancellationToken cancellationToken = default)
{
var controlBuffer = new byte[1];
await _inputStream
.ReadExactlyAsync(controlBuffer, 0, 1, cancellationToken)
.ConfigureAwait(false);
var control = controlBuffer[0];
_inputPosition++;
if (control == 0x00)
{
_endReached = true;
return;
}
if (control >= 0xE0 || control == 0x01)
{
_needProps = true;
_needDictReset = false;
_outWindow.Reset();
}
else if (_needDictReset)
{
throw new DataErrorException();
}
if (control >= 0x80)
{
_uncompressedChunk = false;
_availableBytes = (control & 0x1F) << 16;
var buffer = new byte[2];
await _inputStream
.ReadExactlyAsync(buffer, 0, 2, cancellationToken)
.ConfigureAwait(false);
_availableBytes += (buffer[0] << 8) + buffer[1] + 1;
_inputPosition += 2;
await _inputStream
.ReadExactlyAsync(buffer, 0, 2, cancellationToken)
.ConfigureAwait(false);
_rangeDecoderLimit = (buffer[0] << 8) + buffer[1] + 1;
_inputPosition += 2;
if (control >= 0xC0)
{
_needProps = false;
await _inputStream
.ReadExactlyAsync(controlBuffer, 0, 1, cancellationToken)
.ConfigureAwait(false);
Properties[0] = controlBuffer[0];
_inputPosition++;
_decoder = new Decoder();
_decoder.SetDecoderProperties(Properties);
}
else if (_needProps)
{
throw new DataErrorException();
}
else if (control >= 0xA0)
{
_decoder = new Decoder();
_decoder.SetDecoderProperties(Properties);
}
_rangeDecoder.Init(_inputStream);
}
else if (control > 0x02)
{
throw new DataErrorException();
}
else
{
_uncompressedChunk = true;
var buffer = new byte[2];
await _inputStream
.ReadExactlyAsync(buffer, 0, 2, cancellationToken)
.ConfigureAwait(false);
_availableBytes = (buffer[0] << 8) + buffer[1] + 1;
_inputPosition += 2;
}
}
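The branch structure encodes the LZMA2 control byte layout: 0x00 ends the stream, 0x01/0x02 introduce an uncompressed chunk (0x01 additionally resets the dictionary), and values >= 0x80 introduce an LZMA chunk whose top bits select which state is reset and whose low five bits carry the high bits of the unpacked size. A compact reference sketch of that mapping, mirroring the branches above:

```csharp
static string DescribeControl(byte control) => control switch
{
    0x00 => "end of stream",
    0x01 => "uncompressed chunk, dictionary reset",
    0x02 => "uncompressed chunk",
    >= 0xE0 => "LZMA chunk: new props + state reset + dictionary reset",
    >= 0xC0 => "LZMA chunk: new props + state reset",
    >= 0xA0 => "LZMA chunk: state reset",
    >= 0x80 => "LZMA chunk: no reset",
    _ => "invalid (0x03..0x7F)",
};
```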
public override long Seek(long offset, SeekOrigin origin) => throw new NotSupportedException();
public override void SetLength(long value) => throw new NotSupportedException();
@@ -521,128 +435,5 @@ public class LzmaStream : Stream, IStreamStack
}
}
public override async Task<int> ReadAsync(
byte[] buffer,
int offset,
int count,
CancellationToken cancellationToken
)
{
if (_endReached)
{
return 0;
}
var total = 0;
while (total < count)
{
cancellationToken.ThrowIfCancellationRequested();
if (_availableBytes == 0)
{
if (_isLzma2)
{
await DecodeChunkHeaderAsync(cancellationToken).ConfigureAwait(false);
}
else
{
_endReached = true;
}
if (_endReached)
{
break;
}
}
var toProcess = count - total;
if (toProcess > _availableBytes)
{
toProcess = (int)_availableBytes;
}
_outWindow.SetLimit(toProcess);
if (_uncompressedChunk)
{
_inputPosition += await _outWindow
.CopyStreamAsync(_inputStream, toProcess, cancellationToken)
.ConfigureAwait(false);
}
else if (
await _decoder
.CodeAsync(_dictionarySize, _outWindow, _rangeDecoder, cancellationToken)
.ConfigureAwait(false)
&& _outputSize < 0
)
{
_availableBytes = _outWindow.AvailableBytes;
}
var read = _outWindow.Read(buffer, offset, toProcess);
total += read;
offset += read;
_position += read;
_availableBytes -= read;
if (_availableBytes == 0 && !_uncompressedChunk)
{
if (
!_rangeDecoder.IsFinished
|| (_rangeDecoderLimit >= 0 && _rangeDecoder._total != _rangeDecoderLimit)
)
{
_outWindow.SetLimit(toProcess + 1);
if (
!await _decoder
.CodeAsync(
_dictionarySize,
_outWindow,
_rangeDecoder,
cancellationToken
)
.ConfigureAwait(false)
)
{
_rangeDecoder.ReleaseStream();
throw new DataErrorException();
}
}
_rangeDecoder.ReleaseStream();
_inputPosition += _rangeDecoder._total;
if (_outWindow.HasPending)
{
throw new DataErrorException();
}
}
}
if (_endReached)
{
if (_inputSize >= 0 && _inputPosition != _inputSize)
{
throw new DataErrorException();
}
if (_outputSize >= 0 && _position != _outputSize)
{
throw new DataErrorException();
}
}
return total;
}
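The removed override supported a plain async copy loop over the stream. A hedged consumer sketch — only the `ReadAsync` override comes from the diff; the `LzmaStream` constructor arguments here are assumed:

```csharp
// Assumed setup: `props` is the 5-byte LZMA properties header and
// `compressedInput` the raw compressed stream.
using var lzma = new LzmaStream(props, compressedInput);
var buffer = new byte[81920];
int n;
while ((n = await lzma.ReadAsync(buffer, 0, buffer.Length, ct)) > 0)
{
    await destination.WriteAsync(buffer, 0, n, ct);
}
```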
public override Task WriteAsync(
byte[] buffer,
int offset,
int count,
CancellationToken cancellationToken
)
{
cancellationToken.ThrowIfCancellationRequested();
Write(buffer, offset, count);
return Task.CompletedTask;
}
public byte[] Properties { get; } = new byte[5];
}

View File

@@ -7,7 +7,7 @@ using SharpCompress.Compressors.Deflate;
using SharpCompress.Compressors.Filters;
using SharpCompress.Compressors.LZMA.Utilites;
using SharpCompress.Compressors.PPMd;
using SharpCompress.Compressors.ZStandard;
using ZstdSharp;
namespace SharpCompress.Compressors.LZMA;

View File

@@ -1,7 +1,5 @@
using System;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.IO;
namespace SharpCompress.Compressors.LZMA.Utilites;
@@ -103,22 +101,4 @@ internal class CrcBuilderStream : Stream, IStreamStack
_mCrc = Crc.Update(_mCrc, buffer, offset, count);
_mTarget.Write(buffer, offset, count);
}
public override async Task WriteAsync(
byte[] buffer,
int offset,
int count,
CancellationToken cancellationToken = default
)
{
cancellationToken.ThrowIfCancellationRequested();
if (_mFinished)
{
throw new InvalidOperationException("CRC calculation has been finished.");
}
Processed += count;
_mCrc = Crc.Update(_mCrc, buffer, offset, count);
await _mTarget.WriteAsync(buffer, offset, count, cancellationToken);
}
}
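The deleted override mirrored the synchronous path: count the bytes, fold them into the running CRC, then forward them to the wrapped target. The pass-through pattern itself, sketched with the diff's internal `Crc.Update` helper (the seed value is an assumption; the class keeps its own running state):

```csharp
uint crc = 0; // assumed seed
int read;
while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
{
    crc = Crc.Update(crc, buffer, 0, read); // fold block into running CRC
    target.Write(buffer, 0, read);          // pass bytes through unchanged
}
```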

View File

@@ -1,9 +1,6 @@
using System;
using System.Buffers;
using System.Diagnostics;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
namespace SharpCompress.Compressors.LZMA.Utilites;
@@ -14,7 +11,7 @@ public class CrcCheckStream : Stream
private uint _mCurrentCrc;
private bool _mClosed;
private readonly long[] _mBytes = ArrayPool<long>.Shared.Rent(256);
private readonly long[] _mBytes = new long[256];
private long _mLength;
public CrcCheckStream(uint crc)
@@ -68,7 +65,6 @@ public class CrcCheckStream : Stream
finally
{
base.Dispose(disposing);
ArrayPool<long>.Shared.Return(_mBytes);
}
}
@@ -105,16 +101,4 @@ public class CrcCheckStream : Stream
_mCurrentCrc = Crc.Update(_mCurrentCrc, buffer, offset, count);
}
public override Task WriteAsync(
byte[] buffer,
int offset,
int count,
CancellationToken cancellationToken
)
{
cancellationToken.ThrowIfCancellationRequested();
Write(buffer, offset, count);
return Task.CompletedTask;
}
}

View File

@@ -1,13 +1,13 @@
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using SharpCompress.IO;
namespace SharpCompress.Compressors.RLE90
{
/// <summary>
/// Real-time streaming RLE90 decompression stream.
/// Decompresses bytes on demand without buffering the entire file in memory.
/// </summary>
public class RunLength90Stream : Stream, IStreamStack
{
#if DEBUG_STREAMS
@@ -31,19 +31,13 @@ namespace SharpCompress.Compressors.RLE90
void IStreamStack.SetPosition(long position) { }
private readonly Stream _stream;
private readonly int _compressedSize;
private int _bytesReadFromSource;
private const byte DLE = 0x90;
private bool _inDleMode;
private byte _lastByte;
private int _repeatCount;
private bool _endOfCompressedData;
private int _compressedSize;
private bool _processed = false;
public RunLength90Stream(Stream stream, int compressedSize)
{
_stream = stream ?? throw new ArgumentNullException(nameof(stream));
_stream = stream;
_compressedSize = compressedSize;
#if DEBUG_STREAMS
this.DebugConstruct(typeof(RunLength90Stream));
@@ -59,93 +53,44 @@ namespace SharpCompress.Compressors.RLE90
}
public override bool CanRead => true;
public override bool CanSeek => false;
public override bool CanWrite => false;
public override long Length => throw new NotSupportedException();
public override long Length => throw new NotImplementedException();
public override long Position
{
get => throw new NotSupportedException();
set => throw new NotSupportedException();
get => _stream.Position;
set => throw new NotImplementedException();
}
public override void Flush() => throw new NotSupportedException();
public override long Seek(long offset, SeekOrigin origin) =>
throw new NotSupportedException();
public override void SetLength(long value) => throw new NotSupportedException();
public override void Write(byte[] buffer, int offset, int count) =>
throw new NotSupportedException();
public override void Flush() => throw new NotImplementedException();
public override int Read(byte[] buffer, int offset, int count)
{
if (buffer == null)
throw new ArgumentNullException(nameof(buffer));
if (offset < 0 || count < 0 || offset + count > buffer.Length)
throw new ArgumentOutOfRangeException();
int bytesWritten = 0;
while (bytesWritten < count && !_endOfCompressedData)
if (_processed)
{
// Handle pending repeat bytes first
if (_repeatCount > 0)
{
int toWrite = Math.Min(_repeatCount, count - bytesWritten);
for (int i = 0; i < toWrite; i++)
{
buffer[offset + bytesWritten++] = _lastByte;
}
_repeatCount -= toWrite;
continue;
}
// Try to read the next byte from compressed data
if (_bytesReadFromSource >= _compressedSize)
{
_endOfCompressedData = true;
break;
}
int next = _stream.ReadByte();
if (next == -1)
{
_endOfCompressedData = true;
break;
}
_bytesReadFromSource++;
byte c = (byte)next;
if (_inDleMode)
{
_inDleMode = false;
if (c == 0)
{
buffer[offset + bytesWritten++] = DLE;
_lastByte = DLE;
}
else
{
_repeatCount = c - 1;
// We'll handle these repeats in the next loop iteration.
}
}
else if (c == DLE)
{
_inDleMode = true;
}
else
{
buffer[offset + bytesWritten++] = c;
_lastByte = c;
}
return 0;
}
_processed = true;
return bytesWritten;
using var binaryReader = new BinaryReader(_stream);
byte[] compressedBuffer = binaryReader.ReadBytes(_compressedSize);
var unpacked = RLE.UnpackRLE(compressedBuffer);
unpacked.CopyTo(buffer);
return unpacked.Count;
}
public override long Seek(long offset, SeekOrigin origin) =>
throw new NotImplementedException();
public override void SetLength(long value) => throw new NotImplementedException();
public override void Write(byte[] buffer, int offset, int count) =>
throw new NotImplementedException();
}
}
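Both versions implement the same RLE90 scheme: 0x90 acts as an escape (DLE), `0x90 0x00` encodes a literal 0x90, and `0x90 N` repeats the previously emitted byte N-1 more times. A stand-alone sketch of the decode rule (assumes well-formed input; a real decoder must guard against a truncated escape):

```csharp
using System;
using System.Collections.Generic;

static List<byte> UnpackRle90(ReadOnlySpan<byte> packed)
{
    var output = new List<byte>();
    byte last = 0;
    for (int i = 0; i < packed.Length; i++)
    {
        byte c = packed[i];
        if (c != 0x90) { output.Add(c); last = c; continue; }
        byte n = packed[++i];                            // byte after the DLE escape
        if (n == 0) { output.Add(0x90); last = 0x90; }   // escaped literal 0x90
        else for (int k = 0; k < n - 1; k++) output.Add(last); // run of last byte
    }
    return output;
}
```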

Some files were not shown because too many files have changed in this diff