mirror of https://github.com/adamhathcock/sharpcompress.git (synced 2026-02-18 13:35:33 +00:00)

Compare commits: copilot/ad… → copilot/su… (1 commit)

Commit a5bf73dc82
.config/dotnet-tools.json
@@ -3,7 +3,7 @@
  "isRoot": true,
  "tools": {
    "csharpier": {
      "version": "1.2.5",
      "version": "1.2.3",
      "commands": [
        "csharpier"
      ],
15 .github/COPILOT_AGENT_README.md vendored Normal file
@@ -0,0 +1,15 @@
# Copilot Coding Agent Configuration

This repository includes a minimal opt-in configuration and CI workflow to allow the GitHub Copilot coding agent to open and validate PRs.

- .copilot-agent.yml: opt-in config for automated agents
- .github/agents/copilot-agent.yml: detailed agent policy configuration
- .github/workflows/dotnetcore.yml: CI runs on PRs touching the solution, source, or tests to validate changes
- AGENTS.md: general instructions for the Copilot coding agent with project-specific guidelines

Maintainers can adjust the allowed paths or disable the agent by editing or removing .copilot-agent.yml.

Notes:
- The agent can create, modify, and delete files within the allowed paths (src, tests, README.md, AGENTS.md)
- All changes require review before merge
- If the build/test paths are different, update the workflow accordingly; this workflow targets SharpCompress.sln and the SharpCompress.Test test project.
25 .github/prompts/plan-async.prompt.md vendored
@@ -1,25 +0,0 @@
# Plan: Implement Missing Async Functionality in SharpCompress

SharpCompress has async support for low-level stream operations and the Reader/Writer APIs, but critical entry points (Archive.Open, factory methods, initialization) remain synchronous. This plan adds async overloads for all user-facing I/O operations and fixes existing async bugs, enabling full end-to-end async workflows.

## Steps

1. **Add async factory methods** to [ArchiveFactory.cs](src/SharpCompress/Factories/ArchiveFactory.cs), [ReaderFactory.cs](src/SharpCompress/Factories/ReaderFactory.cs), and [WriterFactory.cs](src/SharpCompress/Factories/WriterFactory.cs) with `OpenAsync` and `CreateAsync` overloads accepting a `CancellationToken` (see the sketch after this list)

2. **Implement async Open methods** on concrete archive types ([ZipArchive.cs](src/SharpCompress/Archives/Zip/ZipArchive.cs), [TarArchive.cs](src/SharpCompress/Archives/Tar/TarArchive.cs), [RarArchive.cs](src/SharpCompress/Archives/Rar/RarArchive.cs), [GZipArchive.cs](src/SharpCompress/Archives/GZip/GZipArchive.cs), [SevenZipArchive.cs](src/SharpCompress/Archives/SevenZip/SevenZipArchive.cs)) and reader types ([ZipReader.cs](src/SharpCompress/Readers/Zip/ZipReader.cs), [TarReader.cs](src/SharpCompress/Readers/Tar/TarReader.cs), etc.)

3. **Convert archive initialization logic to async**, including header reading, volume loading, and format signature detection across archive constructors and internal initialization methods

4. **Fix LZMA decoder async bugs** in [LzmaStream.cs](src/SharpCompress/Compressors/LZMA/LzmaStream.cs), [Decoder.cs](src/SharpCompress/Compressors/LZMA/Decoder.cs), and [OutWindow.cs](src/SharpCompress/Compressors/LZMA/OutWindow.cs) to enable true async 7Zip support and remove the `NonDisposingStream` workaround

5. **Complete the Rar async implementation** by converting `UnpackV2017` methods to async in [UnpackV2017.cs](src/SharpCompress/Compressors/Rar/UnpackV2017.cs) and updating Rar20 decompression

6. **Add comprehensive async tests** covering all new async entry points, cancellation scenarios, and concurrent operations across all archive formats in the test files
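As a concrete illustration of Step 1, a minimal additive overload could bridge to the existing synchronous `Open` until truly async header parsing (Step 3) lands. This is a hedged sketch: the class name, placement, and `Task.Run` bridge are assumptions, not the shipped API.

```csharp
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.Archives;
using SharpCompress.Readers;

// Hypothetical additive overload; not part of the current public API.
public static class ArchiveFactoryAsyncSketch
{
    public static Task<IArchive> OpenAsync(
        Stream stream,
        ReaderOptions? options = null,
        CancellationToken cancellationToken = default
    )
    {
        cancellationToken.ThrowIfCancellationRequested();
        // Sync-over-async bridge; see Further Considerations item 2 below.
        return Task.Run(() => ArchiveFactory.Open(stream, options), cancellationToken);
    }
}
```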
## Further Considerations

1. **Breaking changes** - Should new async methods be added alongside existing sync methods (non-breaking), or should sync methods eventually be deprecated? Recommend the additive approach for backward compatibility.

2. **Performance impact** - Header parsing for formats like Zip/Tar is often small; consider whether truly async parsing adds value vs. sync parsing wrapped in a Task, or make it conditional on stream type (network vs. file).

3. **7Zip complexity** - The LZMA async bug fix (Step 4) may be challenging due to state management in the decoder; consider whether to scope it separately or implement a simpler workaround that maintains correctness.
123 .github/prompts/plan-for-next.prompt.md vendored
@@ -1,123 +0,0 @@
# Plan: Modernize SharpCompress Public API

Based on comprehensive analysis, the API has several inconsistencies around factory patterns, async support, format capabilities, and options classes. Most improvements can be made incrementally without breaking changes.

## Steps

1. **Standardize factory patterns** by deprecating format-specific static `Open` methods in [Archives/Zip/ZipArchive.cs](src/SharpCompress/Archives/Zip/ZipArchive.cs), [Archives/Tar/TarArchive.cs](src/SharpCompress/Archives/Tar/TarArchive.cs), etc. in favor of the centralized [Factories/ArchiveFactory.cs](src/SharpCompress/Factories/ArchiveFactory.cs)

2. **Complete the async implementation** in [Writers/Zip/ZipWriter.cs](src/SharpCompress/Writers/Zip/ZipWriter.cs) and other writers that currently use sync-over-async, implementing true async I/O throughout the writer hierarchy

3. **Unify options classes** by making [Common/ExtractionOptions.cs](src/SharpCompress/Common/ExtractionOptions.cs) inherit from `OptionsBase` and adding progress reporting to extraction methods consistently (see the sketch after this list)

4. **Clarify GZip semantics** in [Archives/GZip/GZipArchive.cs](src/SharpCompress/Archives/GZip/GZipArchive.cs) by adding XML documentation explaining the single-entry limitation and the relationship to the GZip compression used in Tar.gz
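A minimal sketch of the options unification in Step 3, assuming the `OptionsBase` members listed in the hierarchy later in this plan. The class name and the inheritance change are the proposal, not the current API:

```csharp
using SharpCompress.Common;

// Proposed shape (hypothetical): extraction options pick up OptionsBase
// members such as ArchiveEncoding and LeaveStreamOpen by inheritance.
public class UnifiedExtractionOptions : OptionsBase
{
    public bool Overwrite { get; set; }
    public bool ExtractFullPath { get; set; }
    public bool PreserveFileTime { get; set; }
    public bool PreserveAttributes { get; set; }
}
```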
## Further Considerations

1. **Breaking changes roadmap** - Should we plan a major version (2.0) to remove deprecated factory methods, clean up the `ArchiveType` enum (remove Arc/Arj or add full support), and consolidate naming patterns?

2. **Progress reporting consistency** - Should `IProgress<ArchiveExtractionProgress<IEntry>>` be added to all extraction extension methods or consolidated into the options classes?

## Detailed Analysis

### Factory Pattern Issues

Three different factory patterns exist with overlapping functionality:

1. **Static factories**: ArchiveFactory, ReaderFactory, WriterFactory
2. **Instance factories**: IArchiveFactory, IReaderFactory, IWriterFactory
3. **Format-specific static methods**: each archive class has static `Open` methods

**Example confusion:**

```csharp
// Three ways to open a Zip archive - which is recommended?
var archive1 = ArchiveFactory.Open("file.zip");
var archive2 = ZipArchive.Open("file.zip");
var archive3 = ArchiveFactory.AutoFactory.Open(fileInfo, options);
```

### Async Support Gaps

The base `IWriter` interface has async methods, but writer implementations provide minimal async support. Most writers just call the synchronous methods:

```csharp
public virtual async Task WriteAsync(...)
{
    // Default implementation calls the synchronous version
    Write(filename, source, modificationTime);
    await Task.CompletedTask.ConfigureAwait(false);
}
```

Only `TarWriter` has a proper async implementation; most other writers use sync-over-async.

### GZip Archive Special Case

GZip is treated as both a compression format and an archive format, but it only supports single-entry archives:

```csharp
protected override GZipArchiveEntry CreateEntryInternal(...)
{
    if (Entries.Any())
    {
        throw new InvalidFormatException("Only one entry is allowed in a GZip Archive");
    }
    // ...
}
```

### Options Class Hierarchy

```
OptionsBase (LeaveStreamOpen, ArchiveEncoding)
├─ ReaderOptions (LookForHeader, Password, DisableCheckIncomplete, BufferSize, ExtensionHint, Progress)
├─ WriterOptions (CompressionType, CompressionLevel, Progress)
│  ├─ ZipWriterOptions (ArchiveComment, UseZip64)
│  ├─ TarWriterOptions (FinalizeArchiveOnClose, HeaderFormat)
│  └─ GZipWriterOptions (no additional properties)
└─ ExtractionOptions (standalone - Overwrite, ExtractFullPath, PreserveFileTime, PreserveAttributes)
```

**Issues:**
- `ExtractionOptions` doesn't inherit from `OptionsBase`, so there is no encoding support during extraction
- Progress reporting is inconsistent between readers and extraction
- Obsolete properties (`ChecksumIsValid`, `Version`) have an unclear migration path

### Implementation Priorities

**High Priority (Non-Breaking):**
1. Add an API usage guide (Archive vs Reader, factory recommendations, async best practices)
2. Fix progress reporting consistency
3. Complete the async implementation in writers

**Medium Priority (Next Major Version):**
1. Unify the factory pattern - deprecate format-specific static `Open` methods
2. Clean up options classes - make `ExtractionOptions` inherit from `OptionsBase`
3. Clarify archive types - remove Arc/Arj from the `ArchiveType` enum or add full support
4. Standardize naming across archive types

**Low Priority:**
1. Add BZip2 archive support similar to GZipArchive
2. Complete the obsolete-property cleanup with a migration guide

### Backward Compatibility Strategy

**Safe (Non-Breaking) Changes:**
- Add new methods to interfaces (use default implementations)
- Add new options properties (with defaults)
- Add new factory methods
- Improve async implementations
- Add progress reporting support

**Breaking Changes to Avoid:**
- ❌ Removing format-specific `Open` methods (deprecate instead)
- ❌ Changing the `LeaveStreamOpen` default (currently `true`)
- ❌ Removing obsolete properties before a major version bump
- ❌ Changing return types or signatures of existing methods

**Deprecation Pattern:**
- Use `[Obsolete]` for one major version
- Use `[EditorBrowsable(EditorBrowsableState.Never)]` in the next major version
- Remove in the following major version
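A compilable sketch of the three-stage pattern above, applied to placeholder members (the members themselves are illustrative, not a real API change):

```csharp
using System;
using System.ComponentModel;

public static class DeprecationSketch
{
    // Major version N: warn, but keep working.
    [Obsolete("Use ArchiveFactory.Open instead.")]
    public static void OpenViaStaticMethod() { }

    // Major version N+1: also hide from IntelliSense.
    [Obsolete("Use ArchiveFactory.Open instead.")]
    [EditorBrowsable(EditorBrowsableState.Never)]
    public static void OpenViaStaticMethodHidden() { }

    // Major version N+2: the member is removed entirely.
}
```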
155 .github/workflows/NUGET_RELEASE.md vendored
@@ -1,155 +0,0 @@
# NuGet Release Workflow

This document describes the automated NuGet release workflow for SharpCompress.

## Overview

The `nuget-release.yml` workflow automatically builds, tests, and publishes SharpCompress packages to NuGet.org when:
- Changes are pushed to the `master` or `release` branch
- A version tag (format: `MAJOR.MINOR.PATCH`) is pushed

The workflow runs on both Windows and Ubuntu, but only the Windows build publishes to NuGet.

## How It Works

### Version Determination

Using C# code in the build project, the workflow determines the version automatically based on whether the commit is tagged:

1. **Tagged Release (Stable)**:
   - If the current commit has a version tag (e.g., `0.42.1`)
   - Uses the tag as the version number
   - Published as a stable release

2. **Untagged Release (Prerelease)**:
   - If the current commit is NOT tagged
   - Creates a prerelease version based on the next minor version (see the sketch after this list)
   - Format: `{NEXT_MINOR_VERSION}-beta.{COMMIT_COUNT}`
   - Example: `0.43.0-beta.123` (if the last tag is 0.42.x)
   - Published as a prerelease to NuGet.org (Windows build only)
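The version rule reduces to a few lines; this sketch mirrors the logic in `build/Program.cs` (shown in full later in this diff):

```csharp
using System;

static string NextPrereleaseVersion(Version lastTag, int commitCountSinceTag)
{
    // Bump the minor version, zero the patch, append the beta counter.
    var next = new Version(lastTag.Major, lastTag.Minor + 1, 0);
    return $"{next}-beta.{commitCountSinceTag}";
}

// NextPrereleaseVersion(Version.Parse("0.42.1"), 123) returns "0.43.0-beta.123"
```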
### Workflow Steps

The workflow runs on a matrix of operating systems (Windows and Ubuntu):

1. **Checkout**: Fetches the repository with full history for version detection
2. **Setup .NET**: Installs .NET 10.0
3. **Determine Version**: Runs the `determine-version` build target to check for tags and determine the version
4. **Update Version**: Runs the `update-version` build target to update the version in the project file
5. **Build and Test**: Runs the full build and test suite on both platforms
6. **Upload Artifacts**: Uploads the generated `.nupkg` files as workflow artifacts (separate for each OS)
7. **Push to NuGet**: (Windows only) Runs the `push-to-nuget` build target to publish the package to NuGet.org using the API key

All version detection, file updates, and publishing logic is implemented in C# in the `build/Program.cs` file using build targets.

## Setup Requirements

### 1. NuGet API Key Secret

The workflow requires a `NUGET_API_KEY` secret to be configured in the repository settings:

1. Go to https://www.nuget.org/account/apikeys
2. Create a new API key with "Push" permission for the SharpCompress package
3. In GitHub, go to: **Settings** → **Secrets and variables** → **Actions**
4. Create a new secret named `NUGET_API_KEY` with the API key value

### 2. Branch Protection (Recommended)

Consider enabling branch protection rules for the `release` branch to ensure:
- Code reviews are required before merging
- Status checks pass before merging
- Only authorized users can push to the branch

## Usage

### Creating a Stable Release

There are two ways to trigger a stable release:

**Method 1: Push a tag to trigger the workflow**
1. Ensure all changes are committed on the `master` or `release` branch
2. Create and push a version tag:
   ```bash
   git checkout master  # or release
   git tag 0.43.0
   git push origin 0.43.0
   ```
3. The workflow will automatically trigger, build, test, and publish `SharpCompress 0.43.0` to NuGet.org (Windows build)

**Method 2: Tag after pushing to the branch**
1. Ensure all changes are merged and pushed to the `master` or `release` branch
2. Create and push a version tag on the already-pushed commit:
   ```bash
   git checkout master  # or release
   git tag 0.43.0
   git push origin 0.43.0
   ```
3. The workflow will automatically trigger, build, test, and publish `SharpCompress 0.43.0` to NuGet.org (Windows build)

### Creating a Prerelease

1. Push changes to the `master` or `release` branch without tagging:
   ```bash
   git checkout master  # or release
   git push origin master  # or release
   ```
2. The workflow will automatically:
   - Build and test the project on both Windows and Ubuntu
   - Publish a prerelease version like `0.43.0-beta.456` to NuGet.org (Windows build)

## Troubleshooting

### Workflow Fails to Push to NuGet

- **Check the API key**: Ensure `NUGET_API_KEY` is set correctly in repository secrets
- **Check API key permissions**: Verify the API key has "Push" permission for SharpCompress
- **Check API key expiration**: NuGet API keys may expire; create a new one if needed

### Version Conflict

If you see "Package already exists" errors:
- The workflow uses the `--skip-duplicate` flag to handle this gracefully
- If you need to republish the same version, delete it from NuGet.org first (if allowed)

### Build or Test Failures

- The workflow will not push to NuGet if the build or tests fail
- Check the workflow logs in GitHub Actions for details
- Fix the issues and push again

## Manual Package Creation

If you need to create a package manually without publishing:

```bash
dotnet run --project build/build.csproj -- publish
```

The package will be created in the `artifacts/` directory.

## Build Targets

The workflow uses the following C# build targets defined in `build/Program.cs`:

- **determine-version**: Detects the version from git tags and outputs the VERSION and PRERELEASE variables
- **update-version**: Updates VersionPrefix, AssemblyVersion, and FileVersion in the project file
- **push-to-nuget**: Pushes the generated NuGet packages to NuGet.org (requires NUGET_API_KEY)

These targets can be run manually for testing:

```bash
# Determine the version
dotnet run --project build/build.csproj -- determine-version

# Update the version in the project file
VERSION=0.43.0 dotnet run --project build/build.csproj -- update-version

# Push to NuGet (requires the NUGET_API_KEY environment variable)
NUGET_API_KEY=your-key dotnet run --project build/build.csproj -- push-to-nuget
```

## Related Files

- `.github/workflows/nuget-release.yml` - The workflow definition
- `build/Program.cs` - Build script with version detection and publishing logic
- `src/SharpCompress/SharpCompress.csproj` - Project file with version information
120 .github/workflows/TESTING.md vendored
@@ -1,120 +0,0 @@
# Testing Guide for the NuGet Release Workflow

This document describes how to test the NuGet release workflow.

## Testing Strategy

Since this workflow publishes to NuGet.org and requires repository secrets, testing should be done carefully. The workflow runs on both Windows and Ubuntu, but only the Windows build publishes to NuGet.

## Pre-Testing Checklist

- [x] Workflow YAML syntax validated
- [x] Version determination logic tested locally
- [x] Version update logic tested locally
- [x] Build script works (`dotnet run --project build/build.csproj`)

## Manual Testing Steps

### 1. Test Prerelease Publishing (Recommended First Test)

This tests the workflow on untagged commits to the master or release branch.

**Steps:**
1. Ensure the `NUGET_API_KEY` secret is configured in repository settings
2. Create a test commit on the `master` or `release` branch (e.g., update a comment or the README)
3. Push to the `master` or `release` branch
4. Monitor the GitHub Actions workflow at: https://github.com/adamhathcock/sharpcompress/actions
5. Verify:
   - The workflow triggers and runs successfully on both Windows and Ubuntu
   - The version is determined correctly (e.g., `0.43.0-beta.XXX` if the last tag is 0.42.x)
   - Build and tests pass on both platforms
   - Package artifacts are uploaded for both platforms
   - The package is pushed to NuGet.org as a prerelease (Windows build only)

**Expected Outcome:**
- A new prerelease package appears on NuGet.org: https://www.nuget.org/packages/SharpCompress/
- The package version follows the pattern: `{NEXT_MINOR_VERSION}-beta.{COMMIT_COUNT}`

### 2. Test Tagged Release Publishing

This tests the workflow when a version tag is pushed.

**Steps:**
1. Prepare the `master` or `release` branch with all desired changes
2. Create a version tag (must be a pure semantic version like `MAJOR.MINOR.PATCH`):
   ```bash
   git checkout master  # or release
   git tag 0.42.2
   git push origin 0.42.2
   ```
3. Monitor the GitHub Actions workflow
4. Verify:
   - The workflow triggers and runs successfully on both Windows and Ubuntu
   - The version is determined as the tag (e.g., `0.42.2`)
   - Build and tests pass on both platforms
   - Package artifacts are uploaded for both platforms
   - The package is pushed to NuGet.org as a stable release (Windows build only)

**Expected Outcome:**
- A new stable release package appears on NuGet.org
- The package version matches the tag

### 3. Test Duplicate Package Handling

This tests the `--skip-duplicate` flag behavior.

**Steps:**
1. Push to the `release` branch without making changes
2. Monitor the workflow
3. Verify:
   - The workflow runs but the NuGet push is skipped with a "duplicate" message
   - No errors occur

### 4. Test Build Failure Handling

This tests that failed builds don't publish packages.

**Steps:**
1. Introduce a breaking change in a test or in code
2. Push to the `release` branch
3. Verify:
   - The workflow runs and detects the failure
   - The build or test step fails
   - The NuGet push step is skipped
   - No package is published

## Verification

After each test, verify:

1. **GitHub Actions logs**: Check the workflow logs for any errors or warnings
2. **NuGet.org**: Verify the package appears with the correct version and metadata
3. **Artifacts**: Download and inspect the uploaded artifacts

## Rollback/Cleanup

If testing produces unwanted packages:

1. **Prerelease packages**: Can be unlisted on NuGet.org (Settings → Unlist)
2. **Stable packages**: Cannot be deleted, only unlisted (use test versions)
3. **Tags**: Can be deleted with:
   ```bash
   git tag -d 0.42.2
   git push origin :refs/tags/0.42.2
   ```

## Known Limitations

- NuGet.org does not allow re-uploading the same version
- Deleted packages on NuGet.org reserve the version number
- The workflow requires the `NUGET_API_KEY` secret to be set

## Success Criteria

The workflow is considered successful if:

- ✅ Prerelease versions are published correctly with the beta suffix
- ✅ Tagged versions are published as stable releases
- ✅ Build and test failures prevent publishing
- ✅ Duplicate packages are handled gracefully
- ✅ Workflow logs are clear and informative
25 .github/workflows/dotnetcore.yml vendored Normal file
@@ -0,0 +1,25 @@
name: SharpCompress
on:
  push:
    branches:
      - 'master'
  pull_request:
    types: [ opened, synchronize, reopened, ready_for_review ]

jobs:
  build:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [windows-latest, ubuntu-latest]

    steps:
      - uses: actions/checkout@v6
      - uses: actions/setup-dotnet@v5
        with:
          dotnet-version: 10.0.x
      - run: dotnet run --project build/build.csproj
      - uses: actions/upload-artifact@v6
        with:
          name: ${{ matrix.os }}-sharpcompress.nupkg
          path: artifacts/*
61 .github/workflows/nuget-release.yml vendored
@@ -1,61 +0,0 @@
name: NuGet Release

on:
  push:
    branches:
      - 'master'
      - 'release'
    tags:
      - '[0-9]+.[0-9]+.[0-9]+'
  pull_request:
    branches:
      - 'master'
      - 'release'

permissions:
  contents: read

jobs:
  build-and-publish:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [windows-latest, ubuntu-latest]

    steps:
      - uses: actions/checkout@v6
        with:
          fetch-depth: 0 # Fetch all history for versioning

      - uses: actions/setup-dotnet@v5
        with:
          dotnet-version: 10.0.x

      # Determine version using C# build target
      - name: Determine Version
        id: version
        run: dotnet run --project build/build.csproj -- determine-version

      # Update version in project file using C# build target
      - name: Update Version in Project
        run: dotnet run --project build/build.csproj -- update-version
        env:
          VERSION: ${{ steps.version.outputs.version }}

      # Build and test
      - name: Build and Test
        run: dotnet run --project build/build.csproj

      # Upload artifacts for verification
      - name: Upload NuGet Package
        uses: actions/upload-artifact@v6
        with:
          name: ${{ matrix.os }}-nuget-package
          path: artifacts/*.nupkg

      # Push to NuGet.org using C# build target (Windows only, not on PRs)
      - name: Push to NuGet
        if: success() && matrix.os == 'windows-latest' && github.event_name != 'pull_request'
        run: dotnet run --project build/build.csproj -- push-to-nuget
        env:
          NUGET_API_KEY: ${{ secrets.NUGET_API_KEY }}
1 .gitignore vendored
@@ -16,7 +16,6 @@ tests/TestArchives/*/Scratch2
.vs
tools
.idea/
artifacts/

.DS_Store
*.snupkg
32 AGENTS.md
@@ -28,38 +28,14 @@ SharpCompress is a pure C# compression library supporting multiple archive forma

## Code Formatting

**Copilot agents: You MUST run the `format` task after making code changes to ensure consistency.**

- Use CSharpier for code formatting to ensure consistent style across the project
- CSharpier is configured as a local tool in `.config/dotnet-tools.json`

### Commands

1. **Restore tools** (first time only):
   ```bash
   dotnet tool restore
   ```

2. **Check if files are formatted correctly** (doesn't modify files):
   ```bash
   dotnet csharpier check .
   ```
   - Exit code 0: All files are properly formatted
   - Exit code 1: Some files need formatting (will show which files and differences)

3. **Format files** (modifies files):
   ```bash
   dotnet csharpier format .
   ```
   - Formats all files in the project to match CSharpier style
   - Run from the project root directory

4. **Configure your IDE** to format on save using CSharpier for the best experience

### Additional Notes
- Restore tools with: `dotnet tool restore`
- Format files from the project root with: `dotnet csharpier .`
- **Run `dotnet csharpier .` from the project root after making code changes, before committing**
- Configure your IDE to format on save using CSharpier for the best experience
- The project also uses `.editorconfig` for editor settings (indentation, encoding, etc.)
- Let CSharpier handle code style while `.editorconfig` handles editor behavior
- Always run `dotnet csharpier check .` before committing to verify formatting

## Project Setup and Structure
Directory.Packages.props
@@ -1,18 +1,19 @@
<Project>
  <ItemGroup>
    <PackageVersion Include="Bullseye" Version="6.1.0" />
    <PackageVersion Include="Bullseye" Version="6.0.0" />
    <PackageVersion Include="AwesomeAssertions" Version="9.3.0" />
    <PackageVersion Include="Glob" Version="1.1.9" />
    <PackageVersion Include="JetBrains.Profiler.SelfApi" Version="2.5.15" />
    <PackageVersion Include="JetBrains.Profiler.SelfApi" Version="2.5.14" />
    <PackageVersion Include="Microsoft.Bcl.AsyncInterfaces" Version="10.0.0" />
    <PackageVersion Include="Microsoft.NET.Test.Sdk" Version="18.0.1" />
    <PackageVersion Include="Mono.Posix.NETStandard" Version="1.0.0" />
    <PackageVersion Include="SimpleExec" Version="13.0.0" />
    <PackageVersion Include="System.Text.Encoding.CodePages" Version="10.0.0" />
    <PackageVersion Include="SimpleExec" Version="12.0.0" />
    <PackageVersion Include="System.Buffers" Version="4.6.1" />
    <PackageVersion Include="System.Memory" Version="4.6.3" />
    <PackageVersion Include="System.Text.Encoding.CodePages" Version="10.0.0" />
    <PackageVersion Include="xunit" Version="2.9.3" />
    <PackageVersion Include="xunit.runner.visualstudio" Version="3.1.5" />
    <PackageVersion Include="ZstdSharp.Port" Version="0.8.6" />
    <PackageVersion Include="Microsoft.NET.ILLink.Tasks" Version="10.0.0" />
    <PackageVersion Include="Microsoft.SourceLink.GitHub" Version="8.0.0" />
    <PackageVersion Include="Microsoft.NETFramework.ReferenceAssemblies" Version="1.0.3" />
21 FORMATS.md
@@ -10,10 +10,7 @@

| Archive Format | Compression Format(s) | Compress/Decompress | Archive API | Reader API | Writer API |
| ---------------------- | ------------------------------------------------- | ------------------- | --------------- | ---------- | ------------- |
| Ace | None | Decompress | N/A | AceReader | N/A |
| Arc | None, Packed, Squeezed, Crunched | Decompress | N/A | ArcReader | N/A |
| Arj | None | Decompress | N/A | ArjReader | N/A |
| Rar | Rar | Decompress | RarArchive | RarReader | N/A |
| Rar | Rar | Decompress (1) | RarArchive | RarReader | N/A |
| Zip (2) | None, Shrink, Reduce, Implode, DEFLATE, Deflate64, BZip2, LZMA/LZMA2, PPMd | Both | ZipArchive | ZipReader | ZipWriter |
| Tar | None | Both | TarArchive | TarReader | TarWriter (3) |
| Tar.GZip | DEFLATE | Both | TarArchive | TarReader | TarWriter (3) |

@@ -25,9 +22,9 @@
| 7Zip (4) | LZMA, LZMA2, BZip2, PPMd, BCJ, BCJ2, Deflate | Decompress | SevenZipArchive | N/A | N/A |

1. SOLID Rars are only supported in the RarReader API.
2. Zip format supports pkware and WinzipAES encryption. However, encrypted LZMA is not supported. Zip64 reading/writing is supported but only with seekable streams as the Zip spec doesn't support Zip64 data in post data descriptors. Deflate64 is only supported for reading. SOZip (Seek-Optimized ZIP) detection is supported for reading. See [Zip Format Notes](#zip-format-notes) for details on multi-volume archives and streaming behavior.
2. Zip format supports pkware and WinzipAES encryption. However, encrypted LZMA is not supported. Zip64 reading/writing is supported but only with seekable streams as the Zip spec doesn't support Zip64 data in post data descriptors. Deflate64 is only supported for reading. See [Zip Format Notes](#zip-format-notes) for details on multi-volume archives and streaming behavior.
3. The Tar format requires a file size in the header. If no size is specified to the TarWriter and the stream is not seekable, then an exception will be thrown. (A sketch of supplying the size explicitly follows this list.)
4. The 7Zip format doesn't allow for reading as a forward-only stream, so 7Zip is only supported through the Archive API. See [7Zip Format Notes](#7zip-format-notes) for details on async extraction behavior.
4. The 7Zip format doesn't allow for reading as a forward-only stream, so 7Zip is only supported through the Archive API.
5. LZip has no support for extra data like the file name or timestamp. There is a default filename used when looking at the entry Key on the archive.
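For note 3, a minimal sketch of supplying the entry size up front when the source stream cannot seek. It assumes the `TarWriter.Write` overload that accepts an explicit size; treat the exact signature as an assumption:

```csharp
using System;
using System.IO;
using SharpCompress.Common;
using SharpCompress.Writers.Tar;

static void WriteNonSeekableEntry(Stream tarOutput, Stream source, long knownLength)
{
    using var writer = new TarWriter(
        tarOutput,
        new TarWriterOptions(CompressionType.None, finalizeArchiveOnClose: true)
    );
    // Passing the size lets TarWriter fill the mandatory header field
    // instead of throwing for a non-seekable source.
    writer.Write("entry.bin", source, DateTime.Now, knownLength);
}
```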
### Zip Format Notes

@@ -35,18 +32,6 @@
- Multi-volume/split ZIP archives require ZipArchive (seekable streams), as ZipReader cannot seek across volume files.
- ZipReader processes entries from LocalEntry headers (which include directory entries ending with `/`) and intentionally skips DirectoryEntry headers from the central directory; they are redundant in streaming mode, since all entry data comes from the LocalEntry headers that ZipReader has already processed.

### 7Zip Format Notes

- **Async Extraction Performance**: When using async extraction methods (e.g., `ExtractAllEntries()` with `MoveToNextEntryAsync()`), each file creates its own decompression stream to avoid state corruption in the LZMA decoder. This is less efficient than synchronous extraction, which can reuse a single decompression stream for multiple files in the same folder.

**Performance Impact**: For archives with many small files in the same compression folder, async extraction will be slower than synchronous extraction because it must:
1. Create a new LZMA decoder for each file
2. Skip through the decompressed data to reach each file's starting position

**Recommendation**: For best performance with 7Zip archives, use synchronous extraction methods (`MoveToNextEntry()` and `WriteEntryToDirectory()`) when possible. Use async methods only when you need to avoid blocking the thread (e.g., in UI applications or async-only contexts). A sketch of the synchronous pattern follows these notes.

**Technical Details**: 7Zip archives group files into "folders" (compression units), where all files in a folder share one continuous LZMA-compressed stream. The LZMA decoder maintains internal state (dictionary window, decoder positions) that assumes sequential, non-interruptible processing. Async operations can yield control during awaits, which would corrupt this shared state. To avoid this, async extraction creates a fresh decoder stream for each file.
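A short sketch of the recommended synchronous path for 7Zip, using the same reader pattern shown in USAGE.md; paths and file names are placeholders:

```csharp
using SharpCompress.Archives.SevenZip;
using SharpCompress.Common;
using SharpCompress.Readers;

// One reader walks the solid folders sequentially, so the LZMA
// decompression stream can be reused across files.
using var archive = SevenZipArchive.Open("archive.7z");
using var reader = archive.ExtractAllEntries();
while (reader.MoveToNextEntry())
{
    if (!reader.Entry.IsDirectory)
    {
        reader.WriteEntryToDirectory(
            @"D:\output",
            new ExtractionOptions { ExtractFullPath = true, Overwrite = true }
        );
    }
}
```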
SharpCompress.sln
@@ -20,7 +20,7 @@ Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "Config", "Config", "{CDB425
		.editorconfig = .editorconfig
		Directory.Packages.props = Directory.Packages.props
		NuGet.config = NuGet.config
		.github\workflows\nuget-release.yml = .github\workflows\nuget-release.yml
		.github\workflows\dotnetcore.yml = .github\workflows\dotnetcore.yml
		USAGE.md = USAGE.md
		README.md = README.md
		FORMATS.md = FORMATS.md
64 USAGE.md
@@ -87,17 +87,20 @@ memoryStream.Position = 0;
### Extract all files from a rar file to a directory using RarArchive

Note: Extracting a solid rar or 7z file needs to be done in sequential order to get acceptable decompression speed.
`ExtractAllEntries` is primarily intended for solid archives (like solid Rar) or 7Zip archives, where sequential extraction provides the best performance. For general/simple extraction with any supported archive type, use `archive.WriteToDirectory()` instead.
It is explicitly recommended to use `ExtractAllEntries` when extracting an entire `IArchive` instead of iterating over all its `Entries`.
Alternatively, use `IArchive.WriteToDirectory`.

```C#
using (var archive = RarArchive.Open("Test.rar"))
{
// Simple extraction with RarArchive; this WriteToDirectory pattern works for all archive types
archive.WriteToDirectory(@"D:\temp", new ExtractionOptions()
using (var reader = archive.ExtractAllEntries())
{
ExtractFullPath = true,
Overwrite = true
});
reader.WriteAllToDirectory(@"D:\temp", new ExtractionOptions()
{
ExtractFullPath = true,
Overwrite = true
});
}
}
```

@@ -113,41 +116,6 @@ using (var archive = RarArchive.Open("Test.rar"))
}
```

### Extract solid Rar or 7Zip archives with manual progress reporting

`ExtractAllEntries` only works for solid archives (Rar) or 7Zip archives. For optimal performance with these archive types, use this method:

```C#
using (var archive = RarArchive.Open("archive.rar")) // Must be solid Rar or 7Zip
{
    if (archive.IsSolid || archive.Type == ArchiveType.SevenZip)
    {
        // Calculate total size for progress reporting
        double totalSize = archive.Entries.Where(e => !e.IsDirectory).Sum(e => e.Size);
        long completed = 0;

        using (var reader = archive.ExtractAllEntries())
        {
            while (reader.MoveToNextEntry())
            {
                if (!reader.Entry.IsDirectory)
                {
                    reader.WriteEntryToDirectory(@"D:\output", new ExtractionOptions()
                    {
                        ExtractFullPath = true,
                        Overwrite = true
                    });

                    completed += reader.Entry.Size;
                    double progress = completed / totalSize;
                    Console.WriteLine($"Progress: {progress:P}");
                }
            }
        }
    }
}
```

### Use ReaderFactory to autodetect archive type and Open the entry stream

```C#
@@ -330,12 +298,14 @@ using (var writer = WriterFactory.Open(stream, ArchiveType.Zip, CompressionType.
```C#
using (var archive = ZipArchive.Open("archive.zip"))
{
// Simple async extraction - works for all archive types
await archive.WriteToDirectoryAsync(
@"C:\output",
new ExtractionOptions() { ExtractFullPath = true, Overwrite = true },
cancellationToken
);
using (var reader = archive.ExtractAllEntries())
{
await reader.WriteAllToDirectoryAsync(
@"C:\output",
new ExtractionOptions() { ExtractFullPath = true, Overwrite = true },
cancellationToken
);
}
}
```
175 build/Program.cs
@@ -1,10 +1,7 @@
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Runtime.InteropServices;
using System.Text.RegularExpressions;
using System.Threading.Tasks;
using GlobExpressions;
using static Bullseye.Targets;
using static SimpleExec.Command;

@@ -14,11 +11,7 @@ const string Restore = "restore";
const string Build = "build";
const string Test = "test";
const string Format = "format";
const string CheckFormat = "check-format";
const string Publish = "publish";
const string DetermineVersion = "determine-version";
const string UpdateVersion = "update-version";
const string PushToNuGet = "push-to-nuget";

Target(
    Clean,
@@ -49,20 +42,12 @@
Target(
    Format,
    () =>
    {
        Run("dotnet", "tool restore");
        Run("dotnet", "csharpier format .");
    }
);
Target(
    CheckFormat,
    () =>
    {
        Run("dotnet", "tool restore");
        Run("dotnet", "csharpier check .");
    }
);
Target(Restore, [CheckFormat], () => Run("dotnet", "restore"));
Target(Restore, [Format], () => Run("dotnet", "restore"));

Target(
    Build,
@@ -105,164 +90,6 @@
    }
);

Target(
    DetermineVersion,
    async () =>
    {
        var (version, isPrerelease) = await GetVersion();
        Console.WriteLine($"VERSION={version}");
        Console.WriteLine($"PRERELEASE={isPrerelease.ToString().ToLower()}");

        // Write to environment file for GitHub Actions
        var githubOutput = Environment.GetEnvironmentVariable("GITHUB_OUTPUT");
        if (!string.IsNullOrEmpty(githubOutput))
        {
            File.AppendAllText(githubOutput, $"version={version}\n");
            File.AppendAllText(githubOutput, $"prerelease={isPrerelease.ToString().ToLower()}\n");
        }
    }
);

Target(
    UpdateVersion,
    async () =>
    {
        var version = Environment.GetEnvironmentVariable("VERSION");
        if (string.IsNullOrEmpty(version))
        {
            var (detectedVersion, _) = await GetVersion();
            version = detectedVersion;
        }

        Console.WriteLine($"Updating project file with version: {version}");

        var projectPath = "src/SharpCompress/SharpCompress.csproj";
        var content = File.ReadAllText(projectPath);

        // Get base version (without prerelease suffix)
        var baseVersion = version.Split('-')[0];

        // Update VersionPrefix
        content = Regex.Replace(
            content,
            @"<VersionPrefix>[^<]*</VersionPrefix>",
            $"<VersionPrefix>{version}</VersionPrefix>"
        );

        // Update AssemblyVersion
        content = Regex.Replace(
            content,
            @"<AssemblyVersion>[^<]*</AssemblyVersion>",
            $"<AssemblyVersion>{baseVersion}</AssemblyVersion>"
        );

        // Update FileVersion
        content = Regex.Replace(
            content,
            @"<FileVersion>[^<]*</FileVersion>",
            $"<FileVersion>{baseVersion}</FileVersion>"
        );

        File.WriteAllText(projectPath, content);
        Console.WriteLine($"Updated VersionPrefix to: {version}");
        Console.WriteLine($"Updated AssemblyVersion and FileVersion to: {baseVersion}");
    }
);

Target(
    PushToNuGet,
    () =>
    {
        var apiKey = Environment.GetEnvironmentVariable("NUGET_API_KEY");
        if (string.IsNullOrEmpty(apiKey))
        {
            Console.WriteLine(
                "NUGET_API_KEY environment variable is not set. Skipping NuGet push."
            );
            return;
        }

        var packages = Directory.GetFiles("artifacts", "*.nupkg");
        if (packages.Length == 0)
        {
            Console.WriteLine("No packages found in artifacts directory.");
            return;
        }

        foreach (var package in packages)
        {
            Console.WriteLine($"Pushing {package} to NuGet.org");
            try
            {
                // Note: API key is passed via command line argument which is standard practice for dotnet nuget push
                // The key is already in an environment variable and not displayed in normal output
                Run(
                    "dotnet",
                    $"nuget push \"{package}\" --api-key {apiKey} --source https://api.nuget.org/v3/index.json --skip-duplicate"
                );
            }
            catch (Exception ex)
            {
                Console.WriteLine($"Failed to push {package}: {ex.Message}");
                throw;
            }
        }
    }
);

Target("default", [Publish], () => Console.WriteLine("Done!"));

await RunTargetsAndExitAsync(args);

static async Task<(string version, bool isPrerelease)> GetVersion()
{
    // Check if current commit has a version tag
    var currentTag = (await GetGitOutput("tag", "--points-at HEAD"))
        .Split('\n', StringSplitOptions.RemoveEmptyEntries)
        .FirstOrDefault(tag => Regex.IsMatch(tag.Trim(), @"^\d+\.\d+\.\d+$"));

    if (!string.IsNullOrEmpty(currentTag))
    {
        // Tagged release - use the tag as version
        var version = currentTag.Trim();
        Console.WriteLine($"Building tagged release version: {version}");
        return (version, false);
    }
    else
    {
        // Not tagged - create prerelease version based on next minor version
        var allTags = (await GetGitOutput("tag", "--list"))
            .Split('\n', StringSplitOptions.RemoveEmptyEntries)
            .Where(tag => Regex.IsMatch(tag.Trim(), @"^\d+\.\d+\.\d+$"))
            .Select(tag => tag.Trim())
            .ToList();

        var lastTag = allTags.OrderBy(tag => Version.Parse(tag)).LastOrDefault() ?? "0.0.0";
        var lastVersion = Version.Parse(lastTag);

        // Increment minor version for next release
        var nextVersion = new Version(lastVersion.Major, lastVersion.Minor + 1, 0);

        // Use commit count since the last version tag if available; otherwise, fall back to total count
        var revListArgs = allTags.Any() ? $"--count {lastTag}..HEAD" : "--count HEAD";
        var commitCount = (await GetGitOutput("rev-list", revListArgs)).Trim();

        var version = $"{nextVersion}-beta.{commitCount}";
        Console.WriteLine($"Building prerelease version: {version}");
        return (version, true);
    }
}

static async Task<string> GetGitOutput(string command, string args)
{
    try
    {
        // Use SimpleExec's Read to execute git commands in a cross-platform way
        var (output, _) = await ReadAsync("git", $"{command} {args}");
        return output;
    }
    catch (Exception ex)
    {
        throw new Exception($"Git command failed: git {command} {args}\n{ex.Message}", ex);
    }
}
packages.lock.json
@@ -4,9 +4,9 @@
    "net10.0": {
      "Bullseye": {
        "type": "Direct",
        "requested": "[6.1.0, )",
        "resolved": "6.1.0",
        "contentHash": "fltnAJDe0BEX5eymXGUq+il2rSUA0pHqUonNDRH2TrvRu8SkU17mYG0IVpdmG2ibtfhdjNrv4CuTCxHOwcozCA=="
        "requested": "[6.0.0, )",
        "resolved": "6.0.0",
        "contentHash": "vgwwXfzs7jJrskWH7saHRMgPzziq/e86QZNWY1MnMxd7e+De7E7EX4K3C7yrvaK9y02SJoLxNxcLG/q5qUAghw=="
      },
      "Glob": {
        "type": "Direct",
@@ -16,9 +16,9 @@
      },
      "SimpleExec": {
        "type": "Direct",
        "requested": "[13.0.0, )",
        "resolved": "13.0.0",
        "contentHash": "zcCR1pupa1wI1VqBULRiQKeHKKZOuJhi/K+4V5oO+rHJZlaOD53ViFo1c3PavDoMAfSn/FAXGAWpPoF57rwhYg=="
        "requested": "[12.0.0, )",
        "resolved": "12.0.0",
        "contentHash": "ptxlWtxC8vM6Y6e3h9ZTxBBkOWnWrm/Sa1HT+2i1xcXY3Hx2hmKDZP5RShPf8Xr9D+ivlrXNy57ktzyH8kyt+Q=="
      }
    }
  }
src/SharpCompress/Archives/AbstractArchive.cs
@@ -8,7 +8,7 @@ using SharpCompress.Readers;

namespace SharpCompress.Archives;

public abstract class AbstractArchive<TEntry, TVolume> : IArchive
public abstract class AbstractArchive<TEntry, TVolume> : IArchive, IArchiveExtractionListener
    where TEntry : IArchiveEntry
    where TVolume : IVolume
{
@@ -17,6 +17,11 @@ public abstract class AbstractArchive<TEntry, TVolume> : IArchive
    private bool _disposed;
    private readonly SourceStream? _sourceStream;

    public event EventHandler<ArchiveExtractionEventArgs<IArchiveEntry>>? EntryExtractionBegin;
    public event EventHandler<ArchiveExtractionEventArgs<IArchiveEntry>>? EntryExtractionEnd;

    public event EventHandler<CompressedBytesReadEventArgs>? CompressedBytesRead;
    public event EventHandler<FilePartExtractionBeginEventArgs>? FilePartExtractionBegin;
    protected ReaderOptions ReaderOptions { get; }

    internal AbstractArchive(ArchiveType type, SourceStream sourceStream)
@@ -38,6 +43,12 @@ public abstract class AbstractArchive<TEntry, TVolume> : IArchive

    public ArchiveType Type { get; }

    void IArchiveExtractionListener.FireEntryExtractionBegin(IArchiveEntry entry) =>
        EntryExtractionBegin?.Invoke(this, new ArchiveExtractionEventArgs<IArchiveEntry>(entry));

    void IArchiveExtractionListener.FireEntryExtractionEnd(IArchiveEntry entry) =>
        EntryExtractionEnd?.Invoke(this, new ArchiveExtractionEventArgs<IArchiveEntry>(entry));

    private static Stream CheckStreams(Stream stream)
    {
        if (!stream.CanSeek || !stream.CanRead)
@@ -88,12 +99,38 @@ public abstract class AbstractArchive<TEntry, TVolume> : IArchive
        }
    }

    private void EnsureEntriesLoaded()
    void IArchiveExtractionListener.EnsureEntriesLoaded()
    {
        _lazyEntries.EnsureFullyLoaded();
        _lazyVolumes.EnsureFullyLoaded();
    }

    void IExtractionListener.FireCompressedBytesRead(
        long currentPartCompressedBytes,
        long compressedReadBytes
    ) =>
        CompressedBytesRead?.Invoke(
            this,
            new CompressedBytesReadEventArgs(
                currentFilePartCompressedBytesRead: currentPartCompressedBytes,
                compressedBytesRead: compressedReadBytes
            )
        );

    void IExtractionListener.FireFilePartExtractionBegin(
        string name,
        long size,
        long compressedSize
    ) =>
        FilePartExtractionBegin?.Invoke(
            this,
            new FilePartExtractionBeginEventArgs(
                compressedSize: compressedSize,
                size: size,
                name: name
            )
        );

    /// <summary>
    /// Use this method to extract all entries in an archive in order.
    /// This is primarily for SOLID Rar Archives or 7Zip Archives as they need to be
@@ -109,11 +146,11 @@ public abstract class AbstractArchive<TEntry, TVolume> : IArchive
    {
        if (!IsSolid && Type != ArchiveType.SevenZip)
        {
            throw new SharpCompressException(
            throw new InvalidOperationException(
                "ExtractAllEntries can only be used on solid archives or 7Zip archives (which require random access)."
            );
        }
        EnsureEntriesLoaded();
        ((IArchiveExtractionListener)this).EnsureEntriesLoaded();
        return CreateReaderForSolidExtraction();
    }

@@ -136,7 +173,7 @@ public abstract class AbstractArchive<TEntry, TVolume> : IArchive
    {
        get
        {
            EnsureEntriesLoaded();
            ((IArchiveExtractionListener)this).EnsureEntriesLoaded();
            return Entries.All(x => x.IsComplete);
        }
    }
src/SharpCompress/Factories/ArchiveFactory.cs
@@ -123,7 +123,7 @@ public static class ArchiveFactory
    )
    {
        using var archive = Open(sourceArchive);
        archive.WriteToDirectory(destinationDirectory, options);
        archive.ExtractToDirectory(destinationDirectory, options);
    }

    private static T FindFactory<T>(FileInfo finfo)
src/SharpCompress/Archives/IArchive.cs
@@ -7,6 +7,12 @@ namespace SharpCompress.Archives;

public interface IArchive : IDisposable
{
    event EventHandler<ArchiveExtractionEventArgs<IArchiveEntry>> EntryExtractionBegin;
    event EventHandler<ArchiveExtractionEventArgs<IArchiveEntry>> EntryExtractionEnd;

    event EventHandler<CompressedBytesReadEventArgs> CompressedBytesRead;
    event EventHandler<FilePartExtractionBeginEventArgs> FilePartExtractionBegin;

    IEnumerable<IArchiveEntry> Entries { get; }
    IEnumerable<IVolume> Volumes { get; }
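A minimal usage sketch for the events declared above; the event-args members used here (`Item`, `CompressedBytesRead`) are assumed to match their definitions elsewhere in the codebase, and the paths are placeholders:

```csharp
using System;
using SharpCompress.Archives;

using var archive = ArchiveFactory.Open("archive.zip");
// Subscribe before extraction so begin/end and byte-count events are observed.
archive.EntryExtractionBegin += (_, e) =>
    Console.WriteLine($"Extracting {e.Item.Key}");
archive.CompressedBytesRead += (_, e) =>
    Console.WriteLine($"Read {e.CompressedBytesRead} compressed bytes so far");
archive.WriteToDirectory(@"D:\output");
```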
|
||||
|
||||
|
||||
@@ -1,4 +1,3 @@
|
||||
using System;
|
||||
using System.IO;
|
||||
using System.Threading;
|
||||
using System.Threading.Tasks;
|
||||
@@ -9,153 +8,127 @@ namespace SharpCompress.Archives;
|
||||
|
||||
public static class IArchiveEntryExtensions
|
||||
{
|
||||
private const int BufferSize = 81920;
|
||||
|
||||
/// <param name="archiveEntry">The archive entry to extract.</param>
|
||||
extension(IArchiveEntry archiveEntry)
|
||||
public static void WriteTo(this IArchiveEntry archiveEntry, Stream streamToWriteTo)
|
||||
{
|
||||
/// <summary>
|
||||
/// Extract entry to the specified stream.
|
||||
/// </summary>
|
||||
/// <param name="streamToWriteTo">The stream to write the entry content to.</param>
|
||||
/// <param name="progress">Optional progress reporter for tracking extraction progress.</param>
|
||||
public void WriteTo(Stream streamToWriteTo, IProgress<ProgressReport>? progress = null)
|
||||
if (archiveEntry.IsDirectory)
|
||||
{
|
||||
if (archiveEntry.IsDirectory)
|
||||
{
|
||||
throw new ExtractionException("Entry is a file directory and cannot be extracted.");
|
||||
}
|
||||
|
||||
using var entryStream = archiveEntry.OpenEntryStream();
|
||||
var sourceStream = WrapWithProgress(entryStream, archiveEntry, progress);
|
||||
sourceStream.CopyTo(streamToWriteTo, BufferSize);
|
||||
throw new ExtractionException("Entry is a file directory and cannot be extracted.");
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Extract entry to the specified stream asynchronously.
|
||||
/// </summary>
|
||||
/// <param name="streamToWriteTo">The stream to write the entry content to.</param>
|
||||
/// <param name="cancellationToken">Cancellation token.</param>
|
||||
/// <param name="progress">Optional progress reporter for tracking extraction progress.</param>
|
||||
public async Task WriteToAsync(
|
||||
Stream streamToWriteTo,
|
||||
IProgress<ProgressReport>? progress = null,
|
||||
CancellationToken cancellationToken = default
|
||||
)
|
||||
var streamListener = (IArchiveExtractionListener)archiveEntry.Archive;
|
||||
streamListener.EnsureEntriesLoaded();
|
||||
streamListener.FireEntryExtractionBegin(archiveEntry);
|
||||
streamListener.FireFilePartExtractionBegin(
|
||||
archiveEntry.Key ?? "Key",
|
||||
archiveEntry.Size,
|
||||
archiveEntry.CompressedSize
|
||||
);
|
||||
var entryStream = archiveEntry.OpenEntryStream();
|
||||
using (entryStream)
|
||||
{
|
||||
if (archiveEntry.IsDirectory)
|
||||
{
|
||||
throw new ExtractionException("Entry is a file directory and cannot be extracted.");
|
||||
}
|
||||
|
||||
using var entryStream = await archiveEntry.OpenEntryStreamAsync(cancellationToken);
|
||||
var sourceStream = WrapWithProgress(entryStream, archiveEntry, progress);
|
||||
await sourceStream
|
||||
-            .CopyToAsync(streamToWriteTo, BufferSize, cancellationToken)
-            .ConfigureAwait(false);
+            using Stream s = new ListeningStream(streamListener, entryStream);
+            s.CopyTo(streamToWriteTo);
         }
         streamListener.FireEntryExtractionEnd(archiveEntry);
     }

-    private static Stream WrapWithProgress(
-        Stream source,
-        IArchiveEntry entry,
-        IProgress<ProgressReport>? progress
-    )
-    {
-        if (progress is null)
-        {
-            return source;
-        }
-
-        var entryPath = entry.Key ?? string.Empty;
-        var totalBytes = GetEntrySizeSafe(entry);
-        return new ProgressReportingStream(
-            source,
-            progress,
-            entryPath,
-            totalBytes,
-            leaveOpen: true
-        );
-    }
-
-    private static long? GetEntrySizeSafe(IArchiveEntry entry)
-    {
-        try
-        {
-            var size = entry.Size;
-            return size >= 0 ? size : null;
-        }
-        catch (NotImplementedException)
-        {
-            return null;
-        }
-    }
+    public static async Task WriteToAsync(
+        this IArchiveEntry archiveEntry,
+        Stream streamToWriteTo,
+        CancellationToken cancellationToken = default
+    )
+    {
+        if (archiveEntry.IsDirectory)
+        {
+            throw new ExtractionException("Entry is a file directory and cannot be extracted.");
+        }
+
+        var streamListener = (IArchiveExtractionListener)archiveEntry.Archive;
+        streamListener.EnsureEntriesLoaded();
+        streamListener.FireEntryExtractionBegin(archiveEntry);
+        streamListener.FireFilePartExtractionBegin(
+            archiveEntry.Key ?? "Key",
+            archiveEntry.Size,
+            archiveEntry.CompressedSize
+        );
+        var entryStream = archiveEntry.OpenEntryStream();
+        using (entryStream)
+        {
+            using Stream s = new ListeningStream(streamListener, entryStream);
+            await s.CopyToAsync(streamToWriteTo, 81920, cancellationToken).ConfigureAwait(false);
+        }
+        streamListener.FireEntryExtractionEnd(archiveEntry);
+    }

-    extension(IArchiveEntry entry)
-    {
-        /// <summary>
-        /// Extract to specific directory, retaining filename
-        /// </summary>
-        public void WriteToDirectory(
-            string destinationDirectory,
-            ExtractionOptions? options = null
-        ) =>
-            ExtractionMethods.WriteEntryToDirectory(
-                entry,
-                destinationDirectory,
-                options,
-                entry.WriteToFile
-            );
+    /// <summary>
+    /// Extract to specific directory, retaining filename
+    /// </summary>
+    public static void WriteToDirectory(
+        this IArchiveEntry entry,
+        string destinationDirectory,
+        ExtractionOptions? options = null
+    ) =>
+        ExtractionMethods.WriteEntryToDirectory(
+            entry,
+            destinationDirectory,
+            options,
+            entry.WriteToFile
+        );

-        /// <summary>
-        /// Extract to specific directory asynchronously, retaining filename
-        /// </summary>
-        public Task WriteToDirectoryAsync(
-            string destinationDirectory,
-            ExtractionOptions? options = null,
-            CancellationToken cancellationToken = default
-        ) =>
-            ExtractionMethods.WriteEntryToDirectoryAsync(
-                entry,
-                destinationDirectory,
-                options,
-                entry.WriteToFileAsync,
-                cancellationToken
-            );
+    /// <summary>
+    /// Extract to specific directory asynchronously, retaining filename
+    /// </summary>
+    public static Task WriteToDirectoryAsync(
+        this IArchiveEntry entry,
+        string destinationDirectory,
+        ExtractionOptions? options = null,
+        CancellationToken cancellationToken = default
+    ) =>
+        ExtractionMethods.WriteEntryToDirectoryAsync(
+            entry,
+            destinationDirectory,
+            options,
+            (x, opt) => entry.WriteToFileAsync(x, opt, cancellationToken),
+            cancellationToken
+        );

-        /// <summary>
-        /// Extract to specific file
-        /// </summary>
-        public void WriteToFile(string destinationFileName, ExtractionOptions? options = null) =>
-            ExtractionMethods.WriteEntryToFile(
-                entry,
-                destinationFileName,
-                options,
-                (x, fm) =>
-                {
-                    using var fs = File.Open(destinationFileName, fm);
-                    entry.WriteTo(fs);
-                }
-            );
+    /// <summary>
+    /// Extract to specific file
+    /// </summary>
+    public static void WriteToFile(
+        this IArchiveEntry entry,
+        string destinationFileName,
+        ExtractionOptions? options = null
+    ) =>
+        ExtractionMethods.WriteEntryToFile(
+            entry,
+            destinationFileName,
+            options,
+            (x, fm) =>
+            {
+                using var fs = File.Open(destinationFileName, fm);
+                entry.WriteTo(fs);
+            }
+        );

-        /// <summary>
-        /// Extract to specific file asynchronously
-        /// </summary>
-        public Task WriteToFileAsync(
-            string destinationFileName,
-            ExtractionOptions? options = null,
-            CancellationToken cancellationToken = default
-        ) =>
-            ExtractionMethods.WriteEntryToFileAsync(
-                entry,
-                destinationFileName,
-                options,
-                async (x, fm, ct) =>
-                {
-                    using var fs = File.Open(destinationFileName, fm);
-                    await entry.WriteToAsync(fs, null, ct).ConfigureAwait(false);
-                },
-                cancellationToken
-            );
-    }
+    /// <summary>
+    /// Extract to specific file asynchronously
+    /// </summary>
+    public static Task WriteToFileAsync(
+        this IArchiveEntry entry,
+        string destinationFileName,
+        ExtractionOptions? options = null,
+        CancellationToken cancellationToken = default
+    ) =>
+        ExtractionMethods.WriteEntryToFileAsync(
+            entry,
+            destinationFileName,
+            options,
+            async (x, fm) =>
+            {
+                using var fs = File.Open(destinationFileName, fm);
+                await entry.WriteToAsync(fs, cancellationToken).ConfigureAwait(false);
+            },
+            cancellationToken
+        );
 }
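The hunk above keeps a synchronous and an asynchronous extension method side by side. A minimal caller sketch, assuming hypothetical "input.zip" and "out" names (everything else is the API shown in the diff):

```csharp
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.Archives;
using SharpCompress.Archives.Zip;

internal static class EntryExtractionExample
{
    // Extracts the first file entry twice: once with the sync extension,
    // once with the async one. "input.zip" and "out" are placeholder names.
    public static async Task RunAsync(CancellationToken ct)
    {
        Directory.CreateDirectory("out");
        using var archive = ZipArchive.Open("input.zip");
        foreach (var entry in archive.Entries)
        {
            if (entry.IsDirectory)
            {
                continue; // WriteTo* throws on directory entries
            }
            entry.WriteToFile(Path.Combine("out", entry.Key!));
            await entry.WriteToFileAsync(Path.Combine("out", entry.Key!), null, ct);
            break;
        }
    }
}
```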
@@ -2,7 +2,6 @@ using System;
 using System.Collections.Generic;
 using System.IO;
 using System.Threading;
-using System.Threading.Tasks;
 using SharpCompress.Common;
 using SharpCompress.Readers;

@@ -10,159 +9,69 @@ namespace SharpCompress.Archives;

 public static class IArchiveExtensions
 {
-    /// <param name="archive">The archive to extract.</param>
-    extension(IArchive archive)
-    {
-        /// <summary>
-        /// Extract to specific directory with progress reporting
-        /// </summary>
-        /// <param name="destinationDirectory">The folder to extract into.</param>
-        /// <param name="options">Extraction options.</param>
-        /// <param name="progress">Optional progress reporter for tracking extraction progress.</param>
-        public void WriteToDirectory(
-            string destinationDirectory,
-            ExtractionOptions? options = null,
-            IProgress<ProgressReport>? progress = null
-        )
-        {
-            // For solid archives (Rar, 7Zip), use the optimized reader-based approach
-            if (archive.IsSolid || archive.Type == ArchiveType.SevenZip)
-            {
-                using var reader = archive.ExtractAllEntries();
-                reader.WriteAllToDirectory(destinationDirectory, options);
-            }
-            else
-            {
-                // For non-solid archives, extract entries directly
-                archive.WriteToDirectoryInternal(destinationDirectory, options, progress);
-            }
-        }
-
-        private void WriteToDirectoryInternal(
-            string destinationDirectory,
-            ExtractionOptions? options,
-            IProgress<ProgressReport>? progress
-        )
-        {
-            // Prepare for progress reporting
-            var totalBytes = archive.TotalUncompressSize;
-            var bytesRead = 0L;
-
-            // Tracking for created directories.
-            var seenDirectories = new HashSet<string>();
-
-            // Extract
-            foreach (var entry in archive.Entries)
-            {
-                if (entry.IsDirectory)
-                {
-                    var dirPath = Path.Combine(
-                        destinationDirectory,
-                        entry.Key.NotNull("Entry Key is null")
-                    );
-                    if (
-                        Path.GetDirectoryName(dirPath + "/") is { } parentDirectory
-                        && seenDirectories.Add(dirPath)
-                    )
-                    {
-                        Directory.CreateDirectory(parentDirectory);
-                    }
-                    continue;
-                }
-
-                // Use the entry's WriteToDirectory method which respects ExtractionOptions
-                entry.WriteToDirectory(destinationDirectory, options);
-
-                // Update progress
-                bytesRead += entry.Size;
-                progress?.Report(
-                    new ProgressReport(entry.Key ?? string.Empty, bytesRead, totalBytes)
-                );
-            }
-        }
-
-        /// <summary>
-        /// Extract to specific directory asynchronously with progress reporting and cancellation support
-        /// </summary>
-        /// <param name="destinationDirectory">The folder to extract into.</param>
-        /// <param name="options">Extraction options.</param>
-        /// <param name="progress">Optional progress reporter for tracking extraction progress.</param>
-        /// <param name="cancellationToken">Optional cancellation token.</param>
-        public async Task WriteToDirectoryAsync(
-            string destinationDirectory,
-            ExtractionOptions? options = null,
-            IProgress<ProgressReport>? progress = null,
-            CancellationToken cancellationToken = default
-        )
-        {
-            // For solid archives (Rar, 7Zip), use the optimized reader-based approach
-            if (archive.IsSolid || archive.Type == ArchiveType.SevenZip)
-            {
-                using var reader = archive.ExtractAllEntries();
-                await reader.WriteAllToDirectoryAsync(
-                    destinationDirectory,
-                    options,
-                    cancellationToken
-                );
-            }
-            else
-            {
-                // For non-solid archives, extract entries directly
-                await archive.WriteToDirectoryAsyncInternal(
-                    destinationDirectory,
-                    options,
-                    progress,
-                    cancellationToken
-                );
-            }
-        }
-
-        private async Task WriteToDirectoryAsyncInternal(
-            string destinationDirectory,
-            ExtractionOptions? options,
-            IProgress<ProgressReport>? progress,
-            CancellationToken cancellationToken
-        )
-        {
-            // Prepare for progress reporting
-            var totalBytes = archive.TotalUncompressSize;
-            var bytesRead = 0L;
-
-            // Tracking for created directories.
-            var seenDirectories = new HashSet<string>();
-
-            // Extract
-            foreach (var entry in archive.Entries)
-            {
-                cancellationToken.ThrowIfCancellationRequested();
-
-                if (entry.IsDirectory)
-                {
-                    var dirPath = Path.Combine(
-                        destinationDirectory,
-                        entry.Key.NotNull("Entry Key is null")
-                    );
-                    if (
-                        Path.GetDirectoryName(dirPath + "/") is { } parentDirectory
-                        && seenDirectories.Add(dirPath)
-                    )
-                    {
-                        Directory.CreateDirectory(parentDirectory);
-                    }
-                    continue;
-                }
-
-                // Use the entry's WriteToDirectoryAsync method which respects ExtractionOptions
-                await entry
-                    .WriteToDirectoryAsync(destinationDirectory, options, cancellationToken)
-                    .ConfigureAwait(false);
-
-                // Update progress
-                bytesRead += entry.Size;
-                progress?.Report(
-                    new ProgressReport(entry.Key ?? string.Empty, bytesRead, totalBytes)
-                );
-            }
-        }
-    }
+    /// <summary>
+    /// Extract to specific directory, retaining filename
+    /// </summary>
+    public static void WriteToDirectory(
+        this IArchive archive,
+        string destinationDirectory,
+        ExtractionOptions? options = null
+    )
+    {
+        using var reader = archive.ExtractAllEntries();
+        reader.WriteAllToDirectory(destinationDirectory, options);
+    }
+
+    public static void ExtractToDirectory(
+        this IArchive archive,
+        string destination,
+        ExtractionOptions? options = null,
+        Action<double>? progressReport = null,
+        CancellationToken cancellationToken = default
+    )
+    {
+        // Prepare for progress reporting
+        var totalBytes = archive.TotalUncompressSize;
+        var bytesRead = 0L;
+
+        // Tracking for created directories.
+        var seenDirectories = new HashSet<string>();
+
+        // Extract
+        foreach (var entry in archive.Entries)
+        {
+            cancellationToken.ThrowIfCancellationRequested();
+
+            if (entry.IsDirectory)
+            {
+                var dirPath = Path.Combine(destination, entry.Key.NotNull("Entry Key is null"));
+                if (
+                    Path.GetDirectoryName(dirPath + "/") is { } emptyDirectory
+                    && seenDirectories.Add(dirPath)
+                )
+                {
+                    Directory.CreateDirectory(emptyDirectory);
+                }
+                continue;
+            }
+
+            // Create each directory if not already created
+            var path = Path.Combine(destination, entry.Key.NotNull("Entry Key is null"));
+            if (Path.GetDirectoryName(path) is { } directory)
+            {
+                if (!Directory.Exists(directory) && !seenDirectories.Contains(directory))
+                {
+                    Directory.CreateDirectory(directory);
+                    seenDirectories.Add(directory);
+                }
+            }
+
+            // Write file
+            entry.WriteToFile(path, options);
+
+            // Update progress
+            bytesRead += entry.Size;
+            progressReport?.Invoke(bytesRead / (double)totalBytes);
+        }
+    }
 }
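A minimal caller sketch for the restored callback-based API above; the archive path and destination are placeholders:

```csharp
using System;
using System.Threading;
using SharpCompress.Archives;

internal static class ArchiveProgressExample
{
    public static void Run()
    {
        // ArchiveFactory.Open is the general entry point; "data.7z" is a placeholder.
        using var archive = ArchiveFactory.Open("data.7z");
        archive.ExtractToDirectory(
            "extracted",
            progressReport: fraction => Console.WriteLine($"{fraction:P0} done"),
            cancellationToken: CancellationToken.None
        );
    }
}
```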
src/SharpCompress/Archives/IArchiveExtractionListener.cs (new file)
@@ -0,0 +1,10 @@
+using SharpCompress.Common;
+
+namespace SharpCompress.Archives;
+
+internal interface IArchiveExtractionListener : IExtractionListener
+{
+    void EnsureEntriesLoaded();
+    void FireEntryExtractionBegin(IArchiveEntry entry);
+    void FireEntryExtractionEnd(IArchiveEntry entry);
+}
@@ -76,7 +76,7 @@ public class RarArchiveEntry : RarEntry, IArchiveEntry
             stream = new RarStream(
                 archive.UnpackV1.Value,
                 FileHeader,
-                new MultiVolumeReadOnlyStream(Parts.Cast<RarFilePart>())
+                new MultiVolumeReadOnlyStream(Parts.Cast<RarFilePart>(), archive)
             );
         }
         else
@@ -84,7 +84,7 @@ public class RarArchiveEntry : RarEntry, IArchiveEntry
             stream = new RarStream(
                 archive.UnpackV2017.Value,
                 FileHeader,
-                new MultiVolumeReadOnlyStream(Parts.Cast<RarFilePart>())
+                new MultiVolumeReadOnlyStream(Parts.Cast<RarFilePart>(), archive)
             );
         }

@@ -100,7 +100,7 @@ public class RarArchiveEntry : RarEntry, IArchiveEntry
             stream = new RarStream(
                 archive.UnpackV1.Value,
                 FileHeader,
-                new MultiVolumeReadOnlyStream(Parts.Cast<RarFilePart>())
+                new MultiVolumeReadOnlyStream(Parts.Cast<RarFilePart>(), archive)
             );
         }
         else
@@ -108,7 +108,7 @@ public class RarArchiveEntry : RarEntry, IArchiveEntry
             stream = new RarStream(
                 archive.UnpackV2017.Value,
                 FileHeader,
-                new MultiVolumeReadOnlyStream(Parts.Cast<RarFilePart>())
+                new MultiVolumeReadOnlyStream(Parts.Cast<RarFilePart>(), archive)
             );
         }
@@ -2,8 +2,6 @@ using System;
 using System.Collections.Generic;
 using System.IO;
 using System.Linq;
-using System.Threading;
-using System.Threading.Tasks;
 using SharpCompress.Common;
 using SharpCompress.Common.SevenZip;
 using SharpCompress.Compressors.LZMA.Utilites;
@@ -215,7 +213,9 @@ public class SevenZipArchive : AbstractArchive<SevenZipArchiveEntry, SevenZipVol
     private sealed class SevenZipReader : AbstractReader<SevenZipEntry, SevenZipVolume>
     {
         private readonly SevenZipArchive _archive;
-        private SevenZipEntry? _currentEntry;
+        private CFolder? _currentFolder;
+        private Stream? _currentStream;
+        private CFileItem? _currentItem;

         internal SevenZipReader(ReaderOptions readerOptions, SevenZipArchive archive)
             : base(readerOptions, ArchiveType.SevenZip) => this._archive = archive;
@@ -228,135 +228,40 @@ public class SevenZipArchive : AbstractArchive<SevenZipArchiveEntry, SevenZipVol
             stream.Position = 0;
             foreach (var dir in entries.Where(x => x.IsDirectory))
             {
-                _currentEntry = dir;
                 yield return dir;
             }
-            // For non-directory entries, yield them without creating shared streams
-            // Each call to GetEntryStream() will create a fresh decompression stream
-            // to avoid state corruption issues with async operations
-            foreach (var entry in entries.Where(x => !x.IsDirectory))
+            foreach (
+                var group in entries.Where(x => !x.IsDirectory).GroupBy(x => x.FilePart.Folder)
+            )
             {
-                _currentEntry = entry;
-                yield return entry;
+                _currentFolder = group.Key;
+                if (group.Key is null)
+                {
+                    _currentStream = Stream.Null;
+                }
+                else
+                {
+                    _currentStream = _archive._database?.GetFolderStream(
+                        stream,
+                        _currentFolder,
+                        new PasswordProvider(Options.Password)
+                    );
+                }
+                foreach (var entry in group)
+                {
+                    _currentItem = entry.FilePart.Header;
+                    yield return entry;
+                }
             }
         }

-        protected override EntryStream GetEntryStream()
-        {
-            // Create a fresh decompression stream for each file (no state sharing).
-            // However, the LZMA decoder has bugs in its async implementation that cause
-            // state corruption even on fresh streams. The SyncOnlyStream wrapper
-            // works around these bugs by forcing async operations to use sync equivalents.
-            //
-            // TODO: Fix the LZMA decoder async bugs (in LzmaStream, Decoder, OutWindow)
-            // so this wrapper is no longer necessary.
-            var entry = _currentEntry.NotNull("currentEntry is not null");
-            if (entry.IsDirectory)
-            {
-                return CreateEntryStream(Stream.Null);
-            }
-            return CreateEntryStream(new SyncOnlyStream(entry.FilePart.GetCompressedStream()));
-        }
-    }
-
-    /// <summary>
-    /// WORKAROUND: Forces async operations to use synchronous equivalents.
-    /// This is necessary because the LZMA decoder has bugs in its async implementation
-    /// that cause state corruption (IndexOutOfRangeException, DataErrorException).
-    ///
-    /// The proper fix would be to repair the LZMA decoder's async methods
-    /// (LzmaStream.ReadAsync, Decoder.CodeAsync, OutWindow async operations),
-    /// but that requires deep changes to the decoder state machine.
-    /// </summary>
-    private sealed class SyncOnlyStream : Stream
-    {
-        private readonly Stream _baseStream;
-
-        public SyncOnlyStream(Stream baseStream) => _baseStream = baseStream;
-
-        public override bool CanRead => _baseStream.CanRead;
-        public override bool CanSeek => _baseStream.CanSeek;
-        public override bool CanWrite => _baseStream.CanWrite;
-        public override long Length => _baseStream.Length;
-        public override long Position
-        {
-            get => _baseStream.Position;
-            set => _baseStream.Position = value;
-        }
-
-        public override void Flush() => _baseStream.Flush();
-
-        public override int Read(byte[] buffer, int offset, int count) =>
-            _baseStream.Read(buffer, offset, count);
-
-        public override long Seek(long offset, SeekOrigin origin) =>
-            _baseStream.Seek(offset, origin);
-
-        public override void SetLength(long value) => _baseStream.SetLength(value);
-
-        public override void Write(byte[] buffer, int offset, int count) =>
-            _baseStream.Write(buffer, offset, count);
-
-        // Force async operations to use sync equivalents to avoid LZMA decoder bugs
-        public override Task<int> ReadAsync(
-            byte[] buffer,
-            int offset,
-            int count,
-            CancellationToken cancellationToken
-        )
-        {
-            cancellationToken.ThrowIfCancellationRequested();
-            return Task.FromResult(_baseStream.Read(buffer, offset, count));
-        }
-
-        public override Task WriteAsync(
-            byte[] buffer,
-            int offset,
-            int count,
-            CancellationToken cancellationToken
-        )
-        {
-            cancellationToken.ThrowIfCancellationRequested();
-            _baseStream.Write(buffer, offset, count);
-            return Task.CompletedTask;
-        }
-
-        public override Task FlushAsync(CancellationToken cancellationToken)
-        {
-            cancellationToken.ThrowIfCancellationRequested();
-            _baseStream.Flush();
-            return Task.CompletedTask;
-        }
-
-#if !NETFRAMEWORK && !NETSTANDARD2_0
-        public override ValueTask<int> ReadAsync(
-            Memory<byte> buffer,
-            CancellationToken cancellationToken = default
-        )
-        {
-            cancellationToken.ThrowIfCancellationRequested();
-            return new ValueTask<int>(_baseStream.Read(buffer.Span));
-        }
-
-        public override ValueTask WriteAsync(
-            ReadOnlyMemory<byte> buffer,
-            CancellationToken cancellationToken = default
-        )
-        {
-            cancellationToken.ThrowIfCancellationRequested();
-            _baseStream.Write(buffer.Span);
-            return ValueTask.CompletedTask;
-        }
-#endif
-
-        protected override void Dispose(bool disposing)
-        {
-            if (disposing)
-            {
-                _baseStream.Dispose();
-            }
-            base.Dispose(disposing);
-        }
+        protected override EntryStream GetEntryStream() =>
+            CreateEntryStream(
+                new ReadOnlySubStream(
+                    _currentStream.NotNull("currentStream is not null"),
+                    _currentItem?.Size ?? 0
+                )
+            );
     }

     private class PasswordProvider : IPasswordProvider
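The restored reader groups entries by 7z "folder" because every file in a solid folder is a contiguous slice of one decoded stream, so entries must be consumed in order. A conceptual sketch of that slicing, where LengthLimitedView is a hypothetical stand-in for SharpCompress's ReadOnlySubStream:

```csharp
using System.IO;

// Each file in a 7z folder occupies a contiguous run of the folder's decoded
// output, so a per-entry stream is just a length-limited view over the shared
// stream; reading entry N+1 implicitly starts where entry N ended.
internal sealed class LengthLimitedView : Stream
{
    private readonly Stream _inner;
    private long _remaining;

    public LengthLimitedView(Stream inner, long length)
    {
        _inner = inner;
        _remaining = length;
    }

    public override int Read(byte[] buffer, int offset, int count)
    {
        if (_remaining == 0)
        {
            return 0; // this entry's slice is exhausted
        }
        var read = _inner.Read(buffer, offset, (int)System.Math.Min(count, _remaining));
        _remaining -= read;
        return read;
    }

    public override bool CanRead => true;
    public override bool CanSeek => false;
    public override bool CanWrite => false;
    public override long Length => throw new System.NotSupportedException();
    public override long Position
    {
        get => throw new System.NotSupportedException();
        set => throw new System.NotSupportedException();
    }

    public override void Flush() { }

    public override long Seek(long offset, SeekOrigin origin) =>
        throw new System.NotSupportedException();

    public override void SetLength(long value) => throw new System.NotSupportedException();

    public override void Write(byte[] buffer, int offset, int count) =>
        throw new System.NotSupportedException();
}
```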
@@ -1,61 +0,0 @@
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace SharpCompress.Common.Ace
{
    public class AceCrc
    {
        // CRC-32 lookup table (standard polynomial 0xEDB88320, reflected)
        private static readonly uint[] Crc32Table = GenerateTable();

        private static uint[] GenerateTable()
        {
            var table = new uint[256];

            for (int i = 0; i < 256; i++)
            {
                uint crc = (uint)i;

                for (int j = 0; j < 8; j++)
                {
                    if ((crc & 1) != 0)
                        crc = (crc >> 1) ^ 0xEDB88320u;
                    else
                        crc >>= 1;
                }

                table[i] = crc;
            }

            return table;
        }

        /// <summary>
        /// Calculate ACE CRC-32 checksum.
        /// ACE CRC-32 uses standard CRC-32 polynomial (0xEDB88320, reflected)
        /// with init=0xFFFFFFFF but NO final XOR.
        /// </summary>
        public static uint AceCrc32(ReadOnlySpan<byte> data)
        {
            uint crc = 0xFFFFFFFFu;

            foreach (byte b in data)
            {
                crc = (crc >> 8) ^ Crc32Table[(crc ^ b) & 0xFF];
            }

            return crc; // No final XOR for ACE
        }

        /// <summary>
        /// ACE CRC-16 is the lower 16 bits of the ACE CRC-32.
        /// </summary>
        public static ushort AceCrc16(ReadOnlySpan<byte> data)
        {
            return (ushort)(AceCrc32(data) & 0xFFFF);
        }
    }
}
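The "no final XOR" detail in the deleted class means an ACE checksum is just the bitwise complement of the familiar zlib-style CRC-32; a small self-contained sketch verifying that relationship:

```csharp
using System;

internal static class AceCrcRelation
{
    // Recomputes both variants over the same bytes and checks that the
    // standard CRC-32 (with final XOR) equals the ACE CRC-32 complemented.
    public static void Check()
    {
        var data = new byte[] { 1, 2, 3, 4, 5 };
        uint ace = Crc(data, finalXor: false); // ACE convention
        uint standard = Crc(data, finalXor: true); // zlib/PNG convention
        Console.WriteLine(standard == (ace ^ 0xFFFFFFFFu)); // True
    }

    private static uint Crc(ReadOnlySpan<byte> data, bool finalXor)
    {
        uint crc = 0xFFFFFFFFu;
        foreach (var b in data)
        {
            crc ^= b;
            for (var i = 0; i < 8; i++)
            {
                crc = (crc & 1) != 0 ? (crc >> 1) ^ 0xEDB88320u : crc >> 1;
            }
        }
        return finalXor ? crc ^ 0xFFFFFFFFu : crc;
    }
}
```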
@@ -1,68 +0,0 @@
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using SharpCompress.Common.Ace.Headers;

namespace SharpCompress.Common.Ace
{
    public class AceEntry : Entry
    {
        private readonly AceFilePart _filePart;

        internal AceEntry(AceFilePart filePart)
        {
            _filePart = filePart;
        }

        public override long Crc
        {
            get
            {
                if (_filePart == null)
                {
                    return 0;
                }
                return _filePart.Header.Crc32;
            }
        }

        public override string? Key => _filePart?.Header.Filename;

        public override string? LinkTarget => null;

        public override long CompressedSize => _filePart?.Header.PackedSize ?? 0;

        public override CompressionType CompressionType
        {
            get
            {
                if (_filePart.Header.CompressionType == Headers.CompressionType.Stored)
                {
                    return CompressionType.None;
                }
                return CompressionType.AceLZ77;
            }
        }

        public override long Size => _filePart?.Header.OriginalSize ?? 0;

        public override DateTime? LastModifiedTime => _filePart.Header.DateTime;

        public override DateTime? CreatedTime => null;

        public override DateTime? LastAccessedTime => null;

        public override DateTime? ArchivedTime => null;

        public override bool IsEncrypted => _filePart.Header.IsFileEncrypted;

        public override bool IsDirectory => _filePart.Header.IsDirectory;

        public override bool IsSplitAfter => false;

        internal override IEnumerable<FilePart> Parts => _filePart.Empty();
    }
}
@@ -1,52 +0,0 @@
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using SharpCompress.Common.Ace.Headers;
using SharpCompress.IO;

namespace SharpCompress.Common.Ace
{
    public class AceFilePart : FilePart
    {
        private readonly Stream _stream;
        internal AceFileHeader Header { get; set; }

        internal AceFilePart(AceFileHeader localAceHeader, Stream seekableStream)
            : base(localAceHeader.ArchiveEncoding)
        {
            _stream = seekableStream;
            Header = localAceHeader;
        }

        internal override string? FilePartName => Header.Filename;

        internal override Stream GetCompressedStream()
        {
            if (_stream != null)
            {
                Stream compressedStream;
                switch (Header.CompressionType)
                {
                    case Headers.CompressionType.Stored:
                        compressedStream = new ReadOnlySubStream(
                            _stream,
                            Header.DataStartPosition,
                            Header.PackedSize
                        );
                        break;
                    default:
                        throw new NotSupportedException(
                            "CompressionMethod: " + Header.CompressionQuality
                        );
                }
                return compressedStream;
            }
            return _stream.NotNull();
        }

        internal override Stream? GetRawStream() => _stream;
    }
}
@@ -1,35 +0,0 @@
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using SharpCompress.Common.Arj;
using SharpCompress.Readers;

namespace SharpCompress.Common.Ace
{
    public class AceVolume : Volume
    {
        public AceVolume(Stream stream, ReaderOptions readerOptions, int index = 0)
            : base(stream, readerOptions, index) { }

        public override bool IsFirstVolume
        {
            get { return true; }
        }

        /// <summary>
        /// ArjArchive is part of a multi-part archive.
        /// </summary>
        public override bool IsMultiVolume
        {
            get { return false; }
        }

        internal IEnumerable<AceFilePart> GetVolumeFileParts()
        {
            return new List<AceFilePart>();
        }
    }
}
@@ -1,171 +0,0 @@
using System;
using System.Buffers.Binary;
using System.Collections.Generic;
using System.IO;
using System.Xml.Linq;
using SharpCompress.Common.Arc;

namespace SharpCompress.Common.Ace.Headers
{
    /// <summary>
    /// ACE file entry header
    /// </summary>
    public sealed class AceFileHeader : AceHeader
    {
        public long DataStartPosition { get; private set; }
        public long PackedSize { get; set; }
        public long OriginalSize { get; set; }
        public DateTime DateTime { get; set; }
        public int Attributes { get; set; }
        public uint Crc32 { get; set; }
        public CompressionType CompressionType { get; set; }
        public CompressionQuality CompressionQuality { get; set; }
        public ushort Parameters { get; set; }
        public string Filename { get; set; } = string.Empty;
        public List<byte> Comment { get; set; } = new();

        /// <summary>
        /// File data offset in the archive
        /// </summary>
        public ulong DataOffset { get; set; }

        public bool IsDirectory => (Attributes & 0x10) != 0;

        public bool IsContinuedFromPrev =>
            (HeaderFlags & SharpCompress.Common.Ace.Headers.HeaderFlags.CONTINUED_PREV) != 0;

        public bool IsContinuedToNext =>
            (HeaderFlags & SharpCompress.Common.Ace.Headers.HeaderFlags.CONTINUED_NEXT) != 0;

        public int DictionarySize
        {
            get
            {
                int bits = Parameters & 0x0F;
                return bits < 10 ? 1024 : 1 << bits;
            }
        }

        public AceFileHeader(ArchiveEncoding archiveEncoding)
            : base(archiveEncoding, AceHeaderType.FILE) { }

        /// <summary>
        /// Reads the next file entry header from the stream.
        /// Returns null if no more entries or end of archive.
        /// Supports both ACE 1.0 and ACE 2.0 formats.
        /// </summary>
        public override AceHeader? Read(Stream stream)
        {
            var headerData = ReadHeader(stream);
            if (headerData.Length == 0)
            {
                return null;
            }
            int offset = 0;

            // Header type (1 byte)
            HeaderType = headerData[offset++];

            // Skip recovery record headers (ACE 2.0 feature)
            if (HeaderType == (byte)SharpCompress.Common.Ace.Headers.AceHeaderType.RECOVERY32)
            {
                // Skip to next header
                return null;
            }

            if (HeaderType != (byte)SharpCompress.Common.Ace.Headers.AceHeaderType.FILE)
            {
                // Unknown header type - skip
                return null;
            }

            // Header flags (2 bytes)
            HeaderFlags = BitConverter.ToUInt16(headerData, offset);
            offset += 2;

            // Packed size (4 bytes)
            PackedSize = BitConverter.ToUInt32(headerData, offset);
            offset += 4;

            // Original size (4 bytes)
            OriginalSize = BitConverter.ToUInt32(headerData, offset);
            offset += 4;

            // File date/time in DOS format (4 bytes)
            var dosDateTime = BitConverter.ToUInt32(headerData, offset);
            DateTime = ConvertDosDateTime(dosDateTime);
            offset += 4;

            // File attributes (4 bytes)
            Attributes = (int)BitConverter.ToUInt32(headerData, offset);
            offset += 4;

            // CRC32 (4 bytes)
            Crc32 = BitConverter.ToUInt32(headerData, offset);
            offset += 4;

            // Compression type (1 byte)
            byte compressionType = headerData[offset++];
            CompressionType = GetCompressionType(compressionType);

            // Compression quality/parameter (1 byte)
            byte compressionQuality = headerData[offset++];
            CompressionQuality = GetCompressionQuality(compressionQuality);

            // Parameters (2 bytes)
            Parameters = BitConverter.ToUInt16(headerData, offset);
            offset += 2;

            // Reserved (2 bytes) - skip
            offset += 2;

            // Filename length (2 bytes)
            var filenameLength = BitConverter.ToUInt16(headerData, offset);
            offset += 2;

            // Filename
            if (offset + filenameLength <= headerData.Length)
            {
                Filename = ArchiveEncoding.Decode(headerData, offset, filenameLength);
                offset += filenameLength;
            }

            // Handle comment if present
            if ((HeaderFlags & SharpCompress.Common.Ace.Headers.HeaderFlags.COMMENT) != 0)
            {
                // Comment length (2 bytes)
                if (offset + 2 <= headerData.Length)
                {
                    ushort commentLength = BitConverter.ToUInt16(headerData, offset);
                    offset += 2 + commentLength; // Skip comment
                }
            }

            // Store the data start position
            DataStartPosition = stream.Position;

            return this;
        }

        public CompressionType GetCompressionType(byte value) =>
            value switch
            {
                0 => CompressionType.Stored,
                1 => CompressionType.Lz77,
                2 => CompressionType.Blocked,
                _ => CompressionType.Unknown,
            };

        public CompressionQuality GetCompressionQuality(byte value) =>
            value switch
            {
                0 => CompressionQuality.None,
                1 => CompressionQuality.Fastest,
                2 => CompressionQuality.Fast,
                3 => CompressionQuality.Normal,
                4 => CompressionQuality.Good,
                5 => CompressionQuality.Best,
                _ => CompressionQuality.Unknown,
            };
    }
}
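The `DictionarySize` getter in the deleted header packs the window size into the low nibble of `Parameters`; a short sketch restating the same rule outside the class:

```csharp
using System;

internal static class AceDictionarySizeExample
{
    // Same rule as the removed AceFileHeader.DictionarySize: the low 4 bits of
    // the Parameters word give log2 of the window; values below 10 clamp to 1 KiB.
    public static int Decode(ushort parameters)
    {
        int bits = parameters & 0x0F;
        return bits < 10 ? 1024 : 1 << bits;
    }

    public static void Demo()
    {
        Console.WriteLine(Decode(0x000C)); // 4096 (1 << 12)
        Console.WriteLine(Decode(0x0005)); // 1024 (values below 10 clamp)
    }
}
```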
@@ -1,153 +0,0 @@
using System;
using System.IO;
using SharpCompress.Common.Arj.Headers;
using SharpCompress.Crypto;

namespace SharpCompress.Common.Ace.Headers
{
    /// <summary>
    /// Header type constants
    /// </summary>
    public enum AceHeaderType
    {
        MAIN = 0,
        FILE = 1,
        RECOVERY32 = 2,
        RECOVERY64A = 3,
        RECOVERY64B = 4,
    }

    public abstract class AceHeader
    {
        // ACE signature: bytes at offset 7 should be "**ACE**"
        private static readonly byte[] AceSignature =
        [
            (byte)'*',
            (byte)'*',
            (byte)'A',
            (byte)'C',
            (byte)'E',
            (byte)'*',
            (byte)'*',
        ];

        public AceHeader(ArchiveEncoding archiveEncoding, AceHeaderType type)
        {
            AceHeaderType = type;
            ArchiveEncoding = archiveEncoding;
        }

        public ArchiveEncoding ArchiveEncoding { get; }
        public AceHeaderType AceHeaderType { get; }

        public ushort HeaderFlags { get; set; }
        public ushort HeaderCrc { get; set; }
        public ushort HeaderSize { get; set; }
        public byte HeaderType { get; set; }

        public bool IsFileEncrypted =>
            (HeaderFlags & SharpCompress.Common.Ace.Headers.HeaderFlags.FILE_ENCRYPTED) != 0;

        public bool Is64Bit =>
            (HeaderFlags & SharpCompress.Common.Ace.Headers.HeaderFlags.MEMORY_64BIT) != 0;

        public bool IsSolid =>
            (HeaderFlags & SharpCompress.Common.Ace.Headers.HeaderFlags.SOLID_MAIN) != 0;

        public bool IsMultiVolume =>
            (HeaderFlags & SharpCompress.Common.Ace.Headers.HeaderFlags.MULTIVOLUME) != 0;

        public abstract AceHeader? Read(Stream reader);

        public byte[] ReadHeader(Stream stream)
        {
            // Read header CRC (2 bytes) and header size (2 bytes)
            var headerBytes = new byte[4];
            if (stream.Read(headerBytes, 0, 4) != 4)
            {
                return Array.Empty<byte>();
            }

            HeaderCrc = BitConverter.ToUInt16(headerBytes, 0); // CRC for validation
            HeaderSize = BitConverter.ToUInt16(headerBytes, 2);
            if (HeaderSize == 0)
            {
                return Array.Empty<byte>();
            }

            // Read the header data
            var body = new byte[HeaderSize];
            if (stream.Read(body, 0, HeaderSize) != HeaderSize)
            {
                return Array.Empty<byte>();
            }

            // Verify crc
            var checksum = AceCrc.AceCrc16(body);
            if (checksum != HeaderCrc)
            {
                throw new InvalidDataException("Header checksum is invalid");
            }
            return body;
        }

        public static bool IsArchive(Stream stream)
        {
            // ACE files have a specific signature
            // First two bytes are typically 0x60 0xEA (signature bytes)
            // At offset 7, there should be "**ACE**" (7 bytes)
            var bytes = new byte[14];
            if (stream.Read(bytes, 0, 14) != 14)
            {
                return false;
            }

            // Check for "**ACE**" at offset 7
            return CheckMagicBytes(bytes, 7);
        }

        protected static bool CheckMagicBytes(byte[] headerBytes, int offset)
        {
            // Check for "**ACE**" at specified offset
            for (int i = 0; i < AceSignature.Length; i++)
            {
                if (headerBytes[offset + i] != AceSignature[i])
                {
                    return false;
                }
            }
            return true;
        }

        protected DateTime ConvertDosDateTime(uint dosDateTime)
        {
            try
            {
                int second = (int)(dosDateTime & 0x1F) * 2;
                int minute = (int)((dosDateTime >> 5) & 0x3F);
                int hour = (int)((dosDateTime >> 11) & 0x1F);
                int day = (int)((dosDateTime >> 16) & 0x1F);
                int month = (int)((dosDateTime >> 21) & 0x0F);
                int year = (int)((dosDateTime >> 25) & 0x7F) + 1980;

                if (
                    day < 1
                    || day > 31
                    || month < 1
                    || month > 12
                    || hour > 23
                    || minute > 59
                    || second > 59
                )
                {
                    return DateTime.MinValue;
                }

                return new DateTime(year, month, day, hour, minute, second);
            }
            catch
            {
                return DateTime.MinValue;
            }
        }
    }
}
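`ConvertDosDateTime` above unpacks the classic 32-bit DOS timestamp (2-second resolution, 1980 epoch); a round-trip sketch with a hand-packed value:

```csharp
using System;

internal static class DosDateTimeExample
{
    public static void Demo()
    {
        // Pack 2024-06-15 12:34:56 into DOS format:
        // bits 25-31 year-1980, 21-24 month, 16-20 day,
        // bits 11-15 hour, 5-10 minute, 0-4 second/2
        uint packed =
            ((2024u - 1980) << 25)
            | (6u << 21)
            | (15u << 16)
            | (12u << 11)
            | (34u << 5)
            | (56u / 2);

        var second = (int)(packed & 0x1F) * 2; // 56
        var minute = (int)((packed >> 5) & 0x3F); // 34
        var hour = (int)((packed >> 11) & 0x1F); // 12
        var day = (int)((packed >> 16) & 0x1F); // 15
        var month = (int)((packed >> 21) & 0x0F); // 6
        var year = (int)((packed >> 25) & 0x7F) + 1980; // 2024

        Console.WriteLine(new DateTime(year, month, day, hour, minute, second));
    }
}
```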
@@ -1,97 +0,0 @@
using System;
using System.Buffers.Binary;
using System.Collections.Generic;
using System.IO;
using SharpCompress.Common.Ace.Headers;
using SharpCompress.Common.Zip.Headers;
using SharpCompress.Crypto;

namespace SharpCompress.Common.Ace.Headers
{
    /// <summary>
    /// ACE main archive header
    /// </summary>
    public sealed class AceMainHeader : AceHeader
    {
        public byte ExtractVersion { get; set; }
        public byte CreatorVersion { get; set; }
        public HostOS HostOS { get; set; }
        public byte VolumeNumber { get; set; }
        public DateTime DateTime { get; set; }
        public string Advert { get; set; } = string.Empty;
        public List<byte> Comment { get; set; } = new();
        public byte AceVersion { get; private set; }

        public AceMainHeader(ArchiveEncoding archiveEncoding)
            : base(archiveEncoding, AceHeaderType.MAIN) { }

        /// <summary>
        /// Reads the main archive header from the stream.
        /// Returns header if this is a valid ACE archive.
        /// Supports both ACE 1.0 and ACE 2.0 formats.
        /// </summary>
        public override AceHeader? Read(Stream stream)
        {
            var headerData = ReadHeader(stream);
            if (headerData.Length == 0)
            {
                return null;
            }
            int offset = 0;

            // Header type should be 0 for main header
            if (headerData[offset++] != HeaderType)
            {
                return null;
            }

            // Header flags (2 bytes)
            HeaderFlags = BitConverter.ToUInt16(headerData, offset);
            offset += 2;

            // Skip signature "**ACE**" (7 bytes)
            if (!CheckMagicBytes(headerData, offset))
            {
                throw new InvalidDataException("Invalid ACE archive signature.");
            }
            offset += 7;

            // ACE version (1 byte) - 10 for ACE 1.0, 20 for ACE 2.0
            AceVersion = headerData[offset++];
            ExtractVersion = headerData[offset++];

            // Host OS (1 byte)
            if (offset < headerData.Length)
            {
                var hostOsByte = headerData[offset++];
                HostOS = hostOsByte <= 11 ? (HostOS)hostOsByte : HostOS.Unknown;
            }
            // Volume number (1 byte)
            VolumeNumber = headerData[offset++];

            // Creation date/time (4 bytes)
            var dosDateTime = BitConverter.ToUInt32(headerData, offset);
            DateTime = ConvertDosDateTime(dosDateTime);
            offset += 4;

            // Reserved fields (8 bytes)
            if (offset + 8 <= headerData.Length)
            {
                offset += 8;
            }

            // Skip additional fields based on flags
            // Handle comment if present
            if ((HeaderFlags & SharpCompress.Common.Ace.Headers.HeaderFlags.COMMENT) != 0)
            {
                if (offset + 2 <= headerData.Length)
                {
                    ushort commentLength = BitConverter.ToUInt16(headerData, offset);
                    offset += 2 + commentLength;
                }
            }

            return this;
        }
    }
}
@@ -1,16 +0,0 @@
namespace SharpCompress.Common.Ace.Headers
{
    /// <summary>
    /// Compression quality
    /// </summary>
    public enum CompressionQuality
    {
        None,
        Fastest,
        Fast,
        Normal,
        Good,
        Best,
        Unknown,
    }
}
@@ -1,13 +0,0 @@
namespace SharpCompress.Common.Ace.Headers
{
    /// <summary>
    /// Compression types
    /// </summary>
    public enum CompressionType
    {
        Stored,
        Lz77,
        Blocked,
        Unknown,
    }
}
@@ -1,33 +0,0 @@
namespace SharpCompress.Common.Ace.Headers
{
    /// <summary>
    /// Header flags (main + file, overlapping meanings)
    /// </summary>
    public static class HeaderFlags
    {
        // Shared / low bits
        public const ushort ADDSIZE = 0x0001; // extra size field present
        public const ushort COMMENT = 0x0002; // comment present
        public const ushort MEMORY_64BIT = 0x0004;
        public const ushort AV_STRING = 0x0008; // AV string present
        public const ushort SOLID = 0x0010; // solid file
        public const ushort LOCKED = 0x0020;
        public const ushort PROTECTED = 0x0040;

        // Main header specific
        public const ushort V20FORMAT = 0x0100;
        public const ushort SFX = 0x0200;
        public const ushort LIMITSFXJR = 0x0400;
        public const ushort MULTIVOLUME = 0x0800;
        public const ushort ADVERT = 0x1000;
        public const ushort RECOVERY = 0x2000;
        public const ushort LOCKED_MAIN = 0x4000;
        public const ushort SOLID_MAIN = 0x8000;

        // File header specific (same bits, different meaning)
        public const ushort NTSECURITY = 0x0400;
        public const ushort CONTINUED_PREV = 0x1000;
        public const ushort CONTINUED_NEXT = 0x2000;
        public const ushort FILE_ENCRYPTED = 0x4000; // file encrypted (file header)
    }
}
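Because main and file headers reuse the same bit positions with different meanings (0x4000 is LOCKED_MAIN in a main header but FILE_ENCRYPTED in a file header), flag tests have to know which header type they are reading; a short sketch of the distinction:

```csharp
internal static class AceFlagExample
{
    private const ushort Bit14 = 0x4000;

    // Interpretation of the same bit depends on the header kind.
    public static string Describe(ushort headerFlags, bool isMainHeader) =>
        (headerFlags & Bit14) == 0
            ? "bit 14 clear"
            : isMainHeader
                ? "archive is locked (LOCKED_MAIN)"
                : "file data is encrypted (FILE_ENCRYPTED)";
}
```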
@@ -1,22 +0,0 @@
namespace SharpCompress.Common.Ace.Headers
{
    /// <summary>
    /// Host OS type
    /// </summary>
    public enum HostOS
    {
        MsDos = 0,
        Os2,
        Windows,
        Unix,
        MacOs,
        WinNt,
        Primos,
        AppleGs,
        Atari,
        Vax,
        Amiga,
        Next,
        Unknown,
    }
}
src/SharpCompress/Common/ArchiveExtractionEventArgs.cs (new file)
@@ -0,0 +1,10 @@
+using System;
+
+namespace SharpCompress.Common;
+
+public class ArchiveExtractionEventArgs<T> : EventArgs
+{
+    internal ArchiveExtractionEventArgs(T entry) => Item = entry;
+
+    public T Item { get; }
+}
@@ -9,5 +9,4 @@ public enum ArchiveType
     GZip,
     Arc,
     Arj,
-    Ace,
 }
@@ -34,13 +34,14 @@ namespace SharpCompress.Common.Arj.Headers
        public byte[] ReadHeader(Stream stream)
        {
            // check for magic bytes
-           var magic = new byte[2];
+           Span<byte> magic = stackalloc byte[2];
            if (stream.Read(magic) != 2)
            {
                return Array.Empty<byte>();
            }

-           if (!CheckMagicBytes(magic))
+           var magicValue = (ushort)(magic[0] | magic[1] << 8);
+           if (magicValue != ARJ_MAGIC)
            {
                throw new InvalidDataException("Not an ARJ file (wrong magic bytes)");
            }
@@ -137,22 +138,5 @@ namespace SharpCompress.Common.Arj.Headers
                ? (FileType)value
                : Headers.FileType.Unknown;
        }
-
-       public static bool IsArchive(Stream stream)
-       {
-           var bytes = new byte[2];
-           if (stream.Read(bytes, 0, 2) != 2)
-           {
-               return false;
-           }
-
-           return CheckMagicBytes(bytes);
-       }
-
-       protected static bool CheckMagicBytes(byte[] headerBytes)
-       {
-           var magicValue = (ushort)(headerBytes[0] | headerBytes[1] << 8);
-           return magicValue == ARJ_MAGIC;
-       }
    }
}
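The inlined check assembles the two signature bytes little-endian. The conventional ARJ signature bytes are 0x60 0xEA, which read little-endian give 0xEA60; the `ARJ_MAGIC` constant itself is defined outside this hunk, so that value is an assumption here. A sketch of the same test:

```csharp
using System;

internal static class ArjMagicExample
{
    // 0xEA60 is the assumed value of ARJ_MAGIC (signature bytes 0x60 0xEA,
    // assembled little-endian); the constant's definition is not in this hunk.
    public static bool LooksLikeArj(ReadOnlySpan<byte> firstTwoBytes)
    {
        if (firstTwoBytes.Length < 2)
        {
            return false;
        }
        var magicValue = (ushort)(firstTwoBytes[0] | firstTwoBytes[1] << 8);
        return magicValue == 0xEA60;
    }
}
```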
src/SharpCompress/Common/CompressedBytesReadEventArgs.cs (new file)
@@ -0,0 +1,25 @@
+using System;
+
+namespace SharpCompress.Common;
+
+public sealed class CompressedBytesReadEventArgs : EventArgs
+{
+    public CompressedBytesReadEventArgs(
+        long compressedBytesRead,
+        long currentFilePartCompressedBytesRead
+    )
+    {
+        CompressedBytesRead = compressedBytesRead;
+        CurrentFilePartCompressedBytesRead = currentFilePartCompressedBytesRead;
+    }
+
+    /// <summary>
+    /// Compressed bytes read for the current entry
+    /// </summary>
+    public long CompressedBytesRead { get; }
+
+    /// <summary>
+    /// Current file part read for Multipart files (e.g. Rar)
+    /// </summary>
+    public long CurrentFilePartCompressedBytesRead { get; }
+}
@@ -30,5 +30,4 @@ public enum CompressionType
     Distilled,
     ZStandard,
     ArjLZ77,
-    AceLZ77,
 }
@@ -128,7 +128,7 @@ internal static class ExtractionMethods
        IEntry entry,
        string destinationDirectory,
        ExtractionOptions? options,
-       Func<string, ExtractionOptions?, CancellationToken, Task> writeAsync,
+       Func<string, ExtractionOptions?, Task> writeAsync,
        CancellationToken cancellationToken = default
    )
    {
@@ -189,7 +189,7 @@ internal static class ExtractionMethods
                "Entry is trying to write a file outside of the destination directory."
            );
        }
-       await writeAsync(destinationFileName, options, cancellationToken).ConfigureAwait(false);
+       await writeAsync(destinationFileName, options).ConfigureAwait(false);
    }
    else if (options.ExtractFullPath && !Directory.Exists(destinationFileName))
    {
@@ -201,7 +201,7 @@ internal static class ExtractionMethods
        IEntry entry,
        string destinationFileName,
        ExtractionOptions? options,
-       Func<string, FileMode, CancellationToken, Task> openAndWriteAsync,
+       Func<string, FileMode, Task> openAndWriteAsync,
        CancellationToken cancellationToken = default
    )
    {
@@ -225,8 +225,7 @@ internal static class ExtractionMethods
            fm = FileMode.CreateNew;
        }

-       await openAndWriteAsync(destinationFileName, fm, cancellationToken)
-           .ConfigureAwait(false);
+       await openAndWriteAsync(destinationFileName, fm).ConfigureAwait(false);
        entry.PreserveExtractionOptions(destinationFileName, options);
    }
}
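Dropping the `CancellationToken` from the delegate type pushes cancellation into the caller's closure, which is exactly what the `WriteToDirectoryAsync` extension earlier in this diff does. A minimal sketch of the adaptation, with hypothetical names:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

internal static class DelegateAdaptionExample
{
    // Old shape: the callback received the token as an explicit parameter.
    private static Task WriteOld(string path, CancellationToken ct) => Task.CompletedTask;

    // New shape: the token is captured by the closure instead.
    public static Func<string, Task> Adapt(CancellationToken ct) =>
        path => WriteOld(path, ct);
}
```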
src/SharpCompress/Common/FilePartExtractionBeginEventArgs.cs (new file)
@@ -0,0 +1,28 @@
+using System;
+
+namespace SharpCompress.Common;
+
+public sealed class FilePartExtractionBeginEventArgs : EventArgs
+{
+    public FilePartExtractionBeginEventArgs(string name, long size, long compressedSize)
+    {
+        Name = name;
+        Size = size;
+        CompressedSize = compressedSize;
+    }
+
+    /// <summary>
+    /// File name for the part for the current entry
+    /// </summary>
+    public string Name { get; }
+
+    /// <summary>
+    /// Uncompressed size of the current entry in the part
+    /// </summary>
+    public long Size { get; }
+
+    /// <summary>
+    /// Compressed size of the current entry in the part
+    /// </summary>
+    public long CompressedSize { get; }
+}
src/SharpCompress/Common/IExtractionListener.cs (new file)
@@ -0,0 +1,7 @@
+namespace SharpCompress.Common;
+
+public interface IExtractionListener
+{
+    void FireFilePartExtractionBegin(string name, long size, long compressedSize);
+    void FireCompressedBytesRead(long currentPartCompressedBytes, long compressedReadBytes);
+}
@@ -1,43 +0,0 @@
namespace SharpCompress.Common;

/// <summary>
/// Represents progress information for compression or extraction operations.
/// </summary>
public sealed class ProgressReport
{
    /// <summary>
    /// Initializes a new instance of the <see cref="ProgressReport"/> class.
    /// </summary>
    /// <param name="entryPath">The path of the entry being processed.</param>
    /// <param name="bytesTransferred">Number of bytes transferred so far.</param>
    /// <param name="totalBytes">Total bytes to be transferred, or null if unknown.</param>
    public ProgressReport(string entryPath, long bytesTransferred, long? totalBytes)
    {
        EntryPath = entryPath;
        BytesTransferred = bytesTransferred;
        TotalBytes = totalBytes;
    }

    /// <summary>
    /// Gets the path of the entry being processed.
    /// </summary>
    public string EntryPath { get; }

    /// <summary>
    /// Gets the number of bytes transferred so far.
    /// </summary>
    public long BytesTransferred { get; }

    /// <summary>
    /// Gets the total number of bytes to be transferred, or null if unknown.
    /// </summary>
    public long? TotalBytes { get; }

    /// <summary>
    /// Gets the progress percentage (0-100), or null if total bytes is unknown.
    /// </summary>
    public double? PercentComplete =>
        TotalBytes.HasValue && TotalBytes.Value > 0
            ? (double)BytesTransferred / TotalBytes.Value * 100
            : null;
}
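The null-propagating percentage logic removed here is easy to restate on its own; a minimal sketch of the same computation:

```csharp
using System;

internal static class PercentCompleteExample
{
    // Mirrors the removed ProgressReport.PercentComplete: null when the total
    // is unknown or zero, otherwise a 0-100 percentage.
    public static double? PercentComplete(long bytesTransferred, long? totalBytes) =>
        totalBytes is > 0 ? (double)bytesTransferred / totalBytes.Value * 100 : null;

    public static void Demo()
    {
        Console.WriteLine(PercentComplete(512, 2048)); // 25
        Console.WriteLine(PercentComplete(512, null)); // prints nothing (null)
    }
}
```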
src/SharpCompress/Common/ReaderExtractionEventArgs.cs (new file)
@@ -0,0 +1,17 @@
+using System;
+using SharpCompress.Readers;
+
+namespace SharpCompress.Common;
+
+public sealed class ReaderExtractionEventArgs<T> : EventArgs
+{
+    internal ReaderExtractionEventArgs(T entry, ReaderProgress? readerProgress = null)
+    {
+        Item = entry;
+        ReaderProgress = readerProgress;
+    }
+
+    public T Item { get; }
+
+    public ReaderProgress? ReaderProgress { get; }
+}
@@ -15,10 +15,6 @@ internal enum ExtraDataType : ushort
     UnicodePathExtraField = 0x7075,
     Zip64ExtendedInformationExtraField = 0x0001,
     UnixTimeExtraField = 0x5455,
-
-    // SOZip (Seek-Optimized ZIP) extra field
-    // Used to link a main file to its SOZip index file
-    SOZip = 0x564B,
 }

 internal class ExtraData
@@ -237,44 +233,6 @@ internal sealed class UnixTimeExtraField : ExtraData
     }
 }

-/// <summary>
-/// SOZip (Seek-Optimized ZIP) extra field that links a main file to its index file.
-/// The extra field contains the offset within the ZIP file where the index entry's
-/// local header is located.
-/// </summary>
-internal sealed class SOZipExtraField : ExtraData
-{
-    public SOZipExtraField(ExtraDataType type, ushort length, byte[] dataBytes)
-        : base(type, length, dataBytes) { }
-
-    /// <summary>
-    /// Gets the offset to the SOZip index file's local entry header within the ZIP archive.
-    /// </summary>
-    internal ulong IndexOffset
-    {
-        get
-        {
-            if (DataBytes is null || DataBytes.Length < 8)
-            {
-                return 0;
-            }
-            return BinaryPrimitives.ReadUInt64LittleEndian(DataBytes);
-        }
-    }
-
-    /// <summary>
-    /// Creates a SOZip extra field with the specified index offset
-    /// </summary>
-    /// <param name="indexOffset">The offset to the index file's local entry header</param>
-    /// <returns>A new SOZipExtraField instance</returns>
-    public static SOZipExtraField Create(ulong indexOffset)
-    {
-        var data = new byte[8];
-        BinaryPrimitives.WriteUInt64LittleEndian(data, indexOffset);
-        return new SOZipExtraField(ExtraDataType.SOZip, 8, data);
-    }
-}
-
 internal static class LocalEntryHeaderExtraFactory
 {
     internal static ExtraData Create(ExtraDataType type, ushort length, byte[] extraData) =>
@@ -288,7 +246,6 @@ internal static class LocalEntryHeaderExtraFactory
             ExtraDataType.Zip64ExtendedInformationExtraField =>
                 new Zip64ExtendedInformationExtraField(type, length, extraData),
             ExtraDataType.UnixTimeExtraField => new UnixTimeExtraField(type, length, extraData),
-            ExtraDataType.SOZip => new SOZipExtraField(type, length, extraData),
             _ => new ExtraData(type, length, extraData),
         };
 }
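The removed extra field carried nothing more than an 8-byte little-endian offset; a short sketch of the same parse with `BinaryPrimitives`:

```csharp
using System;
using System.Buffers.Binary;

internal static class ExtraFieldOffsetExample
{
    // Parses the 8-byte little-endian payload the removed SOZipExtraField held.
    public static ulong ReadIndexOffset(ReadOnlySpan<byte> dataBytes) =>
        dataBytes.Length >= 8 ? BinaryPrimitives.ReadUInt64LittleEndian(dataBytes) : 0;

    public static void Demo()
    {
        Span<byte> payload = stackalloc byte[8];
        BinaryPrimitives.WriteUInt64LittleEndian(payload, 0x1234UL);
        Console.WriteLine(ReadIndexOffset(payload)); // 4660
    }
}
```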
@@ -1,150 +0,0 @@
using System;
using System.Collections.Generic;
using System.IO;
using SharpCompress.Compressors;
using SharpCompress.Compressors.Deflate;

namespace SharpCompress.Common.Zip.SOZip;

/// <summary>
/// A Deflate stream that inserts sync flush points at regular intervals
/// to enable random access (SOZip optimization).
/// </summary>
internal sealed class SOZipDeflateStream : Stream
{
    private readonly DeflateStream _deflateStream;
    private readonly Stream _baseStream;
    private readonly uint _chunkSize;
    private readonly List<ulong> _compressedOffsets = new();
    private readonly long _baseOffset;
    private long _uncompressedBytesWritten;
    private long _nextSyncPoint;
    private bool _disposed;

    /// <summary>
    /// Creates a new SOZip Deflate stream
    /// </summary>
    /// <param name="baseStream">The underlying stream to write to</param>
    /// <param name="compressionLevel">The compression level</param>
    /// <param name="chunkSize">The chunk size for sync flush points</param>
    public SOZipDeflateStream(Stream baseStream, CompressionLevel compressionLevel, int chunkSize)
    {
        _baseStream = baseStream;
        _chunkSize = (uint)chunkSize;
        _baseOffset = baseStream.Position;
        _nextSyncPoint = chunkSize;

        // Record the first offset (start of compressed data)
        _compressedOffsets.Add(0);

        _deflateStream = new DeflateStream(baseStream, CompressionMode.Compress, compressionLevel);
    }

    /// <summary>
    /// Gets the array of compressed offsets recorded during writing
    /// </summary>
    public ulong[] CompressedOffsets => _compressedOffsets.ToArray();

    /// <summary>
    /// Gets the total number of uncompressed bytes written
    /// </summary>
    public ulong UncompressedBytesWritten => (ulong)_uncompressedBytesWritten;

    /// <summary>
    /// Gets the total number of compressed bytes written
    /// </summary>
    public ulong CompressedBytesWritten => (ulong)(_baseStream.Position - _baseOffset);

    /// <summary>
    /// Gets the chunk size being used
    /// </summary>
    public uint ChunkSize => _chunkSize;

    public override bool CanRead => false;

    public override bool CanSeek => false;

    public override bool CanWrite => !_disposed && _deflateStream.CanWrite;

    public override long Length => throw new NotSupportedException();

    public override long Position
    {
        get => throw new NotSupportedException();
        set => throw new NotSupportedException();
    }

    public override void Flush() => _deflateStream.Flush();

    public override int Read(byte[] buffer, int offset, int count) =>
        throw new NotSupportedException();

    public override long Seek(long offset, SeekOrigin origin) => throw new NotSupportedException();

    public override void SetLength(long value) => throw new NotSupportedException();

    public override void Write(byte[] buffer, int offset, int count)
    {
        if (_disposed)
        {
            throw new ObjectDisposedException(nameof(SOZipDeflateStream));
        }

        var remaining = count;
        var currentOffset = offset;

        while (remaining > 0)
        {
            // Calculate how many bytes until the next sync point
            var bytesUntilSync = (int)(_nextSyncPoint - _uncompressedBytesWritten);

            if (bytesUntilSync <= 0)
            {
                // We've reached a sync point - perform sync flush
                PerformSyncFlush();
                continue;
            }

            // Write up to the next sync point
            var bytesToWrite = Math.Min(remaining, bytesUntilSync);
            _deflateStream.Write(buffer, currentOffset, bytesToWrite);

            _uncompressedBytesWritten += bytesToWrite;
            currentOffset += bytesToWrite;
            remaining -= bytesToWrite;
        }
    }

    private void PerformSyncFlush()
    {
        // Flush with Z_SYNC_FLUSH to create an independent block
        var originalFlushMode = _deflateStream.FlushMode;
        _deflateStream.FlushMode = FlushType.Sync;
        _deflateStream.Flush();
        _deflateStream.FlushMode = originalFlushMode;

        // Record the compressed offset for this sync point
        var compressedOffset = (ulong)(_baseStream.Position - _baseOffset);
        _compressedOffsets.Add(compressedOffset);

        // Set the next sync point
        _nextSyncPoint += _chunkSize;
    }

    protected override void Dispose(bool disposing)
    {
        if (_disposed)
        {
            return;
        }

        _disposed = true;

        if (disposing)
        {
            _deflateStream.Dispose();
        }

        base.Dispose(disposing);
    }
}
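Each offset recorded above marks a point where the compressed stream can be re-entered, so serving a random read means mapping the uncompressed position to a chunk and inflating from that chunk's offset. A sketch of the lookup math under those assumptions:

```csharp
using System;

internal static class SozipChunkLookupExample
{
    // Maps an uncompressed byte position to the chunk that contains it and the
    // compressed offset where inflation must restart (offsets[i] corresponds to
    // the sync-flush point at uncompressed position i * chunkSize).
    public static (int ChunkIndex, ulong CompressedOffset, long SkipWithinChunk) Locate(
        long uncompressedPosition,
        uint chunkSize,
        ulong[] compressedOffsets
    )
    {
        var index = (int)(uncompressedPosition / chunkSize);
        if (index >= compressedOffsets.Length)
        {
            throw new ArgumentOutOfRangeException(nameof(uncompressedPosition));
        }
        return (index, compressedOffsets[index], uncompressedPosition % chunkSize);
    }
}
```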
@@ -1,367 +0,0 @@
using System;
using System.Buffers.Binary;
using System.IO;

namespace SharpCompress.Common.Zip.SOZip;

/// <summary>
/// Represents a SOZip (Seek-Optimized ZIP) index that enables random access
/// within DEFLATE-compressed files by storing offsets to sync flush points.
/// </summary>
/// <remarks>
/// SOZip index files (.sozip.idx) contain a header followed by offset entries
/// that point to the beginning of independently decompressable DEFLATE blocks.
/// </remarks>
[CLSCompliant(false)]
public sealed class SOZipIndex
{
    /// <summary>
    /// SOZip index file magic number: "SOZo" (0x534F5A6F)
    /// </summary>
    public const uint SOZIP_MAGIC = 0x6F5A4F53; // "SOZo" little-endian

    /// <summary>
    /// Current SOZip specification version
    /// </summary>
    public const byte SOZIP_VERSION = 1;

    /// <summary>
    /// Index file extension suffix
    /// </summary>
    public const string INDEX_EXTENSION = ".sozip.idx";

    /// <summary>
    /// Default chunk size in bytes (32KB)
    /// </summary>
    public const uint DEFAULT_CHUNK_SIZE = 32768;

    /// <summary>
    /// The version of the SOZip index format
    /// </summary>
    public byte Version { get; private set; }

    /// <summary>
    /// Size of each uncompressed chunk in bytes
    /// </summary>
    public uint ChunkSize { get; private set; }

    /// <summary>
    /// Total uncompressed size of the file
    /// </summary>
    public ulong UncompressedSize { get; private set; }

    /// <summary>
    /// Total compressed size of the file
    /// </summary>
    public ulong CompressedSize { get; private set; }

    /// <summary>
    /// Number of offset entries in the index
    /// </summary>
    public uint OffsetCount { get; private set; }

    /// <summary>
    /// Array of compressed offsets for each chunk
    /// </summary>
    public ulong[] CompressedOffsets { get; private set; } = Array.Empty<ulong>();

    /// <summary>
    /// Creates a new empty SOZip index
    /// </summary>
    public SOZipIndex() { }

    /// <summary>
    /// Creates a new SOZip index with specified parameters
    /// </summary>
    /// <param name="chunkSize">Size of each uncompressed chunk</param>
    /// <param name="uncompressedSize">Total uncompressed size</param>
    /// <param name="compressedSize">Total compressed size</param>
    /// <param name="compressedOffsets">Array of compressed offsets</param>
    public SOZipIndex(
        uint chunkSize,
        ulong uncompressedSize,
        ulong compressedSize,
        ulong[] compressedOffsets
    )
    {
        Version = SOZIP_VERSION;
        ChunkSize = chunkSize;
        UncompressedSize = uncompressedSize;
        CompressedSize = compressedSize;
        OffsetCount = (uint)compressedOffsets.Length;
        CompressedOffsets = compressedOffsets;
    }

    /// <summary>
    /// Reads a SOZip index from a stream
    /// </summary>
    /// <param name="stream">The stream containing the index data</param>
    /// <returns>A parsed SOZipIndex instance</returns>
    /// <exception cref="InvalidDataException">If the stream doesn't contain valid SOZip index data</exception>
    public static SOZipIndex Read(Stream stream)
    {
        var index = new SOZipIndex();
        Span<byte> header = stackalloc byte[4];

        // Read magic number
        if (stream.Read(header) != 4)
        {
            throw new InvalidDataException("Invalid SOZip index: unable to read magic number");
        }

        var magic = BinaryPrimitives.ReadUInt32LittleEndian(header);
        if (magic != SOZIP_MAGIC)
        {
            throw new InvalidDataException(
                $"Invalid SOZip index: magic number mismatch (expected 0x{SOZIP_MAGIC:X8}, got 0x{magic:X8})"
            );
        }

        // Read version
        var versionByte = stream.ReadByte();
        if (versionByte < 0)
        {
            throw new InvalidDataException("Invalid SOZip index: unable to read version");
        }
        index.Version = (byte)versionByte;

        if (index.Version != SOZIP_VERSION)
        {
            throw new InvalidDataException(
                $"Unsupported SOZip index version: {index.Version} (expected {SOZIP_VERSION})"
            );
        }

        // Read reserved byte (padding)
        stream.ReadByte();

        // Read chunk size (2 bytes)
        Span<byte> buf2 = stackalloc byte[2];
        if (stream.Read(buf2) != 2)
        {
            throw new InvalidDataException("Invalid SOZip index: unable to read chunk size");
        }

        // Chunk size is stored as (actual_size / 1024) - 1
        var chunkSizeEncoded = BinaryPrimitives.ReadUInt16LittleEndian(buf2);
        index.ChunkSize = ((uint)chunkSizeEncoded + 1) * 1024;

        // Read uncompressed size (8 bytes)
        Span<byte> buf8 = stackalloc byte[8];
        if (stream.Read(buf8) != 8)
        {
            throw new InvalidDataException("Invalid SOZip index: unable to read uncompressed size");
        }
        index.UncompressedSize = BinaryPrimitives.ReadUInt64LittleEndian(buf8);

        // Read compressed size (8 bytes)
        if (stream.Read(buf8) != 8)
        {
            throw new InvalidDataException("Invalid SOZip index: unable to read compressed size");
        }
        index.CompressedSize = BinaryPrimitives.ReadUInt64LittleEndian(buf8);

        // Read offset count (4 bytes)
        if (stream.Read(header) != 4)
        {
            throw new InvalidDataException("Invalid SOZip index: unable to read offset count");
        }
        index.OffsetCount = BinaryPrimitives.ReadUInt32LittleEndian(header);

        // Read offsets
        index.CompressedOffsets = new ulong[index.OffsetCount];
        for (uint i = 0; i < index.OffsetCount; i++)
        {
            if (stream.Read(buf8) != 8)
            {
                throw new InvalidDataException($"Invalid SOZip index: unable to read offset {i}");
            }
            index.CompressedOffsets[i] = BinaryPrimitives.ReadUInt64LittleEndian(buf8);
        }

        return index;
    }

    /// <summary>
    /// Reads a SOZip index from a byte array
    /// </summary>
    /// <param name="data">The byte array containing the index data</param>
    /// <returns>A parsed SOZipIndex instance</returns>
    public static SOZipIndex Read(byte[] data)
    {
        using var stream = new MemoryStream(data);
        return Read(stream);
    }

    /// <summary>
    /// Writes this SOZip index to a stream
    /// </summary>
    /// <param name="stream">The stream to write to</param>
    public void Write(Stream stream)
    {
        Span<byte> buf8 = stackalloc byte[8];

        // Write magic number
        BinaryPrimitives.WriteUInt32LittleEndian(buf8, SOZIP_MAGIC);
        stream.Write(buf8.Slice(0, 4));

        // Write version
        stream.WriteByte(SOZIP_VERSION);

        // Write reserved byte (padding)
        stream.WriteByte(0);

        // Write chunk size (encoded as (size/1024)-1)
        var chunkSizeEncoded = (ushort)((ChunkSize / 1024) - 1);
        BinaryPrimitives.WriteUInt16LittleEndian(buf8, chunkSizeEncoded);
        stream.Write(buf8.Slice(0, 2));

        // Write uncompressed size
        BinaryPrimitives.WriteUInt64LittleEndian(buf8, UncompressedSize);
        stream.Write(buf8);

        // Write compressed size
        BinaryPrimitives.WriteUInt64LittleEndian(buf8, CompressedSize);
        stream.Write(buf8);

        // Write offset count
        BinaryPrimitives.WriteUInt32LittleEndian(buf8, OffsetCount);
        stream.Write(buf8.Slice(0, 4));

        // Write offsets
        foreach (var offset in CompressedOffsets)
        {
            BinaryPrimitives.WriteUInt64LittleEndian(buf8, offset);
            stream.Write(buf8);
        }
    }

    /// <summary>
    /// Converts this SOZip index to a byte array
    /// </summary>
    /// <returns>Byte array containing the serialized index</returns>
    public byte[] ToByteArray()
    {
        using var stream = new MemoryStream();
        Write(stream);
        return stream.ToArray();
    }

    /// <summary>
    /// Gets the index of the chunk that contains the specified uncompressed offset
    /// </summary>
    /// <param name="uncompressedOffset">The uncompressed byte offset</param>
    /// <returns>The chunk index</returns>
    public int GetChunkIndex(long uncompressedOffset)
    {
        if (uncompressedOffset < 0 || (ulong)uncompressedOffset >= UncompressedSize)
        {
            throw new ArgumentOutOfRangeException(
                nameof(uncompressedOffset),
                "Offset is out of range"
            );
        }

        return (int)((ulong)uncompressedOffset / ChunkSize);
    }

    /// <summary>
    /// Gets the compressed offset for the specified chunk index
    /// </summary>
    /// <param name="chunkIndex">The chunk index</param>
    /// <returns>The compressed byte offset for the start of the chunk</returns>
    public ulong GetCompressedOffset(int chunkIndex)
    {
        if (chunkIndex < 0 || chunkIndex >= CompressedOffsets.Length)
        {
            throw new ArgumentOutOfRangeException(
                nameof(chunkIndex),
                "Chunk index is out of range"
            );
        }

        return CompressedOffsets[chunkIndex];
    }

    /// <summary>
    /// Gets the uncompressed offset for the start of the specified chunk
    /// </summary>
    /// <param name="chunkIndex">The chunk index</param>
    /// <returns>The uncompressed byte offset for the start of the chunk</returns>
    public ulong GetUncompressedOffset(int chunkIndex)
    {
        if (chunkIndex < 0 || chunkIndex >= CompressedOffsets.Length)
        {
            throw new ArgumentOutOfRangeException(
                nameof(chunkIndex),
                "Chunk index is out of range"
            );
        }

        return (ulong)chunkIndex * ChunkSize;
    }

    /// <summary>
    /// Gets the name of the SOZip index file for a given entry name
    /// </summary>
    /// <param name="entryName">The main entry name</param>
    /// <returns>The index file name (hidden with .sozip.idx extension)</returns>
    public static string GetIndexFileName(string entryName)
    {
        var directory = Path.GetDirectoryName(entryName);
        var fileName = Path.GetFileName(entryName);

        // The index file is hidden (prefixed with .)
        var indexFileName = $".{fileName}{INDEX_EXTENSION}";

        if (string.IsNullOrEmpty(directory))
        {
            return indexFileName;
        }

        return Path.Combine(directory, indexFileName).Replace('\\', '/');
    }

    /// <summary>
    /// Checks if a file name is a SOZip index file
    /// </summary>
    /// <param name="fileName">The file name to check</param>
    /// <returns>True if the file is a SOZip index file</returns>
    public static bool IsIndexFile(string fileName)
    {
        if (string.IsNullOrEmpty(fileName))
        {
            return false;
        }

        var name = Path.GetFileName(fileName);
        return name.StartsWith(".", StringComparison.Ordinal)
            && name.EndsWith(INDEX_EXTENSION, StringComparison.OrdinalIgnoreCase);
    }

    /// <summary>
    /// Gets the main file name from a SOZip index file name
    /// </summary>
    /// <param name="indexFileName">The index file name</param>
    /// <returns>The main file name, or null if not a valid index file</returns>
    public static string? GetMainFileName(string indexFileName)
    {
        if (!IsIndexFile(indexFileName))
        {
            return null;
        }

        var directory = Path.GetDirectoryName(indexFileName);
        var name = Path.GetFileName(indexFileName);

        // Remove leading '.' and trailing '.sozip.idx'
        var mainName = name.Substring(1, name.Length - 1 - INDEX_EXTENSION.Length);

        if (string.IsNullOrEmpty(directory))
        {
            return mainName;
        }

        return Path.Combine(directory, mainName).Replace('\\', '/');
    }
}
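The on-disk layout read and written above is small enough to sanity-check by hand: the two-byte chunk-size field stores (size / 1024) - 1, and each chunk's compressed offset is an 8-byte little-endian value. A minimal sketch (not part of this changeset) that round-trips an index through the format using only the SOZipIndex API shown in the deleted file; all sizes and offsets are illustrative:

using SharpCompress.Common.Zip.SOZip;

var index = new SOZipIndex(
    chunkSize: SOZipIndex.DEFAULT_CHUNK_SIZE, // 32768; stored on disk as (32768 / 1024) - 1 = 31
    uncompressedSize: 100_000,
    compressedSize: 40_000,
    compressedOffsets: new ulong[] { 0, 13_000, 26_500, 39_000 }
);

// Round-trip through the binary format described above.
var parsed = SOZipIndex.Read(index.ToByteArray());

// Random access: locate the DEFLATE sync block covering uncompressed offset 70_000.
var chunk = parsed.GetChunkIndex(70_000); // 70_000 / 32_768 => chunk 2
var seekTo = parsed.GetCompressedOffset(chunk); // 26_500 in this example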
@@ -2,7 +2,6 @@ using System;
using System.Collections.Generic;
using System.Linq;
using SharpCompress.Common.Zip.Headers;
using SharpCompress.Common.Zip.SOZip;

namespace SharpCompress.Common.Zip;

@@ -12,7 +11,7 @@ public class ZipEntry : Entry

    internal ZipEntry(ZipFilePart? filePart)
    {
        if (filePart is null)
        if (filePart == null)
        {
            return;
        }
@@ -89,24 +88,4 @@ public class ZipEntry : Entry
    public override int? Attrib => (int?)_filePart?.Header.ExternalFileAttributes;

    public string? Comment => _filePart?.Header.Comment;

    /// <summary>
    /// Gets a value indicating whether this entry has SOZip (Seek-Optimized ZIP) support.
    /// A SOZip entry has an associated index file that enables random access within
    /// the compressed data.
    /// </summary>
    public bool IsSozip => _filePart?.Header.Extra.Any(e => e.Type == ExtraDataType.SOZip) ?? false;

    /// <summary>
    /// Gets a value indicating whether this entry is a SOZip index file.
    /// Index files are hidden files with a .sozip.idx extension that contain
    /// offsets into the main compressed file.
    /// </summary>
    public bool IsSozipIndexFile => Key is not null && SOZipIndex.IsIndexFile(Key);

    /// <summary>
    /// Gets the SOZip extra field data, if present.
    /// </summary>
    internal SOZipExtraField? SOZipExtra =>
        _filePart?.Header.Extra.OfType<SOZipExtraField>().FirstOrDefault();
}
@@ -428,9 +428,7 @@ public class LzmaStream : Stream, IStreamStack
    private async Task DecodeChunkHeaderAsync(CancellationToken cancellationToken = default)
    {
        var controlBuffer = new byte[1];
        await _inputStream
            .ReadExactlyAsync(controlBuffer, 0, 1, cancellationToken)
            .ConfigureAwait(false);
        await _inputStream.ReadAsync(controlBuffer, 0, 1, cancellationToken).ConfigureAwait(false);
        var control = controlBuffer[0];
        _inputPosition++;

@@ -457,15 +455,11 @@ public class LzmaStream : Stream, IStreamStack

        _availableBytes = (control & 0x1F) << 16;
        var buffer = new byte[2];
        await _inputStream
            .ReadExactlyAsync(buffer, 0, 2, cancellationToken)
            .ConfigureAwait(false);
        await _inputStream.ReadAsync(buffer, 0, 2, cancellationToken).ConfigureAwait(false);
        _availableBytes += (buffer[0] << 8) + buffer[1] + 1;
        _inputPosition += 2;

        await _inputStream
            .ReadExactlyAsync(buffer, 0, 2, cancellationToken)
            .ConfigureAwait(false);
        await _inputStream.ReadAsync(buffer, 0, 2, cancellationToken).ConfigureAwait(false);
        _rangeDecoderLimit = (buffer[0] << 8) + buffer[1] + 1;
        _inputPosition += 2;

@@ -473,7 +467,7 @@ public class LzmaStream : Stream, IStreamStack
        {
            _needProps = false;
            await _inputStream
                .ReadExactlyAsync(controlBuffer, 0, 1, cancellationToken)
                .ReadAsync(controlBuffer, 0, 1, cancellationToken)
                .ConfigureAwait(false);
            Properties[0] = controlBuffer[0];
            _inputPosition++;
@@ -501,9 +495,7 @@ public class LzmaStream : Stream, IStreamStack
        {
            _uncompressedChunk = true;
            var buffer = new byte[2];
            await _inputStream
                .ReadExactlyAsync(buffer, 0, 2, cancellationToken)
                .ConfigureAwait(false);
            await _inputStream.ReadAsync(buffer, 0, 2, cancellationToken).ConfigureAwait(false);
            _availableBytes = (buffer[0] << 8) + buffer[1] + 1;
            _inputPosition += 2;
        }
@@ -37,8 +37,18 @@ internal sealed class MultiVolumeReadOnlyStream : Stream, IStreamStack
    private IEnumerator<RarFilePart> filePartEnumerator;
    private Stream currentStream;

    internal MultiVolumeReadOnlyStream(IEnumerable<RarFilePart> parts)
    private readonly IExtractionListener streamListener;

    private long currentPartTotalReadBytes;
    private long currentEntryTotalReadBytes;

    internal MultiVolumeReadOnlyStream(
        IEnumerable<RarFilePart> parts,
        IExtractionListener streamListener
    )
    {
        this.streamListener = streamListener;

        filePartEnumerator = parts.GetEnumerator();
        filePartEnumerator.MoveNext();
        InitializeNextFilePart();
@@ -71,7 +81,15 @@ internal sealed class MultiVolumeReadOnlyStream : Stream, IStreamStack
        currentPosition = 0;
        currentStream = filePartEnumerator.Current.GetCompressedStream();

        currentPartTotalReadBytes = 0;

        CurrentCrc = filePartEnumerator.Current.FileHeader.FileCrc;

        streamListener.FireFilePartExtractionBegin(
            filePartEnumerator.Current.FilePartName,
            filePartEnumerator.Current.FileHeader.CompressedSize,
            filePartEnumerator.Current.FileHeader.UncompressedSize
        );
    }

    public override int Read(byte[] buffer, int offset, int count)
@@ -123,6 +141,12 @@ internal sealed class MultiVolumeReadOnlyStream : Stream, IStreamStack
                break;
            }
        }
        currentPartTotalReadBytes += totalRead;
        currentEntryTotalReadBytes += totalRead;
        streamListener.FireCompressedBytesRead(
            currentPartTotalReadBytes,
            currentEntryTotalReadBytes
        );
        return totalRead;
    }

@@ -182,6 +206,12 @@ internal sealed class MultiVolumeReadOnlyStream : Stream, IStreamStack
                break;
            }
        }
        currentPartTotalReadBytes += totalRead;
        currentEntryTotalReadBytes += totalRead;
        streamListener.FireCompressedBytesRead(
            currentPartTotalReadBytes,
            currentEntryTotalReadBytes
        );
        return totalRead;
    }

@@ -240,6 +270,12 @@ internal sealed class MultiVolumeReadOnlyStream : Stream, IStreamStack
                break;
            }
        }
        currentPartTotalReadBytes += totalRead;
        currentEntryTotalReadBytes += totalRead;
        streamListener.FireCompressedBytesRead(
            currentPartTotalReadBytes,
            currentEntryTotalReadBytes
        );
        return totalRead;
    }
#endif
@@ -134,7 +134,7 @@ internal class RarStream : Stream, IStreamStack
            fetch = false;
        }
        _position += outTotal;
        if (count > 0 && outTotal == 0 && _position < Length)
        if (count > 0 && outTotal == 0 && _position != Length)
        {
            // sanity check, eg if we try to decompress a redir entry
            throw new InvalidOperationException(
@@ -179,7 +179,7 @@ internal class RarStream : Stream, IStreamStack
            fetch = false;
        }
        _position += outTotal;
        if (count > 0 && outTotal == 0 && _position < Length)
        if (count > 0 && outTotal == 0 && _position != Length)
        {
            // sanity check, eg if we try to decompress a redir entry
            throw new InvalidOperationException(

@@ -1,37 +0,0 @@
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using SharpCompress.Common;
using SharpCompress.Common.Ace.Headers;
using SharpCompress.Readers;
using SharpCompress.Readers.Ace;

namespace SharpCompress.Factories
{
    public class AceFactory : Factory, IReaderFactory
    {
        public override string Name => "Ace";

        public override ArchiveType? KnownArchiveType => ArchiveType.Ace;

        public override IEnumerable<string> GetSupportedExtensions()
        {
            yield return "ace";
        }

        public override bool IsArchive(
            Stream stream,
            string? password = null,
            int bufferSize = ReaderOptions.DefaultBufferSize
        )
        {
            return AceHeader.IsArchive(stream);
        }

        public IReader OpenReader(Stream stream, ReaderOptions? options) =>
            AceReader.Open(stream, options);
    }
}
@@ -28,7 +28,12 @@ namespace SharpCompress.Factories
            int bufferSize = ReaderOptions.DefaultBufferSize
        )
        {
            return ArjHeader.IsArchive(stream);
            var arjHeader = new ArjMainHeader(new ArchiveEncoding());
            if (arjHeader.Read(stream) == null)
            {
                return false;
            }
            return true;
        }

        public IReader OpenReader(Stream stream, ReaderOptions? options) =>

@@ -19,7 +19,6 @@ public abstract class Factory : IFactory
        RegisterFactory(new TarFactory());
        RegisterFactory(new ArcFactory());
        RegisterFactory(new ArjFactory());
        RegisterFactory(new AceFactory());
    }

    private static readonly HashSet<Factory> _factories = new();

src/SharpCompress/IO/ListeningStream.cs (new file, 97 lines)
@@ -0,0 +1,97 @@
using System.IO;
using SharpCompress.Common;

namespace SharpCompress.IO;

internal class ListeningStream : Stream, IStreamStack
{
#if DEBUG_STREAMS
    long IStreamStack.InstanceId { get; set; }
#endif
    int IStreamStack.DefaultBufferSize { get; set; }

    Stream IStreamStack.BaseStream() => Stream;

    int IStreamStack.BufferSize
    {
        get => 0;
        set { return; }
    }
    int IStreamStack.BufferPosition
    {
        get => 0;
        set { return; }
    }

    void IStreamStack.SetPosition(long position) { }

    private long _currentEntryTotalReadBytes;
    private readonly IExtractionListener _listener;

    public ListeningStream(IExtractionListener listener, Stream stream)
    {
        Stream = stream;
        this._listener = listener;
#if DEBUG_STREAMS
        this.DebugConstruct(typeof(ListeningStream));
#endif
    }

    protected override void Dispose(bool disposing)
    {
#if DEBUG_STREAMS
        this.DebugDispose(typeof(ListeningStream));
#endif
        if (disposing)
        {
            Stream.Dispose();
        }
        base.Dispose(disposing);
    }

    public Stream Stream { get; }

    public override bool CanRead => Stream.CanRead;

    public override bool CanSeek => Stream.CanSeek;

    public override bool CanWrite => Stream.CanWrite;

    public override void Flush() => Stream.Flush();

    public override long Length => Stream.Length;

    public override long Position
    {
        get => Stream.Position;
        set => Stream.Position = value;
    }

    public override int Read(byte[] buffer, int offset, int count)
    {
        var read = Stream.Read(buffer, offset, count);
        _currentEntryTotalReadBytes += read;
        _listener.FireCompressedBytesRead(_currentEntryTotalReadBytes, _currentEntryTotalReadBytes);
        return read;
    }

    public override int ReadByte()
    {
        var value = Stream.ReadByte();
        if (value == -1)
        {
            return -1;
        }

        ++_currentEntryTotalReadBytes;
        _listener.FireCompressedBytesRead(_currentEntryTotalReadBytes, _currentEntryTotalReadBytes);
        return value;
    }

    public override long Seek(long offset, SeekOrigin origin) => Stream.Seek(offset, origin);

    public override void SetLength(long value) => Stream.SetLength(value);

    public override void Write(byte[] buffer, int offset, int count) =>
        Stream.Write(buffer, offset, count);
}
@@ -1,160 +0,0 @@
using System;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.Common;

namespace SharpCompress.IO;

/// <summary>
/// A stream wrapper that reports progress as data is read from the source.
/// Used to track compression or extraction progress by wrapping the source stream.
/// </summary>
internal sealed class ProgressReportingStream : Stream
{
    private readonly Stream _baseStream;
    private readonly IProgress<ProgressReport> _progress;
    private readonly string _entryPath;
    private readonly long? _totalBytes;
    private long _bytesTransferred;
    private readonly bool _leaveOpen;

    public ProgressReportingStream(
        Stream baseStream,
        IProgress<ProgressReport> progress,
        string entryPath,
        long? totalBytes,
        bool leaveOpen = false
    )
    {
        _baseStream = baseStream;
        _progress = progress;
        _entryPath = entryPath;
        _totalBytes = totalBytes;
        _leaveOpen = leaveOpen;
    }

    public override bool CanRead => _baseStream.CanRead;

    public override bool CanSeek => _baseStream.CanSeek;

    public override bool CanWrite => false;

    public override long Length => _baseStream.Length;

    public override long Position
    {
        get => _baseStream.Position;
        set =>
            throw new NotSupportedException(
                "Directly setting Position is not supported in ProgressReportingStream to maintain progress tracking integrity."
            );
    }

    public override void Flush() => _baseStream.Flush();

    public override int Read(byte[] buffer, int offset, int count)
    {
        var bytesRead = _baseStream.Read(buffer, offset, count);
        if (bytesRead > 0)
        {
            _bytesTransferred += bytesRead;
            ReportProgress();
        }
        return bytesRead;
    }

#if !NETFRAMEWORK && !NETSTANDARD2_0
    public override int Read(Span<byte> buffer)
    {
        var bytesRead = _baseStream.Read(buffer);
        if (bytesRead > 0)
        {
            _bytesTransferred += bytesRead;
            ReportProgress();
        }
        return bytesRead;
    }
#endif

    public override async Task<int> ReadAsync(
        byte[] buffer,
        int offset,
        int count,
        CancellationToken cancellationToken
    )
    {
        var bytesRead = await _baseStream
            .ReadAsync(buffer, offset, count, cancellationToken)
            .ConfigureAwait(false);
        if (bytesRead > 0)
        {
            _bytesTransferred += bytesRead;
            ReportProgress();
        }
        return bytesRead;
    }

#if !NETFRAMEWORK && !NETSTANDARD2_0
    public override async ValueTask<int> ReadAsync(
        Memory<byte> buffer,
        CancellationToken cancellationToken = default
    )
    {
        var bytesRead = await _baseStream
            .ReadAsync(buffer, cancellationToken)
            .ConfigureAwait(false);
        if (bytesRead > 0)
        {
            _bytesTransferred += bytesRead;
            ReportProgress();
        }
        return bytesRead;
    }
#endif

    public override int ReadByte()
    {
        var value = _baseStream.ReadByte();
        if (value != -1)
        {
            _bytesTransferred++;
            ReportProgress();
        }
        return value;
    }

    public override long Seek(long offset, SeekOrigin origin) => _baseStream.Seek(offset, origin);

    public override void SetLength(long value) => _baseStream.SetLength(value);

    public override void Write(byte[] buffer, int offset, int count) =>
        throw new NotSupportedException(
            "ProgressReportingStream is designed for read operations to track progress."
        );

    private void ReportProgress()
    {
        _progress.Report(new ProgressReport(_entryPath, _bytesTransferred, _totalBytes));
    }

    protected override void Dispose(bool disposing)
    {
        if (disposing && !_leaveOpen)
        {
            _baseStream.Dispose();
        }
        base.Dispose(disposing);
    }

#if !NETFRAMEWORK && !NETSTANDARD2_0
    public override async ValueTask DisposeAsync()
    {
        if (!_leaveOpen)
        {
            await _baseStream.DisposeAsync().ConfigureAwait(false);
        }
        await base.DisposeAsync().ConfigureAwait(false);
    }
#endif
}
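ProgressReportingStream is internal, so the following sketch only compiles inside the library; it shows how a plain read loop drives the IProgress sink. The report is printed via its default ToString, since this diff does not show ProgressReport's property names, and the file path is illustrative:

using System;
using System.IO;
using SharpCompress.Common;
using SharpCompress.IO;

// The callback receives the ProgressReport built in ReportProgress above.
var progress = new Progress<ProgressReport>(report => Console.WriteLine(report));

using var source = File.OpenRead("data.bin"); // illustrative path
using var reporting = new ProgressReportingStream(
    source,
    progress,
    entryPath: "data.bin",
    totalBytes: source.Length,
    leaveOpen: false
);

var buffer = new byte[81920];
while (reporting.Read(buffer, 0, buffer.Length) > 0)
{
    // each successful Read advances _bytesTransferred and fires a report
}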
@@ -57,8 +57,14 @@ public class SharpCompressStream : Stream, IStreamStack
        {
            ValidateBufferState(); // Add here
        }
        // Check CanSeek before accessing Position to avoid exception overhead on non-seekable streams.
        _internalPosition = Stream.CanSeek ? Stream.Position : 0;
        try
        {
            _internalPosition = Stream.Position;
        }
        catch
        {
            _internalPosition = 0;
        }
        }
    }
}
@@ -130,8 +136,14 @@ public class SharpCompressStream : Stream, IStreamStack
        _readOnly = !Stream.CanSeek;

        ((IStreamStack)this).SetBuffer(bufferSize, forceBuffer);
        // Check CanSeek before accessing Position to avoid exception overhead on non-seekable streams.
        _baseInitialPos = Stream.CanSeek ? Stream.Position : 0;
        try
        {
            _baseInitialPos = stream.Position;
        }
        catch
        {
            _baseInitialPos = 0;
        }

#if DEBUG_STREAMS
        this.DebugConstruct(typeof(SharpCompressStream));

@@ -3,8 +3,6 @@
using System;
using System.Buffers;
using System.IO;
using System.Threading;
using System.Threading.Tasks;

namespace SharpCompress;

@@ -43,28 +41,6 @@ internal static class StreamExtensions
            ArrayPool<byte>.Shared.Return(temp);
        }
    }

    internal static async Task ReadExactlyAsync(
        this Stream stream,
        byte[] buffer,
        int offset,
        int count,
        CancellationToken cancellationToken
    )
    {
        var totalRead = 0;
        while (totalRead < count)
        {
            var read = await stream
                .ReadAsync(buffer, offset + totalRead, count - totalRead, cancellationToken)
                .ConfigureAwait(false);
            if (read == 0)
            {
                throw new EndOfStreamException();
            }
            totalRead += read;
        }
    }
}

#endif

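The helper being removed above reads in a loop until exactly count bytes have arrived and throws EndOfStreamException on truncation. The LzmaStream hunks earlier in this diff replace calls to it with bare ReadAsync calls whose return values are ignored, so a short read there would go unnoticed. A fragment illustrating the difference (stream and token are assumed to be in scope; names illustrative):

// Guaranteed: both bytes arrive, or EndOfStreamException is thrown.
var lenBuf = new byte[2];
await stream.ReadExactlyAsync(lenBuf, 0, 2, token); // loops internally
var chunkLen = (lenBuf[0] << 8) + lenBuf[1] + 1;    // same big-endian-plus-one decoding as LzmaStream

// By contrast, a single ReadAsync call may legally return a short read:
var got = await stream.ReadAsync(lenBuf, 0, 2, token); // can be 0, 1, or 2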
@@ -5,14 +5,13 @@ using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.Common;
using SharpCompress.IO;

namespace SharpCompress.Readers;

/// <summary>
/// A generic push reader that reads unseekable compressed streams.
/// </summary>
public abstract class AbstractReader<TEntry, TVolume> : IReader
public abstract class AbstractReader<TEntry, TVolume> : IReader, IReaderExtractionListener
    where TEntry : Entry
    where TVolume : Volume
{
@@ -20,6 +19,11 @@ public abstract class AbstractReader<TEntry, TVolume> : IReader
    private IEnumerator<TEntry>? _entriesForCurrentReadStream;
    private bool _wroteCurrentEntry;

    public event EventHandler<ReaderExtractionEventArgs<IEntry>>? EntryExtractionProgress;

    public event EventHandler<CompressedBytesReadEventArgs>? CompressedBytesRead;
    public event EventHandler<FilePartExtractionBeginEventArgs>? FilePartExtractionBegin;

    internal AbstractReader(ReaderOptions options, ArchiveType archiveType)
    {
        ArchiveType = archiveType;
@@ -260,58 +264,25 @@ public abstract class AbstractReader<TEntry, TVolume> : IReader

    internal void Write(Stream writeStream)
    {
        var streamListener = this as IReaderExtractionListener;
        using Stream s = OpenEntryStream();
        var sourceStream = WrapWithProgress(s, Entry);
        sourceStream.CopyTo(writeStream, 81920);
        s.TransferTo(writeStream, Entry, streamListener);
    }

    internal async Task WriteAsync(Stream writeStream, CancellationToken cancellationToken)
    {
        var streamListener = this as IReaderExtractionListener;
#if NETFRAMEWORK || NETSTANDARD2_0
        using Stream s = OpenEntryStream();
        var sourceStream = WrapWithProgress(s, Entry);
        await sourceStream.CopyToAsync(writeStream, 81920, cancellationToken).ConfigureAwait(false);
        await s.TransferToAsync(writeStream, Entry, streamListener, cancellationToken)
            .ConfigureAwait(false);
#else
        await using Stream s = OpenEntryStream();
        var sourceStream = WrapWithProgress(s, Entry);
        await sourceStream.CopyToAsync(writeStream, 81920, cancellationToken).ConfigureAwait(false);
        await s.TransferToAsync(writeStream, Entry, streamListener, cancellationToken)
            .ConfigureAwait(false);
#endif
    }

    private Stream WrapWithProgress(Stream source, Entry entry)
    {
        var progress = Options.Progress;
        if (progress is null)
        {
            return source;
        }

        var entryPath = entry.Key ?? string.Empty;
        long? totalBytes = GetEntrySizeSafe(entry);
        return new ProgressReportingStream(
            source,
            progress,
            entryPath,
            totalBytes,
            leaveOpen: true
        );
    }

    private static long? GetEntrySizeSafe(Entry entry)
    {
        try
        {
            var size = entry.Size;
            // Return the actual size (including 0 for empty entries)
            // Negative values indicate unknown size
            return size >= 0 ? size : null;
        }
        catch (NotImplementedException)
        {
            return null;
        }
    }

    public EntryStream OpenEntryStream()
    {
        if (_wroteCurrentEntry)
@@ -354,4 +325,43 @@ public abstract class AbstractReader<TEntry, TVolume> : IReader

    #endregion

    IEntry IReader.Entry => Entry;

    void IExtractionListener.FireCompressedBytesRead(
        long currentPartCompressedBytes,
        long compressedReadBytes
    ) =>
        CompressedBytesRead?.Invoke(
            this,
            new CompressedBytesReadEventArgs(
                currentFilePartCompressedBytesRead: currentPartCompressedBytes,
                compressedBytesRead: compressedReadBytes
            )
        );

    void IExtractionListener.FireFilePartExtractionBegin(
        string name,
        long size,
        long compressedSize
    ) =>
        FilePartExtractionBegin?.Invoke(
            this,
            new FilePartExtractionBeginEventArgs(
                compressedSize: compressedSize,
                size: size,
                name: name
            )
        );

    void IReaderExtractionListener.FireEntryExtractionProgress(
        Entry entry,
        long bytesTransferred,
        int iterations
    ) =>
        EntryExtractionProgress?.Invoke(
            this,
            new ReaderExtractionEventArgs<IEntry>(
                entry,
                new ReaderProgress(entry, bytesTransferred, iterations)
            )
        );
}

@@ -1,115 +0,0 @@
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using SharpCompress.Common;
using SharpCompress.Common.Ace;
using SharpCompress.Common.Ace.Headers;
using SharpCompress.Common.Arj;

namespace SharpCompress.Readers.Ace
{
    /// <summary>
    /// Reader for ACE archives.
    /// ACE is a proprietary archive format. This implementation supports both ACE 1.0 and ACE 2.0 formats
    /// and can read archive metadata and extract uncompressed (stored) entries.
    /// Compressed entries require proprietary decompression algorithms that are not publicly documented.
    /// </summary>
    /// <remarks>
    /// ACE 2.0 additions over ACE 1.0:
    /// - Improved LZ77 compression (compression type 2)
    /// - Recovery record support
    /// - Additional header flags
    /// </remarks>
    public abstract class AceReader : AbstractReader<AceEntry, AceVolume>
    {
        private readonly ArchiveEncoding _archiveEncoding;

        internal AceReader(ReaderOptions options)
            : base(options, ArchiveType.Ace)
        {
            _archiveEncoding = Options.ArchiveEncoding;
        }

        private AceReader(Stream stream, ReaderOptions options)
            : this(options) { }

        /// <summary>
        /// Derived class must create or manage the Volume itself.
        /// AbstractReader.Volume is get-only, so it cannot be set here.
        /// </summary>
        public override AceVolume? Volume => _volume;

        private AceVolume? _volume;

        /// <summary>
        /// Opens an AceReader for non-seeking usage with a single volume.
        /// </summary>
        /// <param name="stream">The stream containing the ACE archive.</param>
        /// <param name="options">Reader options.</param>
        /// <returns>An AceReader instance.</returns>
        public static AceReader Open(Stream stream, ReaderOptions? options = null)
        {
            stream.NotNull(nameof(stream));
            return new SingleVolumeAceReader(stream, options ?? new ReaderOptions());
        }

        /// <summary>
        /// Opens an AceReader for non-seeking usage with multiple volumes.
        /// </summary>
        /// <param name="streams"></param>
        /// <param name="options"></param>
        /// <returns></returns>
        public static AceReader Open(IEnumerable<Stream> streams, ReaderOptions? options = null)
        {
            streams.NotNull(nameof(streams));
            return new MultiVolumeAceReader(streams, options ?? new ReaderOptions());
        }

        protected abstract void ValidateArchive(AceVolume archive);

        protected override IEnumerable<AceEntry> GetEntries(Stream stream)
        {
            var mainHeaderReader = new AceMainHeader(_archiveEncoding);
            var mainHeader = mainHeaderReader.Read(stream);
            if (mainHeader == null)
            {
                yield break;
            }

            if (mainHeader?.IsMultiVolume == true)
            {
                throw new MultiVolumeExtractionException(
                    "Multi volumes are currently not supported"
                );
            }

            if (_volume == null)
            {
                _volume = new AceVolume(stream, Options, 0);
                ValidateArchive(_volume);
            }

            var localHeaderReader = new AceFileHeader(_archiveEncoding);
            while (true)
            {
                var localHeader = localHeaderReader.Read(stream);
                if (localHeader?.IsFileEncrypted == true)
                {
                    throw new CryptographicException(
                        "Password protected archives are currently not supported"
                    );
                }
                if (localHeader == null)
                    break;

                yield return new AceEntry(new AceFilePart((AceFileHeader)localHeader, stream));
            }
        }

        protected virtual IEnumerable<FilePart> CreateFilePartEnumerableForCurrentEntry() =>
            Entry.Parts;
    }
}
@@ -1,117 +0,0 @@
#nullable disable

using System;
using System.Collections;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using SharpCompress.Common;
using SharpCompress.Common.Ace;

namespace SharpCompress.Readers.Ace
{
    internal class MultiVolumeAceReader : AceReader
    {
        private readonly IEnumerator<Stream> streams;
        private Stream tempStream;

        internal MultiVolumeAceReader(IEnumerable<Stream> streams, ReaderOptions options)
            : base(options) => this.streams = streams.GetEnumerator();

        protected override void ValidateArchive(AceVolume archive) { }

        protected override Stream RequestInitialStream()
        {
            if (streams.MoveNext())
            {
                return streams.Current;
            }
            throw new MultiVolumeExtractionException(
                "No stream provided when requested by MultiVolumeAceReader"
            );
        }

        internal override bool NextEntryForCurrentStream()
        {
            if (!base.NextEntryForCurrentStream())
            {
                // if we've got another stream to try to process then do so
                return streams.MoveNext() && LoadStreamForReading(streams.Current);
            }
            return true;
        }

        protected override IEnumerable<FilePart> CreateFilePartEnumerableForCurrentEntry()
        {
            var enumerator = new MultiVolumeStreamEnumerator(this, streams, tempStream);
            tempStream = null;
            return enumerator;
        }

        private class MultiVolumeStreamEnumerator : IEnumerable<FilePart>, IEnumerator<FilePart>
        {
            private readonly MultiVolumeAceReader reader;
            private readonly IEnumerator<Stream> nextReadableStreams;
            private Stream tempStream;
            private bool isFirst = true;

            internal MultiVolumeStreamEnumerator(
                MultiVolumeAceReader r,
                IEnumerator<Stream> nextReadableStreams,
                Stream tempStream
            )
            {
                reader = r;
                this.nextReadableStreams = nextReadableStreams;
                this.tempStream = tempStream;
            }

            public IEnumerator<FilePart> GetEnumerator() => this;

            IEnumerator IEnumerable.GetEnumerator() => this;

            public FilePart Current { get; private set; }

            public void Dispose() { }

            object IEnumerator.Current => Current;

            public bool MoveNext()
            {
                if (isFirst)
                {
                    Current = reader.Entry.Parts.First();
                    isFirst = false; // first stream already ready to go
                    return true;
                }

                if (!reader.Entry.IsSplitAfter)
                {
                    return false;
                }
                if (tempStream != null)
                {
                    reader.LoadStreamForReading(tempStream);
                    tempStream = null;
                }
                else if (!nextReadableStreams.MoveNext())
                {
                    throw new MultiVolumeExtractionException(
                        "No stream provided when requested by MultiVolumeAceReader"
                    );
                }
                else
                {
                    reader.LoadStreamForReading(nextReadableStreams.Current);
                }

                Current = reader.Entry.Parts.First();
                return true;
            }

            public void Reset() { }
        }
    }
}
@@ -1,31 +0,0 @@
using System;
using System.IO;
using SharpCompress.Common;
using SharpCompress.Common.Ace;

namespace SharpCompress.Readers.Ace
{
    internal class SingleVolumeAceReader : AceReader
    {
        private readonly Stream _stream;

        internal SingleVolumeAceReader(Stream stream, ReaderOptions options)
            : base(options)
        {
            stream.NotNull(nameof(stream));
            _stream = stream;
        }

        protected override Stream RequestInitialStream() => _stream;

        protected override void ValidateArchive(AceVolume archive)
        {
            if (archive.IsMultiVolume)
            {
                throw new MultiVolumeExtractionException(
                    "Streamed archive is a Multi-volume archive. Use a different AceReader method to extract."
                );
            }
        }
    }
}
@@ -8,6 +8,11 @@ namespace SharpCompress.Readers;

public interface IReader : IDisposable
{
    event EventHandler<ReaderExtractionEventArgs<IEntry>> EntryExtractionProgress;

    event EventHandler<CompressedBytesReadEventArgs> CompressedBytesRead;
    event EventHandler<FilePartExtractionBeginEventArgs> FilePartExtractionBegin;

    ArchiveType ArchiveType { get; }

    IEntry Entry { get; }

@@ -7,121 +7,124 @@ namespace SharpCompress.Readers;

public static class IReaderExtensions
{
    extension(IReader reader)
    public static void WriteEntryTo(this IReader reader, string filePath)
    {
        public void WriteEntryTo(string filePath)
        {
            using Stream stream = File.Open(filePath, FileMode.Create, FileAccess.Write);
            reader.WriteEntryTo(stream);
        }
        using Stream stream = File.Open(filePath, FileMode.Create, FileAccess.Write);
        reader.WriteEntryTo(stream);
    }

        public void WriteEntryTo(FileInfo filePath)
        {
            using Stream stream = filePath.Open(FileMode.Create);
            reader.WriteEntryTo(stream);
        }
    public static void WriteEntryTo(this IReader reader, FileInfo filePath)
    {
        using Stream stream = filePath.Open(FileMode.Create);
        reader.WriteEntryTo(stream);
    }

        /// <summary>
        /// Extract all remaining unread entries to specific directory, retaining filename
        /// </summary>
        public void WriteAllToDirectory(
            string destinationDirectory,
            ExtractionOptions? options = null
        )
    /// <summary>
    /// Extract all remaining unread entries to specific directory, retaining filename
    /// </summary>
    public static void WriteAllToDirectory(
        this IReader reader,
        string destinationDirectory,
        ExtractionOptions? options = null
    )
    {
        while (reader.MoveToNextEntry())
        {
            while (reader.MoveToNextEntry())
            reader.WriteEntryToDirectory(destinationDirectory, options);
        }
    }

    /// <summary>
    /// Extract to specific directory, retaining filename
    /// </summary>
    public static void WriteEntryToDirectory(
        this IReader reader,
        string destinationDirectory,
        ExtractionOptions? options = null
    ) =>
        ExtractionMethods.WriteEntryToDirectory(
            reader.Entry,
            destinationDirectory,
            options,
            reader.WriteEntryToFile
        );

    /// <summary>
    /// Extract to specific file
    /// </summary>
    public static void WriteEntryToFile(
        this IReader reader,
        string destinationFileName,
        ExtractionOptions? options = null
    ) =>
        ExtractionMethods.WriteEntryToFile(
            reader.Entry,
            destinationFileName,
            options,
            (x, fm) =>
            {
                reader.WriteEntryToDirectory(destinationDirectory, options);
                using var fs = File.Open(destinationFileName, fm);
                reader.WriteEntryTo(fs);
            }
        }
        );

        /// <summary>
        /// Extract to specific directory, retaining filename
        /// </summary>
        public void WriteEntryToDirectory(
            string destinationDirectory,
            ExtractionOptions? options = null
        ) =>
            ExtractionMethods.WriteEntryToDirectory(
    /// <summary>
    /// Extract to specific directory asynchronously, retaining filename
    /// </summary>
    public static async Task WriteEntryToDirectoryAsync(
        this IReader reader,
        string destinationDirectory,
        ExtractionOptions? options = null,
        CancellationToken cancellationToken = default
    ) =>
        await ExtractionMethods
            .WriteEntryToDirectoryAsync(
                reader.Entry,
                destinationDirectory,
                options,
                reader.WriteEntryToFile
            );
                (fileName, opts) => reader.WriteEntryToFileAsync(fileName, opts, cancellationToken),
                cancellationToken
            )
            .ConfigureAwait(false);

        /// <summary>
        /// Extract to specific file
        /// </summary>
        public void WriteEntryToFile(
            string destinationFileName,
            ExtractionOptions? options = null
        ) =>
            ExtractionMethods.WriteEntryToFile(
    /// <summary>
    /// Extract to specific file asynchronously
    /// </summary>
    public static async Task WriteEntryToFileAsync(
        this IReader reader,
        string destinationFileName,
        ExtractionOptions? options = null,
        CancellationToken cancellationToken = default
    ) =>
        await ExtractionMethods
            .WriteEntryToFileAsync(
                reader.Entry,
                destinationFileName,
                options,
                (x, fm) =>
                async (x, fm) =>
                {
                    using var fs = File.Open(destinationFileName, fm);
                    reader.WriteEntryTo(fs);
                }
            );
                    await reader.WriteEntryToAsync(fs, cancellationToken).ConfigureAwait(false);
                },
                cancellationToken
            )
            .ConfigureAwait(false);

        /// <summary>
        /// Extract to specific directory asynchronously, retaining filename
        /// </summary>
        public async Task WriteEntryToDirectoryAsync(
            string destinationDirectory,
            ExtractionOptions? options = null,
            CancellationToken cancellationToken = default
        ) =>
            await ExtractionMethods
                .WriteEntryToDirectoryAsync(
                    reader.Entry,
                    destinationDirectory,
                    options,
                    reader.WriteEntryToFileAsync,
                    cancellationToken
                )
                .ConfigureAwait(false);

        /// <summary>
        /// Extract to specific file asynchronously
        /// </summary>
        public async Task WriteEntryToFileAsync(
            string destinationFileName,
            ExtractionOptions? options = null,
            CancellationToken cancellationToken = default
        ) =>
            await ExtractionMethods
                .WriteEntryToFileAsync(
                    reader.Entry,
                    destinationFileName,
                    options,
                    async (x, fm, ct) =>
                    {
                        using var fs = File.Open(destinationFileName, fm);
                        await reader.WriteEntryToAsync(fs, ct).ConfigureAwait(false);
                    },
                    cancellationToken
                )
                .ConfigureAwait(false);

        /// <summary>
        /// Extract all remaining unread entries to specific directory asynchronously, retaining filename
        /// </summary>
        public async Task WriteAllToDirectoryAsync(
            string destinationDirectory,
            ExtractionOptions? options = null,
            CancellationToken cancellationToken = default
        )
    /// <summary>
    /// Extract all remaining unread entries to specific directory asynchronously, retaining filename
    /// </summary>
    public static async Task WriteAllToDirectoryAsync(
        this IReader reader,
        string destinationDirectory,
        ExtractionOptions? options = null,
        CancellationToken cancellationToken = default
    )
    {
        while (reader.MoveToNextEntry())
        {
            while (await reader.MoveToNextEntryAsync(cancellationToken))
            {
                await reader
                    .WriteEntryToDirectoryAsync(destinationDirectory, options, cancellationToken)
                    .ConfigureAwait(false);
            }
            await reader
                .WriteEntryToDirectoryAsync(destinationDirectory, options, cancellationToken)
                .ConfigureAwait(false);
        }
    }
}

src/SharpCompress/Readers/IReaderExtractionListener.cs (new file, 8 lines)
@@ -0,0 +1,8 @@
using SharpCompress.Common;

namespace SharpCompress.Readers;

public interface IReaderExtractionListener : IExtractionListener
{
    void FireEntryExtractionProgress(Entry entry, long sizeTransferred, int iterations);
}
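With the events restored on IReader and this listener implemented by AbstractReader, a caller can observe extraction without wrapping any streams itself. A sketch under the assumption that ReaderFactory.Open and the extension methods from the IReaderExtensions diff are available; the archive path and handler bodies are illustrative:

using System;
using System.IO;
using SharpCompress.Readers;

using var stream = File.OpenRead("archive.rar"); // illustrative path
using var reader = ReaderFactory.Open(stream);

// Raised by the listener plumbing shown in the AbstractReader diff above.
reader.FilePartExtractionBegin += (_, e) => Console.WriteLine("part extraction begins");
reader.CompressedBytesRead += (_, e) => Console.WriteLine("compressed bytes read");
reader.EntryExtractionProgress += (_, e) => Console.WriteLine("entry progress");

while (reader.MoveToNextEntry())
{
    reader.WriteEntryToDirectory("out");
}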
@@ -108,7 +108,8 @@ public abstract class RarReader : AbstractReader<RarReaderEntry, RarVolume>
        }

        var stream = new MultiVolumeReadOnlyStream(
            CreateFilePartEnumerableForCurrentEntry().Cast<RarFilePart>()
            CreateFilePartEnumerableForCurrentEntry().Cast<RarFilePart>(),
            this
        );
        if (Entry.IsRarV3)
        {
@@ -135,7 +136,8 @@ public abstract class RarReader : AbstractReader<RarReaderEntry, RarVolume>
        }

        var stream = new MultiVolumeReadOnlyStream(
            CreateFilePartEnumerableForCurrentEntry().Cast<RarFilePart>()
            CreateFilePartEnumerableForCurrentEntry().Cast<RarFilePart>(),
            this
        );
        if (Entry.IsRarV3)
        {

@@ -70,7 +70,7 @@ public static class ReaderFactory
        }

        throw new InvalidFormatException(
            "Cannot determine compressed stream type. Supported Reader Formats: Ace, Arc, Arj, Zip, GZip, BZip2, Tar, Rar, LZip, XZ, ZStandard"
            "Cannot determine compressed stream type. Supported Reader Formats: Arc, Arj, Zip, GZip, BZip2, Tar, Rar, LZip, XZ, ZStandard"
        );
    }
}

@@ -1,4 +1,3 @@
using System;
using SharpCompress.Common;

namespace SharpCompress.Readers;
@@ -22,10 +21,4 @@ public class ReaderOptions : OptionsBase
    /// Provide a hint for the extension of the archive being read, can speed up finding the correct decoder. Should be without the leading period in the form like: tar.gz or zip
    /// </summary>
    public string? ExtensionHint { get; set; }

    /// <summary>
    /// An optional progress reporter for tracking extraction operations.
    /// When set, progress updates will be reported as entries are extracted.
    /// </summary>
    public IProgress<ProgressReport>? Progress { get; set; }
}

src/SharpCompress/Readers/ReaderProgress.cs (new file, 21 lines)
@@ -0,0 +1,21 @@
using System;
using SharpCompress.Common;

namespace SharpCompress.Readers;

public class ReaderProgress
{
    private readonly IEntry _entry;
    public long BytesTransferred { get; }
    public int Iterations { get; }

    public int PercentageRead => (int)Math.Round(PercentageReadExact);
    public double PercentageReadExact => (float)BytesTransferred / _entry.Size * 100;

    public ReaderProgress(IEntry entry, long bytesTransferred, int iterations)
    {
        _entry = entry;
        BytesTransferred = bytesTransferred;
        Iterations = iterations;
    }
}
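A worked example of the percentage arithmetic above, with illustrative numbers:

// For a 10_485_760-byte entry after 2_621_440 bytes have been copied:
//   PercentageReadExact = 2_621_440f / 10_485_760 * 100 = 25.0
//   PercentageRead      = (int)Math.Round(25.0)         = 25
// Note the float cast in PercentageReadExact: for very large entries the
// single-precision division loses a little accuracy versus computing in double.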
@@ -6,6 +6,7 @@ using System.IO;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.Readers;

namespace SharpCompress;

@@ -215,6 +216,34 @@ internal static class Utility
        }
    }

    public static long TransferTo(
        this Stream source,
        Stream destination,
        Common.Entry entry,
        IReaderExtractionListener readerExtractionListener
    )
    {
        var array = ArrayPool<byte>.Shared.Rent(TEMP_BUFFER_SIZE);
        try
        {
            var iterations = 0;
            long total = 0;
            int count;
            while ((count = source.Read(array, 0, array.Length)) != 0)
            {
                total += count;
                destination.Write(array, 0, count);
                iterations++;
                readerExtractionListener.FireEntryExtractionProgress(entry, total, iterations);
            }
            return total;
        }
        finally
        {
            ArrayPool<byte>.Shared.Return(array);
        }
    }

    public static async Task<long> TransferToAsync(
        this Stream source,
        Stream destination,
@@ -261,6 +290,43 @@ internal static class Utility
        }
    }

    public static async Task<long> TransferToAsync(
        this Stream source,
        Stream destination,
        Common.Entry entry,
        IReaderExtractionListener readerExtractionListener,
        CancellationToken cancellationToken = default
    )
    {
        var array = ArrayPool<byte>.Shared.Rent(TEMP_BUFFER_SIZE);
        try
        {
            var iterations = 0;
            long total = 0;
            int count;
            while (
                (
                    count = await source
                        .ReadAsync(array, 0, array.Length, cancellationToken)
                        .ConfigureAwait(false)
                ) != 0
            )
            {
                total += count;
                await destination
                    .WriteAsync(array, 0, count, cancellationToken)
                    .ConfigureAwait(false);
                iterations++;
                readerExtractionListener.FireEntryExtractionProgress(entry, total, iterations);
            }
            return total;
        }
        finally
        {
            ArrayPool<byte>.Shared.Return(array);
        }
    }

    private static bool ReadTransferBlock(Stream source, byte[] array, int maxSize, out int count)
    {
        var size = maxSize;

@@ -3,7 +3,6 @@ using System.IO;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.Common;
using SharpCompress.IO;

namespace SharpCompress.Writers;

@@ -23,29 +22,6 @@ public abstract class AbstractWriter(ArchiveType type, WriterOptions writerOptio

    protected WriterOptions WriterOptions { get; } = writerOptions;

    /// <summary>
    /// Wraps the source stream with a progress-reporting stream if progress reporting is enabled.
    /// </summary>
    /// <param name="source">The source stream to wrap.</param>
    /// <param name="entryPath">The path of the entry being written.</param>
    /// <returns>A stream that reports progress, or the original stream if progress is not enabled.</returns>
    protected Stream WrapWithProgress(Stream source, string entryPath)
    {
        if (WriterOptions.Progress is null)
        {
            return source;
        }

        long? totalBytes = source.CanSeek ? source.Length : null;
        return new ProgressReportingStream(
            source,
            WriterOptions.Progress,
            entryPath,
            totalBytes,
            leaveOpen: true
        );
    }

    public abstract void Write(string filename, Stream source, DateTime? modificationTime);

    public virtual async Task WriteAsync(

@@ -47,8 +47,7 @@ public sealed class GZipWriter : AbstractWriter
        var stream = (GZipStream)OutputStream;
        stream.FileName = filename;
        stream.LastModified = modificationTime;
        var progressStream = WrapWithProgress(source, filename);
        progressStream.CopyTo(stream);
        source.CopyTo(stream);
        _wroteToStream = true;
    }


@@ -129,8 +129,7 @@ public class TarWriter : AbstractWriter
        header.Name = NormalizeFilename(filename);
        header.Size = realSize;
        header.Write(OutputStream);
        var progressStream = WrapWithProgress(source, filename);
        size = progressStream.TransferTo(OutputStream, realSize);
        size = source.TransferTo(OutputStream, realSize);
        PadTo512(size.Value);
    }

@@ -162,8 +161,7 @@ public class TarWriter : AbstractWriter
        header.Name = NormalizeFilename(filename);
        header.Size = realSize;
        header.Write(OutputStream);
        var progressStream = WrapWithProgress(source, filename);
        var written = await progressStream
        var written = await source
            .TransferToAsync(OutputStream, realSize, cancellationToken)
            .ConfigureAwait(false);
        PadTo512(written);

@@ -1,4 +1,3 @@
using System;
using SharpCompress.Common;
using D = SharpCompress.Compressors.Deflate;

@@ -37,12 +36,6 @@ public class WriterOptions : OptionsBase
    /// </summary>
    public int CompressionLevel { get; set; }

    /// <summary>
    /// An optional progress reporter for tracking compression operations.
    /// When set, progress updates will be reported as entries are written.
    /// </summary>
    public IProgress<ProgressReport>? Progress { get; set; }

    public static implicit operator WriterOptions(CompressionType compressionType) =>
        new(compressionType);
}

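Note: a short usage sketch for the Progress option removed here, using the BCL Progress<T> wrapper (which posts callbacks through the captured SynchronizationContext); the option names and ProgressReport members match the tests later in this diff:

    var options = new ZipWriterOptions(CompressionType.Deflate)
    {
        Progress = new Progress<ProgressReport>(r =>
            Console.WriteLine($"{r.EntryPath}: {r.BytesTransferred}/{r.TotalBytes} ({r.PercentComplete:F0}%)")
        ),
    };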
@@ -34,7 +34,6 @@ internal class ZipCentralDirectoryEntry
    internal ulong Decompressed { get; set; }
    internal ushort Zip64HeaderOffset { get; set; }
    internal ulong HeaderOffset { get; }
    internal string FileName => fileName;

    internal uint Write(Stream outputStream)
    {

@@ -8,7 +8,6 @@ using System.Threading.Tasks;
using SharpCompress.Common;
using SharpCompress.Common.Zip;
using SharpCompress.Common.Zip.Headers;
using SharpCompress.Common.Zip.SOZip;
using SharpCompress.Compressors;
using SharpCompress.Compressors.BZip2;
using SharpCompress.Compressors.Deflate;
@@ -28,19 +27,12 @@ public class ZipWriter : AbstractWriter
    private long streamPosition;
    private PpmdProperties? ppmdProps;
    private readonly bool isZip64;
    private readonly bool enableSOZip;
    private readonly int sozipChunkSize;
    private readonly long sozipMinFileSize;

    public ZipWriter(Stream destination, ZipWriterOptions zipWriterOptions)
        : base(ArchiveType.Zip, zipWriterOptions)
    {
        zipComment = zipWriterOptions.ArchiveComment ?? string.Empty;
        isZip64 = zipWriterOptions.UseZip64;
        enableSOZip = zipWriterOptions.EnableSOZip;
        sozipChunkSize = zipWriterOptions.SOZipChunkSize;
        sozipMinFileSize = zipWriterOptions.SOZipMinFileSize;

        if (destination.CanSeek)
        {
            streamPosition = destination.Position;
@@ -94,8 +86,7 @@ public class ZipWriter : AbstractWriter
    public void Write(string entryPath, Stream source, ZipWriterEntryOptions zipWriterEntryOptions)
    {
        using var output = WriteToStream(entryPath, zipWriterEntryOptions);
        var progressStream = WrapWithProgress(source, entryPath);
        progressStream.CopyTo(output);
        source.CopyTo(output);
    }

    public Stream WriteToStream(string entryPath, ZipWriterEntryOptions options)
@@ -125,21 +116,12 @@ public class ZipWriter : AbstractWriter

        var headersize = (uint)WriteHeader(entryPath, options, entry, useZip64);
        streamPosition += headersize;

        // Determine if SOZip should be used for this entry
        var useSozip =
            (options.EnableSOZip ?? enableSOZip)
            && compression == ZipCompressionMethod.Deflate
            && OutputStream.CanSeek;

        return new ZipWritingStream(
            this,
            OutputStream.NotNull(),
            entry,
            compression,
            options.CompressionLevel ?? compressionLevel,
            useSozip,
            useSozip ? sozipChunkSize : 0
            options.CompressionLevel ?? compressionLevel
        );
    }

@@ -321,64 +303,6 @@ public class ZipWriter : AbstractWriter
        OutputStream.Write(intBuf);
    }

    private void WriteSozipIndexFile(
        ZipCentralDirectoryEntry dataEntry,
        SOZipDeflateStream sozipStream
    )
    {
        var indexFileName = SOZipIndex.GetIndexFileName(dataEntry.FileName);

        // Create the SOZip index
        var index = new SOZipIndex(
            chunkSize: sozipStream.ChunkSize,
            uncompressedSize: sozipStream.UncompressedBytesWritten,
            compressedSize: sozipStream.CompressedBytesWritten,
            compressedOffsets: sozipStream.CompressedOffsets
        );

        var indexBytes = index.ToByteArray();

        // Calculate CRC for index data
        var crc = new CRC32();
        crc.SlurpBlock(indexBytes, 0, indexBytes.Length);
        var indexCrc = (uint)crc.Crc32Result;

        // Write the index file as a stored (uncompressed) entry
        var indexEntry = new ZipCentralDirectoryEntry(
            ZipCompressionMethod.None,
            indexFileName,
            (ulong)streamPosition,
            WriterOptions.ArchiveEncoding
        )
        {
            ModificationTime = DateTime.Now,
        };

        // Write the local file header for index
        var indexOptions = new ZipWriterEntryOptions { CompressionType = CompressionType.None };
        var headerSize = (uint)WriteHeader(indexFileName, indexOptions, indexEntry, isZip64);
        streamPosition += headerSize;

        // Write the index data directly
        OutputStream.Write(indexBytes, 0, indexBytes.Length);

        // Finalize the index entry
        indexEntry.Crc = indexCrc;
        indexEntry.Compressed = (ulong)indexBytes.Length;
        indexEntry.Decompressed = (ulong)indexBytes.Length;

        if (OutputStream.CanSeek)
        {
            // Update the header with sizes and CRC
            OutputStream.Position = (long)(indexEntry.HeaderOffset + 14);
            WriteFooter(indexCrc, (uint)indexBytes.Length, (uint)indexBytes.Length);
            OutputStream.Position = streamPosition + indexBytes.Length;
        }

        streamPosition += indexBytes.Length;
        entries.Add(indexEntry);
    }

    private void WriteEndRecord(ulong size)
    {
        var zip64EndOfCentralDirectoryNeeded =
@@ -460,10 +384,7 @@ public class ZipWriter : AbstractWriter
        private readonly ZipWriter writer;
        private readonly ZipCompressionMethod zipCompressionMethod;
        private readonly int compressionLevel;
        private readonly bool useSozip;
        private readonly int sozipChunkSize;
        private SharpCompressStream? counting;
        private SOZipDeflateStream? sozipStream;
        private ulong decompressed;

        // Flag to prevent throwing exceptions on Dispose
@@ -475,9 +396,7 @@ public class ZipWriter : AbstractWriter
            Stream originalStream,
            ZipCentralDirectoryEntry entry,
            ZipCompressionMethod zipCompressionMethod,
            int compressionLevel,
            bool useSozip = false,
            int sozipChunkSize = 0
            int compressionLevel
        )
        {
            this.writer = writer;
@@ -486,8 +405,6 @@ public class ZipWriter : AbstractWriter
            this.entry = entry;
            this.zipCompressionMethod = zipCompressionMethod;
            this.compressionLevel = compressionLevel;
            this.useSozip = useSozip;
            this.sozipChunkSize = sozipChunkSize;
            writeStream = GetWriteStream(originalStream);
        }

@@ -517,15 +434,6 @@ public class ZipWriter : AbstractWriter
                }
                case ZipCompressionMethod.Deflate:
                {
                    if (useSozip && sozipChunkSize > 0)
                    {
                        sozipStream = new SOZipDeflateStream(
                            counting,
                            (CompressionLevel)compressionLevel,
                            sozipChunkSize
                        );
                        return sozipStream;
                    }
                    return new DeflateStream(
                        counting,
                        CompressionMode.Compress,
@@ -672,18 +580,7 @@ public class ZipWriter : AbstractWriter
                writer.WriteFooter(entry.Crc, compressedvalue, decompressedvalue);
                writer.streamPosition += (long)entry.Compressed + 16;
            }

            writer.entries.Add(entry);

            // Write SOZip index file if SOZip was used and file meets minimum size
            if (
                useSozip
                && sozipStream is not null
                && entry.Decompressed >= (ulong)writer.sozipMinFileSize
            )
            {
                writer.WriteSozipIndexFile(entry, sozipStream);
            }
        }
    }

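Note: an end-to-end sketch of the write path removed above, under stated assumptions (option names come from ZipWriterOptions further down; the index naming follows SOZipIndex.GetIndexFileName as exercised in the tests):

    using var ms = new MemoryStream(); // seekable, so SOZip is eligible
    using (
        var writer = new ZipWriter(
            ms,
            new ZipWriterOptions(CompressionType.Deflate)
            {
                EnableSOZip = true,
                SOZipChunkSize = 1024, // must be a multiple of 1024
                SOZipMinFileSize = 4096, // smaller files get no index entry
            }
        )
    )
    {
        writer.Write("data.txt", new MemoryStream(new byte[5000]), DateTime.Now);
    }
    // Expected layout: "data.txt" (Deflate) plus a stored ".data.txt.sozip.idx" index entry.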
@@ -49,11 +49,4 @@ public class ZipWriterEntryOptions
    /// This option is not supported with non-seekable streams.
    /// </summary>
    public bool? EnableZip64 { get; set; }

    /// <summary>
    /// Enable or disable SOZip (Seek-Optimized ZIP) for this entry.
    /// When null, uses the archive's default setting.
    /// SOZip is only applicable to Deflate-compressed files on seekable streams.
    /// </summary>
    public bool? EnableSOZip { get; set; }
}

@@ -1,6 +1,5 @@
using System;
using SharpCompress.Common;
using SharpCompress.Common.Zip.SOZip;
using SharpCompress.Compressors.Deflate;
using D = SharpCompress.Compressors.Deflate;

@@ -25,9 +24,6 @@ public class ZipWriterOptions : WriterOptions
    {
        UseZip64 = writerOptions.UseZip64;
        ArchiveComment = writerOptions.ArchiveComment;
        EnableSOZip = writerOptions.EnableSOZip;
        SOZipChunkSize = writerOptions.SOZipChunkSize;
        SOZipMinFileSize = writerOptions.SOZipMinFileSize;
    }
}

@@ -84,27 +80,4 @@ public class ZipWriterOptions : WriterOptions
    /// are less than 4GiB in length.
    /// </summary>
    public bool UseZip64 { get; set; }

    /// <summary>
    /// Enables SOZip (Seek-Optimized ZIP) for Deflate-compressed files.
    /// When enabled, files that meet the minimum size requirement will have
    /// an accompanying index file that allows random access within the
    /// compressed data. Requires a seekable output stream.
    /// </summary>
    public bool EnableSOZip { get; set; }

    /// <summary>
    /// The chunk size for SOZip index creation in bytes.
    /// Must be a multiple of 1024 bytes. Default is 32KB (32768 bytes).
    /// Smaller chunks allow for finer-grained random access but result
    /// in larger index files and slightly less efficient compression.
    /// </summary>
    public int SOZipChunkSize { get; set; } = (int)SOZipIndex.DEFAULT_CHUNK_SIZE;

    /// <summary>
    /// Minimum file size (uncompressed) in bytes for SOZip optimization.
    /// Files smaller than this size will not have SOZip index files created.
    /// Default is 1MB (1048576 bytes).
    /// </summary>
    public long SOZipMinFileSize { get; set; } = 1048576;
}

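Note: a back-of-envelope for the chunk-size tradeoff documented above, assuming one 8-byte offset per chunk (matching the ulong[] compressedOffsets seen in the tests); the exact on-disk header overhead is not shown in this diff:

    // 1 GiB file at 32 KiB chunks -> 32768 offsets -> ~256 KiB of index data.
    // 1 GiB file at  1 MiB chunks ->  1024 offsets ->   ~8 KiB, but coarser seeks.
    static long IndexOffsetBytes(long uncompressedSize, int chunkSize) =>
        ((uncompressedSize + chunkSize - 1) / chunkSize) * sizeof(ulong);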
@@ -4,9 +4,9 @@
    "net10.0": {
      "JetBrains.Profiler.SelfApi": {
        "type": "Direct",
        "requested": "[2.5.15, )",
        "resolved": "2.5.15",
        "contentHash": "Uc+GU4Mqwzw6i6C6wPWe2g9UwReUJzrWghCSN9818tSmNI7SJtqKQ1mgfuRylpJam+ED4Cbg2BNJE/LSVpokkg==",
        "requested": "[2.5.14, )",
        "resolved": "2.5.14",
        "contentHash": "9+NcTe49B2M8/MOledSxKZkQKqavFf5xXZw4JL4bVu/KYiw6OOaD6cDQmNGSO18yUP/WoBXsXGKmZ9VOpmyadw==",
        "dependencies": {
          "JetBrains.HabitatDetector": "1.4.5",
          "JetBrains.Profiler.Api": "1.4.10"
@@ -34,7 +34,16 @@
      }
    },
    "sharpcompress": {
      "type": "Project"
      "type": "Project",
      "dependencies": {
        "ZstdSharp.Port": "[0.8.6, )"
      }
    },
    "ZstdSharp.Port": {
      "type": "CentralTransitive",
      "requested": "[0.8.6, )",
      "resolved": "0.8.6",
      "contentHash": "iP4jVLQoQmUjMU88g1WObiNr6YKZGvh4aOXn3yOJsHqZsflwRsxZPcIBvNXgjXO3vQKSLctXGLTpcBPLnWPS8A=="
    }
  }
}

@@ -1,61 +0,0 @@
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using SharpCompress.Common;
using SharpCompress.Readers;
using SharpCompress.Readers.Ace;
using Xunit;

namespace SharpCompress.Test.Ace
{
    public class AceReaderTests : ReaderTests
    {
        public AceReaderTests()
        {
            UseExtensionInsteadOfNameToVerify = true;
            UseCaseInsensitiveToVerify = true;
        }

        [Fact]
        public void Ace_Uncompressed_Read() => Read("Ace.store.ace", CompressionType.None);

        [Fact]
        public void Ace_Encrypted_Read()
        {
            var exception = Assert.Throws<CryptographicException>(() => Read("Ace.encrypted.ace"));
        }

        [Theory]
        [InlineData("Ace.method1.ace", CompressionType.AceLZ77)]
        [InlineData("Ace.method1-solid.ace", CompressionType.AceLZ77)]
        [InlineData("Ace.method2.ace", CompressionType.AceLZ77)]
        [InlineData("Ace.method2-solid.ace", CompressionType.AceLZ77)]
        public void Ace_Unsupported_ShouldThrow(string fileName, CompressionType compressionType)
        {
            var exception = Assert.Throws<NotSupportedException>(() =>
                Read(fileName, compressionType)
            );
        }

        [Theory]
        [InlineData("Ace.store.largefile.ace", CompressionType.None)]
        public void Ace_LargeFileTest_Read(string fileName, CompressionType compressionType)
        {
            ReadForBufferBoundaryCheck(fileName, compressionType);
        }

        [Fact]
        public void Ace_Multi_Reader()
        {
            var exception = Assert.Throws<MultiVolumeExtractionException>(() =>
                DoMultiReader(
                    ["Ace.store.split.ace", "Ace.store.split.c01"],
                    streams => AceReader.Open(streams)
                )
            );
        }
    }
}
@@ -261,7 +261,7 @@ public class ArchiveTests : ReaderTests
        testArchive = Path.Combine(TEST_ARCHIVES_PATH, testArchive);
        using (var archive = ArchiveFactory.Open(new FileInfo(testArchive), readerOptions))
        {
            archive.WriteToDirectory(SCRATCH_FILES_PATH);
            archive.ExtractToDirectory(SCRATCH_FILES_PATH);
        }
        VerifyFiles();
    }

@@ -45,17 +45,14 @@ namespace SharpCompress.Test.Arj
        public void Arj_Multi_Reader()
        {
            var exception = Assert.Throws<MultiVolumeExtractionException>(() =>
                DoMultiReader(
                    [
                        "Arj.store.split.arj",
                        "Arj.store.split.a01",
                        "Arj.store.split.a02",
                        "Arj.store.split.a03",
                        "Arj.store.split.a04",
                        "Arj.store.split.a05",
                    ],
                    streams => ArjReader.Open(streams)
                )
                DoArj_Multi_Reader([
                    "Arj.store.split.arj",
                    "Arj.store.split.a01",
                    "Arj.store.split.a02",
                    "Arj.store.split.a03",
                    "Arj.store.split.a04",
                    "Arj.store.split.a05",
                ])
            );
        }

@@ -77,5 +74,26 @@ namespace SharpCompress.Test.Arj
        {
            ReadForBufferBoundaryCheck(fileName, compressionType);
        }

        private void DoArj_Multi_Reader(string[] archives)
        {
            using (
                var reader = ArjReader.Open(
                    archives
                        .Select(s => Path.Combine(TEST_ARCHIVES_PATH, s))
                        .Select(p => File.OpenRead(p))
                )
            )
            {
                while (reader.MoveToNextEntry())
                {
                    reader.WriteEntryToDirectory(
                        SCRATCH_FILES_PATH,
                        new ExtractionOptions { ExtractFullPath = true, Overwrite = true }
                    );
                }
            }
            VerifyFiles();
        }
    }
}

@@ -8,7 +8,7 @@ using Xunit;

namespace SharpCompress.Test;

public class ExtractAllTests : TestBase
public class ExtractAll : TestBase
{
    [Theory]
    [InlineData("Zip.deflate.zip")]
@@ -18,29 +18,27 @@ public class ExtractAllTests : TestBase
    [InlineData("7Zip.solid.7z")]
    [InlineData("7Zip.nonsolid.7z")]
    [InlineData("7Zip.LZMA.7z")]
    public async Task ExtractAllEntriesAsync(string archivePath)
    public async Task ExtractAllEntries(string archivePath)
    {
        var testArchive = Path.Combine(TEST_ARCHIVES_PATH, archivePath);
        var options = new ExtractionOptions() { ExtractFullPath = true, Overwrite = true };

        using var archive = ArchiveFactory.Open(testArchive);
        await archive.WriteToDirectoryAsync(SCRATCH_FILES_PATH, options);
    }

    [Theory]
    [InlineData("Zip.deflate.zip")]
    [InlineData("Rar5.rar")]
    [InlineData("Rar.rar")]
    [InlineData("Rar.solid.rar")]
    [InlineData("7Zip.solid.7z")]
    [InlineData("7Zip.nonsolid.7z")]
    [InlineData("7Zip.LZMA.7z")]
    public void ExtractAllEntriesSync(string archivePath)
    {
        var testArchive = Path.Combine(TEST_ARCHIVES_PATH, archivePath);
        var options = new ExtractionOptions() { ExtractFullPath = true, Overwrite = true };

        using var archive = ArchiveFactory.Open(testArchive);
        archive.WriteToDirectory(SCRATCH_FILES_PATH, options);
        if (archive.IsSolid || archive.Type == ArchiveType.SevenZip)
        {
            var reader = archive.ExtractAllEntries();
            while (await reader.MoveToNextEntryAsync())
            {
                if (!reader.Entry.IsDirectory)
                {
                    await reader.WriteEntryToDirectoryAsync(SCRATCH_FILES_PATH, options);
                }
            }
        }
        else
        {
            archive.ExtractToDirectory(SCRATCH_FILES_PATH, options);
        }
    }
}

@@ -1,61 +0,0 @@
using System.IO;
using System.Linq;
using SharpCompress.Archives;
using SharpCompress.Common;
using SharpCompress.Readers;
using Xunit;

namespace SharpCompress.Test;

/// <summary>
/// Tests for the ExtractAllEntries method behavior on both solid and non-solid
/// archives, including progress reporting and current usage restrictions.
/// </summary>
public class ExtractAllEntriesTests : TestBase
{
    [Fact]
    public void ExtractAllEntries_WithProgressReporting_NonSolidArchive()
    {
        var archivePath = Path.Combine(TEST_ARCHIVES_PATH, "Zip.deflate.zip");

        using var archive = ArchiveFactory.Open(archivePath);
        Assert.Throws<SharpCompressException>(() =>
        {
            using var reader = archive.ExtractAllEntries();
        });
    }

    [Fact]
    public void ExtractAllEntries_WithProgressReporting_SolidArchive()
    {
        var archivePath = Path.Combine(TEST_ARCHIVES_PATH, "Rar.solid.rar");

        using var archive = ArchiveFactory.Open(archivePath);
        Assert.True(archive.IsSolid);

        // Calculate total size like user code does
        double totalSize = archive.Entries.Where(e => !e.IsDirectory).Sum(e => e.Size);
        long completed = 0;
        var progressReports = 0;

        using var reader = archive.ExtractAllEntries();
        while (reader.MoveToNextEntry())
        {
            if (!reader.Entry.IsDirectory)
            {
                reader.WriteEntryToDirectory(
                    SCRATCH_FILES_PATH,
                    new ExtractionOptions { ExtractFullPath = true, Overwrite = true }
                );

                completed += reader.Entry.Size;
                var progress = completed / totalSize;
                progressReports++;

                Assert.True(progress >= 0 && progress <= 1.0);
            }
        }

        Assert.True(progressReports > 0);
    }
}
@@ -1,65 +0,0 @@
using System;
using System.IO;

namespace SharpCompress.Test.Mocks;

/// <summary>
/// A stream wrapper that truncates the underlying stream after reading a specified number of bytes.
/// Used for testing error handling when streams end prematurely.
/// </summary>
public class TruncatedStream : Stream
{
    private readonly Stream baseStream;
    private readonly long truncateAfterBytes;
    private long bytesRead;

    public TruncatedStream(Stream baseStream, long truncateAfterBytes)
    {
        this.baseStream = baseStream ?? throw new ArgumentNullException(nameof(baseStream));
        this.truncateAfterBytes = truncateAfterBytes;
        bytesRead = 0;
    }

    public override bool CanRead => baseStream.CanRead;
    public override bool CanSeek => baseStream.CanSeek;
    public override bool CanWrite => false;
    public override long Length => baseStream.Length;

    public override long Position
    {
        get => baseStream.Position;
        set => baseStream.Position = value;
    }

    public override int Read(byte[] buffer, int offset, int count)
    {
        if (bytesRead >= truncateAfterBytes)
        {
            // Simulate premature end of stream
            return 0;
        }

        var maxBytesToRead = (int)Math.Min(count, truncateAfterBytes - bytesRead);
        var actualBytesRead = baseStream.Read(buffer, offset, maxBytesToRead);
        bytesRead += actualBytesRead;
        return actualBytesRead;
    }

    public override long Seek(long offset, SeekOrigin origin) => baseStream.Seek(offset, origin);

    public override void SetLength(long value) => throw new NotSupportedException();

    public override void Write(byte[] buffer, int offset, int count) =>
        throw new NotSupportedException();

    public override void Flush() => baseStream.Flush();

    protected override void Dispose(bool disposing)
    {
        if (disposing)
        {
            baseStream?.Dispose();
        }
        base.Dispose(disposing);
    }
}
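Note: typical use of this mock is to cut a valid archive stream short and exercise premature-end handling; the RAR tests later in this diff do exactly this:

    using var fileStream = File.OpenRead(Path.Combine(TEST_ARCHIVES_PATH, "Rar.rar"));
    using var truncated = new TruncatedStream(fileStream, 1000); // only the first 1000 bytes are readable
    // Reads past byte 1000 return 0, which downstream code must treat as a premature end of stream.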
@@ -1,605 +0,0 @@
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.Archives;
using SharpCompress.Archives.Zip;
using SharpCompress.Common;
using SharpCompress.Readers;
using SharpCompress.Writers;
using SharpCompress.Writers.Tar;
using SharpCompress.Writers.Zip;
using Xunit;

namespace SharpCompress.Test;

/// <summary>
/// A synchronous progress implementation for testing.
/// Unlike Progress<T>, this captures reports immediately without SynchronizationContext.
/// </summary>
internal sealed class TestProgress<T> : IProgress<T>
{
    private readonly List<T> _reports = new();

    public IReadOnlyList<T> Reports => _reports;

    public void Report(T value) => _reports.Add(value);
}

public class ProgressReportTests : TestBase
{
    private static byte[] CreateTestData(int size, byte fillValue)
    {
        var data = new byte[size];
        for (var i = 0; i < size; i++)
        {
            data[i] = fillValue;
        }
        return data;
    }

    [Fact]
    public void Zip_Write_ReportsProgress()
    {
        var progress = new TestProgress<ProgressReport>();

        using var archiveStream = new MemoryStream();
        var options = new ZipWriterOptions(CompressionType.Deflate) { Progress = progress };

        using (var writer = new ZipWriter(archiveStream, options))
        {
            var testData = CreateTestData(10000, (byte)'A');
            using var sourceStream = new MemoryStream(testData);
            writer.Write("test.txt", sourceStream, DateTime.Now);
        }

        Assert.NotEmpty(progress.Reports);
        Assert.All(progress.Reports, p => Assert.Equal("test.txt", p.EntryPath));
        Assert.All(progress.Reports, p => Assert.Equal(10000, p.TotalBytes));

        var lastReport = progress.Reports[progress.Reports.Count - 1];
        Assert.Equal(10000, lastReport.BytesTransferred);
        Assert.Equal(100.0, lastReport.PercentComplete);
    }

    [Fact]
    public void Tar_Write_ReportsProgress()
    {
        var progress = new TestProgress<ProgressReport>();

        using var archiveStream = new MemoryStream();
        var options = new TarWriterOptions(CompressionType.None, true) { Progress = progress };

        using (var writer = new TarWriter(archiveStream, options))
        {
            var testData = CreateTestData(10000, (byte)'A');
            using var sourceStream = new MemoryStream(testData);
            writer.Write("test.txt", sourceStream, DateTime.Now);
        }

        Assert.NotEmpty(progress.Reports);
        Assert.All(progress.Reports, p => Assert.Equal("test.txt", p.EntryPath));
        Assert.All(progress.Reports, p => Assert.Equal(10000, p.TotalBytes));

        var lastReport = progress.Reports[progress.Reports.Count - 1];
        Assert.Equal(10000, lastReport.BytesTransferred);
        Assert.Equal(100.0, lastReport.PercentComplete);
    }

    [Fact]
    public void Zip_Read_ReportsProgress()
    {
        var progress = new TestProgress<ProgressReport>();

        // First create a zip archive
        using var archiveStream = new MemoryStream();
        using (
            var writer = new ZipWriter(archiveStream, new ZipWriterOptions(CompressionType.Deflate))
        )
        {
            var testData = CreateTestData(10000, (byte)'A');
            using var sourceStream = new MemoryStream(testData);
            writer.Write("test.txt", sourceStream, DateTime.Now);
        }

        // Now read it with progress reporting
        archiveStream.Position = 0;
        var readerOptions = new ReaderOptions { Progress = progress };

        using (var reader = ReaderFactory.Open(archiveStream, readerOptions))
        {
            while (reader.MoveToNextEntry())
            {
                if (!reader.Entry.IsDirectory)
                {
                    using var extractedStream = new MemoryStream();
                    reader.WriteEntryTo(extractedStream);
                }
            }
        }

        Assert.NotEmpty(progress.Reports);
        Assert.All(progress.Reports, p => Assert.Equal("test.txt", p.EntryPath));

        var lastReport = progress.Reports[progress.Reports.Count - 1];
        Assert.Equal(10000, lastReport.BytesTransferred);
    }

    [Fact]
    public void ZipArchive_Entry_WriteTo_ReportsProgress()
    {
        var progress = new TestProgress<ProgressReport>();

        // First create a zip archive
        using var archiveStream = new MemoryStream();
        using (
            var writer = new ZipWriter(archiveStream, new ZipWriterOptions(CompressionType.Deflate))
        )
        {
            var testData = CreateTestData(10000, (byte)'A');
            using var sourceStream = new MemoryStream(testData);
            writer.Write("test.txt", sourceStream, DateTime.Now);
        }

        // Now open as archive and extract entry with progress as parameter
        archiveStream.Position = 0;

        using var archive = ZipArchive.Open(archiveStream);
        foreach (var entry in archive.Entries)
        {
            if (!entry.IsDirectory)
            {
                using var extractedStream = new MemoryStream();
                entry.WriteTo(extractedStream, progress);
            }
        }

        Assert.NotEmpty(progress.Reports);
        Assert.All(progress.Reports, p => Assert.Equal("test.txt", p.EntryPath));

        var lastReport = progress.Reports[progress.Reports.Count - 1];
        Assert.Equal(10000, lastReport.BytesTransferred);
    }

    [Fact]
    public async Task ZipArchive_Entry_WriteToAsync_ReportsProgress()
    {
        var progress = new TestProgress<ProgressReport>();

        // First create a zip archive
        using var archiveStream = new MemoryStream();
        using (
            var writer = new ZipWriter(archiveStream, new ZipWriterOptions(CompressionType.Deflate))
        )
        {
            var testData = CreateTestData(10000, (byte)'A');
            using var sourceStream = new MemoryStream(testData);
            writer.Write("test.txt", sourceStream, DateTime.Now);
        }

        // Now open as archive and extract entry async with progress as parameter
        archiveStream.Position = 0;

        using var archive = ZipArchive.Open(archiveStream);
        foreach (var entry in archive.Entries)
        {
            if (!entry.IsDirectory)
            {
                using var extractedStream = new MemoryStream();
                await entry.WriteToAsync(extractedStream, progress, CancellationToken.None);
            }
        }

        Assert.NotEmpty(progress.Reports);
        Assert.All(progress.Reports, p => Assert.Equal("test.txt", p.EntryPath));

        var lastReport = progress.Reports[progress.Reports.Count - 1];
        Assert.Equal(10000, lastReport.BytesTransferred);
    }

    [Fact]
    public void WriterOptions_WithoutProgress_DoesNotThrow()
    {
        using var archiveStream = new MemoryStream();
        var options = new ZipWriterOptions(CompressionType.Deflate);
        Assert.Null(options.Progress);

        using (var writer = new ZipWriter(archiveStream, options))
        {
            var testData = CreateTestData(100, (byte)'A');
            using var sourceStream = new MemoryStream(testData);
            writer.Write("test.txt", sourceStream, DateTime.Now);
        }

        Assert.True(archiveStream.Length > 0);
    }

    [Fact]
    public void ReaderOptions_WithoutProgress_DoesNotThrow()
    {
        // First create a zip archive
        using var archiveStream = new MemoryStream();
        using (
            var writer = new ZipWriter(archiveStream, new ZipWriterOptions(CompressionType.Deflate))
        )
        {
            var testData = CreateTestData(100, (byte)'A');
            using var sourceStream = new MemoryStream(testData);
            writer.Write("test.txt", sourceStream, DateTime.Now);
        }

        // Read without progress
        archiveStream.Position = 0;
        var readerOptions = new ReaderOptions();
        Assert.Null(readerOptions.Progress);

        using (var reader = ReaderFactory.Open(archiveStream, readerOptions))
        {
            while (reader.MoveToNextEntry())
            {
                if (!reader.Entry.IsDirectory)
                {
                    using var extractedStream = new MemoryStream();
                    reader.WriteEntryTo(extractedStream);
                }
            }
        }
    }

    [Fact]
    public void ZipArchive_WithoutProgress_DoesNotThrow()
    {
        // First create a zip archive
        using var archiveStream = new MemoryStream();
        using (
            var writer = new ZipWriter(archiveStream, new ZipWriterOptions(CompressionType.Deflate))
        )
        {
            var testData = CreateTestData(100, (byte)'A');
            using var sourceStream = new MemoryStream(testData);
            writer.Write("test.txt", sourceStream, DateTime.Now);
        }

        // Open archive and extract without progress
        archiveStream.Position = 0;

        using var archive = ZipArchive.Open(archiveStream);
        foreach (var entry in archive.Entries)
        {
            if (!entry.IsDirectory)
            {
                using var extractedStream = new MemoryStream();
                entry.WriteTo(extractedStream);
            }
        }
    }

    [Fact]
    public void ProgressReport_PercentComplete_WithUnknownTotalBytes_ReturnsNull()
    {
        var progress = new ProgressReport("test.txt", 100, null);
        Assert.Null(progress.PercentComplete);
    }

    [Fact]
    public void ProgressReport_PercentComplete_WithZeroTotalBytes_ReturnsNull()
    {
        var progress = new ProgressReport("test.txt", 0, 0);
        Assert.Null(progress.PercentComplete);
    }

    [Fact]
    public void ProgressReport_Properties_AreSetCorrectly()
    {
        var progress = new ProgressReport("path/to/file.txt", 500, 1000);

        Assert.Equal("path/to/file.txt", progress.EntryPath);
        Assert.Equal(500, progress.BytesTransferred);
        Assert.Equal(1000, progress.TotalBytes);
        Assert.Equal(50.0, progress.PercentComplete);
    }

    [Fact]
    public void Tar_Read_ReportsProgress()
    {
        var progress = new TestProgress<ProgressReport>();

        // Create a tar archive first
        using var archiveStream = new MemoryStream();
        using (
            var writer = new TarWriter(
                archiveStream,
                new TarWriterOptions(CompressionType.None, true)
            )
        )
        {
            var testData = CreateTestData(10000, (byte)'B');
            using var sourceStream = new MemoryStream(testData);
            writer.Write("data.bin", sourceStream, DateTime.Now);
        }

        // Now read it with progress reporting
        archiveStream.Position = 0;
        var readerOptions = new ReaderOptions { Progress = progress };

        using (var reader = ReaderFactory.Open(archiveStream, readerOptions))
        {
            while (reader.MoveToNextEntry())
            {
                if (!reader.Entry.IsDirectory)
                {
                    using var extractedStream = new MemoryStream();
                    reader.WriteEntryTo(extractedStream);
                }
            }
        }

        Assert.NotEmpty(progress.Reports);
        Assert.All(progress.Reports, p => Assert.Equal("data.bin", p.EntryPath));

        var lastReport = progress.Reports[progress.Reports.Count - 1];
        Assert.Equal(10000, lastReport.BytesTransferred);
    }

    [Fact]
    public void TarArchive_Entry_WriteTo_ReportsProgress()
    {
        var progress = new TestProgress<ProgressReport>();

        // Create a tar archive first
        using var archiveStream = new MemoryStream();
        using (
            var writer = new TarWriter(
                archiveStream,
                new TarWriterOptions(CompressionType.None, true)
            )
        )
        {
            var testData = CreateTestData(10000, (byte)'C');
            using var sourceStream = new MemoryStream(testData);
            writer.Write("file.dat", sourceStream, DateTime.Now);
        }

        // Now open as archive and extract entry with progress as parameter
        archiveStream.Position = 0;

        using var archive = SharpCompress.Archives.Tar.TarArchive.Open(archiveStream);
        foreach (var entry in archive.Entries)
        {
            if (!entry.IsDirectory)
            {
                using var extractedStream = new MemoryStream();
                entry.WriteTo(extractedStream, progress);
            }
        }

        Assert.NotEmpty(progress.Reports);
        Assert.All(progress.Reports, p => Assert.Equal("file.dat", p.EntryPath));

        var lastReport = progress.Reports[progress.Reports.Count - 1];
        Assert.Equal(10000, lastReport.BytesTransferred);
    }

    [Fact]
    public async Task TarArchive_Entry_WriteToAsync_ReportsProgress()
    {
        var progress = new TestProgress<ProgressReport>();

        // Create a tar archive first
        using var archiveStream = new MemoryStream();
        using (
            var writer = new TarWriter(
                archiveStream,
                new TarWriterOptions(CompressionType.None, true)
            )
        )
        {
            var testData = CreateTestData(10000, (byte)'D');
            using var sourceStream = new MemoryStream(testData);
            writer.Write("async.dat", sourceStream, DateTime.Now);
        }

        // Now open as archive and extract entry async with progress as parameter
        archiveStream.Position = 0;

        using var archive = SharpCompress.Archives.Tar.TarArchive.Open(archiveStream);
        foreach (var entry in archive.Entries)
        {
            if (!entry.IsDirectory)
            {
                using var extractedStream = new MemoryStream();
                await entry.WriteToAsync(extractedStream, progress, CancellationToken.None);
            }
        }

        Assert.NotEmpty(progress.Reports);
        Assert.All(progress.Reports, p => Assert.Equal("async.dat", p.EntryPath));

        var lastReport = progress.Reports[progress.Reports.Count - 1];
        Assert.Equal(10000, lastReport.BytesTransferred);
    }

    [Fact]
    public void Zip_Read_MultipleEntries_ReportsProgress()
    {
        var progress = new TestProgress<ProgressReport>();

        // Create a zip archive with multiple entries
        using var archiveStream = new MemoryStream();
        using (
            var writer = new ZipWriter(archiveStream, new ZipWriterOptions(CompressionType.Deflate))
        )
        {
            var testData1 = CreateTestData(5000, (byte)'A');
            using var sourceStream1 = new MemoryStream(testData1);
            writer.Write("file1.txt", sourceStream1, DateTime.Now);

            var testData2 = CreateTestData(8000, (byte)'B');
            using var sourceStream2 = new MemoryStream(testData2);
            writer.Write("file2.txt", sourceStream2, DateTime.Now);
        }

        // Now read it with progress reporting
        archiveStream.Position = 0;
        var readerOptions = new ReaderOptions { Progress = progress };

        using (var reader = ReaderFactory.Open(archiveStream, readerOptions))
        {
            while (reader.MoveToNextEntry())
            {
                if (!reader.Entry.IsDirectory)
                {
                    using var extractedStream = new MemoryStream();
                    reader.WriteEntryTo(extractedStream);
                }
            }
        }

        Assert.NotEmpty(progress.Reports);

        // Should have reports for both files
        var file1Reports = progress.Reports.Where(p => p.EntryPath == "file1.txt").ToList();
        var file2Reports = progress.Reports.Where(p => p.EntryPath == "file2.txt").ToList();

        Assert.NotEmpty(file1Reports);
        Assert.NotEmpty(file2Reports);

        // Verify final bytes for each file
        Assert.Equal(5000, file1Reports[file1Reports.Count - 1].BytesTransferred);
        Assert.Equal(8000, file2Reports[file2Reports.Count - 1].BytesTransferred);
    }

    [Fact]
    public void ZipArchive_MultipleEntries_WriteTo_ReportsProgress()
    {
        var progress = new TestProgress<ProgressReport>();

        // Create a zip archive with multiple entries
        using var archiveStream = new MemoryStream();
        using (
            var writer = new ZipWriter(archiveStream, new ZipWriterOptions(CompressionType.Deflate))
        )
        {
            var testData1 = CreateTestData(5000, (byte)'A');
            using var sourceStream1 = new MemoryStream(testData1);
            writer.Write("entry1.txt", sourceStream1, DateTime.Now);

            var testData2 = CreateTestData(7000, (byte)'B');
            using var sourceStream2 = new MemoryStream(testData2);
            writer.Write("entry2.txt", sourceStream2, DateTime.Now);
        }

        // Now open as archive and extract entries with progress as parameter
        archiveStream.Position = 0;

        using var archive = ZipArchive.Open(archiveStream);
        foreach (var entry in archive.Entries)
        {
            if (!entry.IsDirectory)
            {
                using var extractedStream = new MemoryStream();
                entry.WriteTo(extractedStream, progress);
            }
        }

        Assert.NotEmpty(progress.Reports);

        // Should have reports for both files
        var entry1Reports = progress.Reports.Where(p => p.EntryPath == "entry1.txt").ToList();
        var entry2Reports = progress.Reports.Where(p => p.EntryPath == "entry2.txt").ToList();

        Assert.NotEmpty(entry1Reports);
        Assert.NotEmpty(entry2Reports);

        // Verify final bytes for each entry
        Assert.Equal(5000, entry1Reports[entry1Reports.Count - 1].BytesTransferred);
        Assert.Equal(7000, entry2Reports[entry2Reports.Count - 1].BytesTransferred);
    }

    [Fact]
    public async Task Zip_ReadAsync_ReportsProgress()
    {
        var progress = new TestProgress<ProgressReport>();

        // Create a zip archive
        using var archiveStream = new MemoryStream();
        using (
            var writer = new ZipWriter(archiveStream, new ZipWriterOptions(CompressionType.Deflate))
        )
        {
            var testData = CreateTestData(10000, (byte)'E');
            using var sourceStream = new MemoryStream(testData);
            writer.Write("async_read.txt", sourceStream, DateTime.Now);
        }

        // Now read it with progress reporting
        archiveStream.Position = 0;
        var readerOptions = new ReaderOptions { Progress = progress };

        using (var reader = ReaderFactory.Open(archiveStream, readerOptions))
        {
            while (reader.MoveToNextEntry())
            {
                if (!reader.Entry.IsDirectory)
                {
                    using var extractedStream = new MemoryStream();
                    await reader.WriteEntryToAsync(extractedStream);
                }
            }
        }

        Assert.NotEmpty(progress.Reports);
        Assert.All(progress.Reports, p => Assert.Equal("async_read.txt", p.EntryPath));

        var lastReport = progress.Reports[progress.Reports.Count - 1];
        Assert.Equal(10000, lastReport.BytesTransferred);
    }

    [Fact]
    public void GZip_Write_ReportsProgress()
    {
        var progress = new TestProgress<ProgressReport>();

        using var archiveStream = new MemoryStream();
        var options = new SharpCompress.Writers.GZip.GZipWriterOptions { Progress = progress };

        using (var writer = new SharpCompress.Writers.GZip.GZipWriter(archiveStream, options))
        {
            var testData = CreateTestData(10000, (byte)'G');
            using var sourceStream = new MemoryStream(testData);
            writer.Write("gzip_test.txt", sourceStream, DateTime.Now);
        }

        Assert.NotEmpty(progress.Reports);
        Assert.All(progress.Reports, p => Assert.Equal("gzip_test.txt", p.EntryPath));
        Assert.All(progress.Reports, p => Assert.Equal(10000, p.TotalBytes));

        var lastReport = progress.Reports[progress.Reports.Count - 1];
        Assert.Equal(10000, lastReport.BytesTransferred);
        Assert.Equal(100.0, lastReport.PercentComplete);
    }

    [Fact]
    public async Task Tar_WriteAsync_ReportsProgress()
    {
        var progress = new TestProgress<ProgressReport>();

        using var archiveStream = new MemoryStream();
        var options = new TarWriterOptions(CompressionType.None, true) { Progress = progress };

        using (var writer = new TarWriter(archiveStream, options))
        {
            var testData = CreateTestData(10000, (byte)'A');
            using var sourceStream = new MemoryStream(testData);
            await writer.WriteAsync("test.txt", sourceStream, DateTime.Now);
        }

        Assert.NotEmpty(progress.Reports);
        Assert.All(progress.Reports, p => Assert.Equal("test.txt", p.EntryPath));

        var lastReport = progress.Reports[progress.Reports.Count - 1];
        Assert.Equal(10000, lastReport.BytesTransferred);
    }
}
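Note: the reason for TestProgress<T> in the deleted file above is that the BCL Progress<T> marshals callbacks through the captured SynchronizationContext (or the thread pool), so reports can still be in flight when the assertions run; a direct IProgress<T> implementation keeps the tests deterministic:

    // Potentially flaky in a test: reports may arrive after the writer is disposed.
    var bclProgress = new Progress<ProgressReport>(r => reports.Add(r));
    // Deterministic: Report() appends synchronously on the calling thread.
    var testProgress = new TestProgress<ProgressReport>();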
@@ -1,4 +1,3 @@
using System;
using System.IO;
using System.Linq;
using SharpCompress.Archives;
@@ -6,7 +5,6 @@ using SharpCompress.Archives.Rar;
using SharpCompress.Common;
using SharpCompress.Compressors.LZMA.Utilites;
using SharpCompress.Readers;
using SharpCompress.Test.Mocks;
using Xunit;

namespace SharpCompress.Test.Rar;
@@ -644,77 +642,4 @@ public class RarArchiveTests : ArchiveTests
        );
        Assert.True(passwordProtectedFilesArchive.IsEncrypted);
    }

    /// <summary>
    /// Test for issue: InvalidOperationException when extracting RAR files.
    /// This test verifies the fix for the validation logic that was changed from
    /// (_position != Length) to (_position < Length).
    /// The old logic would throw an exception when position exceeded expected length,
    /// but the new logic only throws when decompression ends prematurely (position < expected).
    /// </summary>
    [Fact]
    public void Rar_StreamValidation_OnlyThrowsOnPrematureEnd()
    {
        // Test normal extraction - should NOT throw InvalidOperationException
        // even if actual decompressed size differs from header
        var testFiles = new[] { "Rar.rar", "Rar5.rar", "Rar4.rar", "Rar2.rar" };

        foreach (var testFile in testFiles)
        {
            using var stream = File.OpenRead(Path.Combine(TEST_ARCHIVES_PATH, testFile));
            using var archive = RarArchive.Open(stream);

            // Extract all entries and read them completely
            foreach (var entry in archive.Entries.Where(e => !e.IsDirectory))
            {
                using var entryStream = entry.OpenEntryStream();
                using var ms = new MemoryStream();

                // This should complete without throwing InvalidOperationException
                // The fix ensures we only throw when position < expected length, not when position >= expected
                entryStream.CopyTo(ms);

                // Verify we read some data
                Assert.True(
                    ms.Length > 0,
                    $"Failed to extract data from {entry.Key} in {testFile}"
                );
            }
        }
    }

    /// <summary>
    /// Negative test case: Verifies that InvalidOperationException IS thrown when
    /// a RAR stream ends prematurely (position < expected length).
    /// This tests the validation condition (_position < Length) works correctly.
    /// </summary>
    [Fact]
    public void Rar_StreamValidation_ThrowsOnTruncatedStream()
    {
        // This test verifies the exception is thrown when decompression ends prematurely
        // by using a truncated stream that stops reading after a small number of bytes
        var testFile = "Rar.rar";
        using var fileStream = File.OpenRead(Path.Combine(TEST_ARCHIVES_PATH, testFile));

        // Wrap the file stream with a truncated stream that will stop reading early
        // This simulates a corrupted or truncated RAR file
        using var truncatedStream = new TruncatedStream(fileStream, 1000);

        // Opening the archive should work, but extracting should throw
        // when we try to read beyond the truncated data
        var exception = Assert.Throws<InvalidOperationException>(() =>
        {
            using var archive = RarArchive.Open(truncatedStream);
            foreach (var entry in archive.Entries.Where(e => !e.IsDirectory))
            {
                using var entryStream = entry.OpenEntryStream();
                using var ms = new MemoryStream();
                // This should throw InvalidOperationException when it can't read all expected bytes
                entryStream.CopyTo(ms);
            }
        });

        // Verify the exception message matches our expectation
        Assert.Contains("unpacked file size does not match header", exception.Message);
    }
}

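Note: a minimal sketch of the validation the deleted tests describe, assuming a decompression stream that tracks _position against the expected unpacked Length from the entry header (names taken from the test comments, not from code shown in this diff):

    protected override void Dispose(bool disposing)
    {
        // Old check: (_position != Length) also threw when more bytes than expected arrived.
        // New check: only a short read is an error.
        if (disposing && _position < Length)
        {
            throw new InvalidOperationException("unpacked file size does not match header");
        }
        base.Dispose(disposing);
    }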
@@ -1,7 +1,6 @@
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.Common;
@@ -221,26 +220,4 @@ public abstract class ReaderTests : TestBase
            Assert.Equal(expected.Pop(), reader.Entry.Key);
        }
    }

    protected void DoMultiReader(
        string[] archives,
        Func<IEnumerable<Stream>, IDisposable> readerFactory
    )
    {
        using var reader = readerFactory(
            archives.Select(s => Path.Combine(TEST_ARCHIVES_PATH, s)).Select(File.OpenRead)
        );

        dynamic dynReader = reader;

        while (dynReader.MoveToNextEntry())
        {
            dynReader.WriteEntryToDirectory(
                SCRATCH_FILES_PATH,
                new ExtractionOptions { ExtractFullPath = true, Overwrite = true }
            );
        }

        VerifyFiles();
    }
}

@@ -3,7 +3,6 @@ using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using SharpCompress.Common.Zip.SOZip;
using SharpCompress.Readers;
using Xunit;

@@ -46,7 +45,7 @@ public class TestBase : IDisposable

    public void Dispose() => Directory.Delete(SCRATCH_BASE_PATH, true);

    public void VerifyFiles(bool skipSoIndexes = false)
    public void VerifyFiles()
    {
        if (UseExtensionInsteadOfNameToVerify)
        {
@@ -54,7 +53,7 @@ public class TestBase : IDisposable
        }
        else
        {
            VerifyFilesByName(skipSoIndexes);
            VerifyFilesByName();
        }
    }

@@ -73,23 +72,10 @@ public class TestBase : IDisposable
        }
    }

    private void VerifyFilesByName(bool skipSoIndexes)
    protected void VerifyFilesByName()
    {
        var extracted = Directory
            .EnumerateFiles(SCRATCH_FILES_PATH, "*.*", SearchOption.AllDirectories)
            .Where(x =>
            {
                if (
                    skipSoIndexes
                    && Path.GetFileName(x)
                        .EndsWith(SOZipIndex.INDEX_EXTENSION, StringComparison.OrdinalIgnoreCase)
                )
                {
                    return false;
                }

                return true;
            })
            .ToLookup(path => path.Substring(SCRATCH_FILES_PATH.Length));
        var original = Directory
            .EnumerateFiles(ORIGINAL_FILES_PATH, "*.*", SearchOption.AllDirectories)

@@ -1,257 +0,0 @@
using System.IO;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using SharpCompress.Archives.Zip;
using SharpCompress.Common;
using SharpCompress.Common.Zip.SOZip;
using SharpCompress.Readers.Zip;
using SharpCompress.Test.Mocks;
using SharpCompress.Writers;
using SharpCompress.Writers.Zip;
using Xunit;

namespace SharpCompress.Test.Zip;

public class SoZipReaderTests : TestBase
{
    [Fact]
    public async Task SOZip_Reader_RegularZip_NoSozipEntries()
    {
        // Regular zip files should not have SOZip entries
        var path = Path.Combine(TEST_ARCHIVES_PATH, "Zip.deflate.zip");
        using Stream stream = new ForwardOnlyStream(File.OpenRead(path));
        using var reader = ZipReader.Open(stream);
        while (await reader.MoveToNextEntryAsync())
        {
            // Regular zip entries should NOT be SOZip
            Assert.False(reader.Entry.IsSozip, $"Entry {reader.Entry.Key} should not be SOZip");
            Assert.False(
                reader.Entry.IsSozipIndexFile,
                $"Entry {reader.Entry.Key} should not be a SOZip index file"
            );
        }
    }

    [Fact]
    public void SOZip_Archive_RegularZip_NoSozipEntries()
    {
        // Regular zip files should not have SOZip entries
        var path = Path.Combine(TEST_ARCHIVES_PATH, "Zip.deflate.zip");
        using Stream stream = File.OpenRead(path);
        using var archive = ZipArchive.Open(stream);
        foreach (var entry in archive.Entries)
        {
            // Regular zip entries should NOT be SOZip
            Assert.False(entry.IsSozip, $"Entry {entry.Key} should not be SOZip");
            Assert.False(
                entry.IsSozipIndexFile,
                $"Entry {entry.Key} should not be a SOZip index file"
            );
        }
    }

    [Fact]
    public void SOZip_Archive_ReadSOZipFile()
    {
        // Read the SOZip test archive
        var path = Path.Combine(TEST_ARCHIVES_PATH, "Zip.sozip.zip");
        using Stream stream = File.OpenRead(path);
        using var archive = ZipArchive.Open(stream);

        var entries = archive.Entries.ToList();

        // Should have 3 entries: data.txt, .data.txt.sozip.idx, and small.txt
        Assert.Equal(3, entries.Count);

        // Verify we have one SOZip index file
        var indexFiles = entries.Where(e => e.IsSozipIndexFile).ToList();
        Assert.Single(indexFiles);
        Assert.Equal(".data.txt.sozip.idx", indexFiles[0].Key);

        // Verify the index file is not compressed
        Assert.Equal(CompressionType.None, indexFiles[0].CompressionType);

        // Read and validate the index
        using (var indexStream = indexFiles[0].OpenEntryStream())
        {
            using var memStream = new MemoryStream();
            indexStream.CopyTo(memStream);
            var indexBytes = memStream.ToArray();

            var index = SOZipIndex.Read(indexBytes);
            Assert.Equal(SOZipIndex.SOZIP_VERSION, index.Version);
            Assert.Equal(1024u, index.ChunkSize); // As set in CreateSOZipTestArchive
            Assert.True(index.UncompressedSize > 0);
            Assert.True(index.OffsetCount > 0);
        }

        // Verify the data file can be read correctly
        var dataEntry = entries.First(e => e.Key == "data.txt");
        using (var dataStream = dataEntry.OpenEntryStream())
        {
            using var reader = new StreamReader(dataStream);
            var content = reader.ReadToEnd();
            Assert.Equal(5000, content.Length);
            Assert.True(content.All(c => c == 'A'));
        }

        // Verify the small file
        var smallEntry = entries.First(e => e.Key == "small.txt");
        Assert.False(smallEntry.IsSozipIndexFile);
        using (var smallStream = smallEntry.OpenEntryStream())
        {
            using var reader = new StreamReader(smallStream);
            var content = reader.ReadToEnd();
            Assert.Equal("Small content", content);
        }
    }

    [Fact]
    public async Task SOZip_Reader_ReadSOZipFile()
    {
        // Read the SOZip test archive with ZipReader
        var path = Path.Combine(TEST_ARCHIVES_PATH, "Zip.sozip.zip");
        using Stream stream = new ForwardOnlyStream(File.OpenRead(path));
        using var reader = ZipReader.Open(stream);

        var foundData = false;
        var foundIndex = false;
        var foundSmall = false;

        while (await reader.MoveToNextEntryAsync())
        {
            if (reader.Entry.Key == "data.txt")
            {
                foundData = true;
                Assert.False(reader.Entry.IsSozipIndexFile);

                using var entryStream = reader.OpenEntryStream();
                using var streamReader = new StreamReader(entryStream);
                var content = streamReader.ReadToEnd();
                Assert.Equal(5000, content.Length);
                Assert.True(content.All(c => c == 'A'));
            }
            else if (reader.Entry.Key == ".data.txt.sozip.idx")
            {
                foundIndex = true;
                Assert.True(reader.Entry.IsSozipIndexFile);

                using var indexStream = reader.OpenEntryStream();
                using var memStream = new MemoryStream();
                await indexStream.CopyToAsync(memStream);
                var indexBytes = memStream.ToArray();

                var index = SOZipIndex.Read(indexBytes);
                Assert.Equal(SOZipIndex.SOZIP_VERSION, index.Version);
            }
            else if (reader.Entry.Key == "small.txt")
            {
                foundSmall = true;
                Assert.False(reader.Entry.IsSozipIndexFile);
            }
        }

        Assert.True(foundData, "data.txt entry not found");
        Assert.True(foundIndex, ".data.txt.sozip.idx entry not found");
        Assert.True(foundSmall, "small.txt entry not found");
    }

    [Fact]
    public void SOZip_Archive_DetectsIndexFileByName()
    {
        // Create a zip with a SOZip index file (by name pattern)
        using var memoryStream = new MemoryStream();

        using (
            var writer = WriterFactory.Open(
                memoryStream,
                ArchiveType.Zip,
                new ZipWriterOptions(CompressionType.Deflate) { LeaveStreamOpen = true }
            )
        )
        {
            // Write a regular file
            writer.Write("test.txt", new MemoryStream(Encoding.UTF8.GetBytes("Hello World")));

            // Write a file that looks like a SOZip index (by name pattern)
            var indexData = new SOZipIndex(
                chunkSize: 32768,
                uncompressedSize: 100,
                compressedSize: 50,
                compressedOffsets: new ulong[] { 0 }
            );
            writer.Write(".test.txt.sozip.idx", new MemoryStream(indexData.ToByteArray()));
        }

        memoryStream.Position = 0;

        // Test with ZipArchive
        using var archive = ZipArchive.Open(memoryStream);
        var entries = archive.Entries.ToList();

        Assert.Equal(2, entries.Count);

        var regularEntry = entries.First(e => e.Key == "test.txt");
        Assert.False(regularEntry.IsSozipIndexFile);
        Assert.False(regularEntry.IsSozip); // No SOZip extra field

        var indexEntry = entries.First(e => e.Key == ".test.txt.sozip.idx");
        Assert.True(indexEntry.IsSozipIndexFile);
    }

    [Fact]
    public async Task SOZip_Reader_DetectsIndexFileByName()
    {
        // Create a zip with a SOZip index file (by name pattern)
        using var memoryStream = new MemoryStream();

        using (
            var writer = WriterFactory.Open(
                memoryStream,
                ArchiveType.Zip,
                new ZipWriterOptions(CompressionType.Deflate) { LeaveStreamOpen = true }
            )
        )
        {
            // Write a regular file
            writer.Write("test.txt", new MemoryStream(Encoding.UTF8.GetBytes("Hello World")));

            // Write a file that looks like a SOZip index (by name pattern)
            var indexData = new SOZipIndex(
                chunkSize: 32768,
                uncompressedSize: 100,
                compressedSize: 50,
                compressedOffsets: new ulong[] { 0 }
            );
            writer.Write(".test.txt.sozip.idx", new MemoryStream(indexData.ToByteArray()));
        }

        memoryStream.Position = 0;

        // Test with ZipReader
        using Stream stream = new ForwardOnlyStream(memoryStream);
        using var reader = ZipReader.Open(stream);

        var foundRegular = false;
        var foundIndex = false;

        while (await reader.MoveToNextEntryAsync())
        {
            if (reader.Entry.Key == "test.txt")
            {
                foundRegular = true;
                Assert.False(reader.Entry.IsSozipIndexFile);
                Assert.False(reader.Entry.IsSozip);
            }
            else if (reader.Entry.Key == ".test.txt.sozip.idx")
            {
                foundIndex = true;
                Assert.True(reader.Entry.IsSozipIndexFile);
            }
        }

        Assert.True(foundRegular, "Regular entry not found");
        Assert.True(foundIndex, "Index entry not found");
    }
}
@@ -1,358 +0,0 @@
using System;
using System.IO;
using System.Linq;
using System.Text;
using SharpCompress.Archives.Zip;
using SharpCompress.Common;
using SharpCompress.Common.Zip.SOZip;
using SharpCompress.Readers;
using SharpCompress.Writers;
using SharpCompress.Writers.Zip;
using Xunit;

namespace SharpCompress.Test.Zip;

public class SoZipWriterTests : TestBase
{
    [Fact]
    public void SOZipIndex_RoundTrip()
    {
        // Create an index
        var offsets = new ulong[] { 0, 1024, 2048, 3072 };
        var originalIndex = new SOZipIndex(
            chunkSize: 32768,
            uncompressedSize: 100000,
            compressedSize: 50000,
            compressedOffsets: offsets
        );

        // Serialize to bytes
        var bytes = originalIndex.ToByteArray();

        // Deserialize back
        var parsedIndex = SOZipIndex.Read(bytes);

        // Verify all fields
        Assert.Equal(SOZipIndex.SOZIP_VERSION, parsedIndex.Version);
        Assert.Equal(32768u, parsedIndex.ChunkSize);
        Assert.Equal(100000ul, parsedIndex.UncompressedSize);
        Assert.Equal(50000ul, parsedIndex.CompressedSize);
        Assert.Equal(4u, parsedIndex.OffsetCount);
        Assert.Equal(offsets, parsedIndex.CompressedOffsets);
    }

    [Fact]
    public void SOZipIndex_Read_InvalidMagic_ThrowsException()
    {
        var invalidData = new byte[] { 0x00, 0x00, 0x00, 0x00 };

        var exception = Assert.Throws<InvalidDataException>(() => SOZipIndex.Read(invalidData));

        Assert.Contains("magic number mismatch", exception.Message);
    }

    [Fact]
    public void SOZipIndex_GetChunkIndex()
    {
        var offsets = new ulong[] { 0, 1000, 2000, 3000, 4000 };
        var index = new SOZipIndex(
            chunkSize: 32768,
            uncompressedSize: 163840, // 5 * 32768
            compressedSize: 5000,
            compressedOffsets: offsets
        );

        Assert.Equal(0, index.GetChunkIndex(0));
        Assert.Equal(0, index.GetChunkIndex(32767));
        Assert.Equal(1, index.GetChunkIndex(32768));
        Assert.Equal(2, index.GetChunkIndex(65536));
        Assert.Equal(4, index.GetChunkIndex(163839));
    }

    [Fact]
    public void SOZipIndex_GetCompressedOffset()
    {
        var offsets = new ulong[] { 0, 1000, 2000, 3000, 4000 };
        var index = new SOZipIndex(
            chunkSize: 32768,
            uncompressedSize: 163840,
            compressedSize: 5000,
            compressedOffsets: offsets
        );

        Assert.Equal(0ul, index.GetCompressedOffset(0));
        Assert.Equal(1000ul, index.GetCompressedOffset(1));
        Assert.Equal(2000ul, index.GetCompressedOffset(2));
        Assert.Equal(3000ul, index.GetCompressedOffset(3));
        Assert.Equal(4000ul, index.GetCompressedOffset(4));
    }

    [Fact]
    public void SOZipIndex_GetUncompressedOffset()
    {
        var offsets = new ulong[] { 0, 1000, 2000, 3000, 4000 };
        var index = new SOZipIndex(
            chunkSize: 32768,
            uncompressedSize: 163840,
            compressedSize: 5000,
            compressedOffsets: offsets
        );

        Assert.Equal(0ul, index.GetUncompressedOffset(0));
        Assert.Equal(32768ul, index.GetUncompressedOffset(1));
        Assert.Equal(65536ul, index.GetUncompressedOffset(2));
        Assert.Equal(98304ul, index.GetUncompressedOffset(3));
        Assert.Equal(131072ul, index.GetUncompressedOffset(4));
    }

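    // Worked example of the lookup arithmetic asserted above (chunkSize = 32768):
    // GetChunkIndex is integer division by the chunk size, GetUncompressedOffset
    // multiplies the chunk index back, and GetCompressedOffset reads the stored
    // offset table. So uncompressed offset 65536 maps to chunk 2, which starts at
    // compressed offset compressedOffsets[2] = 2000. A minimal stand-alone
    // equivalent (illustrative, not the library's implementation):
    private static int ChunkIndexOf(ulong uncompressedOffset, uint chunkSize) =>
        (int)(uncompressedOffset / chunkSize);

    private static ulong UncompressedOffsetOf(int chunkIndex, uint chunkSize) =>
        (ulong)chunkIndex * chunkSize;
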
    [Fact]
    public void SOZipIndex_GetIndexFileName()
    {
        Assert.Equal(".file.txt.sozip.idx", SOZipIndex.GetIndexFileName("file.txt"));
        Assert.Equal("dir/.file.txt.sozip.idx", SOZipIndex.GetIndexFileName("dir/file.txt"));
        Assert.Equal("a/b/.file.txt.sozip.idx", SOZipIndex.GetIndexFileName("a/b/file.txt"));
    }

    [Fact]
    public void SOZipIndex_IsIndexFile()
    {
        Assert.True(SOZipIndex.IsIndexFile(".file.txt.sozip.idx"));
        Assert.True(SOZipIndex.IsIndexFile("dir/.file.txt.sozip.idx"));
        Assert.True(SOZipIndex.IsIndexFile(".test.sozip.idx"));

        Assert.False(SOZipIndex.IsIndexFile("file.txt"));
        Assert.False(SOZipIndex.IsIndexFile("file.sozip.idx")); // Missing leading dot
        Assert.False(SOZipIndex.IsIndexFile(".file.txt")); // Missing .sozip.idx
        Assert.False(SOZipIndex.IsIndexFile(""));
        Assert.False(SOZipIndex.IsIndexFile(null!));
    }

    [Fact]
    public void SOZipIndex_GetMainFileName()
    {
        Assert.Equal("file.txt", SOZipIndex.GetMainFileName(".file.txt.sozip.idx"));
        Assert.Equal("dir/file.txt", SOZipIndex.GetMainFileName("dir/.file.txt.sozip.idx"));
        Assert.Equal("test", SOZipIndex.GetMainFileName(".test.sozip.idx"));

        Assert.Null(SOZipIndex.GetMainFileName("file.txt"));
        Assert.Null(SOZipIndex.GetMainFileName(""));
    }

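    // Hypothetical stand-alone equivalent of the naming helpers tested above:
    // the index for "dir/file.txt" is "dir/.file.txt.sozip.idx" (a leading dot on
    // the file name only, plus the ".sozip.idx" suffix).
    private static string IndexNameFor(string entryKey)
    {
        var slash = entryKey.LastIndexOf('/');
        var dir = slash < 0 ? string.Empty : entryKey.Substring(0, slash + 1);
        var name = slash < 0 ? entryKey : entryKey.Substring(slash + 1);
        return dir + "." + name + ".sozip.idx";
    }
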
    [Fact]
    public void ZipEntry_IsSozipIndexFile_Detection()
    {
        // Create a zip with a file that has a SOZip index file name pattern
        using var memoryStream = new MemoryStream();

        using (
            var writer = WriterFactory.Open(
                memoryStream,
                ArchiveType.Zip,
                new ZipWriterOptions(CompressionType.Deflate) { LeaveStreamOpen = true }
            )
        )
        {
            // Write a regular file
            writer.Write("test.txt", new MemoryStream(Encoding.UTF8.GetBytes("Hello World")));

            // Write a file with SOZip index name pattern
            var indexData = new SOZipIndex(
                chunkSize: 32768,
                uncompressedSize: 100,
                compressedSize: 50,
                compressedOffsets: new ulong[] { 0 }
            );
            writer.Write(".test.txt.sozip.idx", new MemoryStream(indexData.ToByteArray()));
        }

        memoryStream.Position = 0;

        using var archive = ZipArchive.Open(memoryStream);
        var entries = archive.Entries.ToList();

        Assert.Equal(2, entries.Count);

        var regularEntry = entries.First(e => e.Key == "test.txt");
        Assert.False(regularEntry.IsSozipIndexFile);
        Assert.False(regularEntry.IsSozip); // No SOZip extra field

        var indexEntry = entries.First(e => e.Key == ".test.txt.sozip.idx");
        Assert.True(indexEntry.IsSozipIndexFile);
    }

    [Fact]
    public void ZipWriterOptions_SOZipDefaults()
    {
        var options = new ZipWriterOptions(CompressionType.Deflate);

        Assert.False(options.EnableSOZip);
        Assert.Equal((int)SOZipIndex.DEFAULT_CHUNK_SIZE, options.SOZipChunkSize);
        Assert.Equal(1048576L, options.SOZipMinFileSize); // 1MB
    }

    [Fact]
    public void ZipWriterEntryOptions_SOZipDefaults()
    {
        var options = new ZipWriterEntryOptions();

        Assert.Null(options.EnableSOZip);
    }

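    // Illustrative sketch of per-entry control. Assumption inferred from the
    // nullable default asserted above: a non-null ZipWriterEntryOptions.EnableSOZip
    // overrides the writer-level setting, while null falls back to it.
    private static void WriteWithPerEntryOverride(Stream output, Stream bigStream)
    {
        var writerOptions = new ZipWriterOptions(CompressionType.Deflate) { EnableSOZip = true };
        using var writer = new ZipWriter(output, writerOptions);
        // Opt this one entry out of SOZip indexing while the writer default stays on.
        writer.Write("big.bin", bigStream, new ZipWriterEntryOptions { EnableSOZip = false });
    }
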
    [Fact]
    public void SOZip_RoundTrip_CompressAndDecompress()
    {
        // Create a SOZip archive from Original files
        var archivePath = Path.Combine(SCRATCH2_FILES_PATH, "test.sozip.zip");

        using (var stream = File.Create(archivePath))
        {
            var options = new ZipWriterOptions(CompressionType.Deflate)
            {
                EnableSOZip = true,
                SOZipMinFileSize = 1024, // 1KB to ensure test files qualify
                LeaveStreamOpen = false,
            };

            using var writer = new ZipWriter(stream, options);

            // Write all files from Original directory
            var files = Directory.GetFiles(ORIGINAL_FILES_PATH, "*", SearchOption.AllDirectories);
            foreach (var filePath in files)
            {
                var relativePath = filePath
                    .Substring(ORIGINAL_FILES_PATH.Length + 1)
                    .Replace('\\', '/');
                using var fileStream = File.OpenRead(filePath);
                writer.Write(relativePath, fileStream, new ZipWriterEntryOptions());
            }
        }

        // Validate the archive was created and has files
        Assert.True(File.Exists(archivePath));

        // Validate the archive has SOZip entries
        using (var stream = File.OpenRead(archivePath))
        {
            using var archive = ZipArchive.Open(stream);

            var allEntries = archive.Entries.ToList();

            // Archive should have files
            Assert.NotEmpty(allEntries);

            var sozipIndexEntries = allEntries.Where(e => e.IsSozipIndexFile).ToList();

            // Should have at least one SOZip index file
            Assert.NotEmpty(sozipIndexEntries);

            // Verify index files have valid SOZip index data
            foreach (var indexEntry in sozipIndexEntries)
            {
                // Check that the entry is stored (not compressed)
                Assert.Equal(CompressionType.None, indexEntry.CompressionType);

                using var indexStream = indexEntry.OpenEntryStream();
                using var memStream = new MemoryStream();
                indexStream.CopyTo(memStream);
                var indexBytes = memStream.ToArray();

                // Sanity-check the payload is at least large enough for the header magic
                Assert.True(
                    indexBytes.Length >= 4,
                    $"Index file too small: {indexBytes.Length} bytes"
                );

                // Should be able to parse the index without exception
                var index = SOZipIndex.Read(indexBytes);
                Assert.Equal(SOZipIndex.SOZIP_VERSION, index.Version);
                Assert.True(index.ChunkSize > 0);
                Assert.True(index.UncompressedSize > 0);
                Assert.True(index.OffsetCount > 0);

                // Verify there's a corresponding data file
                var mainFileName = SOZipIndex.GetMainFileName(indexEntry.Key!);
                Assert.NotNull(mainFileName);
                Assert.Contains(allEntries, e => e.Key == mainFileName);
            }
        }

        // Read and decompress the archive
        using (var stream = File.OpenRead(archivePath))
        {
            using var reader = ReaderFactory.Open(stream);
            reader.WriteAllToDirectory(
                SCRATCH_FILES_PATH,
                new ExtractionOptions { ExtractFullPath = true }
            );
        }

        // Verify extracted files match originals
        VerifyFiles(true);
    }

    [Fact]
    public void CreateSOZipTestArchive()
    {
        // Create a SOZip test archive that can be committed to the repository
        var archivePath = Path.Combine(TEST_ARCHIVES_PATH, "Zip.sozip.zip");

        using (var stream = File.Create(archivePath))
        {
            var options = new ZipWriterOptions(CompressionType.Deflate)
            {
                EnableSOZip = true,
                SOZipMinFileSize = 100, // Low threshold to ensure test content is optimized
                SOZipChunkSize = 1024, // Small chunks for testing
                LeaveStreamOpen = false,
            };

            using var writer = new ZipWriter(stream, options);

            // Create test content that's large enough to span multiple chunks
            var largeContent = new string('A', 5000); // 5000 bytes of 'A's

            // Write a file with enough data to be SOZip-optimized
            writer.Write(
                "data.txt",
                new MemoryStream(Encoding.UTF8.GetBytes(largeContent)),
                new ZipWriterEntryOptions()
            );

            // Write a smaller file that won't be SOZip-optimized
            writer.Write(
                "small.txt",
                new MemoryStream(Encoding.UTF8.GetBytes("Small content")),
                new ZipWriterEntryOptions()
            );
        }

        // Validate the archive was created
        Assert.True(File.Exists(archivePath));

        // Validate it's a valid SOZip archive
        using (var stream = File.OpenRead(archivePath))
        {
            using var archive = ZipArchive.Open(stream);
            var entries = archive.Entries.ToList();

            // Should have data file, small file, and index file
            Assert.Equal(3, entries.Count);

            // Verify we have one SOZip index file
            var indexFiles = entries.Where(e => e.IsSozipIndexFile).ToList();
            Assert.Single(indexFiles);

            // Verify the index file
            var indexEntry = indexFiles.First();
            Assert.Equal(".data.txt.sozip.idx", indexEntry.Key);

            // Verify the data file can be read
            var dataEntry = entries.First(e => e.Key == "data.txt");
            using var dataStream = dataEntry.OpenEntryStream();
            using var reader = new StreamReader(dataStream);
            var content = reader.ReadToEnd();
            Assert.Equal(5000, content.Length);
            Assert.True(content.All(c => c == 'A'));
        }
    }
}

@@ -194,38 +194,4 @@ public class ZipArchiveAsyncTests : ArchiveTests
        }
        VerifyFiles();
    }

    [Fact]
    public async Task Zip_Deflate_Archive_WriteToDirectoryAsync()
    {
        using (Stream stream = File.OpenRead(Path.Combine(TEST_ARCHIVES_PATH, "Zip.deflate.zip")))
        using (var archive = ZipArchive.Open(stream))
        {
            await archive.WriteToDirectoryAsync(
                SCRATCH_FILES_PATH,
                new ExtractionOptions { ExtractFullPath = true, Overwrite = true }
            );
        }
        VerifyFiles();
    }

    [Fact]
    public async Task Zip_Deflate_Archive_WriteToDirectoryAsync_WithProgress()
    {
        var progressReports = new System.Collections.Generic.List<ProgressReport>();
        var progress = new Progress<ProgressReport>(report => progressReports.Add(report));

        using (Stream stream = File.OpenRead(Path.Combine(TEST_ARCHIVES_PATH, "Zip.deflate.zip")))
        using (var archive = ZipArchive.Open(stream))
        {
            await archive.WriteToDirectoryAsync(
                SCRATCH_FILES_PATH,
                new ExtractionOptions { ExtractFullPath = true, Overwrite = true },
                progress
            );
        }

        VerifyFiles();
        Assert.True(progressReports.Count > 0, "Progress reports should be generated");
    }
}

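// Illustrative usage of the progress-aware API exercised above (the same calls
// as the test, outside the test harness): route extraction progress events to
// any callback via Progress<ProgressReport>. The wrapper class and method names
// here are hypothetical.
internal static class ZipProgressSketch
{
    public static async Task ExtractWithProgressAsync(string archivePath, string destination)
    {
        // Progress<T> marshals each report back to the captured SynchronizationContext.
        var progress = new Progress<ProgressReport>(report => Console.WriteLine(report));
        using Stream stream = File.OpenRead(archivePath);
        using var archive = ZipArchive.Open(stream);
        await archive.WriteToDirectoryAsync(
            destination,
            new ExtractionOptions { ExtractFullPath = true, Overwrite = true },
            progress
        );
    }
}
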