Compare commits

69 Commits

Author SHA1 Message Date
Adam Hathcock
b010cce1ca Merge branch 'master' into adam/zstd 2025-10-13 17:03:58 +01:00
Adam Hathcock
ee2cbc8051 fmt 2025-10-13 17:02:41 +01:00
Adam Hathcock
906baf18d2 fix namespaces 2025-10-13 17:02:21 +01:00
Adam Hathcock
51bc9dc20e Merge pull request #950 from adamhathcock/adamhathcock-patch-1
Configure Dependabot for NuGet updates
2025-10-13 16:52:07 +01:00
Adam Hathcock
e45ac6bfa9 Update .github/dependabot.yml
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-10-13 16:48:55 +01:00
Adam Hathcock
44d1cbdb0c Configure Dependabot for NuGet updates
Added NuGet package ecosystem configuration for Dependabot.
2025-10-13 16:47:46 +01:00
Adam Hathcock
0a7ffd003b ran formatting 2025-10-13 16:42:22 +01:00
Adam Hathcock
b545973c55 stuff compiles now 2025-10-13 16:41:58 +01:00
Adam Hathcock
999af800af add more non legacy stuff 2025-10-13 16:21:05 +01:00
Adam Hathcock
5b5336f456 add first pass of zstdsharp into library. Full fat framework doesn't compile. Marked lib as not CLS compliant again 2025-10-13 13:02:48 +01:00
Adam Hathcock
3fa7e854d3 Merge pull request #948 from adamhathcock/adam/update-0410
update to 0.41.0 and change symbols type
2025-10-13 12:32:54 +01:00
Adam Hathcock
0f5c080be6 update to 0.41.0 and change symbols type 2025-10-13 12:29:02 +01:00
Adam Hathcock
871009ef8d Merge pull request #947 from adamhathcock/adam/update-deps
Update dependencies and csharpier
2025-10-13 12:13:51 +01:00
Adam Hathcock
89a565c6c4 format 2025-10-13 12:10:42 +01:00
Adam Hathcock
6ed60e4d5f adjust Microsoft.NETFramework.ReferenceAssemblies 2025-10-13 12:08:39 +01:00
Adam Hathcock
4dab1df3ae adjust usage of sourcelink 2025-10-13 12:06:34 +01:00
Adam Hathcock
7b7aa2cdf0 Merge remote-tracking branch 'origin/master' into adam/update-deps 2025-10-13 11:56:50 +01:00
Adam Hathcock
d1e1a65f32 add agents file 2025-10-13 11:55:41 +01:00
Adam Hathcock
9d332b4ac5 add agents file 2025-10-13 11:52:18 +01:00
Adam Hathcock
fc2462e281 Update dependencies and csharpier 2025-10-13 10:40:43 +01:00
Adam Hathcock
fb2cddabf0 Merge pull request #945 from mitchcapper/archive_extension_hinting_pr
Extension hinting for ReaderFactory for better first try factory success
2025-09-29 09:02:11 +01:00
Mitch Capper
33481a9465 Extension hinting for ReaderFactory for better first try factory success
Also used by TarFactory to hint what compression type to attempt first
2025-09-24 00:04:07 -07:00
Adam Hathcock
45de87cb97 Merge pull request #943 from mitchcapper/zstandard_reading_pr
ZStandard tar support
2025-09-23 13:34:26 +01:00
Mitch Capper
553c533ada ZStandard tar support 2025-09-23 03:26:33 -07:00
Adam Hathcock
634e562b93 Merge pull request #935 from Nanook/bugfix/StreamBufferFix
Rewind buffer fix for directory extract.
2025-08-22 14:08:39 +01:00
Adam Hathcock
c789ead590 Merge branch 'master' into bugfix/StreamBufferFix 2025-08-22 14:05:35 +01:00
Adam Hathcock
d23560a441 Merge pull request #939 from Morilli/fix-winaescryptostream-dispose
Fix WinzipAesCryptoStream potentially not getting disposed
2025-08-22 14:03:06 +01:00
Morilli
9f306966db Adjust logic in CreateDecompressionStream to not wrap the returned stream
A ReadOnlySubStream never disposes its passed-in stream, which is problematic when that stream needs to be disposed. It's hard to tell who exactly has ownership over which stream here, but I'll just assume that whatever stream is passed in here should be disposed.
2025-08-21 20:08:57 +02:00
Morilli
8eea2cef97 add failing test 2025-08-21 20:08:34 +02:00
Nanook
3abbb89c2e Comment typo edit. 2025-08-01 21:50:36 +01:00
Nanook
76de7d54c7 Rewind buffer fix for directory extract. 2025-08-01 21:47:09 +01:00
Adam Hathcock
35aee6e066 Merge pull request #934 from Nanook/feature/zstd_zip_writing
Zip ZStandard Writing with tests. Level support.
2025-07-24 08:24:32 +01:00
Nanook
8a0152fe7c Fixed up test issue after copilot "fixes" :S. 2025-07-24 01:33:18 +01:00
Nanook
7769435fc8 CSharpier fixes after copilot tweaks. 2025-07-24 01:31:30 +01:00
Nanook
ae311ea66e Update tests/SharpCompress.Test/Zip/TestPseudoTextStream.cs
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-07-24 01:28:56 +01:00
Nanook
d15816f162 Update tests/SharpCompress.Test/Zip/TestPseudoTextStream.cs
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-07-24 01:28:40 +01:00
Nanook
5cbb31c559 Update tests/SharpCompress.Test/Zip/TestPseudoTextStream.cs
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-07-24 01:28:23 +01:00
Nanook
b7f5b36f2b Update src/SharpCompress/Writers/Zip/ZipWriter.cs
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-07-24 01:28:11 +01:00
Nanook
e9dd413de6 CSharpier formatting. 2025-07-24 00:46:09 +01:00
Nanook
c5de3d8cc1 Zip ZStandard Writing with tests. Level support. 2025-07-24 00:23:55 +01:00
Adam Hathcock
624c385e27 Merge pull request #930 from Nanook/feature/StreamStack
Added IStreamStack for debugging and configurable buffer management. …
2025-07-23 08:48:58 +01:00
Adam Hathcock
893890c985 Merge branch 'master' into feature/StreamStack 2025-07-23 08:46:16 +01:00
Adam Hathcock
e12efc8b52 Merge pull request #933 from Morilli/zipentry-attribute
Implement `Attrib` for `ZipEntry`
2025-07-23 08:45:04 +01:00
Morilli
015dfa41b1 implement Attrib for ZipEntrys 2025-07-22 20:12:22 +02:00
Morilli
5e72ce5fbe add failing test 2025-07-22 20:11:13 +02:00
Adam Hathcock
236e653176 Merge branch 'master' into feature/StreamStack 2025-07-22 11:55:57 +01:00
Adam Hathcock
3946c3ba03 Merge pull request #931 from SimonCahill/feat/add-throw-on-unseekable-stream
Added ArgumentException to Archive.Open implementations
2025-07-22 11:54:23 +01:00
Adam Hathcock
5bdd39e6eb Merge branch 'master' into feat/add-throw-on-unseekable-stream 2025-07-22 11:50:04 +01:00
Adam Hathcock
46267c0531 Merge pull request #929 from Morilli/fix-zipentry-comment
Fix zipentry comment implementation
2025-07-22 11:49:12 +01:00
Adam Hathcock
7b96eb7704 Merge pull request #928 from Morilli/fix-dotsettings
fix DotSettings options to conform to current code style and editorconfig
2025-07-22 11:47:36 +01:00
Nanook
60d5955101 Final tidy. 2025-07-22 00:12:30 +01:00
Nanook
587675e488 Fixed some missing features caused by merging an earlier branch than required. 2025-07-21 23:45:57 +01:00
Simon Cahill
c0ae319c63 Reverted debug change 2025-07-21 15:41:33 +02:00
Simon Cahill
d810f8d8db Reverted back; I don't know how this file changed 2025-07-21 15:38:14 +02:00
Simon Cahill
a88390a546 Added ArgumentException which is thrown when a non-seekable Stream instance is passed 2025-07-21 15:34:03 +02:00
Nanook
8575e5615d Formatting fix. 2025-07-20 19:18:31 +01:00
Nanook
5fe9516c09 Various copilot suggestions. Tests still passing. 2025-07-20 19:16:11 +01:00
Nanook
938775789d Formated with Sharpier. 2025-07-20 18:11:37 +01:00
Nanook
f37bebe51f Merge branch 'master' into feature/StreamStack 2025-07-20 17:46:28 +01:00
Nanook
21f14cd3f2 Added IStreamStack for debugging and configurable buffer management. Added SharpCompressStream to consolodate streams to help simplify debugging. All unit tests passing. 2025-07-20 17:35:22 +01:00
Morilli
4e7baeb2c9 fix zipentry comment being lost after reading local entry header 2025-07-19 19:56:41 +02:00
Morilli
d78a682dd8 add failing test 2025-07-19 19:56:38 +02:00
Morilli
44021b7abc fix DotSettings options to conform to current code style and editorconfig 2025-07-19 19:30:52 +02:00
Adam Hathcock
7f9e543213 Merge pull request #924 from Morilli/7zip-1block-solid-archive
Fix 7-zip solid archive detection
2025-07-14 09:12:43 +01:00
Morilli
c489e5a59b fix SevenZipArchive solid detection 2025-07-13 15:09:59 +02:00
Morilli
8de30db7f9 add failing test 2025-07-13 15:08:45 +02:00
Adam Hathcock
415f0c6774 Merge pull request #921 from Morilli/fix-volume-filename
Fix volume FileName property potentially missing
2025-07-07 09:22:45 +01:00
Morilli
aa9cf195a5 Fix volume FileName property potentially missing 2025-07-06 21:56:54 +02:00
Adam Hathcock
6412fc8135 Mark for 0.40 2025-06-03 08:48:34 +01:00
367 changed files with 63576 additions and 5026 deletions

View File

@@ -3,7 +3,7 @@
"isRoot": true,
"tools": {
"csharpier": {
"version": "1.0.2",
"version": "1.1.2",
"commands": [
"csharpier"
],

View File

@@ -1,6 +1,13 @@
version: 2
updates:
- package-ecosystem: "github-actions" # search for actions - there are other options available
directory: "/" # search in .github/workflows under root `/`
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "weekly" # check for action update every week
interval: "weekly"
- package-ecosystem: "nuget"
directory: "/" # change to "/src/YourProject" if .csproj files are in subfolders
schedule:
interval: "weekly"
open-pull-requests-limit: 5
# optional: target-branch: "master"

AGENTS.md — new file, 64 lines added
View File

@@ -0,0 +1,64 @@
---
description: 'Guidelines for building C# applications'
applyTo: '**/*.cs'
---
# C# Development
## C# Instructions
- Always use the latest version C#, currently C# 13 features.
- Write clear and concise comments for each function.
## General Instructions
- Make only high confidence suggestions when reviewing code changes.
- Write code with good maintainability practices, including comments on why certain design decisions were made.
- Handle edge cases and write clear exception handling.
- For libraries or external dependencies, mention their usage and purpose in comments.
## Naming Conventions
- Follow PascalCase for component names, method names, and public members.
- Use camelCase for private fields and local variables.
- Prefix interface names with "I" (e.g., IUserService).
## Code Formatting
- Use CSharpier for all code formatting to ensure consistent style across the project.
- Install CSharpier globally: `dotnet tool install -g csharpier`
- Format files with: `dotnet csharpier format .`
- Configure your IDE to format on save using CSharpier.
- CSharpier configuration can be customized via `.csharpierrc` file in the project root.
- Trust CSharpier's opinionated formatting decisions to maintain consistency.
## Project Setup and Structure
- Guide users through creating a new .NET project with the appropriate templates.
- Explain the purpose of each generated file and folder to build understanding of the project structure.
- Demonstrate how to organize code using feature folders or domain-driven design principles.
- Show proper separation of concerns with models, services, and data access layers.
- Explain the Program.cs and configuration system in ASP.NET Core 9 including environment-specific settings.
## Nullable Reference Types
- Declare variables non-nullable, and check for `null` at entry points.
- Always use `is null` or `is not null` instead of `== null` or `!= null`.
- Trust the C# null annotations and don't add null checks when the type system says a value cannot be null.
## Testing
- Always include test cases for critical paths of the application.
- Guide users through creating unit tests.
- Do not emit "Act", "Arrange" or "Assert" comments.
- Copy existing style in nearby files for test method names and capitalization.
- Explain integration testing approaches for API endpoints.
- Demonstrate how to mock dependencies for effective testing.
- Show how to test authentication and authorization logic.
- Explain test-driven development principles as applied to API development.
## Performance Optimization
- Guide users on implementing caching strategies (in-memory, distributed, response caching).
- Explain asynchronous programming patterns and why they matter for API performance.
- Demonstrate pagination, filtering, and sorting for large data sets.
- Show how to implement compression and other performance optimizations.
- Explain how to measure and benchmark API performance.
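The guidelines above are prose; a minimal sketch of how the naming and nullability rules combine in practice follows, using purely illustrative IUserService/UserService types that are not part of the repository (private fields are _-prefixed to match the repository's DotSettings rules).

using System;

// Illustrative types only; they demonstrate the AGENTS.md conventions above.
public interface IUserService
{
    string? FindDisplayName(int userId);
}

public sealed class UserService : IUserService
{
    // Private instance field: camelCase with a leading underscore.
    private readonly Func<int, string?> _lookup;

    public UserService(Func<int, string?> lookup)
    {
        // Check for null at the entry point; prefer `is null` over `== null`.
        if (lookup is null)
        {
            throw new ArgumentNullException(nameof(lookup));
        }
        _lookup = lookup;
    }

    // Public members use PascalCase and trust the nullable annotations.
    public string? FindDisplayName(int userId)
    {
        var name = _lookup(userId);
        return name is not null ? name.Trim() : null;
    }
}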

View File

@@ -1,19 +1,20 @@
<Project>
<ItemGroup>
<PackageVersion Include="Bullseye" Version="6.0.0" />
<PackageVersion Include="AwesomeAssertions" Version="9.0.0" />
<PackageVersion Include="AwesomeAssertions" Version="9.2.0" />
<PackageVersion Include="Glob" Version="1.1.9" />
<PackageVersion Include="Microsoft.Bcl.AsyncInterfaces" Version="8.0.0" />
<PackageVersion Include="Microsoft.NET.Test.Sdk" Version="17.13.0" />
<PackageVersion Include="Mono.Posix.NETStandard" Version="1.0.0" />
<PackageVersion Include="SimpleExec" Version="12.0.0" />
<PackageVersion Include="System.Buffers" Version="4.6.0" />
<PackageVersion Include="System.Memory" Version="4.6.0" />
<PackageVersion Include="System.Buffers" Version="4.6.1" />
<PackageVersion Include="System.Memory" Version="4.6.3" />
<PackageVersion Include="System.Text.Encoding.CodePages" Version="8.0.0" />
<PackageVersion Include="xunit" Version="2.9.3" />
<PackageVersion Include="xunit.runner.visualstudio" Version="3.1.0" />
<PackageVersion Include="xunit.runner.visualstudio" Version="3.1.5" />
<PackageVersion Include="xunit.SkippableFact" Version="1.5.23" />
<PackageVersion Include="ZstdSharp.Port" Version="0.8.5" />
<GlobalPackageReference Include="Microsoft.SourceLink.GitHub" Version="8.0.0" />
<PackageVersion Include="ZstdSharp.Port" Version="0.8.6" />
<PackageVersion Include="Microsoft.SourceLink.GitHub" Version="8.0.0" />
<PackageVersion Include="Microsoft.NETFramework.ReferenceAssemblies" Version="1.0.3" />
</ItemGroup>
</Project>

View File

@@ -15,6 +15,7 @@
| Tar | None | Both | TarArchive | TarReader | TarWriter (3) |
| Tar.GZip | DEFLATE | Both | TarArchive | TarReader | TarWriter (3) |
| Tar.BZip2 | BZip2 | Both | TarArchive | TarReader | TarWriter (3) |
| Tar.Zstandard | ZStandard | Decompress | TarArchive | TarReader | N/A |
| Tar.LZip | LZMA | Both | TarArchive | TarReader | TarWriter (3) |
| Tar.XZ | LZMA2 | Decompress | TarArchive | TarReader | TarWriter (3) |
| GZip (single file) | DEFLATE | Both | GZipArchive | GZipReader | GZipWriter |
@@ -41,6 +42,7 @@ For those who want to directly compress/decompress bits. The single file formats
| ADCStream | Decompress |
| LZipStream | Both |
| XZStream | Decompress |
| ZStandardStream | Decompress |
## Archive Formats vs Compression

View File

@@ -1,6 +1,6 @@
# SharpCompress
SharpCompress is a compression library in pure C# for .NET Framework 4.62, .NET Standard 2.1, .NET 6.0 and NET 8.0 that can unrar, un7zip, unzip, untar unbzip2, ungzip, unlzip with forward-only reading and file random access APIs. Write support for zip/tar/bzip2/gzip/lzip are implemented.
SharpCompress is a compression library in pure C# for .NET Framework 4.62, .NET Standard 2.1, .NET 6.0 and NET 8.0 that can unrar, un7zip, unzip, untar unbzip2, ungzip, unlzip, unzstd with forward-only reading and file random access APIs. Write support for zip/tar/bzip2/gzip/lzip are implemented.
The major feature is support for non-seekable streams so large files can be processed on the fly (i.e. download stream).
@@ -20,10 +20,12 @@ In general, I recommend GZip (Deflate)/BZip2 (BZip)/LZip (LZMA) as the simplicit
Zip is okay, but it's a very hap-hazard format and the variation in headers and implementations makes it hard to get correct. Uses Deflate by default but supports a lot of compression methods.
RAR is not recommended as it's a propriatory format and the compression is closed source. Use Tar/LZip for LZMA
RAR is not recommended as it's a proprietary format and the compression is closed source. Use Tar/LZip for LZMA
7Zip and XZ both are overly complicated. 7Zip does not support streamable formats. XZ has known holes explained here: (http://www.nongnu.org/lzip/xz_inadequate.html) Use Tar/LZip for LZMA compression instead.
ZStandard is an efficient format that works well for streaming with a flexible compression level to tweak the speed/performance trade off you are looking for. We currently only implement decompression for ZStandard but as we leverage the [ZstdSharp](https://github.com/oleg-st/ZstdSharp) library one could likely add compression support without much trouble (PRs are welcome!).
## A Simple Request
Hi everyone. I hope you're using SharpCompress and finding it useful. Please give me feedback on what you'd like to see changed especially as far as usability goes. New feature suggestions are always welcome as well. I would also like to know what projects SharpCompress is being used in. I like seeing how it is used to give me ideas for future versions. Thanks!
@@ -40,6 +42,7 @@ I'm always looking for help or ideas. Please submit code or email with ideas. Un
* 7Zip writing
* Zip64 (Need writing and extend Reading)
* Multi-volume Zip support.
* ZStandard writing
## Version Log
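Given the new Tar.Zstandard row and the ZStandard paragraph above (and the ZStandard tar support from #943), forward-only extraction of a zstd-compressed tar might look like the sketch below; it assumes ReaderFactory auto-detects the compression in this build, and the file and output paths are made up.

using System.IO;
using SharpCompress.Common;
using SharpCompress.Readers;

// Forward-only extraction of a zstd-compressed tar (illustrative paths).
using (Stream stream = File.OpenRead("backup.tar.zst"))
using (var reader = ReaderFactory.Open(stream))
{
    while (reader.MoveToNextEntry())
    {
        if (!reader.Entry.IsDirectory)
        {
            reader.WriteEntryToDirectory(
                "output",
                new ExtractionOptions { ExtractFullPath = true, Overwrite = true }
            );
        }
    }
}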

View File

@@ -15,17 +15,17 @@
<s:String x:Key="/Default/CodeStyle/CodeCleanup/SilentCleanupProfile/@EntryValue">Basic Clean</s:String>
<s:Boolean x:Key="/Default/CodeStyle/CodeFormatting/CSharpCodeStyle/APPLY_ON_COMPLETION/@EntryValue">True</s:Boolean>
<s:String x:Key="/Default/CodeStyle/CodeFormatting/CSharpCodeStyle/ARGUMENTS_NAMED/@EntryValue">Named</s:String>
<s:String x:Key="/Default/CodeStyle/CodeFormatting/CSharpCodeStyle/ARGUMENTS_NAMED/@EntryValue">Positional</s:String>
<s:String x:Key="/Default/CodeStyle/CodeFormatting/CSharpCodeStyle/BRACES_FOR_FOR/@EntryValue">Required</s:String>
<s:String x:Key="/Default/CodeStyle/CodeFormatting/CSharpCodeStyle/BRACES_FOR_FOREACH/@EntryValue">Required</s:String>
<s:String x:Key="/Default/CodeStyle/CodeFormatting/CSharpCodeStyle/BRACES_FOR_IFELSE/@EntryValue">Required</s:String>
<s:String x:Key="/Default/CodeStyle/CodeFormatting/CSharpCodeStyle/BRACES_FOR_WHILE/@EntryValue">Required</s:String>
<s:Boolean x:Key="/Default/CodeStyle/CodeFormatting/CSharpFormat/ALIGN_FIRST_ARG_BY_PAREN/@EntryValue">True</s:Boolean>
<s:Boolean x:Key="/Default/CodeStyle/CodeFormatting/CSharpFormat/ALIGN_FIRST_ARG_BY_PAREN/@EntryValue">False</s:Boolean>
<s:Boolean x:Key="/Default/CodeStyle/CodeFormatting/CSharpFormat/ALIGN_LINQ_QUERY/@EntryValue">True</s:Boolean>
<s:Boolean x:Key="/Default/CodeStyle/CodeFormatting/CSharpFormat/ALIGN_MULTILINE_ARGUMENT/@EntryValue">True</s:Boolean>
<s:Boolean x:Key="/Default/CodeStyle/CodeFormatting/CSharpFormat/ALIGN_MULTILINE_ARRAY_AND_OBJECT_INITIALIZER/@EntryValue">True</s:Boolean>
<s:Boolean x:Key="/Default/CodeStyle/CodeFormatting/CSharpFormat/ALIGN_MULTILINE_ARRAY_AND_OBJECT_INITIALIZER/@EntryValue">False</s:Boolean>
<s:Boolean x:Key="/Default/CodeStyle/CodeFormatting/CSharpFormat/ALIGN_MULTILINE_CALLS_CHAIN/@EntryValue">True</s:Boolean>
<s:Boolean x:Key="/Default/CodeStyle/CodeFormatting/CSharpFormat/ALIGN_MULTILINE_EXPRESSION/@EntryValue">True</s:Boolean>
<s:Boolean x:Key="/Default/CodeStyle/CodeFormatting/CSharpFormat/ALIGN_MULTILINE_EXPRESSION/@EntryValue">False</s:Boolean>
<s:Boolean x:Key="/Default/CodeStyle/CodeFormatting/CSharpFormat/ALIGN_MULTILINE_EXTENDS_LIST/@EntryValue">True</s:Boolean>
<s:Boolean x:Key="/Default/CodeStyle/CodeFormatting/CSharpFormat/ALIGN_MULTILINE_FOR_STMT/@EntryValue">True</s:Boolean>
<s:Boolean x:Key="/Default/CodeStyle/CodeFormatting/CSharpFormat/ALIGN_MULTILINE_PARAMETER/@EntryValue">True</s:Boolean>
@@ -42,7 +42,7 @@
<s:String x:Key="/Default/CodeStyle/CodeFormatting/CSharpFormat/FORCE_IFELSE_BRACES_STYLE/@EntryValue">ALWAYS_ADD</s:String>
<s:String x:Key="/Default/CodeStyle/CodeFormatting/CSharpFormat/FORCE_USING_BRACES_STYLE/@EntryValue">ALWAYS_ADD</s:String>
<s:String x:Key="/Default/CodeStyle/CodeFormatting/CSharpFormat/FORCE_WHILE_BRACES_STYLE/@EntryValue">ALWAYS_ADD</s:String>
<s:Boolean x:Key="/Default/CodeStyle/CodeFormatting/CSharpFormat/INDENT_ANONYMOUS_METHOD_BLOCK/@EntryValue">True</s:Boolean>
<s:Boolean x:Key="/Default/CodeStyle/CodeFormatting/CSharpFormat/INDENT_ANONYMOUS_METHOD_BLOCK/@EntryValue">False</s:Boolean>
<s:Int64 x:Key="/Default/CodeStyle/CodeFormatting/CSharpFormat/KEEP_BLANK_LINES_IN_CODE/@EntryValue">1</s:Int64>
<s:Int64 x:Key="/Default/CodeStyle/CodeFormatting/CSharpFormat/KEEP_BLANK_LINES_IN_DECLARATIONS/@EntryValue">1</s:Int64>
<s:String x:Key="/Default/CodeStyle/CodeFormatting/CSharpFormat/PLACE_ACCESSOR_ATTRIBUTE_ON_SAME_LINE_EX/@EntryValue">NEVER</s:String>
@@ -50,12 +50,12 @@
<s:Boolean x:Key="/Default/CodeStyle/CodeFormatting/CSharpFormat/PLACE_CONSTRUCTOR_INITIALIZER_ON_SAME_LINE/@EntryValue">False</s:Boolean>
<s:Boolean x:Key="/Default/CodeStyle/CodeFormatting/CSharpFormat/PLACE_FIELD_ATTRIBUTE_ON_SAME_LINE/@EntryValue">False</s:Boolean>
<s:String x:Key="/Default/CodeStyle/CodeFormatting/CSharpFormat/PLACE_FIELD_ATTRIBUTE_ON_SAME_LINE_EX/@EntryValue">NEVER</s:String>
<s:Boolean x:Key="/Default/CodeStyle/CodeFormatting/CSharpFormat/PLACE_SIMPLE_ACCESSORHOLDER_ON_SINGLE_LINE/@EntryValue">True</s:Boolean>
<s:Boolean x:Key="/Default/CodeStyle/CodeFormatting/CSharpFormat/PLACE_SIMPLE_ACCESSORHOLDER_ON_SINGLE_LINE/@EntryValue">False</s:Boolean>
<s:Boolean x:Key="/Default/CodeStyle/CodeFormatting/CSharpFormat/PLACE_SIMPLE_ACCESSOR_ATTRIBUTE_ON_SAME_LINE/@EntryValue">False</s:Boolean>
<s:String x:Key="/Default/CodeStyle/CodeFormatting/CSharpFormat/PLACE_SIMPLE_EMBEDDED_STATEMENT_ON_SAME_LINE/@EntryValue">NEVER</s:String>
<s:Boolean x:Key="/Default/CodeStyle/CodeFormatting/CSharpFormat/PLACE_SIMPLE_INITIALIZER_ON_SINGLE_LINE/@EntryValue">True</s:Boolean>
<s:Boolean x:Key="/Default/CodeStyle/CodeFormatting/CSharpFormat/PLACE_SIMPLE_INITIALIZER_ON_SINGLE_LINE/@EntryValue">False</s:Boolean>
<s:Boolean x:Key="/Default/CodeStyle/CodeFormatting/CSharpFormat/PLACE_WHILE_ON_NEW_LINE/@EntryValue">True</s:Boolean>
<s:Boolean x:Key="/Default/CodeStyle/CodeFormatting/CSharpFormat/PLACE_WHILE_ON_NEW_LINE/@EntryValue">False</s:Boolean>
<s:String x:Key="/Default/CodeStyle/CodeFormatting/CSharpFormat/SIMPLE_EMBEDDED_STATEMENT_STYLE/@EntryValue">LINE_BREAK</s:String>
<s:Boolean x:Key="/Default/CodeStyle/CodeFormatting/CSharpFormat/SPACE_AFTER_TYPECAST_PARENTHESES/@EntryValue">False</s:Boolean>
@@ -67,13 +67,13 @@
<s:Boolean x:Key="/Default/CodeStyle/CodeFormatting/CSharpFormat/SPACE_BEFORE_TYPEOF_PARENTHESES/@EntryValue">False</s:Boolean>
<s:Boolean x:Key="/Default/CodeStyle/CodeFormatting/CSharpFormat/STICK_COMMENT/@EntryValue">False</s:Boolean>
<s:String x:Key="/Default/CodeStyle/CodeFormatting/CSharpFormat/WRAP_ARGUMENTS_STYLE/@EntryValue">CHOP_IF_LONG</s:String>
<s:String x:Key="/Default/CodeStyle/CodeFormatting/CSharpFormat/WRAP_ARRAY_INITIALIZER_STYLE/@EntryValue">CHOP_IF_LONG</s:String>
<s:String x:Key="/Default/CodeStyle/CodeFormatting/CSharpFormat/WRAP_ARRAY_INITIALIZER_STYLE/@EntryValue">CHOP_ALWAYS</s:String>
<s:String x:Key="/Default/CodeStyle/CodeFormatting/CSharpFormat/WRAP_EXTENDS_LIST_STYLE/@EntryValue">CHOP_IF_LONG</s:String>
<s:Boolean x:Key="/Default/CodeStyle/CodeFormatting/CSharpFormat/WRAP_LINES/@EntryValue">False</s:Boolean>
<s:String x:Key="/Default/CodeStyle/CodeFormatting/CSharpFormat/WRAP_PARAMETERS_STYLE/@EntryValue">CHOP_IF_LONG</s:String>
<s:String x:Key="/Default/CodeStyle/CSharpVarKeywordUsage/ForBuiltInTypes/@EntryValue">UseVarWhenEvident</s:String>
<s:String x:Key="/Default/CodeStyle/CSharpVarKeywordUsage/ForOtherTypes/@EntryValue">UseVarWhenEvident</s:String>
<s:String x:Key="/Default/CodeStyle/CSharpVarKeywordUsage/ForSimpleTypes/@EntryValue">UseVarWhenEvident</s:String>
<s:String x:Key="/Default/CodeStyle/CSharpVarKeywordUsage/ForBuiltInTypes/@EntryValue">UseVar</s:String>
<s:String x:Key="/Default/CodeStyle/CSharpVarKeywordUsage/ForOtherTypes/@EntryValue">UseVar</s:String>
<s:String x:Key="/Default/CodeStyle/CSharpVarKeywordUsage/ForSimpleTypes/@EntryValue">UseVar</s:String>
<s:String x:Key="/Default/CodeStyle/Naming/CSharpNaming/PredefinedNamingRules/=PrivateInstanceFields/@EntryIndexedValue">&lt;Policy Inspect="True" Prefix="_" Suffix="" Style="aaBb" /&gt;</s:String>
<s:String x:Key="/Default/CodeStyle/Naming/CSharpNaming/PredefinedNamingRules/=PrivateStaticFields/@EntryIndexedValue">&lt;Policy Inspect="True" Prefix="" Suffix="" Style="AA_BB" /&gt;</s:String>
@@ -122,6 +122,7 @@
<s:String x:Key="/Default/CodeStyle/Naming/XamlNaming/UserRules/=NAMESPACE_005FALIAS/@EntryIndexedValue">&lt;Policy Inspect="True" Prefix="" Suffix="" Style="aaBb" /&gt;</s:String>
<s:String x:Key="/Default/CodeStyle/Naming/XamlNaming/UserRules/=XAML_005FFIELD/@EntryIndexedValue">&lt;Policy Inspect="True" Prefix="" Suffix="" Style="AaBb" /&gt;</s:String>
<s:String x:Key="/Default/CodeStyle/Naming/XamlNaming/UserRules/=XAML_005FRESOURCE/@EntryIndexedValue">&lt;Policy Inspect="True" Prefix="" Suffix="" Style="AaBb" /&gt;</s:String>
<s:String x:Key="/Default/CustomTools/CustomToolsData/@EntryValue"></s:String>
<s:Boolean x:Key="/Default/Environment/SettingsMigration/IsMigratorApplied/=JetBrains_002EReSharper_002EPsi_002ECSharp_002ECodeStyle_002ECSharpAttributeForSingleLineMethodUpgrade/@EntryIndexedValue">True</s:Boolean>
<s:Boolean x:Key="/Default/Environment/SettingsMigration/IsMigratorApplied/=JetBrains_002EReSharper_002EPsi_002ECSharp_002ECodeStyle_002ECSharpKeepExistingMigration/@EntryIndexedValue">True</s:Boolean>
<s:Boolean x:Key="/Default/Environment/SettingsMigration/IsMigratorApplied/=JetBrains_002EReSharper_002EPsi_002ECSharp_002ECodeStyle_002ECSharpPlaceEmbeddedOnSameLineMigration/@EntryIndexedValue">True</s:Boolean>

View File

@@ -14,31 +14,11 @@
"resolved": "1.1.9",
"contentHash": "AfK5+ECWYTP7G3AAdnU8IfVj+QpGjrh9GC2mpdcJzCvtQ4pnerAGwHsxJ9D4/RnhDUz2DSzd951O/lQjQby2Sw=="
},
"Microsoft.SourceLink.GitHub": {
"type": "Direct",
"requested": "[8.0.0, )",
"resolved": "8.0.0",
"contentHash": "G5q7OqtwIyGTkeIOAc3u2ZuV/kicQaec5EaRnc0pIeSnh9LUjj+PYQrJYBURvDt7twGl2PKA7nSN0kz1Zw5bnQ==",
"dependencies": {
"Microsoft.Build.Tasks.Git": "8.0.0",
"Microsoft.SourceLink.Common": "8.0.0"
}
},
"SimpleExec": {
"type": "Direct",
"requested": "[12.0.0, )",
"resolved": "12.0.0",
"contentHash": "ptxlWtxC8vM6Y6e3h9ZTxBBkOWnWrm/Sa1HT+2i1xcXY3Hx2hmKDZP5RShPf8Xr9D+ivlrXNy57ktzyH8kyt+Q=="
},
"Microsoft.Build.Tasks.Git": {
"type": "Transitive",
"resolved": "8.0.0",
"contentHash": "bZKfSIKJRXLTuSzLudMFte/8CempWjVamNUR5eHJizsy+iuOuO/k2gnh7W0dHJmYY0tBf+gUErfluCv5mySAOQ=="
},
"Microsoft.SourceLink.Common": {
"type": "Transitive",
"resolved": "8.0.0",
"contentHash": "dk9JPxTCIevS75HyEQ0E4OVAFhB2N+V9ShCXf8Q6FkUQZDkgLI12y679Nym1YqsiSysuQskT7Z+6nUf3yab6Vw=="
}
}
}

View File

@@ -4,6 +4,7 @@ using System.IO;
using System.Linq;
using SharpCompress.Common;
using SharpCompress.Factories;
using SharpCompress.IO;
using SharpCompress.Readers;
namespace SharpCompress.Archives;
@@ -19,7 +20,7 @@ public static class ArchiveFactory
public static IArchive Open(Stream stream, ReaderOptions? readerOptions = null)
{
readerOptions ??= new ReaderOptions();
stream = new SharpCompressStream(stream, bufferSize: readerOptions.BufferSize);
return FindFactory<IArchiveFactory>(stream).Open(stream, readerOptions);
}
@@ -165,14 +166,22 @@ public static class ArchiveFactory
);
}
public static bool IsArchive(string filePath, out ArchiveType? type)
public static bool IsArchive(
string filePath,
out ArchiveType? type,
int bufferSize = ReaderOptions.DefaultBufferSize
)
{
filePath.CheckNotNullOrEmpty(nameof(filePath));
using Stream s = File.OpenRead(filePath);
return IsArchive(s, out type);
return IsArchive(s, out type, bufferSize);
}
public static bool IsArchive(Stream stream, out ArchiveType? type)
public static bool IsArchive(
Stream stream,
out ArchiveType? type,
int bufferSize = ReaderOptions.DefaultBufferSize
)
{
type = null;
stream.CheckNotNull(nameof(stream));
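A rough usage sketch for the widened signatures above: Open now routes plain streams through a SharpCompressStream sized from ReaderOptions.BufferSize, and IsArchive accepts an explicit probe buffer size. The path and buffer value below are illustrative.

using System;
using System.Linq;
using SharpCompress.Archives;
using SharpCompress.Common;

// Probe a file with an explicit buffer size (new optional parameter above).
if (ArchiveFactory.IsArchive("sample.7z", out ArchiveType? type, bufferSize: 0x20000))
{
    using var archive = ArchiveFactory.Open("sample.7z");
    Console.WriteLine($"Detected {type}; {archive.Entries.Count()} entries");
}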

View File

@@ -1,4 +1,4 @@
using System;
using System;
using System.Collections.Generic;
using System.IO;
using SharpCompress.Common;
@@ -14,8 +14,11 @@ class AutoArchiveFactory : IArchiveFactory
public IEnumerable<string> GetSupportedExtensions() => throw new NotSupportedException();
public bool IsArchive(Stream stream, string? password = null) =>
throw new NotSupportedException();
public bool IsArchive(
Stream stream,
string? password = null,
int bufferSize = ReaderOptions.DefaultBufferSize
) => throw new NotSupportedException();
public FileInfo? GetFilePart(int index, FileInfo part1) => throw new NotSupportedException();

View File

@@ -89,6 +89,12 @@ public class GZipArchive : AbstractWritableArchive<GZipArchiveEntry, GZipVolume>
public static GZipArchive Open(Stream stream, ReaderOptions? readerOptions = null)
{
stream.CheckNotNull(nameof(stream));
if (stream is not { CanSeek: true })
{
throw new ArgumentException("Stream must be seekable", nameof(stream));
}
return new GZipArchive(
new SourceStream(stream, _ => null, readerOptions ?? new ReaderOptions())
);
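The same CanSeek guard is added to the other Archive.Open overloads below (#931). A hedged sketch of how a caller might handle a non-seekable source: buffer it into a seekable stream first, or stay on the forward-only Reader API (the networkStream parameter is illustrative).

using System.IO;
using SharpCompress.Archives.GZip;

static GZipArchive OpenFromNonSeekable(Stream networkStream)
{
    // GZipArchive.Open now throws ArgumentException for non-seekable input,
    // so copy into a seekable MemoryStream first (fine for modest payloads).
    var buffered = new MemoryStream();
    networkStream.CopyTo(buffered);
    buffered.Position = 0;
    return GZipArchive.Open(buffered);
}

// For large payloads, prefer the forward-only API instead, which accepts the
// raw non-seekable stream directly: SharpCompress.Readers.ReaderFactory.Open(...).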

View File

@@ -58,7 +58,7 @@ internal sealed class GZipWritableArchiveEntry : GZipArchiveEntry, IWritableArch
{
//ensure new stream is at the start, this could be reset
stream.Seek(0, SeekOrigin.Begin);
return NonDisposingStream.Create(stream);
return SharpCompressStream.Create(stream, leaveOpen: true);
}
internal override void Close()

View File

@@ -131,6 +131,12 @@ public class RarArchive : AbstractArchive<RarArchiveEntry, RarVolume>
public static RarArchive Open(Stream stream, ReaderOptions? options = null)
{
stream.CheckNotNull(nameof(stream));
if (stream is not { CanSeek: true })
{
throw new ArgumentException("Stream must be seekable", nameof(stream));
}
return new RarArchive(new SourceStream(stream, _ => null, options ?? new ReaderOptions()));
}

View File

@@ -92,6 +92,12 @@ public class SevenZipArchive : AbstractArchive<SevenZipArchiveEntry, SevenZipVol
public static SevenZipArchive Open(Stream stream, ReaderOptions? readerOptions = null)
{
stream.CheckNotNull("stream");
if (stream is not { CanSeek: true })
{
throw new ArgumentException("Stream must be seekable", nameof(stream));
}
return new SevenZipArchive(
new SourceStream(stream, _ => null, readerOptions ?? new ReaderOptions())
);
@@ -194,7 +200,10 @@ public class SevenZipArchive : AbstractArchive<SevenZipArchiveEntry, SevenZipVol
new SevenZipReader(ReaderOptions, this);
public override bool IsSolid =>
Entries.Where(x => !x.IsDirectory).GroupBy(x => x.FilePart.Folder).Count() > 1;
Entries
.Where(x => !x.IsDirectory)
.GroupBy(x => x.FilePart.Folder)
.Any(folder => folder.Count() > 1);
public override long TotalSize =>
_database?._packSizes.Aggregate(0L, (total, packSize) => total + packSize) ?? 0;

View File

@@ -90,6 +90,12 @@ public class TarArchive : AbstractWritableArchive<TarArchiveEntry, TarVolume>
public static TarArchive Open(Stream stream, ReaderOptions? readerOptions = null)
{
stream.CheckNotNull(nameof(stream));
if (stream is not { CanSeek: true })
{
throw new ArgumentException("Stream must be seekable", nameof(stream));
}
return new TarArchive(
new SourceStream(stream, i => null, readerOptions ?? new ReaderOptions())
);

View File

@@ -58,7 +58,7 @@ internal sealed class TarWritableArchiveEntry : TarArchiveEntry, IWritableArchiv
{
//ensure new stream is at the start, this could be reset
stream.Seek(0, SeekOrigin.Begin);
return NonDisposingStream.Create(stream);
return SharpCompressStream.Create(stream, leaveOpen: true);
}
internal override void Close()

View File

@@ -111,29 +111,51 @@ public class ZipArchive : AbstractWritableArchive<ZipArchiveEntry, ZipVolume>
public static ZipArchive Open(Stream stream, ReaderOptions? readerOptions = null)
{
stream.CheckNotNull(nameof(stream));
if (stream is not { CanSeek: true })
{
throw new ArgumentException("Stream must be seekable", nameof(stream));
}
return new ZipArchive(
new SourceStream(stream, i => null, readerOptions ?? new ReaderOptions())
);
}
public static bool IsZipFile(string filePath, string? password = null) =>
IsZipFile(new FileInfo(filePath), password);
public static bool IsZipFile(
string filePath,
string? password = null,
int bufferSize = ReaderOptions.DefaultBufferSize
) => IsZipFile(new FileInfo(filePath), password, bufferSize);
public static bool IsZipFile(FileInfo fileInfo, string? password = null)
public static bool IsZipFile(
FileInfo fileInfo,
string? password = null,
int bufferSize = ReaderOptions.DefaultBufferSize
)
{
if (!fileInfo.Exists)
{
return false;
}
using Stream stream = fileInfo.OpenRead();
return IsZipFile(stream, password);
return IsZipFile(stream, password, bufferSize);
}
public static bool IsZipFile(Stream stream, string? password = null)
public static bool IsZipFile(
Stream stream,
string? password = null,
int bufferSize = ReaderOptions.DefaultBufferSize
)
{
var headerFactory = new StreamingZipHeaderFactory(password, new ArchiveEncoding(), null);
try
{
if (stream is not SharpCompressStream)
{
stream = new SharpCompressStream(stream, bufferSize: bufferSize);
}
var header = headerFactory
.ReadStreamHeader(stream)
.FirstOrDefault(x => x.ZipHeaderType != ZipHeaderType.Split);
@@ -153,11 +175,20 @@ public class ZipArchive : AbstractWritableArchive<ZipArchiveEntry, ZipVolume>
}
}
public static bool IsZipMulti(Stream stream, string? password = null)
public static bool IsZipMulti(
Stream stream,
string? password = null,
int bufferSize = ReaderOptions.DefaultBufferSize
)
{
var headerFactory = new StreamingZipHeaderFactory(password, new ArchiveEncoding(), null);
try
{
if (stream is not SharpCompressStream)
{
stream = new SharpCompressStream(stream, bufferSize: bufferSize);
}
var header = headerFactory
.ReadStreamHeader(stream)
.FirstOrDefault(x => x.ZipHeaderType != ZipHeaderType.Split);
@@ -196,7 +227,7 @@ public class ZipArchive : AbstractWritableArchive<ZipArchiveEntry, ZipVolume>
if (streams.Count() > 1) //test part 2 - true = multipart not split
{
streams[1].Position += 4; //skip the POST_DATA_DESCRIPTOR to prevent an exception
var isZip = IsZipFile(streams[1], ReaderOptions.Password);
var isZip = IsZipFile(streams[1], ReaderOptions.Password, ReaderOptions.BufferSize);
streams[1].Position -= 4;
if (isZip)
{
@@ -299,7 +330,7 @@ public class ZipArchive : AbstractWritableArchive<ZipArchiveEntry, ZipVolume>
protected override IReader CreateReaderForSolidExtraction()
{
var stream = Volumes.Single().Stream;
stream.Position = 0;
((IStreamStack)stream).StackSeek(0);
return ZipReader.Open(stream, ReaderOptions, Entries);
}
}
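A short usage note for the overloads above, assuming the caller simply wants to override the default probe buffer; the path and size are illustrative.

using System.IO;
using SharpCompress.Archives.Zip;

// IsZipFile now wraps a plain stream in a SharpCompressStream sized by bufferSize.
using Stream stream = File.OpenRead("archive.zip");
bool looksLikeZip = ZipArchive.IsZipFile(stream, password: null, bufferSize: 0x10000);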

View File

@@ -18,6 +18,4 @@ public class ZipArchiveEntry : ZipEntry, IArchiveEntry
public bool IsComplete => true;
#endregion
public string? Comment => ((SeekableZipFilePart)Parts.Single()).Comment;
}

View File

@@ -59,7 +59,7 @@ internal class ZipWritableArchiveEntry : ZipArchiveEntry, IWritableArchiveEntry
{
//ensure new stream is at the start, this could be reset
stream.Seek(0, SeekOrigin.Begin);
return NonDisposingStream.Create(stream);
return SharpCompressStream.Create(stream, leaveOpen: true);
}
internal override void Close()

View File

@@ -1,7 +1,7 @@
using System;
using System.Runtime.CompilerServices;
[assembly: CLSCompliant(true)]
[assembly: CLSCompliant(false)]
[assembly: InternalsVisibleTo(
"SharpCompress.Test,PublicKey=0024000004800000940000000602000000240000525341310004000001000100158bebf1433f76dffc356733c138babea7a47536c65ed8009b16372c6f4edbb20554db74a62687f56b97c20a6ce8c4b123280279e33c894e7b3aa93ab3c573656fde4db576cfe07dba09619ead26375b25d2c4a8e43f7be257d712b0dd2eb546f67adb09281338618a58ac834fc038dd7e2740a7ab3591826252e4f4516306dc"
)]

View File

@@ -7,54 +7,53 @@ using System.Threading.Tasks;
using SharpCompress.Common.GZip;
using SharpCompress.Common.Tar;
namespace SharpCompress.Common.Arc
namespace SharpCompress.Common.Arc;
public class ArcEntry : Entry
{
public class ArcEntry : Entry
private readonly ArcFilePart? _filePart;
internal ArcEntry(ArcFilePart? filePart)
{
private readonly ArcFilePart? _filePart;
internal ArcEntry(ArcFilePart? filePart)
{
_filePart = filePart;
}
public override long Crc
{
get
{
if (_filePart == null)
{
return 0;
}
return _filePart.Header.Crc16;
}
}
public override string? Key => _filePart?.Header.Name;
public override string? LinkTarget => null;
public override long CompressedSize => _filePart?.Header.CompressedSize ?? 0;
public override CompressionType CompressionType =>
_filePart?.Header.CompressionMethod ?? CompressionType.Unknown;
public override long Size => throw new NotImplementedException();
public override DateTime? LastModifiedTime => null;
public override DateTime? CreatedTime => null;
public override DateTime? LastAccessedTime => null;
public override DateTime? ArchivedTime => null;
public override bool IsEncrypted => false;
public override bool IsDirectory => false;
public override bool IsSplitAfter => false;
internal override IEnumerable<FilePart> Parts => _filePart.Empty();
_filePart = filePart;
}
public override long Crc
{
get
{
if (_filePart == null)
{
return 0;
}
return _filePart.Header.Crc16;
}
}
public override string? Key => _filePart?.Header.Name;
public override string? LinkTarget => null;
public override long CompressedSize => _filePart?.Header.CompressedSize ?? 0;
public override CompressionType CompressionType =>
_filePart?.Header.CompressionMethod ?? CompressionType.Unknown;
public override long Size => throw new NotImplementedException();
public override DateTime? LastModifiedTime => null;
public override DateTime? CreatedTime => null;
public override DateTime? LastAccessedTime => null;
public override DateTime? ArchivedTime => null;
public override bool IsEncrypted => false;
public override bool IsDirectory => false;
public override bool IsSplitAfter => false;
internal override IEnumerable<FilePart> Parts => _filePart.Empty();
}

View File

@@ -3,74 +3,73 @@ using System.IO;
using System.Linq;
using System.Text;
namespace SharpCompress.Common.Arc
namespace SharpCompress.Common.Arc;
public class ArcEntryHeader
{
public class ArcEntryHeader
public ArchiveEncoding ArchiveEncoding { get; }
public CompressionType CompressionMethod { get; private set; }
public string? Name { get; private set; }
public long CompressedSize { get; private set; }
public DateTime DateTime { get; private set; }
public int Crc16 { get; private set; }
public long OriginalSize { get; private set; }
public long DataStartPosition { get; private set; }
public ArcEntryHeader(ArchiveEncoding archiveEncoding)
{
public ArchiveEncoding ArchiveEncoding { get; }
public CompressionType CompressionMethod { get; private set; }
public string? Name { get; private set; }
public long CompressedSize { get; private set; }
public DateTime DateTime { get; private set; }
public int Crc16 { get; private set; }
public long OriginalSize { get; private set; }
public long DataStartPosition { get; private set; }
this.ArchiveEncoding = archiveEncoding;
}
public ArcEntryHeader(ArchiveEncoding archiveEncoding)
public ArcEntryHeader? ReadHeader(Stream stream)
{
byte[] headerBytes = new byte[29];
if (stream.Read(headerBytes, 0, headerBytes.Length) != headerBytes.Length)
{
this.ArchiveEncoding = archiveEncoding;
return null;
}
DataStartPosition = stream.Position;
return LoadFrom(headerBytes);
}
public ArcEntryHeader? ReadHeader(Stream stream)
public ArcEntryHeader LoadFrom(byte[] headerBytes)
{
CompressionMethod = GetCompressionType(headerBytes[1]);
// Read name
int nameEnd = Array.IndexOf(headerBytes, (byte)0, 1); // Find null terminator
Name = Encoding.UTF8.GetString(headerBytes, 2, nameEnd > 0 ? nameEnd - 2 : 12);
int offset = 15;
CompressedSize = BitConverter.ToUInt32(headerBytes, offset);
offset += 4;
uint rawDateTime = BitConverter.ToUInt32(headerBytes, offset);
DateTime = ConvertToDateTime(rawDateTime);
offset += 4;
Crc16 = BitConverter.ToUInt16(headerBytes, offset);
offset += 2;
OriginalSize = BitConverter.ToUInt32(headerBytes, offset);
return this;
}
private CompressionType GetCompressionType(byte value)
{
return value switch
{
byte[] headerBytes = new byte[29];
if (stream.Read(headerBytes, 0, headerBytes.Length) != headerBytes.Length)
{
return null;
}
DataStartPosition = stream.Position;
return LoadFrom(headerBytes);
}
1 or 2 => CompressionType.None,
3 => CompressionType.RLE90,
4 => CompressionType.Squeezed,
5 or 6 or 7 or 8 => CompressionType.Crunched,
9 => CompressionType.Squashed,
10 => CompressionType.Crushed,
11 => CompressionType.Distilled,
_ => CompressionType.Unknown,
};
}
public ArcEntryHeader LoadFrom(byte[] headerBytes)
{
CompressionMethod = GetCompressionType(headerBytes[1]);
// Read name
int nameEnd = Array.IndexOf(headerBytes, (byte)0, 1); // Find null terminator
Name = Encoding.UTF8.GetString(headerBytes, 2, nameEnd > 0 ? nameEnd - 2 : 12);
int offset = 15;
CompressedSize = BitConverter.ToUInt32(headerBytes, offset);
offset += 4;
uint rawDateTime = BitConverter.ToUInt32(headerBytes, offset);
DateTime = ConvertToDateTime(rawDateTime);
offset += 4;
Crc16 = BitConverter.ToUInt16(headerBytes, offset);
offset += 2;
OriginalSize = BitConverter.ToUInt32(headerBytes, offset);
return this;
}
private CompressionType GetCompressionType(byte value)
{
return value switch
{
1 or 2 => CompressionType.None,
3 => CompressionType.RLE90,
4 => CompressionType.Squeezed,
5 or 6 or 7 or 8 => CompressionType.Crunched,
9 => CompressionType.Squashed,
10 => CompressionType.Crushed,
11 => CompressionType.Distilled,
_ => CompressionType.Unknown,
};
}
public static DateTime ConvertToDateTime(long rawDateTime)
{
// Convert Unix timestamp to DateTime (UTC)
return DateTimeOffset.FromUnixTimeSeconds(rawDateTime).UtcDateTime;
}
public static DateTime ConvertToDateTime(long rawDateTime)
{
// Convert Unix timestamp to DateTime (UTC)
return DateTimeOffset.FromUnixTimeSeconds(rawDateTime).UtcDateTime;
}
}

View File

@@ -13,63 +13,55 @@ using SharpCompress.Compressors.RLE90;
using SharpCompress.Compressors.Squeezed;
using SharpCompress.IO;
namespace SharpCompress.Common.Arc
namespace SharpCompress.Common.Arc;
public class ArcFilePart : FilePart
{
public class ArcFilePart : FilePart
private readonly Stream? _stream;
internal ArcFilePart(ArcEntryHeader localArcHeader, Stream? seekableStream)
: base(localArcHeader.ArchiveEncoding)
{
private readonly Stream? _stream;
internal ArcFilePart(ArcEntryHeader localArcHeader, Stream? seekableStream)
: base(localArcHeader.ArchiveEncoding)
{
_stream = seekableStream;
Header = localArcHeader;
}
internal ArcEntryHeader Header { get; set; }
internal override string? FilePartName => Header.Name;
internal override Stream GetCompressedStream()
{
if (_stream != null)
{
Stream compressedStream;
switch (Header.CompressionMethod)
{
case CompressionType.None:
compressedStream = new ReadOnlySubStream(
_stream,
Header.DataStartPosition,
Header.CompressedSize
);
break;
case CompressionType.RLE90:
compressedStream = new RunLength90Stream(
_stream,
(int)Header.CompressedSize
);
break;
case CompressionType.Squeezed:
compressedStream = new SqueezeStream(_stream, (int)Header.CompressedSize);
break;
case CompressionType.Crunched:
compressedStream = new ArcLzwStream(
_stream,
(int)Header.CompressedSize,
true
);
break;
default:
throw new NotSupportedException(
"CompressionMethod: " + Header.CompressionMethod
);
}
return compressedStream;
}
return _stream.NotNull();
}
internal override Stream? GetRawStream() => _stream;
_stream = seekableStream;
Header = localArcHeader;
}
internal ArcEntryHeader Header { get; set; }
internal override string? FilePartName => Header.Name;
internal override Stream GetCompressedStream()
{
if (_stream != null)
{
Stream compressedStream;
switch (Header.CompressionMethod)
{
case CompressionType.None:
compressedStream = new ReadOnlySubStream(
_stream,
Header.DataStartPosition,
Header.CompressedSize
);
break;
case CompressionType.RLE90:
compressedStream = new RunLength90Stream(_stream, (int)Header.CompressedSize);
break;
case CompressionType.Squeezed:
compressedStream = new SqueezeStream(_stream, (int)Header.CompressedSize);
break;
case CompressionType.Crunched:
compressedStream = new ArcLzwStream(_stream, (int)Header.CompressedSize, true);
break;
default:
throw new NotSupportedException(
"CompressionMethod: " + Header.CompressionMethod
);
}
return compressedStream;
}
return _stream.NotNull();
}
internal override Stream? GetRawStream() => _stream;
}

View File

@@ -6,11 +6,10 @@ using System.Text;
using System.Threading.Tasks;
using SharpCompress.Readers;
namespace SharpCompress.Common.Arc
namespace SharpCompress.Common.Arc;
public class ArcVolume : Volume
{
public class ArcVolume : Volume
{
public ArcVolume(Stream stream, ReaderOptions readerOptions, int index = 0)
: base(stream, readerOptions, index) { }
}
public ArcVolume(Stream stream, ReaderOptions readerOptions, int index = 0)
: base(stream, readerOptions, index) { }
}

View File

@@ -28,4 +28,5 @@ public enum CompressionType
Squashed,
Crushed,
Distilled,
ZStandard,
}

View File

@@ -1,11 +1,33 @@
using System;
using System.IO;
using System.IO.Compression;
using SharpCompress.IO;
using SharpCompress.Readers;
namespace SharpCompress.Common;
public class EntryStream : Stream
public class EntryStream : Stream, IStreamStack
{
#if DEBUG_STREAMS
long IStreamStack.InstanceId { get; set; }
#endif
int IStreamStack.DefaultBufferSize { get; set; }
Stream IStreamStack.BaseStream() => _stream;
int IStreamStack.BufferSize
{
get => 0;
set { }
}
int IStreamStack.BufferPosition
{
get => 0;
set { }
}
void IStreamStack.SetPosition(long position) { }
private readonly IReader _reader;
private readonly Stream _stream;
private bool _completed;
@@ -15,6 +37,9 @@ public class EntryStream : Stream
{
_reader = reader;
_stream = stream;
#if DEBUG_STREAMS
this.DebugConstruct(typeof(EntryStream));
#endif
}
/// <summary>
@@ -32,11 +57,28 @@ public class EntryStream : Stream
{
SkipEntry();
}
//Need a safe standard approach to this - it's okay for compression to overreads. Handling needs to be standardised
if (_stream is IStreamStack ss)
{
if (ss.BaseStream() is SharpCompress.Compressors.Deflate.DeflateStream deflateStream)
{
deflateStream.Flush(); //Deflate over reads. Knock it back
}
else if (ss.BaseStream() is SharpCompress.Compressors.LZMA.LzmaStream lzmaStream)
{
lzmaStream.Flush(); //Lzma over reads. Knock it back
}
}
if (_isDisposed)
{
return;
}
_isDisposed = true;
#if DEBUG_STREAMS
this.DebugDispose(typeof(EntryStream));
#endif
base.Dispose(disposing);
_stream.Dispose();
}
@@ -53,7 +95,7 @@ public class EntryStream : Stream
public override long Position
{
get => throw new NotSupportedException();
get => _stream.Position; //throw new NotSupportedException();
set => throw new NotSupportedException();
}
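The IStreamStack members added here (and to the other stream wrappers in this diff) come from the stream-stack work in #930. A rough debugging sketch of walking such a chain via BaseStream() follows; it assumes IStreamStack is visible to the calling code, which may only be true inside the library itself.

using System;
using System.IO;
using SharpCompress.IO;

// Walks a chain of wrapped streams via IStreamStack.BaseStream() for debugging.
static void DumpStreamStack(Stream top)
{
    Stream? current = top;
    while (current is not null)
    {
        Console.WriteLine(current.GetType().Name);
        current = current is IStreamStack stack ? stack.BaseStream() : null;
    }
}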

View File

@@ -4,13 +4,38 @@ using SharpCompress.IO;
namespace SharpCompress.Common.Tar;
internal class TarReadOnlySubStream : NonDisposingStream
internal class TarReadOnlySubStream : SharpCompressStream, IStreamStack
{
#if DEBUG_STREAMS
long IStreamStack.InstanceId { get; set; }
#endif
Stream IStreamStack.BaseStream() => base.Stream;
int IStreamStack.BufferSize
{
get => 0;
set { }
}
int IStreamStack.BufferPosition
{
get => 0;
set { }
}
void IStreamStack.SetPosition(long position) { }
private bool _isDisposed;
private long _amountRead;
public TarReadOnlySubStream(Stream stream, long bytesToRead)
: base(stream, throwOnDispose: false) => BytesLeftToRead = bytesToRead;
: base(stream, leaveOpen: true, throwOnDispose: false)
{
BytesLeftToRead = bytesToRead;
#if DEBUG_STREAMS
this.DebugConstruct(typeof(TarReadOnlySubStream));
#endif
}
protected override void Dispose(bool disposing)
{
@@ -20,7 +45,9 @@ internal class TarReadOnlySubStream : NonDisposingStream
}
_isDisposed = true;
#if DEBUG_STREAMS
this.DebugDispose(typeof(TarReadOnlySubStream));
#endif
if (disposing)
{
// Ensure we read all remaining blocks for this entry.

View File

@@ -7,16 +7,22 @@ namespace SharpCompress.Common;
public abstract class Volume : IVolume
{
private readonly Stream _baseStream;
private readonly Stream _actualStream;
internal Volume(Stream stream, ReaderOptions? readerOptions, int index = 0)
{
Index = index;
ReaderOptions = readerOptions ?? new ReaderOptions();
_baseStream = stream;
if (ReaderOptions.LeaveStreamOpen)
{
stream = NonDisposingStream.Create(stream);
stream = SharpCompressStream.Create(stream, leaveOpen: true);
}
if (stream is IStreamStack ss)
ss.SetBuffer(ReaderOptions.BufferSize, true);
_actualStream = stream;
}
@@ -32,7 +38,7 @@ public abstract class Volume : IVolume
public virtual int Index { get; internal set; }
public string? FileName => (_actualStream as FileStream)?.Name;
public string? FileName => (_baseStream as FileStream)?.Name;
/// <summary>
/// RarArchive is part of a multi-part archive.

View File

@@ -123,11 +123,7 @@ internal class DirectoryEntryHeader : ZipFileEntry
public long RelativeOffsetOfEntryHeader { get; set; }
public uint ExternalFileAttributes { get; set; }
public ushort InternalFileAttributes { get; set; }
public ushort DiskNumberStart { get; set; }
public string? Comment { get; private set; }
}

View File

@@ -120,4 +120,8 @@ internal abstract class ZipFileEntry : ZipHeader
internal ZipFilePart? Part { get; set; }
internal bool IsZip64 => CompressedSize >= uint.MaxValue;
internal uint ExternalFileAttributes { get; set; }
internal string? Comment { get; set; }
}

View File

@@ -1,5 +1,6 @@
using System;
using System.IO;
using SharpCompress.IO;
namespace SharpCompress.Common.Zip;
@@ -9,8 +10,28 @@ internal enum CryptoMode
Decrypt,
}
internal class PkwareTraditionalCryptoStream : Stream
internal class PkwareTraditionalCryptoStream : Stream, IStreamStack
{
#if DEBUG_STREAMS
long IStreamStack.InstanceId { get; set; }
#endif
int IStreamStack.DefaultBufferSize { get; set; }
Stream IStreamStack.BaseStream() => _stream;
int IStreamStack.BufferSize
{
get => 0;
set { return; }
}
int IStreamStack.BufferPosition
{
get => 0;
set { return; }
}
void IStreamStack.SetPosition(long position) { }
private readonly PkwareTraditionalEncryptionData _encryptor;
private readonly CryptoMode _mode;
private readonly Stream _stream;
@@ -25,6 +46,10 @@ internal class PkwareTraditionalCryptoStream : Stream
_encryptor = encryptor;
_stream = stream;
_mode = mode;
#if DEBUG_STREAMS
this.DebugConstruct(typeof(PkwareTraditionalCryptoStream));
#endif
}
public override bool CanRead => (_mode == CryptoMode.Decrypt);
@@ -100,6 +125,9 @@ internal class PkwareTraditionalCryptoStream : Stream
return;
}
_isDisposed = true;
#if DEBUG_STREAMS
this.DebugDispose(typeof(PkwareTraditionalCryptoStream));
#endif
base.Dispose(disposing);
_stream.Dispose();
}

View File

@@ -25,14 +25,8 @@ internal class SeekableZipFilePart : ZipFilePart
return base.GetCompressedStream();
}
internal string? Comment => ((DirectoryEntryHeader)Header).Comment;
private void LoadLocalHeader()
{
var hasData = Header.HasData;
Header = _headerFactory.GetLocalHeader(BaseStream, ((DirectoryEntryHeader)Header));
Header.HasData = hasData;
}
private void LoadLocalHeader() =>
Header = _headerFactory.GetLocalHeader(BaseStream, (DirectoryEntryHeader)Header);
protected override Stream CreateBaseStream()
{

View File

@@ -149,6 +149,12 @@ internal sealed class SeekableZipHeaderFactory : ZipHeaderFactory
{
throw new InvalidOperationException();
}
// populate fields only known from the DirectoryEntryHeader
localEntryHeader.HasData = directoryEntryHeader.HasData;
localEntryHeader.ExternalFileAttributes = directoryEntryHeader.ExternalFileAttributes;
localEntryHeader.Comment = directoryEntryHeader.Comment;
if (FlagUtility.HasFlag(localEntryHeader.Flags, HeaderFlags.UsePostDataDescriptor))
{
localEntryHeader.Crc = directoryEntryHeader.Crc;

View File

@@ -26,12 +26,12 @@ internal sealed class StreamingZipFilePart : ZipFilePart
);
if (LeaveStreamOpen)
{
return NonDisposingStream.Create(_decompressionStream);
return SharpCompressStream.Create(_decompressionStream, leaveOpen: true);
}
return _decompressionStream;
}
internal BinaryReader FixStreamedFileLocation(ref RewindableStream rewindableStream)
internal BinaryReader FixStreamedFileLocation(ref SharpCompressStream rewindableStream)
{
if (Header.IsDirectory)
{
@@ -49,7 +49,7 @@ internal sealed class StreamingZipFilePart : ZipFilePart
if (_decompressionStream is DeflateStream deflateStream)
{
rewindableStream.Rewind(deflateStream.InputBuffer);
((IStreamStack)rewindableStream).StackSeek(0);
}
Skipped = true;

View File

@@ -1,3 +1,4 @@
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
@@ -19,16 +20,24 @@ internal class StreamingZipHeaderFactory : ZipHeaderFactory
internal IEnumerable<ZipHeader> ReadStreamHeader(Stream stream)
{
RewindableStream rewindableStream;
if (stream is not SharpCompressStream) //ensure the stream is already a SharpCompressStream. So the buffer/size will already be set
{
//the original code wrapped this with RewindableStream. Wrap with SharpCompressStream as we can get the buffer size
if (stream is SourceStream src)
{
stream = new SharpCompressStream(
stream,
src.ReaderOptions.LeaveStreamOpen,
bufferSize: src.ReaderOptions.BufferSize
);
}
else
{
throw new ArgumentException("Stream must be a SharpCompressStream", nameof(stream));
}
}
SharpCompressStream rewindableStream = (SharpCompressStream)stream;
if (stream is RewindableStream rs)
{
rewindableStream = rs;
}
else
{
rewindableStream = new RewindableStream(stream);
}
while (true)
{
ZipHeader? header;
@@ -43,9 +52,8 @@ internal class StreamingZipHeaderFactory : ZipHeaderFactory
{
continue;
}
reader = ((StreamingZipFilePart)_lastEntryHeader.Part).FixStreamedFileLocation(
ref rewindableStream
);
// removed requirement for FixStreamedFileLocation()
var pos = rewindableStream.CanSeek ? (long?)rewindableStream.Position : null;
@@ -56,26 +64,32 @@ internal class StreamingZipHeaderFactory : ZipHeaderFactory
}
_lastEntryHeader.Crc = crc;
// The DataDescriptor can be either 64bit or 32bit
var compressed_size = reader.ReadUInt32();
var uncompressed_size = reader.ReadUInt32();
// Check if we have header or 64bit DataDescriptor
//attempt 32bit read
ulong compSize = reader.ReadUInt32();
ulong uncompSize = reader.ReadUInt32();
headerBytes = reader.ReadUInt32();
var test_header = !(headerBytes == 0x04034b50 || headerBytes == 0x02014b50);
var test_64bit = ((long)uncompressed_size << 32) | compressed_size;
if (test_64bit == _lastEntryHeader.CompressedSize && test_header)
//check for zip64 sentinel or unexpected header
bool isSentinel = compSize == 0xFFFFFFFF || uncompSize == 0xFFFFFFFF;
bool isHeader = headerBytes == 0x04034b50 || headerBytes == 0x02014b50;
if (!isHeader && !isSentinel)
{
_lastEntryHeader.UncompressedSize =
((long)reader.ReadUInt32() << 32) | headerBytes;
//reshuffle into 64-bit values
compSize = (uncompSize << 32) | compSize;
uncompSize = ((ulong)headerBytes << 32) | reader.ReadUInt32();
headerBytes = reader.ReadUInt32();
}
else
else if (isSentinel)
{
_lastEntryHeader.UncompressedSize = uncompressed_size;
//standards-compliant zip64 descriptor
compSize = reader.ReadUInt64();
uncompSize = reader.ReadUInt64();
}
_lastEntryHeader.CompressedSize = (long)compSize;
_lastEntryHeader.UncompressedSize = (long)uncompSize;
if (pos.HasValue)
{
_lastEntryHeader.DataStartPosition = pos - _lastEntryHeader.CompressedSize;
@@ -86,9 +100,9 @@ internal class StreamingZipHeaderFactory : ZipHeaderFactory
if (_lastEntryHeader.Part is null)
continue;
reader = ((StreamingZipFilePart)_lastEntryHeader.Part).FixStreamedFileLocation(
ref rewindableStream
);
//reader = ((StreamingZipFilePart)_lastEntryHeader.Part).FixStreamedFileLocation(
// ref rewindableStream
//);
var pos = rewindableStream.CanSeek ? (long?)rewindableStream.Position : null;
@@ -173,16 +187,11 @@ internal class StreamingZipHeaderFactory : ZipHeaderFactory
} // Check if zip is streaming ( Length is 0 and is declared in PostDataDescriptor )
else if (local_header.Flags.HasFlag(HeaderFlags.UsePostDataDescriptor))
{
var isRecording = rewindableStream.IsRecording;
if (!isRecording)
{
rewindableStream.StartRecording();
}
var nextHeaderBytes = reader.ReadUInt32();
((IStreamStack)rewindableStream).Rewind(sizeof(uint));
// Check if next data is PostDataDescriptor, streamed file with 0 length
header.HasData = !IsHeader(nextHeaderBytes);
rewindableStream.Rewind(!isRecording);
}
else // We are not streaming and compressed size is 0, we have no data
{

View File

@@ -2,11 +2,32 @@ using System;
using System.Buffers.Binary;
using System.IO;
using System.Security.Cryptography;
using SharpCompress.IO;
namespace SharpCompress.Common.Zip;
internal class WinzipAesCryptoStream : Stream
internal class WinzipAesCryptoStream : Stream, IStreamStack
{
#if DEBUG_STREAMS
long IStreamStack.InstanceId { get; set; }
#endif
int IStreamStack.DefaultBufferSize { get; set; }
Stream IStreamStack.BaseStream() => _stream;
int IStreamStack.BufferSize
{
get => 0;
set { }
}
int IStreamStack.BufferPosition
{
get => 0;
set { }
}
void IStreamStack.SetPosition(long position) { }
private const int BLOCK_SIZE_IN_BYTES = 16;
private readonly SymmetricAlgorithm _cipher;
private readonly byte[] _counter = new byte[BLOCK_SIZE_IN_BYTES];
@@ -27,6 +48,10 @@ internal class WinzipAesCryptoStream : Stream
_stream = stream;
_totalBytesLeftToRead = length;
#if DEBUG_STREAMS
this.DebugConstruct(typeof(WinzipAesCryptoStream));
#endif
_cipher = CreateCipher(winzipAesEncryptionData);
var iv = new byte[BLOCK_SIZE_IN_BYTES];
@@ -64,6 +89,9 @@ internal class WinzipAesCryptoStream : Stream
return;
}
_isDisposed = true;
#if DEBUG_STREAMS
this.DebugDispose(typeof(WinzipAesCryptoStream));
#endif
if (disposing)
{
//read out last 10 auth bytes

View File

@@ -13,7 +13,7 @@ internal enum ZipCompressionMethod
Deflate64 = 9,
BZip2 = 12,
LZMA = 14,
ZStd = 93,
ZStandard = 93,
Xz = 95,
PPMd = 98,
WinzipAes = 0x63, //http://www.winzip.com/aes_info.htm

View File

@@ -46,6 +46,7 @@ public class ZipEntry : Entry
ZipCompressionMethod.Reduce3 => CompressionType.Reduce3,
ZipCompressionMethod.Reduce4 => CompressionType.Reduce4,
ZipCompressionMethod.Explode => CompressionType.Explode,
ZipCompressionMethod.ZStandard => CompressionType.ZStandard,
_ => CompressionType.Unknown,
};
@@ -83,4 +84,8 @@ public class ZipEntry : Entry
public override bool IsSplitAfter => false;
internal override IEnumerable<FilePart> Parts => _filePart.Empty();
public override int? Attrib => (int?)_filePart?.Header.ExternalFileAttributes;
public string? Comment => _filePart?.Header.Comment;
}

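Taken together with the ZStandard enum value above, entries compressed with method 93 now surface through the public API alongside the new Comment property; a small usage sketch (file name illustrative):

using System;
using SharpCompress.Archives.Zip;
using SharpCompress.Common;

using var archive = ZipArchive.Open("archive.zip");
foreach (var entry in archive.Entries)
{
    // Method 93 in the zip headers maps to CompressionType.ZStandard,
    // and Comment exposes the central-directory comment for the entry.
    if (entry.CompressionType == CompressionType.ZStandard)
    {
        Console.WriteLine($"{entry.Key}: zstd entry, comment: {entry.Comment}");
    }
}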
View File

@@ -13,8 +13,8 @@ using SharpCompress.Compressors.PPMd;
using SharpCompress.Compressors.Reduce;
using SharpCompress.Compressors.Shrink;
using SharpCompress.Compressors.Xz;
using SharpCompress.Compressors.ZStandard;
using SharpCompress.IO;
using ZstdSharp;
namespace SharpCompress.Common.Zip;
@@ -45,7 +45,7 @@ internal abstract class ZipFilePart : FilePart
);
if (LeaveStreamOpen)
{
return NonDisposingStream.Create(decompressionStream);
return SharpCompressStream.Create(decompressionStream, leaveOpen: true);
}
return decompressionStream;
}
@@ -70,17 +70,12 @@ internal abstract class ZipFilePart : FilePart
{
case ZipCompressionMethod.None:
{
if (stream is ReadOnlySubStream)
if (Header.CompressedSize is 0)
{
return stream;
return new DataDescriptorStream(stream);
}
if (Header.CompressedSize > 0)
{
return new ReadOnlySubStream(stream, Header.CompressedSize);
}
return new DataDescriptorStream(stream);
return stream;
}
case ZipCompressionMethod.Shrink:
{
@@ -152,7 +147,7 @@ internal abstract class ZipFilePart : FilePart
{
return new XZStream(stream);
}
case ZipCompressionMethod.ZStd:
case ZipCompressionMethod.ZStandard:
{
return new DecompressionStream(stream);
}
@@ -218,7 +213,7 @@ internal abstract class ZipFilePart : FilePart
) || Header.IsZip64
)
{
plainStream = NonDisposingStream.Create(plainStream); //make sure AES doesn't close
plainStream = SharpCompressStream.Create(plainStream, leaveOpen: true); //make sure AES doesn't close
}
else
{

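The Stored (None) branch now boils down to two cases, restated below with stand-in delegates for the library's internal wrappers (DataDescriptorStream and SharpCompressStream.Create(..., leaveOpen: true)); this is an illustration of the control flow, not the actual implementation.

using System;
using System.IO;

// Illustrative only: the delegates stand in for the library's internal stream types.
static Stream OpenStoredEntry(
    Stream stream,
    long compressedSize,
    bool leaveStreamOpen,
    Func<Stream, Stream> wrapDescriptorTerminated,
    Func<Stream, Stream> wrapLeaveOpen)
{
    var result =
        compressedSize == 0
            ? wrapDescriptorTerminated(stream) // streamed entry: length comes from the post-data descriptor
            : stream;                          // length known up front: the incoming stream is already bounded

    // When the archive stream must stay open, wrap so disposing the entry
    // stream does not dispose the underlying stream.
    return leaveStreamOpen ? wrapLeaveOpen(result) : result;
}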
View File

@@ -1,4 +1,4 @@
//
//
// ADC.cs
//
// Author:
@@ -28,14 +28,35 @@
using System;
using System.IO;
using SharpCompress.IO;
namespace SharpCompress.Compressors.ADC;
/// <summary>
/// Provides a forward readable only stream that decompresses ADC data
/// </summary>
public sealed class ADCStream : Stream
public sealed class ADCStream : Stream, IStreamStack
{
#if DEBUG_STREAMS
long IStreamStack.InstanceId { get; set; }
#endif
int IStreamStack.DefaultBufferSize { get; set; }
Stream IStreamStack.BaseStream() => _stream;
int IStreamStack.BufferSize
{
get => 0;
set { }
}
int IStreamStack.BufferPosition
{
get => 0;
set { }
}
void IStreamStack.SetPosition(long position) { }
/// <summary>
/// This stream holds the compressed data
/// </summary>
@@ -74,6 +95,9 @@ public sealed class ADCStream : Stream
}
_stream = stream;
#if DEBUG_STREAMS
this.DebugConstruct(typeof(ADCStream));
#endif
}
public override bool CanRead => _stream.CanRead;
@@ -99,6 +123,9 @@ public sealed class ADCStream : Stream
return;
}
_isDisposed = true;
#if DEBUG_STREAMS
this.DebugDispose(typeof(ADCStream));
#endif
base.Dispose(disposing);
}

View File

@@ -4,9 +4,30 @@ using System.IO;
using System.Linq;
using SharpCompress.Compressors.RLE90;
using SharpCompress.Compressors.Squeezed;
using SharpCompress.IO;
public partial class ArcLzwStream : Stream
public partial class ArcLzwStream : Stream, IStreamStack
{
#if DEBUG_STREAMS
long IStreamStack.InstanceId { get; set; }
#endif
int IStreamStack.DefaultBufferSize { get; set; }
Stream IStreamStack.BaseStream() => _stream;
int IStreamStack.BufferSize
{
get => 0;
set { }
}
int IStreamStack.BufferPosition
{
get => 0;
set { }
}
void IStreamStack.SetPosition(long position) { }
private Stream _stream;
private bool _processed;
private bool _useCrunched;
@@ -33,6 +54,9 @@ public partial class ArcLzwStream : Stream
public ArcLzwStream(Stream stream, int compressedSize, bool useCrunched = true)
{
_stream = stream;
#if DEBUG_STREAMS
this.DebugConstruct(typeof(ArcLzwStream));
#endif
_useCrunched = useCrunched;
_compressedSize = compressedSize;
@@ -196,4 +220,12 @@ public partial class ArcLzwStream : Stream
public override void Write(byte[] buffer, int offset, int count) =>
throw new NotImplementedException();
protected override void Dispose(bool disposing)
{
#if DEBUG_STREAMS
this.DebugDispose(typeof(ArcLzwStream));
#endif
base.Dispose(disposing);
}
}

View File

@@ -1,10 +1,31 @@
using System;
using System;
using System.IO;
using SharpCompress.IO;
namespace SharpCompress.Compressors.BZip2;
public sealed class BZip2Stream : Stream
public sealed class BZip2Stream : Stream, IStreamStack
{
#if DEBUG_STREAMS
long IStreamStack.InstanceId { get; set; }
#endif
int IStreamStack.DefaultBufferSize { get; set; }
Stream IStreamStack.BaseStream() => stream;
int IStreamStack.BufferSize
{
get => 0;
set { }
}
int IStreamStack.BufferPosition
{
get => 0;
set { }
}
void IStreamStack.SetPosition(long position) { }
private readonly Stream stream;
private bool isDisposed;
@@ -16,6 +37,9 @@ public sealed class BZip2Stream : Stream
/// <param name="decompressConcatenated">Decompress Concatenated</param>
public BZip2Stream(Stream stream, CompressionMode compressionMode, bool decompressConcatenated)
{
#if DEBUG_STREAMS
this.DebugConstruct(typeof(BZip2Stream));
#endif
Mode = compressionMode;
if (Mode == CompressionMode.Compress)
{
@@ -36,6 +60,9 @@ public sealed class BZip2Stream : Stream
return;
}
isDisposed = true;
#if DEBUG_STREAMS
this.DebugDispose(typeof(BZip2Stream));
#endif
if (disposing)
{
stream.Dispose();

View File

@@ -2,6 +2,7 @@
using System;
using System.IO;
using SharpCompress.IO;
/*
* Copyright 2001,2004-2005 The Apache Software Foundation
@@ -37,8 +38,28 @@ namespace SharpCompress.Compressors.BZip2;
* start of the BZIP2 stream to make it compatible with other PGP programs.
*/
internal class CBZip2InputStream : Stream
internal class CBZip2InputStream : Stream, IStreamStack
{
#if DEBUG_STREAMS
long IStreamStack.InstanceId { get; set; }
#endif
int IStreamStack.DefaultBufferSize { get; set; }
Stream IStreamStack.BaseStream() => bsStream;
int IStreamStack.BufferSize
{
get => 0;
set { }
}
int IStreamStack.BufferPosition
{
get => 0;
set { }
}
void IStreamStack.SetPosition(long position) { }
private static void Cadvise()
{
//System.out.Println("CRC Error");
@@ -164,6 +185,10 @@ internal class CBZip2InputStream : Stream
ll8 = null;
tt = null;
BsSetStream(zStream);
#if DEBUG_STREAMS
this.DebugConstruct(typeof(CBZip2InputStream));
#endif
Initialize(true);
InitBlock();
SetupBlock();
@@ -176,6 +201,9 @@ internal class CBZip2InputStream : Stream
return;
}
isDisposed = true;
#if DEBUG_STREAMS
this.DebugDispose(typeof(CBZip2InputStream));
#endif
base.Dispose(disposing);
bsStream?.Dispose();
}

View File

@@ -1,5 +1,6 @@
using System;
using System.IO;
using SharpCompress.IO;
/*
* Copyright 2001,2004-2005 The Apache Software Foundation
@@ -38,8 +39,28 @@ namespace SharpCompress.Compressors.BZip2;
* start of the BZIP2 stream to make it compatible with other PGP programs.
*/
internal sealed class CBZip2OutputStream : Stream
internal sealed class CBZip2OutputStream : Stream, IStreamStack
{
#if DEBUG_STREAMS
long IStreamStack.InstanceId { get; set; }
#endif
int IStreamStack.DefaultBufferSize { get; set; }
Stream IStreamStack.BaseStream() => bsStream;
int IStreamStack.BufferSize
{
get => 0;
set { }
}
int IStreamStack.BufferPosition
{
get => 0;
set { }
}
void IStreamStack.SetPosition(long position) { }
private const int SETMASK = (1 << 21);
private const int CLEARMASK = (~SETMASK);
private const int GREATER_ICOST = 15;
@@ -334,6 +355,10 @@ internal sealed class CBZip2OutputStream : Stream
BsSetStream(inStream);
#if DEBUG_STREAMS
this.DebugConstruct(typeof(CBZip2OutputStream));
#endif
workFactor = 50;
if (inBlockSize > 9)
{
@@ -450,6 +475,9 @@ internal sealed class CBZip2OutputStream : Stream
Finish();
disposed = true;
#if DEBUG_STREAMS
this.DebugDispose(typeof(CBZip2OutputStream));
#endif
Dispose();
bsStream?.Dispose();
bsStream = null;

View File

@@ -27,11 +27,33 @@
using System;
using System.IO;
using System.Text;
using System.Threading;
using SharpCompress.IO;
namespace SharpCompress.Compressors.Deflate;
public class DeflateStream : Stream
public class DeflateStream : Stream, IStreamStack
{
#if DEBUG_STREAMS
long IStreamStack.InstanceId { get; set; }
#endif
int IStreamStack.DefaultBufferSize { get; set; }
Stream IStreamStack.BaseStream() => _baseStream;
int IStreamStack.BufferSize
{
get => 0;
set { }
}
int IStreamStack.BufferPosition
{
get => 0;
set { }
}
void IStreamStack.SetPosition(long position) { }
private readonly ZlibBaseStream _baseStream;
private bool _disposed;
@@ -40,7 +62,8 @@ public class DeflateStream : Stream
CompressionMode mode,
CompressionLevel level = CompressionLevel.Default,
Encoding? forceEncoding = null
) =>
)
{
_baseStream = new ZlibBaseStream(
stream,
mode,
@@ -49,6 +72,11 @@ public class DeflateStream : Stream
forceEncoding
);
#if DEBUG_STREAMS
this.DebugConstruct(typeof(DeflateStream));
#endif
}
#region Zlib properties
/// <summary>
@@ -233,6 +261,9 @@ public class DeflateStream : Stream
{
if (!_disposed)
{
#if DEBUG_STREAMS
this.DebugDispose(typeof(DeflateStream));
#endif
if (disposing)
{
_baseStream?.Dispose();
@@ -290,6 +321,7 @@ public class DeflateStream : Stream
{
throw new ObjectDisposedException("DeflateStream");
}
return _baseStream.Read(buffer, offset, count);
}

View File

@@ -30,11 +30,32 @@ using System;
using System.Buffers.Binary;
using System.IO;
using System.Text;
using SharpCompress.IO;
namespace SharpCompress.Compressors.Deflate;
public class GZipStream : Stream
public class GZipStream : Stream, IStreamStack
{
#if DEBUG_STREAMS
long IStreamStack.InstanceId { get; set; }
#endif
int IStreamStack.DefaultBufferSize { get; set; }
Stream IStreamStack.BaseStream() => BaseStream;
int IStreamStack.BufferSize
{
get => 0;
set { }
}
int IStreamStack.BufferPosition
{
get => 0;
set { }
}
void IStreamStack.SetPosition(long position) { }
internal static readonly DateTime UNIX_EPOCH = new(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);
private string? _comment;
@@ -62,6 +83,9 @@ public class GZipStream : Stream
)
{
BaseStream = new ZlibBaseStream(stream, mode, level, ZlibStreamFlavor.GZIP, encoding);
#if DEBUG_STREAMS
this.DebugConstruct(typeof(GZipStream));
#endif
_encoding = encoding;
}
@@ -210,6 +234,9 @@ public class GZipStream : Stream
Crc32 = BaseStream.Crc32;
}
_disposed = true;
#if DEBUG_STREAMS
this.DebugDispose(typeof(GZipStream));
#endif
}
}
finally

View File

@@ -32,6 +32,7 @@ using System.Collections.Generic;
using System.IO;
using System.Text;
using SharpCompress.Common.Tar.Headers;
using SharpCompress.IO;
namespace SharpCompress.Compressors.Deflate;
@@ -42,8 +43,28 @@ internal enum ZlibStreamFlavor
GZIP = 1952,
}
internal class ZlibBaseStream : Stream
internal class ZlibBaseStream : Stream, IStreamStack
{
#if DEBUG_STREAMS
long IStreamStack.InstanceId { get; set; }
#endif
int IStreamStack.DefaultBufferSize { get; set; }
Stream IStreamStack.BaseStream() => _stream;
int IStreamStack.BufferSize
{
get => 0;
set { }
}
int IStreamStack.BufferPosition
{
get => 0;
set { }
}
void IStreamStack.SetPosition(long position) { }
protected internal ZlibCodec _z; // deferred init... new ZlibCodec();
protected internal StreamMode _streamMode = StreamMode.Undefined;
@@ -87,6 +108,10 @@ internal class ZlibBaseStream : Stream
_encoding = encoding;
#if DEBUG_STREAMS
this.DebugConstruct(typeof(ZlibBaseStream));
#endif
// workitem 7159
if (flavor == ZlibStreamFlavor.GZIP)
{
@@ -334,6 +359,9 @@ internal class ZlibBaseStream : Stream
return;
}
isDisposed = true;
#if DEBUG_STREAMS
this.DebugDispose(typeof(ZlibBaseStream));
#endif
base.Dispose(disposing);
if (disposing)
{
@@ -354,7 +382,13 @@ internal class ZlibBaseStream : Stream
}
}
public override void Flush() => _stream.Flush();
public override void Flush()
{
_stream.Flush();
//rewind the buffer
((IStreamStack)this).Rewind(z.AvailableBytesIn); //unused
z.AvailableBytesIn = 0;
}
public override Int64 Seek(Int64 offset, SeekOrigin origin) =>
throw new NotSupportedException();
@@ -634,6 +668,13 @@ internal class ZlibBaseStream : Stream
crc.SlurpBlock(buffer, offset, rc);
}
if (rc == ZlibConstants.Z_STREAM_END && z.AvailableBytesIn != 0 && !_wantCompress)
{
//rewind the buffer
((IStreamStack)this).Rewind(z.AvailableBytesIn); //unused
z.AvailableBytesIn = 0;
}
return rc;
}

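Both rewind calls added here deal with the inflater having pulled more input than the compressed member actually used: z.AvailableBytesIn counts those unconsumed bytes, and they are pushed back so whatever follows (a gzip trailer, the next zip header) can be read normally. For a plain seekable stream the same idea reduces to the sketch below; the library version instead rewinds its buffered wrapper via the IStreamStack extension.

using System.IO;

// Returns unconsumed inflater input to the underlying stream so the next reader
// starts exactly where the compressed data ended.
static void ReturnUnusedInput(Stream baseStream, ref int availableBytesIn)
{
    if (availableBytesIn > 0 && baseStream.CanSeek)
    {
        baseStream.Seek(-availableBytesIn, SeekOrigin.Current);
        availableBytesIn = 0;
    }
}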
View File

@@ -28,11 +28,32 @@
using System;
using System.IO;
using System.Text;
using SharpCompress.IO;
namespace SharpCompress.Compressors.Deflate;
public class ZlibStream : Stream
public class ZlibStream : Stream, IStreamStack
{
#if DEBUG_STREAMS
long IStreamStack.InstanceId { get; set; }
#endif
int IStreamStack.DefaultBufferSize { get; set; }
Stream IStreamStack.BaseStream() => _baseStream;
int IStreamStack.BufferSize
{
get => 0;
set { }
}
int IStreamStack.BufferPosition
{
get => 0;
set { }
}
void IStreamStack.SetPosition(long position) { }
private readonly ZlibBaseStream _baseStream;
private bool _disposed;
@@ -47,7 +68,13 @@ public class ZlibStream : Stream
CompressionMode mode,
CompressionLevel level,
Encoding encoding
) => _baseStream = new ZlibBaseStream(stream, mode, level, ZlibStreamFlavor.ZLIB, encoding);
)
{
_baseStream = new ZlibBaseStream(stream, mode, level, ZlibStreamFlavor.ZLIB, encoding);
#if DEBUG_STREAMS
this.DebugConstruct(typeof(ZlibStream));
#endif
}
#region Zlib properties
@@ -216,6 +243,9 @@ public class ZlibStream : Stream
_baseStream?.Dispose();
}
_disposed = true;
#if DEBUG_STREAMS
this.DebugDispose(typeof(ZlibStream));
#endif
}
}
finally

View File

@@ -10,11 +10,32 @@ using System.IO;
using System.Runtime.CompilerServices;
using SharpCompress.Common;
using SharpCompress.Common.Zip;
using SharpCompress.IO;
namespace SharpCompress.Compressors.Deflate64;
public sealed class Deflate64Stream : Stream
public sealed class Deflate64Stream : Stream, IStreamStack
{
#if DEBUG_STREAMS
long IStreamStack.InstanceId { get; set; }
#endif
int IStreamStack.DefaultBufferSize { get; set; }
Stream IStreamStack.BaseStream() => _stream;
int IStreamStack.BufferSize
{
get => 0;
set { }
}
int IStreamStack.BufferPosition
{
get => 0;
set { }
}
void IStreamStack.SetPosition(long position) { }
private const int DEFAULT_BUFFER_SIZE = 8192;
private Stream _stream;
@@ -42,6 +63,9 @@ public sealed class Deflate64Stream : Stream
}
InitializeInflater(stream, ZipCompressionMethod.Deflate64);
#if DEBUG_STREAMS
this.DebugConstruct(typeof(Deflate64Stream));
#endif
}
/// <summary>
@@ -252,6 +276,9 @@ public sealed class Deflate64Stream : Stream
// In this case, we still need to clean up internal resources, hence the inner finally blocks.
try
{
#if DEBUG_STREAMS
this.DebugDispose(typeof(Deflate64Stream));
#endif
if (disposing)
{
_stream?.Dispose();

View File

@@ -1,11 +1,32 @@
using System;
using System.IO;
using SharpCompress.Common.Zip.Headers;
using SharpCompress.IO;
namespace SharpCompress.Compressors.Explode;
public class ExplodeStream : Stream
public class ExplodeStream : Stream, IStreamStack
{
#if DEBUG_STREAMS
long IStreamStack.InstanceId { get; set; }
#endif
int IStreamStack.DefaultBufferSize { get; set; }
Stream IStreamStack.BaseStream() => inStream;
int IStreamStack.BufferSize
{
get => 0;
set { }
}
int IStreamStack.BufferPosition
{
get => 0;
set { }
}
void IStreamStack.SetPosition(long position) { }
private const int INVALID_CODE = 99;
private const int WSIZE = 64 * 1024;
@@ -45,6 +66,9 @@ public class ExplodeStream : Stream
)
{
inStream = inStr;
#if DEBUG_STREAMS
this.DebugConstruct(typeof(ExplodeStream));
#endif
this.compressedSize = (int)compressedSize;
unCompressedSize = (long)uncompressedSize;
this.generalPurposeBitFlag = generalPurposeBitFlag;
@@ -54,6 +78,14 @@ public class ExplodeStream : Stream
explode_var_init();
}
protected override void Dispose(bool disposing)
{
#if DEBUG_STREAMS
this.DebugDispose(typeof(ExplodeStream));
#endif
base.Dispose(disposing);
}
public override void Flush()
{
throw new NotImplementedException();

View File

@@ -1,36 +1,35 @@
using System.IO;

namespace SharpCompress.Compressors.Filters
namespace SharpCompress.Compressors.Filters;

internal class DeltaFilter : Filter
{
    private const int DISTANCE_MIN = 1;
    private const int DISTANCE_MAX = 256;
    private const int DISTANCE_MASK = DISTANCE_MAX - 1;

    private int _distance;
    private byte[] _history;
    private int _position;

    public DeltaFilter(bool isEncoder, Stream baseStream, byte[] info)
        : base(isEncoder, baseStream, 1)
    {
        _distance = info[0];
        _history = new byte[DISTANCE_MAX];
        _position = 0;
    }

    protected override int Transform(byte[] buffer, int offset, int count)
    {
        var end = offset + count;
        for (var i = offset; i < end; i++)
        {
            buffer[i] += _history[(_distance + _position--) & DISTANCE_MASK];
            _history[_position & DISTANCE_MASK] = buffer[i];
        }
        return count;
    }
}

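Aside from the namespace cleanup, Transform above is the decode direction of the xz-style delta filter: each output byte is the stored byte plus the byte emitted `distance` positions earlier. The same recurrence over a whole buffer, as a standalone sketch:

// Decodes an xz-style delta filter in place: out[i] = in[i] + out[i - distance].
static void DeltaDecode(byte[] data, int distance)
{
    // Bytes before 'distance' have no predecessor (the history starts out zeroed)
    // and are passed through unchanged.
    for (var i = distance; i < data.Length; i++)
    {
        data[i] = (byte)(data[i] + data[i - distance]);
    }
}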
View File

@@ -3,11 +3,32 @@ using System.IO;
using System.Security.Cryptography;
using System.Text;
using SharpCompress.Compressors.LZMA.Utilites;
using SharpCompress.IO;
namespace SharpCompress.Compressors.LZMA;
internal sealed class AesDecoderStream : DecoderStream2
internal sealed class AesDecoderStream : DecoderStream2, IStreamStack
{
#if DEBUG_STREAMS
long IStreamStack.InstanceId { get; set; }
#endif
int IStreamStack.DefaultBufferSize { get; set; }
Stream IStreamStack.BaseStream() => mStream;
int IStreamStack.BufferSize
{
get => 0;
set { }
}
int IStreamStack.BufferPosition
{
get => 0;
set { }
}
void IStreamStack.SetPosition(long position) { }
private readonly Stream mStream;
private readonly ICryptoTransform mDecoder;
private readonly byte[] mBuffer;
@@ -31,6 +52,10 @@ internal sealed class AesDecoderStream : DecoderStream2
mStream = input;
mLimit = limit;
#if DEBUG_STREAMS
this.DebugConstruct(typeof(AesDecoderStream));
#endif
if (((uint)input.Length & 15) != 0)
{
throw new NotSupportedException("AES decoder does not support padding.");
@@ -64,6 +89,9 @@ internal sealed class AesDecoderStream : DecoderStream2
return;
}
isDisposed = true;
#if DEBUG_STREAMS
this.DebugDispose(typeof(AesDecoderStream));
#endif
if (disposing)
{
mStream.Dispose();

View File

@@ -1,11 +1,32 @@
using System;
using System;
using System.Collections.Generic;
using System.IO;
using SharpCompress.IO;
namespace SharpCompress.Compressors.LZMA;
internal class Bcj2DecoderStream : DecoderStream2
internal class Bcj2DecoderStream : DecoderStream2, IStreamStack
{
#if DEBUG_STREAMS
long IStreamStack.InstanceId { get; set; }
#endif
int IStreamStack.DefaultBufferSize { get; set; }
Stream IStreamStack.BaseStream() => _mMainStream;
int IStreamStack.BufferSize
{
get => 0;
set { }
}
int IStreamStack.BufferPosition
{
get => 0;
set { }
}
void IStreamStack.SetPosition(long position) { }
private const int K_NUM_TOP_BITS = 24;
private const uint K_TOP_VALUE = (1 << K_NUM_TOP_BITS);
@@ -109,6 +130,10 @@ internal class Bcj2DecoderStream : DecoderStream2
_mStatusDecoder[i] = new StatusDecoder();
}
#if DEBUG_STREAMS
this.DebugConstruct(typeof(Bcj2DecoderStream));
#endif
_mIter = Run().GetEnumerator();
}
@@ -119,6 +144,9 @@ internal class Bcj2DecoderStream : DecoderStream2
return;
}
_isDisposed = true;
#if DEBUG_STREAMS
this.DebugDispose(typeof(Bcj2DecoderStream));
#endif
base.Dispose(disposing);
_mMainStream.Dispose();
_mCallStream.Dispose();

View File

@@ -15,10 +15,30 @@ namespace SharpCompress.Compressors.LZMA;
/// <summary>
/// Stream supporting the LZIP format, as documented at http://www.nongnu.org/lzip/manual/lzip_manual.html
/// </summary>
public sealed class LZipStream : Stream
public sealed class LZipStream : Stream, IStreamStack
{
#if DEBUG_STREAMS
long IStreamStack.InstanceId { get; set; }
#endif
int IStreamStack.DefaultBufferSize { get; set; }
Stream IStreamStack.BaseStream() => _stream;
int IStreamStack.BufferSize
{
get => 0;
set { }
}
int IStreamStack.BufferPosition
{
get => 0;
set { }
}
void IStreamStack.SetPosition(long position) { }
private readonly Stream _stream;
private readonly CountingWritableSubStream? _countingWritableSubStream;
private readonly SharpCompressStream? _countingWritableSubStream;
private bool _disposed;
private bool _finished;
@@ -44,7 +64,7 @@ public sealed class LZipStream : Stream
var dSize = 104 * 1024;
WriteHeaderSize(stream);
_countingWritableSubStream = new CountingWritableSubStream(stream);
_countingWritableSubStream = new SharpCompressStream(stream, leaveOpen: true);
_stream = new Crc32Stream(
new LzmaStream(
new LzmaEncoderProperties(true, dSize),
@@ -52,6 +72,9 @@ public sealed class LZipStream : Stream
_countingWritableSubStream
)
);
#if DEBUG_STREAMS
this.DebugConstruct(typeof(LZipStream));
#endif
}
}
@@ -64,7 +87,7 @@ public sealed class LZipStream : Stream
var crc32Stream = (Crc32Stream)_stream;
crc32Stream.WrappedStream.Dispose();
crc32Stream.Dispose();
var compressedCount = _countingWritableSubStream.NotNull().Count;
var compressedCount = _countingWritableSubStream.NotNull().InternalPosition;
Span<byte> intBuf = stackalloc byte[8];
BinaryPrimitives.WriteUInt32LittleEndian(intBuf, crc32Stream.Crc);
@@ -74,7 +97,10 @@ public sealed class LZipStream : Stream
_countingWritableSubStream?.Write(intBuf);
//total with headers
BinaryPrimitives.WriteUInt64LittleEndian(intBuf, compressedCount + 6 + 20);
BinaryPrimitives.WriteUInt64LittleEndian(
intBuf,
(ulong)compressedCount + (ulong)(6 + 20)
);
_countingWritableSubStream?.Write(intBuf);
}
_finished = true;
@@ -90,6 +116,9 @@ public sealed class LZipStream : Stream
return;
}
_disposed = true;
#if DEBUG_STREAMS
this.DebugDispose(typeof(LZipStream));
#endif
if (disposing)
{
Finish();

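The values written in Finish follow the 20-byte lzip member trailer: CRC-32 of the uncompressed data, the uncompressed size, and the total member size, where the member size counts the 6-byte header, the compressed payload and the trailer itself (hence the + 6 + 20). A sketch of composing that trailer, assuming the three inputs are already known:

using System;
using System.Buffers.Binary;

// lzip member trailer: CRC-32 (4 bytes) + uncompressed size (8) + member size (8),
// all little-endian. Member size covers the 6-byte header, the LZMA data and the
// 20-byte trailer.
static byte[] BuildLzipTrailer(uint crc32, ulong uncompressedSize, ulong compressedByteCount)
{
    var trailer = new byte[20];
    BinaryPrimitives.WriteUInt32LittleEndian(trailer.AsSpan(0, 4), crc32);
    BinaryPrimitives.WriteUInt64LittleEndian(trailer.AsSpan(4, 8), uncompressedSize);
    BinaryPrimitives.WriteUInt64LittleEndian(trailer.AsSpan(12, 8), compressedByteCount + 6 + 20);
    return trailer;
}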
View File

@@ -1,4 +1,4 @@
using System;
using System;
using System.Collections.Generic;
using System.Diagnostics;
@@ -28,52 +28,68 @@ internal static class Log
if (NEEDS_INDENT)
{
NEEDS_INDENT = false;
#if DEBUG_LZMA
Debug.Write(INDENT.Peek());
#endif
}
}
public static void Write(object value)
{
EnsureIndent();
#if DEBUG_LZMA
Debug.Write(value);
#endif
}
public static void Write(string text)
{
EnsureIndent();
#if DEBUG_LZMA
Debug.Write(text);
#endif
}
public static void Write(string format, params object[] args)
{
EnsureIndent();
#if DEBUG_LZMA
Debug.Write(string.Format(format, args));
#endif
}
public static void WriteLine()
{
#if DEBUG_LZMA
Debug.WriteLine("");
#endif
NEEDS_INDENT = true;
}
public static void WriteLine(object value)
{
EnsureIndent();
#if DEBUG_LZMA
Debug.WriteLine(value);
#endif
NEEDS_INDENT = true;
}
public static void WriteLine(string text)
{
EnsureIndent();
#if DEBUG_LZMA
Debug.WriteLine(text);
#endif
NEEDS_INDENT = true;
}
public static void WriteLine(string format, params object[] args)
{
EnsureIndent();
#if DEBUG_LZMA
Debug.WriteLine(string.Format(format, args));
#endif
NEEDS_INDENT = true;
}
}

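All of this tracing now sits behind the DEBUG_LZMA symbol, just as the stream instrumentation elsewhere in this diff sits behind DEBUG_STREAMS, so a release build pays nothing for it. A minimal sketch of the pattern; defining the symbol (for example through DefineConstants in the project file) turns the logging back on:

using System.Diagnostics;

internal static class TraceSketch
{
    // With DEBUG_LZMA undefined, the body is compiled away and call sites keep
    // only a cheap empty method call.
    public static void WriteLine(string text)
    {
#if DEBUG_LZMA
        Debug.WriteLine(text);
#endif
    }
}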
View File

@@ -4,11 +4,32 @@ using System;
using System.Buffers.Binary;
using System.IO;
using SharpCompress.Compressors.LZMA.LZ;
using SharpCompress.IO;
namespace SharpCompress.Compressors.LZMA;
public class LzmaStream : Stream
public class LzmaStream : Stream, IStreamStack
{
#if DEBUG_STREAMS
long IStreamStack.InstanceId { get; set; }
#endif
int IStreamStack.DefaultBufferSize { get; set; }
Stream IStreamStack.BaseStream() => _inputStream;
int IStreamStack.BufferSize
{
get => 0;
set { }
}
int IStreamStack.BufferPosition
{
get => 0;
set { }
}
void IStreamStack.SetPosition(long position) { }
private readonly Stream _inputStream;
private readonly long _inputSize;
private readonly long _outputSize;
@@ -56,6 +77,10 @@ public class LzmaStream : Stream
_outputSize = outputSize;
_isLzma2 = isLzma2;
#if DEBUG_STREAMS
this.DebugConstruct(typeof(LzmaStream));
#endif
if (!isLzma2)
{
_dictionarySize = BinaryPrimitives.ReadInt32LittleEndian(properties.AsSpan(1));
@@ -117,6 +142,11 @@ public class LzmaStream : Stream
Properties = prop;
_encoder.SetStreams(null, outputStream, -1, -1);
#if DEBUG_STREAMS
this.DebugConstruct(typeof(LzmaStream));
#endif
if (presetDictionary != null)
{
_encoder.Train(presetDictionary);
@@ -138,6 +168,9 @@ public class LzmaStream : Stream
return;
}
_isDisposed = true;
#if DEBUG_STREAMS
this.DebugDispose(typeof(LzmaStream));
#endif
if (disposing)
{
if (_encoder != null)

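The non-LZMA2 branch reads the classic 5-byte LZMA properties blob: one packed byte encoding lc/lp/pb followed by a little-endian 32-bit dictionary size, which is what ReadInt32LittleEndian(properties.AsSpan(1)) extracts above. A small decoding sketch of the full blob:

using System;
using System.Buffers.Binary;

// Classic LZMA properties: byte 0 = (pb * 5 + lp) * 9 + lc, bytes 1..4 = dictionary size (LE).
static (int lc, int lp, int pb, int dictionarySize) ParseLzmaProperties(byte[] properties)
{
    if (properties.Length < 5)
    {
        throw new ArgumentException("LZMA properties must be at least 5 bytes.");
    }
    var d = (int)properties[0];
    var lc = d % 9;
    d /= 9;
    var lp = d % 5;
    var pb = d / 5;
    var dictionarySize = BinaryPrimitives.ReadInt32LittleEndian(properties.AsSpan(1));
    return (lc, lp, pb, dictionarySize);
}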
View File

@@ -7,7 +7,7 @@ using SharpCompress.Compressors.Deflate;
using SharpCompress.Compressors.Filters;
using SharpCompress.Compressors.LZMA.Utilites;
using SharpCompress.Compressors.PPMd;
using ZstdSharp;
using SharpCompress.Compressors.ZStandard;
namespace SharpCompress.Compressors.LZMA;

View File

@@ -1,10 +1,31 @@
using System;
using System;
using System.IO;
using SharpCompress.IO;
namespace SharpCompress.Compressors.LZMA.Utilites;
internal class CrcBuilderStream : Stream
internal class CrcBuilderStream : Stream, IStreamStack
{
#if DEBUG_STREAMS
long IStreamStack.InstanceId { get; set; }
#endif
int IStreamStack.DefaultBufferSize { get; set; }
Stream IStreamStack.BaseStream() => _mTarget;
int IStreamStack.BufferSize
{
get => 0;
set { }
}
int IStreamStack.BufferPosition
{
get => 0;
set { }
}
void IStreamStack.SetPosition(long position) { }
private readonly Stream _mTarget;
private uint _mCrc;
private bool _mFinished;
@@ -13,6 +34,9 @@ internal class CrcBuilderStream : Stream
public CrcBuilderStream(Stream target)
{
_mTarget = target;
#if DEBUG_STREAMS
this.DebugConstruct(typeof(CrcBuilderStream));
#endif
_mCrc = Crc.INIT_CRC;
}
@@ -23,6 +47,9 @@ internal class CrcBuilderStream : Stream
return;
}
_isDisposed = true;
#if DEBUG_STREAMS
this.DebugDispose(typeof(CrcBuilderStream));
#endif
_mTarget.Dispose();
base.Dispose(disposing);
}

View File

@@ -1,65 +1,64 @@
namespace SharpCompress.Compressors.Lzw
namespace SharpCompress.Compressors.Lzw;

/// <summary>
/// This class contains constants used for LZW
/// </summary>
[System.Diagnostics.CodeAnalysis.SuppressMessage(
    "Naming",
    "CA1707:Identifiers should not contain underscores",
    Justification = "kept for backwards compatibility"
)]
public sealed class LzwConstants
{
    /// <summary>
    /// Magic number found at start of LZW header: 0x1f 0x9d
    /// </summary>
    public const int MAGIC = 0x1f9d;

    /// <summary>
    /// Maximum number of bits per code
    /// </summary>
    public const int MAX_BITS = 16;

    /* 3rd header byte:
     * bit 0..4 Number of compression bits
     * bit 5    Extended header
     * bit 6    Free
     * bit 7    Block mode
     */

    /// <summary>
    /// Mask for 'number of compression bits'
    /// </summary>
    public const int BIT_MASK = 0x1f;

    /// <summary>
    /// Indicates the presence of a fourth header byte
    /// </summary>
    public const int EXTENDED_MASK = 0x20;

    //public const int FREE_MASK = 0x40;

    /// <summary>
    /// Reserved bits
    /// </summary>
    public const int RESERVED_MASK = 0x60;

    /// <summary>
    /// Block compression: if table is full and compression rate is dropping,
    /// clear the dictionary.
    /// </summary>
    public const int BLOCK_MODE_MASK = 0x80;

    /// <summary>
    /// LZW file header size (in bytes)
    /// </summary>
    public const int HDR_SIZE = 3;

    /// <summary>
    /// Initial number of bits per code
    /// </summary>
    public const int INIT_BITS = 9;

    private LzwConstants() { }
}

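These constants describe the 3-byte header of a compress(1) .Z stream: the two magic bytes 0x1f 0x9d and a flags byte whose low five bits give the maximum code width and whose top bit selects block mode. A small parsing sketch using the constants above:

using SharpCompress.Compressors.Lzw;

// Interprets the 3-byte header of a .Z stream.
static (int maxBits, bool blockMode) ParseLzwHeader(byte[] header)
{
    if (header.Length < LzwConstants.HDR_SIZE
        || ((header[0] << 8) | header[1]) != LzwConstants.MAGIC)
    {
        throw new System.IO.InvalidDataException("Not an LZW (.Z) stream.");
    }
    var flags = header[2];
    var maxBits = flags & LzwConstants.BIT_MASK;                    // number of compression bits
    var blockMode = (flags & LzwConstants.BLOCK_MODE_MASK) != 0;    // dictionary may be cleared
    return (maxBits, blockMode);
}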
File diff suppressed because it is too large Load Diff

View File

@@ -1,15 +1,36 @@
#nullable disable
#nullable disable
using System;
using System.IO;
using SharpCompress.Compressors.LZMA.RangeCoder;
using SharpCompress.Compressors.PPMd.H;
using SharpCompress.Compressors.PPMd.I1;
using SharpCompress.IO;
namespace SharpCompress.Compressors.PPMd;
public class PpmdStream : Stream
public class PpmdStream : Stream, IStreamStack
{
#if DEBUG_STREAMS
long IStreamStack.InstanceId { get; set; }
#endif
int IStreamStack.DefaultBufferSize { get; set; }
Stream IStreamStack.BaseStream() => _stream;
int IStreamStack.BufferSize
{
get => 0;
set { }
}
int IStreamStack.BufferPosition
{
get => 0;
set { }
}
void IStreamStack.SetPosition(long position) { }
private readonly PpmdProperties _properties;
private readonly Stream _stream;
private readonly bool _compress;
@@ -25,6 +46,10 @@ public class PpmdStream : Stream
_stream = stream;
_compress = compress;
#if DEBUG_STREAMS
this.DebugConstruct(typeof(PpmdStream));
#endif
if (properties.Version == PpmdVersion.I1)
{
_model = new Model();
@@ -74,6 +99,9 @@ public class PpmdStream : Stream
return;
}
_isDisposed = true;
#if DEBUG_STREAMS
this.DebugDispose(typeof(PpmdStream));
#endif
if (isDisposing)
{
if (_compress)

View File

@@ -1,52 +1,51 @@
using System.Collections.Generic;
using System.Linq;

namespace SharpCompress.Compressors.RLE90
namespace SharpCompress.Compressors.RLE90;

public static class RLE
{
    private const byte DLE = 0x90;

    /// <summary>
    /// Unpacks an RLE compressed buffer.
    /// Format: <char> DLE <count>, where count == 0 -> DLE
    /// </summary>
    /// <param name="compressedBuffer">The compressed buffer to unpack.</param>
    /// <returns>A list of unpacked bytes.</returns>
    public static List<byte> UnpackRLE(byte[] compressedBuffer)
    {
        var result = new List<byte>(compressedBuffer.Length * 2); // Optimized initial capacity
        var countMode = false;
        byte last = 0;

        foreach (var c in compressedBuffer)
        {
            if (!countMode)
            {
                if (c == DLE)
                {
                    countMode = true;
                }
                else
                {
                    result.Add(c);
                    last = c;
                }
            }
            else
            {
                countMode = false;
                if (c == 0)
                {
                    result.Add(DLE);
                }
                else
                {
                    result.AddRange(Enumerable.Repeat(last, c - 1));
                }
            }
        }
        return result;
    }
}

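A tiny worked example of the DLE escape format handled by UnpackRLE: 0x90 followed by a count repeats the previously emitted byte so that the run totals `count` bytes, and 0x90 followed by 0x00 stands for a literal 0x90.

using SharpCompress.Compressors.RLE90;

// 'A' followed by DLE,4 expands to "AAAA" (the count includes the byte already
// emitted); DLE,0 decodes to a literal 0x90.
byte[] packed = { (byte)'A', 0x90, 0x04, 0x90, 0x00 };
var unpacked = RLE.UnpackRLE(packed);
// unpacked now holds: 'A', 'A', 'A', 'A', 0x90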
View File

@@ -4,61 +4,92 @@ using System.IO;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using SharpCompress.IO;
namespace SharpCompress.Compressors.RLE90
namespace SharpCompress.Compressors.RLE90;
public class RunLength90Stream : Stream
public class RunLength90Stream : Stream, IStreamStack
{
#if DEBUG_STREAMS
    long IStreamStack.InstanceId { get; set; }
#endif
    int IStreamStack.DefaultBufferSize { get; set; }

    Stream IStreamStack.BaseStream() => _stream;

    int IStreamStack.BufferSize
    {
        get => 0;
        set { }
    }
    int IStreamStack.BufferPosition
    {
        get => 0;
        set { }
    }

    void IStreamStack.SetPosition(long position) { }

    private readonly Stream _stream;
    private const byte DLE = 0x90;
    private int _compressedSize;
    private bool _processed = false;

    public RunLength90Stream(Stream stream, int compressedSize)
    {
        _stream = stream;
        _compressedSize = compressedSize;
#if DEBUG_STREAMS
        this.DebugConstruct(typeof(RunLength90Stream));
#endif
    }

    protected override void Dispose(bool disposing)
    {
#if DEBUG_STREAMS
        this.DebugDispose(typeof(RunLength90Stream));
#endif
        base.Dispose(disposing);
    }

    public override bool CanRead => true;
    public override bool CanSeek => false;
    public override bool CanWrite => false;
    public override long Length => throw new NotImplementedException();
    public override long Position
    {
        get => _stream.Position;
        set => throw new NotImplementedException();
    }

    public override void Flush() => throw new NotImplementedException();

    public override int Read(byte[] buffer, int offset, int count)
    {
        if (_processed)
        {
            return 0;
        }
        _processed = true;
        using var binaryReader = new BinaryReader(_stream);
        byte[] compressedBuffer = binaryReader.ReadBytes(_compressedSize);
        var unpacked = RLE.UnpackRLE(compressedBuffer);
        unpacked.CopyTo(buffer);
        return unpacked.Count;
    }

    public override long Seek(long offset, SeekOrigin origin) =>
        throw new NotImplementedException();

    public override void SetLength(long value) => throw new NotImplementedException();

    public override void Write(byte[] buffer, int offset, int count) =>
        throw new NotImplementedException();
}

View File

@@ -5,11 +5,32 @@ using System.Collections.Generic;
using System.IO;
using SharpCompress.Common;
using SharpCompress.Common.Rar;
using SharpCompress.IO;
namespace SharpCompress.Compressors.Rar;
internal sealed class MultiVolumeReadOnlyStream : Stream
internal sealed class MultiVolumeReadOnlyStream : Stream, IStreamStack
{
#if DEBUG_STREAMS
long IStreamStack.InstanceId { get; set; }
#endif
int IStreamStack.DefaultBufferSize { get; set; }
Stream IStreamStack.BaseStream() => currentStream;
int IStreamStack.BufferSize
{
get => 0;
set { }
}
int IStreamStack.BufferPosition
{
get => 0;
set { }
}
void IStreamStack.SetPosition(long position) { }
private long currentPosition;
private long maxPosition;
@@ -31,6 +52,9 @@ internal sealed class MultiVolumeReadOnlyStream : Stream
filePartEnumerator = parts.GetEnumerator();
filePartEnumerator.MoveNext();
InitializeNextFilePart();
#if DEBUG_STREAMS
this.DebugConstruct(typeof(MultiVolumeReadOnlyStream));
#endif
}
protected override void Dispose(bool disposing)
@@ -38,6 +62,10 @@ internal sealed class MultiVolumeReadOnlyStream : Stream
base.Dispose(disposing);
if (disposing)
{
#if DEBUG_STREAMS
this.DebugDispose(typeof(MultiVolumeReadOnlyStream));
#endif
if (filePartEnumerator != null)
{
filePartEnumerator.Dispose();

View File

@@ -3,11 +3,31 @@ using System.IO;
using System.Linq;
using SharpCompress.Common;
using SharpCompress.Common.Rar.Headers;
using SharpCompress.IO;
namespace SharpCompress.Compressors.Rar;
internal class RarBLAKE2spStream : RarStream
internal class RarBLAKE2spStream : RarStream, IStreamStack
{
#if DEBUG_STREAMS
long IStreamStack.InstanceId { get; set; }
#endif
Stream IStreamStack.BaseStream() => readStream;
int IStreamStack.BufferSize
{
get => 0;
set { }
}
int IStreamStack.BufferPosition
{
get => 0;
set { }
}
void IStreamStack.SetPosition(long position) { }
private readonly MultiVolumeReadOnlyStream readStream;
private readonly bool disableCRCCheck;
@@ -91,12 +111,24 @@ internal class RarBLAKE2spStream : RarStream
: base(unpack, fileHeader, readStream)
{
this.readStream = readStream;
#if DEBUG_STREAMS
this.DebugConstruct(typeof(RarBLAKE2spStream));
#endif
disableCRCCheck = fileHeader.IsEncrypted;
_hash = fileHeader.FileCrc.NotNull();
_blake2sp = new BLAKE2SP();
ResetCrc();
}
protected override void Dispose(bool disposing)
{
#if DEBUG_STREAMS
this.DebugDispose(typeof(RarBLAKE2spStream));
#endif
base.Dispose(disposing);
}
public byte[] GetCrc() => _hash;
internal void ResetCrc(BLAKE2S hash)

View File

@@ -1,11 +1,32 @@
using System;
using System.IO;
using SharpCompress.Common;
using SharpCompress.Common.Rar.Headers;
using SharpCompress.IO;
namespace SharpCompress.Compressors.Rar;
internal class RarCrcStream : RarStream
internal class RarCrcStream : RarStream, IStreamStack
{
#if DEBUG_STREAMS
long IStreamStack.InstanceId { get; set; }
#endif
Stream IStreamStack.BaseStream() => readStream;
int IStreamStack.BufferSize
{
get => 0;
set { }
}
int IStreamStack.BufferPosition
{
get => 0;
set { }
}
void IStreamStack.SetPosition(long position) { }
private readonly MultiVolumeReadOnlyStream readStream;
private uint currentCrc;
private readonly bool disableCRC;
@@ -18,10 +39,21 @@ internal class RarCrcStream : RarStream
: base(unpack, fileHeader, readStream)
{
this.readStream = readStream;
#if DEBUG_STREAMS
this.DebugConstruct(typeof(RarCrcStream));
#endif
disableCRC = fileHeader.IsEncrypted;
ResetCrc();
}
protected override void Dispose(bool disposing)
{
#if DEBUG_STREAMS
this.DebugDispose(typeof(RarCrcStream));
#endif
base.Dispose(disposing);
}
public uint GetCrc() => ~currentCrc;
public void ResetCrc() => currentCrc = 0xffffffff;

View File

@@ -4,11 +4,32 @@ using System;
using System.Buffers;
using System.IO;
using SharpCompress.Common.Rar.Headers;
using SharpCompress.IO;
namespace SharpCompress.Compressors.Rar;
internal class RarStream : Stream
internal class RarStream : Stream, IStreamStack
{
#if DEBUG_STREAMS
long IStreamStack.InstanceId { get; set; }
#endif
int IStreamStack.DefaultBufferSize { get; set; }
Stream IStreamStack.BaseStream() => readStream;
int IStreamStack.BufferSize
{
get => 0;
set { }
}
int IStreamStack.BufferPosition
{
get => 0;
set { }
}
void IStreamStack.SetPosition(long position) { }
private readonly IRarUnpack unpack;
private readonly FileHeader fileHeader;
private readonly Stream readStream;
@@ -31,6 +52,11 @@ internal class RarStream : Stream
this.unpack = unpack;
this.fileHeader = fileHeader;
this.readStream = readStream;
#if DEBUG_STREAMS
this.DebugConstruct(typeof(RarStream));
#endif
fetch = true;
unpack.DoUnpack(fileHeader, readStream, this);
fetch = false;
@@ -43,6 +69,9 @@ internal class RarStream : Stream
{
if (disposing)
{
#if DEBUG_STREAMS
this.DebugDispose(typeof(RarStream));
#endif
ArrayPool<byte>.Shared.Return(this.tmpBuffer);
this.tmpBuffer = null;
}

View File

@@ -1,10 +1,31 @@
using System;
using System.IO;
using SharpCompress.IO;
namespace SharpCompress.Compressors.Reduce;
public class ReduceStream : Stream
public class ReduceStream : Stream, IStreamStack
{
#if DEBUG_STREAMS
long IStreamStack.InstanceId { get; set; }
#endif
int IStreamStack.DefaultBufferSize { get; set; }
Stream IStreamStack.BaseStream() => inStream;
int IStreamStack.BufferSize
{
get => 0;
set { }
}
int IStreamStack.BufferPosition
{
get => 0;
set { }
}
void IStreamStack.SetPosition(long position) { }
private readonly long unCompressedSize;
private readonly long compressedSize;
private readonly Stream inStream;
@@ -31,6 +52,10 @@ public class ReduceStream : Stream
inByteCount = 0;
outBytesCount = 0;
#if DEBUG_STREAMS
this.DebugConstruct(typeof(ReduceStream));
#endif
this.factor = factor;
distanceMask = (int)mask_bits[factor] << 8;
lengthMask = 0xff >> factor;
@@ -47,6 +72,14 @@ public class ReduceStream : Stream
LoadNextByteTable();
}
protected override void Dispose(bool disposing)
{
#if DEBUG_STREAMS
this.DebugDispose(typeof(ReduceStream));
#endif
base.Dispose(disposing);
}
public override void Flush()
{
throw new NotImplementedException();

View File

@@ -1,79 +1,78 @@
namespace SharpCompress.Compressors.Shrink
namespace SharpCompress.Compressors.Shrink;

internal class BitStream
{
    private byte[] _src;
    private int _srcLen;
    private int _byteIdx;
    private int _bitIdx;
    private int _bitsLeft;
    private ulong _bitBuffer;

    private static uint[] _maskBits = new uint[17]
    {
        0U,
        1U,
        3U,
        7U,
        15U,
        31U,
        63U,
        (uint)sbyte.MaxValue,
        (uint)byte.MaxValue,
        511U,
        1023U,
        2047U,
        4095U,
        8191U,
        16383U,
        (uint)short.MaxValue,
        (uint)ushort.MaxValue,
    };

    public BitStream(byte[] src, int srcLen)
    {
        _src = src;
        _srcLen = srcLen;
        _byteIdx = 0;
        _bitIdx = 0;
    }

    public int BytesRead => (_byteIdx << 3) + _bitIdx;

    private int NextByte()
    {
        if (_byteIdx >= _srcLen)
        {
            return 0;
        }
        return _src[_byteIdx++];
    }

    public int NextBits(int nbits)
    {
        var result = 0;
        if (nbits > _bitsLeft)
        {
            int num;
            while (_bitsLeft <= 24 && (num = NextByte()) != 1234)
            {
                _bitBuffer |= (ulong)num << _bitsLeft;
                _bitsLeft += 8;
            }
        }
        result = (int)((long)_bitBuffer & (long)_maskBits[nbits]);
        _bitBuffer >>= nbits;
        _bitsLeft -= nbits;
        return result;
    }

    public bool Advance(int count)
    {
        if (_byteIdx > _srcLen)
        {
            return false;
        }
        return true;
    }
}

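A brief sketch of how the unshrink decoder (further below) drives this reader: NextBits pulls the next code out of the bit buffer and Advance validates the read, returning false once the source has been exhausted; the code width starts at 9 bits and can grow to 13 via a control code.

// Illustrative read loop over shrink-compressed bytes, inside the library where
// BitStream is visible; codes start at 9 bits wide.
static void ReadAllCodes(byte[] src)
{
    var stream = new BitStream(src, src.Length);
    var codeSize = 9;
    while (true)
    {
        var code = stream.NextBits(codeSize);   // pull the next code from the bit buffer
        if (!stream.Advance(codeSize))          // false once the source is exhausted
        {
            break;
        }
        // ... hand 'code' to the shrink/LZW code table, growing codeSize (up to 13)
        // when an INC_CODE_SIZE control code is seen ...
    }
}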
View File

@@ -1,275 +1,297 @@
using System;
namespace SharpCompress.Compressors.Shrink
namespace SharpCompress.Compressors.Shrink;
public class HwUnshrink
{
public class HwUnshrink
private const int MIN_CODE_SIZE = 9;
private const int MAX_CODE_SIZE = 13;
private const ushort MAX_CODE = (ushort)((1U << MAX_CODE_SIZE) - 1);
private const ushort INVALID_CODE = ushort.MaxValue;
private const ushort CONTROL_CODE = 256;
private const ushort INC_CODE_SIZE = 1;
private const ushort PARTIAL_CLEAR = 2;
private const int HASH_BITS = MAX_CODE_SIZE + 1; // For a load factor of 0.5.
private const int HASHTAB_SIZE = 1 << HASH_BITS;
private const ushort UNKNOWN_LEN = ushort.MaxValue;
private struct CodeTabEntry
{
private const int MIN_CODE_SIZE = 9;
private const int MAX_CODE_SIZE = 13;
public int prefixCode; // INVALID_CODE means the entry is invalid.
public byte extByte;
public ushort len;
public int lastDstPos;
}
private const ushort MAX_CODE = (ushort)((1U << MAX_CODE_SIZE) - 1);
private const ushort INVALID_CODE = ushort.MaxValue;
private const ushort CONTROL_CODE = 256;
private const ushort INC_CODE_SIZE = 1;
private const ushort PARTIAL_CLEAR = 2;
private const int HASH_BITS = MAX_CODE_SIZE + 1; // For a load factor of 0.5.
private const int HASHTAB_SIZE = 1 << HASH_BITS;
private const ushort UNKNOWN_LEN = ushort.MaxValue;
private struct CodeTabEntry
private static void CodeTabInit(CodeTabEntry[] codeTab)
{
for (var i = 0; i <= byte.MaxValue; i++)
{
public int prefixCode; // INVALID_CODE means the entry is invalid.
public byte extByte;
public ushort len;
public int lastDstPos;
codeTab[i].prefixCode = (ushort)i;
codeTab[i].extByte = (byte)i;
codeTab[i].len = 1;
}
private static void CodeTabInit(CodeTabEntry[] codeTab)
for (var i = byte.MaxValue + 1; i <= MAX_CODE; i++)
{
for (var i = 0; i <= byte.MaxValue; i++)
{
codeTab[i].prefixCode = (ushort)i;
codeTab[i].extByte = (byte)i;
codeTab[i].len = 1;
}
codeTab[i].prefixCode = INVALID_CODE;
}
}
for (var i = byte.MaxValue + 1; i <= MAX_CODE; i++)
private static void UnshrinkPartialClear(CodeTabEntry[] codeTab, ref CodeQueue queue)
{
var isPrefix = new bool[MAX_CODE + 1];
int codeQueueSize;
// Scan for codes that have been used as a prefix.
for (var i = CONTROL_CODE + 1; i <= MAX_CODE; i++)
{
if (codeTab[i].prefixCode != INVALID_CODE)
{
isPrefix[codeTab[i].prefixCode] = true;
}
}
// Clear "non-prefix" codes in the table; populate the code queue.
codeQueueSize = 0;
for (var i = CONTROL_CODE + 1; i <= MAX_CODE; i++)
{
if (!isPrefix[i])
{
codeTab[i].prefixCode = INVALID_CODE;
queue.codes[codeQueueSize++] = (ushort)i;
}
}
private static void UnshrinkPartialClear(CodeTabEntry[] codeTab, ref CodeQueue queue)
queue.codes[codeQueueSize] = INVALID_CODE; // End-of-queue marker.
queue.nextIdx = 0;
}
private static bool ReadCode(
BitStream stream,
ref int codeSize,
CodeTabEntry[] codeTab,
ref CodeQueue queue,
out int nextCode
)
{
int code,
controlCode;
code = (int)stream.NextBits(codeSize);
if (!stream.Advance(codeSize))
{
var isPrefix = new bool[MAX_CODE + 1];
int codeQueueSize;
// Scan for codes that have been used as a prefix.
for (var i = CONTROL_CODE + 1; i <= MAX_CODE; i++)
{
if (codeTab[i].prefixCode != INVALID_CODE)
{
isPrefix[codeTab[i].prefixCode] = true;
}
}
// Clear "non-prefix" codes in the table; populate the code queue.
codeQueueSize = 0;
for (var i = CONTROL_CODE + 1; i <= MAX_CODE; i++)
{
if (!isPrefix[i])
{
codeTab[i].prefixCode = INVALID_CODE;
queue.codes[codeQueueSize++] = (ushort)i;
}
}
queue.codes[codeQueueSize] = INVALID_CODE; // End-of-queue marker.
queue.nextIdx = 0;
nextCode = INVALID_CODE;
return false;
}
private static bool ReadCode(
BitStream stream,
ref int codeSize,
CodeTabEntry[] codeTab,
ref CodeQueue queue,
out int nextCode
)
// Handle regular codes (the common case).
if (code != CONTROL_CODE)
{
int code,
controlCode;
code = (int)stream.NextBits(codeSize);
if (!stream.Advance(codeSize))
{
nextCode = INVALID_CODE;
return false;
}
// Handle regular codes (the common case).
if (code != CONTROL_CODE)
{
nextCode = code;
return true;
}
// Handle control codes.
controlCode = (ushort)stream.NextBits(codeSize);
if (!stream.Advance(codeSize))
{
nextCode = INVALID_CODE;
return true;
}
if (controlCode == INC_CODE_SIZE && codeSize < MAX_CODE_SIZE)
{
codeSize++;
return ReadCode(stream, ref codeSize, codeTab, ref queue, out nextCode);
}
if (controlCode == PARTIAL_CLEAR)
{
UnshrinkPartialClear(codeTab, ref queue);
return ReadCode(stream, ref codeSize, codeTab, ref queue, out nextCode);
}
nextCode = code;
return true;
}
// Handle control codes.
controlCode = (ushort)stream.NextBits(codeSize);
if (!stream.Advance(codeSize))
{
nextCode = INVALID_CODE;
return true;
}
private static void CopyFromPrevPos(byte[] dst, int prevPos, int dstPos, int len)
if (controlCode == INC_CODE_SIZE && codeSize < MAX_CODE_SIZE)
{
if (dstPos + len > dst.Length)
{
// Not enough room in dst for the sloppy copy below.
Array.Copy(dst, prevPos, dst, dstPos, len);
return;
}
if (prevPos + len > dstPos)
{
// Benign one-byte overlap possible in the KwKwK case.
//assert(prevPos + len == dstPos + 1);
//assert(dst[prevPos] == dst[prevPos + len - 1]);
}
Buffer.BlockCopy(dst, prevPos, dst, dstPos, len);
codeSize++;
return ReadCode(stream, ref codeSize, codeTab, ref queue, out nextCode);
}
private static UnshrnkStatus OutputCode(
int code,
byte[] dst,
int dstPos,
int dstCap,
int prevCode,
CodeTabEntry[] codeTab,
ref CodeQueue queue,
out byte firstByte,
out int len
)
if (controlCode == PARTIAL_CLEAR)
{
int prefixCode;
UnshrinkPartialClear(codeTab, ref queue);
return ReadCode(stream, ref codeSize, codeTab, ref queue, out nextCode);
}
//assert(code <= MAX_CODE && code != CONTROL_CODE);
//assert(dstPos < dstCap);
nextCode = INVALID_CODE;
return true;
}
private static void CopyFromPrevPos(byte[] dst, int prevPos, int dstPos, int len)
{
if (dstPos + len > dst.Length)
{
// Not enough room in dst for the sloppy copy below.
Array.Copy(dst, prevPos, dst, dstPos, len);
return;
}
if (prevPos + len > dstPos)
{
// Benign one-byte overlap possible in the KwKwK case.
//assert(prevPos + len == dstPos + 1);
//assert(dst[prevPos] == dst[prevPos + len - 1]);
}
Buffer.BlockCopy(dst, prevPos, dst, dstPos, len);
}
private static UnshrnkStatus OutputCode(
int code,
byte[] dst,
int dstPos,
int dstCap,
int prevCode,
CodeTabEntry[] codeTab,
ref CodeQueue queue,
out byte firstByte,
out int len
)
{
int prefixCode;
//assert(code <= MAX_CODE && code != CONTROL_CODE);
//assert(dstPos < dstCap);
firstByte = 0;
if (code <= byte.MaxValue)
{
// Output literal byte.
firstByte = (byte)code;
len = 1;
dst[dstPos] = (byte)code;
return UnshrnkStatus.Ok;
}
if (codeTab[code].prefixCode == INVALID_CODE || codeTab[code].prefixCode == code)
{
// Reject invalid codes. Self-referential codes may exist in the table but cannot be used.
firstByte = 0;
if (code <= byte.MaxValue)
{
// Output literal byte.
firstByte = (byte)code;
len = 1;
dst[dstPos] = (byte)code;
return UnshrnkStatus.Ok;
}
len = 0;
return UnshrnkStatus.Error;
}
if (codeTab[code].prefixCode == INVALID_CODE || codeTab[code].prefixCode == code)
{
// Reject invalid codes. Self-referential codes may exist in the table but cannot be used.
firstByte = 0;
len = 0;
return UnshrnkStatus.Error;
}
if (codeTab[code].len != UNKNOWN_LEN)
{
// Output string with known length (the common case).
if (dstCap - dstPos < codeTab[code].len)
{
firstByte = 0;
len = 0;
return UnshrnkStatus.Full;
}
CopyFromPrevPos(dst, codeTab[code].lastDstPos, dstPos, codeTab[code].len);
firstByte = dst[dstPos];
len = codeTab[code].len;
return UnshrnkStatus.Ok;
}
// Output a string of unknown length.
//assert(codeTab[code].len == UNKNOWN_LEN);
prefixCode = codeTab[code].prefixCode;
// assert(prefixCode > CONTROL_CODE);
if (prefixCode == queue.codes[queue.nextIdx])
{
// The prefix code hasn't been added yet, but we were just about to: the KwKwK case.
//assert(codeTab[prevCode].prefixCode != INVALID_CODE);
codeTab[prefixCode].prefixCode = prevCode;
codeTab[prefixCode].extByte = firstByte;
codeTab[prefixCode].len = (ushort)(codeTab[prevCode].len + 1);
codeTab[prefixCode].lastDstPos = codeTab[prevCode].lastDstPos;
dst[dstPos] = firstByte;
}
else if (codeTab[prefixCode].prefixCode == INVALID_CODE)
{
// The prefix code is still invalid.
firstByte = 0;
len = 0;
return UnshrnkStatus.Error;
}
// Output the prefix string, then the extension byte.
len = codeTab[prefixCode].len + 1;
if (dstCap - dstPos < len)
if (codeTab[code].len != UNKNOWN_LEN)
{
// Output string with known length (the common case).
if (dstCap - dstPos < codeTab[code].len)
{
firstByte = 0;
len = 0;
return UnshrnkStatus.Full;
}
CopyFromPrevPos(dst, codeTab[prefixCode].lastDstPos, dstPos, codeTab[prefixCode].len);
dst[dstPos + len - 1] = codeTab[code].extByte;
CopyFromPrevPos(dst, codeTab[code].lastDstPos, dstPos, codeTab[code].len);
firstByte = dst[dstPos];
// Update the code table now that the string has a length and pos.
//assert(prevCode != code);
codeTab[code].len = (ushort)len;
codeTab[code].lastDstPos = dstPos;
len = codeTab[code].len;
return UnshrnkStatus.Ok;
}
public static UnshrnkStatus Unshrink(
byte[] src,
int srcLen,
out int srcUsed,
byte[] dst,
int dstCap,
out int dstUsed
)
// Output a string of unknown length.
//assert(codeTab[code].len == UNKNOWN_LEN);
prefixCode = codeTab[code].prefixCode;
// assert(prefixCode > CONTROL_CODE);
if (prefixCode == queue.codes[queue.nextIdx])
{
var codeTab = new CodeTabEntry[HASHTAB_SIZE];
var queue = new CodeQueue();
var stream = new BitStream(src, srcLen);
int codeSize,
dstPos,
len;
int currCode,
prevCode,
newCode;
byte firstByte;
// The prefix code hasn't been added yet, but we were just about to: the KwKwK case.
//assert(codeTab[prevCode].prefixCode != INVALID_CODE);
codeTab[prefixCode].prefixCode = prevCode;
codeTab[prefixCode].extByte = firstByte;
codeTab[prefixCode].len = (ushort)(codeTab[prevCode].len + 1);
codeTab[prefixCode].lastDstPos = codeTab[prevCode].lastDstPos;
dst[dstPos] = firstByte;
}
else if (codeTab[prefixCode].prefixCode == INVALID_CODE)
{
// The prefix code is still invalid.
firstByte = 0;
len = 0;
return UnshrnkStatus.Error;
}
CodeTabInit(codeTab);
CodeQueueInit(ref queue);
codeSize = MIN_CODE_SIZE;
dstPos = 0;
// Output the prefix string, then the extension byte.
len = codeTab[prefixCode].len + 1;
if (dstCap - dstPos < len)
{
firstByte = 0;
len = 0;
return UnshrnkStatus.Full;
}
// Handle the first code separately since there is no previous code.
if (!ReadCode(stream, ref codeSize, codeTab, ref queue, out currCode))
CopyFromPrevPos(dst, codeTab[prefixCode].lastDstPos, dstPos, codeTab[prefixCode].len);
dst[dstPos + len - 1] = codeTab[code].extByte;
firstByte = dst[dstPos];
// Update the code table now that the string has a length and pos.
//assert(prevCode != code);
codeTab[code].len = (ushort)len;
codeTab[code].lastDstPos = dstPos;
return UnshrnkStatus.Ok;
}
public static UnshrnkStatus Unshrink(
byte[] src,
int srcLen,
out int srcUsed,
byte[] dst,
int dstCap,
out int dstUsed
)
{
var codeTab = new CodeTabEntry[HASHTAB_SIZE];
var queue = new CodeQueue();
var stream = new BitStream(src, srcLen);
int codeSize,
dstPos,
len;
int currCode,
prevCode,
newCode;
byte firstByte;
CodeTabInit(codeTab);
CodeQueueInit(ref queue);
codeSize = MIN_CODE_SIZE;
dstPos = 0;
// Handle the first code separately since there is no previous code.
if (!ReadCode(stream, ref codeSize, codeTab, ref queue, out currCode))
{
srcUsed = stream.BytesRead;
dstUsed = 0;
return UnshrnkStatus.Ok;
}
//assert(currCode != CONTROL_CODE);
if (currCode > byte.MaxValue)
{
srcUsed = stream.BytesRead;
dstUsed = 0;
return UnshrnkStatus.Error; // The first code must be a literal.
}
if (dstPos == dstCap)
{
srcUsed = stream.BytesRead;
dstUsed = 0;
return UnshrnkStatus.Full;
}
firstByte = (byte)currCode;
dst[dstPos] = (byte)currCode;
codeTab[currCode].lastDstPos = dstPos;
dstPos++;
prevCode = currCode;
while (ReadCode(stream, ref codeSize, codeTab, ref queue, out currCode))
{
if (currCode == INVALID_CODE)
{
srcUsed = stream.BytesRead;
dstUsed = 0;
return UnshrnkStatus.Ok;
}
//assert(currCode != CONTROL_CODE);
if (currCode > byte.MaxValue)
{
srcUsed = stream.BytesRead;
dstUsed = 0;
return UnshrnkStatus.Error; // The first code must be a literal.
return UnshrnkStatus.Error;
}
if (dstPos == dstCap)
@@ -279,153 +301,130 @@ namespace SharpCompress.Compressors.Shrink
srcUsed = stream.BytesRead;
dstUsed = 0;
return UnshrnkStatus.Full;
}
// Handle KwKwK: next code used before being added.
if (currCode == queue.codes[queue.nextIdx])
{
if (codeTab[prevCode].prefixCode == INVALID_CODE)
{
// The previous code is no longer valid.
srcUsed = stream.BytesRead;
dstUsed = 0;
return UnshrnkStatus.Error;
}
// Extend the previous code with its first byte.
//assert(currCode != prevCode);
codeTab[currCode].prefixCode = prevCode;
codeTab[currCode].extByte = firstByte;
codeTab[currCode].len = (ushort)(codeTab[prevCode].len + 1);
codeTab[currCode].lastDstPos = codeTab[prevCode].lastDstPos;
//assert(dstPos < dstCap);
dst[dstPos] = firstByte;
}
// Output the string represented by the current code.
var status = OutputCode(
currCode,
dst,
dstPos,
dstCap,
prevCode,
codeTab,
ref queue,
out firstByte,
out len
);
if (status != UnshrnkStatus.Ok)
{
srcUsed = stream.BytesRead;
dstUsed = 0;
return status;
}
// Verify that the output matches walking the prefixes.
var c = currCode;
for (var i = 0; i < len; i++)
{
//assert(codeTab[c].len == len - i);
//assert(codeTab[c].extByte == dst[dstPos + len - i - 1]);
c = codeTab[c].prefixCode;
}
// Add a new code to the string table if there's room.
// The string is the previous code's string extended with the first byte of the current code's string.
newCode = CodeQueueRemoveNext(ref queue);
if (newCode != INVALID_CODE)
{
//assert(codeTab[prevCode].lastDstPos < dstPos);
codeTab[newCode].prefixCode = prevCode;
codeTab[newCode].extByte = firstByte;
codeTab[newCode].len = (ushort)(codeTab[prevCode].len + 1);
codeTab[newCode].lastDstPos = codeTab[prevCode].lastDstPos;
if (codeTab[prevCode].prefixCode == INVALID_CODE)
{
// prevCode was invalidated in a partial clearing. Until that code is re-used, the
// string represented by newCode is indeterminate.
codeTab[newCode].len = UNKNOWN_LEN;
}
// If prevCode was invalidated in a partial clearing, it's possible that newCode == prevCode,
// in which case it will never be used or cleared.
}
codeTab[currCode].lastDstPos = dstPos;
dstPos += len;
prevCode = currCode;
}
srcUsed = stream.BytesRead;
dstUsed = dstPos;
return UnshrnkStatus.Ok;
}
public enum UnshrnkStatus
{
Ok,
Full,
Error,
}
private struct CodeQueue
{
public int nextIdx;
public ushort[] codes;
}
private static void CodeQueueInit(ref CodeQueue q)
{
int codeQueueSize;
ushort code;
codeQueueSize = 0;
q.codes = new ushort[MAX_CODE - CONTROL_CODE + 2];
for (code = CONTROL_CODE + 1; code <= MAX_CODE; code++)
{
q.codes[codeQueueSize++] = code;
}
//assert(codeQueueSize < q.codes.Length);
q.codes[codeQueueSize] = INVALID_CODE; // End-of-queue marker.
q.nextIdx = 0;
}
private static ushort CodeQueueNext(ref CodeQueue q) =>
//assert(q.nextIdx < q.codes.Length);
q.codes[q.nextIdx];
private static ushort CodeQueueRemoveNext(ref CodeQueue q)
{
var code = CodeQueueNext(ref q);
if (code != INVALID_CODE)
{
q.nextIdx++;
}
return code;
}
}
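For orientation, a minimal sketch of how the one-shot Unshrink API above might be driven. Only the Unshrink signature and the UnshrnkStatus values come from the code above; the containing class name HwUnshrink, the input path, and the output capacity are assumptions.

// Hypothetical driver for the unshrink routine above; HwUnshrink is an assumed class name.
byte[] compressed = File.ReadAllBytes("entry.shrunk"); // placeholder input
var output = new byte[1024 * 1024];                    // assumed capacity for the decoded data
var status = HwUnshrink.Unshrink(
    compressed,
    compressed.Length,
    out var srcUsed,
    output,
    output.Length,
    out var dstUsed
);
if (status == HwUnshrink.UnshrnkStatus.Full)
{
    // Output buffer was too small; retry with a larger capacity.
}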

View File

@@ -1,10 +1,31 @@
using System;
using System.IO;
using SharpCompress.IO;
namespace SharpCompress.Compressors.Shrink;
internal class ShrinkStream : Stream
internal class ShrinkStream : Stream, IStreamStack
{
#if DEBUG_STREAMS
long IStreamStack.InstanceId { get; set; }
#endif
int IStreamStack.DefaultBufferSize { get; set; }
Stream IStreamStack.BaseStream() => inStream;
int IStreamStack.BufferSize
{
get => 0;
set { }
}
int IStreamStack.BufferPosition
{
get => 0;
set { }
}
void IStreamStack.SetPosition(long position) { }
private Stream inStream;
private CompressionMode _compressionMode;
@@ -23,12 +44,24 @@ internal class ShrinkStream : Stream
inStream = stream;
_compressionMode = compressionMode;
#if DEBUG_STREAMS
this.DebugConstruct(typeof(ShrinkStream));
#endif
_compressedSize = (ulong)compressedSize;
_uncompressedSize = uncompressedSize;
_byteOut = new byte[_uncompressedSize];
_outBytesCount = 0L;
}
protected override void Dispose(bool disposing)
{
#if DEBUG_STREAMS
this.DebugDispose(typeof(ShrinkStream));
#endif
base.Dispose(disposing);
}
public override bool CanRead => true;
public override bool CanSeek => true;

View File

@@ -5,110 +5,140 @@ using System.Linq;
using System.Text;
using System.Threading.Tasks;
using SharpCompress.Compressors.RLE90;
using ZstdSharp.Unsafe;
using SharpCompress.IO;
namespace SharpCompress.Compressors.Squeezed;
public class SqueezeStream : Stream, IStreamStack
{
#if DEBUG_STREAMS
long IStreamStack.InstanceId { get; set; }
#endif
int IStreamStack.DefaultBufferSize { get; set; }
Stream IStreamStack.BaseStream() => _stream;
int IStreamStack.BufferSize
{
get => 0;
set { }
}
int IStreamStack.BufferPosition
{
get => 0;
set { }
}
void IStreamStack.SetPosition(long position) { }
private readonly Stream _stream;
private readonly int _compressedSize;
private const int NUMVALS = 257;
private const int SPEOF = 256;
private bool _processed = false;
public SqueezeStream(Stream stream, int compressedSize)
{
_stream = stream;
_compressedSize = compressedSize;
#if DEBUG_STREAMS
this.DebugConstruct(typeof(SqueezeStream));
#endif
}
protected override void Dispose(bool disposing)
{
#if DEBUG_STREAMS
this.DebugDispose(typeof(SqueezeStream));
#endif
base.Dispose(disposing);
}
public override bool CanRead => true;
public override bool CanSeek => false;
public override bool CanWrite => false;
public override long Length => throw new NotImplementedException();
public override long Position
{
get => _stream.Position;
set => throw new NotImplementedException();
}
public override void Flush() => throw new NotImplementedException();
public override int Read(byte[] buffer, int offset, int count)
{
if (_processed)
{
return 0;
}
_processed = true;
using var binaryReader = new BinaryReader(_stream);
// Read numnodes (equivalent to convert_u16!(numnodes, buf))
var numnodes = binaryReader.ReadUInt16();
// Validation: numnodes should be within bounds
if (numnodes >= NUMVALS)
{
throw new InvalidDataException(
$"Invalid number of nodes {numnodes} (max {NUMVALS - 1})"
);
}
// Handle the case where no nodes exist
if (numnodes == 0)
{
return 0;
}
// Build dnode (tree of nodes)
var dnode = new int[numnodes, 2];
for (int j = 0; j < numnodes; j++)
{
dnode[j, 0] = binaryReader.ReadInt16();
dnode[j, 1] = binaryReader.ReadInt16();
}
// Initialize BitReader for reading bits
var bitReader = new BitReader(_stream);
var decoded = new List<byte>();
int i = 0;
// Decode the buffer using the dnode tree
while (true)
{
i = dnode[i, bitReader.ReadBit() ? 1 : 0];
if (i < 0)
{
i = (short)-(i + 1);
if (i == SPEOF)
{
break;
}
else
{
decoded.Add((byte)i);
i = 0;
}
}
}
// Unpack the decoded buffer using the RLE class
var unpacked = RLE.UnpackRLE(decoded.ToArray());
unpacked.CopyTo(buffer, 0);
return unpacked.Count();
}
public override long Seek(long offset, SeekOrigin origin) =>
throw new NotImplementedException();
public override void SetLength(long value) => throw new NotImplementedException();
public override void Write(byte[] buffer, int offset, int count) =>
throw new NotImplementedException();
}
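A hedged usage sketch for the stream above. The Read override decodes the whole entry in one pass, so a single call with a sufficiently large buffer is enough; the input path, the use of the stream length as the compressed size, and the buffer bound are assumptions.

// Hypothetical usage of SqueezeStream; sizes and paths are placeholders.
using var input = File.OpenRead("file.sq");
using var squeeze = new SqueezeStream(input, (int)input.Length); // compressed size assumed to be the file length
var buffer = new byte[64 * 1024];                                // assumed upper bound on the decoded size
var decoded = squeeze.Read(buffer, 0, buffer.Length);            // one-shot decode; returns the decoded byte count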

View File

@@ -23,7 +23,7 @@ public class XZFooter
public static XZFooter FromStream(Stream stream)
{
var footer = new XZFooter(
new BinaryReader(NonDisposingStream.Create(stream), Encoding.UTF8)
new BinaryReader(SharpCompressStream.Create(stream, leaveOpen: true), Encoding.UTF8)
);
footer.Process();
return footer;

View File

@@ -19,7 +19,7 @@ public class XZHeader
public static XZHeader FromStream(Stream stream)
{
var header = new XZHeader(
new BinaryReader(NonDisposingStream.Create(stream), Encoding.UTF8)
new BinaryReader(SharpCompressStream.Create(stream, leaveOpen: true), Encoding.UTF8)
);
header.Process();
return header;

View File

@@ -32,7 +32,7 @@ public class XZIndex
public static XZIndex FromStream(Stream stream, bool indexMarkerAlreadyVerified)
{
var index = new XZIndex(
new BinaryReader(NonDisposingStream.Create(stream), Encoding.UTF8),
new BinaryReader(SharpCompressStream.Create(stream, leaveOpen: true), Encoding.UTF8),
indexMarkerAlreadyVerified
);
index.Process();
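These three hunks swap NonDisposingStream for SharpCompressStream.Create(stream, leaveOpen: true) for the same reason: BinaryReader disposes the stream it wraps by default, which would close the caller's XZ stream after the header, index, or footer is read. As an aside, the BCL's own leave-open overload expresses the same intent; the snippet below is an illustration only, not what the code above uses.

// BCL equivalent of the leave-open behaviour (illustration; SharpCompress uses its own wrapper here).
using (var reader = new BinaryReader(stream, System.Text.Encoding.UTF8, leaveOpen: true))
{
    var magic = reader.ReadBytes(6); // read header fields without taking ownership of 'stream'
}
// 'stream' remains open and positioned after the bytes read.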

View File

@@ -1,10 +1,31 @@
using System.IO;
using SharpCompress.Common;
using SharpCompress.IO;
namespace SharpCompress.Compressors.Xz;
public abstract class XZReadOnlyStream : ReadOnlyStream
public abstract class XZReadOnlyStream : ReadOnlyStream, IStreamStack
{
#if DEBUG_STREAMS
long IStreamStack.InstanceId { get; set; }
#endif
int IStreamStack.DefaultBufferSize { get; set; }
Stream IStreamStack.BaseStream() => base.BaseStream;
int IStreamStack.BufferSize
{
get => 0;
set { }
}
int IStreamStack.BufferPosition
{
get => 0;
set { }
}
void IStreamStack.SetPosition(long position) { }
public XZReadOnlyStream(Stream stream)
{
BaseStream = stream;
@@ -12,5 +33,16 @@ public abstract class XZReadOnlyStream : ReadOnlyStream
{
throw new InvalidFormatException("Must be able to read from stream");
}
#if DEBUG_STREAMS
this.DebugConstruct(typeof(XZReadOnlyStream));
#endif
}
protected override void Dispose(bool disposing)
{
#if DEBUG_STREAMS
this.DebugDispose(typeof(XZReadOnlyStream));
#endif
base.Dispose(disposing);
}
}

View File

@@ -3,12 +3,49 @@
using System;
using System.IO;
using SharpCompress.Common;
using SharpCompress.IO;
namespace SharpCompress.Compressors.Xz;
[CLSCompliant(false)]
public sealed class XZStream : XZReadOnlyStream
public sealed class XZStream : XZReadOnlyStream, IStreamStack
{
#if DEBUG_STREAMS
long IStreamStack.InstanceId { get; set; }
#endif
Stream IStreamStack.BaseStream() => _baseStream;
int IStreamStack.BufferSize
{
get => 0;
set { }
}
int IStreamStack.BufferPosition
{
get => 0;
set { }
}
void IStreamStack.SetPosition(long position) { }
public XZStream(Stream baseStream)
: base(baseStream)
{
_baseStream = baseStream;
#if DEBUG_STREAMS
this.DebugConstruct(typeof(XZStream));
#endif
}
protected override void Dispose(bool disposing)
{
#if DEBUG_STREAMS
this.DebugDispose(typeof(XZStream));
#endif
base.Dispose(disposing);
}
public static bool IsXZStream(Stream stream)
{
try
@@ -35,6 +72,7 @@ public sealed class XZStream : XZReadOnlyStream
}
}
private readonly Stream _baseStream;
public XZHeader Header { get; private set; }
public XZIndex Index { get; private set; }
public XZFooter Footer { get; private set; }
@@ -43,9 +81,6 @@ public sealed class XZStream : XZReadOnlyStream
private bool _endOfStream;
public XZStream(Stream stream)
: base(stream) { }
public override int Read(byte[] buffer, int offset, int count)
{
var bytesRead = 0;

View File

@@ -0,0 +1,311 @@
// Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
#if !NETCOREAPP3_0_OR_GREATER
using System.Runtime.CompilerServices;
using static SharpCompress.Compressors.ZStandard.UnsafeHelper;
// Some routines inspired by the Stanford Bit Twiddling Hacks by Sean Eron Anderson:
// http://graphics.stanford.edu/~seander/bithacks.html
namespace System.Numerics
{
/// <summary>
/// Utility methods for intrinsic bit-twiddling operations.
/// The methods use hardware intrinsics when available on the underlying platform,
/// otherwise they use optimized software fallbacks.
/// </summary>
public static unsafe class BitOperations
{
// hack: should be public because of inline
public static readonly byte* TrailingZeroCountDeBruijn = GetArrayPointer(
new byte[]
{
00,
01,
28,
02,
29,
14,
24,
03,
30,
22,
20,
15,
25,
17,
04,
08,
31,
27,
13,
23,
21,
19,
16,
07,
26,
12,
18,
06,
11,
05,
10,
09,
}
);
// hack: should be public because of inline
public static readonly byte* Log2DeBruijn = GetArrayPointer(
new byte[]
{
00,
09,
01,
10,
13,
21,
02,
29,
11,
14,
16,
18,
22,
25,
03,
30,
08,
12,
20,
28,
15,
17,
24,
07,
19,
27,
23,
06,
26,
05,
04,
31,
}
);
/// <summary>
/// Returns the integer (floor) log of the specified value, base 2.
/// Note that by convention, input value 0 returns 0 since log(0) is undefined.
/// </summary>
/// <param name="value">The value.</param>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public static int Log2(uint value)
{
// The 0->0 contract is fulfilled by setting the LSB to 1.
// Log(1) is 0, and setting the LSB for values > 1 does not change the log2 result.
value |= 1;
// value lzcnt actual expected
// ..0001 31 31-31 0
// ..0010 30 31-30 1
// 0010.. 2 31-2 29
// 0100.. 1 31-1 30
// 1000.. 0 31-0 31
// Fallback contract is 0->0
// No AggressiveInlining due to large method size
// Has conventional contract 0->0 (Log(0) is undefined)
// Fill trailing zeros with ones, eg 00010010 becomes 00011111
value |= value >> 01;
value |= value >> 02;
value |= value >> 04;
value |= value >> 08;
value |= value >> 16;
// uint.MaxValue >> 27 is always in range [0 - 31] so we use Unsafe.AddByteOffset to avoid bounds check
return Log2DeBruijn[
// Using deBruijn sequence, k=2, n=5 (2^5=32) : 0b_0000_0111_1100_0100_1010_1100_1101_1101u
(int)((value * 0x07C4ACDDu) >> 27)
];
}
/// <summary>
/// Returns the integer (floor) log of the specified value, base 2.
/// Note that by convention, input value 0 returns 0 since log(0) is undefined.
/// </summary>
/// <param name="value">The value.</param>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public static int Log2(ulong value)
{
value |= 1;
uint hi = (uint)(value >> 32);
if (hi == 0)
{
return Log2((uint)value);
}
return 32 + Log2(hi);
}
/// <summary>
/// Count the number of trailing zero bits in an integer value.
/// Similar in behavior to the x86 instruction TZCNT.
/// </summary>
/// <param name="value">The value.</param>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public static int TrailingZeroCount(int value) => TrailingZeroCount((uint)value);
/// <summary>
/// Count the number of trailing zero bits in an integer value.
/// Similar in behavior to the x86 instruction TZCNT.
/// </summary>
/// <param name="value">The value.</param>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public static int TrailingZeroCount(uint value)
{
// Unguarded fallback contract is 0->0, BSF contract is 0->undefined
if (value == 0)
{
return 32;
}
// uint.MaxValue >> 27 is always in range [0 - 31] so we use Unsafe.AddByteOffset to avoid bounds check
return TrailingZeroCountDeBruijn[
// Using deBruijn sequence, k=2, n=5 (2^5=32) : 0b_0000_0111_0111_1100_1011_0101_0011_0001u
(int)(((value & (uint)-(int)value) * 0x077CB531u) >> 27)
]; // Multi-cast mitigates redundant conv.u8
}
/// <summary>
/// Count the number of trailing zero bits in a mask.
/// Similar in behavior to the x86 instruction TZCNT.
/// </summary>
/// <param name="value">The value.</param>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public static int TrailingZeroCount(long value) => TrailingZeroCount((ulong)value);
/// <summary>
/// Count the number of trailing zero bits in a mask.
/// Similar in behavior to the x86 instruction TZCNT.
/// </summary>
/// <param name="value">The value.</param>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public static int TrailingZeroCount(ulong value)
{
uint lo = (uint)value;
if (lo == 0)
{
return 32 + TrailingZeroCount((uint)(value >> 32));
}
return TrailingZeroCount(lo);
}
/// <summary>
/// Rotates the specified value left by the specified number of bits.
/// Similar in behavior to the x86 instruction ROL.
/// </summary>
/// <param name="value">The value to rotate.</param>
/// <param name="offset">The number of bits to rotate by.
/// Any value outside the range [0..31] is treated as congruent mod 32.</param>
/// <returns>The rotated value.</returns>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public static uint RotateLeft(uint value, int offset) =>
(value << offset) | (value >> (32 - offset));
/// <summary>
/// Rotates the specified value left by the specified number of bits.
/// Similar in behavior to the x86 instruction ROL.
/// </summary>
/// <param name="value">The value to rotate.</param>
/// <param name="offset">The number of bits to rotate by.
/// Any value outside the range [0..63] is treated as congruent mod 64.</param>
/// <returns>The rotated value.</returns>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public static ulong RotateLeft(ulong value, int offset) =>
(value << offset) | (value >> (64 - offset));
/// <summary>
/// Rotates the specified value right by the specified number of bits.
/// Similar in behavior to the x86 instruction ROR.
/// </summary>
/// <param name="value">The value to rotate.</param>
/// <param name="offset">The number of bits to rotate by.
/// Any value outside the range [0..31] is treated as congruent mod 32.</param>
/// <returns>The rotated value.</returns>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public static uint RotateRight(uint value, int offset) =>
(value >> offset) | (value << (32 - offset));
/// <summary>
/// Rotates the specified value right by the specified number of bits.
/// Similar in behavior to the x86 instruction ROR.
/// </summary>
/// <param name="value">The value to rotate.</param>
/// <param name="offset">The number of bits to rotate by.
/// Any value outside the range [0..63] is treated as congruent mod 64.</param>
/// <returns>The rotated value.</returns>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public static ulong RotateRight(ulong value, int offset) =>
(value >> offset) | (value << (64 - offset));
/// <summary>
/// Count the number of leading zero bits in a mask.
/// Similar in behavior to the x86 instruction LZCNT.
/// </summary>
/// <param name="value">The value.</param>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public static int LeadingZeroCount(uint value)
{
// Unguarded fallback contract is 0->31, BSR contract is 0->undefined
if (value == 0)
{
return 32;
}
// No AggressiveInlining due to large method size
// Has conventional contract 0->0 (Log(0) is undefined)
// Fill trailing zeros with ones, eg 00010010 becomes 00011111
value |= value >> 01;
value |= value >> 02;
value |= value >> 04;
value |= value >> 08;
value |= value >> 16;
// uint.MaxValue >> 27 is always in range [0 - 31] so we use Unsafe.AddByteOffset to avoid bounds check
return 31
^ Log2DeBruijn[
// uint|long -> IntPtr cast on 32-bit platforms does expensive overflow checks not needed here
(int)((value * 0x07C4ACDDu) >> 27)
];
}
/// <summary>
/// Count the number of leading zero bits in a mask.
/// Similar in behavior to the x86 instruction LZCNT.
/// </summary>
/// <param name="value">The value.</param>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public static int LeadingZeroCount(ulong value)
{
uint hi = (uint)(value >> 32);
if (hi == 0)
{
return 32 + LeadingZeroCount((uint)value);
}
return LeadingZeroCount(hi);
}
}
}
#endif
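This polyfill is compiled only for targets older than .NET Core 3.0 and mirrors the BCL's BitOperations contract (Log2(0) == 0, TrailingZeroCount(0) == 32). A hedged spot check against a naive loop, useful while porting; the loop bound and exception type are arbitrary choices.

// Hypothetical verification of the software fallback's Log2 contract.
static int NaiveLog2(uint v)
{
    var r = 0;
    while ((v >>= 1) != 0)
    {
        r++;
    }
    return r;
}

for (uint v = 1; v <= 1_000_000; v++)
{
    if (System.Numerics.BitOperations.Log2(v) != NaiveLog2(v))
    {
        throw new System.InvalidOperationException($"Log2 mismatch at {v}");
    }
}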

View File

@@ -0,0 +1,301 @@
using System;
using System.Buffers;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.Compressors.ZStandard.Unsafe;
namespace SharpCompress.Compressors.ZStandard;
public class CompressionStream : Stream
{
private readonly Stream innerStream;
private readonly byte[] outputBuffer;
private readonly bool preserveCompressor;
private readonly bool leaveOpen;
private Compressor? compressor;
private ZSTD_outBuffer_s output;
public CompressionStream(
Stream stream,
int level = Compressor.DefaultCompressionLevel,
int bufferSize = 0,
bool leaveOpen = true
)
: this(stream, new Compressor(level), bufferSize, false, leaveOpen) { }
public CompressionStream(
Stream stream,
Compressor compressor,
int bufferSize = 0,
bool preserveCompressor = true,
bool leaveOpen = true
)
{
if (stream == null)
throw new ArgumentNullException(nameof(stream));
if (!stream.CanWrite)
throw new ArgumentException("Stream is not writable", nameof(stream));
if (bufferSize < 0)
throw new ArgumentOutOfRangeException(nameof(bufferSize));
innerStream = stream;
this.compressor = compressor;
this.preserveCompressor = preserveCompressor;
this.leaveOpen = leaveOpen;
var outputBufferSize =
bufferSize > 0
? bufferSize
: (int)Unsafe.Methods.ZSTD_CStreamOutSize().EnsureZstdSuccess();
outputBuffer = ArrayPool<byte>.Shared.Rent(outputBufferSize);
output = new ZSTD_outBuffer_s { pos = 0, size = (nuint)outputBufferSize };
}
public void SetParameter(ZSTD_cParameter parameter, int value)
{
EnsureNotDisposed();
compressor.NotNull().SetParameter(parameter, value);
}
public int GetParameter(ZSTD_cParameter parameter)
{
EnsureNotDisposed();
return compressor.NotNull().GetParameter(parameter);
}
public void LoadDictionary(byte[] dict)
{
EnsureNotDisposed();
compressor.NotNull().LoadDictionary(dict);
}
~CompressionStream() => Dispose(false);
#if !NETSTANDARD2_0 && !NETFRAMEWORK
public override async ValueTask DisposeAsync()
#else
public async Task DisposeAsync()
#endif
{
if (compressor == null)
return;
try
{
await FlushInternalAsync(ZSTD_EndDirective.ZSTD_e_end).ConfigureAwait(false);
}
finally
{
ReleaseUnmanagedResources();
GC.SuppressFinalize(this);
}
}
protected override void Dispose(bool disposing)
{
if (compressor == null)
return;
try
{
if (disposing)
FlushInternal(ZSTD_EndDirective.ZSTD_e_end);
}
finally
{
ReleaseUnmanagedResources();
}
}
private void ReleaseUnmanagedResources()
{
if (!preserveCompressor)
{
compressor.NotNull().Dispose();
}
compressor = null;
if (outputBuffer != null)
{
ArrayPool<byte>.Shared.Return(outputBuffer);
}
if (!leaveOpen)
{
innerStream.Dispose();
}
}
public override void Flush() => FlushInternal(ZSTD_EndDirective.ZSTD_e_flush);
public override async Task FlushAsync(CancellationToken cancellationToken) =>
await FlushInternalAsync(ZSTD_EndDirective.ZSTD_e_flush, cancellationToken)
.ConfigureAwait(false);
private void FlushInternal(ZSTD_EndDirective directive) => WriteInternal(null, directive);
private async Task FlushInternalAsync(
ZSTD_EndDirective directive,
CancellationToken cancellationToken = default
) => await WriteInternalAsync(null, directive, cancellationToken).ConfigureAwait(false);
public override void Write(byte[] buffer, int offset, int count) =>
Write(new ReadOnlySpan<byte>(buffer, offset, count));
#if !NETSTANDARD2_0 && !NETFRAMEWORK
public override void Write(ReadOnlySpan<byte> buffer) =>
WriteInternal(buffer, ZSTD_EndDirective.ZSTD_e_continue);
#else
public void Write(ReadOnlySpan<byte> buffer) =>
WriteInternal(buffer, ZSTD_EndDirective.ZSTD_e_continue);
#endif
private void WriteInternal(ReadOnlySpan<byte> buffer, ZSTD_EndDirective directive)
{
EnsureNotDisposed();
var input = new ZSTD_inBuffer_s
{
pos = 0,
size = buffer != null ? (nuint)buffer.Length : 0,
};
nuint remaining;
do
{
output.pos = 0;
remaining = CompressStream(ref input, buffer, directive);
var written = (int)output.pos;
if (written > 0)
innerStream.Write(outputBuffer, 0, written);
} while (
directive == ZSTD_EndDirective.ZSTD_e_continue ? input.pos < input.size : remaining > 0
);
}
#if !NETSTANDARD2_0 && !NETFRAMEWORK
private async ValueTask WriteInternalAsync(
ReadOnlyMemory<byte>? buffer,
ZSTD_EndDirective directive,
CancellationToken cancellationToken = default
)
#else
private async Task WriteInternalAsync(
ReadOnlyMemory<byte>? buffer,
ZSTD_EndDirective directive,
CancellationToken cancellationToken = default
)
#endif
{
EnsureNotDisposed();
var input = new ZSTD_inBuffer_s
{
pos = 0,
size = buffer.HasValue ? (nuint)buffer.Value.Length : 0,
};
nuint remaining;
do
{
output.pos = 0;
remaining = CompressStream(
ref input,
buffer.HasValue ? buffer.Value.Span : null,
directive
);
var written = (int)output.pos;
if (written > 0)
await innerStream
.WriteAsync(outputBuffer, 0, written, cancellationToken)
.ConfigureAwait(false);
} while (
directive == ZSTD_EndDirective.ZSTD_e_continue ? input.pos < input.size : remaining > 0
);
}
#if !NETSTANDARD2_0 && !NETFRAMEWORK
public override Task WriteAsync(
byte[] buffer,
int offset,
int count,
CancellationToken cancellationToken
) => WriteAsync(new ReadOnlyMemory<byte>(buffer, offset, count), cancellationToken).AsTask();
public override async ValueTask WriteAsync(
ReadOnlyMemory<byte> buffer,
CancellationToken cancellationToken = default
) =>
await WriteInternalAsync(buffer, ZSTD_EndDirective.ZSTD_e_continue, cancellationToken)
.ConfigureAwait(false);
#else
public override Task WriteAsync(
byte[] buffer,
int offset,
int count,
CancellationToken cancellationToken
) => WriteAsync(new ReadOnlyMemory<byte>(buffer, offset, count), cancellationToken);
public async Task WriteAsync(
ReadOnlyMemory<byte> buffer,
CancellationToken cancellationToken = default
) =>
await WriteInternalAsync(buffer, ZSTD_EndDirective.ZSTD_e_continue, cancellationToken)
.ConfigureAwait(false);
#endif
internal unsafe nuint CompressStream(
ref ZSTD_inBuffer_s input,
ReadOnlySpan<byte> inputBuffer,
ZSTD_EndDirective directive
)
{
fixed (byte* inputBufferPtr = inputBuffer)
fixed (byte* outputBufferPtr = outputBuffer)
{
input.src = inputBufferPtr;
output.dst = outputBufferPtr;
return compressor
.NotNull()
.CompressStream(ref input, ref output, directive)
.EnsureZstdSuccess();
}
}
public override bool CanRead => false;
public override bool CanSeek => false;
public override bool CanWrite => true;
public override long Length => throw new NotSupportedException();
public override long Position
{
get => throw new NotSupportedException();
set => throw new NotSupportedException();
}
public override long Seek(long offset, SeekOrigin origin) => throw new NotSupportedException();
public override void SetLength(long value) => throw new NotSupportedException();
public override int Read(byte[] buffer, int offset, int count) =>
throw new NotSupportedException();
private void EnsureNotDisposed()
{
if (compressor == null)
throw new ObjectDisposedException(nameof(CompressionStream));
}
public void SetPledgedSrcSize(ulong pledgedSrcSize)
{
EnsureNotDisposed();
compressor.NotNull().SetPledgedSrcSize(pledgedSrcSize);
}
}
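A hedged usage sketch for CompressionStream. It relies only on the constructor defaults shown above: leaveOpen defaults to true, and Dispose flushes the final frame via ZSTD_e_end. File paths and the compression level are placeholders.

// Hypothetical round trip: compress one file into another.
using var source = File.OpenRead("input.bin");        // placeholder path
using var destination = File.Create("input.bin.zst"); // placeholder path
using (var zstd = new CompressionStream(destination, level: 5))
{
    source.CopyTo(zstd);
} // Dispose flushes the last frame; 'destination' stays open because leaveOpen defaults to true.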

View File

@@ -0,0 +1,204 @@
using System;
using SharpCompress.Compressors.ZStandard.Unsafe;
namespace SharpCompress.Compressors.ZStandard;
public unsafe class Compressor : IDisposable
{
/// <summary>
/// Minimum negative compression level allowed
/// </summary>
public static int MinCompressionLevel => Unsafe.Methods.ZSTD_minCLevel();
/// <summary>
/// Maximum compression level available
/// </summary>
public static int MaxCompressionLevel => Unsafe.Methods.ZSTD_maxCLevel();
/// <summary>
/// Default compression level
/// </summary>
/// <see cref="Unsafe.Methods.ZSTD_defaultCLevel"/>
public const int DefaultCompressionLevel = 3;
private int level = DefaultCompressionLevel;
private readonly SafeCctxHandle handle;
public int Level
{
get => level;
set
{
if (level != value)
{
level = value;
SetParameter(ZSTD_cParameter.ZSTD_c_compressionLevel, value);
}
}
}
public void SetParameter(ZSTD_cParameter parameter, int value)
{
using var cctx = handle.Acquire();
Unsafe.Methods.ZSTD_CCtx_setParameter(cctx, parameter, value).EnsureZstdSuccess();
}
public int GetParameter(ZSTD_cParameter parameter)
{
using var cctx = handle.Acquire();
int value;
Unsafe.Methods.ZSTD_CCtx_getParameter(cctx, parameter, &value).EnsureZstdSuccess();
return value;
}
public void LoadDictionary(byte[] dict)
{
var dictReadOnlySpan = new ReadOnlySpan<byte>(dict);
LoadDictionary(dictReadOnlySpan);
}
public void LoadDictionary(ReadOnlySpan<byte> dict)
{
using var cctx = handle.Acquire();
fixed (byte* dictPtr = dict)
Unsafe
.Methods.ZSTD_CCtx_loadDictionary(cctx, dictPtr, (nuint)dict.Length)
.EnsureZstdSuccess();
}
public Compressor(int level = DefaultCompressionLevel)
{
handle = SafeCctxHandle.Create();
Level = level;
}
public static int GetCompressBound(int length) =>
(int)Unsafe.Methods.ZSTD_compressBound((nuint)length);
public static ulong GetCompressBoundLong(ulong length) =>
Unsafe.Methods.ZSTD_compressBound((nuint)length);
public Span<byte> Wrap(ReadOnlySpan<byte> src)
{
var dest = new byte[GetCompressBound(src.Length)];
var length = Wrap(src, dest);
return new Span<byte>(dest, 0, length);
}
public int Wrap(byte[] src, byte[] dest, int offset) =>
Wrap(src, new Span<byte>(dest, offset, dest.Length - offset));
public int Wrap(ReadOnlySpan<byte> src, Span<byte> dest)
{
fixed (byte* srcPtr = src)
fixed (byte* destPtr = dest)
{
using var cctx = handle.Acquire();
return (int)
Unsafe
.Methods.ZSTD_compress2(
cctx,
destPtr,
(nuint)dest.Length,
srcPtr,
(nuint)src.Length
)
.EnsureZstdSuccess();
}
}
public int Wrap(ArraySegment<byte> src, ArraySegment<byte> dest) =>
Wrap((ReadOnlySpan<byte>)src, dest);
public int Wrap(
byte[] src,
int srcOffset,
int srcLength,
byte[] dst,
int dstOffset,
int dstLength
) =>
Wrap(
new ReadOnlySpan<byte>(src, srcOffset, srcLength),
new Span<byte>(dst, dstOffset, dstLength)
);
public bool TryWrap(byte[] src, byte[] dest, int offset, out int written) =>
TryWrap(src, new Span<byte>(dest, offset, dest.Length - offset), out written);
public bool TryWrap(ReadOnlySpan<byte> src, Span<byte> dest, out int written)
{
fixed (byte* srcPtr = src)
fixed (byte* destPtr = dest)
{
nuint returnValue;
using (var cctx = handle.Acquire())
{
returnValue = Unsafe.Methods.ZSTD_compress2(
cctx,
destPtr,
(nuint)dest.Length,
srcPtr,
(nuint)src.Length
);
}
if (returnValue == unchecked(0 - (nuint)ZSTD_ErrorCode.ZSTD_error_dstSize_tooSmall))
{
written = default;
return false;
}
returnValue.EnsureZstdSuccess();
written = (int)returnValue;
return true;
}
}
public bool TryWrap(ArraySegment<byte> src, ArraySegment<byte> dest, out int written) =>
TryWrap((ReadOnlySpan<byte>)src, dest, out written);
public bool TryWrap(
byte[] src,
int srcOffset,
int srcLength,
byte[] dst,
int dstOffset,
int dstLength,
out int written
) =>
TryWrap(
new ReadOnlySpan<byte>(src, srcOffset, srcLength),
new Span<byte>(dst, dstOffset, dstLength),
out written
);
public void Dispose()
{
handle.Dispose();
GC.SuppressFinalize(this);
}
internal nuint CompressStream(
ref ZSTD_inBuffer_s input,
ref ZSTD_outBuffer_s output,
ZSTD_EndDirective directive
)
{
fixed (ZSTD_inBuffer_s* inputPtr = &input)
fixed (ZSTD_outBuffer_s* outputPtr = &output)
{
using var cctx = handle.Acquire();
return Unsafe
.Methods.ZSTD_compressStream2(cctx, outputPtr, inputPtr, directive)
.EnsureZstdSuccess();
}
}
public void SetPledgedSrcSize(ulong pledgedSrcSize)
{
using var cctx = handle.Acquire();
Unsafe.Methods.ZSTD_CCtx_setPledgedSrcSize(cctx, pledgedSrcSize).EnsureZstdSuccess();
}
}
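A hedged one-shot sketch for the buffer-oriented Compressor API above; the payload and the level are placeholders. GetCompressBound gives the worst-case destination size, and Wrap returns the number of bytes actually written.

// Hypothetical one-shot compression of an in-memory buffer.
var data = System.Text.Encoding.UTF8.GetBytes("hello zstd");  // placeholder payload
using var compressor = new Compressor(level: 19);
var dest = new byte[Compressor.GetCompressBound(data.Length)]; // worst-case output size
var written = compressor.Wrap(data.AsSpan(), dest.AsSpan());   // compressed length
var compressed = new ReadOnlySpan<byte>(dest, 0, written);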

View File

@@ -0,0 +1,8 @@
namespace SharpCompress.Compressors.ZStandard;
internal class Constants
{
//NOTE: https://docs.microsoft.com/en-us/dotnet/framework/configure-apps/file-schema/runtime/gcallowverylargeobjects-element#remarks
//NOTE: https://github.com/dotnet/runtime/blob/v5.0.0-rtm.20519.4/src/libraries/System.Private.CoreLib/src/System/Array.cs#L27
public const ulong MaxByteArrayLength = 0x7FFFFFC7;
}

View File

@@ -0,0 +1,293 @@
using System;
using System.Buffers;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.Compressors.ZStandard.Unsafe;
namespace SharpCompress.Compressors.ZStandard;
public class DecompressionStream : Stream
{
private readonly Stream innerStream;
private readonly byte[] inputBuffer;
private readonly int inputBufferSize;
private readonly bool preserveDecompressor;
private readonly bool leaveOpen;
private readonly bool checkEndOfStream;
private Decompressor? decompressor;
private ZSTD_inBuffer_s input;
private nuint lastDecompressResult = 0;
private bool contextDrained = true;
public DecompressionStream(
Stream stream,
int bufferSize = 0,
bool checkEndOfStream = true,
bool leaveOpen = true
)
: this(stream, new Decompressor(), bufferSize, checkEndOfStream, false, leaveOpen) { }
public DecompressionStream(
Stream stream,
Decompressor decompressor,
int bufferSize = 0,
bool checkEndOfStream = true,
bool preserveDecompressor = true,
bool leaveOpen = true
)
{
if (stream == null)
throw new ArgumentNullException(nameof(stream));
if (!stream.CanRead)
throw new ArgumentException("Stream is not readable", nameof(stream));
if (bufferSize < 0)
throw new ArgumentOutOfRangeException(nameof(bufferSize));
innerStream = stream;
this.decompressor = decompressor;
this.preserveDecompressor = preserveDecompressor;
this.leaveOpen = leaveOpen;
this.checkEndOfStream = checkEndOfStream;
inputBufferSize =
bufferSize > 0
? bufferSize
: (int)Unsafe.Methods.ZSTD_DStreamInSize().EnsureZstdSuccess();
inputBuffer = ArrayPool<byte>.Shared.Rent(inputBufferSize);
input = new ZSTD_inBuffer_s { pos = (nuint)inputBufferSize, size = (nuint)inputBufferSize };
}
public void SetParameter(ZSTD_dParameter parameter, int value)
{
EnsureNotDisposed();
decompressor.NotNull().SetParameter(parameter, value);
}
public int GetParameter(ZSTD_dParameter parameter)
{
EnsureNotDisposed();
return decompressor.NotNull().GetParameter(parameter);
}
public void LoadDictionary(byte[] dict)
{
EnsureNotDisposed();
decompressor.NotNull().LoadDictionary(dict);
}
~DecompressionStream() => Dispose(false);
protected override void Dispose(bool disposing)
{
if (decompressor == null)
return;
if (!preserveDecompressor)
{
decompressor.Dispose();
}
decompressor = null;
if (inputBuffer != null)
{
ArrayPool<byte>.Shared.Return(inputBuffer);
}
if (!leaveOpen)
{
innerStream.Dispose();
}
}
public override int Read(byte[] buffer, int offset, int count) =>
Read(new Span<byte>(buffer, offset, count));
#if !NETSTANDARD2_0 && !NETFRAMEWORK
public override int Read(Span<byte> buffer)
#else
public int Read(Span<byte> buffer)
#endif
{
EnsureNotDisposed();
// Guard against infinite loop (output.pos would never become non-zero)
if (buffer.Length == 0)
{
return 0;
}
var output = new ZSTD_outBuffer_s { pos = 0, size = (nuint)buffer.Length };
while (true)
{
// If there is still input available, or there might be data buffered in the decompressor context, flush that out
while (input.pos < input.size || !contextDrained)
{
nuint oldInputPos = input.pos;
nuint result = DecompressStream(ref output, buffer);
if (output.pos > 0 || oldInputPos != input.pos)
{
// Keep the result from the last decompress call that made progress, so we know whether we're at the end of a frame
lastDecompressResult = result;
}
// If decompression filled the output buffer, there might still be data buffered in the decompressor context
contextDrained = output.pos < output.size;
// If we have data to return, return it immediately, so we won't stall on Read
if (output.pos > 0)
{
return (int)output.pos;
}
}
// Otherwise, read some more input
int bytesRead;
if ((bytesRead = innerStream.Read(inputBuffer, 0, inputBufferSize)) == 0)
{
if (checkEndOfStream && lastDecompressResult != 0)
{
throw new EndOfStreamException("Premature end of stream");
}
return 0;
}
input.size = (nuint)bytesRead;
input.pos = 0;
}
}
#if !NETSTANDARD2_0 && !NETFRAMEWORK
public override Task<int> ReadAsync(
byte[] buffer,
int offset,
int count,
CancellationToken cancellationToken
) => ReadAsync(new Memory<byte>(buffer, offset, count), cancellationToken).AsTask();
public override async ValueTask<int> ReadAsync(
Memory<byte> buffer,
CancellationToken cancellationToken = default
)
#else
public override Task<int> ReadAsync(
byte[] buffer,
int offset,
int count,
CancellationToken cancellationToken
) => ReadAsync(new Memory<byte>(buffer, offset, count), cancellationToken);
public async Task<int> ReadAsync(
Memory<byte> buffer,
CancellationToken cancellationToken = default
)
#endif
{
EnsureNotDisposed();
// Guard against infinite loop (output.pos would never become non-zero)
if (buffer.Length == 0)
{
return 0;
}
var output = new ZSTD_outBuffer_s { pos = 0, size = (nuint)buffer.Length };
while (true)
{
// If there is still input available, or there might be data buffered in the decompressor context, flush that out
while (input.pos < input.size || !contextDrained)
{
nuint oldInputPos = input.pos;
nuint result = DecompressStream(ref output, buffer.Span);
if (output.pos > 0 || oldInputPos != input.pos)
{
// Keep the result from the last decompress call that made progress, so we know whether we're at the end of a frame
lastDecompressResult = result;
}
// If decompression filled the output buffer, there might still be data buffered in the decompressor context
contextDrained = output.pos < output.size;
// If we have data to return, return it immediately, so we won't stall on Read
if (output.pos > 0)
{
return (int)output.pos;
}
}
// Otherwise, read some more input
int bytesRead;
if (
(
bytesRead = await innerStream
.ReadAsync(inputBuffer, 0, inputBufferSize, cancellationToken)
.ConfigureAwait(false)
) == 0
)
{
if (checkEndOfStream && lastDecompressResult != 0)
{
throw new EndOfStreamException("Premature end of stream");
}
return 0;
}
input.size = (nuint)bytesRead;
input.pos = 0;
}
}
private unsafe nuint DecompressStream(ref ZSTD_outBuffer_s output, Span<byte> outputBuffer)
{
fixed (byte* inputBufferPtr = inputBuffer)
fixed (byte* outputBufferPtr = outputBuffer)
{
input.src = inputBufferPtr;
output.dst = outputBufferPtr;
return decompressor.NotNull().DecompressStream(ref input, ref output);
}
}
public override bool CanRead => true;
public override bool CanSeek => false;
public override bool CanWrite => false;
public override long Length => throw new NotSupportedException();
public override long Position
{
get => throw new NotSupportedException();
set => throw new NotSupportedException();
}
public override void Flush() => throw new NotSupportedException();
public override long Seek(long offset, SeekOrigin origin) => throw new NotSupportedException();
public override void SetLength(long value) => throw new NotSupportedException();
public override void Write(byte[] buffer, int offset, int count) =>
throw new NotSupportedException();
private void EnsureNotDisposed()
{
if (decompressor == null)
throw new ObjectDisposedException(nameof(DecompressionStream));
}
#if NETSTANDARD2_0 || NETFRAMEWORK
public virtual Task DisposeAsync()
{
try
{
Dispose();
return Task.CompletedTask;
}
catch (Exception exc)
{
return Task.FromException(exc);
}
}
#endif
}
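A hedged usage sketch for DecompressionStream; paths are placeholders. As with the compression side, leaveOpen defaults to true, so the source stream is disposed by the caller rather than by the wrapper.

// Hypothetical streaming decompression of a zstd file.
using var source = File.OpenRead("input.bin.zst"); // placeholder path
using var output = File.Create("input.bin");       // placeholder path
using (var zstd = new DecompressionStream(source))
{
    zstd.CopyTo(output);
} // 'source' stays open here because leaveOpen defaults to true.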

View File

@@ -0,0 +1,176 @@
using System;
using SharpCompress.Compressors.ZStandard.Unsafe;
namespace SharpCompress.Compressors.ZStandard;
public unsafe class Decompressor : IDisposable
{
private readonly SafeDctxHandle handle;
public Decompressor()
{
handle = SafeDctxHandle.Create();
}
public void SetParameter(ZSTD_dParameter parameter, int value)
{
using var dctx = handle.Acquire();
Unsafe.Methods.ZSTD_DCtx_setParameter(dctx, parameter, value).EnsureZstdSuccess();
}
public int GetParameter(ZSTD_dParameter parameter)
{
using var dctx = handle.Acquire();
int value;
Unsafe.Methods.ZSTD_DCtx_getParameter(dctx, parameter, &value).EnsureZstdSuccess();
return value;
}
public void LoadDictionary(byte[] dict)
{
var dictReadOnlySpan = new ReadOnlySpan<byte>(dict);
this.LoadDictionary(dictReadOnlySpan);
}
public void LoadDictionary(ReadOnlySpan<byte> dict)
{
using var dctx = handle.Acquire();
fixed (byte* dictPtr = dict)
Unsafe
.Methods.ZSTD_DCtx_loadDictionary(dctx, dictPtr, (nuint)dict.Length)
.EnsureZstdSuccess();
}
public static ulong GetDecompressedSize(ReadOnlySpan<byte> src)
{
fixed (byte* srcPtr = src)
return Unsafe
.Methods.ZSTD_decompressBound(srcPtr, (nuint)src.Length)
.EnsureContentSizeOk();
}
public static ulong GetDecompressedSize(ArraySegment<byte> src) =>
GetDecompressedSize((ReadOnlySpan<byte>)src);
public static ulong GetDecompressedSize(byte[] src, int srcOffset, int srcLength) =>
GetDecompressedSize(new ReadOnlySpan<byte>(src, srcOffset, srcLength));
public Span<byte> Unwrap(ReadOnlySpan<byte> src, int maxDecompressedSize = int.MaxValue)
{
var expectedDstSize = GetDecompressedSize(src);
if (expectedDstSize > (ulong)maxDecompressedSize)
throw new ZstdException(
ZSTD_ErrorCode.ZSTD_error_dstSize_tooSmall,
$"Decompressed content size {expectedDstSize} is greater than {nameof(maxDecompressedSize)} {maxDecompressedSize}"
);
if (expectedDstSize > Constants.MaxByteArrayLength)
throw new ZstdException(
ZSTD_ErrorCode.ZSTD_error_dstSize_tooSmall,
$"Decompressed content size {expectedDstSize} is greater than max possible byte array size {Constants.MaxByteArrayLength}"
);
var dest = new byte[expectedDstSize];
var length = Unwrap(src, dest);
return new Span<byte>(dest, 0, length);
}
public int Unwrap(byte[] src, byte[] dest, int offset) =>
Unwrap(src, new Span<byte>(dest, offset, dest.Length - offset));
public int Unwrap(ReadOnlySpan<byte> src, Span<byte> dest)
{
fixed (byte* srcPtr = src)
fixed (byte* destPtr = dest)
{
using var dctx = handle.Acquire();
return (int)
Unsafe
.Methods.ZSTD_decompressDCtx(
dctx,
destPtr,
(nuint)dest.Length,
srcPtr,
(nuint)src.Length
)
.EnsureZstdSuccess();
}
}
public int Unwrap(
byte[] src,
int srcOffset,
int srcLength,
byte[] dst,
int dstOffset,
int dstLength
) =>
Unwrap(
new ReadOnlySpan<byte>(src, srcOffset, srcLength),
new Span<byte>(dst, dstOffset, dstLength)
);
public bool TryUnwrap(byte[] src, byte[] dest, int offset, out int written) =>
TryUnwrap(src, new Span<byte>(dest, offset, dest.Length - offset), out written);
public bool TryUnwrap(ReadOnlySpan<byte> src, Span<byte> dest, out int written)
{
fixed (byte* srcPtr = src)
fixed (byte* destPtr = dest)
{
nuint returnValue;
using (var dctx = handle.Acquire())
{
returnValue = Unsafe.Methods.ZSTD_decompressDCtx(
dctx,
destPtr,
(nuint)dest.Length,
srcPtr,
(nuint)src.Length
);
}
if (returnValue == unchecked(0 - (nuint)ZSTD_ErrorCode.ZSTD_error_dstSize_tooSmall))
{
written = default;
return false;
}
returnValue.EnsureZstdSuccess();
written = (int)returnValue;
return true;
}
}
public bool TryUnwrap(
byte[] src,
int srcOffset,
int srcLength,
byte[] dst,
int dstOffset,
int dstLength,
out int written
) =>
TryUnwrap(
new ReadOnlySpan<byte>(src, srcOffset, srcLength),
new Span<byte>(dst, dstOffset, dstLength),
out written
);
public void Dispose()
{
handle.Dispose();
GC.SuppressFinalize(this);
}
internal nuint DecompressStream(ref ZSTD_inBuffer_s input, ref ZSTD_outBuffer_s output)
{
fixed (ZSTD_inBuffer_s* inputPtr = &input)
fixed (ZSTD_outBuffer_s* outputPtr = &output)
{
using var dctx = handle.Acquire();
return Unsafe
.Methods.ZSTD_decompressStream(dctx, outputPtr, inputPtr)
.EnsureZstdSuccess();
}
}
}
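A hedged one-shot sketch for Decompressor. Unwrap sizes its output from the frame header (via GetDecompressedSize), so it only works on frames that record a content size; the input path is a placeholder.

// Hypothetical one-shot decompression of a complete frame held in memory.
byte[] compressed = File.ReadAllBytes("input.bin.zst");                  // placeholder path
using var decompressor = new Decompressor();
var expected = Decompressor.GetDecompressedSize(compressed.AsSpan());    // read from the frame header
Span<byte> plain = decompressor.Unwrap(compressed.AsSpan());             // allocates and fills the output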

View File

@@ -0,0 +1,141 @@
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;
namespace SharpCompress.Compressors.ZStandard;
internal unsafe class JobThreadPool : IDisposable
{
private int numThreads;
private readonly List<JobThread> threads;
private readonly BlockingCollection<Job> queue;
private struct Job
{
public void* function;
public void* opaque;
}
private class JobThread
{
private Thread Thread { get; }
public CancellationTokenSource CancellationTokenSource { get; }
public JobThread(Thread thread)
{
CancellationTokenSource = new CancellationTokenSource();
Thread = thread;
}
public void Start()
{
Thread.Start(this);
}
public void Cancel()
{
CancellationTokenSource.Cancel();
}
public void Join()
{
Thread.Join();
}
}
private void Worker(object? obj)
{
if (obj is not JobThread poolThread)
return;
var cancellationToken = poolThread.CancellationTokenSource.Token;
while (!queue.IsCompleted && !cancellationToken.IsCancellationRequested)
{
try
{
if (queue.TryTake(out var job, -1, cancellationToken))
((delegate* managed<void*, void>)job.function)(job.opaque);
}
catch (InvalidOperationException) { }
catch (OperationCanceledException) { }
}
}
public JobThreadPool(int num, int queueSize)
{
numThreads = num;
queue = new BlockingCollection<Job>(queueSize + 1);
threads = new List<JobThread>(num);
for (var i = 0; i < numThreads; i++)
CreateThread();
}
private void CreateThread()
{
var poolThread = new JobThread(new Thread(Worker));
threads.Add(poolThread);
poolThread.Start();
}
public void Resize(int num)
{
lock (threads)
{
if (num < numThreads)
{
for (var i = numThreads - 1; i >= num; i--)
{
threads[i].Cancel();
threads.RemoveAt(i);
}
}
else
{
for (var i = numThreads; i < num; i++)
CreateThread();
}
}
numThreads = num;
}
public void Add(void* function, void* opaque)
{
queue.Add(new Job { function = function, opaque = opaque });
}
public bool TryAdd(void* function, void* opaque)
{
return queue.TryAdd(new Job { function = function, opaque = opaque });
}
public void Join(bool cancel = true)
{
queue.CompleteAdding();
List<JobThread> jobThreads;
lock (threads)
jobThreads = new List<JobThread>(threads);
if (cancel)
{
foreach (var thread in jobThreads)
thread.Cancel();
}
foreach (var thread in jobThreads)
thread.Join();
}
public void Dispose()
{
queue.Dispose();
}
public int Size()
{
// todo not implemented
// https://github.com/dotnet/runtime/issues/24200
return 0;
}
}
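The pool above consumes jobs as raw managed function pointers (void(void*) plus an opaque state pointer). A hedged sketch of how a job might be queued, assuming an unsafe context and C# 9 function pointers; the method name, pool sizes, and null state are placeholders.

// Hypothetical job submission to JobThreadPool (unsafe context required).
internal static unsafe class PoolExample
{
    private static void DoWork(void* opaque)
    {
        // placeholder job body; 'opaque' carries caller-supplied state
    }

    public static void Run()
    {
        using var pool = new JobThreadPool(num: 4, queueSize: 16);
        delegate*<void*, void> job = &DoWork;
        pool.Add(job, null);   // function pointer converts to void* for the queue entry
        pool.Join();           // completes the queue, cancels the workers, and waits for them
    }
}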

View File

@@ -0,0 +1,163 @@
using System;
using System.Runtime.InteropServices;
using SharpCompress.Compressors.ZStandard.Unsafe;
namespace SharpCompress.Compressors.ZStandard;
/// <summary>
/// Provides the base class for ZstdSharp <see cref="SafeHandle"/> implementations.
/// </summary>
/// <remarks>
/// Even though ZstdSharp is a managed library, its internals are using unmanaged
/// memory and we are using safe handles in the library's high-level API to ensure
/// proper disposal of unmanaged resources and increase safety.
/// </remarks>
/// <seealso cref="SafeCctxHandle"/>
/// <seealso cref="SafeDctxHandle"/>
internal abstract unsafe class SafeZstdHandle : SafeHandle
{
/// <summary>
/// Parameterless constructor is hidden. Use the static <c>Create</c> factory
/// method to create a new safe handle instance.
/// </summary>
protected SafeZstdHandle()
: base(IntPtr.Zero, true) { }
public sealed override bool IsInvalid => handle == IntPtr.Zero;
}
/// <summary>
/// Safely wraps an unmanaged Zstd compression context.
/// </summary>
internal sealed unsafe class SafeCctxHandle : SafeZstdHandle
{
/// <inheritdoc/>
private SafeCctxHandle() { }
/// <summary>
/// Creates a new instance of <see cref="SafeCctxHandle"/>.
/// </summary>
/// <returns></returns>
/// <exception cref="ZstdException">Creation failed.</exception>
public static SafeCctxHandle Create()
{
var safeHandle = new SafeCctxHandle();
bool success = false;
try
{
var cctx = Unsafe.Methods.ZSTD_createCCtx();
if (cctx == null)
throw new ZstdException(ZSTD_ErrorCode.ZSTD_error_GENERIC, "Failed to create cctx");
safeHandle.SetHandle((IntPtr)cctx);
success = true;
}
finally
{
if (!success)
{
safeHandle.SetHandleAsInvalid();
}
}
return safeHandle;
}
/// <summary>
/// Acquires a reference to the safe handle.
/// </summary>
/// <returns>
/// A <see cref="SafeHandleHolder{T}"/> instance that can be implicitly converted to a pointer
/// to <see cref="ZSTD_CCtx_s"/>.
/// </returns>
public SafeHandleHolder<ZSTD_CCtx_s> Acquire() => new(this);
protected override bool ReleaseHandle()
{
return Unsafe.Methods.ZSTD_freeCCtx((ZSTD_CCtx_s*)handle) == 0;
}
}
/// <summary>
/// Safely wraps an unmanaged Zstd compression context.
/// </summary>
internal sealed unsafe class SafeDctxHandle : SafeZstdHandle
{
/// <inheritdoc/>
private SafeDctxHandle() { }
/// <summary>
/// Creates a new instance of <see cref="SafeDctxHandle"/>.
/// </summary>
/// <returns></returns>
/// <exception cref="ZstdException">Creation failed.</exception>
public static SafeDctxHandle Create()
{
var safeHandle = new SafeDctxHandle();
bool success = false;
try
{
var dctx = Unsafe.Methods.ZSTD_createDCtx();
if (dctx == null)
throw new ZstdException(ZSTD_ErrorCode.ZSTD_error_GENERIC, "Failed to create dctx");
safeHandle.SetHandle((IntPtr)dctx);
success = true;
}
finally
{
if (!success)
{
safeHandle.SetHandleAsInvalid();
}
}
return safeHandle;
}
/// <summary>
/// Acquires a reference to the safe handle.
/// </summary>
/// <returns>
/// A <see cref="SafeHandleHolder{T}"/> instance that can be implicitly converted to a pointer
/// to <see cref="ZSTD_DCtx_s"/>.
/// </returns>
public SafeHandleHolder<ZSTD_DCtx_s> Acquire() => new(this);
protected override bool ReleaseHandle()
{
return Unsafe.Methods.ZSTD_freeDCtx((ZSTD_DCtx_s*)handle) == 0;
}
}
/// <summary>
/// Provides a convenient interface to safely acquire pointers of a specific type
/// from a <see cref="SafeHandle"/>, by utilizing <see langword="using"/> blocks.
/// </summary>
/// <typeparam name="T">The type of pointers to return.</typeparam>
/// <remarks>
/// Safe handle holders can be <see cref="Dispose"/>d to decrement the safe handle's
/// reference count, and can be implicitly converted to pointers to <see cref="T"/>.
/// </remarks>
internal unsafe ref struct SafeHandleHolder<T>
where T : unmanaged
{
private readonly SafeHandle _handle;
private bool _refAdded;
public SafeHandleHolder(SafeHandle safeHandle)
{
_handle = safeHandle;
_refAdded = false;
safeHandle.DangerousAddRef(ref _refAdded);
}
public static implicit operator T*(SafeHandleHolder<T> holder) =>
(T*)holder._handle.DangerousGetHandle();
public void Dispose()
{
if (_refAdded)
{
_handle.DangerousRelease();
_refAdded = false;
}
}
}
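For clarity, the call pattern the holder enables, taken from Compressor.SetParameter above: the using statement brackets the DangerousAddRef/DangerousRelease pair around the native call, and the implicit operator hands the raw context pointer to the interop method.

using (var cctx = handle.Acquire())
{
    // 'cctx' converts implicitly to ZSTD_CCtx_s* while the handle is kept alive.
    Unsafe.Methods.ZSTD_CCtx_setParameter(cctx, parameter, value).EnsureZstdSuccess();
}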

View File

@@ -0,0 +1,22 @@
using System.Threading;
namespace SharpCompress.Compressors.ZStandard;
internal static unsafe class SynchronizationWrapper
{
private static object UnwrapObject(void** obj) => UnmanagedObject.Unwrap<object>(*obj);
public static void Init(void** obj) => *obj = UnmanagedObject.Wrap(new object());
public static void Free(void** obj) => UnmanagedObject.Free(*obj);
public static void Enter(void** obj) => Monitor.Enter(UnwrapObject(obj));
public static void Exit(void** obj) => Monitor.Exit(UnwrapObject(obj));
public static void Pulse(void** obj) => Monitor.Pulse(UnwrapObject(obj));
public static void PulseAll(void** obj) => Monitor.PulseAll(UnwrapObject(obj));
public static void Wait(void** mutex) => Monitor.Wait(UnwrapObject(mutex));
}

View File

@@ -0,0 +1,48 @@
using SharpCompress.Compressors.ZStandard.Unsafe;
namespace SharpCompress.Compressors.ZStandard;
public static unsafe class ThrowHelper
{
private const ulong ZSTD_CONTENTSIZE_UNKNOWN = unchecked(0UL - 1);
private const ulong ZSTD_CONTENTSIZE_ERROR = unchecked(0UL - 2);
public static nuint EnsureZstdSuccess(this nuint returnValue)
{
if (Unsafe.Methods.ZSTD_isError(returnValue))
ThrowException(returnValue, Unsafe.Methods.ZSTD_getErrorName(returnValue));
return returnValue;
}
public static nuint EnsureZdictSuccess(this nuint returnValue)
{
if (Unsafe.Methods.ZDICT_isError(returnValue))
ThrowException(returnValue, Unsafe.Methods.ZDICT_getErrorName(returnValue));
return returnValue;
}
public static ulong EnsureContentSizeOk(this ulong returnValue)
{
if (returnValue == ZSTD_CONTENTSIZE_UNKNOWN)
throw new ZstdException(
ZSTD_ErrorCode.ZSTD_error_GENERIC,
"Decompressed content size is not specified"
);
if (returnValue == ZSTD_CONTENTSIZE_ERROR)
throw new ZstdException(
ZSTD_ErrorCode.ZSTD_error_GENERIC,
"Decompressed content size cannot be determined (e.g. invalid magic number, srcSize too small)"
);
return returnValue;
}
private static void ThrowException(nuint returnValue, string message)
{
var code = 0 - returnValue;
throw new ZstdException((ZSTD_ErrorCode)code, message);
}
}

View File

@@ -0,0 +1,18 @@
using System;
using System.Runtime.InteropServices;
namespace SharpCompress.Compressors.ZStandard;
/*
* Wrap object to void* to make it unmanaged
*/
internal static unsafe class UnmanagedObject
{
public static void* Wrap(object obj) => (void*)GCHandle.ToIntPtr(GCHandle.Alloc(obj));
private static GCHandle UnwrapGcHandle(void* value) => GCHandle.FromIntPtr((IntPtr)value);
public static T Unwrap<T>(void* value) => (T)UnwrapGcHandle(value).Target!;
public static void Free(void* value) => UnwrapGcHandle(value).Free();
}
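A hedged sketch of the round trip this helper performs, inside an unsafe context: a managed object is pinned behind a GCHandle, passed around as a void*, and recovered later. The variable names are placeholders.

// Hypothetical GCHandle round trip (unsafe context required).
var gate = new object();
void* wrapped = UnmanagedObject.Wrap(gate);          // GCHandle.Alloc + ToIntPtr, exposed as void*
var same = UnmanagedObject.Unwrap<object>(wrapped);  // ReferenceEquals(gate, same) is true
UnmanagedObject.Free(wrapped);                       // releases the GCHandle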

View File

@@ -0,0 +1,52 @@
using System.Runtime.CompilerServices;
using static SharpCompress.Compressors.ZStandard.UnsafeHelper;
namespace SharpCompress.Compressors.ZStandard.Unsafe;
public static unsafe partial class Methods
{
/* custom memory allocation functions */
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private static void* ZSTD_customMalloc(nuint size, ZSTD_customMem customMem)
{
if (customMem.customAlloc != null)
return ((delegate* managed<void*, nuint, void*>)customMem.customAlloc)(
customMem.opaque,
size
);
return malloc(size);
}
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private static void* ZSTD_customCalloc(nuint size, ZSTD_customMem customMem)
{
if (customMem.customAlloc != null)
{
/* calloc implemented as malloc+memset;
* not as efficient as calloc, but next best guess for custom malloc */
void* ptr = ((delegate* managed<void*, nuint, void*>)customMem.customAlloc)(
customMem.opaque,
size
);
memset(ptr, 0, (uint)size);
return ptr;
}
return calloc(1, size);
}
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private static void ZSTD_customFree(void* ptr, ZSTD_customMem customMem)
{
if (ptr != null)
{
if (customMem.customFree != null)
((delegate* managed<void*, void*, void>)customMem.customFree)(
customMem.opaque,
ptr
);
else
free(ptr);
}
}
}

View File

@@ -0,0 +1,14 @@
namespace SharpCompress.Compressors.ZStandard.Unsafe;
/* bitStream can mix input from multiple sources.
* A critical property of these streams is that they encode and decode in **reverse** direction.
* So the first bit sequence you add will be the last to be read, like a LIFO stack.
*/
public unsafe struct BIT_CStream_t
{
public nuint bitContainer;
public uint bitPos;
public sbyte* startPtr;
public sbyte* ptr;
public sbyte* endPtr;
}
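
A standalone toy of that reverse/LIFO property on a single 64-bit container (not the library's buffered implementation): values are appended at increasing bit positions while encoding, and the decoder consumes the highest unconsumed bits first, so the first field written is the last one read.

using System;

// Standalone toy: the reverse/LIFO bitstream property described above,
// demonstrated on a single 64-bit container.
class ReverseBitstreamToy
{
    static void Main()
    {
        ulong container = 0;
        int bitPos = 0;

        // Encode: 5 bits of value 21, then 3 bits of value 6 (like BIT_addBits).
        container |= (21UL & ((1UL << 5) - 1)) << bitPos; bitPos += 5;
        container |= (6UL & ((1UL << 3) - 1)) << bitPos;  bitPos += 3;

        // Decode (like BIT_lookBits/BIT_readBits): shift out the unused high
        // bits, then take the top nbBits of what remains.
        ulong shifted = container << (64 - bitPos);

        ulong second = shifted >> (64 - 3);     // reads 6 first
        shifted <<= 3;
        ulong first = shifted >> (64 - 5);      // then 21

        Console.WriteLine($"{second} {first}"); // prints "6 21"
    }
}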

View File

@@ -0,0 +1,16 @@
namespace SharpCompress.Compressors.ZStandard.Unsafe;
public enum BIT_DStream_status
{
/* fully refilled */
BIT_DStream_unfinished = 0,
/* still some bits left in bitstream */
BIT_DStream_endOfBuffer = 1,
/* bitstream entirely consumed, bit-exact */
BIT_DStream_completed = 2,
/* user requested more bits than present in bitstream */
BIT_DStream_overflow = 3,
}

View File

@@ -0,0 +1,13 @@
namespace SharpCompress.Compressors.ZStandard.Unsafe;
/*-********************************************
* bitStream decoding API (read backward)
**********************************************/
public unsafe struct BIT_DStream_t
{
public nuint bitContainer;
public uint bitsConsumed;
public sbyte* ptr;
public sbyte* start;
public sbyte* limitPtr;
}

View File

@@ -0,0 +1,60 @@
using System;
using System.Numerics;
using System.Runtime.CompilerServices;
using static SharpCompress.Compressors.ZStandard.UnsafeHelper;
namespace SharpCompress.Compressors.ZStandard.Unsafe;
public static unsafe partial class Methods
{
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private static uint ZSTD_countTrailingZeros32(uint val)
{
assert(val != 0);
return (uint)BitOperations.TrailingZeroCount(val);
}
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private static uint ZSTD_countLeadingZeros32(uint val)
{
assert(val != 0);
return (uint)BitOperations.LeadingZeroCount(val);
}
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private static uint ZSTD_countTrailingZeros64(ulong val)
{
assert(val != 0);
return (uint)BitOperations.TrailingZeroCount(val);
}
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private static uint ZSTD_countLeadingZeros64(ulong val)
{
assert(val != 0);
return (uint)BitOperations.LeadingZeroCount(val);
}
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private static uint ZSTD_NbCommonBytes(nuint val)
{
assert(val != 0);
if (BitConverter.IsLittleEndian)
{
return MEM_64bits
? (uint)BitOperations.TrailingZeroCount(val) >> 3
: (uint)BitOperations.TrailingZeroCount((uint)val) >> 3;
}
return MEM_64bits
? (uint)BitOperations.LeadingZeroCount(val) >> 3
: (uint)BitOperations.LeadingZeroCount((uint)val) >> 3;
}
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private static uint ZSTD_highbit32(uint val)
{
assert(val != 0);
return (uint)BitOperations.Log2(val);
}
}
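
These helpers delegate to System.Numerics.BitOperations; a standalone sanity check of the calls they wrap (assumes a runtime that provides BitOperations, i.e. .NET Core 3.0 or later):

using System;
using System.Numerics;

// Standalone sketch: the BitOperations calls backing the ZSTD_* helpers above.
// ZSTD_highbit32(v) corresponds to BitOperations.Log2(v), the index of the
// highest set bit.
class BitHelpersDemo
{
    static void Main()
    {
        Console.WriteLine(BitOperations.Log2(200u));             // 7  (128 <= 200 < 256)
        Console.WriteLine(BitOperations.TrailingZeroCount(40u)); // 3  (40 = 0b101000)
        Console.WriteLine(BitOperations.LeadingZeroCount(1u));   // 31 for a 32-bit value
    }
}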

View File

@@ -0,0 +1,739 @@
using System;
using System.Runtime.CompilerServices;
using System.Runtime.InteropServices;
using static SharpCompress.Compressors.ZStandard.UnsafeHelper;
#if NETCOREAPP3_0_OR_GREATER
using System.Runtime.Intrinsics.X86;
#endif
namespace SharpCompress.Compressors.ZStandard.Unsafe;
public static unsafe partial class Methods
{
#if NET7_0_OR_GREATER
private static ReadOnlySpan<uint> Span_BIT_mask =>
new uint[32]
{
0,
1,
3,
7,
0xF,
0x1F,
0x3F,
0x7F,
0xFF,
0x1FF,
0x3FF,
0x7FF,
0xFFF,
0x1FFF,
0x3FFF,
0x7FFF,
0xFFFF,
0x1FFFF,
0x3FFFF,
0x7FFFF,
0xFFFFF,
0x1FFFFF,
0x3FFFFF,
0x7FFFFF,
0xFFFFFF,
0x1FFFFFF,
0x3FFFFFF,
0x7FFFFFF,
0xFFFFFFF,
0x1FFFFFFF,
0x3FFFFFFF,
0x7FFFFFFF,
};
private static uint* BIT_mask =>
(uint*)
System.Runtime.CompilerServices.Unsafe.AsPointer(
ref MemoryMarshal.GetReference(Span_BIT_mask)
);
#else
private static readonly uint* BIT_mask = GetArrayPointer(
new uint[32]
{
0,
1,
3,
7,
0xF,
0x1F,
0x3F,
0x7F,
0xFF,
0x1FF,
0x3FF,
0x7FF,
0xFFF,
0x1FFF,
0x3FFF,
0x7FFF,
0xFFFF,
0x1FFFF,
0x3FFFF,
0x7FFFF,
0xFFFFF,
0x1FFFFF,
0x3FFFFF,
0x7FFFFF,
0xFFFFFF,
0x1FFFFFF,
0x3FFFFFF,
0x7FFFFFF,
0xFFFFFFF,
0x1FFFFFFF,
0x3FFFFFFF,
0x7FFFFFFF,
}
);
#endif
/*-**************************************************************
* bitStream encoding
****************************************************************/
/*! BIT_initCStream() :
* `dstCapacity` must be > sizeof(size_t)
* @return : 0 if success,
* otherwise an error code (can be tested using ERR_isError()) */
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private static nuint BIT_initCStream(ref BIT_CStream_t bitC, void* startPtr, nuint dstCapacity)
{
bitC.bitContainer = 0;
bitC.bitPos = 0;
bitC.startPtr = (sbyte*)startPtr;
bitC.ptr = bitC.startPtr;
bitC.endPtr = bitC.startPtr + dstCapacity - sizeof(nuint);
if (dstCapacity <= (nuint)sizeof(nuint))
return unchecked((nuint)(-(int)ZSTD_ErrorCode.ZSTD_error_dstSize_tooSmall));
return 0;
}
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private static nuint BIT_getLowerBits(nuint bitContainer, uint nbBits)
{
assert(nbBits < sizeof(uint) * 32 / sizeof(uint));
#if NETCOREAPP3_1_OR_GREATER
if (Bmi2.X64.IsSupported)
{
return (nuint)Bmi2.X64.ZeroHighBits(bitContainer, nbBits);
}
if (Bmi2.IsSupported)
{
return Bmi2.ZeroHighBits((uint)bitContainer, nbBits);
}
#endif
return bitContainer & BIT_mask[nbBits];
}
/*! BIT_addBits() :
* can add up to 31 bits into `bitC`.
* Note : does not check for register overflow ! */
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private static void BIT_addBits(
ref nuint bitC_bitContainer,
ref uint bitC_bitPos,
nuint value,
uint nbBits
)
{
assert(nbBits < sizeof(uint) * 32 / sizeof(uint));
assert(nbBits + bitC_bitPos < (uint)(sizeof(nuint) * 8));
bitC_bitContainer |= BIT_getLowerBits(value, nbBits) << (int)bitC_bitPos;
bitC_bitPos += nbBits;
}
/*! BIT_addBitsFast() :
* works only if `value` is _clean_,
* meaning all high bits above nbBits are 0 */
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private static void BIT_addBitsFast(
ref nuint bitC_bitContainer,
ref uint bitC_bitPos,
nuint value,
uint nbBits
)
{
assert(value >> (int)nbBits == 0);
assert(nbBits + bitC_bitPos < (uint)(sizeof(nuint) * 8));
bitC_bitContainer |= value << (int)bitC_bitPos;
bitC_bitPos += nbBits;
}
/*! BIT_flushBitsFast() :
* assumption : bitContainer has not overflowed
* unsafe version; does not check buffer overflow */
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private static void BIT_flushBitsFast(
ref nuint bitC_bitContainer,
ref uint bitC_bitPos,
ref sbyte* bitC_ptr,
sbyte* bitC_endPtr
)
{
nuint nbBytes = bitC_bitPos >> 3;
assert(bitC_bitPos < (uint)(sizeof(nuint) * 8));
assert(bitC_ptr <= bitC_endPtr);
MEM_writeLEST(bitC_ptr, bitC_bitContainer);
bitC_ptr += nbBytes;
bitC_bitPos &= 7;
bitC_bitContainer >>= (int)(nbBytes * 8);
}
/*! BIT_flushBits() :
* assumption : bitContainer has not overflowed
* safe version; checks for buffer overflow and prevents it.
* note : does not signal buffer overflow.
* overflow will be revealed later on using BIT_closeCStream() */
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private static void BIT_flushBits(
ref nuint bitC_bitContainer,
ref uint bitC_bitPos,
ref sbyte* bitC_ptr,
sbyte* bitC_endPtr
)
{
nuint nbBytes = bitC_bitPos >> 3;
assert(bitC_bitPos < (uint)(sizeof(nuint) * 8));
assert(bitC_ptr <= bitC_endPtr);
MEM_writeLEST(bitC_ptr, bitC_bitContainer);
bitC_ptr += nbBytes;
if (bitC_ptr > bitC_endPtr)
bitC_ptr = bitC_endPtr;
bitC_bitPos &= 7;
bitC_bitContainer >>= (int)(nbBytes * 8);
}
/*! BIT_closeCStream() :
* @return : size of CStream, in bytes,
* or 0 if it could not fit into dstBuffer */
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private static nuint BIT_closeCStream(
ref nuint bitC_bitContainer,
ref uint bitC_bitPos,
sbyte* bitC_ptr,
sbyte* bitC_endPtr,
sbyte* bitC_startPtr
)
{
BIT_addBitsFast(ref bitC_bitContainer, ref bitC_bitPos, 1, 1);
BIT_flushBits(ref bitC_bitContainer, ref bitC_bitPos, ref bitC_ptr, bitC_endPtr);
if (bitC_ptr >= bitC_endPtr)
return 0;
return (nuint)(bitC_ptr - bitC_startPtr) + (nuint)(bitC_bitPos > 0 ? 1 : 0);
}
/*-********************************************************
* bitStream decoding
**********************************************************/
/*! BIT_initDStream() :
* Initialize a BIT_DStream_t.
* `bitD` : a pointer to an already allocated BIT_DStream_t structure.
* `srcSize` must be the *exact* size of the bitStream, in bytes.
* @return : size of stream (== srcSize), or an errorCode if a problem is detected
*/
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private static nuint BIT_initDStream(BIT_DStream_t* bitD, void* srcBuffer, nuint srcSize)
{
if (srcSize < 1)
{
*bitD = new BIT_DStream_t();
return unchecked((nuint)(-(int)ZSTD_ErrorCode.ZSTD_error_srcSize_wrong));
}
bitD->start = (sbyte*)srcBuffer;
bitD->limitPtr = bitD->start + sizeof(nuint);
if (srcSize >= (nuint)sizeof(nuint))
{
bitD->ptr = (sbyte*)srcBuffer + srcSize - sizeof(nuint);
bitD->bitContainer = MEM_readLEST(bitD->ptr);
{
byte lastByte = ((byte*)srcBuffer)[srcSize - 1];
bitD->bitsConsumed = lastByte != 0 ? 8 - ZSTD_highbit32(lastByte) : 0;
if (lastByte == 0)
return unchecked((nuint)(-(int)ZSTD_ErrorCode.ZSTD_error_GENERIC));
}
}
else
{
bitD->ptr = bitD->start;
bitD->bitContainer = *(byte*)bitD->start;
switch (srcSize)
{
case 7:
bitD->bitContainer += (nuint)((byte*)srcBuffer)[6] << sizeof(nuint) * 8 - 16;
goto case 6;
case 6:
bitD->bitContainer += (nuint)((byte*)srcBuffer)[5] << sizeof(nuint) * 8 - 24;
goto case 5;
case 5:
bitD->bitContainer += (nuint)((byte*)srcBuffer)[4] << sizeof(nuint) * 8 - 32;
goto case 4;
case 4:
bitD->bitContainer += (nuint)((byte*)srcBuffer)[3] << 24;
goto case 3;
case 3:
bitD->bitContainer += (nuint)((byte*)srcBuffer)[2] << 16;
goto case 2;
case 2:
bitD->bitContainer += (nuint)((byte*)srcBuffer)[1] << 8;
goto default;
default:
break;
}
{
byte lastByte = ((byte*)srcBuffer)[srcSize - 1];
bitD->bitsConsumed = lastByte != 0 ? 8 - ZSTD_highbit32(lastByte) : 0;
if (lastByte == 0)
return unchecked((nuint)(-(int)ZSTD_ErrorCode.ZSTD_error_corruption_detected));
}
bitD->bitsConsumed += (uint)((nuint)sizeof(nuint) - srcSize) * 8;
}
return srcSize;
}
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private static nuint BIT_getUpperBits(nuint bitContainer, uint start)
{
return bitContainer >> (int)start;
}
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private static nuint BIT_getMiddleBits(nuint bitContainer, uint start, uint nbBits)
{
uint regMask = (uint)(sizeof(nuint) * 8 - 1);
assert(nbBits < sizeof(uint) * 32 / sizeof(uint));
#if NETCOREAPP3_1_OR_GREATER
if (Bmi2.X64.IsSupported)
{
return (nuint)Bmi2.X64.ZeroHighBits(bitContainer >> (int)(start & regMask), nbBits);
}
if (Bmi2.IsSupported)
{
return Bmi2.ZeroHighBits((uint)(bitContainer >> (int)(start & regMask)), nbBits);
}
#endif
return (nuint)(bitContainer >> (int)(start & regMask) & ((ulong)1 << (int)nbBits) - 1);
}
/*! BIT_lookBits() :
* Provides next n bits from local register.
* local register is not modified.
* On 32-bits, maxNbBits==24.
* On 64-bits, maxNbBits==56.
* @return : value extracted */
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private static nuint BIT_lookBits(BIT_DStream_t* bitD, uint nbBits)
{
return BIT_getMiddleBits(
bitD->bitContainer,
(uint)(sizeof(nuint) * 8) - bitD->bitsConsumed - nbBits,
nbBits
);
}
/*! BIT_lookBitsFast() :
* unsafe version; only works if nbBits >= 1 */
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private static nuint BIT_lookBitsFast(BIT_DStream_t* bitD, uint nbBits)
{
uint regMask = (uint)(sizeof(nuint) * 8 - 1);
assert(nbBits >= 1);
return bitD->bitContainer
<< (int)(bitD->bitsConsumed & regMask)
>> (int)(regMask + 1 - nbBits & regMask);
}
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private static void BIT_skipBits(BIT_DStream_t* bitD, uint nbBits)
{
bitD->bitsConsumed += nbBits;
}
/*! BIT_readBits() :
* Read (consume) next n bits from local register and update.
* Take care not to read more bits than the local register contains.
* @return : extracted value. */
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private static nuint BIT_readBits(BIT_DStream_t* bitD, uint nbBits)
{
nuint value = BIT_lookBits(bitD, nbBits);
BIT_skipBits(bitD, nbBits);
return value;
}
/*! BIT_readBitsFast() :
* unsafe version; only works if nbBits >= 1 */
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private static nuint BIT_readBitsFast(BIT_DStream_t* bitD, uint nbBits)
{
nuint value = BIT_lookBitsFast(bitD, nbBits);
assert(nbBits >= 1);
BIT_skipBits(bitD, nbBits);
return value;
}
/*! BIT_reloadDStream_internal() :
* Simple variant of BIT_reloadDStream(), with two conditions:
* 1. bitstream is valid : bitsConsumed <= sizeof(bitD->bitContainer)*8
* 2. the look window remains valid after being shifted down : bitD->ptr >= bitD->start
*/
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private static BIT_DStream_status BIT_reloadDStream_internal(BIT_DStream_t* bitD)
{
assert(bitD->bitsConsumed <= (uint)(sizeof(nuint) * 8));
bitD->ptr -= bitD->bitsConsumed >> 3;
assert(bitD->ptr >= bitD->start);
bitD->bitsConsumed &= 7;
bitD->bitContainer = MEM_readLEST(bitD->ptr);
return BIT_DStream_status.BIT_DStream_unfinished;
}
/*! BIT_reloadDStreamFast() :
* Similar to BIT_reloadDStream(), but with two differences:
* 1. bitsConsumed <= sizeof(bitD->bitContainer)*8 must hold!
* 2. Returns BIT_DStream_overflow when bitD->ptr < bitD->limitPtr, at this
* point you must use BIT_reloadDStream() to reload.
*/
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private static BIT_DStream_status BIT_reloadDStreamFast(BIT_DStream_t* bitD)
{
if (bitD->ptr < bitD->limitPtr)
return BIT_DStream_status.BIT_DStream_overflow;
return BIT_reloadDStream_internal(bitD);
}
#if NET7_0_OR_GREATER
private static ReadOnlySpan<byte> Span_static_zeroFilled =>
new byte[] { 0, 0, 0, 0, 0, 0, 0, 0 };
private static nuint* static_zeroFilled =>
(nuint*)
System.Runtime.CompilerServices.Unsafe.AsPointer(
ref MemoryMarshal.GetReference(Span_static_zeroFilled)
);
#else
private static readonly nuint* static_zeroFilled = (nuint*)GetArrayPointer(
new byte[] { 0, 0, 0, 0, 0, 0, 0, 0 }
);
#endif
/*! BIT_reloadDStream() :
* Refill `bitD` from the buffer previously set in BIT_initDStream().
* This function is safe: it guarantees it will never read beyond the src buffer.
* @return : status of `BIT_DStream_t` internal register.
* when status == BIT_DStream_unfinished, internal register is filled with at least 25 or 57 bits */
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private static BIT_DStream_status BIT_reloadDStream(BIT_DStream_t* bitD)
{
if (bitD->bitsConsumed > (uint)(sizeof(nuint) * 8))
{
bitD->ptr = (sbyte*)&static_zeroFilled[0];
return BIT_DStream_status.BIT_DStream_overflow;
}
assert(bitD->ptr >= bitD->start);
if (bitD->ptr >= bitD->limitPtr)
{
return BIT_reloadDStream_internal(bitD);
}
if (bitD->ptr == bitD->start)
{
if (bitD->bitsConsumed < (uint)(sizeof(nuint) * 8))
return BIT_DStream_status.BIT_DStream_endOfBuffer;
return BIT_DStream_status.BIT_DStream_completed;
}
{
uint nbBytes = bitD->bitsConsumed >> 3;
BIT_DStream_status result = BIT_DStream_status.BIT_DStream_unfinished;
if (bitD->ptr - nbBytes < bitD->start)
{
nbBytes = (uint)(bitD->ptr - bitD->start);
result = BIT_DStream_status.BIT_DStream_endOfBuffer;
}
bitD->ptr -= nbBytes;
bitD->bitsConsumed -= nbBytes * 8;
bitD->bitContainer = MEM_readLEST(bitD->ptr);
return result;
}
}
/*! BIT_endOfDStream() :
* @return : 1 if DStream has _exactly_ reached its end (all bits consumed).
*/
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private static uint BIT_endOfDStream(BIT_DStream_t* DStream)
{
return DStream->ptr == DStream->start && DStream->bitsConsumed == (uint)(sizeof(nuint) * 8)
? 1U
: 0U;
}
/*-********************************************************
* bitStream decoding
**********************************************************/
/*! BIT_initDStream() :
* Initialize a BIT_DStream_t.
* `bitD` : a pointer to an already allocated BIT_DStream_t structure.
* `srcSize` must be the *exact* size of the bitStream, in bytes.
* @return : size of stream (== srcSize), or an errorCode if a problem is detected
*/
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private static nuint BIT_initDStream(ref BIT_DStream_t bitD, void* srcBuffer, nuint srcSize)
{
if (srcSize < 1)
{
bitD = new BIT_DStream_t();
return unchecked((nuint)(-(int)ZSTD_ErrorCode.ZSTD_error_srcSize_wrong));
}
bitD.start = (sbyte*)srcBuffer;
bitD.limitPtr = bitD.start + sizeof(nuint);
if (srcSize >= (nuint)sizeof(nuint))
{
bitD.ptr = (sbyte*)srcBuffer + srcSize - sizeof(nuint);
bitD.bitContainer = MEM_readLEST(bitD.ptr);
{
byte lastByte = ((byte*)srcBuffer)[srcSize - 1];
bitD.bitsConsumed = lastByte != 0 ? 8 - ZSTD_highbit32(lastByte) : 0;
if (lastByte == 0)
return unchecked((nuint)(-(int)ZSTD_ErrorCode.ZSTD_error_GENERIC));
}
}
else
{
bitD.ptr = bitD.start;
bitD.bitContainer = *(byte*)bitD.start;
switch (srcSize)
{
case 7:
bitD.bitContainer += (nuint)((byte*)srcBuffer)[6] << sizeof(nuint) * 8 - 16;
goto case 6;
case 6:
bitD.bitContainer += (nuint)((byte*)srcBuffer)[5] << sizeof(nuint) * 8 - 24;
goto case 5;
case 5:
bitD.bitContainer += (nuint)((byte*)srcBuffer)[4] << sizeof(nuint) * 8 - 32;
goto case 4;
case 4:
bitD.bitContainer += (nuint)((byte*)srcBuffer)[3] << 24;
goto case 3;
case 3:
bitD.bitContainer += (nuint)((byte*)srcBuffer)[2] << 16;
goto case 2;
case 2:
bitD.bitContainer += (nuint)((byte*)srcBuffer)[1] << 8;
goto default;
default:
break;
}
{
byte lastByte = ((byte*)srcBuffer)[srcSize - 1];
bitD.bitsConsumed = lastByte != 0 ? 8 - ZSTD_highbit32(lastByte) : 0;
if (lastByte == 0)
return unchecked((nuint)(-(int)ZSTD_ErrorCode.ZSTD_error_corruption_detected));
}
bitD.bitsConsumed += (uint)((nuint)sizeof(nuint) - srcSize) * 8;
}
return srcSize;
}
/*! BIT_lookBits() :
* Provides next n bits from local register.
* local register is not modified.
* On 32-bits, maxNbBits==24.
* On 64-bits, maxNbBits==56.
* @return : value extracted */
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private static nuint BIT_lookBits(nuint bitD_bitContainer, uint bitD_bitsConsumed, uint nbBits)
{
return BIT_getMiddleBits(
bitD_bitContainer,
(uint)(sizeof(nuint) * 8) - bitD_bitsConsumed - nbBits,
nbBits
);
}
/*! BIT_lookBitsFast() :
* unsafe version; only works if nbBits >= 1 */
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private static nuint BIT_lookBitsFast(
nuint bitD_bitContainer,
uint bitD_bitsConsumed,
uint nbBits
)
{
uint regMask = (uint)(sizeof(nuint) * 8 - 1);
assert(nbBits >= 1);
return bitD_bitContainer
<< (int)(bitD_bitsConsumed & regMask)
>> (int)(regMask + 1 - nbBits & regMask);
}
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private static void BIT_skipBits(ref uint bitD_bitsConsumed, uint nbBits)
{
bitD_bitsConsumed += nbBits;
}
/*! BIT_readBits() :
* Read (consume) next n bits from local register and update.
* Take care not to read more bits than the local register contains.
* @return : extracted value. */
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private static nuint BIT_readBits(
nuint bitD_bitContainer,
ref uint bitD_bitsConsumed,
uint nbBits
)
{
nuint value = BIT_lookBits(bitD_bitContainer, bitD_bitsConsumed, nbBits);
BIT_skipBits(ref bitD_bitsConsumed, nbBits);
return value;
}
/*! BIT_readBitsFast() :
* unsafe version; only works if nbBits >= 1 */
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private static nuint BIT_readBitsFast(
nuint bitD_bitContainer,
ref uint bitD_bitsConsumed,
uint nbBits
)
{
nuint value = BIT_lookBitsFast(bitD_bitContainer, bitD_bitsConsumed, nbBits);
assert(nbBits >= 1);
BIT_skipBits(ref bitD_bitsConsumed, nbBits);
return value;
}
/*! BIT_reloadDStreamFast() :
* Similar to BIT_reloadDStream(), but with two differences:
* 1. bitsConsumed <= sizeof(bitD->bitContainer)*8 must hold!
* 2. Returns BIT_DStream_overflow when bitD->ptr < bitD->limitPtr, at this
* point you must use BIT_reloadDStream() to reload.
*/
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private static BIT_DStream_status BIT_reloadDStreamFast(
ref nuint bitD_bitContainer,
ref uint bitD_bitsConsumed,
ref sbyte* bitD_ptr,
sbyte* bitD_start,
sbyte* bitD_limitPtr
)
{
if (bitD_ptr < bitD_limitPtr)
return BIT_DStream_status.BIT_DStream_overflow;
return BIT_reloadDStream_internal(
ref bitD_bitContainer,
ref bitD_bitsConsumed,
ref bitD_ptr,
bitD_start
);
}
/*! BIT_reloadDStream() :
* Refill `bitD` from the buffer previously set in BIT_initDStream().
* This function is safe: it guarantees it will never read beyond the src buffer.
* @return : status of `BIT_DStream_t` internal register.
* when status == BIT_DStream_unfinished, internal register is filled with at least 25 or 57 bits */
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private static BIT_DStream_status BIT_reloadDStream(
ref nuint bitD_bitContainer,
ref uint bitD_bitsConsumed,
ref sbyte* bitD_ptr,
sbyte* bitD_start,
sbyte* bitD_limitPtr
)
{
if (bitD_bitsConsumed > (uint)(sizeof(nuint) * 8))
{
bitD_ptr = (sbyte*)&static_zeroFilled[0];
return BIT_DStream_status.BIT_DStream_overflow;
}
assert(bitD_ptr >= bitD_start);
if (bitD_ptr >= bitD_limitPtr)
{
return BIT_reloadDStream_internal(
ref bitD_bitContainer,
ref bitD_bitsConsumed,
ref bitD_ptr,
bitD_start
);
}
if (bitD_ptr == bitD_start)
{
if (bitD_bitsConsumed < (uint)(sizeof(nuint) * 8))
return BIT_DStream_status.BIT_DStream_endOfBuffer;
return BIT_DStream_status.BIT_DStream_completed;
}
{
uint nbBytes = bitD_bitsConsumed >> 3;
BIT_DStream_status result = BIT_DStream_status.BIT_DStream_unfinished;
if (bitD_ptr - nbBytes < bitD_start)
{
nbBytes = (uint)(bitD_ptr - bitD_start);
result = BIT_DStream_status.BIT_DStream_endOfBuffer;
}
bitD_ptr -= nbBytes;
bitD_bitsConsumed -= nbBytes * 8;
bitD_bitContainer = MEM_readLEST(bitD_ptr);
return result;
}
}
/*! BIT_reloadDStream_internal() :
* Simple variant of BIT_reloadDStream(), with two conditions:
* 1. bitstream is valid : bitsConsumed <= sizeof(bitD->bitContainer)*8
* 2. the look window remains valid after being shifted down : bitD->ptr >= bitD->start
*/
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private static BIT_DStream_status BIT_reloadDStream_internal(
ref nuint bitD_bitContainer,
ref uint bitD_bitsConsumed,
ref sbyte* bitD_ptr,
sbyte* bitD_start
)
{
assert(bitD_bitsConsumed <= (uint)(sizeof(nuint) * 8));
bitD_ptr -= bitD_bitsConsumed >> 3;
assert(bitD_ptr >= bitD_start);
bitD_bitsConsumed &= 7;
bitD_bitContainer = MEM_readLEST(bitD_ptr);
return BIT_DStream_status.BIT_DStream_unfinished;
}
/*! BIT_endOfDStream() :
* @return : 1 if DStream has _exactly_ reached its end (all bits consumed).
*/
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private static uint BIT_endOfDStream(
uint DStream_bitsConsumed,
sbyte* DStream_ptr,
sbyte* DStream_start
)
{
return DStream_ptr == DStream_start && DStream_bitsConsumed == (uint)(sizeof(nuint) * 8)
? 1U
: 0U;
}
}
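
BIT_initDStream derives bitsConsumed from the highest set bit of the last byte, which acts as the end marker appended by BIT_closeCStream. A small standalone check of that arithmetic (the byte value is purely illustrative):

using System;
using System.Numerics;

// Standalone sketch: the last byte of the stream carries a 1-bit end marker;
// everything above the highest set bit is padding already counted as consumed.
class LastByteSentinelDemo
{
    static void Main()
    {
        byte lastByte = 0b0001_0110;                       // marker bit at position 4
        uint bitsConsumed = lastByte != 0
            ? 8 - (uint)BitOperations.Log2(lastByte)       // 8 - ZSTD_highbit32(lastByte)
            : 0;                                           // 0 would indicate corruption
        Console.WriteLine(bitsConsumed);                   // prints 4: 3 padding bits + marker
    }
}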

View File

@@ -0,0 +1,8 @@
namespace SharpCompress.Compressors.ZStandard.Unsafe;
public struct BlockSummary
{
public nuint nbSequences;
public nuint blockSize;
public nuint litSize;
}

View File

@@ -0,0 +1,20 @@
namespace SharpCompress.Compressors.ZStandard.Unsafe;
/**
* COVER_best_t is used for two purposes:
* 1. Synchronizing threads.
* 2. Saving the best parameters and dictionary.
*
* All of the methods except COVER_best_init() are thread safe if zstd is
* compiled with multithreaded support.
*/
public unsafe struct COVER_best_s
{
public void* mutex;
public void* cond;
public nuint liveJobs;
public void* dict;
public nuint dictSize;
public ZDICT_cover_params_t parameters;
public nuint compressedSize;
}

View File

@@ -0,0 +1,19 @@
namespace SharpCompress.Compressors.ZStandard.Unsafe;
/*-*************************************
* Context
***************************************/
public unsafe struct COVER_ctx_t
{
public byte* samples;
public nuint* offsets;
public nuint* samplesSizes;
public nuint nbSamples;
public nuint nbTrainSamples;
public nuint nbTestSamples;
public uint* suffix;
public nuint suffixSize;
public uint* freqs;
public uint* dmerAt;
public uint d;
}

View File

@@ -0,0 +1,11 @@
namespace SharpCompress.Compressors.ZStandard.Unsafe;
/**
* Struct used for the dictionary selection function.
*/
public unsafe struct COVER_dictSelection
{
public byte* dictContent;
public nuint dictSize;
public nuint totalCompressedSize;
}

View File

@@ -0,0 +1,10 @@
namespace SharpCompress.Compressors.ZStandard.Unsafe;
/**
* Number of epochs and size of each epoch.
*/
public struct COVER_epoch_info_t
{
public uint num;
public uint size;
}

View File

@@ -0,0 +1,7 @@
namespace SharpCompress.Compressors.ZStandard.Unsafe;
public struct COVER_map_pair_t_s
{
public uint key;
public uint value;
}

Some files were not shown because too many files have changed in this diff.