Compare commits

...

272 Commits

Author SHA1 Message Date
copilot-swe-agent[bot]
b83ee67cdc Add regression test for TarArchive.ExtractAllEntries()
Co-authored-by: adamhathcock <527620+adamhathcock@users.noreply.github.com>
2025-11-27 16:01:24 +00:00
copilot-swe-agent[bot]
bfb8e9421e Changes before error encountered
Co-authored-by: adamhathcock <527620+adamhathcock@users.noreply.github.com>
2025-11-25 15:57:55 +00:00
copilot-swe-agent[bot]
27e6051241 Initial plan 2025-11-25 15:40:45 +00:00
Adam Hathcock
9433e06b93 Merge pull request #1023 from adamhathcock/copilot/fix-zip64-validation-issue
Fix version mismatch between Local File Header and Central Directory File Header in Zip archives
2025-11-25 15:30:48 +00:00
copilot-swe-agent[bot]
a92aaa51d5 Remove ZipCompressionMethod.None from version 63 check
None (stored) compression only requires version 10/20, not version 63.
Version 63 is specifically for advanced compression methods like LZMA,
PPMd, BZip2, and ZStandard.

Co-authored-by: adamhathcock <527620+adamhathcock@users.noreply.github.com>
2025-11-25 14:54:13 +00:00
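For illustration, a minimal sketch of the version rule this commit restores. The numbers follow the commit text; the enum and method are illustrative, not the actual SharpCompress writer internals:

```csharp
enum ZipCompressionMethod { None, Deflate, BZip2, LZMA, PPMd, ZStandard }

static class ZipVersions
{
    // Stored (None) needs only version 10 and Deflate version 20; LZMA, PPMd,
    // BZip2 and ZStandard are the methods that require 6.3 (encoded as 63).
    public static ushort VersionNeededToExtract(ZipCompressionMethod method) =>
        method switch
        {
            ZipCompressionMethod.LZMA
            or ZipCompressionMethod.PPMd
            or ZipCompressionMethod.BZip2
            or ZipCompressionMethod.ZStandard => 63,
            ZipCompressionMethod.Deflate => 20,
            _ => 10,
        };
}
```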
Adam Hathcock
d41908adeb fixes for clarity 2025-11-25 14:25:58 +00:00
Adam Hathcock
81ca15b567 Update src/SharpCompress/Writers/Zip/ZipCentralDirectoryEntry.cs
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-11-25 14:13:10 +00:00
Adam Hathcock
b81d0fd730 Merge pull request #1009 from adamhathcock/dependabot/nuget/AwesomeAssertions-9.3.0
Bump AwesomeAssertions from 9.2.1 to 9.3.0
2025-11-25 11:55:41 +00:00
Adam Hathcock
3a1bb187e8 Merge pull request #1031 from adamhathcock/dependabot/github_actions/actions/checkout-6
Bump actions/checkout from 5 to 6
2025-11-25 11:55:21 +00:00
Adam Hathcock
3fee14a070 Merge pull request #1035 from adamhathcock/adam/update-csharpier
Update csharpier and reformat
2025-11-25 11:54:56 +00:00
Adam Hathcock
5bf789ac65 Update csharpier and reformat 2025-11-25 11:50:21 +00:00
dependabot[bot]
be06049db3 Bump actions/checkout from 5 to 6
Bumps [actions/checkout](https://github.com/actions/checkout) from 5 to 6.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v5...v6)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-11-24 09:20:23 +00:00
Adam Hathcock
a0435f6a60 Merge pull request #1030 from TwanVanDongen/master
Added buffer boundary tests.
2025-11-23 15:53:36 +00:00
Twan van Dongen
2321e2c90b Added buffer boundary tests. Changed largefile to Alice29.txt as it's sufficient for the tests. 2025-11-22 12:32:25 +01:00
Adam Hathcock
97e98d8629 Merge pull request #1028 from TwanVanDongen/master
Buffer boundary tests
2025-11-21 08:35:54 +00:00
Twan van Dongen
d96e7362d2 Buffer boundary test for ARC's Squeezed method 2025-11-20 21:56:07 +01:00
Twan van Dongen
7dd46fe5ed More buffer boundary tests 2025-11-20 19:43:07 +01:00
Adam Hathcock
04c044cb2b Update tests/SharpCompress.Test/Zip/Zip64VersionConsistencyTests.cs
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-11-19 15:49:55 +00:00
Adam Hathcock
cc10a12fbc Update tests/SharpCompress.Test/Zip/Zip64VersionConsistencyTests.cs
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-11-19 15:49:27 +00:00
Adam Hathcock
8b0a1c699f Update tests/SharpCompress.Test/Zip/Zip64VersionConsistencyTests.cs
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-11-19 15:49:17 +00:00
Adam Hathcock
15ca7c9807 Merge pull request #1025 from adamhathcock/copilot/fix-exception-disposing-rararchive
Fix ArgumentNullException when disposing RarArchive with damaged archives
2025-11-19 15:48:44 +00:00
Adam Hathcock
2b4da7e39b Merge pull request #1024 from adamhathcock/copilot/fix-memory-exhaustion-bug
Fix memory exhaustion in TAR header auto-detection
2025-11-19 15:33:52 +00:00
copilot-swe-agent[bot]
31f81f38af Fix null reference exception in UnpackV1.Unpack.Dispose()
Co-authored-by: adamhathcock <527620+adamhathcock@users.noreply.github.com>
2025-11-19 15:16:18 +00:00
copilot-swe-agent[bot]
72cf77b7c7 Initial plan 2025-11-19 15:09:19 +00:00
copilot-swe-agent[bot]
0fe48c647e Enhance fix to handle LZMA/PPMd/BZip2/ZStandard compression methods
Also fixes pre-existing version mismatch for advanced compression methods that require version 63. Added tests for LZMA and PPMd to verify version consistency.

Co-authored-by: adamhathcock <527620+adamhathcock@users.noreply.github.com>
2025-11-19 11:29:43 +00:00
copilot-swe-agent[bot]
7b06652bff Add validation to prevent memory exhaustion in TAR long name headers
Co-authored-by: adamhathcock <527620+adamhathcock@users.noreply.github.com>
2025-11-19 11:20:59 +00:00
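The validation idea is simple enough to sketch: cap a header-declared size before allocating. The cap value and method shape below are assumptions for illustration, not the library's actual code:

```csharp
using System.IO;

static class TarLongName
{
    // A GNU long-name header declares its own length, so a corrupt or hostile
    // value must not drive the allocation. 1 MiB is far beyond any real path
    // (the exact cap here is an assumption).
    private const long MaxLongNameSize = 1024 * 1024;

    public static byte[] ReadLongName(Stream source, long declaredSize)
    {
        if (declaredSize < 0 || declaredSize > MaxLongNameSize)
        {
            throw new InvalidDataException($"Implausible long-name size: {declaredSize}");
        }
        var buffer = new byte[declaredSize];
        var total = 0;
        while (total < buffer.Length)
        {
            var read = source.Read(buffer, total, buffer.Length - total);
            if (read == 0)
            {
                throw new EndOfStreamException();
            }
            total += read;
        }
        return buffer;
    }
}
```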
copilot-swe-agent[bot]
434ce05416 Fix Zip64 version mismatch between LFH and CDFH
When UseZip64=true but files are small, ensure Central Directory File Header uses version 45 to match Local File Header. This fixes validation failures in System.IO.Packaging and other strict readers.

Co-authored-by: adamhathcock <527620+adamhathcock@users.noreply.github.com>
2025-11-19 11:14:06 +00:00
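A sketch of the invariant the commit describes, with illustrative type and member names: one computed value written to both headers.

```csharp
sealed class ZipEntryVersions
{
    private const ushort Zip64VersionNeeded = 45;
    private const ushort BaselineVersionNeeded = 20;

    public ushort LocalFileHeaderVersion { get; private set; }
    public ushort CentralDirectoryVersion { get; private set; }

    // Strict readers such as System.IO.Packaging reject archives where these
    // two values differ, even when UseZip64 is set but every entry is small.
    public void SetVersionNeeded(bool useZip64)
    {
        var version = useZip64 ? Zip64VersionNeeded : BaselineVersionNeeded;
        LocalFileHeaderVersion = version;
        CentralDirectoryVersion = version;
    }
}
```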
copilot-swe-agent[bot]
0698031ed4 Initial plan 2025-11-19 11:04:50 +00:00
copilot-swe-agent[bot]
51237a34eb Initial plan 2025-11-19 11:02:28 +00:00
Adam Hathcock
b8264a8131 Merge pull request #1004 from adamhathcock/adam/async-xz
Async XZ
2025-11-19 10:53:24 +00:00
Adam Hathcock
cad923018e Merge remote-tracking branch 'origin/master' into adam/async-xz 2025-11-19 09:56:01 +00:00
Adam Hathcock
db94b49941 fix misspellings 2025-11-19 09:55:43 +00:00
Adam Hathcock
72d15d9cbf Merge pull request #1019 from TwanVanDongen/master
ARJ's methods 1, 2 and 3 implemented for streaming
2025-11-18 13:41:49 +00:00
Twan van Dongen
e0186eadc0 Add buffer boundary tests for streaming decompression. Methods 1, 2 and 3 still need full streaming fixes; an exception is now thrown to prevent corrupt output 2025-11-18 08:14:34 +01:00
Twan van Dongen
4cfa5b04af Included ARC and ARJ in readme 2025-11-14 15:57:22 +01:00
Twan van Dongen
f2c54b1f8b Merge branch 'master' of https://github.com/TwanVanDongen/sharpcompress 2025-11-14 15:43:51 +01:00
Twan van Dongen
d7d0bc6582 Missed formatting FilePart. 2025-11-14 15:43:47 +01:00
Twan
dd9dc2500b Merge branch 'adamhathcock:master' into master 2025-11-14 15:40:36 +01:00
Twan van Dongen
4efb109da8 Arj's methods CompressedMost (1), Compressed (2) and CompressedFaster (3) implemented. 2025-11-14 15:39:58 +01:00
Adam Hathcock
bcf9a6bdf1 Merge pull request #1017 from Morilli/fix-streamstack
Fix some IStreamStack and SharpCompressStream functions
2025-11-14 08:33:38 +00:00
Morilli
e3a25ecdc0 make formatting worse with csharpier 2025-11-14 03:54:26 +01:00
Morilli
783521928d fix SharpCompressStream BufferSize setter and Seek as well
the BufferSize setter was completely broken and trashed the `_internalPosition` value on set; for `Seek`, this is just an off-by-one fix allowing seeking to immediately past the last byte in the buffer
2025-11-14 03:37:08 +01:00
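The off-by-one is easy to pin down in isolation; a minimal sketch with illustrative names:

```csharp
using System;

static class BufferSeek
{
    // '>' rather than '>=': target == bufferedLength (immediately past the
    // last byte in the buffer) is a valid position, just as Stream.Seek may
    // land exactly at end-of-stream. Rejecting it was the off-by-one.
    public static long ValidateSeekTarget(long target, long bufferedLength)
    {
        if (target < 0 || target > bufferedLength)
        {
            throw new ArgumentOutOfRangeException(nameof(target));
        }
        return target;
    }
}
```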
Morilli
9a876abd31 fix IStreamStack.StackSeek with buffering streams 2025-11-14 03:37:08 +01:00
Morilli
97f58b412e Add test for StackSeek behavior 2025-11-14 03:37:08 +01:00
dependabot[bot]
4c61628078 Bump AwesomeAssertions from 9.2.1 to 9.3.0
---
updated-dependencies:
- dependency-name: AwesomeAssertions
  dependency-version: 9.3.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-11-10 11:57:30 +00:00
Adam Hathcock
99a8b0f750 Merge pull request #1007 from TwanVanDongen/master
ArjReader throws exception for password protected archives.
2025-11-06 09:58:24 +00:00
Twan van Dongen
a9017d7c25 ArjReader throws exception for password protected archives. 2025-11-06 09:32:12 +01:00
Adam Hathcock
d9e4b26648 Merge pull request #1006 from TwanVanDongen/master
ARJ multi-part archive handling improved
2025-11-05 08:37:54 +00:00
Twan van Dongen
0d03bafe49 Merge conflicts resolved 2025-11-05 08:19:30 +01:00
Twan
fee15a31f9 Merge branch 'adamhathcock:master' into master 2025-11-05 08:12:53 +01:00
Twan van Dongen
997d3910d4 CSharpier applied. ArjReader throws exception for multi-part archives. Method4 (decodefastest) refactored to support Stream. 2025-11-05 08:05:30 +01:00
Twan van Dongen
a3918cc0d7 ArjReader throws exception for multi-part archives. Method4 (decodefastest) refactored to support Stream. 2025-11-05 08:02:20 +01:00
Adam Hathcock
f056986b07 Merge pull request #1005 from TwanVanDongen/master
Refactor SqueezeStream for CLS Compliance, Streaming, and Generic Test Coverage
2025-11-04 15:50:36 +00:00
Twan van Dongen
59c1f02f98 Still difficult to run CSharpier... 2025-11-02 19:23:30 +01:00
Twan van Dongen
3a71a2b1f8 Ran CSharpier. 2025-11-02 19:22:01 +01:00
Twan van Dongen
2ef1215b49 Merge branch 'master' of https://github.com/TwanVanDongen/sharpcompress 2025-11-02 19:20:47 +01:00
Twan van Dongen
130ac83076 Revert "CSharpier...Refactors the SqueezeStream class to ensure full CLS compliance and proper stream behavior. It replaces the previous one-shot decoding logic with a true streaming implementation by piping Huffman-decoded output into the existing RunLength90Stream, enabling real-time decompression."
This reverts commit dd606a0702.
2025-11-02 19:20:34 +01:00
Twan van Dongen
dd606a0702 CSharpier...Refactors the SqueezeStream class to ensure full CLS compliance and proper stream behavior. It replaces the previous one-shot decoding logic with a true streaming implementation by piping Huffman-decoded output into the existing RunLength90Stream, enabling real-time decompression. 2025-11-02 19:15:17 +01:00
Twan van Dongen
84cd772f50 Refactors the SqueezeStream class to ensure full CLS compliance and proper stream behavior. It replaces the previous one-shot decoding logic with a true streaming implementation by piping Huffman-decoded output into the existing RunLength90Stream, enabling real-time decompression. 2025-11-02 19:09:59 +01:00
Adam Hathcock
fa1d7af22f Merge remote-tracking branch 'origin/master' into adam/async-xz 2025-11-01 10:35:22 +00:00
Adam Hathcock
a771ba3bc0 async tests 2025-11-01 10:35:07 +00:00
Adam Hathcock
8b612c658d Merge pull request #1003 from adamhathcock/adam/async-lzma
async lzma
2025-11-01 10:34:54 +00:00
Adam Hathcock
7dd0da5fd7 Merge branch 'adam/async-lzma' into adam/async-xz 2025-11-01 10:28:27 +00:00
Adam Hathcock
f7b3525c4e fix tests and fmt 2025-11-01 10:25:14 +00:00
Adam Hathcock
de83bdae48 Merge remote-tracking branch 'origin/master' into adam/async-lzma 2025-11-01 10:21:25 +00:00
Adam Hathcock
d90b610767 Merge pull request #994 from TwanVanDongen/master
Adding the ARJ (Archived by Robert Jung) format
2025-11-01 10:20:54 +00:00
Adam Hathcock
2d41de6b72 add async tests 2025-11-01 09:57:26 +00:00
Adam Hathcock
f391c3caf3 make an async code 2025-11-01 09:42:17 +00:00
Twan van Dongen
9bdf150676 Merge branch 'master' of https://github.com/TwanVanDongen/sharpcompress 2025-10-31 16:17:26 +01:00
Twan van Dongen
0c199609eb CSharpier...Merge branch 'master' of https://github.com/TwanVanDongen/sharpcompress 2025-10-31 16:17:23 +01:00
Twan van Dongen
6eff9d3753 Merge branch 'master' of https://github.com/TwanVanDongen/sharpcompress 2025-10-31 16:15:51 +01:00
Twan
7ab16457c7 Added ARJ's compression method4 (compressed fastest).
Refactored TestBase to support archives with mixed compression algorithms. Merge branch 'adamhathcock:master' into master
2025-10-31 16:14:48 +01:00
Twan
e7ad8132b5 Merge branch 'adamhathcock:master' into master 2025-10-31 14:33:14 +01:00
Adam Hathcock
da87e45534 add async xzstream 2025-10-31 11:57:49 +00:00
Adam Hathcock
2ffaef5563 async xzblock 2025-10-31 11:47:27 +00:00
Adam Hathcock
55cb350d2c remove needless variable 2025-10-31 11:44:32 +00:00
Adam Hathcock
7fa271a1b4 fix ILLink 2025-10-31 11:43:35 +00:00
Adam Hathcock
c53ca372f2 don't use pools in tests 2025-10-31 11:39:57 +00:00
Adam Hathcock
75bc8501f4 Merge pull request #997 from adamhathcock/copilot/fix-ziparchive-linux-issue
Fix ArchiveFactory.Open double-wrapping causing "Cannot determine compressed stream type" on Linux
2025-10-31 11:24:59 +00:00
Adam Hathcock
1e22b47fe1 some fixes 2025-10-31 11:23:30 +00:00
Adam Hathcock
74e2dca207 fmt 2025-10-31 11:15:42 +00:00
Adam Hathcock
a669de24b7 Merge remote-tracking branch 'origin/master' into adam/async-lzma 2025-10-31 11:13:17 +00:00
Adam Hathcock
e1e9c449e9 use proper async versions 2025-10-31 11:12:46 +00:00
Adam Hathcock
60e1dc0239 review fixes 2025-10-31 11:10:55 +00:00
Adam Hathcock
10eb94fd82 Merge pull request #1002 from adamhathcock/adam/async-bzip2
async bzip2 and add
2025-10-31 11:09:39 +00:00
Adam Hathcock
ccc8587e5f review fixes 2025-10-31 10:55:33 +00:00
Adam Hathcock
53c96193c1 format 2025-10-30 16:40:59 +00:00
Adam Hathcock
d4f11e00b1 some more async changes 2025-10-30 16:31:45 +00:00
Adam Hathcock
321233b82c some async implementations 2025-10-30 15:08:57 +00:00
Adam Hathcock
eb188051d4 fmt 2025-10-30 14:53:11 +00:00
Adam Hathcock
a136084e11 add adc async 2025-10-30 14:52:51 +00:00
Adam Hathcock
bc06f3179d add basics for async bzip2 2025-10-30 14:42:46 +00:00
Adam Hathcock
ee84d971b2 Merge remote-tracking branch 'origin/master' into copilot/fix-ziparchive-linux-issue
# Conflicts:
#	src/SharpCompress/packages.lock.json
2025-10-30 14:30:31 +00:00
Twan
264d80ef4c Merge branch 'master' into master 2025-10-30 07:53:09 +01:00
Adam Hathcock
dba68187ac Merge pull request #996 from adamhathcock/adam/async-rar-ai 2025-10-29 14:26:15 +00:00
Adam Hathcock
ca4a1936b3 Merge remote-tracking branch 'origin/adam/async-rar-ai' into adam/async-rar-ai 2025-10-29 14:11:02 +00:00
Adam Hathcock
77c8d31a90 Merge pull request #1000 from adamhathcock/copilot/sub-pr-996
Fix Windows test failures due to ArrayPool buffer sizing
2025-10-29 14:10:39 +00:00
Adam Hathcock
ab7196f86c some review fixes 2025-10-29 14:07:12 +00:00
copilot-swe-agent[bot]
88b3a66bf9 Fix Windows test failures in SharpCompressStreamTests
ArrayPool.Rent() can return buffers larger than requested. The tests were using test.Length (the actual buffer size) instead of the requested size (0x1000), causing failures on Windows where ArrayPool returns larger buffers than on Linux.

Fixed by:
- Using explicit size (0x1000) instead of test.Length in Read() calls
- Using test.Take(0x1000) instead of test when comparing arrays

Co-authored-by: adamhathcock <527620+adamhathcock@users.noreply.github.com>
2025-10-29 13:40:44 +00:00
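The `ArrayPool` behaviour behind this fix can be shown with nothing but the documented `System.Buffers` API:

```csharp
using System;
using System.Buffers;

class PoolSizing
{
    static void Main()
    {
        // Rent guarantees *at least* the requested length; the pool is free to
        // hand back a larger bucket, and bucketing can differ across platforms.
        var rented = ArrayPool<byte>.Shared.Rent(0x1000);
        Console.WriteLine(rented.Length >= 0x1000); // always True; may be larger
        // So assertions must compare against the requested size (0x1000),
        // never against rented.Length.
        ArrayPool<byte>.Shared.Return(rented);
    }
}
```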
copilot-swe-agent[bot]
ea77666b4a Final verification: All tests pass, no security issues
Co-authored-by: adamhathcock <527620+adamhathcock@users.noreply.github.com>
2025-10-29 13:38:38 +00:00
copilot-swe-agent[bot]
db98e5f39b Fix ArchiveFactory.Open to avoid double-wrapping SharpCompressStream
Use SharpCompressStream.Create instead of constructor to properly handle
streams that are already wrapped. This prevents potential buffering issues
when opening ZIP files, particularly on Linux systems.

Added tests to verify both raw FileStream and pre-wrapped stream scenarios.

Co-authored-by: adamhathcock <527620+adamhathcock@users.noreply.github.com>
2025-10-29 13:34:53 +00:00
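The shape of the fix, sketched with an illustrative wrapper type (the real `SharpCompressStream.Create` signature may differ):

```csharp
using System.IO;

sealed class BufferingWrapper : Stream
{
    private readonly Stream _inner;
    private BufferingWrapper(Stream inner) => _inner = inner;

    // Returns the input unchanged when it is already wrapped, so opening an
    // archive never stacks two buffering layers on one stream.
    public static Stream Create(Stream stream) =>
        stream is BufferingWrapper ? stream : new BufferingWrapper(stream);

    public override bool CanRead => _inner.CanRead;
    public override bool CanSeek => _inner.CanSeek;
    public override bool CanWrite => _inner.CanWrite;
    public override long Length => _inner.Length;
    public override long Position
    {
        get => _inner.Position;
        set => _inner.Position = value;
    }
    public override void Flush() => _inner.Flush();
    public override int Read(byte[] buffer, int offset, int count) => _inner.Read(buffer, offset, count);
    public override long Seek(long offset, SeekOrigin origin) => _inner.Seek(offset, origin);
    public override void SetLength(long value) => _inner.SetLength(value);
    public override void Write(byte[] buffer, int offset, int count) => _inner.Write(buffer, offset, count);
}
```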
copilot-swe-agent[bot]
df59c5cb9d Plan: Fix potential ZipArchive.IsZipFile failure on Linux
Co-authored-by: adamhathcock <527620+adamhathcock@users.noreply.github.com>
2025-10-29 13:26:00 +00:00
copilot-swe-agent[bot]
e786e95358 Initial plan 2025-10-29 13:25:48 +00:00
Adam Hathcock
75ada5623c add async tests for compress stream 2025-10-29 13:23:50 +00:00
copilot-swe-agent[bot]
ad5c655c45 Initial plan 2025-10-29 13:09:05 +00:00
Adam Hathcock
65e607454e add async for UnpWriteBufAsync and remove comments 2025-10-29 13:07:43 +00:00
Adam Hathcock
f238be6003 add arraypool usage in init 2025-10-29 10:35:03 +00:00
Adam Hathcock
dc31e4c5fa fmt 2025-10-29 10:27:30 +00:00
Adam Hathcock
665d8cd266 ran out of tokens, things work but need one more conversion 2025-10-29 10:23:57 +00:00
Adam Hathcock
8324114e84 remove unused code 2025-10-29 10:11:34 +00:00
Adam Hathcock
b83e6ee4ce fix async usage 2025-10-29 09:55:11 +00:00
Adam Hathcock
58bab0d310 async unpack2.0 2025-10-29 09:53:14 +00:00
Adam Hathcock
1af51aaaba async unpack1.5 2025-10-29 09:50:12 +00:00
Adam Hathcock
a09327b831 unpack50async 2025-10-29 09:46:10 +00:00
Adam Hathcock
16543bf74c async UnpReadBufAsync 2025-10-29 09:38:55 +00:00
Adam Hathcock
aa4cd373ac added more async methods 2025-10-29 09:28:34 +00:00
Adam Hathcock
351e294362 convert unpack15 and unpack20 2025-10-29 09:23:42 +00:00
Adam Hathcock
a0c5b1cd9d add archive tests 2025-10-29 09:08:08 +00:00
Adam Hathcock
df2ed1e584 fix RarStream 2025-10-29 09:00:23 +00:00
Adam Hathcock
b354f7a3a5 fix logic mistake 2025-10-29 08:47:18 +00:00
Adam Hathcock
bb53d1e1c6 entrystream fixes and fmt 2025-10-29 08:41:05 +00:00
Twan van Dongen
2aabd8d0e1 Initial implementation of the ARJ (Archived by Robert Jung) format, supporting 'store' for single file archives. Multi-file archives and all compression algorithms still require implementations. 2025-10-28 20:38:29 +01:00
Adam Hathcock
aca97c2c6c add rarcrc tests 2025-10-28 16:48:05 +00:00
Adam Hathcock
8e7d959cf4 add async creations 2025-10-28 16:07:16 +00:00
Adam Hathcock
b23f031db9 add async reads 2025-10-28 15:52:18 +00:00
Adam Hathcock
1ba529a9d5 first pass of async files 2025-10-28 15:29:41 +00:00
Adam Hathcock
3d29c183ef basic async usage 2025-10-28 15:00:11 +00:00
Adam Hathcock
8a108b590d Merge pull request #993 from adamhathcock/adam/macos-fixes
make test linux only
2025-10-28 14:52:05 +00:00
Adam Hathcock
bca0f67344 make test linux only 2025-10-28 12:18:05 +00:00
Adam Hathcock
f3dad51134 Merge pull request #991 from adamhathcock/async-reader-methods
Add more Async tests and complete Zip tests
2025-10-28 11:50:03 +00:00
Adam Hathcock
f51840829c Merge branch 'master' into async-reader-methods 2025-10-28 11:39:35 +00:00
Adam Hathcock
aa1c0d0870 Merge pull request #988 from adamhathcock/copilot/fix-file-write-error
Fix extraction failure on Windows due to case-sensitive path comparison
2025-10-28 11:39:17 +00:00
Adam Hathcock
dee5ee6589 Merge pull request #989 from adamhathcock/copilot/add-support-empty-directories
Add support for empty directory entries in archives
2025-10-28 11:38:41 +00:00
Adam Hathcock
b799f479c4 Merge branch 'master' into copilot/add-support-empty-directories 2025-10-28 11:35:56 +00:00
copilot-swe-agent[bot]
b4352fefa5 Fix code formatting per CSharpier standards
Co-authored-by: adamhathcock <527620+adamhathcock@users.noreply.github.com>
2025-10-28 11:32:39 +00:00
Adam Hathcock
77d06fb60e Merge branch 'master' into copilot/fix-file-write-error 2025-10-28 11:32:05 +00:00
Adam Hathcock
00b647457c Merge pull request #990 from adamhathcock/copilot/add-common-exception-type
Make all library exceptions inherit from SharpCompressException
2025-10-28 11:31:32 +00:00
Adam Hathcock
153d10a35c add async to forward only streams 2025-10-28 11:30:12 +00:00
Adam Hathcock
06713c641e async deflate 64 2025-10-28 11:26:31 +00:00
copilot-swe-agent[bot]
210978ec2d Format code with CSharpier
Co-authored-by: adamhathcock <527620+adamhathcock@users.noreply.github.com>
2025-10-28 11:13:51 +00:00
Adam Hathcock
42f7d43139 enable zip64 tests that pass 2025-10-28 11:07:53 +00:00
Adam Hathcock
19967f5ad7 allow forward only write 2025-10-28 11:01:27 +00:00
Adam Hathcock
a1de3eb47d add async tests and clean up deflate64stream 2025-10-28 11:01:12 +00:00
copilot-swe-agent[bot]
e88841bdec Add support for empty directory entries in archives
Co-authored-by: adamhathcock <527620+adamhathcock@users.noreply.github.com>
2025-10-28 10:34:03 +00:00
copilot-swe-agent[bot]
c8e4915f8e Final progress report
Co-authored-by: adamhathcock <527620+adamhathcock@users.noreply.github.com>
2025-10-28 10:29:36 +00:00
copilot-swe-agent[bot]
a93a3f0598 Address code review feedback - fix formatting
Co-authored-by: adamhathcock <527620+adamhathcock@users.noreply.github.com>
2025-10-28 10:26:31 +00:00
copilot-swe-agent[bot]
084f81fc8d Format code with CSharpier 2025-10-28 10:23:57 +00:00
copilot-swe-agent[bot]
d148f36e87 Add support for empty directory entries in archives
Co-authored-by: adamhathcock <527620+adamhathcock@users.noreply.github.com>
2025-10-28 10:22:58 +00:00
copilot-swe-agent[bot]
150d9c35b7 Complete fix for case-sensitive path comparison on Windows
Co-authored-by: adamhathcock <527620+adamhathcock@users.noreply.github.com>
2025-10-28 10:22:56 +00:00
copilot-swe-agent[bot]
e11198616e Address code review feedback: use RuntimeInformation for platform detection
- Replace Environment.OSVersion.Platform with RuntimeInformation.IsOSPlatform(OSPlatform.Windows)
- Clarify test comment about platform-specific behavior
- Add using System.Runtime.InteropServices for RuntimeInformation

Co-authored-by: adamhathcock <527620+adamhathcock@users.noreply.github.com>
2025-10-28 10:20:05 +00:00
copilot-swe-agent[bot]
2f27f1e6f9 Complete exception hierarchy implementation
Co-authored-by: adamhathcock <527620+adamhathcock@users.noreply.github.com>
2025-10-28 10:18:42 +00:00
copilot-swe-agent[bot]
5392ca9794 Fix case-sensitive path comparison on Windows for file extraction
- Add PathComparison property that uses OrdinalIgnoreCase on Windows and Ordinal on Unix
- Update all path comparison checks in ExtractionMethods to use PathComparison
- Add comprehensive tests for extraction with case-insensitive paths
- Ensure security check for path traversal still works correctly

Co-authored-by: adamhathcock <527620+adamhathcock@users.noreply.github.com>
2025-10-28 10:16:07 +00:00
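A sketch of the pattern the commit describes, using only standard `System.Runtime.InteropServices` APIs (member names are illustrative):

```csharp
using System;
using System.Runtime.InteropServices;

static class Paths
{
    // Windows file systems compare paths case-insensitively by default; most
    // Unix file systems are case-sensitive, so the comparison follows suit.
    public static StringComparison PathComparison =>
        RuntimeInformation.IsOSPlatform(OSPlatform.Windows)
            ? StringComparison.OrdinalIgnoreCase
            : StringComparison.Ordinal;

    // The path-traversal security check keeps working under either comparison:
    public static bool IsInsideDestination(string fullPath, string destinationRoot) =>
        fullPath.StartsWith(destinationRoot, PathComparison);
}
```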
copilot-swe-agent[bot]
46672eb583 Update exceptions to inherit from SharpCompressException
Co-authored-by: adamhathcock <527620+adamhathcock@users.noreply.github.com>
2025-10-28 10:15:40 +00:00
Adam Hathcock
79653eee80 Merge remote-tracking branch 'origin/master' into async-reader-methods 2025-10-28 10:11:11 +00:00
Adam Hathcock
16ad86c52a add async implementations to readonlysubstream 2025-10-28 10:09:46 +00:00
copilot-swe-agent[bot]
6b7c6be5f5 Initial plan 2025-10-28 10:02:58 +00:00
copilot-swe-agent[bot]
fda1c2cc79 Initial plan 2025-10-28 10:01:37 +00:00
copilot-swe-agent[bot]
ef2fee0ee3 Initial plan 2025-10-28 10:00:50 +00:00
Adam Hathcock
e287d0811d minor clean up 2025-10-28 09:58:01 +00:00
Adam Hathcock
a7164f3c9f Merge pull request #987 from adamhathcock/copilot/fix-gzip-extract-not-supported-exception
Fix GZip extraction NotSupportedException for non-seekable streams
2025-10-27 14:26:04 +00:00
Adam Hathcock
c55060039a Merge branch 'master' into copilot/fix-gzip-extract-not-supported-exception 2025-10-27 14:21:49 +00:00
Adam Hathcock
c68d8deddd add async tests 2025-10-27 12:34:24 +00:00
Adam Hathcock
f6eabc5db1 Merge remote-tracking branch 'origin/master' into async-reader-methods
# Conflicts:
#	src/SharpCompress/packages.lock.json
2025-10-27 12:24:14 +00:00
Adam Hathcock
72d5884db6 added async reader overloads 2025-10-27 12:23:54 +00:00
Adam Hathcock
3595c89c79 Merge pull request #983 from adamhathcock/copilot/set-up-copilot-instructions
Configure Copilot coding agent instructions for SharpCompress
2025-10-27 12:13:50 +00:00
Adam Hathcock
9ebbc718c5 Merge branch 'master' into copilot/fix-gzip-extract-not-supported-exception 2025-10-27 12:11:32 +00:00
Adam Hathcock
e862480b86 Merge branch 'master' into copilot/set-up-copilot-instructions 2025-10-27 12:09:55 +00:00
Adam Hathcock
1f3d8fe6f1 Merge pull request #986 from adamhathcock/copilot/fix-compressiontype-none-bug
Support CompressionType.None for uncompressed 7z files
2025-10-27 12:06:52 +00:00
Adam Hathcock
41ae036ab4 Merge branch 'master' into copilot/set-up-copilot-instructions 2025-10-27 12:04:58 +00:00
copilot-swe-agent[bot]
588d176b96 Final verification - all tests pass
Co-authored-by: adamhathcock <527620+adamhathcock@users.noreply.github.com>
2025-10-27 11:15:16 +00:00
copilot-swe-agent[bot]
f8697120a0 Add support for CompressionType.None for uncompressed 7z files
Co-authored-by: adamhathcock <527620+adamhathcock@users.noreply.github.com>
2025-10-27 11:11:38 +00:00
copilot-swe-agent[bot]
1a767105e6 Add explanatory comment for EntryStartPosition initialization
Co-authored-by: adamhathcock <527620+adamhathcock@users.noreply.github.com>
2025-10-27 11:09:52 +00:00
copilot-swe-agent[bot]
4067b6ed2c Fix GZip extraction for non-seekable streams
- Modified GZipFilePart to only access stream.Position when stream.CanSeek is true
- Modified GZipArchiveEntry.OpenEntryStream to check CanSeek before accessing Position
- Added test case GZip_Archive_NonSeekableStream to verify non-seekable stream support
- All existing tests pass

Co-authored-by: adamhathcock <527620+adamhathcock@users.noreply.github.com>
2025-10-27 11:08:38 +00:00
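The guard is small enough to sketch in full; only `Stream.CanSeek` and `Position` from the BCL are assumed:

```csharp
using System.IO;

static class SeekGuard
{
    // Reading Position on a non-seekable stream throws NotSupportedException,
    // so capture it only when CanSeek allows it; null doubles as the
    // "position unknown" marker on the non-seekable path.
    public static long? TryGetPosition(Stream stream) =>
        stream.CanSeek ? stream.Position : (long?)null;
}
```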
copilot-swe-agent[bot]
b272dbfd1f Clarify CSharpier should be run from project root
Co-authored-by: adamhathcock <527620+adamhathcock@users.noreply.github.com>
2025-10-27 11:07:02 +00:00
copilot-swe-agent[bot]
48be7bbf86 Correct formatting instructions to use CSharpier
Co-authored-by: adamhathcock <527620+adamhathcock@users.noreply.github.com>
2025-10-27 11:05:47 +00:00
copilot-swe-agent[bot]
51e22cea71 Initial plan for fixing GZip non-seekable stream support
Co-authored-by: adamhathcock <527620+adamhathcock@users.noreply.github.com>
2025-10-27 11:02:53 +00:00
copilot-swe-agent[bot]
2241e27e68 Initial exploration: Understanding the issue
Co-authored-by: adamhathcock <527620+adamhathcock@users.noreply.github.com>
2025-10-27 11:02:29 +00:00
copilot-swe-agent[bot]
11c90ae879 Update Copilot configuration to reflect actual project setup
Co-authored-by: adamhathcock <527620+adamhathcock@users.noreply.github.com>
2025-10-27 11:00:02 +00:00
copilot-swe-agent[bot]
cf55125202 Initial plan 2025-10-27 10:58:07 +00:00
copilot-swe-agent[bot]
9cefb85905 Enhance AGENTS.md with SharpCompress-specific guidelines
Co-authored-by: adamhathcock <527620+adamhathcock@users.noreply.github.com>
2025-10-27 10:57:22 +00:00
copilot-swe-agent[bot]
fc672da0e0 Initial plan 2025-10-27 10:55:53 +00:00
Adam Hathcock
25b297b142 Merge pull request #980 from adamhathcock/async-gzip-tests
adds more async tests and overloads to make things writable and async
2025-10-27 10:54:42 +00:00
Adam Hathcock
ab03c12fa8 add more tests 2025-10-27 10:52:03 +00:00
copilot-swe-agent[bot]
3095c805ad Initial plan 2025-10-27 10:48:24 +00:00
Adam Hathcock
9c18daafb8 Merge remote-tracking branch 'origin/async-gzip-tests' into async-gzip-tests 2025-10-27 10:46:15 +00:00
Adam Hathcock
16182417fb add tar specific tests 2025-10-27 10:46:08 +00:00
Adam Hathcock
9af35201e4 Merge branch 'master' into async-gzip-tests 2025-10-27 10:31:51 +00:00
Adam Hathcock
f21b982955 adds more async tests and overloads to make things writable and async 2025-10-27 10:31:10 +00:00
Adam Hathcock
b3a20d05c5 Merge pull request #978 from adamhathcock/copilot/enhance-stream-io-async-support
Add comprehensive async/await support for Stream I/O operations
2025-10-27 10:23:08 +00:00
Adam Hathcock
4cd024a2b2 Merge remote-tracking branch 'origin/master' into copilot/enhance-stream-io-async-support 2025-10-27 10:20:06 +00:00
Adam Hathcock
63d08ebfd2 update agents 2025-10-27 10:19:57 +00:00
Adam Hathcock
c696197b03 formatting 2025-10-27 10:19:24 +00:00
Adam Hathcock
738a72228b added fixes and more async tests 2025-10-27 10:15:06 +00:00
Adam Hathcock
90641f4488 Merge pull request #979 from adamhathcock/dependabot/github_actions/actions/upload-artifact-5
Bump actions/upload-artifact from 4 to 5
2025-10-27 10:06:02 +00:00
Adam Hathcock
a4cc7eaf9b fully use async for zlibbase 2025-10-27 09:51:39 +00:00
Adam Hathcock
fdca728fdc add some dispose async 2025-10-27 09:47:15 +00:00
dependabot[bot]
d2c4ae8cdf Bump actions/upload-artifact from 4 to 5
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 4 to 5.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](https://github.com/actions/upload-artifact/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-10-27 09:43:01 +00:00
Adam Hathcock
f3d3ac30a6 add gubbins 2025-10-27 09:39:08 +00:00
Adam Hathcock
f8cc4ade8a format 2025-10-27 09:37:00 +00:00
copilot-swe-agent[bot]
b3975b7bbd Add async tests for EntryStream and compression streams
Co-authored-by: adamhathcock <527620+adamhathcock@users.noreply.github.com>
2025-10-27 09:28:34 +00:00
copilot-swe-agent[bot]
4f1b61f5bc Add async support to DeflateStream and GZipStream
Co-authored-by: adamhathcock <527620+adamhathcock@users.noreply.github.com>
2025-10-27 09:20:37 +00:00
copilot-swe-agent[bot]
beeb37b4fd Add async support to EntryStream, ZlibStream, and ZlibBaseStream
Co-authored-by: adamhathcock <527620+adamhathcock@users.noreply.github.com>
2025-10-27 09:11:29 +00:00
copilot-swe-agent[bot]
43aa2bad22 Integrate async/await support from PR #976 as baseline
Co-authored-by: adamhathcock <527620+adamhathcock@users.noreply.github.com>
2025-10-27 09:00:38 +00:00
copilot-swe-agent[bot]
1b2ba921bb Initial plan 2025-10-27 08:48:01 +00:00
Adam Hathcock
f543da0ea8 Merge pull request #977 from adamhathcock/copilot/add-copilot-agent-config
Add Copilot agent manifest and usage documentation
2025-10-27 08:42:20 +00:00
copilot-swe-agent[bot]
e60c9efa84 Add copilot agent configuration and documentation
- Create .github/agents/copilot-agent.yml with agent manifest
- Replace AGENTS.md with agent usage and command documentation

Co-authored-by: adamhathcock <527620+adamhathcock@users.noreply.github.com>
2025-10-25 18:16:47 +00:00
copilot-swe-agent[bot]
c52fc6f240 Initial plan 2025-10-25 18:12:45 +00:00
Adam Hathcock
ee136b024a Merge pull request #974 from adamhathcock/adam/enable-agent
chore: add Copilot coding agent config and CI workflow
2025-10-25 16:09:51 +01:00
Adam Hathcock
699bc5f34b chore: add Copilot coding agent config and CI workflow 2025-10-25 16:05:09 +01:00
Adam Hathcock
9eed8e842c Merge pull request #972 from TwanVanDongen/master
Handle vendor-specific and malformed ZIP extra fields safely
2025-10-25 13:53:10 +01:00
Twan van Dongen
6d652a12ee And again forgot to apply CSharpier. Adds bounds checks to prevent exceptions when extra fields are truncated or non-standard (e.g., 0x4341 "AC"/ARC0). Stops parsing gracefully, allowing other fields to be processed. 2025-10-24 17:18:37 +02:00
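A hedged sketch of that bounds checking (names and shapes are illustrative, not SharpCompress's actual parser): each ZIP extra field is a little-endian 2-byte id and 2-byte length followed by data, and a claimed length that overruns the buffer ends parsing instead of throwing:

```csharp
using System;
using System.Buffers.Binary;
using System.Collections.Generic;

static class ExtraFieldParser
{
    public static List<(ushort Id, byte[] Data)> Parse(byte[] extra)
    {
        var fields = new List<(ushort, byte[])>();
        var pos = 0;
        while (pos + 4 <= extra.Length) // need a complete id + length pair
        {
            var id = BinaryPrimitives.ReadUInt16LittleEndian(extra.AsSpan(pos, 2));
            var len = BinaryPrimitives.ReadUInt16LittleEndian(extra.AsSpan(pos + 2, 2));
            pos += 4;
            if (pos + len > extra.Length)
            {
                break; // truncated or vendor-specific field: keep what was read
            }
            fields.Add((id, extra[pos..(pos + len)]));
            pos += len;
        }
        return fields;
    }
}
```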
Adam Hathcock
e043e06656 Merge pull request #969 from adamhathcock/adam/perf
Add JB perf testing project.
2025-10-23 14:34:43 +01:00
Adam Hathcock
14b52599f4 Update src/SharpCompress/Compressors/Rar/UnpackV1/Unpack.cs
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-10-23 14:20:54 +01:00
Adam Hathcock
e3e2c0c567 Update tests/SharpCompress.Performance/LargeMemoryStream.cs
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-10-23 14:19:16 +01:00
Adam Hathcock
4fc5d60f03 reduce visibility 2025-10-23 14:16:39 +01:00
Adam Hathcock
c37a9e0f82 Merge remote-tracking branch 'origin/adam/perf' into adam/perf 2025-10-23 13:50:31 +01:00
Adam Hathcock
fed17ebb96 fmt 2025-10-23 13:50:07 +01:00
Adam Hathcock
eeac678872 More usage of pool and better copy 2025-10-23 13:49:54 +01:00
Adam Hathcock
f9ed0f2df9 Update tests/SharpCompress.Performance/Program.cs
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-10-23 11:47:42 +01:00
Adam Hathcock
0ddbacac85 Update src/SharpCompress/Compressors/Rar/UnpackV1/UnpackUtility.cs
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-10-23 11:47:27 +01:00
Adam Hathcock
f0d28aa5cf fmt 2025-10-23 11:43:38 +01:00
Adam Hathcock
cc84f6fee4 more making rar faster 2025-10-23 11:43:21 +01:00
Adam Hathcock
00e6eef369 used AI to optimize some copies and shifting 2025-10-23 11:18:50 +01:00
Adam Hathcock
1ae71907bc don't need to clear everything 2025-10-23 10:53:54 +01:00
Adam Hathcock
3ff688fba2 clear and null check 2025-10-23 10:48:18 +01:00
Adam Hathcock
bb59b3d456 add pool to LZMA out window 2025-10-23 09:54:52 +01:00
Adam Hathcock
186ea74ada add some fixes for rar to pool memory 2025-10-23 09:40:15 +01:00
Adam Hathcock
c108f2dcf3 add perf testing project using JB memory and cpu 2025-10-23 09:39:57 +01:00
Adam Hathcock
4cca232d83 Merge pull request #959 from adamhathcock/adam/xz-wrapped-often
Removed wrappers that weren't needed (probably)
2025-10-22 11:54:47 +01:00
Adam Hathcock
1db511e9cb Merge branch 'master' into adam/xz-wrapped-often 2025-10-22 11:51:46 +01:00
Adam Hathcock
76afa7d3bf Merge pull request #968 from adamhathcock/adam/rework-deps
rework dependencies to be correct for frameworks and update
2025-10-22 11:51:30 +01:00
Adam Hathcock
3513f7b1cd Update src/SharpCompress/SharpCompress.csproj
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-10-22 10:51:12 +01:00
Adam Hathcock
4531fe39e6 Merge branch 'master' into adam/rework-deps 2025-10-22 10:48:16 +01:00
Adam Hathcock
8d276a85bc rework dependencies to be correct for frameworks and update 2025-10-22 10:47:43 +01:00
Adam Hathcock
5f0d042bc3 Merge pull request #967 from adamhathcock/adam/reduce-custom-utilities
Reduce custom utilities for arrays/bytes
2025-10-22 10:41:10 +01:00
Adam Hathcock
408f07e3c4 Merge branch 'master' into adam/reduce-custom-utilities 2025-10-22 10:38:01 +01:00
Adam Hathcock
d1a540c90c use windows instead of skippable fact 2025-10-22 10:32:47 +01:00
Adam Hathcock
00df8e930e add windows only compile constant 2025-10-22 10:30:40 +01:00
Adam Hathcock
3b768b1b77 Merge pull request #961 from adamhathcock/dependabot/nuget/AwesomeAssertions-9.2.1
Bump AwesomeAssertions from 9.2.0 to 9.2.1
2025-10-22 10:25:01 +01:00
Adam Hathcock
42a7ececa0 Merge branch 'master' into adam/xz-wrapped-often 2025-10-22 10:22:36 +01:00
Adam Hathcock
e8867de049 Merge branch 'master' into dependabot/nuget/AwesomeAssertions-9.2.1 2025-10-22 10:21:59 +01:00
Adam Hathcock
a1dfa3dfa3 xplat tests for path characters 2025-10-22 10:21:22 +01:00
Adam Hathcock
83917d4f79 Merge remote-tracking branch 'origin/master' into adam/reduce-custom-utilities 2025-10-22 10:17:20 +01:00
Adam Hathcock
513cd4f905 some AI suggestions 2025-10-22 10:16:45 +01:00
Adam Hathcock
eda0309df3 Merge pull request #966 from adamhathcock/adam/reduce-stackalloc
Remove a dynamically created stackalloc
2025-10-22 10:13:14 +01:00
Adam Hathcock
74e27c028e fix the span length 2025-10-22 10:10:07 +01:00
Adam Hathcock
36c06c4089 ugh, this is used because it shadows a field 2025-10-22 09:32:19 +01:00
Adam Hathcock
249b8a9cdd add AI generated tests 2025-10-22 09:28:07 +01:00
Adam Hathcock
62bee15f00 fmt 2025-10-22 09:19:30 +01:00
Adam Hathcock
d8797b69e4 remove do while 2025-10-22 09:19:09 +01:00
Adam Hathcock
084fe72b02 Consolidate not null 2025-10-22 09:17:13 +01:00
Adam Hathcock
c823acaa3f optimize ReadFully and Skip 2025-10-22 09:10:16 +01:00
Adam Hathcock
e0d6cd9cb7 Try to reduce custom functions for array/byte management 2025-10-22 09:00:21 +01:00
Adam Hathcock
01021e102b remove some extra stackallocs 2025-10-22 08:36:03 +01:00
Adam Hathcock
6de738ff17 reduce dynamic stackallocs in unpackv1 2025-10-22 08:32:19 +01:00
Adam Hathcock
c0612547eb Merge pull request #964 from adamhathcock/adam/extract-all-solid-only
Only allow extract all on archives that are solid (some rars and 7zip only)
2025-10-21 14:08:23 +01:00
Adam Hathcock
e960907698 Update src/SharpCompress/Archives/AbstractArchive.cs
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-10-21 13:55:56 +01:00
Adam Hathcock
84e03b1b27 Allow 7zip files of all sizes? 2025-10-21 10:28:58 +01:00
Adam Hathcock
f1a80da34b fix tests that use extract all wrongly 2025-10-21 09:56:29 +01:00
Adam Hathcock
5a5a55e556 fmt 2025-10-21 09:22:35 +01:00
Adam Hathcock
e1f132b45b Only allow extract all on archives that are solid (some rars and 7zip only) 2025-10-21 09:21:46 +01:00
dependabot[bot]
087011aede Bump AwesomeAssertions from 9.2.0 to 9.2.1
---
updated-dependencies:
- dependency-name: AwesomeAssertions
  dependency-version: 9.2.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-10-20 10:44:17 +00:00
Adam Hathcock
1430bf9b31 fmt 2025-10-15 09:54:13 +01:00
Adam Hathcock
4e5de817ef Removed too many wrappers
# Conflicts:
#	src/SharpCompress/Compressors/Xz/XZIndex.cs
2025-10-15 09:53:46 +01:00
Adam Hathcock
5d6b94f8c3 Merge pull request #952 from adamhathcock/dependabot/github_actions/actions/checkout-5
Bump actions/checkout from 4 to 5
2025-10-14 08:25:53 +01:00
Adam Hathcock
8dfbe56f42 Merge branch 'master' into dependabot/github_actions/actions/checkout-5 2025-10-14 08:23:18 +01:00
Adam Hathcock
df79d983d7 Merge pull request #957 from adamhathcock/dependabot/github_actions/actions/setup-dotnet-5
Bump actions/setup-dotnet from 4 to 5
2025-10-14 08:22:47 +01:00
dependabot[bot]
6c23a28826 Bump actions/setup-dotnet from 4 to 5
Bumps [actions/setup-dotnet](https://github.com/actions/setup-dotnet) from 4 to 5.
- [Release notes](https://github.com/actions/setup-dotnet/releases)
- [Commits](https://github.com/actions/setup-dotnet/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/setup-dotnet
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-10-13 16:21:25 +00:00
dependabot[bot]
f72289570a Bump actions/checkout from 4 to 5
Bumps [actions/checkout](https://github.com/actions/checkout) from 4 to 5.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-10-13 16:12:51 +00:00
Adam Hathcock
51bc9dc20e Merge pull request #950 from adamhathcock/adamhathcock-patch-1
Configure Dependabot for NuGet updates
2025-10-13 16:52:07 +01:00
Adam Hathcock
e45ac6bfa9 Update .github/dependabot.yml
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-10-13 16:48:55 +01:00
Adam Hathcock
44d1cbdb0c Configure Dependabot for NuGet updates
Added NuGet package ecosystem configuration for Dependabot.
2025-10-13 16:47:46 +01:00
227 changed files with 32046 additions and 2204 deletions

.config/dotnet-tools.json

@@ -3,7 +3,7 @@
"isRoot": true,
"tools": {
"csharpier": {
"version": "1.1.2",
"version": "1.2.1",
"commands": [
"csharpier"
],

.copilot-agent.yml (new file, 7 lines)

@@ -0,0 +1,7 @@
enabled: true
agent:
  name: copilot-coding-agent
allow:
  - paths: ["src/**/*", "tests/**/*", "README.md", "AGENTS.md"]
    actions: ["create", "modify"]
require_review_before_merge: true

.github/COPILOT_AGENT_README.md (new file, 15 lines)

@@ -0,0 +1,15 @@
# Copilot Coding Agent Configuration
This repository includes a minimal opt-in configuration and CI workflow to allow the GitHub Copilot coding agent to open and validate PRs.
- .copilot-agent.yml: opt-in config for automated agents
- .github/agents/copilot-agent.yml: detailed agent policy configuration
- .github/workflows/dotnetcore.yml: CI runs on PRs touching the solution, source, or tests to validate changes
- AGENTS.md: general instructions for Copilot coding agent with project-specific guidelines
Maintainers can adjust the allowed paths or disable the agent by editing or removing .copilot-agent.yml.
Notes:
- The agent can create, modify, and delete files within the allowed paths (src, tests, README.md, AGENTS.md)
- All changes require review before merge
- If build/test paths are different, update the workflow accordingly; this workflow targets SharpCompress.sln and the SharpCompress.Test test project.

.github/agents/copilot-agent.yml (new file, 17 lines)

@@ -0,0 +1,17 @@
enabled: true
agent:
  name: copilot-coding-agent
allow:
  - paths: ["src/**/*", "tests/**/*", "README.md", "AGENTS.md"]
    actions: ["create", "modify", "delete"]
require_review_before_merge: true
required_approvals: 1
allowed_merge_strategies:
  - squash
  - merge
auto_merge_on_green: false
run_workflows: true
notes: |
  - This manifest expresses the policy for the Copilot coding agent in this repository.
  - It does NOT install or authorize the agent; a repository admin must install the Copilot coding agent app and grant the repository the necessary permissions (contents: write, pull_requests: write, checks: write, actions: write/read, issues: write) to allow the agent to act.
  - Keep allow paths narrow and prefer require_review_before_merge during initial rollout.

.github/dependabot.yml

@@ -1,6 +1,13 @@
version: 2
updates:
- package-ecosystem: "github-actions" # search for actions - there are other options available
directory: "/" # search in .github/workflows under root `/`
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "weekly" # check for action update every week
interval: "weekly"
- package-ecosystem: "nuget"
directory: "/" # change to "/src/YourProject" if .csproj files are in subfolders
schedule:
interval: "weekly"
open-pull-requests-limit: 5
# optional: target-branch: "master"

.github/workflows/dotnetcore.yml

@@ -14,12 +14,12 @@ jobs:
os: [windows-latest, ubuntu-latest]
steps:
- uses: actions/checkout@v4
- uses: actions/setup-dotnet@v4
- uses: actions/checkout@v6
- uses: actions/setup-dotnet@v5
with:
dotnet-version: 8.0.x
- run: dotnet run --project build/build.csproj
- uses: actions/upload-artifact@v4
- uses: actions/upload-artifact@v5
with:
name: ${{ matrix.os }}-sharpcompress.nupkg
path: artifacts/*

.gitignore (3 lines changed)

@@ -11,6 +11,8 @@ TestResults/
packages/*/
project.lock.json
tests/TestArchives/Scratch
tests/TestArchives/*/Scratch
tests/TestArchives/*/Scratch2
.vs
tools
.vscode
@@ -18,4 +20,3 @@ tools
.DS_Store
*.snupkg
/tests/TestArchives/6d23a38c-f064-4ef1-ad89-b942396f53b9/Scratch

AGENTS.md (106 lines changed)

@@ -1,19 +1,24 @@
---
description: 'Guidelines for building C# applications'
description: 'Guidelines for building SharpCompress - A C# compression library'
applyTo: '**/*.cs'
---
# C# Development
# SharpCompress Development
## About SharpCompress
SharpCompress is a pure C# compression library supporting multiple archive formats (Zip, Tar, GZip, BZip2, 7Zip, Rar, LZip, XZ, ZStandard) for .NET Framework 4.62, .NET Standard 2.1, .NET 6.0, and .NET 8.0. The library provides both seekable Archive APIs and forward-only Reader/Writer APIs for streaming scenarios.
## C# Instructions
- Always use the latest version C#, currently C# 13 features.
- Write clear and concise comments for each function.
- Follow the existing code style and patterns in the codebase.
## General Instructions
- Make only high confidence suggestions when reviewing code changes.
- Write code with good maintainability practices, including comments on why certain design decisions were made.
- Handle edge cases and write clear exception handling.
- For libraries or external dependencies, mention their usage and purpose in comments.
- Preserve backward compatibility when making changes to public APIs.
## Naming Conventions
@@ -23,20 +28,26 @@ applyTo: '**/*.cs'
## Code Formatting
- Use CSharpier for all code formatting to ensure consistent style across the project.
- Install CSharpier globally: `dotnet tool install -g csharpier`
- Format files with: `dotnet csharpier format .`
- Configure your IDE to format on save using CSharpier.
- CSharpier configuration can be customized via `.csharpierrc` file in the project root.
- Trust CSharpier's opinionated formatting decisions to maintain consistency.
- Use CSharpier for code formatting to ensure consistent style across the project
- CSharpier is configured as a local tool in `.config/dotnet-tools.json`
- Restore tools with: `dotnet tool restore`
- Format files from the project root with: `dotnet csharpier .`
- **Run `dotnet csharpier .` from the project root after making code changes before committing**
- Configure your IDE to format on save using CSharpier for the best experience
- The project also uses `.editorconfig` for editor settings (indentation, encoding, etc.)
- Let CSharpier handle code style while `.editorconfig` handles editor behavior
## Project Setup and Structure
- Guide users through creating a new .NET project with the appropriate templates.
- Explain the purpose of each generated file and folder to build understanding of the project structure.
- Demonstrate how to organize code using feature folders or domain-driven design principles.
- Show proper separation of concerns with models, services, and data access layers.
- Explain the Program.cs and configuration system in ASP.NET Core 9 including environment-specific settings.
- The project targets multiple frameworks: .NET Framework 4.62, .NET Standard 2.1, .NET 6.0, and .NET 8.0
- Main library is in `src/SharpCompress/`
- Tests are in `tests/SharpCompress.Test/`
- Performance tests are in `tests/SharpCompress.Performance/`
- Test archives are in `tests/TestArchives/`
- Build project is in `build/`
- Use `dotnet build` to build the solution
- Use `dotnet test` to run tests
- Solution file: `SharpCompress.sln`
## Nullable Reference Types
@@ -44,21 +55,64 @@ applyTo: '**/*.cs'
- Always use `is null` or `is not null` instead of `== null` or `!= null`.
- Trust the C# null annotations and don't add null checks when the type system says a value cannot be null.
## SharpCompress-Specific Guidelines
### Supported Formats
SharpCompress supports multiple archive and compression formats:
- **Archive Formats**: Zip, Tar, 7Zip, Rar (read-only)
- **Compression**: DEFLATE, BZip2, LZMA/LZMA2, PPMd, ZStandard (decompress only), Deflate64 (decompress only)
- **Combined Formats**: Tar.GZip, Tar.BZip2, Tar.LZip, Tar.XZ, Tar.ZStandard
- See FORMATS.md for complete format support matrix
### Stream Handling Rules
- **Disposal**: As of version 0.21, SharpCompress closes wrapped streams by default
- Use `ReaderOptions` or `WriterOptions` with `LeaveStreamOpen = true` to control stream disposal
- Use `NonDisposingStream` wrapper when working with compression streams directly to prevent disposal
- Always dispose of readers, writers, and archives in `using` blocks
- For forward-only operations, use Reader/Writer APIs; for random access, use Archive APIs
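For example, a minimal sketch of the `LeaveStreamOpen` rule above, assuming the `ReaderOptions` API shown elsewhere in this diff:

```csharp
using System.IO;
using SharpCompress.Readers;

using (Stream stream = File.OpenRead("archive.zip"))
{
    using (var reader = ReaderFactory.Open(stream, new ReaderOptions { LeaveStreamOpen = true }))
    {
        while (reader.MoveToNextEntry())
        {
            // process entries...
        }
    } // disposing the reader does NOT close stream, because LeaveStreamOpen = true
} // the outer using still closes the stream we created
```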
### Async/Await Patterns
- All I/O operations support async/await with `CancellationToken`
- Async methods follow the naming convention: `MethodNameAsync`
- Key async methods:
- `WriteEntryToAsync` - Extract entry asynchronously
- `WriteAllToDirectoryAsync` - Extract all entries asynchronously
- `WriteAsync` - Write entry asynchronously
- `WriteAllAsync` - Write directory asynchronously
- `OpenEntryStreamAsync` - Open entry stream asynchronously
- Always provide `CancellationToken` parameter in async methods
### Archive APIs vs Reader/Writer APIs
- **Archive API**: Use for random access with seekable streams (e.g., `ZipArchive`, `TarArchive`)
- **Reader API**: Use for forward-only reading on non-seekable streams (e.g., `ZipReader`, `TarReader`)
- **Writer API**: Use for forward-only writing on streams (e.g., `ZipWriter`, `TarWriter`)
- 7Zip only supports Archive API due to format limitations
### Tar-Specific Considerations
- Tar format requires file size in the header
- If no size is specified to TarWriter and the stream is not seekable, an exception will be thrown
- Tar combined with compression (GZip, BZip2, LZip, XZ) is supported
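A short sketch of the Tar size rule, in the style of the USAGE examples later in this diff; the `TarWriter.Write` overload taking an explicit size is assumed from the rule above:

```csharp
using System;
using System.IO;
using SharpCompress.Common;
using SharpCompress.Writers.Tar;

using (Stream output = File.OpenWrite("output.tar"))
using (var writer = new TarWriter(output, new TarWriterOptions(CompressionType.None, finalizeArchiveOnClose: true)))
using (Stream file = File.OpenRead("input.bin"))
{
    // The size goes into the tar header before any data is written, which is
    // what makes writing to a non-seekable destination possible.
    writer.Write("input.bin", file, DateTime.Now, file.Length);
}
```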
### Zip-Specific Considerations
- Supports Zip64 for large files (seekable streams only)
- Supports PKWare and WinZip AES encryption
- Multiple compression methods: None, Shrink, Reduce, Implode, DEFLATE, Deflate64, BZip2, LZMA, PPMd
- Encrypted LZMA is not supported
### Performance Considerations
- For large files, use Reader/Writer APIs with non-seekable streams to avoid loading entire file in memory
- Leverage async I/O for better scalability
- Consider compression level trade-offs (speed vs. size)
- Use appropriate buffer sizes for stream operations
## Testing
- Always include test cases for critical paths of the application.
- Guide users through creating unit tests.
- Test with multiple archive formats when making changes to core functionality.
- Include tests for both Archive and Reader/Writer APIs when applicable.
- Test async operations with cancellation tokens.
- Do not emit "Act", "Arrange" or "Assert" comments.
- Copy existing style in nearby files for test method names and capitalization.
- Explain integration testing approaches for API endpoints.
- Demonstrate how to mock dependencies for effective testing.
- Show how to test authentication and authorization logic.
- Explain test-driven development principles as applied to API development.
## Performance Optimization
- Guide users on implementing caching strategies (in-memory, distributed, response caching).
- Explain asynchronous programming patterns and why they matter for API performance.
- Demonstrate pagination, filtering, and sorting for large data sets.
- Show how to implement compression and other performance optimizations.
- Explain how to measure and benchmark API performance.
- Use test archives from `tests/TestArchives` directory for consistency.
- Test stream disposal and `LeaveStreamOpen` behavior.
- Test edge cases: empty archives, large files, corrupted archives, encrypted archives.

Directory.Packages.props

@@ -1,19 +1,20 @@
<Project>
<ItemGroup>
<PackageVersion Include="Bullseye" Version="6.0.0" />
<PackageVersion Include="AwesomeAssertions" Version="9.2.0" />
<PackageVersion Include="AwesomeAssertions" Version="9.3.0" />
<PackageVersion Include="Glob" Version="1.1.9" />
<PackageVersion Include="JetBrains.Profiler.SelfApi" Version="2.5.14" />
<PackageVersion Include="Microsoft.Bcl.AsyncInterfaces" Version="8.0.0" />
<PackageVersion Include="Microsoft.NET.Test.Sdk" Version="17.13.0" />
<PackageVersion Include="Microsoft.NET.Test.Sdk" Version="18.0.0" />
<PackageVersion Include="Mono.Posix.NETStandard" Version="1.0.0" />
<PackageVersion Include="SimpleExec" Version="12.0.0" />
<PackageVersion Include="System.Buffers" Version="4.6.0" />
<PackageVersion Include="System.Memory" Version="4.6.0" />
<PackageVersion Include="System.Buffers" Version="4.6.1" />
<PackageVersion Include="System.Memory" Version="4.6.3" />
<PackageVersion Include="System.Text.Encoding.CodePages" Version="8.0.0" />
<PackageVersion Include="xunit" Version="2.9.3" />
<PackageVersion Include="xunit.runner.visualstudio" Version="3.1.5" />
<PackageVersion Include="xunit.SkippableFact" Version="1.5.23" />
<PackageVersion Include="ZstdSharp.Port" Version="0.8.6" />
<PackageVersion Include="Microsoft.NET.ILLink.Tasks" Version="8.0.21" />
<PackageVersion Include="Microsoft.SourceLink.GitHub" Version="8.0.0" />
<PackageVersion Include="Microsoft.NETFramework.ReferenceAssemblies" Version="1.0.3" />
</ItemGroup>

README.md

@@ -1,9 +1,11 @@
# SharpCompress
SharpCompress is a compression library in pure C# for .NET Framework 4.62, .NET Standard 2.1, .NET 6.0 and NET 8.0 that can unrar, un7zip, unzip, untar unbzip2, ungzip, unlzip, unzstd with forward-only reading and file random access APIs. Write support for zip/tar/bzip2/gzip/lzip are implemented.
SharpCompress is a compression library in pure C# for .NET Framework 4.62, .NET Standard 2.1, .NET 6.0 and NET 8.0 that can unrar, un7zip, unzip, untar unbzip2, ungzip, unlzip, unzstd, unarc and unarj with forward-only reading and file random access APIs. Write support for zip/tar/bzip2/gzip/lzip are implemented.
The major feature is support for non-seekable streams so large files can be processed on the fly (i.e. download stream).
**NEW:** All I/O operations now support async/await for improved performance and scalability. See the [Async Usage](#async-usage) section below.
GitHub Actions Build -
[![SharpCompress](https://github.com/adamhathcock/sharpcompress/actions/workflows/dotnetcore.yml/badge.svg)](https://github.com/adamhathcock/sharpcompress/actions/workflows/dotnetcore.yml)
[![Static Badge](https://img.shields.io/badge/API%20Docs-DNDocs-190088?logo=readme&logoColor=white)](https://dndocs.com/d/sharpcompress/api/index.html)
@@ -32,6 +34,82 @@ Hi everyone. I hope you're using SharpCompress and finding it useful. Please giv
Please do not email me directly to ask for help. If you think there is a real issue, please report it here.
## Async Usage
SharpCompress now provides full async/await support for all I/O operations, allowing for better performance and scalability in modern applications.
### Async Reading Examples
Extract entries asynchronously:
```csharp
using (Stream stream = File.OpenRead("archive.zip"))
using (var reader = ReaderFactory.Open(stream))
{
while (reader.MoveToNextEntry())
{
if (!reader.Entry.IsDirectory)
{
// Async extraction
await reader.WriteEntryToDirectoryAsync(
@"C:\temp",
new ExtractionOptions() { ExtractFullPath = true, Overwrite = true },
cancellationToken
);
}
}
}
```
Extract all entries to directory asynchronously:
```csharp
using (Stream stream = File.OpenRead("archive.tar.gz"))
using (var reader = ReaderFactory.Open(stream))
{
await reader.WriteAllToDirectoryAsync(
@"C:\temp",
new ExtractionOptions() { ExtractFullPath = true, Overwrite = true },
cancellationToken
);
}
```
Open entry stream asynchronously:
```csharp
using (var archive = ZipArchive.Open("archive.zip"))
{
foreach (var entry in archive.Entries.Where(e => !e.IsDirectory))
{
using (var entryStream = await entry.OpenEntryStreamAsync(cancellationToken))
{
// Process stream asynchronously
await entryStream.CopyToAsync(outputStream, cancellationToken);
}
}
}
```
### Async Writing Examples
Write files asynchronously:
```csharp
using (Stream stream = File.OpenWrite("output.zip"))
using (var writer = WriterFactory.Open(stream, ArchiveType.Zip, CompressionType.Deflate))
{
await writer.WriteAsync("file1.txt", fileStream, DateTime.Now, cancellationToken);
}
```
Write all files from directory asynchronously:
```csharp
using (Stream stream = File.OpenWrite("output.tar.gz"))
using (var writer = WriterFactory.Open(stream, ArchiveType.Tar, new WriterOptions(CompressionType.GZip)))
{
await writer.WriteAllAsync(@"D:\files", "*", SearchOption.AllDirectories, cancellationToken);
}
```
All async methods support `CancellationToken` for graceful cancellation of long-running operations.
## Want to contribute?
I'm always looking for help or ideas. Please submit code or email with ideas. Unfortunately, just letting me know you'd like to help is not enough because I really have no overall plan of what needs to be done. I'll definitely accept code submissions and add you as a member of the project!

SharpCompress.sln

@@ -21,8 +21,14 @@ Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "Config", "Config", "{CDB425
Directory.Packages.props = Directory.Packages.props
NuGet.config = NuGet.config
.github\workflows\dotnetcore.yml = .github\workflows\dotnetcore.yml
USAGE.md = USAGE.md
README.md = README.md
FORMATS.md = FORMATS.md
AGENTS.md = AGENTS.md
EndProjectSection
EndProject
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "SharpCompress.Performance", "tests\SharpCompress.Performance\SharpCompress.Performance.csproj", "{5BDE6DBC-9E5F-4E21-AB71-F138A3E72B17}"
EndProject
Global
GlobalSection(SolutionConfigurationPlatforms) = preSolution
Debug|Any CPU = Debug|Any CPU
@@ -41,6 +47,10 @@ Global
{D4D613CB-5E94-47FB-85BE-B8423D20C545}.Debug|Any CPU.Build.0 = Debug|Any CPU
{D4D613CB-5E94-47FB-85BE-B8423D20C545}.Release|Any CPU.ActiveCfg = Release|Any CPU
{D4D613CB-5E94-47FB-85BE-B8423D20C545}.Release|Any CPU.Build.0 = Release|Any CPU
{5BDE6DBC-9E5F-4E21-AB71-F138A3E72B17}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
{5BDE6DBC-9E5F-4E21-AB71-F138A3E72B17}.Debug|Any CPU.Build.0 = Debug|Any CPU
{5BDE6DBC-9E5F-4E21-AB71-F138A3E72B17}.Release|Any CPU.ActiveCfg = Release|Any CPU
{5BDE6DBC-9E5F-4E21-AB71-F138A3E72B17}.Release|Any CPU.Build.0 = Release|Any CPU
EndGlobalSection
GlobalSection(SolutionProperties) = preSolution
HideSolutionNode = FALSE
@@ -48,5 +58,6 @@ Global
GlobalSection(NestedProjects) = preSolution
{FD19DDD8-72B2-4024-8665-0D1F7A2AA998} = {3C5BE746-03E5-4895-9988-0B57F162F86C}
{F2B1A1EB-0FA6-40D0-8908-E13247C7226F} = {0F0901FF-E8D9-426A-B5A2-17C7F47C1529}
{5BDE6DBC-9E5F-4E21-AB71-F138A3E72B17} = {0F0901FF-E8D9-426A-B5A2-17C7F47C1529}
EndGlobalSection
EndGlobal

USAGE.md (143 lines changed)

@@ -1,5 +1,18 @@
# SharpCompress Usage
## Async/Await Support
SharpCompress now provides full async/await support for all I/O operations. All `Read`, `Write`, and extraction operations have async equivalents ending in `Async` that accept an optional `CancellationToken`. This enables better performance and scalability for I/O-bound operations.
**Key Async Methods:**
- `reader.WriteEntryToAsync(stream, cancellationToken)` - Extract entry asynchronously
- `reader.WriteAllToDirectoryAsync(path, options, cancellationToken)` - Extract all asynchronously
- `writer.WriteAsync(filename, stream, modTime, cancellationToken)` - Write entry asynchronously
- `writer.WriteAllAsync(directory, pattern, searchOption, cancellationToken)` - Write directory asynchronously
- `entry.OpenEntryStreamAsync(cancellationToken)` - Open entry stream asynchronously
See [Async Examples](#async-examples) section below for usage patterns.
## Stream Rules (changed with 0.21)
When dealing with Streams, the rule is that you shouldn't close a stream you didn't create. In practice, this means you should always put a Stream you do create in a using block so it's disposed.
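A minimal sketch of the rule, with `archive.zip` as a placeholder:
```C#
// You created this FileStream, so you dispose it (the using block does that).
using (Stream stream = File.OpenRead("archive.zip"))
using (var reader = ReaderFactory.Open(stream))
{
    // SharpCompress won't dispose 'stream' here unless you ask it to
    // via ReaderOptions.LeaveStreamOpen = false.
    while (reader.MoveToNextEntry()) { /* ... */ }
}
```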
@@ -172,3 +185,133 @@ foreach(var entry in tr.Entries)
Console.WriteLine($"{entry.Key}");
}
```
## Async Examples
### Async Reader Examples
**Extract single entry asynchronously:**
```C#
using (Stream stream = File.OpenRead("archive.zip"))
using (var reader = ReaderFactory.Open(stream))
{
while (reader.MoveToNextEntry())
{
if (!reader.Entry.IsDirectory)
{
using (var entryStream = reader.OpenEntryStream())
{
using (var outputStream = File.Create("output.bin"))
{
await reader.WriteEntryToAsync(outputStream, cancellationToken);
}
}
}
}
}
```
**Extract all entries asynchronously:**
```C#
using (Stream stream = File.OpenRead("archive.tar.gz"))
using (var reader = ReaderFactory.Open(stream))
{
await reader.WriteAllToDirectoryAsync(
@"D:\temp",
new ExtractionOptions()
{
ExtractFullPath = true,
Overwrite = true
},
cancellationToken
);
}
```
**Open and process entry stream asynchronously:**
```C#
using (var archive = ZipArchive.Open("archive.zip"))
{
foreach (var entry in archive.Entries.Where(e => !e.IsDirectory))
{
using (var entryStream = await entry.OpenEntryStreamAsync(cancellationToken))
{
// Process the decompressed stream asynchronously
await ProcessStreamAsync(entryStream, cancellationToken);
}
}
}
```
### Async Writer Examples
**Write single file asynchronously:**
```C#
using (Stream archiveStream = File.OpenWrite("output.zip"))
using (var writer = WriterFactory.Open(archiveStream, ArchiveType.Zip, CompressionType.Deflate))
{
using (Stream fileStream = File.OpenRead("input.txt"))
{
await writer.WriteAsync("entry.txt", fileStream, DateTime.Now, cancellationToken);
}
}
```
**Write entire directory asynchronously:**
```C#
using (Stream stream = File.OpenWrite("backup.tar.gz"))
using (var writer = WriterFactory.Open(stream, ArchiveType.Tar, new WriterOptions(CompressionType.GZip)))
{
await writer.WriteAllAsync(
@"D:\files",
"*",
SearchOption.AllDirectories,
cancellationToken
);
}
```
**Write with timeout-based cancellation:**
```C#
var cts = new CancellationTokenSource();
// Set timeout or cancel from UI
cts.CancelAfter(TimeSpan.FromMinutes(5));
using (Stream stream = File.OpenWrite("archive.zip"))
using (var writer = WriterFactory.Open(stream, ArchiveType.Zip, CompressionType.Deflate))
{
try
{
await writer.WriteAllAsync(@"D:\data", "*", SearchOption.AllDirectories, cts.Token);
}
catch (OperationCanceledException)
{
Console.WriteLine("Operation was cancelled");
}
}
```
### Archive Async Examples
**Extract from archive asynchronously:**
```C#
using (var archive = ZipArchive.Open("archive.zip"))
{
using (var reader = archive.ExtractAllEntries())
{
await reader.WriteAllToDirectoryAsync(
@"C:\output",
new ExtractionOptions() { ExtractFullPath = true, Overwrite = true },
cancellationToken
);
}
}
```
**Benefits of Async Operations:**
- Non-blocking I/O for better application responsiveness
- Improved scalability for server applications
- Support for cancellation via CancellationToken
- Better resource utilization in async/await contexts
- Compatible with modern .NET async patterns


@@ -144,6 +144,12 @@ public abstract class AbstractArchive<TEntry, TVolume> : IArchive, IArchiveExtra
/// <returns></returns>
public IReader ExtractAllEntries()
{
if (!IsSolid && Type != ArchiveType.SevenZip && Type != ArchiveType.Tar)
{
throw new InvalidOperationException(
"ExtractAllEntries can only be used on solid archives or 7Zip archives (which require random access)."
);
}
((IArchiveExtractionListener)this).EnsureEntriesLoaded();
return CreateReaderForSolidExtraction();
}
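A hedged usage sketch of this guard: check `IsSolid` or the archive type before calling `ExtractAllEntries()`, and fall back to per-entry extraction otherwise (paths are placeholders):
```csharp
using var archive = ArchiveFactory.Open("archive.7z");

if (archive.IsSolid || archive.Type is ArchiveType.SevenZip or ArchiveType.Tar)
{
    // Single sequential pass over the archive
    using var reader = archive.ExtractAllEntries();
    while (reader.MoveToNextEntry())
    {
        // extract via reader.WriteEntryTo(...)
    }
}
else
{
    // Random-access formats: extract entry by entry
    foreach (var entry in archive.Entries.Where(e => !e.IsDirectory))
    {
        entry.WriteToDirectory(@"C:\output");
    }
}
```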


@@ -2,6 +2,8 @@ using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.Common;
using SharpCompress.IO;
using SharpCompress.Writers;
@@ -94,6 +96,9 @@ public abstract class AbstractWritableArchive<TEntry, TVolume>
DateTime? modified
) => AddEntry(key, source, closeStream, size, modified);
IArchiveEntry IWritableArchive.AddDirectoryEntry(string key, DateTime? modified) =>
AddDirectoryEntry(key, modified);
public TEntry AddEntry(
string key,
Stream source,
@@ -134,6 +139,22 @@ public abstract class AbstractWritableArchive<TEntry, TVolume>
return false;
}
public TEntry AddDirectoryEntry(string key, DateTime? modified = null)
{
if (key.Length > 0 && key[0] is '/' or '\\')
{
key = key.Substring(1);
}
if (DoesKeyMatchExisting(key))
{
throw new ArchiveException("Cannot add entry with duplicate key: " + key);
}
var entry = CreateDirectoryEntry(key, modified);
newEntries.Add(entry);
RebuildModifiedCollection();
return entry;
}
public void SaveTo(Stream stream, WriterOptions options)
{
//reset streams of new entries
@@ -141,6 +162,18 @@ public abstract class AbstractWritableArchive<TEntry, TVolume>
SaveTo(stream, options, OldEntries, newEntries);
}
public async Task SaveToAsync(
Stream stream,
WriterOptions options,
CancellationToken cancellationToken = default
)
{
//reset streams of new entries
newEntries.Cast<IWritableArchiveEntry>().ForEach(x => x.Stream.Seek(0, SeekOrigin.Begin));
await SaveToAsync(stream, options, OldEntries, newEntries, cancellationToken)
.ConfigureAwait(false);
}
protected TEntry CreateEntry(
string key,
Stream source,
@@ -166,6 +199,8 @@ public abstract class AbstractWritableArchive<TEntry, TVolume>
bool closeStream
);
protected abstract TEntry CreateDirectoryEntry(string key, DateTime? modified);
protected abstract void SaveTo(
Stream stream,
WriterOptions options,
@@ -173,6 +208,14 @@ public abstract class AbstractWritableArchive<TEntry, TVolume>
IEnumerable<TEntry> newEntries
);
protected abstract Task SaveToAsync(
Stream stream,
WriterOptions options,
IEnumerable<TEntry> oldEntries,
IEnumerable<TEntry> newEntries,
CancellationToken cancellationToken = default
);
public override void Dispose()
{
base.Dispose();
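A sketch of how `AddDirectoryEntry` and `SaveToAsync` compose on a concrete writable archive, using `ZipArchive.Create()`; file names are placeholders:
```csharp
using var archive = ZipArchive.Create();
archive.AddDirectoryEntry("docs/", DateTime.Now);
archive.AddEntry("docs/readme.txt", File.OpenRead("readme.txt"), closeStream: true);

using var output = File.Create("output.zip");
await archive.SaveToAsync(output, new WriterOptions(CompressionType.Deflate), cancellationToken);
```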


@@ -20,7 +20,7 @@ public static class ArchiveFactory
public static IArchive Open(Stream stream, ReaderOptions? readerOptions = null)
{
readerOptions ??= new ReaderOptions();
stream = new SharpCompressStream(stream, bufferSize: readerOptions.BufferSize);
stream = SharpCompressStream.Create(stream, bufferSize: readerOptions.BufferSize);
return FindFactory<IArchiveFactory>(stream).Open(stream, readerOptions);
}
@@ -45,7 +45,7 @@ public static class ArchiveFactory
/// <param name="options"></param>
public static IArchive Open(string filePath, ReaderOptions? options = null)
{
filePath.CheckNotNullOrEmpty(nameof(filePath));
filePath.NotNullOrEmpty(nameof(filePath));
return Open(new FileInfo(filePath), options);
}
@@ -68,7 +68,7 @@ public static class ArchiveFactory
/// <param name="options"></param>
public static IArchive Open(IEnumerable<FileInfo> fileInfos, ReaderOptions? options = null)
{
fileInfos.CheckNotNull(nameof(fileInfos));
fileInfos.NotNull(nameof(fileInfos));
var filesArray = fileInfos.ToArray();
if (filesArray.Length == 0)
{
@@ -81,7 +81,7 @@ public static class ArchiveFactory
return Open(fileInfo, options);
}
fileInfo.CheckNotNull(nameof(fileInfo));
fileInfo.NotNull(nameof(fileInfo));
options ??= new ReaderOptions { LeaveStreamOpen = false };
return FindFactory<IMultiArchiveFactory>(fileInfo).Open(filesArray, options);
@@ -94,7 +94,7 @@ public static class ArchiveFactory
/// <param name="options"></param>
public static IArchive Open(IEnumerable<Stream> streams, ReaderOptions? options = null)
{
streams.CheckNotNull(nameof(streams));
streams.NotNull(nameof(streams));
var streamsArray = streams.ToArray();
if (streamsArray.Length == 0)
{
@@ -107,7 +107,7 @@ public static class ArchiveFactory
return Open(firstStream, options);
}
firstStream.CheckNotNull(nameof(firstStream));
firstStream.NotNull(nameof(firstStream));
options ??= new ReaderOptions();
return FindFactory<IMultiArchiveFactory>(firstStream).Open(streamsArray, options);
@@ -129,7 +129,7 @@ public static class ArchiveFactory
private static T FindFactory<T>(FileInfo finfo)
where T : IFactory
{
finfo.CheckNotNull(nameof(finfo));
finfo.NotNull(nameof(finfo));
using Stream stream = finfo.OpenRead();
return FindFactory<T>(stream);
}
@@ -137,7 +137,7 @@ public static class ArchiveFactory
private static T FindFactory<T>(Stream stream)
where T : IFactory
{
stream.CheckNotNull(nameof(stream));
stream.NotNull(nameof(stream));
if (!stream.CanRead || !stream.CanSeek)
{
throw new ArgumentException("Stream should be readable and seekable");
@@ -172,7 +172,7 @@ public static class ArchiveFactory
int bufferSize = ReaderOptions.DefaultBufferSize
)
{
filePath.CheckNotNullOrEmpty(nameof(filePath));
filePath.NotNullOrEmpty(nameof(filePath));
using Stream s = File.OpenRead(filePath);
return IsArchive(s, out type, bufferSize);
}
@@ -184,7 +184,7 @@ public static class ArchiveFactory
)
{
type = null;
stream.CheckNotNull(nameof(stream));
stream.NotNull(nameof(stream));
if (!stream.CanRead || !stream.CanSeek)
{
@@ -215,7 +215,7 @@ public static class ArchiveFactory
/// <returns></returns>
public static IEnumerable<string> GetFileParts(string part1)
{
part1.CheckNotNullOrEmpty(nameof(part1));
part1.NotNullOrEmpty(nameof(part1));
return GetFileParts(new FileInfo(part1)).Select(a => a.FullName);
}
@@ -226,7 +226,7 @@ public static class ArchiveFactory
/// <returns></returns>
public static IEnumerable<FileInfo> GetFileParts(FileInfo part1)
{
part1.CheckNotNull(nameof(part1));
part1.NotNull(nameof(part1));
yield return part1;
foreach (var factory in Factory.Factories.OfType<IFactory>())
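Beyond the renamed guard helpers, the factory surface shown here can sniff a file before committing to a full open; a small sketch with a placeholder path:
```csharp
if (ArchiveFactory.IsArchive(@"C:\data\backup.bin", out ArchiveType? type))
{
    Console.WriteLine($"Detected archive type: {type}");
    using var archive = ArchiveFactory.Open(@"C:\data\backup.bin");
    // ...
}
```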


@@ -2,6 +2,8 @@ using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.Common;
using SharpCompress.Common.GZip;
using SharpCompress.IO;
@@ -21,7 +23,7 @@ public class GZipArchive : AbstractWritableArchive<GZipArchiveEntry, GZipVolume>
/// <param name="readerOptions"></param>
public static GZipArchive Open(string filePath, ReaderOptions? readerOptions = null)
{
filePath.CheckNotNullOrEmpty(nameof(filePath));
filePath.NotNullOrEmpty(nameof(filePath));
return Open(new FileInfo(filePath), readerOptions ?? new ReaderOptions());
}
@@ -32,7 +34,7 @@ public class GZipArchive : AbstractWritableArchive<GZipArchiveEntry, GZipVolume>
/// <param name="readerOptions"></param>
public static GZipArchive Open(FileInfo fileInfo, ReaderOptions? readerOptions = null)
{
fileInfo.CheckNotNull(nameof(fileInfo));
fileInfo.NotNull(nameof(fileInfo));
return new GZipArchive(
new SourceStream(
fileInfo,
@@ -52,7 +54,7 @@ public class GZipArchive : AbstractWritableArchive<GZipArchiveEntry, GZipVolume>
ReaderOptions? readerOptions = null
)
{
fileInfos.CheckNotNull(nameof(fileInfos));
fileInfos.NotNull(nameof(fileInfos));
var files = fileInfos.ToArray();
return new GZipArchive(
new SourceStream(
@@ -70,7 +72,7 @@ public class GZipArchive : AbstractWritableArchive<GZipArchiveEntry, GZipVolume>
/// <param name="readerOptions"></param>
public static GZipArchive Open(IEnumerable<Stream> streams, ReaderOptions? readerOptions = null)
{
streams.CheckNotNull(nameof(streams));
streams.NotNull(nameof(streams));
var strms = streams.ToArray();
return new GZipArchive(
new SourceStream(
@@ -88,7 +90,7 @@ public class GZipArchive : AbstractWritableArchive<GZipArchiveEntry, GZipVolume>
/// <param name="readerOptions"></param>
public static GZipArchive Open(Stream stream, ReaderOptions? readerOptions = null)
{
stream.CheckNotNull(nameof(stream));
stream.NotNull(nameof(stream));
if (stream is not { CanSeek: true })
{
@@ -136,6 +138,16 @@ public class GZipArchive : AbstractWritableArchive<GZipArchiveEntry, GZipVolume>
SaveTo(stream, new WriterOptions(CompressionType.GZip));
}
public Task SaveToAsync(string filePath, CancellationToken cancellationToken = default) =>
SaveToAsync(new FileInfo(filePath), cancellationToken);
public async Task SaveToAsync(FileInfo fileInfo, CancellationToken cancellationToken = default)
{
using var stream = fileInfo.Open(FileMode.Create, FileAccess.Write);
await SaveToAsync(stream, new WriterOptions(CompressionType.GZip), cancellationToken)
.ConfigureAwait(false);
}
public static bool IsGZipFile(Stream stream)
{
// read the header on the first read
@@ -173,6 +185,11 @@ public class GZipArchive : AbstractWritableArchive<GZipArchiveEntry, GZipVolume>
return new GZipWritableArchiveEntry(this, source, filePath, size, modified, closeStream);
}
protected override GZipArchiveEntry CreateDirectoryEntry(
string directoryPath,
DateTime? modified
) => throw new NotSupportedException("GZip archives do not support directory entries.");
protected override void SaveTo(
Stream stream,
WriterOptions options,
@@ -196,6 +213,28 @@ public class GZipArchive : AbstractWritableArchive<GZipArchiveEntry, GZipVolume>
}
}
protected override async Task SaveToAsync(
Stream stream,
WriterOptions options,
IEnumerable<GZipArchiveEntry> oldEntries,
IEnumerable<GZipArchiveEntry> newEntries,
CancellationToken cancellationToken = default
)
{
if (Entries.Count > 1)
{
throw new InvalidFormatException("Only one entry is allowed in a GZip Archive");
}
using var writer = new GZipWriter(stream, new GZipWriterOptions(options));
foreach (var entry in oldEntries.Concat(newEntries).Where(x => !x.IsDirectory))
{
using var entryStream = entry.OpenEntryStream();
await writer
.WriteAsync(entry.Key.NotNull("Entry Key is null"), entryStream, cancellationToken)
.ConfigureAwait(false);
}
}
protected override IEnumerable<GZipArchiveEntry> LoadEntries(IEnumerable<GZipVolume> volumes)
{
var stream = volumes.Single().Stream;
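Putting the new GZip async save path together, a sketch (GZip permits only a single entry; file names are placeholders):
```csharp
using var archive = GZipArchive.Create();
archive.AddEntry("data.txt", File.OpenRead("data.txt"), closeStream: true);

// Writes via GZipWriter under the hood; more than one entry would throw
await archive.SaveToAsync("data.txt.gz", cancellationToken);
```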


@@ -1,5 +1,7 @@
using System.IO;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.Common.GZip;
namespace SharpCompress.Archives.GZip;
@@ -13,13 +15,20 @@ public class GZipArchiveEntry : GZipEntry, IArchiveEntry
{
//this is to reset the stream to be read multiple times
var part = (GZipFilePart)Parts.Single();
if (part.GetRawStream().Position != part.EntryStartPosition)
var rawStream = part.GetRawStream();
if (rawStream.CanSeek && rawStream.Position != part.EntryStartPosition)
{
part.GetRawStream().Position = part.EntryStartPosition;
rawStream.Position = part.EntryStartPosition;
}
return Parts.Single().GetCompressedStream().NotNull();
}
public virtual Task<Stream> OpenEntryStreamAsync(CancellationToken cancellationToken = default)
{
// GZip synchronous implementation is fast enough, just wrap it
return Task.FromResult(OpenEntryStream());
}
#region IArchiveEntry Members
public IArchive Archive { get; }


@@ -1,4 +1,6 @@
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.Common;
namespace SharpCompress.Archives;
@@ -11,6 +13,12 @@ public interface IArchiveEntry : IEntry
/// </summary>
Stream OpenEntryStream();
/// <summary>
/// Opens the current entry as a stream that will decompress as it is read asynchronously.
/// Read the entire stream or use SkipEntry on EntryStream.
/// </summary>
Task<Stream> OpenEntryStreamAsync(CancellationToken cancellationToken = default);
/// <summary>
/// The archive can find all the parts of the archive needed to extract this entry.
/// </summary>


@@ -1,4 +1,6 @@
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.Common;
using SharpCompress.IO;
@@ -25,7 +27,35 @@ public static class IArchiveEntryExtensions
using (entryStream)
{
using Stream s = new ListeningStream(streamListener, entryStream);
s.TransferTo(streamToWriteTo);
s.CopyTo(streamToWriteTo);
}
streamListener.FireEntryExtractionEnd(archiveEntry);
}
public static async Task WriteToAsync(
this IArchiveEntry archiveEntry,
Stream streamToWriteTo,
CancellationToken cancellationToken = default
)
{
if (archiveEntry.IsDirectory)
{
throw new ExtractionException("Entry is a file directory and cannot be extracted.");
}
var streamListener = (IArchiveExtractionListener)archiveEntry.Archive;
streamListener.EnsureEntriesLoaded();
streamListener.FireEntryExtractionBegin(archiveEntry);
streamListener.FireFilePartExtractionBegin(
archiveEntry.Key ?? "Key",
archiveEntry.Size,
archiveEntry.CompressedSize
);
var entryStream = archiveEntry.OpenEntryStream();
using (entryStream)
{
using Stream s = new ListeningStream(streamListener, entryStream);
await s.CopyToAsync(streamToWriteTo, 81920, cancellationToken).ConfigureAwait(false);
}
streamListener.FireEntryExtractionEnd(archiveEntry);
}
@@ -45,6 +75,23 @@ public static class IArchiveEntryExtensions
entry.WriteToFile
);
/// <summary>
/// Extract to specific directory asynchronously, retaining filename
/// </summary>
public static Task WriteToDirectoryAsync(
this IArchiveEntry entry,
string destinationDirectory,
ExtractionOptions? options = null,
CancellationToken cancellationToken = default
) =>
ExtractionMethods.WriteEntryToDirectoryAsync(
entry,
destinationDirectory,
options,
(x, opt) => entry.WriteToFileAsync(x, opt, cancellationToken),
cancellationToken
);
/// <summary>
/// Extract to specific file
/// </summary>
@@ -63,4 +110,25 @@ public static class IArchiveEntryExtensions
entry.WriteTo(fs);
}
);
/// <summary>
/// Extract to specific file asynchronously
/// </summary>
public static Task WriteToFileAsync(
this IArchiveEntry entry,
string destinationFileName,
ExtractionOptions? options = null,
CancellationToken cancellationToken = default
) =>
ExtractionMethods.WriteEntryToFileAsync(
entry,
destinationFileName,
options,
async (x, fm) =>
{
using var fs = File.Open(destinationFileName, fm);
await entry.WriteToAsync(fs, cancellationToken).ConfigureAwait(false);
},
cancellationToken
);
}
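A sketch of the new per-entry async helpers in use (paths are placeholders):
```csharp
using var archive = ZipArchive.Open("archive.zip");
foreach (var entry in archive.Entries.Where(e => !e.IsDirectory))
{
    await entry.WriteToDirectoryAsync(
        @"C:\output",
        new ExtractionOptions { ExtractFullPath = true, Overwrite = true },
        cancellationToken
    );
}
```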


@@ -45,12 +45,10 @@ public static class IArchiveExtensions
var seenDirectories = new HashSet<string>();
// Extract
var entries = archive.ExtractAllEntries();
while (entries.MoveToNextEntry())
foreach (var entry in archive.Entries)
{
cancellationToken.ThrowIfCancellationRequested();
var entry = entries.Entry;
if (entry.IsDirectory)
{
var dirPath = Path.Combine(destination, entry.Key.NotNull("Entry Key is null"));
@@ -77,7 +75,7 @@ public static class IArchiveExtensions
// Write file
using var fs = File.OpenWrite(path);
entries.WriteEntryTo(fs);
entry.WriteTo(fs);
// Update progress
bytesRead += entry.Size;


@@ -1,5 +1,7 @@
using System;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.Writers;
namespace SharpCompress.Archives;
@@ -16,8 +18,16 @@ public interface IWritableArchive : IArchive
DateTime? modified = null
);
IArchiveEntry AddDirectoryEntry(string key, DateTime? modified = null);
void SaveTo(Stream stream, WriterOptions options);
Task SaveToAsync(
Stream stream,
WriterOptions options,
CancellationToken cancellationToken = default
);
/// <summary>
/// Use this to pause entry rebuilding when adding large collections of entries. Dispose when complete. A using statement is recommended.
/// </summary>


@@ -1,5 +1,7 @@
using System;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.Writers;
namespace SharpCompress.Archives;
@@ -42,6 +44,24 @@ public static class IWritableArchiveExtensions
writableArchive.SaveTo(stream, options);
}
public static Task SaveToAsync(
this IWritableArchive writableArchive,
string filePath,
WriterOptions options,
CancellationToken cancellationToken = default
) => writableArchive.SaveToAsync(new FileInfo(filePath), options, cancellationToken);
public static async Task SaveToAsync(
this IWritableArchive writableArchive,
FileInfo fileInfo,
WriterOptions options,
CancellationToken cancellationToken = default
)
{
using var stream = fileInfo.Open(FileMode.Create, FileAccess.Write);
await writableArchive.SaveToAsync(stream, options, cancellationToken).ConfigureAwait(false);
}
public static void AddAllFromDirectory(
this IWritableArchive writableArchive,
string filePath,


@@ -95,7 +95,7 @@ public class RarArchive : AbstractArchive<RarArchiveEntry, RarVolume>
/// <param name="options"></param>
public static RarArchive Open(string filePath, ReaderOptions? options = null)
{
filePath.CheckNotNullOrEmpty(nameof(filePath));
filePath.NotNullOrEmpty(nameof(filePath));
var fileInfo = new FileInfo(filePath);
return new RarArchive(
new SourceStream(
@@ -113,7 +113,7 @@ public class RarArchive : AbstractArchive<RarArchiveEntry, RarVolume>
/// <param name="options"></param>
public static RarArchive Open(FileInfo fileInfo, ReaderOptions? options = null)
{
fileInfo.CheckNotNull(nameof(fileInfo));
fileInfo.NotNull(nameof(fileInfo));
return new RarArchive(
new SourceStream(
fileInfo,
@@ -130,7 +130,7 @@ public class RarArchive : AbstractArchive<RarArchiveEntry, RarVolume>
/// <param name="options"></param>
public static RarArchive Open(Stream stream, ReaderOptions? options = null)
{
stream.CheckNotNull(nameof(stream));
stream.NotNull(nameof(stream));
if (stream is not { CanSeek: true })
{
@@ -150,7 +150,7 @@ public class RarArchive : AbstractArchive<RarArchiveEntry, RarVolume>
ReaderOptions? readerOptions = null
)
{
fileInfos.CheckNotNull(nameof(fileInfos));
fileInfos.NotNull(nameof(fileInfos));
var files = fileInfos.ToArray();
return new RarArchive(
new SourceStream(
@@ -168,7 +168,7 @@ public class RarArchive : AbstractArchive<RarArchiveEntry, RarVolume>
/// <param name="readerOptions"></param>
public static RarArchive Open(IEnumerable<Stream> streams, ReaderOptions? readerOptions = null)
{
streams.CheckNotNull(nameof(streams));
streams.NotNull(nameof(streams));
var strms = streams.ToArray();
return new RarArchive(
new SourceStream(


@@ -2,6 +2,8 @@ using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.Common;
using SharpCompress.Common.Rar;
using SharpCompress.Common.Rar.Headers;
@@ -68,20 +70,50 @@ public class RarArchiveEntry : RarEntry, IArchiveEntry
public Stream OpenEntryStream()
{
RarStream stream;
if (IsRarV3)
{
return new RarStream(
stream = new RarStream(
archive.UnpackV1.Value,
FileHeader,
new MultiVolumeReadOnlyStream(Parts.Cast<RarFilePart>(), archive)
);
}
else
{
stream = new RarStream(
archive.UnpackV2017.Value,
FileHeader,
new MultiVolumeReadOnlyStream(Parts.Cast<RarFilePart>(), archive)
);
}
return new RarStream(
archive.UnpackV2017.Value,
FileHeader,
new MultiVolumeReadOnlyStream(Parts.Cast<RarFilePart>(), archive)
);
stream.Initialize();
return stream;
}
public async Task<Stream> OpenEntryStreamAsync(CancellationToken cancellationToken = default)
{
RarStream stream;
if (IsRarV3)
{
stream = new RarStream(
archive.UnpackV1.Value,
FileHeader,
new MultiVolumeReadOnlyStream(Parts.Cast<RarFilePart>(), archive)
);
}
else
{
stream = new RarStream(
archive.UnpackV2017.Value,
FileHeader,
new MultiVolumeReadOnlyStream(Parts.Cast<RarFilePart>(), archive)
);
}
await stream.InitializeAsync(cancellationToken);
return stream;
}
public bool IsComplete


@@ -21,7 +21,7 @@ public class SevenZipArchive : AbstractArchive<SevenZipArchiveEntry, SevenZipVol
/// <param name="readerOptions"></param>
public static SevenZipArchive Open(string filePath, ReaderOptions? readerOptions = null)
{
filePath.CheckNotNullOrEmpty("filePath");
filePath.NotNullOrEmpty("filePath");
return Open(new FileInfo(filePath), readerOptions ?? new ReaderOptions());
}
@@ -32,7 +32,7 @@ public class SevenZipArchive : AbstractArchive<SevenZipArchiveEntry, SevenZipVol
/// <param name="readerOptions"></param>
public static SevenZipArchive Open(FileInfo fileInfo, ReaderOptions? readerOptions = null)
{
fileInfo.CheckNotNull("fileInfo");
fileInfo.NotNull("fileInfo");
return new SevenZipArchive(
new SourceStream(
fileInfo,
@@ -52,7 +52,7 @@ public class SevenZipArchive : AbstractArchive<SevenZipArchiveEntry, SevenZipVol
ReaderOptions? readerOptions = null
)
{
fileInfos.CheckNotNull(nameof(fileInfos));
fileInfos.NotNull(nameof(fileInfos));
var files = fileInfos.ToArray();
return new SevenZipArchive(
new SourceStream(
@@ -73,7 +73,7 @@ public class SevenZipArchive : AbstractArchive<SevenZipArchiveEntry, SevenZipVol
ReaderOptions? readerOptions = null
)
{
streams.CheckNotNull(nameof(streams));
streams.NotNull(nameof(streams));
var strms = streams.ToArray();
return new SevenZipArchive(
new SourceStream(
@@ -91,7 +91,7 @@ public class SevenZipArchive : AbstractArchive<SevenZipArchiveEntry, SevenZipVol
/// <param name="readerOptions"></param>
public static SevenZipArchive Open(Stream stream, ReaderOptions? readerOptions = null)
{
stream.CheckNotNull("stream");
stream.NotNull("stream");
if (stream is not { CanSeek: true })
{


@@ -1,4 +1,6 @@
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.Common.SevenZip;
namespace SharpCompress.Archives.SevenZip;
@@ -10,6 +12,9 @@ public class SevenZipArchiveEntry : SevenZipEntry, IArchiveEntry
public Stream OpenEntryStream() => FilePart.GetCompressedStream();
public Task<Stream> OpenEntryStreamAsync(CancellationToken cancellationToken = default) =>
Task.FromResult(OpenEntryStream());
public IArchive Archive { get; }
public bool IsComplete => true;


@@ -2,6 +2,8 @@ using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.Common;
using SharpCompress.Common.Tar;
using SharpCompress.Common.Tar.Headers;
@@ -22,7 +24,7 @@ public class TarArchive : AbstractWritableArchive<TarArchiveEntry, TarVolume>
/// <param name="readerOptions"></param>
public static TarArchive Open(string filePath, ReaderOptions? readerOptions = null)
{
filePath.CheckNotNullOrEmpty(nameof(filePath));
filePath.NotNullOrEmpty(nameof(filePath));
return Open(new FileInfo(filePath), readerOptions ?? new ReaderOptions());
}
@@ -33,7 +35,7 @@ public class TarArchive : AbstractWritableArchive<TarArchiveEntry, TarVolume>
/// <param name="readerOptions"></param>
public static TarArchive Open(FileInfo fileInfo, ReaderOptions? readerOptions = null)
{
fileInfo.CheckNotNull(nameof(fileInfo));
fileInfo.NotNull(nameof(fileInfo));
return new TarArchive(
new SourceStream(
fileInfo,
@@ -53,7 +55,7 @@ public class TarArchive : AbstractWritableArchive<TarArchiveEntry, TarVolume>
ReaderOptions? readerOptions = null
)
{
fileInfos.CheckNotNull(nameof(fileInfos));
fileInfos.NotNull(nameof(fileInfos));
var files = fileInfos.ToArray();
return new TarArchive(
new SourceStream(
@@ -71,7 +73,7 @@ public class TarArchive : AbstractWritableArchive<TarArchiveEntry, TarVolume>
/// <param name="readerOptions"></param>
public static TarArchive Open(IEnumerable<Stream> streams, ReaderOptions? readerOptions = null)
{
streams.CheckNotNull(nameof(streams));
streams.NotNull(nameof(streams));
var strms = streams.ToArray();
return new TarArchive(
new SourceStream(
@@ -89,7 +91,7 @@ public class TarArchive : AbstractWritableArchive<TarArchiveEntry, TarVolume>
/// <param name="readerOptions"></param>
public static TarArchive Open(Stream stream, ReaderOptions? readerOptions = null)
{
stream.CheckNotNull(nameof(stream));
stream.NotNull(nameof(stream));
if (stream is not { CanSeek: true })
{
@@ -178,7 +180,7 @@ public class TarArchive : AbstractWritableArchive<TarArchiveEntry, TarVolume>
using (var entryStream = entry.OpenEntryStream())
{
using var memoryStream = new MemoryStream();
entryStream.TransferTo(memoryStream);
entryStream.CopyTo(memoryStream);
memoryStream.Position = 0;
var bytes = memoryStream.ToArray();
@@ -222,6 +224,11 @@ public class TarArchive : AbstractWritableArchive<TarArchiveEntry, TarVolume>
closeStream
);
protected override TarArchiveEntry CreateDirectoryEntry(
string directoryPath,
DateTime? modified
) => new TarWritableArchiveEntry(this, directoryPath, modified);
protected override void SaveTo(
Stream stream,
WriterOptions options,
@@ -230,15 +237,62 @@ public class TarArchive : AbstractWritableArchive<TarArchiveEntry, TarVolume>
)
{
using var writer = new TarWriter(stream, new TarWriterOptions(options));
foreach (var entry in oldEntries.Concat(newEntries).Where(x => !x.IsDirectory))
foreach (var entry in oldEntries.Concat(newEntries))
{
using var entryStream = entry.OpenEntryStream();
writer.Write(
entry.Key.NotNull("Entry Key is null"),
entryStream,
entry.LastModifiedTime,
entry.Size
);
if (entry.IsDirectory)
{
writer.WriteDirectory(
entry.Key.NotNull("Entry Key is null"),
entry.LastModifiedTime
);
}
else
{
using var entryStream = entry.OpenEntryStream();
writer.Write(
entry.Key.NotNull("Entry Key is null"),
entryStream,
entry.LastModifiedTime,
entry.Size
);
}
}
}
protected override async Task SaveToAsync(
Stream stream,
WriterOptions options,
IEnumerable<TarArchiveEntry> oldEntries,
IEnumerable<TarArchiveEntry> newEntries,
CancellationToken cancellationToken = default
)
{
using var writer = new TarWriter(stream, new TarWriterOptions(options));
foreach (var entry in oldEntries.Concat(newEntries))
{
if (entry.IsDirectory)
{
await writer
.WriteDirectoryAsync(
entry.Key.NotNull("Entry Key is null"),
entry.LastModifiedTime,
cancellationToken
)
.ConfigureAwait(false);
}
else
{
using var entryStream = entry.OpenEntryStream();
await writer
.WriteAsync(
entry.Key.NotNull("Entry Key is null"),
entryStream,
entry.LastModifiedTime,
entry.Size,
cancellationToken
)
.ConfigureAwait(false);
}
}
}
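With directory entries now written and round-tripped, a tar can carry empty directories. A sketch, assuming `TarArchive.Create()` mirrors `ZipArchive.Create()`:
```csharp
using var archive = TarArchive.Create();
archive.AddDirectoryEntry("empty-dir/", DateTime.Now);

using var output = File.Create("backup.tar");
await archive.SaveToAsync(output, new WriterOptions(CompressionType.None), cancellationToken);
```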


@@ -1,5 +1,7 @@
using System.IO;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.Common;
using SharpCompress.Common.Tar;
@@ -12,6 +14,10 @@ public class TarArchiveEntry : TarEntry, IArchiveEntry
public virtual Stream OpenEntryStream() => Parts.Single().GetCompressedStream().NotNull();
public virtual Task<Stream> OpenEntryStreamAsync(
CancellationToken cancellationToken = default
) => Task.FromResult(OpenEntryStream());
#region IArchiveEntry Members
public IArchive Archive { get; }


@@ -9,7 +9,8 @@ namespace SharpCompress.Archives.Tar;
internal sealed class TarWritableArchiveEntry : TarArchiveEntry, IWritableArchiveEntry
{
private readonly bool closeStream;
private readonly Stream stream;
private readonly Stream? stream;
private readonly bool isDirectory;
internal TarWritableArchiveEntry(
TarArchive archive,
@@ -27,6 +28,22 @@ internal sealed class TarWritableArchiveEntry : TarArchiveEntry, IWritableArchiv
Size = size;
LastModifiedTime = lastModified;
this.closeStream = closeStream;
isDirectory = false;
}
internal TarWritableArchiveEntry(
TarArchive archive,
string directoryPath,
DateTime? lastModified
)
: base(archive, null, CompressionType.None)
{
stream = null;
Key = directoryPath;
Size = 0;
LastModifiedTime = lastModified;
closeStream = false;
isDirectory = true;
}
public override long Crc => 0;
@@ -47,15 +64,19 @@ internal sealed class TarWritableArchiveEntry : TarArchiveEntry, IWritableArchiv
public override bool IsEncrypted => false;
public override bool IsDirectory => false;
public override bool IsDirectory => isDirectory;
public override bool IsSplitAfter => false;
internal override IEnumerable<FilePart> Parts => throw new NotImplementedException();
Stream IWritableArchiveEntry.Stream => stream;
Stream IWritableArchiveEntry.Stream => stream ?? Stream.Null;
public override Stream OpenEntryStream()
{
if (stream is null)
{
return Stream.Null;
}
//ensure new stream is at the start, this could be reset
stream.Seek(0, SeekOrigin.Begin);
return SharpCompressStream.Create(stream, leaveOpen: true);
@@ -63,7 +84,7 @@ internal sealed class TarWritableArchiveEntry : TarArchiveEntry, IWritableArchiv
internal override void Close()
{
if (closeStream)
if (closeStream && stream is not null)
{
stream.Dispose();
}


@@ -2,6 +2,8 @@ using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.Common;
using SharpCompress.Common.Zip;
using SharpCompress.Common.Zip.Headers;
@@ -43,7 +45,7 @@ public class ZipArchive : AbstractWritableArchive<ZipArchiveEntry, ZipVolume>
/// <param name="readerOptions"></param>
public static ZipArchive Open(string filePath, ReaderOptions? readerOptions = null)
{
filePath.CheckNotNullOrEmpty(nameof(filePath));
filePath.NotNullOrEmpty(nameof(filePath));
return Open(new FileInfo(filePath), readerOptions ?? new ReaderOptions());
}
@@ -54,7 +56,7 @@ public class ZipArchive : AbstractWritableArchive<ZipArchiveEntry, ZipVolume>
/// <param name="readerOptions"></param>
public static ZipArchive Open(FileInfo fileInfo, ReaderOptions? readerOptions = null)
{
fileInfo.CheckNotNull(nameof(fileInfo));
fileInfo.NotNull(nameof(fileInfo));
return new ZipArchive(
new SourceStream(
fileInfo,
@@ -74,7 +76,7 @@ public class ZipArchive : AbstractWritableArchive<ZipArchiveEntry, ZipVolume>
ReaderOptions? readerOptions = null
)
{
fileInfos.CheckNotNull(nameof(fileInfos));
fileInfos.NotNull(nameof(fileInfos));
var files = fileInfos.ToArray();
return new ZipArchive(
new SourceStream(
@@ -92,7 +94,7 @@ public class ZipArchive : AbstractWritableArchive<ZipArchiveEntry, ZipVolume>
/// <param name="readerOptions"></param>
public static ZipArchive Open(IEnumerable<Stream> streams, ReaderOptions? readerOptions = null)
{
streams.CheckNotNull(nameof(streams));
streams.NotNull(nameof(streams));
var strms = streams.ToArray();
return new ZipArchive(
new SourceStream(
@@ -110,7 +112,7 @@ public class ZipArchive : AbstractWritableArchive<ZipArchiveEntry, ZipVolume>
/// <param name="readerOptions"></param>
public static ZipArchive Open(Stream stream, ReaderOptions? readerOptions = null)
{
stream.CheckNotNull(nameof(stream));
stream.NotNull(nameof(stream));
if (stream is not { CanSeek: true })
{
@@ -306,14 +308,59 @@ public class ZipArchive : AbstractWritableArchive<ZipArchiveEntry, ZipVolume>
)
{
using var writer = new ZipWriter(stream, new ZipWriterOptions(options));
foreach (var entry in oldEntries.Concat(newEntries).Where(x => !x.IsDirectory))
foreach (var entry in oldEntries.Concat(newEntries))
{
using var entryStream = entry.OpenEntryStream();
writer.Write(
entry.Key.NotNull("Entry Key is null"),
entryStream,
entry.LastModifiedTime
);
if (entry.IsDirectory)
{
writer.WriteDirectory(
entry.Key.NotNull("Entry Key is null"),
entry.LastModifiedTime
);
}
else
{
using var entryStream = entry.OpenEntryStream();
writer.Write(
entry.Key.NotNull("Entry Key is null"),
entryStream,
entry.LastModifiedTime
);
}
}
}
protected override async Task SaveToAsync(
Stream stream,
WriterOptions options,
IEnumerable<ZipArchiveEntry> oldEntries,
IEnumerable<ZipArchiveEntry> newEntries,
CancellationToken cancellationToken = default
)
{
using var writer = new ZipWriter(stream, new ZipWriterOptions(options));
foreach (var entry in oldEntries.Concat(newEntries))
{
if (entry.IsDirectory)
{
await writer
.WriteDirectoryAsync(
entry.Key.NotNull("Entry Key is null"),
entry.LastModifiedTime,
cancellationToken
)
.ConfigureAwait(false);
}
else
{
using var entryStream = entry.OpenEntryStream();
await writer
.WriteAsync(
entry.Key.NotNull("Entry Key is null"),
entryStream,
cancellationToken
)
.ConfigureAwait(false);
}
}
}
@@ -325,6 +372,11 @@ public class ZipArchive : AbstractWritableArchive<ZipArchiveEntry, ZipVolume>
bool closeStream
) => new ZipWritableArchiveEntry(this, source, filePath, size, modified, closeStream);
protected override ZipArchiveEntry CreateDirectoryEntry(
string directoryPath,
DateTime? modified
) => new ZipWritableArchiveEntry(this, directoryPath, modified);
public static ZipArchive Create() => new();
protected override IReader CreateReaderForSolidExtraction()


@@ -1,5 +1,7 @@
using System.IO;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.Common.Zip;
namespace SharpCompress.Archives.Zip;
@@ -11,6 +13,10 @@ public class ZipArchiveEntry : ZipEntry, IArchiveEntry
public virtual Stream OpenEntryStream() => Parts.Single().GetCompressedStream().NotNull();
public virtual Task<Stream> OpenEntryStreamAsync(
CancellationToken cancellationToken = default
) => Task.FromResult(OpenEntryStream());
#region IArchiveEntry Members
public IArchive Archive { get; }


@@ -9,7 +9,8 @@ namespace SharpCompress.Archives.Zip;
internal class ZipWritableArchiveEntry : ZipArchiveEntry, IWritableArchiveEntry
{
private readonly bool closeStream;
private readonly Stream stream;
private readonly Stream? stream;
private readonly bool isDirectory;
private bool isDisposed;
internal ZipWritableArchiveEntry(
@@ -27,6 +28,22 @@ internal class ZipWritableArchiveEntry : ZipArchiveEntry, IWritableArchiveEntry
Size = size;
LastModifiedTime = lastModified;
this.closeStream = closeStream;
isDirectory = false;
}
internal ZipWritableArchiveEntry(
ZipArchive archive,
string directoryPath,
DateTime? lastModified
)
: base(archive, null)
{
stream = null;
Key = directoryPath;
Size = 0;
LastModifiedTime = lastModified;
closeStream = false;
isDirectory = true;
}
public override long Crc => 0;
@@ -47,16 +64,20 @@ internal class ZipWritableArchiveEntry : ZipArchiveEntry, IWritableArchiveEntry
public override bool IsEncrypted => false;
public override bool IsDirectory => false;
public override bool IsDirectory => isDirectory;
public override bool IsSplitAfter => false;
internal override IEnumerable<FilePart> Parts => throw new NotImplementedException();
Stream IWritableArchiveEntry.Stream => stream;
Stream IWritableArchiveEntry.Stream => stream ?? Stream.Null;
public override Stream OpenEntryStream()
{
if (stream is null)
{
return Stream.Null;
}
//ensure new stream is at the start, this could be reset
stream.Seek(0, SeekOrigin.Begin);
return SharpCompressStream.Create(stream, leaveOpen: true);
@@ -64,7 +85,7 @@ internal class ZipWritableArchiveEntry : ZipArchiveEntry, IWritableArchiveEntry
internal override void Close()
{
if (closeStream && !isDisposed)
if (closeStream && !isDisposed && stream is not null)
{
stream.Dispose();
isDisposed = true;


@@ -57,7 +57,7 @@ namespace SharpCompress.Common.Arc
return value switch
{
1 or 2 => CompressionType.None,
3 => CompressionType.RLE90,
3 => CompressionType.Packed,
4 => CompressionType.Squeezed,
5 or 6 or 7 or 8 => CompressionType.Crunched,
9 => CompressionType.Squashed,


@@ -44,7 +44,7 @@ namespace SharpCompress.Common.Arc
Header.CompressedSize
);
break;
case CompressionType.RLE90:
case CompressionType.Packed:
compressedStream = new RunLength90Stream(
_stream,
(int)Header.CompressedSize
@@ -54,6 +54,14 @@ namespace SharpCompress.Common.Arc
compressedStream = new SqueezeStream(_stream, (int)Header.CompressedSize);
break;
case CompressionType.Crunched:
if (Header.OriginalSize > 128 * 1024)
{
throw new NotSupportedException(
"CompressionMethod: "
+ Header.CompressionMethod
+ " with size > 128KB"
);
}
compressedStream = new ArcLzwStream(
_stream,
(int)Header.CompressedSize,


@@ -8,4 +8,5 @@ public enum ArchiveType
SevenZip,
GZip,
Arc,
Arj,
}


@@ -0,0 +1,58 @@
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using SharpCompress.Common.Arc;
using SharpCompress.Common.Arj.Headers;
namespace SharpCompress.Common.Arj
{
public class ArjEntry : Entry
{
private readonly ArjFilePart _filePart;
internal ArjEntry(ArjFilePart filePart)
{
_filePart = filePart;
}
public override long Crc => _filePart.Header.OriginalCrc32;
public override string? Key => _filePart?.Header.Name;
public override string? LinkTarget => null;
public override long CompressedSize => _filePart?.Header.CompressedSize ?? 0;
public override CompressionType CompressionType
{
get
{
if (_filePart.Header.CompressionMethod == CompressionMethod.Stored)
{
return CompressionType.None;
}
return CompressionType.ArjLZ77;
}
}
public override long Size => _filePart?.Header.OriginalSize ?? 0;
public override DateTime? LastModifiedTime => _filePart.Header.DateTimeModified.DateTime;
public override DateTime? CreatedTime => _filePart.Header.DateTimeCreated.DateTime;
public override DateTime? LastAccessedTime => _filePart.Header.DateTimeAccessed.DateTime;
public override DateTime? ArchivedTime => null;
public override bool IsEncrypted => false;
public override bool IsDirectory => _filePart.Header.FileType == FileType.Directory;
public override bool IsSplitAfter => false;
internal override IEnumerable<FilePart> Parts => _filePart.Empty();
}
}


@@ -0,0 +1,72 @@
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using SharpCompress.Common.Arj.Headers;
using SharpCompress.Compressors.Arj;
using SharpCompress.IO;
namespace SharpCompress.Common.Arj
{
public class ArjFilePart : FilePart
{
private readonly Stream _stream;
internal ArjLocalHeader Header { get; set; }
internal ArjFilePart(ArjLocalHeader localArjHeader, Stream seekableStream)
: base(localArjHeader.ArchiveEncoding)
{
_stream = seekableStream;
Header = localArjHeader;
}
internal override string? FilePartName => Header.Name;
internal override Stream GetCompressedStream()
{
if (_stream != null)
{
Stream compressedStream;
switch (Header.CompressionMethod)
{
case CompressionMethod.Stored:
compressedStream = new ReadOnlySubStream(
_stream,
Header.DataStartPosition,
Header.CompressedSize
);
break;
case CompressionMethod.CompressedMost:
case CompressionMethod.Compressed:
case CompressionMethod.CompressedFaster:
if (Header.OriginalSize > 128 * 1024)
{
throw new NotSupportedException(
"CompressionMethod: "
+ Header.CompressionMethod
+ " with size > 128KB"
);
}
compressedStream = new LhaStream<Lh7DecoderCfg>(
_stream,
(int)Header.OriginalSize
);
break;
case CompressionMethod.CompressedFastest:
compressedStream = new LHDecoderStream(_stream, (int)Header.OriginalSize);
break;
default:
throw new NotSupportedException(
"CompressionMethod: " + Header.CompressionMethod
);
}
return compressedStream;
}
return _stream.NotNull();
}
internal override Stream GetRawStream() => _stream;
}
}


@@ -0,0 +1,36 @@
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using SharpCompress.Common.Rar;
using SharpCompress.Common.Rar.Headers;
using SharpCompress.Readers;
namespace SharpCompress.Common.Arj
{
public class ArjVolume : Volume
{
public ArjVolume(Stream stream, ReaderOptions readerOptions, int index = 0)
: base(stream, readerOptions, index) { }
public override bool IsFirstVolume
{
get { return true; }
}
/// <summary>
/// Whether this ArjArchive is part of a multi-volume set. Multi-volume ARJ archives are not currently supported.
/// </summary>
public override bool IsMultiVolume
{
get { return false; }
}
internal IEnumerable<ArjFilePart> GetVolumeFileParts()
{
return new List<ArjFilePart>();
}
}
}


@@ -0,0 +1,142 @@
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using SharpCompress.Common.Zip.Headers;
using SharpCompress.Crypto;
namespace SharpCompress.Common.Arj.Headers
{
public enum ArjHeaderType
{
MainHeader,
LocalHeader,
}
public abstract class ArjHeader
{
private const int FIRST_HDR_SIZE = 34;
private const ushort ARJ_MAGIC = 0xEA60;
public ArjHeader(ArjHeaderType type)
{
ArjHeaderType = type;
}
public ArjHeaderType ArjHeaderType { get; }
public byte Flags { get; set; }
public FileType FileType { get; set; }
public abstract ArjHeader? Read(Stream reader);
public byte[] ReadHeader(Stream stream)
{
// check for magic bytes
Span<byte> magic = stackalloc byte[2];
if (stream.Read(magic) != 2)
{
return Array.Empty<byte>();
}
var magicValue = (ushort)(magic[0] | magic[1] << 8);
if (magicValue != ARJ_MAGIC)
{
throw new InvalidDataException("Not an ARJ file (wrong magic bytes)");
}
// read header_size
byte[] headerBytes = new byte[2];
stream.Read(headerBytes, 0, 2);
var headerSize = (ushort)(headerBytes[0] | headerBytes[1] << 8);
if (headerSize < 1)
{
return Array.Empty<byte>();
}
var body = new byte[headerSize];
var read = stream.Read(body, 0, headerSize);
if (read < headerSize)
{
return Array.Empty<byte>();
}
byte[] crc = new byte[4];
read = stream.Read(crc, 0, 4);
var checksum = Crc32Stream.Compute(body);
// Verify the header CRC32
if (checksum != BitConverter.ToUInt32(crc, 0))
{
throw new InvalidDataException("Header checksum is invalid");
}
return body;
}
protected List<byte[]> ReadExtendedHeaders(Stream reader)
{
List<byte[]> extendedHeader = new List<byte[]>();
byte[] buffer = new byte[2];
while (true)
{
int bytesRead = reader.Read(buffer, 0, 2);
if (bytesRead < 2)
{
throw new EndOfStreamException(
"Unexpected end of stream while reading extended header size."
);
}
var extHeaderSize = (ushort)(buffer[0] | (buffer[1] << 8));
if (extHeaderSize == 0)
{
return extendedHeader;
}
byte[] header = new byte[extHeaderSize];
bytesRead = reader.Read(header, 0, extHeaderSize);
if (bytesRead < extHeaderSize)
{
throw new EndOfStreamException(
"Unexpected end of stream while reading extended header data."
);
}
byte[] crc = new byte[4];
bytesRead = reader.Read(crc, 0, 4);
if (bytesRead < 4)
{
throw new EndOfStreamException(
"Unexpected end of stream while reading extended header CRC."
);
}
var checksum = Crc32Stream.Compute(header);
if (checksum != BitConverter.ToUInt32(crc, 0))
{
throw new InvalidDataException("Extended header checksum is invalid");
}
extendedHeader.Add(header);
}
}
// Flag helpers
public bool IsGarbled => (Flags & 0x01) != 0; // "garbled" = password-protected
public bool IsAnsiPage => (Flags & 0x02) != 0;
public bool IsVolume => (Flags & 0x04) != 0;
public bool IsArjProtected => (Flags & 0x08) != 0;
public bool IsPathSym => (Flags & 0x10) != 0;
public bool IsBackup => (Flags & 0x20) != 0;
public bool IsSecured => (Flags & 0x40) != 0;
public bool IsAltName => (Flags & 0x80) != 0;
public static FileType FileTypeFromByte(byte value)
{
return Enum.IsDefined(typeof(FileType), value)
? (FileType)value
: Headers.FileType.Unknown;
}
}
}


@@ -0,0 +1,161 @@
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Runtime.CompilerServices;
using System.Text;
using System.Threading.Tasks;
namespace SharpCompress.Common.Arj.Headers
{
public class ArjLocalHeader : ArjHeader
{
public ArchiveEncoding ArchiveEncoding { get; }
public long DataStartPosition { get; protected set; }
public byte ArchiverVersionNumber { get; set; }
public byte MinVersionToExtract { get; set; }
public HostOS HostOS { get; set; }
public CompressionMethod CompressionMethod { get; set; }
public DosDateTime DateTimeModified { get; set; } = new DosDateTime(0);
public long CompressedSize { get; set; }
public long OriginalSize { get; set; }
public long OriginalCrc32 { get; set; }
public int FileSpecPosition { get; set; }
public int FileAccessMode { get; set; }
public byte FirstChapter { get; set; }
public byte LastChapter { get; set; }
public long ExtendedFilePosition { get; set; }
public DosDateTime DateTimeAccessed { get; set; } = new DosDateTime(0);
public DosDateTime DateTimeCreated { get; set; } = new DosDateTime(0);
public long OriginalSizeEvenForVolumes { get; set; }
public string Name { get; set; } = string.Empty;
public string Comment { get; set; } = string.Empty;
private const byte StdHdrSize = 30;
private const byte R9HdrSize = 46;
public ArjLocalHeader(ArchiveEncoding archiveEncoding)
: base(ArjHeaderType.LocalHeader)
{
ArchiveEncoding =
archiveEncoding ?? throw new ArgumentNullException(nameof(archiveEncoding));
}
public override ArjHeader? Read(Stream stream)
{
var body = ReadHeader(stream);
if (body.Length > 0)
{
ReadExtendedHeaders(stream);
var header = LoadFrom(body);
header.DataStartPosition = stream.Position;
return header;
}
return null;
}
public ArjLocalHeader LoadFrom(byte[] headerBytes)
{
int offset = 0;
int ReadInt16()
{
if (offset + 1 >= headerBytes.Length)
{
throw new EndOfStreamException();
}
var v = headerBytes[offset] & 0xFF | (headerBytes[offset + 1] & 0xFF) << 8;
offset += 2;
return v;
}
long ReadInt32()
{
if (offset + 3 >= headerBytes.Length)
{
throw new EndOfStreamException();
}
long v =
headerBytes[offset] & 0xFF
| (headerBytes[offset + 1] & 0xFF) << 8
| (headerBytes[offset + 2] & 0xFF) << 16
| (headerBytes[offset + 3] & 0xFF) << 24;
offset += 4;
return v;
}
byte headerSize = headerBytes[offset++];
ArchiverVersionNumber = headerBytes[offset++];
MinVersionToExtract = headerBytes[offset++];
HostOS hostOS = (HostOS)headerBytes[offset++];
Flags = headerBytes[offset++];
CompressionMethod = CompressionMethodFromByte(headerBytes[offset++]);
FileType = FileTypeFromByte(headerBytes[offset++]);
offset++; // skip reserved byte
var rawTimestamp = ReadInt32();
DateTimeModified =
rawTimestamp != 0 ? new DosDateTime(rawTimestamp) : new DosDateTime(0);
CompressedSize = ReadInt32();
OriginalSize = ReadInt32();
OriginalCrc32 = ReadInt32();
FileSpecPosition = ReadInt16();
FileAccessMode = ReadInt16();
FirstChapter = headerBytes[offset++];
LastChapter = headerBytes[offset++];
ExtendedFilePosition = 0;
OriginalSizeEvenForVolumes = 0;
if (headerSize > StdHdrSize)
{
ExtendedFilePosition = ReadInt32();
if (headerSize >= R9HdrSize)
{
rawTimestamp = ReadInt32();
DateTimeAccessed =
rawTimestamp != 0 ? new DosDateTime(rawTimestamp) : new DosDateTime(0);
rawTimestamp = ReadInt32();
DateTimeCreated =
rawTimestamp != 0 ? new DosDateTime(rawTimestamp) : new DosDateTime(0);
OriginalSizeEvenForVolumes = ReadInt32();
}
}
Name = Encoding.ASCII.GetString(
headerBytes,
offset,
Array.IndexOf(headerBytes, (byte)0, offset) - offset
);
offset += Name.Length + 1;
Comment = Encoding.ASCII.GetString(
headerBytes,
offset,
Array.IndexOf(headerBytes, (byte)0, offset) - offset
);
offset += Comment.Length + 1;
return this;
}
public static CompressionMethod CompressionMethodFromByte(byte value)
{
return value switch
{
0 => CompressionMethod.Stored,
1 => CompressionMethod.CompressedMost,
2 => CompressionMethod.Compressed,
3 => CompressionMethod.CompressedFaster,
4 => CompressionMethod.CompressedFastest,
8 => CompressionMethod.NoDataNoCrc,
9 => CompressionMethod.NoData,
_ => CompressionMethod.Unknown,
};
}
}
}


@@ -0,0 +1,138 @@
using System;
using System.IO;
using System.Text;
using SharpCompress.Compressors.Deflate;
using SharpCompress.Crypto;
namespace SharpCompress.Common.Arj.Headers
{
public class ArjMainHeader : ArjHeader
{
private const int FIRST_HDR_SIZE = 34;
private const ushort ARJ_MAGIC = 0xEA60;
public ArchiveEncoding ArchiveEncoding { get; }
public int ArchiverVersionNumber { get; private set; }
public int MinVersionToExtract { get; private set; }
public HostOS HostOs { get; private set; }
public int SecurityVersion { get; private set; }
public DosDateTime CreationDateTime { get; private set; } = new DosDateTime(0);
public long CompressedSize { get; private set; }
public long ArchiveSize { get; private set; }
public long SecurityEnvelope { get; private set; }
public int FileSpecPosition { get; private set; }
public int SecurityEnvelopeLength { get; private set; }
public int EncryptionVersion { get; private set; }
public int LastChapter { get; private set; }
public int ArjProtectionFactor { get; private set; }
public int Flags2 { get; private set; }
public string Name { get; private set; } = string.Empty;
public string Comment { get; private set; } = string.Empty;
public ArjMainHeader(ArchiveEncoding archiveEncoding)
: base(ArjHeaderType.MainHeader)
{
ArchiveEncoding =
archiveEncoding ?? throw new ArgumentNullException(nameof(archiveEncoding));
}
public override ArjHeader? Read(Stream stream)
{
var body = ReadHeader(stream);
ReadExtendedHeaders(stream);
return LoadFrom(body);
}
public ArjMainHeader LoadFrom(byte[] headerBytes)
{
var offset = 1;
byte ReadByte()
{
if (offset >= headerBytes.Length)
{
throw new EndOfStreamException();
}
return (byte)(headerBytes[offset++] & 0xFF);
}
int ReadInt16()
{
if (offset + 1 >= headerBytes.Length)
{
throw new EndOfStreamException();
}
var v = headerBytes[offset] & 0xFF | (headerBytes[offset + 1] & 0xFF) << 8;
offset += 2;
return v;
}
long ReadInt32()
{
if (offset + 3 >= headerBytes.Length)
{
throw new EndOfStreamException();
}
long v =
headerBytes[offset] & 0xFF
| (headerBytes[offset + 1] & 0xFF) << 8
| (headerBytes[offset + 2] & 0xFF) << 16
| (headerBytes[offset + 3] & 0xFF) << 24;
offset += 4;
return v;
}
string ReadNullTerminatedString(byte[] x, int startIndex)
{
var result = new StringBuilder();
int i = startIndex;
while (i < x.Length && x[i] != 0)
{
result.Append((char)x[i]);
i++;
}
// Skip the null terminator
i++;
if (i < x.Length)
{
byte[] remainder = new byte[x.Length - i];
Array.Copy(x, i, remainder, 0, remainder.Length);
x = remainder;
}
return result.ToString();
}
ArchiverVersionNumber = ReadByte();
MinVersionToExtract = ReadByte();
var hostOsByte = ReadByte();
HostOs = hostOsByte <= 11 ? (HostOS)hostOsByte : HostOS.Unknown;
Flags = ReadByte();
SecurityVersion = ReadByte();
FileType = FileTypeFromByte(ReadByte());
offset++; // skip reserved
CreationDateTime = new DosDateTime((int)ReadInt32());
CompressedSize = ReadInt32();
ArchiveSize = ReadInt32();
SecurityEnvelope = ReadInt32();
FileSpecPosition = ReadInt16();
SecurityEnvelopeLength = ReadInt16();
EncryptionVersion = ReadByte();
LastChapter = ReadByte();
Name = ReadNullTerminatedString(headerBytes, offset);
Comment = ReadNullTerminatedString(headerBytes, offset + 1 + Name.Length);
return this;
}
}
}


@@ -0,0 +1,20 @@
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace SharpCompress.Common.Arj.Headers
{
public enum CompressionMethod
{
Stored = 0,
CompressedMost = 1,
Compressed = 2,
CompressedFaster = 3,
CompressedFastest = 4,
NoDataNoCrc = 8,
NoData = 9,
Unknown,
}
}


@@ -0,0 +1,37 @@
using System;
namespace SharpCompress.Common.Arj.Headers
{
public class DosDateTime
{
public DateTime DateTime { get; }
public DosDateTime(long dosValue)
{
// Ensure only the lower 32 bits are used
int value = unchecked((int)(dosValue & 0xFFFFFFFF));
var date = (value >> 16) & 0xFFFF;
var time = value & 0xFFFF;
var day = date & 0x1F;
var month = (date >> 5) & 0x0F;
var year = ((date >> 9) & 0x7F) + 1980;
var second = (time & 0x1F) * 2;
var minute = (time >> 5) & 0x3F;
var hour = (time >> 11) & 0x1F;
try
{
DateTime = new DateTime(year, month, day, hour, minute, second);
}
catch
{
DateTime = DateTime.MinValue;
}
}
public override string ToString() => DateTime.ToString("yyyy-MM-dd HH:mm:ss");
}
}
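A worked decode against the bit layout above: the date word `0x5A21` packs day 1, month 1, year 1980 + 45, and the time word `0x6000` packs hour 12, so:
```csharp
var dt = new DosDateTime(0x5A216000);
Console.WriteLine(dt); // 2025-01-01 12:00:00
```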


@@ -0,0 +1,13 @@
namespace SharpCompress.Common.Arj.Headers
{
public enum FileType : byte
{
Binary = 0,
Text7Bit = 1,
CommentHeader = 2,
Directory = 3,
VolumeLabel = 4,
ChapterLabel = 5,
Unknown = 255,
}
}


@@ -0,0 +1,19 @@
namespace SharpCompress.Common.Arj.Headers
{
public enum HostOS
{
MsDos = 0,
PrimOS = 1,
Unix = 2,
Amiga = 3,
MacOs = 4,
OS2 = 5,
AppleGS = 6,
AtariST = 7,
NeXT = 8,
VaxVMS = 9,
Win95 = 10,
Win32 = 11,
Unknown = 255,
}
}


@@ -23,10 +23,11 @@ public enum CompressionType
Reduce4,
Explode,
Squeezed,
RLE90,
Packed,
Crunched,
Squashed,
Crushed,
Distilled,
ZStandard,
ArjLZ77,
}


@@ -1,6 +1,8 @@
using System;
using System.IO;
using System.IO.Compression;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.IO;
using SharpCompress.Readers;
@@ -51,8 +53,22 @@ public class EntryStream : Stream, IStreamStack
_completed = true;
}
/// <summary>
/// Asynchronously skip the rest of the entry stream.
/// </summary>
public async Task SkipEntryAsync(CancellationToken cancellationToken = default)
{
await this.SkipAsync(cancellationToken).ConfigureAwait(false);
_completed = true;
}
protected override void Dispose(bool disposing)
{
if (_isDisposed)
{
return;
}
_isDisposed = true;
if (!(_completed || _reader.Cancelled))
{
SkipEntry();
@@ -70,12 +86,6 @@ public class EntryStream : Stream, IStreamStack
lzmaStream.Flush(); //Lzma over reads. Knock it back
}
}
if (_isDisposed)
{
return;
}
_isDisposed = true;
#if DEBUG_STREAMS
this.DebugDispose(typeof(EntryStream));
#endif
@@ -83,6 +93,39 @@ public class EntryStream : Stream, IStreamStack
_stream.Dispose();
}
#if !NETFRAMEWORK && !NETSTANDARD2_0
public override async ValueTask DisposeAsync()
{
if (_isDisposed)
{
return;
}
_isDisposed = true;
if (!(_completed || _reader.Cancelled))
{
await SkipEntryAsync().ConfigureAwait(false);
}
//Need a safe, standard approach to this - it's okay for compression to over-read. Handling needs to be standardised
if (_stream is IStreamStack ss)
{
if (ss.BaseStream() is SharpCompress.Compressors.Deflate.DeflateStream deflateStream)
{
await deflateStream.FlushAsync().ConfigureAwait(false);
}
else if (ss.BaseStream() is SharpCompress.Compressors.LZMA.LzmaStream lzmaStream)
{
await lzmaStream.FlushAsync().ConfigureAwait(false);
}
}
#if DEBUG_STREAMS
this.DebugDispose(typeof(EntryStream));
#endif
await base.DisposeAsync().ConfigureAwait(false);
await _stream.DisposeAsync().ConfigureAwait(false);
}
#endif
public override bool CanRead => true;
public override bool CanSeek => false;
@@ -91,6 +134,8 @@ public class EntryStream : Stream, IStreamStack
public override void Flush() { }
public override Task FlushAsync(CancellationToken cancellationToken) => Task.CompletedTask;
public override long Length => _stream.Length;
public override long Position
@@ -109,6 +154,38 @@ public class EntryStream : Stream, IStreamStack
return read;
}
public override async Task<int> ReadAsync(
byte[] buffer,
int offset,
int count,
CancellationToken cancellationToken
)
{
var read = await _stream
.ReadAsync(buffer, offset, count, cancellationToken)
.ConfigureAwait(false);
if (read <= 0)
{
_completed = true;
}
return read;
}
#if !NETFRAMEWORK && !NETSTANDARD2_0
public override async ValueTask<int> ReadAsync(
Memory<byte> buffer,
CancellationToken cancellationToken = default
)
{
var read = await _stream.ReadAsync(buffer, cancellationToken).ConfigureAwait(false);
if (read <= 0)
{
_completed = true;
}
return read;
}
#endif
public override int ReadByte()
{
var value = _stream.ReadByte();
@@ -125,4 +202,11 @@ public class EntryStream : Stream, IStreamStack
public override void Write(byte[] buffer, int offset, int count) =>
throw new NotSupportedException();
public override Task WriteAsync(
byte[] buffer,
int offset,
int count,
CancellationToken cancellationToken
) => throw new NotSupportedException();
}
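A minimal usage sketch of the new async surface (reader is a hypothetical IReader instance; OpenEntryStream is the existing reader API, and DisposeAsync is only available on targets that support it): disposing without reading to the end now skips the rest of the entry asynchronously, keeping the reader aligned on the next header.

await using (var entry = reader.OpenEntryStream())
{
// partial read; DisposeAsync() falls back to SkipEntryAsync() for the rest
var buf = new byte[4096];
await entry.ReadAsync(buf, 0, buf.Length);
}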

View File

@@ -1,10 +1,22 @@
using System;
using System.IO;
using System.Runtime.InteropServices;
using System.Threading;
using System.Threading.Tasks;
namespace SharpCompress.Common;
internal static class ExtractionMethods
{
/// <summary>
/// Gets the appropriate StringComparison for path checks based on the file system.
/// Windows uses case-insensitive file systems, while Unix-like systems use case-sensitive file systems.
/// </summary>
private static StringComparison PathComparison =>
RuntimeInformation.IsOSPlatform(OSPlatform.Windows)
? StringComparison.OrdinalIgnoreCase
: StringComparison.Ordinal;
/// <summary>
/// Extract to specific directory, retaining filename
/// </summary>
@@ -46,7 +58,7 @@ internal static class ExtractionMethods
if (!Directory.Exists(destdir))
{
if (!destdir.StartsWith(fullDestinationDirectoryPath, StringComparison.Ordinal))
if (!destdir.StartsWith(fullDestinationDirectoryPath, PathComparison))
{
throw new ExtractionException(
"Entry is trying to create a directory outside of the destination directory."
@@ -66,12 +78,7 @@ internal static class ExtractionMethods
{
destinationFileName = Path.GetFullPath(destinationFileName);
if (
!destinationFileName.StartsWith(
fullDestinationDirectoryPath,
StringComparison.Ordinal
)
)
if (!destinationFileName.StartsWith(fullDestinationDirectoryPath, PathComparison))
{
throw new ExtractionException(
"Entry is trying to write a file outside of the destination directory."
@@ -116,4 +123,110 @@ internal static class ExtractionMethods
entry.PreserveExtractionOptions(destinationFileName, options);
}
}
public static async Task WriteEntryToDirectoryAsync(
IEntry entry,
string destinationDirectory,
ExtractionOptions? options,
Func<string, ExtractionOptions?, Task> writeAsync,
CancellationToken cancellationToken = default
)
{
string destinationFileName;
var fullDestinationDirectoryPath = Path.GetFullPath(destinationDirectory);
//check for trailing slash.
if (
fullDestinationDirectoryPath[fullDestinationDirectoryPath.Length - 1]
!= Path.DirectorySeparatorChar
)
{
fullDestinationDirectoryPath += Path.DirectorySeparatorChar;
}
if (!Directory.Exists(fullDestinationDirectoryPath))
{
throw new ExtractionException(
$"Directory does not exist to extract to: {fullDestinationDirectoryPath}"
);
}
options ??= new ExtractionOptions() { Overwrite = true };
var file = Path.GetFileName(entry.Key.NotNull("Entry Key is null")).NotNull("File is null");
file = Utility.ReplaceInvalidFileNameChars(file);
if (options.ExtractFullPath)
{
var folder = Path.GetDirectoryName(entry.Key.NotNull("Entry Key is null"))
.NotNull("Directory is null");
var destdir = Path.GetFullPath(Path.Combine(fullDestinationDirectoryPath, folder));
if (!Directory.Exists(destdir))
{
if (!destdir.StartsWith(fullDestinationDirectoryPath, PathComparison))
{
throw new ExtractionException(
"Entry is trying to create a directory outside of the destination directory."
);
}
Directory.CreateDirectory(destdir);
}
destinationFileName = Path.Combine(destdir, file);
}
else
{
destinationFileName = Path.Combine(fullDestinationDirectoryPath, file);
}
if (!entry.IsDirectory)
{
destinationFileName = Path.GetFullPath(destinationFileName);
if (!destinationFileName.StartsWith(fullDestinationDirectoryPath, PathComparison))
{
throw new ExtractionException(
"Entry is trying to write a file outside of the destination directory."
);
}
await writeAsync(destinationFileName, options).ConfigureAwait(false);
}
else if (options.ExtractFullPath && !Directory.Exists(destinationFileName))
{
Directory.CreateDirectory(destinationFileName);
}
}
public static async Task WriteEntryToFileAsync(
IEntry entry,
string destinationFileName,
ExtractionOptions? options,
Func<string, FileMode, Task> openAndWriteAsync,
CancellationToken cancellationToken = default
)
{
if (entry.LinkTarget != null)
{
if (options?.WriteSymbolicLink is null)
{
throw new ExtractionException(
"Entry is a symbolic link but ExtractionOptions.WriteSymbolicLink delegate is null"
);
}
options.WriteSymbolicLink(destinationFileName, entry.LinkTarget);
}
else
{
var fm = FileMode.Create;
options ??= new ExtractionOptions() { Overwrite = true };
if (!options.Overwrite)
{
fm = FileMode.CreateNew;
}
await openAndWriteAsync(destinationFileName, fm).ConfigureAwait(false);
entry.PreserveExtractionOptions(destinationFileName, options);
}
}
}
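A sketch of the traversal guard with hypothetical paths, showing why PathComparison must be case-insensitive on Windows:

var dest = Path.GetFullPath(@"C:\out") + Path.DirectorySeparatorChar;
var target = Path.GetFullPath(Path.Combine(dest, @"..\evil.txt"));
// OrdinalIgnoreCase keeps "c:\OUT\a.txt" inside "C:\out\", while the
// resolved "..\evil.txt" escapes the prefix and fails the check:
var inside = target.StartsWith(dest, StringComparison.OrdinalIgnoreCase);
// inside == false -> ExtractionException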

View File

@@ -24,8 +24,14 @@ internal sealed class GZipFilePart : FilePart
stream.Position = stream.Length - 8;
ReadTrailer();
stream.Position = position;
EntryStartPosition = position;
}
else
{
// For non-seekable streams, we can't read the trailer or track position.
// Set to 0 since the stream will be read sequentially from its current position.
EntryStartPosition = 0;
}
EntryStartPosition = stream.Position;
}
internal long EntryStartPosition { get; }

View File

@@ -67,6 +67,7 @@ internal class SevenZipFilePart : FilePart
}
}
private const uint K_COPY = 0x0;
private const uint K_LZMA2 = 0x21;
private const uint K_LZMA = 0x030101;
private const uint K_PPMD = 0x030401;
@@ -82,6 +83,7 @@ internal class SevenZipFilePart : FilePart
var coder = Folder.NotNull()._coders.First();
return coder._methodId._id switch
{
K_COPY => CompressionType.None,
K_LZMA or K_LZMA2 => CompressionType.LZMA,
K_PPMD => CompressionType.PPMd,
K_B_ZIP2 => CompressionType.BZip2,

View File

@@ -25,6 +25,10 @@ internal sealed class TarHeader
internal const int BLOCK_SIZE = 512;
// Maximum size for long name/link headers to prevent memory exhaustion attacks
// This is generous enough for most real-world scenarios (32KB)
private const int MAX_LONG_NAME_SIZE = 32768;
internal void Write(Stream output)
{
var buffer = new byte[BLOCK_SIZE];
@@ -186,6 +190,15 @@ internal sealed class TarHeader
private string ReadLongName(BinaryReader reader, byte[] buffer)
{
var size = ReadSize(buffer);
// Validate size to prevent memory exhaustion from malformed headers
if (size < 0 || size > MAX_LONG_NAME_SIZE)
{
throw new InvalidFormatException(
$"Long name size {size} is invalid or exceeds maximum allowed size of {MAX_LONG_NAME_SIZE} bytes"
);
}
var nameLength = (int)size;
var nameBytes = reader.ReadBytes(nameLength);
var remainingBytesToRead = BLOCK_SIZE - (nameLength % BLOCK_SIZE);
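Worked example of the padding math above (sketch): tar stores data in whole 512-byte blocks, so a 600-byte long name leaves 600 % 512 = 88 bytes in its final block and 512 - 88 = 424 padding bytes to skip before the next header. (When the size is an exact multiple of 512 the expression yields a full block; real long-name records carry a trailing NUL, so that edge is rarely hit.)

var nameLength = 600;
var padding = 512 - (nameLength % 512); // 424 bytes to the next block boundary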

View File

@@ -66,6 +66,36 @@ internal class TarReadOnlySubStream : SharpCompressStream, IStreamStack
base.Dispose(disposing);
}
#if !NETFRAMEWORK && !NETSTANDARD2_0
public override async System.Threading.Tasks.ValueTask DisposeAsync()
{
if (_isDisposed)
{
return;
}
_isDisposed = true;
#if DEBUG_STREAMS
this.DebugDispose(typeof(TarReadOnlySubStream));
#endif
// Ensure we read all remaining blocks for this entry.
await Stream.SkipAsync(BytesLeftToRead).ConfigureAwait(false);
_amountRead += BytesLeftToRead;
// If the last block wasn't a full 512 bytes, skip the remaining padding bytes.
var bytesInLastBlock = _amountRead % 512;
if (bytesInLastBlock != 0)
{
await Stream.SkipAsync(512 - bytesInLastBlock).ConfigureAwait(false);
}
// Call base Dispose instead of base DisposeAsync to avoid double disposal
base.Dispose(true);
GC.SuppressFinalize(this);
}
#endif
private long BytesLeftToRead { get; set; }
public override bool CanRead => true;
@@ -76,6 +106,10 @@ internal class TarReadOnlySubStream : SharpCompressStream, IStreamStack
public override void Flush() { }
public override System.Threading.Tasks.Task FlushAsync(
System.Threading.CancellationToken cancellationToken
) => System.Threading.Tasks.Task.CompletedTask;
public override long Length => throw new NotSupportedException();
public override long Position
@@ -114,6 +148,48 @@ internal class TarReadOnlySubStream : SharpCompressStream, IStreamStack
return value;
}
public override async System.Threading.Tasks.Task<int> ReadAsync(
byte[] buffer,
int offset,
int count,
System.Threading.CancellationToken cancellationToken
)
{
if (BytesLeftToRead < count)
{
count = (int)BytesLeftToRead;
}
var read = await Stream
.ReadAsync(buffer, offset, count, cancellationToken)
.ConfigureAwait(false);
if (read > 0)
{
BytesLeftToRead -= read;
_amountRead += read;
}
return read;
}
#if !NETFRAMEWORK && !NETSTANDARD2_0
public override async System.Threading.Tasks.ValueTask<int> ReadAsync(
System.Memory<byte> buffer,
System.Threading.CancellationToken cancellationToken = default
)
{
if (BytesLeftToRead < buffer.Length)
{
buffer = buffer.Slice(0, (int)BytesLeftToRead);
}
var read = await Stream.ReadAsync(buffer, cancellationToken).ConfigureAwait(false);
if (read > 0)
{
BytesLeftToRead -= read;
_amountRead += read;
}
return read;
}
#endif
public override long Seek(long offset, SeekOrigin origin) => throw new NotSupportedException();
public override void SetLength(long value) => throw new NotSupportedException();

View File

@@ -91,8 +91,15 @@ internal abstract class ZipFileEntry : ZipHeader
protected void LoadExtra(byte[] extra)
{
for (var i = 0; i < extra.Length - 4; )
for (var i = 0; i < extra.Length; )
{
// Ensure we have at least a header (2-byte ID + 2-byte length)
if (i + 4 > extra.Length)
{
// Incomplete header — stop parsing extras
break;
}
var type = (ExtraDataType)BinaryPrimitives.ReadUInt16LittleEndian(extra.AsSpan(i));
if (!Enum.IsDefined(typeof(ExtraDataType), type))
{
@@ -106,7 +113,17 @@ internal abstract class ZipFileEntry : ZipHeader
if (length > extra.Length)
{
// bad extras block
return;
break; // allow processing optional other blocks
}
// Some ZIP files contain vendor-specific or malformed extra fields where the declared
// data length extends beyond the remaining buffer. This check ensures that
// we only read data within bounds (i + 4 + length <= extra.Length)
// The example here is: 41 43 18 00 41 52 43 30 46 EB FF FF 51 29 03 C6 03 00 00 00 00 00 00 00 00
// No existing zip utility uses 0x4341 ('AC')
if (i + 4 + length > extra.Length)
{
// incomplete or corrupt field
break; // allow processing optional other blocks
}
var data = new byte[length];
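For context, a ZIP extra block is a flat sequence of TLV records: a 2-byte little-endian header ID, a 2-byte data size, then the payload. A standalone sketch of the same walk (WalkExtra is a hypothetical helper, assuming System.Buffers.Binary and System.Collections.Generic are imported):

static IEnumerable<(ushort Id, byte[] Data)> WalkExtra(byte[] extra)
{
for (var i = 0; i + 4 <= extra.Length; )
{
var id = BinaryPrimitives.ReadUInt16LittleEndian(extra.AsSpan(i));
var len = BinaryPrimitives.ReadUInt16LittleEndian(extra.AsSpan(i + 2));
if (i + 4 + len > extra.Length)
{
yield break; // truncated or vendor-specific garbage; stop here
}
yield return (id, extra.AsSpan(i + 4, len).ToArray());
i += 4 + len;
}
}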

View File

@@ -36,11 +36,10 @@ internal class StreamingZipHeaderFactory : ZipHeaderFactory
throw new ArgumentException("Stream must be a SharpCompressStream", nameof(stream));
}
}
SharpCompressStream rewindableStream = (SharpCompressStream)stream;
var rewindableStream = (SharpCompressStream)stream;
while (true)
{
ZipHeader? header;
var reader = new BinaryReader(rewindableStream);
uint headerBytes = 0;
if (
@@ -155,7 +154,7 @@ internal class StreamingZipHeaderFactory : ZipHeaderFactory
}
_lastEntryHeader = null;
header = ReadHeader(headerBytes, reader);
var header = ReadHeader(headerBytes, reader);
if (header is null)
{
yield break;

View File

@@ -24,10 +24,29 @@
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
using System;
using System.Buffers;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
namespace SharpCompress.Compressors.ADC;
/// <summary>
/// Result of an ADC decompression operation
/// </summary>
public class AdcDecompressResult
{
/// <summary>
/// Number of bytes read from input
/// </summary>
public int BytesRead { get; set; }
/// <summary>
/// Decompressed output buffer
/// </summary>
public byte[]? Output { get; set; }
}
/// <summary>
/// Provides static methods for decompressing Apple Data Compression data
/// </summary>
@@ -78,6 +97,173 @@ public static class ADCBase
public static int Decompress(byte[] input, out byte[]? output, int bufferSize = 262144) =>
Decompress(new MemoryStream(input), out output, bufferSize);
/// <summary>
/// Decompresses a byte buffer asynchronously that's compressed with ADC
/// </summary>
/// <param name="input">Compressed buffer</param>
/// <param name="bufferSize">Max size for decompressed data</param>
/// <param name="cancellationToken">Cancellation token</param>
/// <returns>Result containing bytes read and decompressed data</returns>
public static async Task<AdcDecompressResult> DecompressAsync(
byte[] input,
int bufferSize = 262144,
CancellationToken cancellationToken = default
) => await DecompressAsync(new MemoryStream(input), bufferSize, cancellationToken).ConfigureAwait(false);
/// <summary>
/// Decompresses a stream asynchronously that's compressed with ADC
/// </summary>
/// <param name="input">Stream containing compressed data</param>
/// <param name="bufferSize">Max size for decompressed data</param>
/// <param name="cancellationToken">Cancellation token</param>
/// <returns>Result containing bytes read and decompressed data</returns>
public static async Task<AdcDecompressResult> DecompressAsync(
Stream input,
int bufferSize = 262144,
CancellationToken cancellationToken = default
)
{
var result = new AdcDecompressResult();
if (input is null || input.Length == 0)
{
result.BytesRead = 0;
result.Output = null;
return result;
}
var start = (int)input.Position;
var position = (int)input.Position;
int chunkSize;
int offset;
int chunkType;
var buffer = ArrayPool<byte>.Shared.Rent(bufferSize);
var outPosition = 0;
var full = false;
byte[] temp = ArrayPool<byte>.Shared.Rent(3);
try
{
while (position < input.Length)
{
cancellationToken.ThrowIfCancellationRequested();
var readByte = input.ReadByte();
if (readByte == -1)
{
break;
}
chunkType = GetChunkType((byte)readByte);
switch (chunkType)
{
case PLAIN:
chunkSize = GetChunkSize((byte)readByte);
if (outPosition + chunkSize > bufferSize)
{
full = true;
break;
}
var readCount = await input.ReadAsync(
buffer,
outPosition,
chunkSize,
cancellationToken
).ConfigureAwait(false);
outPosition += readCount;
position += readCount + 1;
break;
case TWO_BYTE:
chunkSize = GetChunkSize((byte)readByte);
temp[0] = (byte)readByte;
temp[1] = (byte)input.ReadByte();
offset = GetOffset(temp.AsSpan(0, 2));
if (outPosition + chunkSize > bufferSize)
{
full = true;
break;
}
if (offset == 0)
{
var lastByte = buffer[outPosition - 1];
for (var i = 0; i < chunkSize; i++)
{
buffer[outPosition] = lastByte;
outPosition++;
}
position += 2;
}
else
{
for (var i = 0; i < chunkSize; i++)
{
buffer[outPosition] = buffer[outPosition - offset - 1];
outPosition++;
}
position += 2;
}
break;
case THREE_BYTE:
chunkSize = GetChunkSize((byte)readByte);
temp[0] = (byte)readByte;
temp[1] = (byte)input.ReadByte();
temp[2] = (byte)input.ReadByte();
offset = GetOffset(temp.AsSpan(0, 3));
if (outPosition + chunkSize > bufferSize)
{
full = true;
break;
}
if (offset == 0)
{
var lastByte = buffer[outPosition - 1];
for (var i = 0; i < chunkSize; i++)
{
buffer[outPosition] = lastByte;
outPosition++;
}
position += 3;
}
else
{
for (var i = 0; i < chunkSize; i++)
{
buffer[outPosition] = buffer[outPosition - offset - 1];
outPosition++;
}
position += 3;
}
break;
}
if (full)
{
break;
}
}
var output = new byte[outPosition];
Array.Copy(buffer, output, outPosition);
result.BytesRead = position - start;
result.Output = output;
return result;
}
finally
{
ArrayPool<byte>.Shared.Return(buffer);
ArrayPool<byte>.Shared.Return(temp);
}
}
/// <summary>
/// Decompresses a stream that's compressed with ADC
/// </summary>
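A usage sketch for the new async API (compressedBytes is a hypothetical input; the default 256 KiB output buffer applies):

var result = await ADCBase.DecompressAsync(compressedBytes);
if (result.Output is not null)
{
Console.WriteLine($"{result.BytesRead} bytes in -> {result.Output.Length} bytes out");
}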

View File

@@ -28,6 +28,8 @@
using System;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.IO;
namespace SharpCompress.Compressors.ADC;
@@ -187,6 +189,76 @@ public sealed class ADCStream : Stream, IStreamStack
return copied;
}
public override async Task<int> ReadAsync(
byte[] buffer,
int offset,
int count,
CancellationToken cancellationToken = default
)
{
if (count == 0)
{
return 0;
}
if (buffer is null)
{
throw new ArgumentNullException(nameof(buffer));
}
if (count < 0)
{
throw new ArgumentOutOfRangeException(nameof(count));
}
if (offset < buffer.GetLowerBound(0))
{
throw new ArgumentOutOfRangeException(nameof(offset));
}
if ((offset + count) > buffer.GetLength(0))
{
throw new ArgumentOutOfRangeException(nameof(count));
}
if (_outBuffer is null)
{
var result = await ADCBase.DecompressAsync(
_stream,
cancellationToken: cancellationToken
).ConfigureAwait(false);
_outBuffer = result.Output;
_outPosition = 0;
if (_outBuffer is null || _outBuffer.Length == 0)
{
return 0; // nothing decodable at the current stream position
}
}
var inPosition = offset;
var toCopy = count;
var copied = 0;
while (_outPosition + toCopy >= _outBuffer.Length)
{
cancellationToken.ThrowIfCancellationRequested();
var piece = _outBuffer.Length - _outPosition;
Array.Copy(_outBuffer, _outPosition, buffer, inPosition, piece);
inPosition += piece;
copied += piece;
_position += piece;
toCopy -= piece;
var result = await ADCBase.DecompressAsync(
_stream,
cancellationToken: cancellationToken
).ConfigureAwait(false);
_outBuffer = result.Output;
_outPosition = 0;
if (result.BytesRead == 0 || _outBuffer is null || _outBuffer.Length == 0)
{
return copied;
}
}
Array.Copy(_outBuffer, _outPosition, buffer, inPosition, toCopy);
_outPosition += toCopy;
_position += toCopy;
copied += toCopy;
return copied;
}
public override long Seek(long offset, SeekOrigin origin) => throw new NotSupportedException();
public override void SetLength(long value) => throw new NotSupportedException();

View File

@@ -0,0 +1,72 @@
using System;
using System.IO;
namespace SharpCompress.Compressors.Arj
{
[CLSCompliant(true)]
public class BitReader
{
private readonly Stream _input;
private int _bitBuffer; // currently buffered bits
private int _bitCount; // number of bits in buffer
public BitReader(Stream input)
{
_input = input ?? throw new ArgumentNullException(nameof(input));
_bitBuffer = 0;
_bitCount = 0;
}
/// <summary>
/// Reads a single bit from the stream. Returns 0 or 1.
/// </summary>
public int ReadBit()
{
if (_bitCount == 0)
{
int nextByte = _input.ReadByte();
if (nextByte < 0)
{
throw new EndOfStreamException("No more data available in BitReader.");
}
_bitBuffer = nextByte;
_bitCount = 8;
}
int bit = (_bitBuffer >> (_bitCount - 1)) & 1;
_bitCount--;
return bit;
}
/// <summary>
/// Reads n bits (up to 32) from the stream.
/// </summary>
public int ReadBits(int count)
{
if (count < 0 || count > 32)
{
throw new ArgumentOutOfRangeException(
nameof(count),
"Count must be between 0 and 32."
);
}
int result = 0;
for (int i = 0; i < count; i++)
{
result = (result << 1) | ReadBit();
}
return result;
}
/// <summary>
/// Resets any buffered bits.
/// </summary>
public void AlignToByte()
{
_bitCount = 0;
_bitBuffer = 0;
}
}
}
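Bits are consumed most-significant-bit first; a quick sketch:

// One byte 0b1011_0001 split as ReadBits(3) then ReadBits(5):
var bits = new BitReader(new MemoryStream(new byte[] { 0b1011_0001 }));
var hi = bits.ReadBits(3); // 0b101 = 5
var lo = bits.ReadBits(5); // 0b10001 = 17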

View File

@@ -0,0 +1,43 @@
using System;
using System.Collections;
using System.Collections.Generic;
namespace SharpCompress.Compressors.Arj
{
/// <summary>
/// Iterator that reads & pushes values back into the ring buffer.
/// </summary>
public class HistoryIterator : IEnumerator<byte>
{
private int _index;
private readonly IRingBuffer _ring;
public HistoryIterator(IRingBuffer ring, int startIndex)
{
_ring = ring;
_index = startIndex;
}
public bool MoveNext()
{
Current = _ring[_index];
_index = unchecked(_index + 1);
// Push value back into the ring buffer
_ring.Push(Current);
return true; // iterator is infinite
}
public void Reset()
{
throw new NotSupportedException();
}
public byte Current { get; private set; }
object IEnumerator.Current => Current;
public void Dispose() { }
}
}

View File

@@ -0,0 +1,218 @@
using System;
using System.Collections.Generic;
using System.IO;
using System.Text;
namespace SharpCompress.Compressors.Arj
{
[CLSCompliant(true)]
public enum NodeType
{
Leaf,
Branch,
}
[CLSCompliant(true)]
public sealed class TreeEntry
{
public readonly NodeType Type;
public readonly int LeafValue;
public readonly int BranchIndex;
public const int MAX_INDEX = 4096;
private TreeEntry(NodeType type, int leafValue, int branchIndex)
{
Type = type;
LeafValue = leafValue;
BranchIndex = branchIndex;
}
public static TreeEntry Leaf(int value)
{
return new TreeEntry(NodeType.Leaf, value, -1);
}
public static TreeEntry Branch(int index)
{
if (index >= MAX_INDEX)
{
throw new ArgumentOutOfRangeException(
nameof(index),
"Branch index exceeds MAX_INDEX"
);
}
return new TreeEntry(NodeType.Branch, 0, index);
}
}
[CLSCompliant(true)]
public sealed class HuffTree
{
private readonly List<TreeEntry> _tree;
public HuffTree(int capacity = 0)
{
_tree = new List<TreeEntry>(capacity);
}
public void SetSingle(int value)
{
_tree.Clear();
_tree.Add(TreeEntry.Leaf(value));
}
public void BuildTree(byte[] lengths, int count)
{
if (lengths == null)
{
throw new ArgumentNullException(nameof(lengths));
}
if (count < 0 || count > lengths.Length)
{
throw new ArgumentOutOfRangeException(nameof(count));
}
if (count > TreeEntry.MAX_INDEX / 2)
{
throw new ArgumentException(
$"Count exceeds maximum allowed: {TreeEntry.MAX_INDEX / 2}"
);
}
byte[] slice = new byte[count];
Array.Copy(lengths, slice, count);
BuildTree(slice);
}
public void BuildTree(byte[] valueLengths)
{
if (valueLengths == null)
{
throw new ArgumentNullException(nameof(valueLengths));
}
if (valueLengths.Length > TreeEntry.MAX_INDEX / 2)
{
throw new InvalidOperationException("Too many code lengths");
}
_tree.Clear();
int maxAllocated = 1; // start with a single (root) node
for (byte currentLen = 1; ; currentLen++)
{
// add missing branches up to current limit
int maxLimit = maxAllocated;
for (int i = _tree.Count; i < maxLimit; i++)
{
// TreeEntry.Branch may throw if index too large
try
{
_tree.Add(TreeEntry.Branch(maxAllocated));
}
catch (ArgumentOutOfRangeException e)
{
_tree.Clear();
throw new InvalidOperationException("Branch index exceeds limit", e);
}
// each branch node allocates two children
maxAllocated += 2;
}
// fill tree with leaves found in the lengths table at the current length
bool moreLeaves = false;
for (int value = 0; value < valueLengths.Length; value++)
{
byte len = valueLengths[value];
if (len == currentLen)
{
_tree.Add(TreeEntry.Leaf(value));
}
else if (len > currentLen)
{
moreLeaves = true; // there are more leaves to process
}
}
// sanity check (too many leaves)
if (_tree.Count > maxAllocated)
{
throw new InvalidOperationException("Too many leaves");
}
// stop when no longer finding longer codes
if (!moreLeaves)
{
break;
}
}
// ensure tree is complete
if (_tree.Count != maxAllocated)
{
throw new InvalidOperationException(
$"Missing some leaves: tree count = {_tree.Count}, expected = {maxAllocated}"
);
}
}
public int ReadEntry(BitReader reader)
{
if (_tree.Count == 0)
{
throw new InvalidOperationException("Tree not initialized");
}
TreeEntry node = _tree[0];
while (true)
{
if (node.Type == NodeType.Leaf)
{
return node.LeafValue;
}
int bit = reader.ReadBit();
int index = node.BranchIndex + bit;
if (index >= _tree.Count)
{
throw new InvalidOperationException("Invalid branch index during read");
}
node = _tree[index];
}
}
public override string ToString()
{
var result = new StringBuilder();
void FormatStep(int index, string prefix)
{
var node = _tree[index];
if (node.Type == NodeType.Leaf)
{
result.AppendLine($"{prefix} -> {node.LeafValue}");
}
else
{
FormatStep(node.BranchIndex, prefix + "0");
FormatStep(node.BranchIndex + 1, prefix + "1");
}
}
if (_tree.Count > 0)
{
FormatStep(0, "");
}
return result.ToString();
}
}
}
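A decoding sketch: code lengths {1, 2, 2} yield the canonical codes 0, 10 and 11 for symbols 0, 1 and 2, and the breadth-first table resolves them bit by bit.

var tree = new HuffTree();
tree.BuildTree(new byte[] { 1, 2, 2 });
// bit stream 0 | 10 | 11 packed MSB-first into one byte; trailing bits unused
var bits = new BitReader(new MemoryStream(new byte[] { 0b0101_1000 }));
Console.WriteLine(tree.ReadEntry(bits)); // 0
Console.WriteLine(tree.ReadEntry(bits)); // 1
Console.WriteLine(tree.ReadEntry(bits)); // 2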

View File

@@ -0,0 +1,9 @@
namespace SharpCompress.Compressors.Arj
{
public interface ILhaDecoderConfig
{
int HistoryBits { get; }
int OffsetBits { get; }
RingBuffer RingBuffer { get; }
}
}

View File

@@ -0,0 +1,17 @@
namespace SharpCompress.Compressors.Arj
{
public interface IRingBuffer
{
int BufferSize { get; }
int Cursor { get; }
void SetCursor(int pos);
void Push(byte value);
HistoryIterator IterFromOffset(int offset);
HistoryIterator IterFromPos(int pos);
byte this[int index] { get; }
}
}

View File

@@ -0,0 +1,191 @@
using System;
using System.Collections.Generic;
using System.IO;
using SharpCompress.IO;
namespace SharpCompress.Compressors.Arj
{
[CLSCompliant(true)]
public sealed class LHDecoderStream : Stream, IStreamStack
{
#if DEBUG_STREAMS
long IStreamStack.InstanceId { get; set; }
#endif
int IStreamStack.DefaultBufferSize { get; set; }
Stream IStreamStack.BaseStream() => _stream;
int IStreamStack.BufferSize
{
get => 0;
set { }
}
int IStreamStack.BufferPosition
{
get => 0;
set { }
}
void IStreamStack.SetPosition(long position) { }
private readonly BitReader _bitReader;
private readonly Stream _stream;
// Buffer containing *all* bytes decoded so far.
private readonly List<byte> _buffer = new();
private long _readPosition;
private readonly int _originalSize;
private bool _finishedDecoding;
private bool _disposed;
private const int THRESHOLD = 3;
public LHDecoderStream(Stream compressedStream, int originalSize)
{
_stream = compressedStream ?? throw new ArgumentNullException(nameof(compressedStream));
if (!compressedStream.CanRead)
throw new ArgumentException(
"compressedStream must be readable.",
nameof(compressedStream)
);
_bitReader = new BitReader(compressedStream);
_originalSize = originalSize;
_readPosition = 0;
_finishedDecoding = (originalSize == 0);
}
public Stream BaseStream => _stream;
public override bool CanRead => true;
public override bool CanSeek => false;
public override bool CanWrite => false;
public override long Length => _originalSize;
public override long Position
{
get => _readPosition;
set => throw new NotSupportedException();
}
/// <summary>
/// Decodes a single element (literal or back-reference) and appends it to _buffer.
/// Returns true if data was added, or false if all input has already been decoded.
/// </summary>
private bool DecodeNext()
{
if (_buffer.Count >= _originalSize)
{
_finishedDecoding = true;
return false;
}
int len = DecodeVal(0, 7);
if (len == 0)
{
byte nextChar = (byte)_bitReader.ReadBits(8);
_buffer.Add(nextChar);
}
else
{
int repCount = len + THRESHOLD - 1;
int backPtr = DecodeVal(9, 13);
if (backPtr >= _buffer.Count)
throw new InvalidDataException("Invalid back_ptr in LH stream");
int srcIndex = _buffer.Count - 1 - backPtr;
for (int j = 0; j < repCount && _buffer.Count < _originalSize; j++)
{
byte b = _buffer[srcIndex];
_buffer.Add(b);
srcIndex++;
// srcIndex may grow; it's allowed (source region can overlap destination)
}
}
if (_buffer.Count >= _originalSize)
{
_finishedDecoding = true;
}
return true;
}
private int DecodeVal(int from, int to)
{
int add = 0;
int bit = from;
while (bit < to && _bitReader.ReadBits(1) == 1)
{
add |= 1 << bit;
bit++;
}
int res = bit > 0 ? _bitReader.ReadBits(bit) : 0;
return res + add;
}
/// <summary>
/// Reads decompressed bytes into buffer[offset..offset+count].
/// The method decodes additional data on demand when needed.
/// </summary>
public override int Read(byte[] buffer, int offset, int count)
{
if (_disposed)
throw new ObjectDisposedException(nameof(LHDecoderStream));
if (buffer == null)
throw new ArgumentNullException(nameof(buffer));
if (offset < 0 || count < 0 || offset + count > buffer.Length)
throw new ArgumentOutOfRangeException("offset/count");
if (_readPosition >= _originalSize)
return 0; // EOF
int totalRead = 0;
while (totalRead < count && _readPosition < _originalSize)
{
if (_readPosition >= _buffer.Count)
{
bool had = DecodeNext();
if (!had)
{
break;
}
}
int available = _buffer.Count - (int)_readPosition;
if (available <= 0)
{
if (!_finishedDecoding)
{
continue;
}
break;
}
int toCopy = Math.Min(available, count - totalRead);
_buffer.CopyTo((int)_readPosition, buffer, offset + totalRead, toCopy);
_readPosition += toCopy;
totalRead += toCopy;
}
return totalRead;
}
public override void Flush() => throw new NotSupportedException();
public override long Seek(long offset, SeekOrigin origin) =>
throw new NotSupportedException();
public override void SetLength(long value) => throw new NotSupportedException();
public override void Write(byte[] buffer, int offset, int count) =>
throw new NotSupportedException();
}
}
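Usage sketch (compressedStream and the 1 KiB original size are hypothetical): the decoder inflates lazily as Read is called, so callers loop until the declared original size is consumed.

using var decoder = new LHDecoderStream(compressedStream, originalSize: 1024);
var output = new byte[1024];
var total = 0;
int n;
while ((n = decoder.Read(output, total, output.Length - total)) > 0)
{
total += n;
}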

View File

@@ -0,0 +1,9 @@
namespace SharpCompress.Compressors.Arj
{
public class Lh5DecoderCfg : ILhaDecoderConfig
{
public int HistoryBits => 14;
public int OffsetBits => 4;
public RingBuffer RingBuffer { get; } = new RingBuffer(1 << 14);
}
}

View File

@@ -0,0 +1,9 @@
namespace SharpCompress.Compressors.Arj
{
public class Lh7DecoderCfg : ILhaDecoderConfig
{
public int HistoryBits => 17;
public int OffsetBits => 5;
public RingBuffer RingBuffer { get; } = new RingBuffer(1 << 17);
}
}

View File

@@ -0,0 +1,363 @@
using System;
using System.Data;
using System.IO;
using System.Linq;
using SharpCompress.IO;
namespace SharpCompress.Compressors.Arj
{
[CLSCompliant(true)]
public sealed class LhaStream<C> : Stream, IStreamStack
where C : ILhaDecoderConfig, new()
{
private readonly BitReader _bitReader;
private readonly Stream _stream;
private readonly HuffTree _commandTree;
private readonly HuffTree _offsetTree;
private int _remainingCommands;
private (int offset, int count)? _copyProgress;
private readonly RingBuffer _ringBuffer;
private readonly C _config = new C();
private const int NUM_COMMANDS = 510;
private const int NUM_TEMP_CODELEN = 20;
private readonly int _originalSize;
private int _producedBytes = 0;
#if DEBUG_STREAMS
long IStreamStack.InstanceId { get; set; }
#endif
int IStreamStack.DefaultBufferSize { get; set; }
Stream IStreamStack.BaseStream() => _stream;
int IStreamStack.BufferSize
{
get => 0;
set { }
}
int IStreamStack.BufferPosition
{
get => 0;
set { }
}
void IStreamStack.SetPosition(long position) { }
public LhaStream(Stream compressedStream, int originalSize)
{
_stream = compressedStream ?? throw new ArgumentNullException(nameof(compressedStream));
_bitReader = new BitReader(compressedStream);
_ringBuffer = _config.RingBuffer;
_commandTree = new HuffTree(NUM_COMMANDS * 2);
_offsetTree = new HuffTree(NUM_TEMP_CODELEN * 2);
_remainingCommands = 0;
_copyProgress = null;
_originalSize = originalSize;
}
public override bool CanRead => true;
public override bool CanSeek => false;
public override bool CanWrite => false;
public override long Length => throw new NotSupportedException();
public override long Position
{
get => throw new NotSupportedException();
set => throw new NotSupportedException();
}
public override void Flush() { }
public override long Seek(long offset, SeekOrigin origin) =>
throw new NotSupportedException();
public override void SetLength(long value) => throw new NotSupportedException();
public override void Write(byte[] buffer, int offset, int count) =>
throw new NotSupportedException();
public override int Read(byte[] buffer, int offset, int count)
{
if (buffer == null)
{
throw new ArgumentNullException(nameof(buffer));
}
if (offset < 0 || count < 0 || (offset + count) > buffer.Length)
{
throw new ArgumentOutOfRangeException();
}
if (_producedBytes >= _originalSize)
{
return 0; // EOF
}
if (count == 0)
{
return 0;
}
int bytesRead = FillBuffer(buffer);
return bytesRead;
}
private byte ReadCodeLength()
{
int len = _bitReader.ReadBits(3); // int, so the overflow guard below can actually trip
if (len == 7)
{
while (_bitReader.ReadBit() != 0)
{
len++;
if (len > 255)
{
throw new InvalidOperationException("Code length overflow");
}
}
}
return (byte)len;
}
private int ReadCodeSkip(int skipRange)
{
int bits;
int increment;
switch (skipRange)
{
case 0:
return 1;
case 1:
bits = 4;
increment = 3; // 3..=18
break;
default:
bits = 9;
increment = 20; // 20..=531
break;
}
int skip = _bitReader.ReadBits(bits);
return skip + increment;
}
private void ReadTempTree()
{
byte[] codeLengths = new byte[NUM_TEMP_CODELEN];
// number of codes to read (5 bits)
int numCodes = _bitReader.ReadBits(5);
// single code only
if (numCodes == 0)
{
int code = _bitReader.ReadBits(5);
_offsetTree.SetSingle((byte)code);
return;
}
if (numCodes > NUM_TEMP_CODELEN)
{
throw new Exception("temporary codelen table has invalid size");
}
// read actual lengths
int count = Math.Min(3, numCodes);
for (int i = 0; i < count; i++)
{
codeLengths[i] = (byte)ReadCodeLength();
}
// 2-bit skip value follows
int skip = _bitReader.ReadBits(2);
if (3 + skip > numCodes)
{
throw new Exception("temporary codelen table has invalid size");
}
for (int i = 3 + skip; i < numCodes; i++)
{
codeLengths[i] = (byte)ReadCodeLength();
}
_offsetTree.BuildTree(codeLengths, numCodes);
}
private void ReadCommandTree()
{
byte[] codeLengths = new byte[NUM_COMMANDS];
// number of codes to read (9 bits)
int numCodes = _bitReader.ReadBits(9);
// single code only
if (numCodes == 0)
{
int code = _bitReader.ReadBits(9);
_commandTree.SetSingle((ushort)code);
return;
}
if (numCodes > NUM_COMMANDS)
{
throw new Exception("commands codelen table has invalid size");
}
int index = 0;
while (index < numCodes)
{
for (int n = 0; n < numCodes - index; n++)
{
int code = _offsetTree.ReadEntry(_bitReader);
if (code >= 0 && code <= 2) // skip range
{
int skipCount = ReadCodeSkip(code);
index += n + skipCount;
goto outerLoop;
}
else
{
codeLengths[index + n] = (byte)(code - 2);
}
}
break;
outerLoop:
;
}
_commandTree.BuildTree(codeLengths, numCodes);
}
private void ReadOffsetTree()
{
int numCodes = _bitReader.ReadBits(_config.OffsetBits);
if (numCodes == 0)
{
int code = _bitReader.ReadBits(_config.OffsetBits);
_offsetTree.SetSingle(code);
return;
}
if (numCodes > _config.HistoryBits)
{
throw new InvalidDataException("Offset code table too large");
}
byte[] codeLengths = new byte[NUM_TEMP_CODELEN];
for (int i = 0; i < numCodes; i++)
{
codeLengths[i] = (byte)ReadCodeLength();
}
_offsetTree.BuildTree(codeLengths, numCodes);
}
private void BeginNewBlock()
{
ReadTempTree();
ReadCommandTree();
ReadOffsetTree();
}
private int ReadCommand() => _commandTree.ReadEntry(_bitReader);
private int ReadOffset()
{
int bits = _offsetTree.ReadEntry(_bitReader);
if (bits <= 1)
{
return bits;
}
int res = _bitReader.ReadBits(bits - 1);
return res | (1 << (bits - 1));
}
private int CopyFromHistory(byte[] target, int targetIndex, int offset, int count)
{
var historyIter = _ringBuffer.IterFromOffset(offset);
int copied = 0;
while (
copied < count && historyIter.MoveNext() && (targetIndex + copied) < target.Length
)
{
target[targetIndex + copied] = historyIter.Current;
copied++;
}
if (copied < count)
{
_copyProgress = (offset, count - copied);
}
return copied;
}
public int FillBuffer(byte[] buffer)
{
int bufLen = buffer.Length;
int bufIndex = 0;
// stop when we reached original size
if (_producedBytes >= _originalSize)
{
return 0;
}
// calculate limit, so that we don't go over the original size
int remaining = (int)Math.Min(bufLen, _originalSize - _producedBytes);
while (bufIndex < remaining)
{
if (_copyProgress.HasValue)
{
var (offset, count) = _copyProgress.Value;
_copyProgress = null; // clear first; re-recorded below if still incomplete
int copied = CopyFromHistory(
buffer,
bufIndex,
offset,
Math.Min(count, remaining - bufIndex)
);
bufIndex += copied;
if (copied < count)
{
_copyProgress = (offset, count - copied); // keep the unclamped remainder
}
}
if (_remainingCommands == 0)
{
_remainingCommands = _bitReader.ReadBits(16);
if (bufIndex + _remainingCommands > remaining)
{
break;
}
BeginNewBlock();
}
_remainingCommands--;
int command = ReadCommand();
if (command >= 0 && command <= 0xFF)
{
byte value = (byte)command;
buffer[bufIndex++] = value;
_ringBuffer.Push(value);
}
else
{
int count = command - 0x100 + 3;
int offset = ReadOffset();
int copyCount = Math.Min(count, remaining - bufIndex);
int copied = CopyFromHistory(buffer, bufIndex, offset, copyCount);
bufIndex += copied;
if (copied < count) // record the remainder so the next call resumes this match
{
_copyProgress = (offset, count - copied);
}
}
}
_producedBytes += bufIndex;
return bufIndex;
}
}
}
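Usage sketch, assuming an lh5-style entry (16 KiB history window via Lh5DecoderCfg); compressedStream and originalSize are hypothetical inputs:

using var lha = new LhaStream<Lh5DecoderCfg>(compressedStream, originalSize);
var data = new byte[originalSize];
var read = lha.Read(data, 0, data.Length);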

View File

@@ -0,0 +1,67 @@
using System;
using System.Collections;
using System.Collections.Generic;
namespace SharpCompress.Compressors.Arj
{
/// <summary>
/// A fixed-size ring buffer whose size must be a power of two.
/// </summary>
public class RingBuffer : IRingBuffer
{
private readonly byte[] _buffer;
private int _cursor;
public int BufferSize { get; }
public int Cursor => _cursor;
private readonly int _mask;
public RingBuffer(int size)
{
if ((size & (size - 1)) != 0)
{
throw new ArgumentException("RingArrayBuffer size must be a power of two");
}
BufferSize = size;
_buffer = new byte[size];
_cursor = 0;
_mask = size - 1;
// Fill with spaces
for (int i = 0; i < size; i++)
{
_buffer[i] = (byte)' ';
}
}
public void SetCursor(int pos)
{
_cursor = pos & _mask;
}
public void Push(byte value)
{
int index = _cursor;
_buffer[index & _mask] = value;
_cursor = (index + 1) & _mask;
}
public byte this[int index] => _buffer[index & _mask];
public HistoryIterator IterFromOffset(int offset)
{
int masked = (offset & _mask) + 1;
int startIndex = _cursor + BufferSize - masked;
return new HistoryIterator(this, startIndex);
}
public HistoryIterator IterFromPos(int pos)
{
int startIndex = pos & _mask;
return new HistoryIterator(this, startIndex);
}
}
}
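The offset convention matters for the LZ copies above: IterFromOffset(0) starts at the most recently pushed byte, IterFromOffset(1) one byte earlier, and the iterator re-pushes every byte it yields, which is what lets overlapping matches self-extend. A small sketch:

var ring = new RingBuffer(16);
ring.Push((byte)'a');
ring.Push((byte)'b');
var it = ring.IterFromOffset(0);
it.MoveNext();
// it.Current == (byte)'b', and 'b' was pushed again, extending the history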

View File

@@ -1,5 +1,7 @@
using System;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.IO;
namespace SharpCompress.Compressors.BZip2;
@@ -96,13 +98,37 @@ public sealed class BZip2Stream : Stream, IStreamStack
public override void SetLength(long value) => stream.SetLength(value);
#if !NETFRAMEWORK&& !NETSTANDARD2_0
#if !NETFRAMEWORK && !NETSTANDARD2_0
public override int Read(Span<byte> buffer) => stream.Read(buffer);
public override void Write(ReadOnlySpan<byte> buffer) => stream.Write(buffer);
public override async ValueTask<int> ReadAsync(
Memory<byte> buffer,
CancellationToken cancellationToken = default
) => await stream.ReadAsync(buffer, cancellationToken).ConfigureAwait(false);
public override async ValueTask WriteAsync(
ReadOnlyMemory<byte> buffer,
CancellationToken cancellationToken = default
) => await stream.WriteAsync(buffer, cancellationToken).ConfigureAwait(false);
#endif
public override async Task<int> ReadAsync(
byte[] buffer,
int offset,
int count,
CancellationToken cancellationToken = default
) => await stream.ReadAsync(buffer, offset, count, cancellationToken).ConfigureAwait(false);
public override async Task WriteAsync(
byte[] buffer,
int offset,
int count,
CancellationToken cancellationToken = default
) => await stream.WriteAsync(buffer, offset, count, cancellationToken).ConfigureAwait(false);
public override void Write(byte[] buffer, int offset, int count) =>
stream.Write(buffer, offset, count);

View File

@@ -2,6 +2,8 @@
using System;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.IO;
/*
@@ -1127,6 +1129,28 @@ internal class CBZip2InputStream : Stream, IStreamStack
return k;
}
public override Task<int> ReadAsync(
byte[] buffer,
int offset,
int count,
CancellationToken cancellationToken = default
)
{
var c = -1;
int k;
for (k = 0; k < count; ++k)
{
cancellationToken.ThrowIfCancellationRequested();
c = ReadByte();
if (c == -1)
{
break;
}
buffer[k + offset] = (byte)c;
}
return Task.FromResult(k);
}
public override long Seek(long offset, SeekOrigin origin) => 0;
public override void SetLength(long value) { }

View File

@@ -1,5 +1,7 @@
using System;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.IO;
/*
@@ -2022,6 +2024,21 @@ internal sealed class CBZip2OutputStream : Stream, IStreamStack
}
}
public override Task WriteAsync(
byte[] buffer,
int offset,
int count,
CancellationToken cancellationToken = default
)
{
for (var k = 0; k < count; ++k)
{
cancellationToken.ThrowIfCancellationRequested();
WriteByte(buffer[k + offset]);
}
return Task.CompletedTask;
}
public override bool CanRead => false;
public override bool CanSeek => false;

View File

@@ -28,6 +28,7 @@ using System;
using System.IO;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.IO;
namespace SharpCompress.Compressors.Deflate;
@@ -289,6 +290,34 @@ public class DeflateStream : Stream, IStreamStack
_baseStream.Flush();
}
public override async Task FlushAsync(CancellationToken cancellationToken)
{
if (_disposed)
{
throw new ObjectDisposedException("DeflateStream");
}
await _baseStream.FlushAsync(cancellationToken).ConfigureAwait(false);
}
#if !NETFRAMEWORK && !NETSTANDARD2_0
public override async ValueTask DisposeAsync()
{
if (_disposed)
{
return;
}
_disposed = true;
if (_baseStream != null)
{
await _baseStream.DisposeAsync().ConfigureAwait(false);
}
#if DEBUG_STREAMS
this.DebugDispose(typeof(DeflateStream));
#endif
await base.DisposeAsync().ConfigureAwait(false);
}
#endif
/// <summary>
/// Read data from the stream.
/// </summary>
@@ -325,6 +354,36 @@ public class DeflateStream : Stream, IStreamStack
return _baseStream.Read(buffer, offset, count);
}
public override async Task<int> ReadAsync(
byte[] buffer,
int offset,
int count,
CancellationToken cancellationToken
)
{
if (_disposed)
{
throw new ObjectDisposedException("DeflateStream");
}
return await _baseStream
.ReadAsync(buffer, offset, count, cancellationToken)
.ConfigureAwait(false);
}
#if !NETFRAMEWORK && !NETSTANDARD2_0
public override async ValueTask<int> ReadAsync(
Memory<byte> buffer,
CancellationToken cancellationToken = default
)
{
if (_disposed)
{
throw new ObjectDisposedException("DeflateStream");
}
return await _baseStream.ReadAsync(buffer, cancellationToken).ConfigureAwait(false);
}
#endif
public override int ReadByte()
{
if (_disposed)
@@ -386,6 +445,36 @@ public class DeflateStream : Stream, IStreamStack
_baseStream.Write(buffer, offset, count);
}
public override async Task WriteAsync(
byte[] buffer,
int offset,
int count,
CancellationToken cancellationToken
)
{
if (_disposed)
{
throw new ObjectDisposedException("DeflateStream");
}
await _baseStream
.WriteAsync(buffer, offset, count, cancellationToken)
.ConfigureAwait(false);
}
#if !NETFRAMEWORK && !NETSTANDARD2_0
public override async ValueTask WriteAsync(
ReadOnlyMemory<byte> buffer,
CancellationToken cancellationToken = default
)
{
if (_disposed)
{
throw new ObjectDisposedException("DeflateStream");
}
await _baseStream.WriteAsync(buffer, cancellationToken).ConfigureAwait(false);
}
#endif
public override void WriteByte(byte value)
{
if (_disposed)

View File

@@ -30,6 +30,8 @@ using System;
using System.Buffers.Binary;
using System.IO;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.IO;
namespace SharpCompress.Compressors.Deflate;
@@ -257,6 +259,15 @@ public class GZipStream : Stream, IStreamStack
BaseStream.Flush();
}
public override async Task FlushAsync(CancellationToken cancellationToken)
{
if (_disposed)
{
throw new ObjectDisposedException("GZipStream");
}
await BaseStream.FlushAsync(cancellationToken).ConfigureAwait(false);
}
/// <summary>
/// Read and decompress data from the source stream.
/// </summary>
@@ -309,6 +320,54 @@ public class GZipStream : Stream, IStreamStack
return n;
}
public override async Task<int> ReadAsync(
byte[] buffer,
int offset,
int count,
CancellationToken cancellationToken
)
{
if (_disposed)
{
throw new ObjectDisposedException("GZipStream");
}
var n = await BaseStream
.ReadAsync(buffer, offset, count, cancellationToken)
.ConfigureAwait(false);
if (!_firstReadDone)
{
_firstReadDone = true;
FileName = BaseStream._GzipFileName;
Comment = BaseStream._GzipComment;
LastModified = BaseStream._GzipMtime;
}
return n;
}
#if !NETFRAMEWORK && !NETSTANDARD2_0
public override async ValueTask<int> ReadAsync(
Memory<byte> buffer,
CancellationToken cancellationToken = default
)
{
if (_disposed)
{
throw new ObjectDisposedException("GZipStream");
}
var n = await BaseStream.ReadAsync(buffer, cancellationToken).ConfigureAwait(false);
if (!_firstReadDone)
{
_firstReadDone = true;
FileName = BaseStream._GzipFileName;
Comment = BaseStream._GzipComment;
LastModified = BaseStream._GzipMtime;
}
return n;
}
#endif
/// <summary>
/// Calling this method always throws a <see cref="NotImplementedException"/>.
/// </summary>
@@ -368,6 +427,77 @@ public class GZipStream : Stream, IStreamStack
BaseStream.Write(buffer, offset, count);
}
public override async Task WriteAsync(
byte[] buffer,
int offset,
int count,
CancellationToken cancellationToken
)
{
if (_disposed)
{
throw new ObjectDisposedException("GZipStream");
}
if (BaseStream._streamMode == ZlibBaseStream.StreamMode.Undefined)
{
if (BaseStream._wantCompress)
{
// first write in compression mode, so emit the GZIP header
_headerByteCount = EmitHeader();
}
else
{
throw new InvalidOperationException();
}
}
await BaseStream.WriteAsync(buffer, offset, count, cancellationToken).ConfigureAwait(false);
}
#if !NETFRAMEWORK && !NETSTANDARD2_0
public override async ValueTask WriteAsync(
ReadOnlyMemory<byte> buffer,
CancellationToken cancellationToken = default
)
{
if (_disposed)
{
throw new ObjectDisposedException("GZipStream");
}
if (BaseStream._streamMode == ZlibBaseStream.StreamMode.Undefined)
{
if (BaseStream._wantCompress)
{
// first write in compression mode, so emit the GZIP header
_headerByteCount = EmitHeader();
}
else
{
throw new InvalidOperationException();
}
}
await BaseStream.WriteAsync(buffer, cancellationToken).ConfigureAwait(false);
}
public override async ValueTask DisposeAsync()
{
if (_disposed)
{
return;
}
_disposed = true;
if (BaseStream != null)
{
await BaseStream.DisposeAsync().ConfigureAwait(false);
}
#if DEBUG_STREAMS
this.DebugDispose(typeof(GZipStream));
#endif
await base.DisposeAsync().ConfigureAwait(false);
}
#endif
#endregion Stream methods
public string? Comment

View File

@@ -31,6 +31,8 @@ using System.Buffers.Binary;
using System.Collections.Generic;
using System.IO;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.Common.Tar.Headers;
using SharpCompress.IO;
@@ -197,6 +199,69 @@ internal class ZlibBaseStream : Stream, IStreamStack
} while (!done);
}
public override async Task WriteAsync(
byte[] buffer,
int offset,
int count,
CancellationToken cancellationToken
)
{
// workitem 7159
// calculate the CRC on the uncompressed data (before writing)
if (crc != null)
{
crc.SlurpBlock(buffer, offset, count);
}
if (_streamMode == StreamMode.Undefined)
{
_streamMode = StreamMode.Writer;
}
else if (_streamMode != StreamMode.Writer)
{
throw new ZlibException("Cannot Write after Reading.");
}
if (count == 0)
{
return;
}
// first reference of z property will initialize the private var _z
z.InputBuffer = buffer;
_z.NextIn = offset;
_z.AvailableBytesIn = count;
var done = false;
do
{
_z.OutputBuffer = workingBuffer;
_z.NextOut = 0;
_z.AvailableBytesOut = _workingBuffer.Length;
var rc = (_wantCompress) ? _z.Deflate(_flushMode) : _z.Inflate(_flushMode);
if (rc != ZlibConstants.Z_OK && rc != ZlibConstants.Z_STREAM_END)
{
throw new ZlibException((_wantCompress ? "de" : "in") + "flating: " + _z.Message);
}
await _stream
.WriteAsync(
_workingBuffer,
0,
_workingBuffer.Length - _z.AvailableBytesOut,
cancellationToken
)
.ConfigureAwait(false);
done = _z.AvailableBytesIn == 0 && _z.AvailableBytesOut != 0;
// If GZIP and de-compress, we're done when 8 bytes remain.
if (_flavor == ZlibStreamFlavor.GZIP && !_wantCompress)
{
done = (_z.AvailableBytesIn == 8 && _z.AvailableBytesOut != 0);
}
} while (!done);
}
private void finish()
{
if (_z is null)
@@ -335,6 +400,111 @@ internal class ZlibBaseStream : Stream, IStreamStack
}
}
private async Task finishAsync(CancellationToken cancellationToken = default)
{
if (_z is null)
{
return;
}
if (_streamMode == StreamMode.Writer)
{
var done = false;
do
{
_z.OutputBuffer = workingBuffer;
_z.NextOut = 0;
_z.AvailableBytesOut = _workingBuffer.Length;
var rc =
(_wantCompress) ? _z.Deflate(FlushType.Finish) : _z.Inflate(FlushType.Finish);
if (rc != ZlibConstants.Z_STREAM_END && rc != ZlibConstants.Z_OK)
{
var verb = (_wantCompress ? "de" : "in") + "flating";
if (_z.Message is null)
{
throw new ZlibException(String.Format("{0}: (rc = {1})", verb, rc));
}
throw new ZlibException(verb + ": " + _z.Message);
}
if (_workingBuffer.Length - _z.AvailableBytesOut > 0)
{
await _stream
.WriteAsync(
_workingBuffer,
0,
_workingBuffer.Length - _z.AvailableBytesOut,
cancellationToken
)
.ConfigureAwait(false);
}
done = _z.AvailableBytesIn == 0 && _z.AvailableBytesOut != 0;
// If GZIP and de-compress, we're done when 8 bytes remain.
if (_flavor == ZlibStreamFlavor.GZIP && !_wantCompress)
{
done = (_z.AvailableBytesIn == 8 && _z.AvailableBytesOut != 0);
}
} while (!done);
await FlushAsync(cancellationToken).ConfigureAwait(false);
// workitem 7159
if (_flavor == ZlibStreamFlavor.GZIP)
{
if (_wantCompress)
{
// Emit the GZIP trailer: CRC32 and size mod 2^32
byte[] intBuf = new byte[4];
BinaryPrimitives.WriteInt32LittleEndian(intBuf, crc.Crc32Result);
await _stream.WriteAsync(intBuf, 0, 4, cancellationToken).ConfigureAwait(false);
var c2 = (int)(crc.TotalBytesRead & 0x00000000FFFFFFFF);
BinaryPrimitives.WriteInt32LittleEndian(intBuf, c2);
await _stream.WriteAsync(intBuf, 0, 4, cancellationToken).ConfigureAwait(false);
}
else
{
throw new ZlibException("Writing with decompression is not supported.");
}
}
}
// workitem 7159
else if (_streamMode == StreamMode.Reader)
{
if (_flavor == ZlibStreamFlavor.GZIP)
{
if (!_wantCompress)
{
// workitem 8501: handle edge case (decompress empty stream)
if (_z.TotalBytesOut == 0L)
{
return;
}
// Read and potentially verify the GZIP trailer: CRC32 and size mod 2^32
byte[] trailer = new byte[8];
// workitem 8679
if (_z.AvailableBytesIn != 8)
{
// Make sure we have read to the end of the stream
_z.InputBuffer.AsSpan(_z.NextIn, _z.AvailableBytesIn).CopyTo(trailer);
var bytesNeeded = 8 - _z.AvailableBytesIn;
var bytesRead = await _stream
.ReadAsync(trailer, _z.AvailableBytesIn, bytesNeeded, cancellationToken)
.ConfigureAwait(false);
}
}
else
{
throw new ZlibException("Reading with compression is not supported.");
}
}
}
}
private void end()
{
if (z is null)
@@ -382,6 +552,38 @@ internal class ZlibBaseStream : Stream, IStreamStack
}
}
#if !NETFRAMEWORK && !NETSTANDARD2_0
public override async ValueTask DisposeAsync()
{
if (isDisposed)
{
return;
}
isDisposed = true;
#if DEBUG_STREAMS
this.DebugDispose(typeof(ZlibBaseStream));
#endif
await base.DisposeAsync().ConfigureAwait(false);
if (_stream is null)
{
return;
}
try
{
await finishAsync().ConfigureAwait(false);
}
finally
{
end();
if (_stream != null)
{
await _stream.DisposeAsync().ConfigureAwait(false);
_stream = null;
}
}
}
#endif
public override void Flush()
{
_stream.Flush();
@@ -390,6 +592,14 @@ internal class ZlibBaseStream : Stream, IStreamStack
z.AvailableBytesIn = 0;
}
public override async Task FlushAsync(CancellationToken cancellationToken)
{
await _stream.FlushAsync(cancellationToken).ConfigureAwait(false);
//rewind the buffer
((IStreamStack)this).Rewind(z.AvailableBytesIn); //unused
z.AvailableBytesIn = 0;
}
public override Int64 Seek(Int64 offset, SeekOrigin origin) =>
throw new NotSupportedException();
@@ -436,6 +646,31 @@ internal class ZlibBaseStream : Stream, IStreamStack
return _encoding.GetString(buffer, 0, buffer.Length);
}
private async Task<string> ReadZeroTerminatedStringAsync(CancellationToken cancellationToken)
{
var list = new List<byte>();
var done = false;
do
{
// workitem 7740
var n = await _stream.ReadAsync(_buf1, 0, 1, cancellationToken).ConfigureAwait(false);
if (n != 1)
{
throw new ZlibException("Unexpected EOF reading GZIP header.");
}
if (_buf1[0] == 0)
{
done = true;
}
else
{
list.Add(_buf1[0]);
}
} while (!done);
var buffer = list.ToArray();
return _encoding.GetString(buffer, 0, buffer.Length);
}
private int _ReadAndValidateGzipHeader()
{
var totalBytesRead = 0;
@@ -494,6 +729,68 @@ internal class ZlibBaseStream : Stream, IStreamStack
return totalBytesRead;
}
private async Task<int> _ReadAndValidateGzipHeaderAsync(CancellationToken cancellationToken)
{
var totalBytesRead = 0;
// read the header on the first read
byte[] header = new byte[10];
var n = await _stream.ReadAsync(header, 0, 10, cancellationToken).ConfigureAwait(false);
// workitem 8501: handle edge case (decompress empty stream)
if (n == 0)
{
return 0;
}
if (n != 10)
{
throw new ZlibException("Not a valid GZIP stream.");
}
if (header[0] != 0x1F || header[1] != 0x8B || header[2] != 8)
{
throw new ZlibException("Bad GZIP header.");
}
var timet = BinaryPrimitives.ReadInt32LittleEndian(header.AsSpan(4));
_GzipMtime = TarHeader.EPOCH.AddSeconds(timet);
totalBytesRead += n;
if ((header[3] & 0x04) == 0x04)
{
// read and discard extra field
n = await _stream.ReadAsync(header, 0, 2, cancellationToken).ConfigureAwait(false); // 2-byte length field
totalBytesRead += n;
var extraLength = (short)(header[0] + header[1] * 256);
var extra = new byte[extraLength];
n = await _stream
.ReadAsync(extra, 0, extra.Length, cancellationToken)
.ConfigureAwait(false);
if (n != extraLength)
{
throw new ZlibException("Unexpected end-of-file reading GZIP header.");
}
totalBytesRead += n;
}
if ((header[3] & 0x08) == 0x08)
{
_GzipFileName = await ReadZeroTerminatedStringAsync(cancellationToken)
.ConfigureAwait(false);
}
if ((header[3] & 0x10) == 0x010)
{
_GzipComment = await ReadZeroTerminatedStringAsync(cancellationToken)
.ConfigureAwait(false);
}
if ((header[3] & 0x02) == 0x02)
{
await _stream.ReadAsync(_buf1, 0, 1, cancellationToken).ConfigureAwait(false); // CRC16, ignore
}
return totalBytesRead;
}
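// For reference, the member layout parsed above (RFC 1952): two magic bytes
// 0x1F 0x8B, method 8 (deflate), a flag byte, a 4-byte little-endian mtime,
// XFL and OS bytes, then optional FEXTRA (0x04), FNAME (0x08), FCOMMENT (0x10)
// and FHCRC (0x02) sections gated by the corresponding flag bits.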
public override Int32 Read(Byte[] buffer, Int32 offset, Int32 count)
{
// According to MS documentation, any implementation of the IO.Stream.Read function must:
@@ -678,6 +975,220 @@ internal class ZlibBaseStream : Stream, IStreamStack
return rc;
}
public override async Task<int> ReadAsync(
byte[] buffer,
int offset,
int count,
CancellationToken cancellationToken
)
{
// According to MS documentation, any implementation of the IO.Stream.Read function must:
// (a) throw an exception if offset & count reference an invalid part of the buffer,
// or if count < 0, or if buffer is null
// (b) return 0 only upon EOF, or if count = 0
// (c) if not EOF, then return at least 1 byte, up to <count> bytes
if (_streamMode == StreamMode.Undefined)
{
if (!_stream.CanRead)
{
throw new ZlibException("The stream is not readable.");
}
// for the first read, set up some controls.
_streamMode = StreamMode.Reader;
// (The first reference to _z goes through the private accessor which
// may initialize it.)
z.AvailableBytesIn = 0;
if (_flavor == ZlibStreamFlavor.GZIP)
{
_gzipHeaderByteCount = await _ReadAndValidateGzipHeaderAsync(cancellationToken)
.ConfigureAwait(false);
// workitem 8501: handle edge case (decompress empty stream)
if (_gzipHeaderByteCount == 0)
{
return 0;
}
}
}
if (_streamMode != StreamMode.Reader)
{
throw new ZlibException("Cannot Read after Writing.");
}
var rc = 0;
// set up the output of the deflate/inflate codec:
_z.OutputBuffer = buffer;
_z.NextOut = offset;
_z.AvailableBytesOut = count;
if (count == 0)
{
return 0;
}
if (nomoreinput && _wantCompress)
{
// no more input data available; therefore we flush to
// try to complete the read
rc = _z.Deflate(FlushType.Finish);
if (rc != ZlibConstants.Z_OK && rc != ZlibConstants.Z_STREAM_END)
{
throw new ZlibException(
String.Format("Deflating: rc={0} msg={1}", rc, _z.Message)
);
}
rc = (count - _z.AvailableBytesOut);
// calculate CRC after reading
if (crc != null)
{
crc.SlurpBlock(buffer, offset, rc);
}
return rc;
}
if (buffer is null)
{
throw new ArgumentNullException(nameof(buffer));
}
if (count < 0)
{
throw new ArgumentOutOfRangeException(nameof(count));
}
if (offset < buffer.GetLowerBound(0))
{
throw new ArgumentOutOfRangeException(nameof(offset));
}
if ((offset + count) > buffer.GetLength(0))
{
throw new ArgumentOutOfRangeException(nameof(count));
}
// This is necessary in case _workingBuffer has been resized. (new byte[])
// (The first reference to _workingBuffer goes through the private accessor which
// may initialize it.)
_z.InputBuffer = workingBuffer;
do
{
// need data in _workingBuffer in order to deflate/inflate. Here, we check if we have any.
if ((_z.AvailableBytesIn == 0) && (!nomoreinput))
{
// No data available, so try to Read data from the captive stream.
_z.NextIn = 0;
_z.AvailableBytesIn = await _stream
.ReadAsync(_workingBuffer, 0, _workingBuffer.Length, cancellationToken)
.ConfigureAwait(false);
if (_z.AvailableBytesIn == 0)
{
nomoreinput = true;
}
}
// we have data in InputBuffer; now compress or decompress as appropriate
rc = (_wantCompress) ? _z.Deflate(_flushMode) : _z.Inflate(_flushMode);
if (nomoreinput && (rc == ZlibConstants.Z_BUF_ERROR))
{
return 0;
}
if (rc != ZlibConstants.Z_OK && rc != ZlibConstants.Z_STREAM_END)
{
throw new ZlibException(
String.Format(
"{0}flating: rc={1} msg={2}",
(_wantCompress ? "de" : "in"),
rc,
_z.Message
)
);
}
if (
(nomoreinput || rc == ZlibConstants.Z_STREAM_END) && (_z.AvailableBytesOut == count)
)
{
break; // nothing more to read
}
} //while (_z.AvailableBytesOut == count && rc == ZlibConstants.Z_OK);
while (_z.AvailableBytesOut > 0 && !nomoreinput && rc == ZlibConstants.Z_OK);
// workitem 8557
// is there more room in output?
if (_z.AvailableBytesOut > 0)
{
if (rc == ZlibConstants.Z_OK && _z.AvailableBytesIn == 0)
{
// deferred
}
// are we completely done reading?
if (nomoreinput)
{
// and in compression?
if (_wantCompress)
{
// no more input data available; therefore we flush to
// try to complete the read
rc = _z.Deflate(FlushType.Finish);
if (rc != ZlibConstants.Z_OK && rc != ZlibConstants.Z_STREAM_END)
{
throw new ZlibException(
String.Format("Deflating: rc={0} msg={1}", rc, _z.Message)
);
}
}
}
}
rc = (count - _z.AvailableBytesOut);
// calculate CRC after reading
if (crc != null)
{
crc.SlurpBlock(buffer, offset, rc);
}
if (rc == ZlibConstants.Z_STREAM_END && z.AvailableBytesIn != 0 && !_wantCompress)
{
//rewind the buffer
((IStreamStack)this).Rewind(z.AvailableBytesIn); //unused
z.AvailableBytesIn = 0;
}
return rc;
}
#if !NETFRAMEWORK && !NETSTANDARD2_0
public override async ValueTask<int> ReadAsync(
Memory<byte> buffer,
CancellationToken cancellationToken = default
)
{
// Use ArrayPool to rent a buffer and delegate to byte[] ReadAsync
byte[] array = System.Buffers.ArrayPool<byte>.Shared.Rent(buffer.Length);
try
{
int read = await ReadAsync(array, 0, buffer.Length, cancellationToken)
.ConfigureAwait(false);
array.AsSpan(0, read).CopyTo(buffer.Span);
return read;
}
finally
{
System.Buffers.ArrayPool<byte>.Shared.Return(array);
}
}
#endif
public override Boolean CanRead => _stream.CanRead;
public override Boolean CanSeek => _stream.CanSeek;
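The Memory&lt;byte&gt; override above lands on the array-based ReadAsync by renting a scratch array, reading into it, and copying the result back. A minimal standalone sketch of that rent/copy/return bridge follows; the helper name and the plain Stream source are illustrative, not SharpCompress API:

using System;
using System.Buffers;
using System.IO;
using System.Threading;
using System.Threading.Tasks;

static class MemoryReadBridge
{
    // Bridges a Memory<byte> read onto an array-only ReadAsync, the same
    // way the ZlibBaseStream override above does.
    public static async Task<int> ReadViaPoolAsync(
        Stream source,
        Memory<byte> destination,
        CancellationToken cancellationToken = default
    )
    {
        // Rent may return a larger array than requested; only the first
        // destination.Length bytes are used.
        var scratch = ArrayPool<byte>.Shared.Rent(destination.Length);
        try
        {
            var read = await source
                .ReadAsync(scratch, 0, destination.Length, cancellationToken)
                .ConfigureAwait(false);
            scratch.AsSpan(0, read).CopyTo(destination.Span);
            return read;
        }
        finally
        {
            ArrayPool<byte>.Shared.Return(scratch);
        }
    }
}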

View File

@@ -28,6 +28,8 @@
using System;
using System.IO;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.IO;
namespace SharpCompress.Compressors.Deflate;
@@ -266,6 +268,34 @@ public class ZlibStream : Stream, IStreamStack
_baseStream.Flush();
}
public override async Task FlushAsync(CancellationToken cancellationToken)
{
if (_disposed)
{
throw new ObjectDisposedException("ZlibStream");
}
await _baseStream.FlushAsync(cancellationToken).ConfigureAwait(false);
}
#if !NETFRAMEWORK && !NETSTANDARD2_0
public override async ValueTask DisposeAsync()
{
if (_disposed)
{
return;
}
_disposed = true;
if (_baseStream != null)
{
await _baseStream.DisposeAsync().ConfigureAwait(false);
}
#if DEBUG_STREAMS
this.DebugDispose(typeof(ZlibStream));
#endif
await base.DisposeAsync().ConfigureAwait(false);
}
#endif
/// <summary>
/// Read data from the stream.
/// </summary>
@@ -301,6 +331,36 @@ public class ZlibStream : Stream, IStreamStack
return _baseStream.Read(buffer, offset, count);
}
public override async Task<int> ReadAsync(
byte[] buffer,
int offset,
int count,
CancellationToken cancellationToken
)
{
if (_disposed)
{
throw new ObjectDisposedException("ZlibStream");
}
return await _baseStream
.ReadAsync(buffer, offset, count, cancellationToken)
.ConfigureAwait(false);
}
#if !NETFRAMEWORK && !NETSTANDARD2_0
public override async ValueTask<int> ReadAsync(
Memory<byte> buffer,
CancellationToken cancellationToken = default
)
{
if (_disposed)
{
throw new ObjectDisposedException("ZlibStream");
}
return await _baseStream.ReadAsync(buffer, cancellationToken).ConfigureAwait(false);
}
#endif
public override int ReadByte()
{
if (_disposed)
@@ -355,6 +415,36 @@ public class ZlibStream : Stream, IStreamStack
_baseStream.Write(buffer, offset, count);
}
public override async Task WriteAsync(
byte[] buffer,
int offset,
int count,
CancellationToken cancellationToken
)
{
if (_disposed)
{
throw new ObjectDisposedException("ZlibStream");
}
await _baseStream
.WriteAsync(buffer, offset, count, cancellationToken)
.ConfigureAwait(false);
}
#if !NETFRAMEWORK && !NETSTANDARD2_0
public override async ValueTask WriteAsync(
ReadOnlyMemory<byte> buffer,
CancellationToken cancellationToken = default
)
{
if (_disposed)
{
throw new ObjectDisposedException("ZlibStream");
}
await _baseStream.WriteAsync(buffer, cancellationToken).ConfigureAwait(false);
}
#endif
public override void WriteByte(byte value)
{
if (_disposed)

View File

@@ -2,12 +2,12 @@
// The .NET Foundation licenses this file to you under the MIT license.
// See the LICENSE file in the project root for more information.
#nullable disable
using System;
using System.Diagnostics;
using System.IO;
using System.Runtime.CompilerServices;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.Common;
using SharpCompress.Common.Zip;
using SharpCompress.IO;
@@ -39,7 +39,6 @@ public sealed class Deflate64Stream : Stream, IStreamStack
private const int DEFAULT_BUFFER_SIZE = 8192;
private Stream _stream;
private CompressionMode _mode;
private InflaterManaged _inflater;
private byte[] _buffer;
@@ -62,61 +61,23 @@ public sealed class Deflate64Stream : Stream, IStreamStack
throw new ArgumentException("Deflate64: input stream is not readable", nameof(stream));
}
InitializeInflater(stream, ZipCompressionMethod.Deflate64);
#if DEBUG_STREAMS
this.DebugConstruct(typeof(Deflate64Stream));
#endif
}
/// <summary>
/// Sets up this DeflateManagedStream to be used for Inflation/Decompression
/// </summary>
private void InitializeInflater(
Stream stream,
ZipCompressionMethod method = ZipCompressionMethod.Deflate
)
{
Debug.Assert(stream != null);
Debug.Assert(
method == ZipCompressionMethod.Deflate || method == ZipCompressionMethod.Deflate64
);
if (!stream.CanRead)
{
throw new ArgumentException("Deflate64: input stream is not readable", nameof(stream));
}
_inflater = new InflaterManaged(method == ZipCompressionMethod.Deflate64);
_inflater = new InflaterManaged(true);
_stream = stream;
_mode = CompressionMode.Decompress;
_buffer = new byte[DEFAULT_BUFFER_SIZE];
#if DEBUG_STREAMS
this.DebugConstruct(typeof(Deflate64Stream));
#endif
}
public override bool CanRead
{
get
{
if (_stream is null)
{
return false;
}
return (_mode == CompressionMode.Decompress && _stream.CanRead);
}
}
public override bool CanRead => _stream.CanRead;
public override bool CanWrite
{
get
{
if (_stream is null)
{
return false;
}
return (_mode == CompressionMode.Compress && _stream.CanWrite);
}
}
public override bool CanWrite => false;
public override bool CanSeek => false;
@@ -138,7 +99,6 @@ public sealed class Deflate64Stream : Stream, IStreamStack
public override int Read(byte[] array, int offset, int count)
{
EnsureDecompressionMode();
ValidateParameters(array, offset, count);
EnsureNotDisposed();
@@ -185,6 +145,106 @@ public sealed class Deflate64Stream : Stream, IStreamStack
return count - remainingCount;
}
public override async Task<int> ReadAsync(
byte[] array,
int offset,
int count,
CancellationToken cancellationToken
)
{
ValidateParameters(array, offset, count);
EnsureNotDisposed();
int bytesRead;
var currentOffset = offset;
var remainingCount = count;
while (true)
{
bytesRead = _inflater.Inflate(array, currentOffset, remainingCount);
currentOffset += bytesRead;
remainingCount -= bytesRead;
if (remainingCount == 0)
{
break;
}
if (_inflater.Finished())
{
// if we finished decompressing, we can't have anything left in the output window.
Debug.Assert(
_inflater.AvailableOutput == 0,
"We should have copied all stuff out!"
);
break;
}
var bytes = await _stream
.ReadAsync(_buffer, 0, _buffer.Length, cancellationToken)
.ConfigureAwait(false);
if (bytes <= 0)
{
break;
}
else if (bytes > _buffer.Length)
{
// The stream is either malicious or poorly implemented and returned a number of
// bytes larger than the buffer supplied to it.
throw new InvalidFormatException("Deflate64: invalid data");
}
_inflater.SetInput(_buffer, 0, bytes);
}
return count - remainingCount;
}
#if !NETFRAMEWORK && !NETSTANDARD2_0
public override async ValueTask<int> ReadAsync(
Memory<byte> buffer,
CancellationToken cancellationToken = default
)
{
EnsureNotDisposed();
// InflaterManaged doesn't have a Span-based Inflate method, so we need to work with arrays
// For large buffers, we could rent from ArrayPool, but for simplicity we'll use the buffer's array if available
if (
System.Runtime.InteropServices.MemoryMarshal.TryGetArray<byte>(
buffer,
out var arraySegment
)
)
{
// Fast path: the Memory<byte> is backed by an array
return await ReadAsync(
arraySegment.Array!,
arraySegment.Offset,
arraySegment.Count,
cancellationToken
)
.ConfigureAwait(false);
}
else
{
// Slow path: rent a temporary array
var tempBuffer = System.Buffers.ArrayPool<byte>.Shared.Rent(buffer.Length);
try
{
var bytesRead = await ReadAsync(tempBuffer, 0, buffer.Length, cancellationToken)
.ConfigureAwait(false);
tempBuffer.AsMemory(0, bytesRead).CopyTo(buffer);
return bytesRead;
}
finally
{
System.Buffers.ArrayPool<byte>.Shared.Return(tempBuffer);
}
}
}
#endif
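The fast/slow split above is a general pattern: MemoryMarshal.TryGetArray exposes the backing array of a Memory&lt;byte&gt; when one exists, so the copy can be skipped entirely. A self-contained sketch of the same split, with illustrative names:

using System;
using System.Buffers;
using System.Runtime.InteropServices;

static class MemoryAccess
{
    // Mirrors the pattern above: use the backing array directly when the
    // Memory<byte> exposes one, otherwise fall back to a rented copy.
    public static void FillWithPattern(Memory<byte> buffer, byte value)
    {
        if (MemoryMarshal.TryGetArray<byte>(buffer, out ArraySegment<byte> segment))
        {
            // Fast path: write straight into the underlying array.
            Array.Fill(segment.Array!, value, segment.Offset, segment.Count);
        }
        else
        {
            // Slow path: rent, fill, copy back, return.
            var temp = ArrayPool<byte>.Shared.Rent(buffer.Length);
            try
            {
                Array.Fill(temp, value, 0, buffer.Length);
                temp.AsSpan(0, buffer.Length).CopyTo(buffer.Span);
            }
            finally
            {
                ArrayPool<byte>.Shared.Return(temp);
            }
        }
    }
}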
private void ValidateParameters(byte[] array, int offset, int count)
{
if (array is null)
@@ -220,26 +280,6 @@ public sealed class Deflate64Stream : Stream, IStreamStack
private static void ThrowStreamClosedException() =>
throw new ObjectDisposedException(null, "Deflate64: stream has been disposed");
private void EnsureDecompressionMode()
{
if (_mode != CompressionMode.Decompress)
{
ThrowCannotReadFromDeflateManagedStreamException();
}
}
[MethodImpl(MethodImplOptions.NoInlining)]
private static void ThrowCannotReadFromDeflateManagedStreamException() =>
throw new InvalidOperationException("Deflate64: cannot read from this stream");
private void EnsureCompressionMode()
{
if (_mode != CompressionMode.Compress)
{
ThrowCannotWriteToDeflateManagedStreamException();
}
}
[MethodImpl(MethodImplOptions.NoInlining)]
private static void ThrowCannotWriteToDeflateManagedStreamException() =>
throw new InvalidOperationException("Deflate64: cannot write to this stream");
@@ -281,20 +321,17 @@ public sealed class Deflate64Stream : Stream, IStreamStack
#endif
if (disposing)
{
_stream?.Dispose();
_stream.Dispose();
}
}
finally
{
_stream = null;
try
{
_inflater?.Dispose();
_inflater.Dispose();
}
finally
{
_inflater = null;
base.Dispose(disposing);
}
}

View File

@@ -2,6 +2,8 @@ using System;
using System.IO;
using System.Security.Cryptography;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.Compressors.LZMA.Utilites;
using SharpCompress.IO;
@@ -283,5 +285,70 @@ internal sealed class AesDecoderStream : DecoderStream2, IStreamStack
return count;
}
public override async Task<int> ReadAsync(
byte[] buffer,
int offset,
int count,
CancellationToken cancellationToken = default
)
{
if (count == 0 || mWritten == mLimit)
{
return 0;
}
if (mUnderflow > 0)
{
return HandleUnderflow(buffer, offset, count);
}
// Need at least 16 bytes to proceed.
if (mEnding - mOffset < 16)
{
Buffer.BlockCopy(mBuffer, mOffset, mBuffer, 0, mEnding - mOffset);
mEnding -= mOffset;
mOffset = 0;
do
{
cancellationToken.ThrowIfCancellationRequested();
var read = await mStream
.ReadAsync(mBuffer, mEnding, mBuffer.Length - mEnding, cancellationToken)
.ConfigureAwait(false);
if (read == 0)
{
// We are not done decoding and have less than 16 bytes.
throw new EndOfStreamException();
}
mEnding += read;
} while (mEnding - mOffset < 16);
}
// We shouldn't return more data than we are limited to.
if (count > mLimit - mWritten)
{
count = (int)(mLimit - mWritten);
}
// We cannot transform less than 16 bytes into the target buffer,
// but we also cannot return zero, so we need to handle this.
if (count < 16)
{
return HandleUnderflow(buffer, offset, count);
}
if (count > mEnding - mOffset)
{
count = mEnding - mOffset;
}
// Otherwise we transform directly into the target buffer.
var processed = mDecoder.TransformBlock(mBuffer, mOffset, count & ~15, buffer, offset);
mOffset += processed;
mWritten += processed;
return processed;
}
#endregion
}
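TransformBlock above is handed count & ~15 bytes because AES operates on whole 16-byte blocks; the mask clears the low four bits, rounding down, and anything shorter than one block falls through to HandleUnderflow. A tiny sketch of that arithmetic:

static class AesBlockMath
{
    private const int BlockSize = 16; // AES block size in bytes

    // count & ~15 clears the low four bits, rounding down to a whole
    // number of 16-byte blocks: 17 -> 16, 16 -> 16, 15 -> 0 (underflow path).
    public static int RoundDownToBlock(int count) => count & ~(BlockSize - 1);
}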

View File

@@ -1,6 +1,8 @@
using System;
using System.Collections.Generic;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.IO;
namespace SharpCompress.Compressors.LZMA;
@@ -191,6 +193,18 @@ internal class Bcj2DecoderStream : DecoderStream2, IStreamStack
return count;
}
public override Task<int> ReadAsync(
byte[] buffer,
int offset,
int count,
CancellationToken cancellationToken = default
)
{
cancellationToken.ThrowIfCancellationRequested();
// Bcj2DecoderStream uses complex state machine with multiple streams
return Task.FromResult(Read(buffer, offset, count));
}
public override int ReadByte()
{
if (_mFinished)

View File

@@ -1,12 +1,13 @@
using System;
using System.IO;
using SharpCompress.Common;
namespace SharpCompress.Compressors.LZMA;
/// <summary>
/// The exception that is thrown when an error in input stream occurs during decoding.
/// </summary>
internal class DataErrorException : Exception
internal class DataErrorException : SharpCompressException
{
public DataErrorException()
: base("Data Error") { }
@@ -15,7 +16,7 @@ internal class DataErrorException : Exception
/// <summary>
/// The exception that is thrown when the value of an argument is outside the allowable range.
/// </summary>
internal class InvalidParamException : Exception
internal class InvalidParamException : SharpCompressException
{
public InvalidParamException()
: base("Invalid Parameter") { }

View File

@@ -1,11 +1,14 @@
#nullable disable
using System;
using System.Buffers;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
namespace SharpCompress.Compressors.LZMA.LZ;
internal class OutWindow
internal class OutWindow : IDisposable
{
private byte[] _buffer;
private int _windowSize;
@@ -15,19 +18,22 @@ internal class OutWindow
private int _pendingDist;
private Stream _stream;
public long _total;
public long _limit;
private long _total;
private long _limit;
public long Total => _total;
public void Create(int windowSize)
{
if (_windowSize != windowSize)
{
_buffer = new byte[windowSize];
}
else
{
_buffer[windowSize - 1] = 0;
}
if (_buffer is not null)
{
ArrayPool<byte>.Shared.Return(_buffer);
}
_buffer = ArrayPool<byte>.Shared.Rent(windowSize);
_buffer[windowSize - 1] = 0;
_windowSize = windowSize;
_pos = 0;
_streamPos = 0;
@@ -36,7 +42,22 @@ internal class OutWindow
_limit = 0;
}
public void Reset() => Create(_windowSize);
public void Dispose()
{
ReleaseStream();
if (_buffer is null)
{
return;
}
ArrayPool<byte>.Shared.Return(_buffer);
_buffer = null;
}
public void Reset()
{
ReleaseStream();
Create(_windowSize);
}
public void Init(Stream stream)
{
@@ -66,7 +87,13 @@ internal class OutWindow
_stream = null;
}
public void Flush()
public async Task ReleaseStreamAsync(CancellationToken cancellationToken = default)
{
await FlushAsync(cancellationToken).ConfigureAwait(false);
_stream = null;
}
private void Flush()
{
if (_stream is null)
{
@@ -85,6 +112,27 @@ internal class OutWindow
_streamPos = _pos;
}
private async Task FlushAsync(CancellationToken cancellationToken = default)
{
if (_stream is null)
{
return;
}
var size = _pos - _streamPos;
if (size == 0)
{
return;
}
await _stream
.WriteAsync(_buffer, _streamPos, size, cancellationToken)
.ConfigureAwait(false);
if (_pos >= _windowSize)
{
_pos = 0;
}
_streamPos = _pos;
}
public void CopyPending()
{
if (_pendingLen < 1)
@@ -105,6 +153,26 @@ internal class OutWindow
_pendingLen = rem;
}
public async Task CopyPendingAsync(CancellationToken cancellationToken = default)
{
if (_pendingLen < 1)
{
return;
}
var rem = _pendingLen;
var pos = (_pendingDist < _pos ? _pos : _pos + _windowSize) - _pendingDist - 1;
while (rem > 0 && HasSpace)
{
if (pos >= _windowSize)
{
pos = 0;
}
await PutByteAsync(_buffer[pos++], cancellationToken).ConfigureAwait(false);
rem--;
}
_pendingLen = rem;
}
public void CopyBlock(int distance, int len)
{
var rem = len;
@@ -138,6 +206,43 @@ internal class OutWindow
_pendingDist = distance;
}
public async Task CopyBlockAsync(
int distance,
int len,
CancellationToken cancellationToken = default
)
{
var rem = len;
var pos = (distance < _pos ? _pos : _pos + _windowSize) - distance - 1;
var targetSize = HasSpace ? (int)Math.Min(rem, _limit - _total) : 0;
var sizeUntilWindowEnd = Math.Min(_windowSize - _pos, _windowSize - pos);
var sizeUntilOverlap = Math.Abs(pos - _pos);
var fastSize = Math.Min(Math.Min(sizeUntilWindowEnd, sizeUntilOverlap), targetSize);
if (fastSize >= 2)
{
_buffer.AsSpan(pos, fastSize).CopyTo(_buffer.AsSpan(_pos, fastSize));
_pos += fastSize;
pos += fastSize;
_total += fastSize;
if (_pos >= _windowSize)
{
await FlushAsync(cancellationToken).ConfigureAwait(false);
}
rem -= fastSize;
}
while (rem > 0 && HasSpace)
{
if (pos >= _windowSize)
{
pos = 0;
}
await PutByteAsync(_buffer[pos++], cancellationToken).ConfigureAwait(false);
rem--;
}
_pendingLen = rem;
_pendingDist = distance;
}
public void PutByte(byte b)
{
_buffer[_pos++] = b;
@@ -148,6 +253,16 @@ internal class OutWindow
}
}
public async Task PutByteAsync(byte b, CancellationToken cancellationToken = default)
{
_buffer[_pos++] = b;
_total++;
if (_pos >= _windowSize)
{
await FlushAsync(cancellationToken).ConfigureAwait(false);
}
}
public byte GetByte(int distance)
{
var pos = _pos - distance - 1;
@@ -188,6 +303,44 @@ internal class OutWindow
return len - size;
}
public async Task<int> CopyStreamAsync(
Stream stream,
int len,
CancellationToken cancellationToken = default
)
{
var size = len;
while (size > 0 && _pos < _windowSize && _total < _limit)
{
cancellationToken.ThrowIfCancellationRequested();
var curSize = _windowSize - _pos;
if (curSize > _limit - _total)
{
curSize = (int)(_limit - _total);
}
if (curSize > size)
{
curSize = size;
}
var numReadBytes = await stream
.ReadAsync(_buffer, _pos, curSize, cancellationToken)
.ConfigureAwait(false);
if (numReadBytes == 0)
{
throw new DataErrorException();
}
size -= numReadBytes;
_pos += numReadBytes;
_total += numReadBytes;
if (_pos >= _windowSize)
{
await FlushAsync(cancellationToken).ConfigureAwait(false);
}
}
return len - size;
}
public void SetLimit(long size) => _limit = _total + size;
public bool HasSpace => _pos < _windowSize && _total < _limit;
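CopyBlock/CopyBlockAsync replay bytes the window has already emitted: the source index is computed distance + 1 bytes behind the write position, wrapping around the circular buffer, and outside the fast path the copy proceeds byte by byte so overlapping runs self-extend. A minimal sketch of that wraparound copy, independent of OutWindow's flushing:

static class WindowCopy
{
    // Copies 'len' bytes starting 'distance + 1' bytes behind 'pos' inside
    // a circular window, byte by byte so overlapping runs self-extend.
    public static int CopyBack(byte[] window, int pos, int distance, int len)
    {
        var src = (distance < pos ? pos : pos + window.Length) - distance - 1;
        for (var i = 0; i < len; i++)
        {
            if (src >= window.Length) src = 0;
            if (pos >= window.Length) pos = 0;
            window[pos++] = window[src++];
        }
        return pos; // new write position
    }
}

With distance 0 and len 5 this repeats the last written byte five times, which is exactly how LZ77-style runs are expressed.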

View File

@@ -1,6 +1,8 @@
using System;
using System.Buffers.Binary;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.Common;
using SharpCompress.Crypto;
using SharpCompress.IO;
@@ -157,6 +159,11 @@ public sealed class LZipStream : Stream, IStreamStack
#if !NETFRAMEWORK && !NETSTANDARD2_0
public override ValueTask<int> ReadAsync(
Memory<byte> buffer,
CancellationToken cancellationToken = default
) => _stream.ReadAsync(buffer, cancellationToken);
public override int Read(Span<byte> buffer) => _stream.Read(buffer);
public override void Write(ReadOnlySpan<byte> buffer)
@@ -179,6 +186,25 @@ public sealed class LZipStream : Stream, IStreamStack
++_writeCount;
}
public override Task<int> ReadAsync(
byte[] buffer,
int offset,
int count,
CancellationToken cancellationToken = default
) => _stream.ReadAsync(buffer, offset, count, cancellationToken);
public override async Task WriteAsync(
byte[] buffer,
int offset,
int count,
CancellationToken cancellationToken
)
{
cancellationToken.ThrowIfCancellationRequested();
await _stream.WriteAsync(buffer, offset, count, cancellationToken).ConfigureAwait(false);
_writeCount += count;
}
#endregion
/// <summary>

View File

@@ -1,6 +1,7 @@
#nullable disable
using System;
using System.Diagnostics.CodeAnalysis;
using System.IO;
using SharpCompress.Compressors.LZMA.LZ;
using SharpCompress.Compressors.LZMA.RangeCoder;
@@ -199,6 +200,9 @@ public class Decoder : ICoder, ISetDecoderProperties // ,System.IO.Stream
}
}
#if !NETFRAMEWORK && !NETSTANDARD2_0
[MemberNotNull(nameof(_outWindow))]
#endif
private void CreateDictionary()
{
if (_dictionarySize < 0)
@@ -294,7 +298,7 @@ public class Decoder : ICoder, ISetDecoderProperties // ,System.IO.Stream
}
else
{
_outWindow.SetLimit(long.MaxValue - _outWindow._total);
_outWindow.SetLimit(long.MaxValue - _outWindow.Total);
}
var rangeDecoder = new RangeCoder.Decoder();
@@ -305,6 +309,43 @@ public class Decoder : ICoder, ISetDecoderProperties // ,System.IO.Stream
_outWindow.ReleaseStream();
rangeDecoder.ReleaseStream();
_outWindow.Dispose();
_outWindow = null;
}
public async System.Threading.Tasks.Task CodeAsync(
Stream inStream,
Stream outStream,
long inSize,
long outSize,
ICodeProgress progress,
System.Threading.CancellationToken cancellationToken = default
)
{
if (_outWindow is null)
{
CreateDictionary();
}
_outWindow.Init(outStream);
if (outSize > 0)
{
_outWindow.SetLimit(outSize);
}
else
{
_outWindow.SetLimit(long.MaxValue - _outWindow.Total);
}
var rangeDecoder = new RangeCoder.Decoder();
rangeDecoder.Init(inStream);
await CodeAsync(_dictionarySize, _outWindow, rangeDecoder, cancellationToken)
.ConfigureAwait(false);
await _outWindow.ReleaseStreamAsync(cancellationToken).ConfigureAwait(false);
rangeDecoder.ReleaseStream();
_outWindow.Dispose();
_outWindow = null;
}
@@ -316,7 +357,7 @@ public class Decoder : ICoder, ISetDecoderProperties // ,System.IO.Stream
while (outWindow.HasSpace)
{
var posState = (uint)outWindow._total & _posStateMask;
var posState = (uint)outWindow.Total & _posStateMask;
if (
_isMatchDecoders[(_state._index << Base.K_NUM_POS_STATES_BITS_MAX) + posState]
.Decode(rangeDecoder) == 0
@@ -328,18 +369,14 @@ public class Decoder : ICoder, ISetDecoderProperties // ,System.IO.Stream
{
b = _literalDecoder.DecodeWithMatchByte(
rangeDecoder,
(uint)outWindow._total,
(uint)outWindow.Total,
prevByte,
outWindow.GetByte((int)_rep0)
);
}
else
{
b = _literalDecoder.DecodeNormal(
rangeDecoder,
(uint)outWindow._total,
prevByte
);
b = _literalDecoder.DecodeNormal(rangeDecoder, (uint)outWindow.Total, prevByte);
}
outWindow.PutByte(b);
_state.UpdateChar();
@@ -424,7 +461,7 @@ public class Decoder : ICoder, ISetDecoderProperties // ,System.IO.Stream
_rep0 = posSlot;
}
}
if (_rep0 >= outWindow._total || _rep0 >= dictionarySizeCheck)
if (_rep0 >= outWindow.Total || _rep0 >= dictionarySizeCheck)
{
if (_rep0 == 0xFFFFFFFF)
{
@@ -438,6 +475,143 @@ public class Decoder : ICoder, ISetDecoderProperties // ,System.IO.Stream
return false;
}
internal async System.Threading.Tasks.Task<bool> CodeAsync(
int dictionarySize,
OutWindow outWindow,
RangeCoder.Decoder rangeDecoder,
System.Threading.CancellationToken cancellationToken = default
)
{
var dictionarySizeCheck = Math.Max(dictionarySize, 1);
await outWindow.CopyPendingAsync(cancellationToken).ConfigureAwait(false);
while (outWindow.HasSpace)
{
cancellationToken.ThrowIfCancellationRequested();
var posState = (uint)outWindow.Total & _posStateMask;
if (
_isMatchDecoders[(_state._index << Base.K_NUM_POS_STATES_BITS_MAX) + posState]
.Decode(rangeDecoder) == 0
)
{
byte b;
var prevByte = outWindow.GetByte(0);
if (!_state.IsCharState())
{
b = _literalDecoder.DecodeWithMatchByte(
rangeDecoder,
(uint)outWindow.Total,
prevByte,
outWindow.GetByte((int)_rep0)
);
}
else
{
b = _literalDecoder.DecodeNormal(rangeDecoder, (uint)outWindow.Total, prevByte);
}
await outWindow.PutByteAsync(b, cancellationToken).ConfigureAwait(false);
_state.UpdateChar();
}
else
{
uint len;
if (_isRepDecoders[_state._index].Decode(rangeDecoder) == 1)
{
if (_isRepG0Decoders[_state._index].Decode(rangeDecoder) == 0)
{
if (
_isRep0LongDecoders[
(_state._index << Base.K_NUM_POS_STATES_BITS_MAX) + posState
]
.Decode(rangeDecoder) == 0
)
{
_state.UpdateShortRep();
await outWindow
.PutByteAsync(outWindow.GetByte((int)_rep0), cancellationToken)
.ConfigureAwait(false);
continue;
}
}
else
{
uint distance;
if (_isRepG1Decoders[_state._index].Decode(rangeDecoder) == 0)
{
distance = _rep1;
}
else
{
if (_isRepG2Decoders[_state._index].Decode(rangeDecoder) == 0)
{
distance = _rep2;
}
else
{
distance = _rep3;
_rep3 = _rep2;
}
_rep2 = _rep1;
}
_rep1 = _rep0;
_rep0 = distance;
}
len = _repLenDecoder.Decode(rangeDecoder, posState) + Base.K_MATCH_MIN_LEN;
_state.UpdateRep();
}
else
{
_rep3 = _rep2;
_rep2 = _rep1;
_rep1 = _rep0;
len = Base.K_MATCH_MIN_LEN + _lenDecoder.Decode(rangeDecoder, posState);
_state.UpdateMatch();
var posSlot = _posSlotDecoder[Base.GetLenToPosState(len)].Decode(rangeDecoder);
if (posSlot >= Base.K_START_POS_MODEL_INDEX)
{
var numDirectBits = (int)((posSlot >> 1) - 1);
_rep0 = ((2 | (posSlot & 1)) << numDirectBits);
if (posSlot < Base.K_END_POS_MODEL_INDEX)
{
_rep0 += BitTreeDecoder.ReverseDecode(
_posDecoders,
_rep0 - posSlot - 1,
rangeDecoder,
numDirectBits
);
}
else
{
_rep0 += (
rangeDecoder.DecodeDirectBits(numDirectBits - Base.K_NUM_ALIGN_BITS)
<< Base.K_NUM_ALIGN_BITS
);
_rep0 += _posAlignDecoder.ReverseDecode(rangeDecoder);
}
}
else
{
_rep0 = posSlot;
}
}
if (_rep0 >= outWindow.Total || _rep0 >= dictionarySizeCheck)
{
if (_rep0 == 0xFFFFFFFF)
{
return true;
}
throw new DataErrorException();
}
await outWindow
.CopyBlockAsync((int)_rep0, (int)len, cancellationToken)
.ConfigureAwait(false);
}
}
return false;
}
public void SetDecoderProperties(byte[] properties)
{
if (properties.Length < 1)
@@ -473,29 +647,4 @@ public class Decoder : ICoder, ISetDecoderProperties // ,System.IO.Stream
}
_outWindow.Train(stream);
}
/*
public override bool CanRead { get { return true; }}
public override bool CanWrite { get { return true; }}
public override bool CanSeek { get { return true; }}
public override long Length { get { return 0; }}
public override long Position
{
get { return 0; }
set { }
}
public override void Flush() { }
public override int Read(byte[] buffer, int offset, int count)
{
return 0;
}
public override void Write(byte[] buffer, int offset, int count)
{
}
public override long Seek(long offset, System.IO.SeekOrigin origin)
{
return 0;
}
public override void SetLength(long value) {}
*/
}
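The _rep0.._rep3 shuffling in Code/CodeAsync is a four-slot move-to-front cache of recent match distances: reusing a slot promotes it, and a fresh distance pushes everything back. Isolated, the bookkeeping looks like this (a sketch; the decoder inlines it across branches):

static class RepDistances
{
    // Promote the distance in 'slot' (0..3) to the front, shifting the
    // others back, the same bookkeeping CodeAsync performs with _rep0.._rep3.
    public static void Promote(uint[] reps, int slot)
    {
        var distance = reps[slot];
        for (var i = slot; i > 0; i--)
        {
            reps[i] = reps[i - 1];
        }
        reps[0] = distance;
    }

    // A brand-new match distance pushes every slot back by one.
    public static void Insert(uint[] reps, uint distance)
    {
        reps[3] = reps[2];
        reps[2] = reps[1];
        reps[1] = reps[0];
        reps[0] = distance;
    }
}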

View File

@@ -3,6 +3,8 @@
using System;
using System.Buffers.Binary;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.Compressors.LZMA.LZ;
using SharpCompress.IO;
@@ -178,6 +180,7 @@ public class LzmaStream : Stream, IStreamStack
_position = _encoder.Code(null, true);
}
_inputStream?.Dispose();
_outWindow.Dispose();
}
base.Dispose(disposing);
}
@@ -422,6 +425,82 @@ public class LzmaStream : Stream, IStreamStack
}
}
private async Task DecodeChunkHeaderAsync(CancellationToken cancellationToken = default)
{
var controlBuffer = new byte[1];
await _inputStream.ReadAsync(controlBuffer, 0, 1, cancellationToken).ConfigureAwait(false);
var control = controlBuffer[0];
_inputPosition++;
if (control == 0x00)
{
_endReached = true;
return;
}
if (control >= 0xE0 || control == 0x01)
{
_needProps = true;
_needDictReset = false;
_outWindow.Reset();
}
else if (_needDictReset)
{
throw new DataErrorException();
}
if (control >= 0x80)
{
_uncompressedChunk = false;
_availableBytes = (control & 0x1F) << 16;
var buffer = new byte[2];
await _inputStream.ReadAsync(buffer, 0, 2, cancellationToken).ConfigureAwait(false);
_availableBytes += (buffer[0] << 8) + buffer[1] + 1;
_inputPosition += 2;
await _inputStream.ReadAsync(buffer, 0, 2, cancellationToken).ConfigureAwait(false);
_rangeDecoderLimit = (buffer[0] << 8) + buffer[1] + 1;
_inputPosition += 2;
if (control >= 0xC0)
{
_needProps = false;
await _inputStream
.ReadAsync(controlBuffer, 0, 1, cancellationToken)
.ConfigureAwait(false);
Properties[0] = controlBuffer[0];
_inputPosition++;
_decoder = new Decoder();
_decoder.SetDecoderProperties(Properties);
}
else if (_needProps)
{
throw new DataErrorException();
}
else if (control >= 0xA0)
{
_decoder = new Decoder();
_decoder.SetDecoderProperties(Properties);
}
_rangeDecoder.Init(_inputStream);
}
else if (control > 0x02)
{
throw new DataErrorException();
}
else
{
_uncompressedChunk = true;
var buffer = new byte[2];
await _inputStream.ReadAsync(buffer, 0, 2, cancellationToken).ConfigureAwait(false);
_availableBytes = (buffer[0] << 8) + buffer[1] + 1;
_inputPosition += 2;
}
}
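DecodeChunkHeaderAsync follows LZMA2 chunk framing: control 0x00 ends the stream, 0x01/0x02 introduce uncompressed chunks, and controls >= 0x80 pack the top five bits of (unpackedSize - 1) plus a two-bit reset level. A condensed sketch of just the control-byte decoding (the Describe helper is illustrative):

using System;

static class Lzma2Control
{
    public static string Describe(byte control)
    {
        if (control == 0x00) return "end of stream";
        if (control == 0x01) return "uncompressed chunk, dictionary reset";
        if (control == 0x02) return "uncompressed chunk, no reset";
        if (control < 0x80) throw new InvalidOperationException("invalid control byte");

        // Bits 0..4 are the high bits of (unpackedSize - 1), completed by
        // the next two header bytes; bits 5..6 select the reset level.
        var sizeHigh = (control & 0x1F) << 16;
        var resetLevel = (control >> 5) & 0x03; // 0 none, 1 state, 2 state+props, 3 +dict
        return $"compressed chunk, sizeHigh={sizeHigh}, reset level {resetLevel}";
    }
}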
public override long Seek(long offset, SeekOrigin origin) => throw new NotSupportedException();
public override void SetLength(long value) => throw new NotSupportedException();
@@ -434,5 +513,128 @@ public class LzmaStream : Stream, IStreamStack
}
}
public override async Task<int> ReadAsync(
byte[] buffer,
int offset,
int count,
CancellationToken cancellationToken
)
{
if (_endReached)
{
return 0;
}
var total = 0;
while (total < count)
{
cancellationToken.ThrowIfCancellationRequested();
if (_availableBytes == 0)
{
if (_isLzma2)
{
await DecodeChunkHeaderAsync(cancellationToken).ConfigureAwait(false);
}
else
{
_endReached = true;
}
if (_endReached)
{
break;
}
}
var toProcess = count - total;
if (toProcess > _availableBytes)
{
toProcess = (int)_availableBytes;
}
_outWindow.SetLimit(toProcess);
if (_uncompressedChunk)
{
_inputPosition += await _outWindow
.CopyStreamAsync(_inputStream, toProcess, cancellationToken)
.ConfigureAwait(false);
}
else if (
await _decoder
.CodeAsync(_dictionarySize, _outWindow, _rangeDecoder, cancellationToken)
.ConfigureAwait(false)
&& _outputSize < 0
)
{
_availableBytes = _outWindow.AvailableBytes;
}
var read = _outWindow.Read(buffer, offset, toProcess);
total += read;
offset += read;
_position += read;
_availableBytes -= read;
if (_availableBytes == 0 && !_uncompressedChunk)
{
if (
!_rangeDecoder.IsFinished
|| (_rangeDecoderLimit >= 0 && _rangeDecoder._total != _rangeDecoderLimit)
)
{
_outWindow.SetLimit(toProcess + 1);
if (
!await _decoder
.CodeAsync(
_dictionarySize,
_outWindow,
_rangeDecoder,
cancellationToken
)
.ConfigureAwait(false)
)
{
_rangeDecoder.ReleaseStream();
throw new DataErrorException();
}
}
_rangeDecoder.ReleaseStream();
_inputPosition += _rangeDecoder._total;
if (_outWindow.HasPending)
{
throw new DataErrorException();
}
}
}
if (_endReached)
{
if (_inputSize >= 0 && _inputPosition != _inputSize)
{
throw new DataErrorException();
}
if (_outputSize >= 0 && _position != _outputSize)
{
throw new DataErrorException();
}
}
return total;
}
public override Task WriteAsync(
byte[] buffer,
int offset,
int count,
CancellationToken cancellationToken
)
{
cancellationToken.ThrowIfCancellationRequested();
Write(buffer, offset, count);
return Task.CompletedTask;
}
public byte[] Properties { get; } = new byte[5];
}
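A usage sketch of the async read path added above, assuming an LzmaStream opened for decoding; the 5-byte property block is the standard LZMA header field, and the buffer size is arbitrary:

using System.IO;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.Compressors.LZMA;

static class LzmaAsyncCopy
{
    // Drains an LZMA stream asynchronously using the ReadAsync override
    // added above, writing the decompressed bytes through to 'destination'.
    public static async Task DecompressAsync(
        byte[] properties,
        Stream compressed,
        Stream destination,
        CancellationToken cancellationToken = default
    )
    {
        using var lzma = new LzmaStream(properties, compressed);
        var buffer = new byte[81920];
        int read;
        while (
            (read = await lzma.ReadAsync(buffer, 0, buffer.Length, cancellationToken)) > 0
        )
        {
            await destination.WriteAsync(buffer, 0, read, cancellationToken);
        }
    }
}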

View File

@@ -1,5 +1,7 @@
using System;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.IO;
namespace SharpCompress.Compressors.LZMA.Utilites;
@@ -101,4 +103,22 @@ internal class CrcBuilderStream : Stream, IStreamStack
_mCrc = Crc.Update(_mCrc, buffer, offset, count);
_mTarget.Write(buffer, offset, count);
}
public override async Task WriteAsync(
byte[] buffer,
int offset,
int count,
CancellationToken cancellationToken = default
)
{
cancellationToken.ThrowIfCancellationRequested();
if (_mFinished)
{
throw new InvalidOperationException("CRC calculation has been finished.");
}
Processed += count;
_mCrc = Crc.Update(_mCrc, buffer, offset, count);
await _mTarget.WriteAsync(buffer, offset, count, cancellationToken).ConfigureAwait(false);
}
}

View File

@@ -1,6 +1,9 @@
using System;
using System.Buffers;
using System.Diagnostics;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
namespace SharpCompress.Compressors.LZMA.Utilites;
@@ -11,7 +14,7 @@ public class CrcCheckStream : Stream
private uint _mCurrentCrc;
private bool _mClosed;
private readonly long[] _mBytes = new long[256];
private readonly long[] _mBytes = ArrayPool<long>.Shared.Rent(256);
private long _mLength;
public CrcCheckStream(uint crc)
@@ -65,6 +68,7 @@ public class CrcCheckStream : Stream
finally
{
base.Dispose(disposing);
ArrayPool<long>.Shared.Return(_mBytes);
}
}
@@ -101,4 +105,16 @@ public class CrcCheckStream : Stream
_mCurrentCrc = Crc.Update(_mCurrentCrc, buffer, offset, count);
}
public override Task WriteAsync(
byte[] buffer,
int offset,
int count,
CancellationToken cancellationToken
)
{
cancellationToken.ThrowIfCancellationRequested();
Write(buffer, offset, count);
return Task.CompletedTask;
}
}

View File

@@ -1,13 +1,13 @@
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using SharpCompress.IO;
namespace SharpCompress.Compressors.RLE90
{
/// <summary>
/// Real-time streaming RLE90 decompression stream.
/// Decompresses bytes on demand without buffering the entire file in memory.
/// </summary>
public class RunLength90Stream : Stream, IStreamStack
{
#if DEBUG_STREAMS
@@ -31,13 +31,19 @@ namespace SharpCompress.Compressors.RLE90
void IStreamStack.SetPosition(long position) { }
private readonly Stream _stream;
private readonly int _compressedSize;
private int _bytesReadFromSource;
private const byte DLE = 0x90;
private int _compressedSize;
private bool _processed = false;
private bool _inDleMode;
private byte _lastByte;
private int _repeatCount;
private bool _endOfCompressedData;
public RunLength90Stream(Stream stream, int compressedSize)
{
_stream = stream;
_stream = stream ?? throw new ArgumentNullException(nameof(stream));
_compressedSize = compressedSize;
#if DEBUG_STREAMS
this.DebugConstruct(typeof(RunLength90Stream));
@@ -53,44 +59,93 @@ namespace SharpCompress.Compressors.RLE90
}
public override bool CanRead => true;
public override bool CanSeek => false;
public override bool CanWrite => false;
public override long Length => throw new NotImplementedException();
public override long Length => throw new NotSupportedException();
public override long Position
{
get => _stream.Position;
set => throw new NotImplementedException();
get => throw new NotSupportedException();
set => throw new NotSupportedException();
}
public override void Flush() => throw new NotImplementedException();
public override void Flush() => throw new NotSupportedException();
public override long Seek(long offset, SeekOrigin origin) =>
throw new NotSupportedException();
public override void SetLength(long value) => throw new NotSupportedException();
public override void Write(byte[] buffer, int offset, int count) =>
throw new NotSupportedException();
public override int Read(byte[] buffer, int offset, int count)
{
if (_processed)
{
return 0;
}
if (buffer == null)
throw new ArgumentNullException(nameof(buffer));
if (offset < 0 || count < 0 || offset + count > buffer.Length)
throw new ArgumentOutOfRangeException();
int bytesWritten = 0;
while (bytesWritten < count && !_endOfCompressedData)
{
// Handle pending repeat bytes first
if (_repeatCount > 0)
{
int toWrite = Math.Min(_repeatCount, count - bytesWritten);
for (int i = 0; i < toWrite; i++)
{
buffer[offset + bytesWritten++] = _lastByte;
}
_repeatCount -= toWrite;
continue;
}
// Try to read the next byte from compressed data
if (_bytesReadFromSource >= _compressedSize)
{
_endOfCompressedData = true;
break;
}
int next = _stream.ReadByte();
if (next == -1)
{
_endOfCompressedData = true;
break;
}
_bytesReadFromSource++;
byte c = (byte)next;
if (_inDleMode)
{
_inDleMode = false;
if (c == 0)
{
buffer[offset + bytesWritten++] = DLE;
_lastByte = DLE;
}
else
{
_repeatCount = c - 1;
// We'll handle these repeats in the next loop iteration.
}
}
else if (c == DLE)
{
_inDleMode = true;
}
else
{
buffer[offset + bytesWritten++] = c;
_lastByte = c;
}
}
_processed = true;
using var binaryReader = new BinaryReader(_stream);
byte[] compressedBuffer = binaryReader.ReadBytes(_compressedSize);
var unpacked = RLE.UnpackRLE(compressedBuffer);
unpacked.CopyTo(buffer);
return unpacked.Count;
return bytesWritten;
}
public override long Seek(long offset, SeekOrigin origin) =>
throw new NotImplementedException();
public override void SetLength(long value) => throw new NotImplementedException();
public override void Write(byte[] buffer, int offset, int count) =>
throw new NotImplementedException();
}
}
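For reference, the same RLE90 rules as a pure function: 0x90 0x00 emits a literal 0x90, and 0x90 n repeats the previously emitted byte so it appears n times in total (n - 1 extra copies). A sketch operating on an in-memory buffer rather than a stream:

using System;
using System.Collections.Generic;

static class Rle90
{
    private const byte Dle = 0x90;

    // Pure-function version of the streaming decoder above.
    public static byte[] Decode(ReadOnlySpan<byte> packed)
    {
        var output = new List<byte>();
        byte last = 0;
        for (var i = 0; i < packed.Length; i++)
        {
            var c = packed[i];
            if (c != Dle)
            {
                output.Add(c);
                last = c;
                continue;
            }
            if (++i >= packed.Length) break; // truncated escape sequence
            var n = packed[i];
            if (n == 0)
            {
                output.Add(Dle); // escaped literal 0x90
                last = Dle;
            }
            else
            {
                for (var r = 0; r < n - 1; r++) output.Add(last);
            }
        }
        return output.ToArray();
    }
}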

View File

@@ -1,4 +1,6 @@
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.Common.Rar.Headers;
namespace SharpCompress.Compressors.Rar;
@@ -8,6 +10,14 @@ internal interface IRarUnpack
void DoUnpack(FileHeader fileHeader, Stream readStream, Stream writeStream);
void DoUnpack();
Task DoUnpackAsync(
FileHeader fileHeader,
Stream readStream,
Stream writeStream,
CancellationToken cancellationToken
);
Task DoUnpackAsync(CancellationToken cancellationToken);
// eg u/i pause/resume button
bool Suspended { get; set; }

View File

@@ -150,6 +150,136 @@ internal sealed class MultiVolumeReadOnlyStream : Stream, IStreamStack
return totalRead;
}
public override async System.Threading.Tasks.Task<int> ReadAsync(
byte[] buffer,
int offset,
int count,
System.Threading.CancellationToken cancellationToken
)
{
var totalRead = 0;
var currentOffset = offset;
var currentCount = count;
while (currentCount > 0)
{
var readSize = currentCount;
if (currentCount > maxPosition - currentPosition)
{
readSize = (int)(maxPosition - currentPosition);
}
var read = await currentStream
.ReadAsync(buffer, currentOffset, readSize, cancellationToken)
.ConfigureAwait(false);
if (read < 0)
{
throw new EndOfStreamException();
}
currentPosition += read;
currentOffset += read;
currentCount -= read;
totalRead += read;
if (
((maxPosition - currentPosition) == 0)
&& filePartEnumerator.Current.FileHeader.IsSplitAfter
)
{
if (filePartEnumerator.Current.FileHeader.R4Salt != null)
{
throw new InvalidFormatException(
"SharpCompress currently does not support multi-volume decryption."
);
}
var fileName = filePartEnumerator.Current.FileHeader.FileName;
if (!filePartEnumerator.MoveNext())
{
throw new InvalidFormatException(
"Multi-part rar file is incomplete. Entry expects a new volume: "
+ fileName
);
}
InitializeNextFilePart();
}
else
{
break;
}
}
currentPartTotalReadBytes += totalRead;
currentEntryTotalReadBytes += totalRead;
streamListener.FireCompressedBytesRead(
currentPartTotalReadBytes,
currentEntryTotalReadBytes
);
return totalRead;
}
#if NETCOREAPP2_1_OR_GREATER || NETSTANDARD2_1_OR_GREATER
public override async System.Threading.Tasks.ValueTask<int> ReadAsync(
Memory<byte> buffer,
System.Threading.CancellationToken cancellationToken = default
)
{
var totalRead = 0;
var currentOffset = 0;
var currentCount = buffer.Length;
while (currentCount > 0)
{
var readSize = currentCount;
if (currentCount > maxPosition - currentPosition)
{
readSize = (int)(maxPosition - currentPosition);
}
var read = await currentStream
.ReadAsync(buffer.Slice(currentOffset, readSize), cancellationToken)
.ConfigureAwait(false);
if (read < 0)
{
throw new EndOfStreamException();
}
currentPosition += read;
currentOffset += read;
currentCount -= read;
totalRead += read;
if (
((maxPosition - currentPosition) == 0)
&& filePartEnumerator.Current.FileHeader.IsSplitAfter
)
{
if (filePartEnumerator.Current.FileHeader.R4Salt != null)
{
throw new InvalidFormatException(
"SharpCompress currently does not support multi-volume decryption."
);
}
var fileName = filePartEnumerator.Current.FileHeader.FileName;
if (!filePartEnumerator.MoveNext())
{
throw new InvalidFormatException(
"Multi-part rar file is incomplete. Entry expects a new volume: "
+ fileName
);
}
InitializeNextFilePart();
}
else
{
break;
}
}
currentPartTotalReadBytes += totalRead;
currentEntryTotalReadBytes += totalRead;
streamListener.FireCompressedBytesRead(
currentPartTotalReadBytes,
currentEntryTotalReadBytes
);
return totalRead;
}
#endif
public override bool CanRead => true;
public override bool CanSeek => false;

View File

@@ -1,6 +1,8 @@
using System;
using System.IO;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.Common;
using SharpCompress.Common.Rar.Headers;
using SharpCompress.IO;
@@ -103,7 +105,7 @@ internal class RarBLAKE2spStream : RarStream, IStreamStack
byte[] _hash = { };
public RarBLAKE2spStream(
private RarBLAKE2spStream(
IRarUnpack unpack,
FileHeader fileHeader,
MultiVolumeReadOnlyStream readStream
@@ -121,6 +123,29 @@ internal class RarBLAKE2spStream : RarStream, IStreamStack
ResetCrc();
}
public static RarBLAKE2spStream Create(
IRarUnpack unpack,
FileHeader fileHeader,
MultiVolumeReadOnlyStream readStream
)
{
var stream = new RarBLAKE2spStream(unpack, fileHeader, readStream);
stream.Initialize();
return stream;
}
public static async Task<RarBLAKE2spStream> CreateAsync(
IRarUnpack unpack,
FileHeader fileHeader,
MultiVolumeReadOnlyStream readStream,
CancellationToken cancellationToken = default
)
{
var stream = new RarBLAKE2spStream(unpack, fileHeader, readStream);
await stream.InitializeAsync(cancellationToken).ConfigureAwait(false);
return stream;
}
protected override void Dispose(bool disposing)
{
#if DEBUG_STREAMS
@@ -333,4 +358,59 @@ internal class RarBLAKE2spStream : RarStream, IStreamStack
return result;
}
public override async System.Threading.Tasks.Task<int> ReadAsync(
byte[] buffer,
int offset,
int count,
System.Threading.CancellationToken cancellationToken
)
{
var result = await base.ReadAsync(buffer, offset, count, cancellationToken)
.ConfigureAwait(false);
if (result != 0)
{
Update(_blake2sp, new ReadOnlySpan<byte>(buffer, offset, result), result);
}
else
{
_hash = Final(_blake2sp);
if (!disableCRCCheck && !(GetCrc().SequenceEqual(readStream.CurrentCrc)) && count != 0)
{
// NOTE: we use the last FileHeader in a multipart volume to check CRC
throw new InvalidFormatException("file crc mismatch");
}
}
return result;
}
#if NETCOREAPP2_1_OR_GREATER || NETSTANDARD2_1_OR_GREATER
public override async System.Threading.Tasks.ValueTask<int> ReadAsync(
Memory<byte> buffer,
System.Threading.CancellationToken cancellationToken = default
)
{
var result = await base.ReadAsync(buffer, cancellationToken).ConfigureAwait(false);
if (result != 0)
{
Update(_blake2sp, buffer.Span.Slice(0, result), result);
}
else
{
_hash = Final(_blake2sp);
if (
!disableCRCCheck
&& !(GetCrc().SequenceEqual(readStream.CurrentCrc))
&& buffer.Length != 0
)
{
// NOTE: we use the last FileHeader in a multipart volume to check CRC
throw new InvalidFormatException("file crc mismatch");
}
}
return result;
}
#endif
}

View File

@@ -1,5 +1,7 @@
using System;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.Common;
using SharpCompress.Common.Rar.Headers;
using SharpCompress.IO;
@@ -31,7 +33,7 @@ internal class RarCrcStream : RarStream, IStreamStack
private uint currentCrc;
private readonly bool disableCRC;
public RarCrcStream(
private RarCrcStream(
IRarUnpack unpack,
FileHeader fileHeader,
MultiVolumeReadOnlyStream readStream
@@ -46,6 +48,29 @@ internal class RarCrcStream : RarStream, IStreamStack
ResetCrc();
}
public static RarCrcStream Create(
IRarUnpack unpack,
FileHeader fileHeader,
MultiVolumeReadOnlyStream readStream
)
{
var stream = new RarCrcStream(unpack, fileHeader, readStream);
stream.Initialize();
return stream;
}
public static async Task<RarCrcStream> CreateAsync(
IRarUnpack unpack,
FileHeader fileHeader,
MultiVolumeReadOnlyStream readStream,
CancellationToken cancellationToken = default
)
{
var stream = new RarCrcStream(unpack, fileHeader, readStream);
await stream.InitializeAsync(cancellationToken).ConfigureAwait(false);
return stream;
}
protected override void Dispose(bool disposing)
{
#if DEBUG_STREAMS
@@ -77,4 +102,56 @@ internal class RarCrcStream : RarStream, IStreamStack
return result;
}
public override async System.Threading.Tasks.Task<int> ReadAsync(
byte[] buffer,
int offset,
int count,
System.Threading.CancellationToken cancellationToken
)
{
var result = await base.ReadAsync(buffer, offset, count, cancellationToken)
.ConfigureAwait(false);
if (result != 0)
{
currentCrc = RarCRC.CheckCrc(currentCrc, buffer, offset, result);
}
else if (
!disableCRC
&& GetCrc() != BitConverter.ToUInt32(readStream.CurrentCrc, 0)
&& count != 0
)
{
// NOTE: we use the last FileHeader in a multipart volume to check CRC
throw new InvalidFormatException("file crc mismatch");
}
return result;
}
#if NETCOREAPP2_1_OR_GREATER || NETSTANDARD2_1_OR_GREATER
public override async System.Threading.Tasks.ValueTask<int> ReadAsync(
Memory<byte> buffer,
System.Threading.CancellationToken cancellationToken = default
)
{
cancellationToken.ThrowIfCancellationRequested();
var result = await base.ReadAsync(buffer, cancellationToken).ConfigureAwait(false);
if (result != 0)
{
currentCrc = RarCRC.CheckCrc(currentCrc, buffer.Span, 0, result);
}
else if (
!disableCRC
&& GetCrc() != BitConverter.ToUInt32(readStream.CurrentCrc, 0)
&& buffer.Length != 0
)
{
// NOTE: we use the last FileHeader in a multipart volume to check CRC
throw new InvalidFormatException("file crc mismatch");
}
return result;
}
#endif
}

View File

@@ -3,6 +3,8 @@
using System;
using System.Buffers;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using SharpCompress.Common.Rar.Headers;
using SharpCompress.IO;
@@ -56,13 +58,24 @@ internal class RarStream : Stream, IStreamStack
#if DEBUG_STREAMS
this.DebugConstruct(typeof(RarStream));
#endif
}
public void Initialize()
{
fetch = true;
unpack.DoUnpack(fileHeader, readStream, this);
fetch = false;
_position = 0;
}
public async Task InitializeAsync(CancellationToken cancellationToken = default)
{
fetch = true;
await unpack.DoUnpackAsync(fileHeader, readStream, this, cancellationToken).ConfigureAwait(false);
fetch = false;
_position = 0;
}
protected override void Dispose(bool disposing)
{
if (!isDisposed)
@@ -131,6 +144,73 @@ internal class RarStream : Stream, IStreamStack
return outTotal;
}
public override async System.Threading.Tasks.Task<int> ReadAsync(
byte[] buffer,
int offset,
int count,
System.Threading.CancellationToken cancellationToken
) => await ReadImplAsync(buffer, offset, count, cancellationToken).ConfigureAwait(false);
private async System.Threading.Tasks.Task<int> ReadImplAsync(
byte[] buffer,
int offset,
int count,
System.Threading.CancellationToken cancellationToken
)
{
outTotal = 0;
if (tmpCount > 0)
{
var toCopy = tmpCount < count ? tmpCount : count;
Buffer.BlockCopy(tmpBuffer, tmpOffset, buffer, offset, toCopy);
tmpOffset += toCopy;
tmpCount -= toCopy;
offset += toCopy;
count -= toCopy;
outTotal += toCopy;
}
if (count > 0 && unpack.DestSize > 0)
{
outBuffer = buffer;
outOffset = offset;
outCount = count;
fetch = true;
await unpack.DoUnpackAsync(cancellationToken).ConfigureAwait(false);
fetch = false;
}
_position += outTotal;
if (count > 0 && outTotal == 0 && _position != Length)
{
// sanity check, e.g. if we try to decompress a redir entry
throw new InvalidOperationException(
$"unpacked file size does not match header: expected {Length} found {_position}"
);
}
return outTotal;
}
#if NETCOREAPP2_1_OR_GREATER || NETSTANDARD2_1_OR_GREATER
public override async System.Threading.Tasks.ValueTask<int> ReadAsync(
Memory<byte> buffer,
System.Threading.CancellationToken cancellationToken = default
)
{
cancellationToken.ThrowIfCancellationRequested();
var array = System.Buffers.ArrayPool<byte>.Shared.Rent(buffer.Length);
try
{
var bytesRead = await ReadImplAsync(array, 0, buffer.Length, cancellationToken)
.ConfigureAwait(false);
new ReadOnlySpan<byte>(array, 0, bytesRead).CopyTo(buffer.Span);
return bytesRead;
}
finally
{
System.Buffers.ArrayPool<byte>.Shared.Return(array);
}
}
#endif
public override long Seek(long offset, SeekOrigin origin) => throw new NotSupportedException();
public override void SetLength(long value) => throw new NotSupportedException();
@@ -165,6 +245,18 @@ internal class RarStream : Stream, IStreamStack
}
}
public override System.Threading.Tasks.Task WriteAsync(
byte[] buffer,
int offset,
int count,
System.Threading.CancellationToken cancellationToken
)
{
cancellationToken.ThrowIfCancellationRequested();
Write(buffer, offset, count);
return System.Threading.Tasks.Task.CompletedTask;
}
private void EnsureBufferCapacity(int count)
{
if (this.tmpBuffer.Length < this.tmpCount + count)

View File

@@ -13,7 +13,7 @@ using SharpCompress.Compressors.Rar.VM;
namespace SharpCompress.Compressors.Rar.UnpackV1;
internal sealed partial class Unpack : BitInput, IRarUnpack, IDisposable
internal sealed partial class Unpack : BitInput, IRarUnpack
{
private readonly BitInput Inp;
private bool disposed;
@@ -22,15 +22,17 @@ internal sealed partial class Unpack : BitInput, IRarUnpack, IDisposable
// to ease in porting Unpack50.cs
Inp = this;
public void Dispose()
public override void Dispose()
{
if (!disposed)
{
if (!externalWindow)
base.Dispose();
if (!externalWindow && window is not null)
{
ArrayPool<byte>.Shared.Return(window);
window = null;
}
rarVM.Dispose();
disposed = true;
}
}
@@ -153,6 +155,25 @@ internal sealed partial class Unpack : BitInput, IRarUnpack, IDisposable
DoUnpack();
}
public async System.Threading.Tasks.Task DoUnpackAsync(
FileHeader fileHeader,
Stream readStream,
Stream writeStream,
System.Threading.CancellationToken cancellationToken = default
)
{
destUnpSize = fileHeader.UncompressedSize;
this.fileHeader = fileHeader;
this.readStream = readStream;
this.writeStream = writeStream;
if (!fileHeader.IsSolid)
{
Init(null);
}
suspended = false;
await DoUnpackAsync(cancellationToken).ConfigureAwait(false);
}
public void DoUnpack()
{
if (fileHeader.CompressionMethod == 0)
@@ -187,6 +208,42 @@ internal sealed partial class Unpack : BitInput, IRarUnpack, IDisposable
}
}
public async System.Threading.Tasks.Task DoUnpackAsync(
System.Threading.CancellationToken cancellationToken = default
)
{
if (fileHeader.CompressionMethod == 0)
{
await UnstoreFileAsync(cancellationToken).ConfigureAwait(false);
return;
}
switch (fileHeader.CompressionAlgorithm)
{
case 15: // rar 1.5 compression
await unpack15Async(fileHeader.IsSolid, cancellationToken).ConfigureAwait(false);
break;
case 20: // rar 2.x compression
case 26: // files larger than 2GB
await unpack20Async(fileHeader.IsSolid, cancellationToken).ConfigureAwait(false);
break;
case 29: // rar 3.x compression
case 36: // alternative hash
await Unpack29Async(fileHeader.IsSolid, cancellationToken).ConfigureAwait(false);
break;
case 50: // rar 5.x compression
await Unpack5Async(fileHeader.IsSolid, cancellationToken).ConfigureAwait(false);
break;
default:
throw new InvalidFormatException(
"unknown rar compression version " + fileHeader.CompressionAlgorithm
);
}
}
private void UnstoreFile()
{
Span<byte> buffer = stackalloc byte[(int)Math.Min(0x10000, destUnpSize)];
@@ -203,6 +260,26 @@ internal sealed partial class Unpack : BitInput, IRarUnpack, IDisposable
} while (!suspended && destUnpSize > 0);
}
private async System.Threading.Tasks.Task UnstoreFileAsync(
System.Threading.CancellationToken cancellationToken = default
)
{
var buffer = new byte[(int)Math.Min(0x10000, destUnpSize)];
do
{
var code = await readStream
.ReadAsync(buffer, 0, buffer.Length, cancellationToken)
.ConfigureAwait(false);
if (code == 0 || code == -1)
{
break;
}
code = code < destUnpSize ? code : (int)destUnpSize;
await writeStream.WriteAsync(buffer, 0, code, cancellationToken).ConfigureAwait(false);
destUnpSize -= code;
} while (!suspended && destUnpSize > 0);
}
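UnstoreFileAsync is a budgeted copy loop: each read is clamped against the bytes still owed (destUnpSize) before being written through. The same shape in isolation, with illustrative names:

using System;
using System.IO;
using System.Threading;
using System.Threading.Tasks;

static class BoundedCopy
{
    // Copies at most 'remaining' bytes from source to destination,
    // clamping the final chunk the way UnstoreFileAsync clamps 'code'.
    public static async Task CopyAtMostAsync(
        Stream source,
        Stream destination,
        long remaining,
        CancellationToken cancellationToken = default
    )
    {
        var buffer = new byte[(int)Math.Min(0x10000, remaining)];
        while (remaining > 0)
        {
            var read = await source
                .ReadAsync(buffer, 0, buffer.Length, cancellationToken)
                .ConfigureAwait(false);
            if (read <= 0)
            {
                break; // source exhausted early
            }
            var toWrite = (int)Math.Min(read, remaining);
            await destination
                .WriteAsync(buffer, 0, toWrite, cancellationToken)
                .ConfigureAwait(false);
            remaining -= toWrite;
        }
    }
}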
private void Unpack29(bool solid)
{
Span<int> DDecode = stackalloc int[PackDef.DC];
@@ -481,6 +558,281 @@ internal sealed partial class Unpack : BitInput, IRarUnpack, IDisposable
UnpWriteBuf();
}
private async System.Threading.Tasks.Task Unpack29Async(
bool solid,
System.Threading.CancellationToken cancellationToken = default
)
{
int[] DDecode = new int[PackDef.DC];
byte[] DBits = new byte[PackDef.DC];
int Bits;
if (DDecode[1] == 0)
{
int Dist = 0,
BitLength = 0,
Slot = 0;
for (var I = 0; I < DBitLengthCounts.Length; I++, BitLength++)
{
var count = DBitLengthCounts[I];
for (var J = 0; J < count; J++, Slot++, Dist += (1 << BitLength))
{
DDecode[Slot] = Dist;
DBits[Slot] = (byte)BitLength;
}
}
}
FileExtracted = true;
if (!suspended)
{
UnpInitData(solid);
if (!await unpReadBufAsync(cancellationToken).ConfigureAwait(false))
{
return;
}
if ((!solid || !tablesRead) && !ReadTables())
{
return;
}
}
if (ppmError)
{
return;
}
while (true)
{
unpPtr &= PackDef.MAXWINMASK;
if (inAddr > readBorder)
{
if (!await unpReadBufAsync(cancellationToken).ConfigureAwait(false))
{
break;
}
}
if (((wrPtr - unpPtr) & PackDef.MAXWINMASK) < 260 && wrPtr != unpPtr)
{
await UnpWriteBufAsync(cancellationToken).ConfigureAwait(false);
if (destUnpSize < 0)
{
return;
}
if (suspended)
{
FileExtracted = false;
return;
}
}
if (unpBlockType == BlockTypes.BLOCK_PPM)
{
var ch = ppm.DecodeChar();
if (ch == -1)
{
ppmError = true;
break;
}
if (ch == PpmEscChar)
{
var nextCh = ppm.DecodeChar();
if (nextCh == 0)
{
if (!ReadTables())
{
break;
}
continue;
}
if (nextCh == 2 || nextCh == -1)
{
break;
}
if (nextCh == 3)
{
if (!ReadVMCode())
{
break;
}
continue;
}
if (nextCh == 4)
{
uint Distance = 0,
Length = 0;
var failed = false;
for (var I = 0; I < 4 && !failed; I++)
{
var ch2 = ppm.DecodeChar();
if (ch2 == -1)
{
failed = true;
}
else if (I == 3)
{
Length = (uint)ch2;
}
else
{
Distance = (Distance << 8) + (uint)ch2;
}
}
if (failed)
{
break;
}
CopyString(Length + 32, Distance + 2);
continue;
}
if (nextCh == 5)
{
var length = ppm.DecodeChar();
if (length == -1)
{
break;
}
CopyString((uint)(length + 4), 1);
continue;
}
}
window[unpPtr++] = (byte)ch;
continue;
}
var Number = this.decodeNumber(LD);
if (Number < 256)
{
window[unpPtr++] = (byte)Number;
continue;
}
if (Number >= 271)
{
var Length = LDecode[Number -= 271] + 3;
if ((Bits = LBits[Number]) > 0)
{
Length += GetBits() >> (16 - Bits);
AddBits(Bits);
}
var DistNumber = this.decodeNumber(DD);
var Distance = DDecode[DistNumber] + 1;
if ((Bits = DBits[DistNumber]) > 0)
{
if (DistNumber > 9)
{
if (Bits > 4)
{
Distance += (GetBits() >> (20 - Bits)) << 4;
AddBits(Bits - 4);
}
if (lowDistRepCount > 0)
{
lowDistRepCount--;
Distance += prevLowDist;
}
else
{
var LowDist = this.decodeNumber(LDD);
if (LowDist == 16)
{
lowDistRepCount = PackDef.LOW_DIST_REP_COUNT - 1;
Distance += prevLowDist;
}
else
{
Distance += LowDist;
prevLowDist = (int)LowDist;
}
}
}
else
{
Distance += GetBits() >> (16 - Bits);
AddBits(Bits);
}
}
if (Distance >= 0x2000)
{
Length++;
if (Distance >= 0x40000)
{
Length++;
}
}
InsertOldDist(Distance);
lastLength = Length;
CopyString(Length, Distance);
continue;
}
if (Number == 256)
{
if (!ReadEndOfBlock())
{
break;
}
continue;
}
if (Number == 257)
{
if (!ReadVMCode())
{
break;
}
continue;
}
if (Number == 258)
{
if (lastLength != 0)
{
CopyString(lastLength, oldDist[0]);
}
continue;
}
if (Number < 263)
{
var DistNum = Number - 259;
var Distance = (uint)oldDist[DistNum];
for (var I = DistNum; I > 0; I--)
{
oldDist[I] = oldDist[I - 1];
}
oldDist[0] = (int)Distance;
var LengthNumber = this.decodeNumber(RD);
var Length = LDecode[LengthNumber] + 2;
if ((Bits = LBits[LengthNumber]) > 0)
{
Length += GetBits() >> (16 - Bits);
AddBits(Bits);
}
lastLength = Length;
CopyString((uint)Length, Distance);
continue;
}
if (Number < 272)
{
var Distance = SDDecode[Number -= 263] + 1;
if ((Bits = SDBits[Number]) > 0)
{
Distance += GetBits() >> (16 - Bits);
AddBits(Bits);
}
InsertOldDist((uint)Distance);
lastLength = 2;
CopyString(2, (uint)Distance);
}
}
await UnpWriteBufAsync(cancellationToken).ConfigureAwait(false);
}
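The DDecode/DBits initialization at the top of Unpack29Async expands per-bit-length slot counts into decode tables: base distances are a running sum, with consecutive slots at the same bit length stepping by 1 << bitLength. The construction in isolation (the Build helper and its counts argument are illustrative):

static class DistanceTables
{
    // Expands bit-length counts into (baseDistance, extraBits) per slot,
    // the same loop Unpack29Async runs over DBitLengthCounts.
    public static (int[] Bases, byte[] Bits) Build(byte[] bitLengthCounts, int slots)
    {
        var bases = new int[slots];
        var bits = new byte[slots];
        int dist = 0, bitLength = 0, slot = 0;
        for (var i = 0; i < bitLengthCounts.Length; i++, bitLength++)
        {
            for (var j = 0; j < bitLengthCounts[i]; j++, slot++, dist += 1 << bitLength)
            {
                bases[slot] = dist;
                bits[slot] = (byte)bitLength;
            }
        }
        return (bases, bits);
    }
}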
private void UnpWriteBuf()
{
var WrittenBorder = wrPtr;
@@ -574,104 +926,111 @@ internal sealed partial class Unpack : BitInput, IRarUnpack, IDisposable
var FilteredDataOffset = Prg.FilteredDataOffset;
var FilteredDataSize = Prg.FilteredDataSize;
var FilteredData = new byte[FilteredDataSize];
for (var i = 0; i < FilteredDataSize; i++)
{
FilteredData[i] = rarVM.Mem[FilteredDataOffset + i];
// Prg.GlobalData.get(FilteredDataOffset + i);
}
var FilteredData = ArrayPool<byte>.Shared.Rent(FilteredDataSize);
try
{
Array.Copy(
rarVM.Mem,
FilteredDataOffset,
FilteredData,
0,
FilteredDataSize
);
prgStack[I] = null;
while (I + 1 < prgStack.Count)
{
var NextFilter = prgStack[I + 1];
if (
NextFilter is null
|| NextFilter.BlockStart != BlockStart
|| NextFilter.BlockLength != FilteredDataSize
|| NextFilter.NextWindow
)
{
break;
}
// apply several filters to same data block
rarVM.setMemory(0, FilteredData, 0, FilteredDataSize);
// .SetMemory(0,FilteredData,FilteredDataSize);
var pPrg = filters[NextFilter.ParentFilter].Program;
var NextPrg = NextFilter.Program;
if (pPrg.GlobalData.Count > RarVM.VM_FIXEDGLOBALSIZE)
{
// copy global data from previous script execution
// if any
// NextPrg->GlobalData.Alloc(ParentPrg->GlobalData.Size());
NextPrg.GlobalData.SetSize(pPrg.GlobalData.Count);
// memcpy(&NextPrg->GlobalData[VM_FIXEDGLOBALSIZE],&ParentPrg->GlobalData[VM_FIXEDGLOBALSIZE],ParentPrg->GlobalData.Size()-VM_FIXEDGLOBALSIZE);
for (
var i = 0;
i < pPrg.GlobalData.Count - RarVM.VM_FIXEDGLOBALSIZE;
i++
)
{
NextPrg.GlobalData[RarVM.VM_FIXEDGLOBALSIZE + i] =
pPrg.GlobalData[RarVM.VM_FIXEDGLOBALSIZE + i];
}
}
ExecuteCode(NextPrg);
if (NextPrg.GlobalData.Count > RarVM.VM_FIXEDGLOBALSIZE)
{
// save global data for next script execution
if (pPrg.GlobalData.Count < NextPrg.GlobalData.Count)
{
pPrg.GlobalData.SetSize(NextPrg.GlobalData.Count);
}
// memcpy(&ParentPrg->GlobalData[VM_FIXEDGLOBALSIZE],&NextPrg->GlobalData[VM_FIXEDGLOBALSIZE],NextPrg->GlobalData.Size()-VM_FIXEDGLOBALSIZE);
for (
var i = 0;
i < NextPrg.GlobalData.Count - RarVM.VM_FIXEDGLOBALSIZE;
i++
)
{
pPrg.GlobalData[RarVM.VM_FIXEDGLOBALSIZE + i] =
NextPrg.GlobalData[RarVM.VM_FIXEDGLOBALSIZE + i];
}
}
else
{
pPrg.GlobalData.Clear();
}
FilteredDataOffset = NextPrg.FilteredDataOffset;
FilteredDataSize = NextPrg.FilteredDataSize;
if (FilteredData.Length < FilteredDataSize)
{
ArrayPool<byte>.Shared.Return(FilteredData);
FilteredData = ArrayPool<byte>.Shared.Rent(FilteredDataSize);
}
for (var i = 0; i < FilteredDataSize; i++)
{
FilteredData[i] = NextPrg.GlobalData[FilteredDataOffset + i];
}
I++;
prgStack[I] = null;
}
writeStream.Write(FilteredData, 0, FilteredDataSize);
writtenFileSize += FilteredDataSize;
destUnpSize -= FilteredDataSize;
WrittenBorder = BlockEnd;
WriteSize = (unpPtr - WrittenBorder) & PackDef.MAXWINMASK;
}
finally
{
ArrayPool<byte>.Shared.Return(FilteredData);
}
}
else
{
@@ -695,15 +1054,10 @@ internal sealed partial class Unpack : BitInput, IRarUnpack, IDisposable
private void UnpWriteArea(int startPtr, int endPtr)
{
if (endPtr < startPtr)
{
UnpWriteData(window, startPtr, -startPtr & PackDef.MAXWINMASK);
UnpWriteData(window, 0, endPtr);
}
else
{
@@ -757,19 +1111,27 @@ internal sealed partial class Unpack : BitInput, IRarUnpack, IDisposable
// System.out.println("copyString(" + length + ", " + distance + ")");
var destPtr = unpPtr - distance;
var safeZone = PackDef.MAXWINSIZE - 260;
// Fast path: use Array.Copy for bulk operations when in safe zone
if (destPtr >= 0 && destPtr < safeZone && unpPtr < safeZone && distance >= length)
{
// Non-overlapping copy: can use Array.Copy directly
Array.Copy(window, destPtr, window, unpPtr, length);
unpPtr += length;
}
else if (destPtr >= 0 && destPtr < safeZone && unpPtr < safeZone)
{
// Overlapping copy in safe zone: use byte-by-byte to handle self-referential copies
for (int i = 0; i < length; i++)
{
window[unpPtr + i] = window[destPtr + i];
}
unpPtr += length;
}
else
{
// Slow path with wraparound mask
while (length-- != 0)
{
window[unpPtr] = window[destPtr++ & PackDef.MAXWINMASK];
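The three-way split above hinges on overlap: distance >= length guarantees the source and destination ranges are disjoint, so a block copy is safe; a shorter distance makes the copy self-referential and only a forward byte loop reproduces the run. A tiny standalone illustration (hypothetical helper):

// window = { a, b, ?, ?, ?, ? }, srcPtr = 0, destPtr = 2, length = 4:
// a forward loop yields a,b,a,b,a,b because it re-reads bytes it just
// wrote, which is exactly the LZ run semantics a buffered block copy breaks.
static void OverlappingLzCopy(byte[] window, int destPtr, int srcPtr, int length)
{
    for (var i = 0; i < length; i++)
    {
        window[destPtr + i] = window[srcPtr + i];
    }
}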
@@ -1028,7 +1390,7 @@ internal sealed partial class Unpack : BitInput, IRarUnpack, IDisposable
vmCode.Add((byte)(GetBits() >> 8));
AddBits(8);
}
return AddVMCode(FirstByte, vmCode);
}
private bool ReadVMCodePPM()
@@ -1073,12 +1435,12 @@ internal sealed partial class Unpack : BitInput, IRarUnpack, IDisposable
}
vmCode.Add((byte)Ch); // VMCode[I]=Ch;
}
return AddVMCode(FirstByte, vmCode);
}
private bool AddVMCode(int firstByte, List<byte> vmCode)
{
using var Inp = new BitInput();
Inp.InitBitInput();
// memcpy(Inp.InBuf,Code,Min(BitInput::MAX_SIZE,CodeSize));
@@ -1086,7 +1448,6 @@ internal sealed partial class Unpack : BitInput, IRarUnpack, IDisposable
{
Inp.InBuf[i] = vmCode[i];
}
rarVM.init();
int FiltPos;
if ((firstByte & 0x80) != 0)
@@ -1199,19 +1560,28 @@ internal sealed partial class Unpack : BitInput, IRarUnpack, IDisposable
{
return (false);
}
var VMCode = ArrayPool<byte>.Shared.Rent(VMCodeSize);
try
{
for (var I = 0; I < VMCodeSize; I++)
{
if (Inp.Overflow(3))
{
return (false);
}
VMCode[I] = (byte)(Inp.GetBits() >> 8);
Inp.AddBits(8);
}
// VM.Prepare(&VMCode[0],VMCodeSize,&Filter->Prg);
rarVM.prepare(VMCode.AsSpan(0, VMCodeSize), Filter.Program);
}
finally
{
ArrayPool<byte>.Shared.Return(VMCode);
}
}
StackFilter.Program.AltCommands = Filter.Program.Commands; // StackFilter->Prg.AltCmd=&Filter->Prg.Cmd[0];
StackFilter.Program.CommandCount = Filter.Program.CommandCount;
@@ -1319,6 +1689,256 @@ internal sealed partial class Unpack : BitInput, IRarUnpack, IDisposable
}
}
private async System.Threading.Tasks.Task UnpWriteBufAsync(
System.Threading.CancellationToken cancellationToken = default
)
{
var WrittenBorder = wrPtr;
var WriteSize = (unpPtr - WrittenBorder) & PackDef.MAXWINMASK;
for (var I = 0; I < prgStack.Count; I++)
{
var flt = prgStack[I];
if (flt is null)
{
continue;
}
if (flt.NextWindow)
{
flt.NextWindow = false;
continue;
}
var BlockStart = flt.BlockStart;
var BlockLength = flt.BlockLength;
if (((BlockStart - WrittenBorder) & PackDef.MAXWINMASK) < WriteSize)
{
if (WrittenBorder != BlockStart)
{
await UnpWriteAreaAsync(WrittenBorder, BlockStart, cancellationToken)
.ConfigureAwait(false);
WrittenBorder = BlockStart;
WriteSize = (unpPtr - WrittenBorder) & PackDef.MAXWINMASK;
}
if (BlockLength <= WriteSize)
{
var BlockEnd = (BlockStart + BlockLength) & PackDef.MAXWINMASK;
if (BlockStart < BlockEnd || BlockEnd == 0)
{
rarVM.setMemory(0, window, BlockStart, BlockLength);
}
else
{
var FirstPartLength = PackDef.MAXWINSIZE - BlockStart;
rarVM.setMemory(0, window, BlockStart, FirstPartLength);
rarVM.setMemory(FirstPartLength, window, 0, BlockEnd);
}
var ParentPrg = filters[flt.ParentFilter].Program;
var Prg = flt.Program;
if (ParentPrg.GlobalData.Count > RarVM.VM_FIXEDGLOBALSIZE)
{
Prg.GlobalData.Clear();
for (
var i = 0;
i < ParentPrg.GlobalData.Count - RarVM.VM_FIXEDGLOBALSIZE;
i++
)
{
Prg.GlobalData[RarVM.VM_FIXEDGLOBALSIZE + i] = ParentPrg.GlobalData[
RarVM.VM_FIXEDGLOBALSIZE + i
];
}
}
ExecuteCode(Prg);
if (Prg.GlobalData.Count > RarVM.VM_FIXEDGLOBALSIZE)
{
if (ParentPrg.GlobalData.Count < Prg.GlobalData.Count)
{
ParentPrg.GlobalData.SetSize(Prg.GlobalData.Count);
}
for (var i = 0; i < Prg.GlobalData.Count - RarVM.VM_FIXEDGLOBALSIZE; i++)
{
ParentPrg.GlobalData[RarVM.VM_FIXEDGLOBALSIZE + i] = Prg.GlobalData[
RarVM.VM_FIXEDGLOBALSIZE + i
];
}
}
else
{
ParentPrg.GlobalData.Clear();
}
var FilteredDataOffset = Prg.FilteredDataOffset;
var FilteredDataSize = Prg.FilteredDataSize;
var FilteredData = ArrayPool<byte>.Shared.Rent(FilteredDataSize);
try
{
Array.Copy(
rarVM.Mem,
FilteredDataOffset,
FilteredData,
0,
FilteredDataSize
);
prgStack[I] = null;
while (I + 1 < prgStack.Count)
{
var NextFilter = prgStack[I + 1];
if (
NextFilter is null
|| NextFilter.BlockStart != BlockStart
|| NextFilter.BlockLength != FilteredDataSize
|| NextFilter.NextWindow
)
{
break;
}
rarVM.setMemory(0, FilteredData, 0, FilteredDataSize);
var pPrg = filters[NextFilter.ParentFilter].Program;
var NextPrg = NextFilter.Program;
if (pPrg.GlobalData.Count > RarVM.VM_FIXEDGLOBALSIZE)
{
NextPrg.GlobalData.SetSize(pPrg.GlobalData.Count);
for (
var i = 0;
i < pPrg.GlobalData.Count - RarVM.VM_FIXEDGLOBALSIZE;
i++
)
{
NextPrg.GlobalData[RarVM.VM_FIXEDGLOBALSIZE + i] =
pPrg.GlobalData[RarVM.VM_FIXEDGLOBALSIZE + i];
}
}
ExecuteCode(NextPrg);
if (NextPrg.GlobalData.Count > RarVM.VM_FIXEDGLOBALSIZE)
{
if (pPrg.GlobalData.Count < NextPrg.GlobalData.Count)
{
pPrg.GlobalData.SetSize(NextPrg.GlobalData.Count);
}
for (
var i = 0;
i < NextPrg.GlobalData.Count - RarVM.VM_FIXEDGLOBALSIZE;
i++
)
{
pPrg.GlobalData[RarVM.VM_FIXEDGLOBALSIZE + i] =
NextPrg.GlobalData[RarVM.VM_FIXEDGLOBALSIZE + i];
}
}
else
{
pPrg.GlobalData.Clear();
}
FilteredDataOffset = NextPrg.FilteredDataOffset;
FilteredDataSize = NextPrg.FilteredDataSize;
if (FilteredData.Length < FilteredDataSize)
{
ArrayPool<byte>.Shared.Return(FilteredData);
FilteredData = ArrayPool<byte>.Shared.Rent(FilteredDataSize);
}
for (var i = 0; i < FilteredDataSize; i++)
{
FilteredData[i] = NextPrg.GlobalData[FilteredDataOffset + i];
}
I++;
prgStack[I] = null;
}
await writeStream
.WriteAsync(FilteredData, 0, FilteredDataSize, cancellationToken)
.ConfigureAwait(false);
writtenFileSize += FilteredDataSize;
destUnpSize -= FilteredDataSize;
WrittenBorder = BlockEnd;
WriteSize = (unpPtr - WrittenBorder) & PackDef.MAXWINMASK;
}
finally
{
ArrayPool<byte>.Shared.Return(FilteredData);
}
}
else
{
for (var J = I; J < prgStack.Count; J++)
{
var filt = prgStack[J];
if (filt != null && filt.NextWindow)
{
filt.NextWindow = false;
}
}
wrPtr = WrittenBorder;
return;
}
}
}
await UnpWriteAreaAsync(WrittenBorder, unpPtr, cancellationToken).ConfigureAwait(false);
wrPtr = unpPtr;
}
private async System.Threading.Tasks.Task UnpWriteAreaAsync(
int startPtr,
int endPtr,
System.Threading.CancellationToken cancellationToken = default
)
{
if (endPtr < startPtr)
{
await UnpWriteDataAsync(
window,
startPtr,
-startPtr & PackDef.MAXWINMASK,
cancellationToken
)
.ConfigureAwait(false);
await UnpWriteDataAsync(window, 0, endPtr, cancellationToken).ConfigureAwait(false);
}
else
{
await UnpWriteDataAsync(window, startPtr, endPtr - startPtr, cancellationToken)
.ConfigureAwait(false);
}
}
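Here -startPtr & PackDef.MAXWINMASK computes the run from startPtr to the physical end of the power-of-two window (for 0 < start < N, -start & (N - 1) equals N - start), so a wrapped region comes out as two ordered slices. A standalone model under that assumption:

using System.IO;

// Sketch of UnpWriteArea/UnpWriteAreaAsync for a power-of-two window.
static void WriteRing(Stream dst, byte[] window, int start, int end)
{
    var mask = window.Length - 1; // window.Length must be a power of two
    if (end < start)
    {
        dst.Write(window, start, -start & mask); // start .. physical end
        dst.Write(window, 0, end);               // wrapped prefix
    }
    else
    {
        dst.Write(window, start, end - start);
    }
}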
private async System.Threading.Tasks.Task UnpWriteDataAsync(
byte[] data,
int offset,
int size,
System.Threading.CancellationToken cancellationToken = default
)
{
if (destUnpSize < 0)
{
return;
}
var writeSize = size;
if (writeSize > destUnpSize)
{
writeSize = (int)destUnpSize;
}
await writeStream
.WriteAsync(data, offset, writeSize, cancellationToken)
.ConfigureAwait(false);
writtenFileSize += size;
destUnpSize -= size;
}
private void CleanUp()
{
if (ppm != null)


@@ -19,14 +19,9 @@ internal partial class Unpack
private bool suspended;
private Stream readStream;
private Stream writeStream;
private int readTop;
private long destUnpSize;
@@ -321,6 +316,110 @@ internal partial class Unpack
oldUnpWriteBuf();
}
private async System.Threading.Tasks.Task unpack15Async(
bool solid,
System.Threading.CancellationToken cancellationToken = default
)
{
if (suspended)
{
unpPtr = wrPtr;
}
else
{
UnpInitData(solid);
oldUnpInitData(solid);
await unpReadBufAsync(cancellationToken).ConfigureAwait(false);
if (!solid)
{
initHuff();
unpPtr = 0;
}
else
{
unpPtr = wrPtr;
}
--destUnpSize;
}
if (destUnpSize >= 0)
{
getFlagsBuf();
FlagsCnt = 8;
}
while (destUnpSize >= 0)
{
unpPtr &= PackDef.MAXWINMASK;
if (
inAddr > readTop - 30
&& !await unpReadBufAsync(cancellationToken).ConfigureAwait(false)
)
{
break;
}
if (((wrPtr - unpPtr) & PackDef.MAXWINMASK) < 270 && wrPtr != unpPtr)
{
await oldUnpWriteBufAsync(cancellationToken).ConfigureAwait(false);
if (suspended)
{
return;
}
}
if (StMode != 0)
{
huffDecode();
continue;
}
if (--FlagsCnt < 0)
{
getFlagsBuf();
FlagsCnt = 7;
}
if ((FlagBuf & 0x80) != 0)
{
FlagBuf <<= 1;
if (Nlzb > Nhfb)
{
longLZ();
}
else
{
huffDecode();
}
}
else
{
FlagBuf <<= 1;
if (--FlagsCnt < 0)
{
getFlagsBuf();
FlagsCnt = 7;
}
if ((FlagBuf & 0x80) != 0)
{
FlagBuf <<= 1;
if (Nlzb > Nhfb)
{
huffDecode();
}
else
{
longLZ();
}
}
else
{
FlagBuf <<= 1;
shortLZ();
}
}
}
await oldUnpWriteBufAsync(cancellationToken).ConfigureAwait(false);
}
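The nested flag tests above encode a three-way choice per symbol. Read with lzBiased = Nlzb > Nhfb, the dispatch reduces to the following table (sketch, hypothetical helper):

// First flag bit set: take the favoured decoder; first clear and second
// set: take the other one; both clear: short LZ match.
static string Classify(bool firstBit, bool secondBit, bool lzBiased)
{
    if (firstBit)
    {
        return lzBiased ? "longLZ" : "huffDecode";
    }
    return secondBit ? (lzBiased ? "huffDecode" : "longLZ") : "shortLZ";
}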
private bool unpReadBuf()
{
var dataSize = readTop - inAddr;
@@ -356,6 +455,40 @@ internal partial class Unpack
return (readCode != -1);
}
private async System.Threading.Tasks.Task<bool> unpReadBufAsync(
System.Threading.CancellationToken cancellationToken = default
)
{
var dataSize = readTop - inAddr;
if (dataSize < 0)
{
return (false);
}
if (inAddr > MAX_SIZE / 2)
{
if (dataSize > 0)
{
Array.Copy(InBuf, inAddr, InBuf, 0, dataSize);
}
inAddr = 0;
readTop = dataSize;
}
else
{
dataSize = readTop;
}
var readCode = await readStream
.ReadAsync(InBuf, dataSize, (MAX_SIZE - dataSize) & ~0xf, cancellationToken)
.ConfigureAwait(false);
if (readCode > 0)
{
readTop += readCode;
}
readBorder = readTop - 30;
return (readCode != -1);
}
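Both refill paths follow the same scheme: once the read pointer passes the buffer midpoint, the unread tail is slid to the front so the next read stays large and 16-byte aligned. A condensed synchronous sketch under those assumptions (names hypothetical):

using System;
using System.IO;

// Condensed model of unpReadBuf/unpReadBufAsync.
static int Refill(byte[] buf, ref int pos, ref int top, Stream src)
{
    var remaining = top - pos;
    if (pos > buf.Length / 2)
    {
        if (remaining > 0)
        {
            Array.Copy(buf, pos, buf, 0, remaining); // slide unread tail to front
        }
        pos = 0;
        top = remaining;
    }
    // The mask keeps the requested count a multiple of 16, as the original does.
    var read = src.Read(buf, top, (buf.Length - top) & ~0xf);
    if (read > 0)
    {
        top += read;
    }
    return read;
}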
private int getShortLen1(int pos) => pos == 1 ? Buf60 + 3 : ShortLen1[pos];
private int getShortLen2(int pos) => pos == 3 ? Buf60 + 3 : ShortLen2[pos];
@@ -808,15 +941,10 @@ internal partial class Unpack
private void oldUnpWriteBuf()
{
if (unpPtr < wrPtr)
{
writeStream.Write(window, wrPtr, -wrPtr & PackDef.MAXWINMASK);
writeStream.Write(window, 0, unpPtr);
}
else
{
@@ -824,4 +952,26 @@ internal partial class Unpack
}
wrPtr = unpPtr;
}
private async System.Threading.Tasks.Task oldUnpWriteBufAsync(
System.Threading.CancellationToken cancellationToken = default
)
{
if (unpPtr < wrPtr)
{
await writeStream
.WriteAsync(window, wrPtr, -wrPtr & PackDef.MAXWINMASK, cancellationToken)
.ConfigureAwait(false);
await writeStream
.WriteAsync(window, 0, unpPtr, cancellationToken)
.ConfigureAwait(false);
}
else
{
await writeStream
.WriteAsync(window, wrPtr, unpPtr - wrPtr, cancellationToken)
.ConfigureAwait(false);
}
wrPtr = unpPtr;
}
}


@@ -368,6 +368,163 @@ internal partial class Unpack
oldUnpWriteBuf();
}
private async System.Threading.Tasks.Task unpack20Async(
bool solid,
System.Threading.CancellationToken cancellationToken = default
)
{
int Bits;
if (suspended)
{
unpPtr = wrPtr;
}
else
{
UnpInitData(solid);
if (!await unpReadBufAsync(cancellationToken).ConfigureAwait(false))
{
return;
}
if (!solid)
{
if (!await ReadTables20Async(cancellationToken).ConfigureAwait(false))
{
return;
}
}
--destUnpSize;
}
while (destUnpSize >= 0)
{
unpPtr &= PackDef.MAXWINMASK;
if (inAddr > readTop - 30)
{
if (!await unpReadBufAsync(cancellationToken).ConfigureAwait(false))
{
break;
}
}
if (((wrPtr - unpPtr) & PackDef.MAXWINMASK) < 270 && wrPtr != unpPtr)
{
await oldUnpWriteBufAsync(cancellationToken).ConfigureAwait(false);
if (suspended)
{
return;
}
}
if (UnpAudioBlock != 0)
{
var AudioNumber = this.decodeNumber(MD[UnpCurChannel]);
if (AudioNumber == 256)
{
if (!await ReadTables20Async(cancellationToken).ConfigureAwait(false))
{
break;
}
continue;
}
window[unpPtr++] = DecodeAudio(AudioNumber);
if (++UnpCurChannel == UnpChannels)
{
UnpCurChannel = 0;
}
--destUnpSize;
continue;
}
var Number = this.decodeNumber(LD);
if (Number < 256)
{
window[unpPtr++] = (byte)Number;
--destUnpSize;
continue;
}
if (Number > 269)
{
var Length = LDecode[Number -= 270] + 3;
if ((Bits = LBits[Number]) > 0)
{
Length += Utility.URShift(GetBits(), (16 - Bits));
AddBits(Bits);
}
var DistNumber = this.decodeNumber(DD);
var Distance = DDecode[DistNumber] + 1;
if ((Bits = DBits[DistNumber]) > 0)
{
Distance += Utility.URShift(GetBits(), (16 - Bits));
AddBits(Bits);
}
if (Distance >= 0x2000)
{
Length++;
if (Distance >= 0x40000L)
{
Length++;
}
}
CopyString20(Length, Distance);
continue;
}
if (Number == 269)
{
if (!await ReadTables20Async(cancellationToken).ConfigureAwait(false))
{
break;
}
continue;
}
if (Number == 256)
{
CopyString20(lastLength, lastDist);
continue;
}
if (Number < 261)
{
var Distance = oldDist[(oldDistPtr - (Number - 256)) & 3];
var LengthNumber = this.decodeNumber(RD);
var Length = LDecode[LengthNumber] + 2;
if ((Bits = LBits[LengthNumber]) > 0)
{
Length += Utility.URShift(GetBits(), (16 - Bits));
AddBits(Bits);
}
if (Distance >= 0x101)
{
Length++;
if (Distance >= 0x2000)
{
Length++;
if (Distance >= 0x40000)
{
Length++;
}
}
}
CopyString20(Length, Distance);
continue;
}
if (Number < 270)
{
var Distance = SDDecode[Number -= 261] + 1;
if ((Bits = SDBits[Number]) > 0)
{
Distance += Utility.URShift(GetBits(), (16 - Bits));
AddBits(Bits);
}
CopyString20(2, Distance);
}
}
ReadLastTables();
await oldUnpWriteBufAsync(cancellationToken).ConfigureAwait(false);
}
private void CopyString20(int Length, int Distance)
{
lastDist = oldDist[oldDistPtr++ & 3] = Distance;
@@ -534,6 +691,120 @@ internal partial class Unpack
return (true);
}
private async System.Threading.Tasks.Task<bool> ReadTables20Async(
System.Threading.CancellationToken cancellationToken = default
)
{
byte[] BitLength = new byte[PackDef.BC20];
byte[] Table = new byte[PackDef.MC20 * 4];
int TableSize,
N,
I;
if (inAddr > readTop - 25)
{
if (!await unpReadBufAsync(cancellationToken).ConfigureAwait(false))
{
return (false);
}
}
var BitField = GetBits();
UnpAudioBlock = (BitField & 0x8000);
if (0 == (BitField & 0x4000))
{
new Span<byte>(UnpOldTable20).Clear();
}
AddBits(2);
if (UnpAudioBlock != 0)
{
UnpChannels = ((Utility.URShift(BitField, 12)) & 3) + 1;
if (UnpCurChannel >= UnpChannels)
{
UnpCurChannel = 0;
}
AddBits(2);
TableSize = PackDef.MC20 * UnpChannels;
}
else
{
TableSize = PackDef.NC20 + PackDef.DC20 + PackDef.RC20;
}
for (I = 0; I < PackDef.BC20; I++)
{
BitLength[I] = (byte)(Utility.URShift(GetBits(), 12));
AddBits(4);
}
UnpackUtility.makeDecodeTables(BitLength, 0, BD, PackDef.BC20);
I = 0;
while (I < TableSize)
{
if (inAddr > readTop - 5)
{
if (!await unpReadBufAsync(cancellationToken).ConfigureAwait(false))
{
return (false);
}
}
var Number = this.decodeNumber(BD);
if (Number < 16)
{
Table[I] = (byte)((Number + UnpOldTable20[I]) & 0xf);
I++;
}
else if (Number == 16)
{
N = (Utility.URShift(GetBits(), 14)) + 3;
AddBits(2);
while (N-- > 0 && I < TableSize)
{
Table[I] = Table[I - 1];
I++;
}
}
else
{
if (Number == 17)
{
N = (Utility.URShift(GetBits(), 13)) + 3;
AddBits(3);
}
else
{
N = (Utility.URShift(GetBits(), 9)) + 11;
AddBits(7);
}
while (N-- > 0 && I < TableSize)
{
Table[I++] = 0;
}
}
}
if (inAddr > readTop)
{
return (true);
}
if (UnpAudioBlock != 0)
{
for (I = 0; I < UnpChannels; I++)
{
UnpackUtility.makeDecodeTables(Table, I * PackDef.MC20, MD[I], PackDef.MC20);
}
}
else
{
UnpackUtility.makeDecodeTables(Table, 0, LD, PackDef.NC20);
UnpackUtility.makeDecodeTables(Table, PackDef.NC20, DD, PackDef.DC20);
UnpackUtility.makeDecodeTables(Table, PackDef.NC20 + PackDef.DC20, RD, PackDef.RC20);
}
for (var i = 0; i < UnpOldTable20.Length; i++)
{
UnpOldTable20[i] = Table[i];
}
return (true);
}
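Note the (Number + UnpOldTable20[I]) & 0xf step above: each code length is transmitted as a 4-bit delta against the previous block's table, which is why UnpOldTable20 is cleared only when bit 0x4000 is absent. In short (illustrative helper):

// Delta decoding of one code length; oldLen is the entry from the last block.
static byte NextCodeLength(uint delta, byte oldLen) => (byte)((delta + oldLen) & 0xF);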
private void unpInitData20(bool Solid)
{
if (!Solid)


@@ -479,6 +479,354 @@ internal partial class Unpack
return ReadCode != -1;
}
private async System.Threading.Tasks.Task<bool> UnpReadBufAsync(
System.Threading.CancellationToken cancellationToken = default
)
{
var DataSize = ReadTop - Inp.InAddr; // Data left to process.
if (DataSize < 0)
{
return false;
}
BlockHeader.BlockSize -= Inp.InAddr - BlockHeader.BlockStart;
if (Inp.InAddr > MAX_SIZE / 2)
{
if (DataSize > 0)
{
Array.Copy(InBuf, Inp.InAddr, InBuf, 0, DataSize);
}
Inp.InAddr = 0;
ReadTop = DataSize;
}
else
{
DataSize = ReadTop;
}
var ReadCode = 0;
if (MAX_SIZE != DataSize)
{
ReadCode = await readStream
.ReadAsync(InBuf, DataSize, MAX_SIZE - DataSize, cancellationToken)
.ConfigureAwait(false);
}
if (ReadCode > 0) // Can be also -1.
{
ReadTop += ReadCode;
}
ReadBorder = ReadTop - 30;
BlockHeader.BlockStart = Inp.InAddr;
if (BlockHeader.BlockSize != -1) // '-1' means not defined yet.
{
ReadBorder = Math.Min(ReadBorder, BlockHeader.BlockStart + BlockHeader.BlockSize - 1);
}
return ReadCode != -1;
}
public async System.Threading.Tasks.Task Unpack5Async(
bool Solid,
System.Threading.CancellationToken cancellationToken = default
)
{
FileExtracted = true;
if (!Suspended)
{
UnpInitData(Solid);
if (!await UnpReadBufAsync(cancellationToken).ConfigureAwait(false))
{
return;
}
// Check TablesRead5 to be sure that we read tables at least once
// regardless of current block header TablePresent flag.
// So we can safely use these tables below.
if (
!await ReadBlockHeaderAsync(cancellationToken).ConfigureAwait(false)
|| !ReadTables()
|| !TablesRead5
)
{
return;
}
}
while (true)
{
UnpPtr &= MaxWinMask;
if (Inp.InAddr >= ReadBorder)
{
var FileDone = false;
// We use 'while', because for empty block containing only Huffman table,
// we'll be on the block border once again just after reading the table.
while (
Inp.InAddr > BlockHeader.BlockStart + BlockHeader.BlockSize - 1
|| Inp.InAddr == BlockHeader.BlockStart + BlockHeader.BlockSize - 1
&& Inp.InBit >= BlockHeader.BlockBitSize
)
{
if (BlockHeader.LastBlockInFile)
{
FileDone = true;
break;
}
if (
!await ReadBlockHeaderAsync(cancellationToken).ConfigureAwait(false)
|| !ReadTables()
)
{
return;
}
}
if (FileDone || !await UnpReadBufAsync(cancellationToken).ConfigureAwait(false))
{
break;
}
}
if (
((WriteBorder - UnpPtr) & MaxWinMask) < PackDef.MAX_LZ_MATCH + 3
&& WriteBorder != UnpPtr
)
{
UnpWriteBuf();
if (WrittenFileSize > DestUnpSize)
{
return;
}
if (Suspended)
{
FileExtracted = false;
return;
}
}
//uint MainSlot=DecodeNumber(Inp,LD);
var MainSlot = this.DecodeNumber(LD);
if (MainSlot < 256)
{
// if (Fragmented)
// FragWindow[UnpPtr++]=(byte)MainSlot;
// else
Window[UnpPtr++] = (byte)MainSlot;
continue;
}
if (MainSlot >= 262)
{
var Length = SlotToLength(MainSlot - 262);
//uint DBits,Distance=1,DistSlot=DecodeNumber(Inp,&BlockTables.DD);
int DBits;
uint Distance = 1,
DistSlot = this.DecodeNumber(DD);
if (DistSlot < 4)
{
DBits = 0;
Distance += DistSlot;
}
else
{
//DBits=DistSlot/2 - 1;
DBits = (int)((DistSlot / 2) - 1);
Distance += (2 | (DistSlot & 1)) << DBits;
}
if (DBits > 0)
{
if (DBits >= 4)
{
if (DBits > 4)
{
Distance += ((Inp.getbits() >> (36 - DBits)) << 4);
Inp.AddBits(DBits - 4);
}
//uint LowDist=DecodeNumber(Inp,&BlockTables.LDD);
var LowDist = this.DecodeNumber(LDD);
Distance += LowDist;
}
else
{
Distance += Inp.getbits() >> (32 - DBits);
Inp.AddBits(DBits);
}
}
if (Distance > 0x100)
{
Length++;
if (Distance > 0x2000)
{
Length++;
if (Distance > 0x40000)
{
Length++;
}
}
}
InsertOldDist(Distance);
LastLength = Length;
// if (Fragmented)
// FragWindow.CopyString(Length,Distance,UnpPtr,MaxWinMask);
// else
CopyString(Length, Distance);
continue;
}
if (MainSlot == 256)
{
var Filter = new UnpackFilter();
if (
!await ReadFilterAsync(Filter, cancellationToken).ConfigureAwait(false)
|| !AddFilter(Filter)
)
{
break;
}
continue;
}
if (MainSlot == 257)
{
if (LastLength != 0)
// if (Fragmented)
// FragWindow.CopyString(LastLength,OldDist[0],UnpPtr,MaxWinMask);
// else
//CopyString(LastLength,OldDist[0]);
{
CopyString(LastLength, OldDistN(0));
}
continue;
}
if (MainSlot < 262)
{
//uint DistNum=MainSlot-258;
var DistNum = (int)(MainSlot - 258);
//uint Distance=OldDist[DistNum];
var Distance = OldDistN(DistNum);
//for (uint I=DistNum;I>0;I--)
for (var I = DistNum; I > 0; I--)
//OldDistN[I]=OldDistN(I-1);
{
SetOldDistN(I, OldDistN(I - 1));
}
//OldDistN[0]=Distance;
SetOldDistN(0, Distance);
var LengthSlot = this.DecodeNumber(RD);
var Length = SlotToLength(LengthSlot);
LastLength = Length;
// if (Fragmented)
// FragWindow.CopyString(Length,Distance,UnpPtr,MaxWinMask);
// else
CopyString(Length, Distance);
continue;
}
}
UnpWriteBuf();
}
private async System.Threading.Tasks.Task<bool> ReadBlockHeaderAsync(
System.Threading.CancellationToken cancellationToken = default
)
{
Header.HeaderSize = 0;
if (!Inp.ExternalBuffer && Inp.InAddr > ReadTop - 7)
{
if (!await UnpReadBufAsync(cancellationToken).ConfigureAwait(false))
{
return false;
}
}
//Inp.faddbits((8-Inp.InBit)&7);
Inp.faddbits((uint)((8 - Inp.InBit) & 7));
var BlockFlags = (byte)(Inp.fgetbits() >> 8);
Inp.faddbits(8);
//uint ByteCount=((BlockFlags>>3)&3)+1; // Block size byte count.
var ByteCount = (uint)(((BlockFlags >> 3) & 3) + 1); // Block size byte count.
if (ByteCount == 4)
{
return false;
}
//Header.HeaderSize=2+ByteCount;
Header.HeaderSize = (int)(2 + ByteCount);
Header.BlockBitSize = (BlockFlags & 7) + 1;
var SavedCheckSum = (byte)(Inp.fgetbits() >> 8);
Inp.faddbits(8);
var BlockSize = 0;
//for (uint I=0;I<ByteCount;I++)
for (var I = 0; I < ByteCount; I++)
{
//BlockSize+=(Inp.fgetbits()>>8)<<(I*8);
BlockSize += (int)(Inp.fgetbits() >> 8) << (I * 8);
Inp.AddBits(8);
}
Header.BlockSize = BlockSize;
var CheckSum = (byte)(0x5a ^ BlockFlags ^ BlockSize ^ (BlockSize >> 8) ^ (BlockSize >> 16));
if (CheckSum != SavedCheckSum)
{
return false;
}
Header.BlockStart = Inp.InAddr;
ReadBorder = Math.Min(ReadBorder, Header.BlockStart + Header.BlockSize - 1);
Header.LastBlockInFile = (BlockFlags & 0x40) != 0;
Header.TablePresent = (BlockFlags & 0x80) != 0;
return true;
}
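The header checksum folds the flags byte and each byte of the block size into the constant 0x5A, so a single flipped bit in any of them is caught before the sizes are trusted. An equivalent standalone check (hypothetical helper):

// Mirrors the CheckSum computation in ReadBlockHeaderAsync.
static bool HeaderChecksumOk(byte blockFlags, int blockSize, byte storedCheckSum)
{
    var sum = (byte)(0x5a ^ blockFlags ^ blockSize ^ (blockSize >> 8) ^ (blockSize >> 16));
    return sum == storedCheckSum;
}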
private async System.Threading.Tasks.Task<bool> ReadFilterAsync(
UnpackFilter Filter,
System.Threading.CancellationToken cancellationToken = default
)
{
if (!Inp.ExternalBuffer && Inp.InAddr > ReadTop - 16)
{
if (!await UnpReadBufAsync(cancellationToken).ConfigureAwait(false))
{
return false;
}
}
Filter.uBlockStart = ReadFilterData();
Filter.uBlockLength = ReadFilterData();
if (Filter.BlockLength > MAX_FILTER_BLOCK_SIZE)
{
Filter.BlockLength = 0;
}
//Filter.Type=Inp.fgetbits()>>13;
Filter.Type = (byte)(Inp.fgetbits() >> 13);
Inp.faddbits(3);
if (Filter.Type == (byte)FilterType.FILTER_DELTA)
{
//Filter.Channels=(Inp.fgetbits()>>11)+1;
Filter.Channels = (byte)((Inp.fgetbits() >> 11) + 1);
Inp.faddbits(5);
}
return true;
}
//?
// void UnpWriteBuf()
// {
@@ -814,116 +1162,5 @@ internal partial class Unpack
Header.TablePresent = (BlockFlags & 0x80) != 0;
return true;
}
//?
// bool ReadTables(BitInput Inp, ref UnpackBlockHeader Header, ref UnpackBlockTables Tables)
// {
// if (!Header.TablePresent)
// return true;
//
// if (!Inp.ExternalBuffer && Inp.InAddr>ReadTop-25)
// if (!UnpReadBuf())
// return false;
//
// byte BitLength[BC];
// for (uint I=0;I<BC;I++)
// {
// uint Length=(byte)(Inp.fgetbits() >> 12);
// Inp.faddbits(4);
// if (Length==15)
// {
// uint ZeroCount=(byte)(Inp.fgetbits() >> 12);
// Inp.faddbits(4);
// if (ZeroCount==0)
// BitLength[I]=15;
// else
// {
// ZeroCount+=2;
// while (ZeroCount-- > 0 && I<ASIZE(BitLength))
// BitLength[I++]=0;
// I--;
// }
// }
// else
// BitLength[I]=Length;
// }
//
// MakeDecodeTables(BitLength,&Tables.BD,BC);
//
// byte Table[HUFF_TABLE_SIZE];
// const uint TableSize=HUFF_TABLE_SIZE;
// for (uint I=0;I<TableSize;)
// {
// if (!Inp.ExternalBuffer && Inp.InAddr>ReadTop-5)
// if (!UnpReadBuf())
// return false;
// uint Number=DecodeNumber(Inp,&Tables.BD);
// if (Number<16)
// {
// Table[I]=Number;
// I++;
// }
// else
// if (Number<18)
// {
// uint N;
// if (Number==16)
// {
// N=(Inp.fgetbits() >> 13)+3;
// Inp.faddbits(3);
// }
// else
// {
// N=(Inp.fgetbits() >> 9)+11;
// Inp.faddbits(7);
// }
// if (I==0)
// {
// // We cannot have "repeat previous" code at the first position.
// // Multiple such codes would shift Inp position without changing I,
// which can lead to reading beyond the Inp boundary in multithreading
// // mode, where Inp.ExternalBuffer disables bounds check and we just
// // reserve a lot of buffer space to not need such check normally.
// return false;
// }
// else
// while (N-- > 0 && I<TableSize)
// {
// Table[I]=Table[I-1];
// I++;
// }
// }
// else
// {
// uint N;
// if (Number==18)
// {
// N=(Inp.fgetbits() >> 13)+3;
// Inp.faddbits(3);
// }
// else
// {
// N=(Inp.fgetbits() >> 9)+11;
// Inp.faddbits(7);
// }
// while (N-- > 0 && I<TableSize)
// Table[I++]=0;
// }
// }
// TablesRead5=true;
// if (!Inp.ExternalBuffer && Inp.InAddr>ReadTop)
// return false;
// MakeDecodeTables(&Table[0],&Tables.LD,NC);
// MakeDecodeTables(&Table[NC],&Tables.DD,DC);
// MakeDecodeTables(&Table[NC+DC],&Tables.LDD,LDC);
// MakeDecodeTables(&Table[NC+DC+LDC],&Tables.RD,RC);
// return true;
// }
//?
// void InitFilters()
// {
// Filters.SoftReset();
// }
}
#endif


@@ -1,4 +1,5 @@
using System;
using System.Runtime.CompilerServices;
using SharpCompress.Compressors.Rar.VM;
namespace SharpCompress.Compressors.Rar.UnpackV1;
@@ -9,167 +10,15 @@ internal static class UnpackUtility
internal static uint DecodeNumber(this BitInput input, Decode.Decode dec) =>
(uint)input.decodeNumber(dec);
[MethodImpl(MethodImplOptions.AggressiveInlining)]
internal static int decodeNumber(this BitInput input, Decode.Decode dec)
{
long bitField = input.GetBits() & 0xfffe;
var decodeLen = dec.DecodeLen;
// Binary search to find the bit length - faster than nested ifs
int bits = FindDecodeBits(bitField, decodeLen);
input.AddBits(bits);
var N =
dec.DecodePos[bits]
@@ -181,6 +30,52 @@ internal static class UnpackUtility
return (dec.DecodeNum[N]);
}
/// <summary>
/// Fast binary search to find which bit length matches the bitField.
/// Optimized with cached array access to minimize memory lookups.
/// </summary>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private static int FindDecodeBits(long bitField, int[] decodeLen)
{
// Cache critical values to reduce array access overhead
long len4 = decodeLen[4];
long len8 = decodeLen[8];
long len12 = decodeLen[12];
if (bitField < len8)
{
if (bitField < len4)
{
long len2 = decodeLen[2];
if (bitField < len2)
{
return bitField < decodeLen[1] ? 1 : 2;
}
return bitField < decodeLen[3] ? 3 : 4;
}
long len6 = decodeLen[6];
if (bitField < len6)
{
return bitField < decodeLen[5] ? 5 : 6;
}
return bitField < decodeLen[7] ? 7 : 8;
}
if (bitField < len12)
{
long len10 = decodeLen[10];
if (bitField < len10)
{
return bitField < decodeLen[9] ? 9 : 10;
}
return bitField < decodeLen[11] ? 11 : 12;
}
long len14 = decodeLen[14];
return bitField < len14 ? (bitField < decodeLen[13] ? 13 : 14) : 15;
}
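DecodeLen is monotone (entry k is the upper bound, left-aligned to 16 bits, for all codes of length at most k), so FindDecodeBits is a hand-unrolled lower-bound search. A linear reference it must agree with (illustrative only):

// Reference implementation: smallest k in 1..15 with bitField < decodeLen[k].
static int LinearDecodeBits(long bitField, int[] decodeLen)
{
    for (var k = 1; k < 15; k++)
    {
        if (bitField < decodeLen[k])
        {
            return k;
        }
    }
    return 15;
}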
internal static void makeDecodeTables(
Span<byte> lenTab,
int offset,
@@ -194,8 +89,7 @@ internal static class UnpackUtility
long M,
N;
new Span<int>(dec.DecodeNum).Clear();
for (i = 0; i < size; i++)
{
lenCount[lenTab[offset + i] & 0xF]++;

View File

@@ -29,9 +29,28 @@ internal partial class Unpack : IRarUnpack
// NOTE: caller has logic to check for -1 for error we throw instead.
readStream.Read(buf, offset, count);
private async System.Threading.Tasks.Task<int> UnpIO_UnpReadAsync(
byte[] buf,
int offset,
int count,
System.Threading.CancellationToken cancellationToken = default
) =>
// NOTE: caller has logic to check for -1 for error we throw instead.
await readStream.ReadAsync(buf, offset, count, cancellationToken).ConfigureAwait(false);
private void UnpIO_UnpWrite(byte[] buf, size_t offset, uint count) =>
writeStream.Write(buf, checked((int)offset), checked((int)count));
private async System.Threading.Tasks.Task UnpIO_UnpWriteAsync(
byte[] buf,
size_t offset,
uint count,
System.Threading.CancellationToken cancellationToken = default
) =>
await writeStream
.WriteAsync(buf, checked((int)offset), checked((int)count), cancellationToken)
.ConfigureAwait(false);
public void DoUnpack(FileHeader fileHeader, Stream readStream, Stream writeStream)
{
// as of 12/2017 .NET limits array indexing to using a signed integer
@@ -53,6 +72,25 @@ internal partial class Unpack : IRarUnpack
DoUnpack();
}
public async System.Threading.Tasks.Task DoUnpackAsync(
FileHeader fileHeader,
Stream readStream,
Stream writeStream,
System.Threading.CancellationToken cancellationToken = default
)
{
DestUnpSize = fileHeader.UncompressedSize;
this.fileHeader = fileHeader;
this.readStream = readStream;
this.writeStream = writeStream;
if (!fileHeader.IsStored)
{
Init(fileHeader.WindowSize, fileHeader.IsSolid);
}
Suspended = false;
await DoUnpackAsync(cancellationToken).ConfigureAwait(false);
}
public void DoUnpack()
{
if (fileHeader.IsStored)
@@ -65,6 +103,27 @@ internal partial class Unpack : IRarUnpack
}
}
public async System.Threading.Tasks.Task DoUnpackAsync(
System.Threading.CancellationToken cancellationToken = default
)
{
if (fileHeader.IsStored)
{
await UnstoreFileAsync(cancellationToken).ConfigureAwait(false);
}
else
{
// TODO: When compression methods are converted to async, call them here
// For now, fall back to synchronous version
await DoUnpackAsync(
fileHeader.CompressionAlgorithm,
fileHeader.IsSolid,
cancellationToken
)
.ConfigureAwait(false);
}
}
private void UnstoreFile()
{
Span<byte> b = stackalloc byte[(int)Math.Min(0x10000, DestUnpSize)];
@@ -80,6 +139,25 @@ internal partial class Unpack : IRarUnpack
} while (!Suspended);
}
private async System.Threading.Tasks.Task UnstoreFileAsync(
System.Threading.CancellationToken cancellationToken = default
)
{
var buffer = new byte[(int)Math.Min(0x10000, DestUnpSize)];
do
{
var n = await readStream
.ReadAsync(buffer, 0, buffer.Length, cancellationToken)
.ConfigureAwait(false);
if (n == 0)
{
break;
}
await writeStream.WriteAsync(buffer, 0, n, cancellationToken).ConfigureAwait(false);
DestUnpSize -= n;
} while (!Suspended);
}
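A hypothetical library-internal call site for the new async entry point; construction details are illustrative, but the flow (stored entries stream straight through, everything else initializes the window first) matches DoUnpackAsync above:

// Sketch, not from the patch: unpack one entry asynchronously.
var unpack = new Unpack();
await unpack
    .DoUnpackAsync(fileHeader, entryStream, outputStream, cancellationToken)
    .ConfigureAwait(false);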
public bool Suspended { get; set; }
public long DestSize => DestUnpSize;


@@ -200,6 +200,102 @@ internal partial class Unpack
UnpWriteBuf20();
}
private async System.Threading.Tasks.Task Unpack15Async(
bool Solid,
System.Threading.CancellationToken cancellationToken = default
)
{
UnpInitData(Solid);
UnpInitData15(Solid);
await UnpReadBufAsync(cancellationToken).ConfigureAwait(false);
if (!Solid)
{
InitHuff();
UnpPtr = 0;
}
else
{
UnpPtr = WrPtr;
}
--DestUnpSize;
if (DestUnpSize >= 0)
{
GetFlagsBuf();
FlagsCnt = 8;
}
while (DestUnpSize >= 0)
{
UnpPtr &= MaxWinMask;
if (
Inp.InAddr > ReadTop - 30
&& !await UnpReadBufAsync(cancellationToken).ConfigureAwait(false)
)
{
break;
}
if (((WrPtr - UnpPtr) & MaxWinMask) < 270 && WrPtr != UnpPtr)
{
await UnpWriteBuf20Async(cancellationToken).ConfigureAwait(false);
}
if (StMode != 0)
{
HuffDecode();
continue;
}
if (--FlagsCnt < 0)
{
GetFlagsBuf();
FlagsCnt = 7;
}
if ((FlagBuf & 0x80) != 0)
{
FlagBuf <<= 1;
if (Nlzb > Nhfb)
{
LongLZ();
}
else
{
HuffDecode();
}
}
else
{
FlagBuf <<= 1;
if (--FlagsCnt < 0)
{
GetFlagsBuf();
FlagsCnt = 7;
}
if ((FlagBuf & 0x80) != 0)
{
FlagBuf <<= 1;
if (Nlzb > Nhfb)
{
HuffDecode();
}
else
{
LongLZ();
}
}
else
{
FlagBuf <<= 1;
ShortLZ();
}
}
}
await UnpWriteBuf20Async(cancellationToken).ConfigureAwait(false);
}
//#define GetShortLen1(pos) ((pos)==1 ? Buf60+3:ShortLen1[pos])
private uint GetShortLen1(uint pos) => ((pos) == 1 ? (uint)(Buf60 + 3) : ShortLen1[pos]);


@@ -349,6 +349,170 @@ internal partial class Unpack
UnpWriteBuf20();
}
private async System.Threading.Tasks.Task Unpack20Async(
bool Solid,
System.Threading.CancellationToken cancellationToken = default
)
{
uint Bits;
if (Suspended)
{
UnpPtr = WrPtr;
}
else
{
UnpInitData(Solid);
if (!await UnpReadBufAsync(cancellationToken).ConfigureAwait(false))
{
return;
}
if (
(!Solid || !TablesRead2)
&& !await ReadTables20Async(cancellationToken).ConfigureAwait(false)
)
{
return;
}
--DestUnpSize;
}
while (DestUnpSize >= 0)
{
UnpPtr &= MaxWinMask;
if (Inp.InAddr > ReadTop - 30)
{
if (!await UnpReadBufAsync(cancellationToken).ConfigureAwait(false))
{
break;
}
}
if (((WrPtr - UnpPtr) & MaxWinMask) < 270 && WrPtr != UnpPtr)
{
await UnpWriteBuf20Async(cancellationToken).ConfigureAwait(false);
if (Suspended)
{
return;
}
}
if (UnpAudioBlock)
{
var AudioNumber = DecodeNumber(Inp, MD[UnpCurChannel]);
if (AudioNumber == 256)
{
if (!await ReadTables20Async(cancellationToken).ConfigureAwait(false))
{
break;
}
continue;
}
Window[UnpPtr++] = DecodeAudio((int)AudioNumber);
if (++UnpCurChannel == UnpChannels)
{
UnpCurChannel = 0;
}
--DestUnpSize;
continue;
}
var Number = DecodeNumber(Inp, BlockTables.LD);
if (Number < 256)
{
Window[UnpPtr++] = (byte)Number;
--DestUnpSize;
continue;
}
if (Number > 269)
{
var Length = (uint)(LDecode[Number -= 270] + 3);
if ((Bits = LBits[Number]) > 0)
{
Length += Inp.getbits() >> (int)(16 - Bits);
Inp.addbits(Bits);
}
var DistNumber = DecodeNumber(Inp, BlockTables.DD);
var Distance = DDecode[DistNumber] + 1;
if ((Bits = DBits[DistNumber]) > 0)
{
Distance += Inp.getbits() >> (int)(16 - Bits);
Inp.addbits(Bits);
}
if (Distance >= 0x2000)
{
Length++;
if (Distance >= 0x40000L)
{
Length++;
}
}
CopyString20(Length, Distance);
continue;
}
if (Number == 269)
{
if (!await ReadTables20Async(cancellationToken).ConfigureAwait(false))
{
break;
}
continue;
}
if (Number == 256)
{
CopyString20(LastLength, LastDist);
continue;
}
if (Number < 261)
{
var Distance = OldDist[(OldDistPtr - (Number - 256)) & 3];
var LengthNumber = DecodeNumber(Inp, BlockTables.RD);
var Length = (uint)(LDecode[LengthNumber] + 2);
if ((Bits = LBits[LengthNumber]) > 0)
{
Length += Inp.getbits() >> (int)(16 - Bits);
Inp.addbits(Bits);
}
if (Distance >= 0x101)
{
Length++;
if (Distance >= 0x2000)
{
Length++;
if (Distance >= 0x40000)
{
Length++;
}
}
}
CopyString20(Length, Distance);
continue;
}
if (Number < 270)
{
var Distance = (uint)(SDDecode[Number -= 261] + 1);
if ((Bits = SDBits[Number]) > 0)
{
Distance += Inp.getbits() >> (int)(16 - Bits);
Inp.addbits(Bits);
}
CopyString20(2, Distance);
continue;
}
}
ReadLastTables();
await UnpWriteBuf20Async(cancellationToken).ConfigureAwait(false);
}
private void UnpWriteBuf20()
{
if (UnpPtr != WrPtr)
@@ -370,6 +534,36 @@ internal partial class Unpack
WrPtr = UnpPtr;
}
private async System.Threading.Tasks.Task UnpWriteBuf20Async(
System.Threading.CancellationToken cancellationToken = default
)
{
if (UnpPtr != WrPtr)
{
UnpSomeRead = true;
}
if (UnpPtr < WrPtr)
{
await UnpIO_UnpWriteAsync(
Window,
WrPtr,
(uint)(-(int)WrPtr & MaxWinMask),
cancellationToken
)
.ConfigureAwait(false);
await UnpIO_UnpWriteAsync(Window, 0, UnpPtr, cancellationToken).ConfigureAwait(false);
UnpAllBuf = true;
}
else
{
await UnpIO_UnpWriteAsync(Window, WrPtr, UnpPtr - WrPtr, cancellationToken)
.ConfigureAwait(false);
}
WrPtr = UnpPtr;
}
private bool ReadTables20()
{
Span<byte> BitLength = stackalloc byte[checked((int)BC20)];
@@ -490,6 +684,130 @@ internal partial class Unpack
return true;
}
private async System.Threading.Tasks.Task<bool> ReadTables20Async(
System.Threading.CancellationToken cancellationToken = default
)
{
byte[] BitLength = new byte[checked((int)BC20)];
byte[] Table = new byte[checked((int)MC20 * 4)];
if (Inp.InAddr > ReadTop - 25)
{
if (!await UnpReadBufAsync(cancellationToken).ConfigureAwait(false))
{
return false;
}
}
var BitField = Inp.getbits();
UnpAudioBlock = (BitField & 0x8000) != 0;
if ((BitField & 0x4000) != 0)
{
Array.Clear(UnpOldTable20, 0, UnpOldTable20.Length);
}
Inp.addbits(2);
uint TableSize;
if (UnpAudioBlock)
{
UnpChannels = ((BitField >> 12) & 3) + 1;
if (UnpCurChannel >= UnpChannels)
{
UnpCurChannel = 0;
}
Inp.addbits(2);
TableSize = MC20 * UnpChannels;
}
else
{
TableSize = NC20 + DC20 + RC20;
}
for (int I = 0; I < checked((int)BC20); I++)
{
BitLength[I] = (byte)(Inp.getbits() >> 12);
Inp.addbits(4);
}
MakeDecodeTables(BitLength, 0, BlockTables.BD, BC20);
for (int I = 0; I < checked((int)TableSize); )
{
if (Inp.InAddr > ReadTop - 5)
{
if (!await UnpReadBufAsync(cancellationToken).ConfigureAwait(false))
{
return false;
}
}
var Number = DecodeNumber(Inp, BlockTables.BD);
if (Number < 16)
{
Table[I] = (byte)((Number + UnpOldTable20[I]) & 0xF);
I++;
}
else if (Number == 16)
{
// 16 repeats the previous table entry (RAR 2.0 table coding: 16 = repeat,
// 17/18 = zero runs), matching the V1 ReadTables20Async above.
var N = (Inp.getbits() >> 14) + 3;
Inp.addbits(2);
if (I == 0)
{
return false;
}
while (N-- > 0 && I < checked((int)TableSize))
{
Table[I] = Table[I - 1];
I++;
}
}
else
{
uint N;
if (Number == 17)
{
N = (Inp.getbits() >> 13) + 3;
Inp.addbits(3);
}
else
{
N = (Inp.getbits() >> 9) + 11;
Inp.addbits(7);
}
while (N-- > 0 && I < checked((int)TableSize))
{
Table[I++] = 0;
}
}
}
if (UnpAudioBlock)
{
for (int I = 0; I < UnpChannels; I++)
{
MakeDecodeTables(Table, (int)(I * MC20), MD[I], MC20);
}
}
else
{
MakeDecodeTables(Table, 0, BlockTables.LD, NC20);
MakeDecodeTables(Table, (int)NC20, BlockTables.DD, DC20);
MakeDecodeTables(Table, (int)(NC20 + DC20), BlockTables.RD, RC20);
}
Array.Copy(Table, 0, this.UnpOldTable20, 0, UnpOldTable20.Length);
return true;
}
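The table reader above is a small RLE decoder over the 19-symbol BD alphabet: values below 16 are literal lengths (delta-coded against the old table), 16 repeats the previous entry, and 17/18 emit zero runs of increasing size. As a compact sketch (hypothetical helper; n is the pre-read repeat count):

// Illustrative decoder for one RLE symbol of the code-length stream.
static void ApplyTableSymbol(byte[] table, ref int i, uint symbol, uint n, byte[] oldTable)
{
    if (symbol < 16)
    {
        table[i] = (byte)((symbol + oldTable[i]) & 0xF); // literal, delta-coded
        i++;
    }
    else if (symbol == 16)
    {
        while (n-- > 0 && i < table.Length) // repeat previous entry; needs i > 0
        {
            table[i] = table[i - 1];
            i++;
        }
    }
    else // 17 or 18
    {
        while (n-- > 0 && i < table.Length)
        {
            table[i++] = 0;
        }
    }
}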
private void ReadLastTables()
{
if (ReadTop >= Inp.InAddr + 5)


@@ -1,793 +0,0 @@
#if !Rar2017_64bit
#else
using nint = System.Int64;
using nuint = System.UInt64;
using size_t = System.UInt64;
#endif
//using static SharpCompress.Compressors.Rar.UnpackV2017.Unpack.Unpack30Local;
/*
namespace SharpCompress.Compressors.Rar.UnpackV2017
{
internal partial class Unpack
{
#if !RarV2017_RAR5ONLY
// We use it instead of direct PPM.DecodeChar call to be sure that
// we reset PPM structures in case of corrupt data. It is important,
// because these structures can be invalid after PPM.DecodeChar returned -1.
int SafePPMDecodeChar()
{
int Ch=PPM.DecodeChar();
if (Ch==-1) // Corrupt PPM data found.
{
PPM.CleanUp(); // Reset possibly corrupt PPM data structures.
UnpBlockType=BLOCK_LZ; // Set faster and more fail proof LZ mode.
}
return(Ch);
}
internal static class Unpack30Local {
public static readonly byte[] LDecode={0,1,2,3,4,5,6,7,8,10,12,14,16,20,24,28,32,40,48,56,64,80,96,112,128,160,192,224};
public static readonly byte[] LBits= {0,0,0,0,0,0,0,0,1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 5, 5, 5, 5};
public static readonly int[] DDecode = new int[DC];
public static readonly byte[] DBits = new byte[DC];
public static readonly int[] DBitLengthCounts= {4,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,14,0,12};
public static readonly byte[] SDDecode={0,4,8,16,32,64,128,192};
public static readonly byte[] SDBits= {2,2,3, 4, 5, 6, 6, 6};
}
void Unpack29(bool Solid)
{
uint Bits;
if (DDecode[1]==0)
{
int Dist=0,BitLength=0,Slot=0;
for (int I=0;I<DBitLengthCounts.Length;I++,BitLength++)
for (int J=0;J<DBitLengthCounts[I];J++,Slot++,Dist+=(1<<BitLength))
{
DDecode[Slot]=Dist;
DBits[Slot]=(byte)BitLength;
}
}
FileExtracted=true;
if (!Suspended)
{
UnpInitData(Solid);
if (!UnpReadBuf30())
return;
if ((!Solid || !TablesRead3) && !ReadTables30())
return;
}
while (true)
{
UnpPtr&=MaxWinMask;
if (Inp.InAddr>ReadBorder)
{
if (!UnpReadBuf30())
break;
}
if (((WrPtr-UnpPtr) & MaxWinMask)<260 && WrPtr!=UnpPtr)
{
UnpWriteBuf30();
if (WrittenFileSize>DestUnpSize)
return;
if (Suspended)
{
FileExtracted=false;
return;
}
}
if (UnpBlockType==BLOCK_PPM)
{
// Here speed is critical, so we do not use SafePPMDecodeChar,
// because sometimes even the inline function can introduce
// some additional penalty.
int Ch=PPM.DecodeChar();
if (Ch==-1) // Corrupt PPM data found.
{
PPM.CleanUp(); // Reset possibly corrupt PPM data structures.
UnpBlockType=BLOCK_LZ; // Set faster and more fail proof LZ mode.
break;
}
if (Ch==PPMEscChar)
{
int NextCh=SafePPMDecodeChar();
if (NextCh==0) // End of PPM encoding.
{
if (!ReadTables30())
break;
continue;
}
if (NextCh==-1) // Corrupt PPM data found.
break;
if (NextCh==2) // End of file in PPM mode.
break;
if (NextCh==3) // Read VM code.
{
if (!ReadVMCodePPM())
break;
continue;
}
if (NextCh==4) // LZ inside of PPM.
{
uint Distance=0,Length;
bool Failed=false;
for (int I=0;I<4 && !Failed;I++)
{
int _Ch=SafePPMDecodeChar();
if (_Ch==-1)
Failed=true;
else
if (I==3)
Length=(byte)_Ch;
else
Distance=(Distance<<8)+(byte)_Ch;
}
if (Failed)
break;
CopyString(Length+32,Distance+2);
continue;
}
if (NextCh==5) // One byte distance match (RLE) inside of PPM.
{
int Length=SafePPMDecodeChar();
if (Length==-1)
break;
CopyString((uint)(Length+4),1);
continue;
}
// If we are here, NextCh must be 1, what means that current byte
// is equal to our 'escape' byte, so we just store it to Window.
}
Window[UnpPtr++]=(byte)Ch;
continue;
}
uint Number=DecodeNumber(Inp,BlockTables.LD);
if (Number<256)
{
Window[UnpPtr++]=(byte)Number;
continue;
}
if (Number>=271)
{
uint Length=(uint)(LDecode[Number-=271]+3);
if ((Bits=LBits[Number])>0)
{
Length+=Inp.getbits()>>(int)(16-Bits);
Inp.addbits(Bits);
}
uint DistNumber=DecodeNumber(Inp,BlockTables.DD);
uint Distance=(uint)(DDecode[DistNumber]+1);
if ((Bits=DBits[DistNumber])>0)
{
if (DistNumber>9)
{
if (Bits>4)
{
Distance+=((Inp.getbits()>>(int)(20-Bits))<<4);
Inp.addbits(Bits-4);
}
if (LowDistRepCount>0)
{
LowDistRepCount--;
Distance+=(uint)PrevLowDist;
}
else
{
uint LowDist=DecodeNumber(Inp,BlockTables.LDD);
if (LowDist==16)
{
LowDistRepCount=(int)(LOW_DIST_REP_COUNT-1);
Distance+=(uint)PrevLowDist;
}
else
{
Distance+=LowDist;
PrevLowDist=(int)LowDist;
}
}
}
else
{
Distance+=Inp.getbits()>>(int)(16-Bits);
Inp.addbits(Bits);
}
}
if (Distance>=0x2000)
{
Length++;
if (Distance>=0x40000)
Length++;
}
InsertOldDist(Distance);
LastLength=Length;
CopyString(Length,Distance);
continue;
}
if (Number==256)
{
if (!ReadEndOfBlock())
break;
continue;
}
if (Number==257)
{
if (!ReadVMCode())
break;
continue;
}
if (Number==258)
{
if (LastLength!=0)
CopyString(LastLength,OldDist[0]);
continue;
}
if (Number<263)
{
uint DistNum=Number-259;
uint Distance=OldDist[DistNum];
for (uint I=DistNum;I>0;I--)
OldDist[I]=OldDist[I-1];
OldDist[0]=Distance;
uint LengthNumber=DecodeNumber(Inp,BlockTables.RD);
int Length=LDecode[LengthNumber]+2;
if ((Bits=LBits[LengthNumber])>0)
{
Length+=(int)(Inp.getbits()>>(int)(16-Bits));
Inp.addbits(Bits);
}
LastLength=(uint)Length;
CopyString((uint)Length,Distance);
continue;
}
if (Number<272)
{
uint Distance=(uint)(SDDecode[Number-=263]+1);
if ((Bits=SDBits[Number])>0)
{
Distance+=Inp.getbits()>>(int)(16-Bits);
Inp.addbits(Bits);
}
InsertOldDist(Distance);
LastLength=2;
CopyString(2,Distance);
continue;
}
}
UnpWriteBuf30();
}
// Return 'false' to quit unpacking the current file or 'true' to continue.
bool ReadEndOfBlock()
{
uint BitField=Inp.getbits();
bool NewTable,NewFile=false;
// "1" - no new file, new table just here.
// "00" - new file, no new table.
// "01" - new file, new table (in beginning of next file).
if ((BitField & 0x8000)!=0)
{
NewTable=true;
Inp.addbits(1);
}
else
{
NewFile=true;
NewTable=(BitField & 0x4000)!=0;
Inp.addbits(2);
}
TablesRead3=!NewTable;
// Quit immediately if "new file" flag is set. If "new table" flag
// is present, we'll read the table in beginning of next file
// based on 'TablesRead3' 'false' value.
if (NewFile)
return false;
return ReadTables30(); // Quit only if we failed to read tables.
}
bool ReadVMCode()
{
// Entire VM code is guaranteed to fully present in block defined
// by current Huffman table. Compressor checks that VM code does not cross
// Huffman block boundaries.
uint FirstByte=Inp.getbits()>>8;
Inp.addbits(8);
uint Length=(FirstByte & 7)+1;
if (Length==7)
{
Length=(Inp.getbits()>>8)+7;
Inp.addbits(8);
}
else
if (Length==8)
{
Length=Inp.getbits();
Inp.addbits(16);
}
if (Length==0)
return false;
Array<byte> VMCode(Length);
for (uint I=0;I<Length;I++)
{
// Try to read the new buffer if only one byte is left.
// But if we read all bytes except the last, one byte is enough.
if (Inp.InAddr>=ReadTop-1 && !UnpReadBuf30() && I<Length-1)
return false;
VMCode[I]=Inp.getbits()>>8;
Inp.addbits(8);
}
return AddVMCode(FirstByte,&VMCode[0],Length);
}
bool ReadVMCodePPM()
{
uint FirstByte=(uint)SafePPMDecodeChar();
if ((int)FirstByte==-1)
return false;
uint Length=(FirstByte & 7)+1;
if (Length==7)
{
int B1=SafePPMDecodeChar();
if (B1==-1)
return false;
Length=B1+7;
}
else
if (Length==8)
{
int B1=SafePPMDecodeChar();
if (B1==-1)
return false;
int B2=SafePPMDecodeChar();
if (B2==-1)
return false;
Length=B1*256+B2;
}
if (Length==0)
return false;
Array<byte> VMCode(Length);
for (uint I=0;I<Length;I++)
{
int Ch=SafePPMDecodeChar();
if (Ch==-1)
return false;
VMCode[I]=Ch;
}
return AddVMCode(FirstByte,&VMCode[0],Length);
}
bool AddVMCode(uint FirstByte,byte[] Code,int CodeSize)
{
VMCodeInp.InitBitInput();
//x memcpy(VMCodeInp.InBuf,Code,Min(BitInput.MAX_SIZE,CodeSize));
Array.Copy(Code, 0, VMCodeInp.InBuf, 0, Math.Min(BitInput.MAX_SIZE,CodeSize));
VM.Init();
uint FiltPos;
if ((FirstByte & 0x80)!=0)
{
FiltPos=RarVM.ReadData(VMCodeInp);
if (FiltPos==0)
InitFilters30(false);
else
FiltPos--;
}
else
FiltPos=(uint)this.LastFilter; // Use the same filter as last time.
if (FiltPos>Filters30.Count || FiltPos>OldFilterLengths.Count)
return false;
LastFilter=(int)FiltPos;
bool NewFilter=(FiltPos==Filters30.Count);
UnpackFilter30 StackFilter=new UnpackFilter30(); // New filter for PrgStack.
UnpackFilter30 Filter;
if (NewFilter) // New filter code, never used before since VM reset.
{
if (FiltPos>MAX3_UNPACK_FILTERS)
{
// Too many different filters, corrupt archive.
//delete StackFilter;
return false;
}
Filters30.Add(1);
Filters30[Filters30.Count-1]=Filter=new UnpackFilter30();
StackFilter.ParentFilter=(uint)(Filters30.Count-1);
// Reserve one item to store the data block length of our new filter
// entry. We'll set it to real block length below, after reading it.
// But we need to initialize it now, because when processing corrupt
// data, we can access this item even before we set it to real value.
OldFilterLengths.Add(0);
}
else // Filter was used in the past.
{
Filter=Filters30[(int)FiltPos];
StackFilter.ParentFilter=FiltPos;
}
int EmptyCount=0;
for (int I=0;I<PrgStack.Count;I++)
{
PrgStack[I-EmptyCount]=PrgStack[I];
if (PrgStack[I]==null)
EmptyCount++;
if (EmptyCount>0)
PrgStack[I]=null;
}
if (EmptyCount==0)
{
if (PrgStack.Count>MAX3_UNPACK_FILTERS)
{
//delete StackFilter;
return false;
}
PrgStack.Add(1);
EmptyCount=1;
}
size_t StackPos=(uint)(this.PrgStack.Count-EmptyCount);
PrgStack[(int)StackPos]=StackFilter;
uint BlockStart=RarVM.ReadData(VMCodeInp);
if ((FirstByte & 0x40)!=0)
BlockStart+=258;
StackFilter.BlockStart=(uint)((BlockStart+UnpPtr)&MaxWinMask);
if ((FirstByte & 0x20)!=0)
{
StackFilter.BlockLength=RarVM.ReadData(VMCodeInp);
// Store the last data block length for current filter.
OldFilterLengths[(int)FiltPos]=(int)StackFilter.BlockLength;
}
else
{
// Set the data block size to same value as the previous block size
// for same filter. It is possible for corrupt data to access a new
// and not filled yet item of OldFilterLengths array here. This is why
// we set new OldFilterLengths items to zero above.
StackFilter.BlockLength=FiltPos<OldFilterLengths.Count ? OldFilterLengths[(int)FiltPos]:0;
}
StackFilter.NextWindow=WrPtr!=UnpPtr && ((WrPtr-UnpPtr)&MaxWinMask)<=BlockStart;
// DebugLog("\nNextWindow: UnpPtr=%08x WrPtr=%08x BlockStart=%08x",UnpPtr,WrPtr,BlockStart);
memset(StackFilter.Prg.InitR,0,sizeof(StackFilter.Prg.InitR));
StackFilter.Prg.InitR[4]=StackFilter.BlockLength;
if ((FirstByte & 0x10)!=0) // Set registers to optional parameters if any.
{
uint InitMask=VMCodeInp.fgetbits()>>9;
VMCodeInp.faddbits(7);
for (int I=0;I<7;I++)
if ((InitMask & (1<<I)) != 0)
StackFilter.Prg.InitR[I]=RarVM.ReadData(VMCodeInp);
}
if (NewFilter)
{
uint VMCodeSize=RarVM.ReadData(VMCodeInp);
if (VMCodeSize>=0x10000 || VMCodeSize==0)
return false;
Array<byte> VMCode(VMCodeSize);
for (uint I=0;I<VMCodeSize;I++)
{
if (VMCodeInp.Overflow(3))
return false;
VMCode[I]=VMCodeInp.fgetbits()>>8;
VMCodeInp.faddbits(8);
}
VM.Prepare(&VMCode[0],VMCodeSize,&Filter->Prg);
}
StackFilter.Prg.Type=Filter.Prg.Type;
return true;
}
bool UnpReadBuf30()
{
int DataSize=ReadTop-Inp.InAddr; // Data left to process.
if (DataSize<0)
return false;
if (Inp.InAddr>BitInput.MAX_SIZE/2)
{
// If we already processed more than half of buffer, let's move
// remaining data into beginning to free more space for new data
// and ensure that calling function does not cross the buffer border
// even if we did not read anything here. Also it ensures that read size
// is not less than CRYPT_BLOCK_SIZE, so we can align it without risk
// to make it zero.
if (DataSize>0)
//x memmove(Inp.InBuf,Inp.InBuf+Inp.InAddr,DataSize);
Array.Copy(Inp.InBuf,Inp.InAddr,Inp.InBuf,0,DataSize);
Inp.InAddr=0;
ReadTop=DataSize;
}
else
DataSize=ReadTop;
int ReadCode=UnpIO_UnpRead(Inp.InBuf,DataSize,BitInput.MAX_SIZE-DataSize);
if (ReadCode>0)
ReadTop+=ReadCode;
ReadBorder=ReadTop-30;
return ReadCode!=-1;
}
void UnpWriteBuf30()
{
uint WrittenBorder=(uint)WrPtr;
uint WriteSize=(uint)((UnpPtr-WrittenBorder)&MaxWinMask);
for (int I=0;I<PrgStack.Count;I++)
{
// Here we apply filters to data which we need to write.
// We always copy data to virtual machine memory before processing.
// We cannot process them just in place in Window buffer, because
// these data can be used for future string matches, so we must
// preserve them in original form.
UnpackFilter30 flt=PrgStack[I];
if (flt==null)
continue;
if (flt.NextWindow)
{
flt.NextWindow=false;
continue;
}
uint BlockStart=flt.BlockStart;
uint BlockLength=flt.BlockLength;
if (((BlockStart-WrittenBorder)&MaxWinMask)<WriteSize)
{
if (WrittenBorder!=BlockStart)
{
UnpWriteArea(WrittenBorder,BlockStart);
WrittenBorder=BlockStart;
WriteSize=(uint)((UnpPtr-WrittenBorder)&MaxWinMask);
}
if (BlockLength<=WriteSize)
{
uint BlockEnd=(BlockStart+BlockLength)&MaxWinMask;
if (BlockStart<BlockEnd || BlockEnd==0)
VM.SetMemory(0,Window+BlockStart,BlockLength);
else
{
uint FirstPartLength=uint(MaxWinSize-BlockStart);
VM.SetMemory(0,Window+BlockStart,FirstPartLength);
VM.SetMemory(FirstPartLength,Window,BlockEnd);
}
VM_PreparedProgram *ParentPrg=&Filters30[flt->ParentFilter]->Prg;
VM_PreparedProgram *Prg=&flt->Prg;
ExecuteCode(Prg);
byte[] FilteredData=Prg.FilteredData;
uint FilteredDataSize=Prg.FilteredDataSize;
delete PrgStack[I];
PrgStack[I]=null;
while (I+1<PrgStack.Count)
{
UnpackFilter30 NextFilter=PrgStack[I+1];
// It is required to check NextWindow here.
if (NextFilter==null || NextFilter.BlockStart!=BlockStart ||
NextFilter.BlockLength!=FilteredDataSize || NextFilter.NextWindow)
break;
// Apply several filters to same data block.
VM.SetMemory(0,FilteredData,FilteredDataSize);
VM_PreparedProgram *ParentPrg=&Filters30[NextFilter.ParentFilter]->Prg;
VM_PreparedProgram *NextPrg=&NextFilter->Prg;
ExecuteCode(NextPrg);
FilteredData=NextPrg.FilteredData;
FilteredDataSize=NextPrg.FilteredDataSize;
I++;
delete PrgStack[I];
PrgStack[I]=null;
}
UnpIO_UnpWrite(FilteredData,0,FilteredDataSize);
UnpSomeRead=true;
WrittenFileSize+=FilteredDataSize;
WrittenBorder=BlockEnd;
WriteSize=(uint)((UnpPtr-WrittenBorder)&MaxWinMask);
}
else
{
// Current filter intersects the window write border, so we adjust
// the window border to process this filter next time, not now.
for (int J=I;J<PrgStack.Count;J++)
{
UnpackFilter30 flt=PrgStack[J];
if (flt!=null && flt.NextWindow)
flt.NextWindow=false;
}
WrPtr=WrittenBorder;
return;
}
}
}
UnpWriteArea(WrittenBorder,UnpPtr);
WrPtr=UnpPtr;
}
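// Sketch of the wrap-around copy UnpWriteBuf30 performs when a filter block
// crosses the end of the circular window (illustrative helper, not part of
// the port): one copy covers the tail of the window, a second covers the
// wrapped head.
static void CopyWrappedBlock(byte[] window, uint winSize, uint blockStart, uint blockLength, byte[] dest)
{
    var blockEnd = (blockStart + blockLength) % winSize;
    if (blockStart < blockEnd || blockEnd == 0)
    {
        Array.Copy(window, blockStart, dest, 0, blockLength);  // contiguous block
    }
    else
    {
        var firstPart = winSize - blockStart;                  // up to window end
        Array.Copy(window, blockStart, dest, 0, firstPart);
        Array.Copy(window, 0, dest, firstPart, blockEnd);      // wrapped remainder
    }
}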
void ExecuteCode(VM_PreparedProgram Prg)
{
Prg.InitR[6]=(uint)WrittenFileSize;
VM.Execute(Prg);
}
bool ReadTables30()
{
byte[] BitLength = new byte[BC];
byte[] Table = new byte[HUFF_TABLE_SIZE30];
if (Inp.InAddr>ReadTop-25)
if (!UnpReadBuf30())
return false;
Inp.faddbits((uint)((8-Inp.InBit)&7));
uint BitField=Inp.fgetbits();
if ((BitField & 0x8000) != 0)
{
UnpBlockType=BLOCK_PPM;
return PPM.DecodeInit(this,PPMEscChar);
}
UnpBlockType=BLOCK_LZ;
PrevLowDist=0;
LowDistRepCount=0;
if ((BitField & 0x4000) == 0)
Utility.Memset(UnpOldTable,0,UnpOldTable.Length);
Inp.faddbits(2);
for (uint I=0;I<BC;I++)
{
uint Length=(byte)(Inp.fgetbits() >> 12);
Inp.faddbits(4);
if (Length==15)
{
uint ZeroCount=(byte)(Inp.fgetbits() >> 12);
Inp.faddbits(4);
if (ZeroCount==0)
BitLength[I]=15;
else
{
ZeroCount+=2;
while (ZeroCount-- > 0 && I<BitLength.Length)
BitLength[I++]=0;
I--;
}
}
else
BitLength[I]=(byte)Length;
}
MakeDecodeTables(BitLength,0,BlockTables.BD,BC30);
const uint TableSize=HUFF_TABLE_SIZE30;
for (uint I=0;I<TableSize;)
{
if (Inp.InAddr>ReadTop-5)
if (!UnpReadBuf30())
return false;
uint Number=DecodeNumber(Inp,BlockTables.BD);
if (Number<16)
{
Table[I]=(byte)((Number+this.UnpOldTable[I]) & 0xf);
I++;
}
else
if (Number<18)
{
uint N;
if (Number==16)
{
N=(Inp.fgetbits() >> 13)+3;
Inp.faddbits(3);
}
else
{
N=(Inp.fgetbits() >> 9)+11;
Inp.faddbits(7);
}
if (I==0)
return false; // We cannot have "repeat previous" code at the first position.
else
while (N-- > 0 && I<TableSize)
{
Table[I]=Table[I-1];
I++;
}
}
else
{
uint N;
if (Number==18)
{
N=(Inp.fgetbits() >> 13)+3;
Inp.faddbits(3);
}
else
{
N=(Inp.fgetbits() >> 9)+11;
Inp.faddbits(7);
}
while (N-- > 0 && I<TableSize)
Table[I++]=0;
}
}
TablesRead3=true;
if (Inp.InAddr>ReadTop)
return false;
MakeDecodeTables(Table,0,BlockTables.LD,NC30);
MakeDecodeTables(Table,(int)NC30,BlockTables.DD,DC30);
MakeDecodeTables(Table,(int)(NC30+DC30),BlockTables.LDD,LDC30);
MakeDecodeTables(Table,(int)(NC30+DC30+LDC30),BlockTables.RD,RC30);
//x memcpy(UnpOldTable,Table,sizeof(UnpOldTable));
Array.Copy(Table,0,UnpOldTable,0,UnpOldTable.Length);
return true;
}
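// The table loop above is a small RLE scheme over Huffman code lengths:
// values 0..15 are literal lengths (the real code also adds the previous
// block's table entry mod 16), 16/17 repeat the previous length N times,
// and 18/19 emit N zeros. A compacted sketch of just the expansion step
// (illustrative; 'next' and 'readRun' are hypothetical stand-ins for
// DecodeNumber and the extra-bit reads):
static bool ExpandLengths(byte[] table, Func<uint> next, Func<uint, uint> readRun)
{
    for (uint i = 0; i < table.Length; )
    {
        var num = next();
        if (num < 16)
        {
            table[i++] = (byte)num;            // literal code length
        }
        else if (num < 18)                     // 16/17: repeat previous length
        {
            if (i == 0)
                return false;                  // no previous length to repeat
            for (var n = readRun(num); n-- > 0 && i < table.Length; i++)
                table[i] = table[i - 1];
        }
        else                                   // 18/19: run of zero lengths
        {
            for (var n = readRun(num); n-- > 0 && i < table.Length; )
                table[i++] = 0;
        }
    }
    return true;
}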
#endif
void UnpInitData30(bool Solid)
{
if (!Solid)
{
TablesRead3=false;
Utility.Memset(UnpOldTable, 0, UnpOldTable.Length);
PPMEscChar=2;
UnpBlockType=BLOCK_LZ;
}
InitFilters30(Solid);
}
void InitFilters30(bool Solid)
{
if (!Solid)
{
//OldFilterLengths.SoftReset();
OldFilterLengths.Clear();
LastFilter=0;
//for (size_t I=0;I<Filters30.Count;I++)
// delete Filters30[I];
//Filters30.SoftReset();
Filters30.Clear();
}
//for (size_t I=0;I<PrgStack.Count;I++)
// delete PrgStack[I];
//PrgStack.SoftReset();
PrgStack.Clear();
}
}
}
*/


@@ -222,6 +222,180 @@ internal partial class Unpack
UnpWriteBuf();
}
private async System.Threading.Tasks.Task Unpack5Async(
bool Solid,
System.Threading.CancellationToken cancellationToken = default
)
{
FileExtracted = true;
if (!Suspended)
{
UnpInitData(Solid);
if (!await UnpReadBufAsync(cancellationToken).ConfigureAwait(false))
{
return;
}
if (
!ReadBlockHeader(Inp, ref BlockHeader)
|| !ReadTables(Inp, ref BlockHeader, ref BlockTables)
|| !TablesRead5
)
{
return;
}
}
while (true)
{
UnpPtr &= MaxWinMask;
if (Inp.InAddr >= ReadBorder)
{
var FileDone = false;
while (
Inp.InAddr > BlockHeader.BlockStart + BlockHeader.BlockSize - 1
|| Inp.InAddr == BlockHeader.BlockStart + BlockHeader.BlockSize - 1
&& Inp.InBit >= BlockHeader.BlockBitSize
)
{
if (BlockHeader.LastBlockInFile)
{
FileDone = true;
break;
}
if (
!ReadBlockHeader(Inp, ref BlockHeader)
|| !ReadTables(Inp, ref BlockHeader, ref BlockTables)
)
{
return;
}
}
if (FileDone || !await UnpReadBufAsync(cancellationToken).ConfigureAwait(false))
{
break;
}
}
if (((WriteBorder - UnpPtr) & MaxWinMask) < MAX_LZ_MATCH + 3 && WriteBorder != UnpPtr)
{
await UnpWriteBufAsync(cancellationToken).ConfigureAwait(false);
if (WrittenFileSize > DestUnpSize)
{
return;
}
}
uint MainSlot = DecodeNumber(Inp, BlockTables.LD);
if (MainSlot < 256)
{
if (Fragmented)
{
FragWindow[UnpPtr++] = (byte)MainSlot;
}
else
{
Window[UnpPtr++] = (byte)MainSlot;
}
continue;
}
if (MainSlot >= 262)
{
uint Length = SlotToLength(Inp, MainSlot - 262);
uint DBits,
Distance = 1,
DistSlot = DecodeNumber(Inp, BlockTables.DD);
if (DistSlot < 4)
{
DBits = 0;
Distance += DistSlot;
}
else
{
DBits = (DistSlot / 2) - 1;
Distance += (2 | (DistSlot & 1)) << (int)DBits;
}
if (DBits > 0)
{
if (DBits >= 4)
{
if (DBits > 4)
{
Distance += ((Inp.getbits() >> (int)(20 - DBits)) << 4);
Inp.addbits(DBits - 4);
}
uint LowDist = DecodeNumber(Inp, BlockTables.LDD);
Distance += LowDist;
}
else
{
Distance += Inp.getbits() >> (int)(16 - DBits);
Inp.addbits(DBits);
}
}
if (Distance > 0x100)
{
Length++;
if (Distance > 0x2000)
{
Length++;
if (Distance > 0x40000)
{
Length++;
}
}
}
InsertOldDist(Distance);
LastLength = Length;
CopyString(Length, Distance);
continue;
}
if (MainSlot == 256)
{
var Filter = new UnpackFilter();
if (!ReadFilter(Inp, Filter) || !AddFilter(Filter))
{
break;
}
continue;
}
if (MainSlot == 257)
{
if (LastLength != 0)
{
CopyString((uint)LastLength, (uint)OldDist[0]);
}
continue;
}
if (MainSlot < 262)
{
uint DistNum = MainSlot - 258;
uint Distance = (uint)OldDist[(int)DistNum];
for (var I = (int)DistNum; I > 0; I--)
{
OldDist[I] = OldDist[I - 1];
}
OldDist[0] = Distance;
uint LengthSlot = DecodeNumber(Inp, BlockTables.RD);
uint Length = SlotToLength(Inp, LengthSlot);
LastLength = Length;
CopyString(Length, Distance);
continue;
}
}
await UnpWriteBufAsync(cancellationToken).ConfigureAwait(false);
}
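// Worked check of the distance decoding above (illustrative, not part of
// the port): slots 0..3 map directly to distances 1..4; for larger slots,
// DBits = DistSlot/2 - 1 and the base distance is
// 1 + ((2 | (DistSlot & 1)) << DBits), so e.g. DistSlot = 9 gives DBits = 3
// and a base of 1 + (3 << 3) = 25, with the extra bits then selecting among
// distances 25..32.
private static uint BaseDistance(uint distSlot)
{
    if (distSlot < 4)
        return distSlot + 1;
    var dBits = (int)(distSlot / 2) - 1;
    return 1u + ((2u | (distSlot & 1)) << dBits);
}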
private uint ReadFilterData(BitInput Inp)
{
var ByteCount = (Inp.fgetbits() >> 14) + 1;
@@ -339,6 +513,58 @@ internal partial class Unpack
return ReadCode != -1;
}
private async System.Threading.Tasks.Task<bool> UnpReadBufAsync(
System.Threading.CancellationToken cancellationToken = default
)
{
var DataSize = ReadTop - Inp.InAddr; // Data left to process.
if (DataSize < 0)
{
return false;
}
BlockHeader.BlockSize -= Inp.InAddr - BlockHeader.BlockStart;
if (Inp.InAddr > MAX_SIZE / 2)
{
if (DataSize > 0)
{
Buffer.BlockCopy(Inp.InBuf, Inp.InAddr, Inp.InBuf, 0, DataSize);
}
Inp.InAddr = 0;
ReadTop = DataSize;
}
else
{
DataSize = ReadTop;
}
var ReadCode = 0;
if (MAX_SIZE != DataSize)
{
ReadCode = await UnpIO_UnpReadAsync(
Inp.InBuf,
DataSize,
MAX_SIZE - DataSize,
cancellationToken
)
.ConfigureAwait(false);
}
if (ReadCode > 0) // Can be also -1.
{
ReadTop += ReadCode;
}
ReadBorder = ReadTop - 30;
BlockHeader.BlockStart = Inp.InAddr;
if (BlockHeader.BlockSize != -1) // '-1' means not defined yet.
{
ReadBorder = Math.Min(ReadBorder, BlockHeader.BlockStart + BlockHeader.BlockSize - 1);
}
return ReadCode != -1;
}
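// Sketch of the border bookkeeping above (illustrative): after a refill,
// the bit reader must stop either 30 bytes short of ReadTop (so fgetbits
// never reads past the valid data) or at the last byte of the current
// block, whichever comes first; '-1' marks a block size not yet known.
private static int ComputeReadBorder(int readTop, int blockStart, int blockSize) =>
    blockSize == -1 ? readTop - 30 : Math.Min(readTop - 30, blockStart + blockSize - 1);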
private void UnpWriteBuf()
{
var WrittenBorder = WrPtr;
@@ -413,7 +639,7 @@ internal partial class Unpack
else
//x memcpy(Mem,Window+BlockStart,BlockLength);
{
Utility.Copy(Window, BlockStart, Mem, 0, BlockLength);
Buffer.BlockCopy(Window, (int)BlockStart, Mem, 0, (int)BlockLength);
}
}
else
@@ -427,9 +653,21 @@ internal partial class Unpack
else
{
//x memcpy(Mem,Window+BlockStart,FirstPartLength);
Utility.Copy(Window, BlockStart, Mem, 0, FirstPartLength);
Buffer.BlockCopy(
Window,
(int)BlockStart,
Mem,
0,
(int)FirstPartLength
);
//x memcpy(Mem+FirstPartLength,Window,BlockEnd);
Utility.Copy(Window, 0, Mem, FirstPartLength, BlockEnd);
Buffer.BlockCopy(
Window,
0,
Mem,
(int)FirstPartLength,
(int)BlockEnd
);
}
}
@@ -521,6 +759,163 @@ internal partial class Unpack
}
}
private async System.Threading.Tasks.Task UnpWriteBufAsync(
System.Threading.CancellationToken cancellationToken = default
)
{
var WrittenBorder = WrPtr;
var FullWriteSize = (UnpPtr - WrittenBorder) & MaxWinMask;
var WriteSizeLeft = FullWriteSize;
var NotAllFiltersProcessed = false;
for (var I = 0; I < Filters.Count; I++)
{
var flt = Filters[I];
if (flt.Type == FILTER_NONE)
{
continue;
}
if (flt.NextWindow)
{
if (((flt.BlockStart - WrPtr) & MaxWinMask) <= FullWriteSize)
{
flt.NextWindow = false;
}
continue;
}
var BlockStart = flt.BlockStart;
var BlockLength = flt.BlockLength;
if (((BlockStart - WrittenBorder) & MaxWinMask) < WriteSizeLeft)
{
if (WrittenBorder != BlockStart)
{
await UnpWriteAreaAsync(WrittenBorder, BlockStart, cancellationToken)
.ConfigureAwait(false);
WrittenBorder = BlockStart;
WriteSizeLeft = (UnpPtr - WrittenBorder) & MaxWinMask;
}
if (BlockLength <= WriteSizeLeft)
{
if (BlockLength > 0)
{
var BlockEnd = (BlockStart + BlockLength) & MaxWinMask;
FilterSrcMemory = EnsureCapacity(
FilterSrcMemory,
checked((int)BlockLength)
);
var Mem = FilterSrcMemory;
if (BlockStart < BlockEnd || BlockEnd == 0)
{
if (Fragmented)
{
FragWindow.CopyData(Mem, 0, BlockStart, BlockLength);
}
else
{
Buffer.BlockCopy(Window, (int)BlockStart, Mem, 0, (int)BlockLength);
}
}
else
{
var FirstPartLength = MaxWinSize - BlockStart;
if (Fragmented)
{
FragWindow.CopyData(Mem, 0, BlockStart, FirstPartLength);
FragWindow.CopyData(Mem, FirstPartLength, 0, BlockEnd);
}
else
{
Buffer.BlockCopy(
Window,
(int)BlockStart,
Mem,
0,
(int)FirstPartLength
);
Buffer.BlockCopy(
Window,
0,
Mem,
(int)FirstPartLength,
(int)BlockEnd
);
}
}
var OutMem = ApplyFilter(Mem, BlockLength, flt);
Filters[I].Type = FILTER_NONE;
if (OutMem != null)
{
await UnpIO_UnpWriteAsync(OutMem, 0, BlockLength, cancellationToken)
.ConfigureAwait(false);
WrittenFileSize += BlockLength;
}
WrittenBorder = BlockEnd;
WriteSizeLeft = (UnpPtr - WrittenBorder) & MaxWinMask;
}
}
else
{
NotAllFiltersProcessed = true;
for (var J = I; J < Filters.Count; J++)
{
var fltj = Filters[J];
if (
fltj.Type != FILTER_NONE
&& fltj.NextWindow == false
&& ((fltj.BlockStart - WrPtr) & MaxWinMask) < FullWriteSize
)
{
fltj.NextWindow = true;
}
}
break;
}
}
}
var EmptyCount = 0;
for (var I = 0; I < Filters.Count; I++)
{
if (EmptyCount > 0)
{
Filters[I - EmptyCount] = Filters[I];
}
if (Filters[I].Type == FILTER_NONE)
{
EmptyCount++;
}
}
if (EmptyCount > 0)
{
Filters.RemoveRange(Filters.Count - EmptyCount, EmptyCount);
}
if (!NotAllFiltersProcessed)
{
await UnpWriteAreaAsync(WrittenBorder, UnpPtr, cancellationToken).ConfigureAwait(false);
WrPtr = UnpPtr;
}
WriteBorder = (UnpPtr + Math.Min(MaxWinSize, UNPACK_MAX_WRITE)) & MaxWinMask;
if (
WriteBorder == UnpPtr
|| WrPtr != UnpPtr
&& ((WrPtr - UnpPtr) & MaxWinMask) < ((WriteBorder - UnpPtr) & MaxWinMask)
)
{
WriteBorder = WrPtr;
}
}
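// The tail of the method above compacts the filter list in place: live
// entries are shifted left over consumed (FILTER_NONE) slots, then the tail
// is trimmed with a single RemoveRange. Standalone sketch of the same
// pattern (illustrative):
private static void Compact<T>(System.Collections.Generic.List<T> items, Predicate<T> isDead)
{
    var dead = 0;
    for (var i = 0; i < items.Count; i++)
    {
        if (dead > 0)
            items[i - dead] = items[i];      // shift live entry left
        if (isDead(items[i]))
            dead++;                          // items[i] still holds the original
    }
    if (dead > 0)
        items.RemoveRange(items.Count - dead, dead);
}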
private byte[] ApplyFilter(byte[] __d, uint DataSize, UnpackFilter Flt)
{
var Data = 0;
@@ -652,6 +1047,48 @@ internal partial class Unpack
}
}
private async System.Threading.Tasks.Task UnpWriteAreaAsync(
size_t StartPtr,
size_t EndPtr,
System.Threading.CancellationToken cancellationToken = default
)
{
if (EndPtr != StartPtr)
{
UnpSomeRead = true;
}
if (EndPtr < StartPtr)
{
UnpAllBuf = true;
}
if (Fragmented)
{
var SizeToWrite = (EndPtr - StartPtr) & MaxWinMask;
while (SizeToWrite > 0)
{
var BlockSize = FragWindow.GetBlockSize(StartPtr, SizeToWrite);
FragWindow.GetBuffer(StartPtr, out var __buffer, out var __offset);
await UnpWriteDataAsync(__buffer, __offset, BlockSize, cancellationToken)
.ConfigureAwait(false);
SizeToWrite -= BlockSize;
StartPtr = (StartPtr + BlockSize) & MaxWinMask;
}
}
else if (EndPtr < StartPtr)
{
await UnpWriteDataAsync(Window, StartPtr, MaxWinSize - StartPtr, cancellationToken)
.ConfigureAwait(false);
await UnpWriteDataAsync(Window, 0, EndPtr, cancellationToken).ConfigureAwait(false);
}
else
{
await UnpWriteDataAsync(Window, StartPtr, EndPtr - StartPtr, cancellationToken)
.ConfigureAwait(false);
}
}
private void UnpWriteData(byte[] Data, size_t offset, size_t Size)
{
if (WrittenFileSize >= DestUnpSize)
@@ -670,6 +1107,29 @@ internal partial class Unpack
WrittenFileSize += Size;
}
private async System.Threading.Tasks.Task UnpWriteDataAsync(
byte[] Data,
size_t offset,
size_t Size,
System.Threading.CancellationToken cancellationToken = default
)
{
if (WrittenFileSize >= DestUnpSize)
{
return;
}
var WriteSize = Size;
var LeftToWrite = DestUnpSize - WrittenFileSize;
if (WriteSize > LeftToWrite)
{
WriteSize = (size_t)LeftToWrite;
}
await UnpIO_UnpWriteAsync(Data, offset, WriteSize, cancellationToken).ConfigureAwait(false);
WrittenFileSize += Size;
}
private void UnpInitData50(bool Solid)
{
if (!Solid)


@@ -1,16 +1,11 @@
#nullable disable
using System;
using System.Buffers;
using SharpCompress.Common;
using static SharpCompress.Compressors.Rar.UnpackV2017.PackDef;
using static SharpCompress.Compressors.Rar.UnpackV2017.UnpackGlobal;
#if !Rar2017_64bit
using size_t = System.UInt32;
#else
using nint = System.Int64;
using nuint = System.UInt64;
using size_t = System.UInt64;
#endif
namespace SharpCompress.Compressors.Rar.UnpackV2017;
@@ -42,11 +37,9 @@ internal sealed partial class Unpack : BitInput
// It prevents crash if first DoUnpack call is later made with wrong
// (true) 'Solid' value.
UnpInitData(false);
#if !RarV2017_SFX_MODULE
// RAR 1.5 decompression initialization
UnpInitData15(false);
InitHuff();
#endif
}
// later: may need Dispose() if we support thread pool
@@ -110,7 +103,7 @@ internal sealed partial class Unpack : BitInput
throw new InvalidFormatException("Grow && Fragmented");
}
var NewWindow = Fragmented ? null : new byte[WinSize];
var NewWindow = Fragmented ? null : ArrayPool<byte>.Shared.Rent((int)WinSize);
if (NewWindow == null)
{
@@ -126,6 +119,7 @@ internal sealed partial class Unpack : BitInput
if (Window != null) // If allocated by preceding files.
{
//free(Window);
ArrayPool<byte>.Shared.Return(Window);
Window = null;
}
FragWindow.Init(WinSize);
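// Note on the ArrayPool change above: Rent may return an array larger than
// the requested size, so window arithmetic must keep using WinSize rather
// than Window.Length, and each rented buffer must be returned exactly once.
// Minimal pattern under those assumptions (illustrative, not part of the diff):
private static void WithRentedWindow(uint winSize, Action<byte[]> use)
{
    var window = System.Buffers.ArrayPool<byte>.Shared.Rent((int)winSize);
    try
    {
        use(window);                 // use window[0 .. winSize-1] only
    }
    finally
    {
        System.Buffers.ArrayPool<byte>.Shared.Return(window);
    }
}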
@@ -170,7 +164,6 @@ internal sealed partial class Unpack : BitInput
// just for extra safety.
switch (Method)
{
#if !RarV2017_SFX_MODULE
case 15: // rar 1.5 compression
if (!Fragmented)
{
@@ -186,33 +179,64 @@ internal sealed partial class Unpack : BitInput
}
break;
#endif
#if !RarV2017_RAR5ONLY
case 29: // rar 3.x compression
if (!Fragmented)
{
throw new NotImplementedException();
}
break;
case 50: // RAR 5.0 compression algorithm.
Unpack5(Solid);
break;
#if !Rar2017_NOSTRICT
default:
throw new InvalidFormatException("unknown compression method " + Method);
#endif
}
}
private async System.Threading.Tasks.Task DoUnpackAsync(
uint Method,
bool Solid,
System.Threading.CancellationToken cancellationToken = default
)
{
// Methods <50 will crash in Fragmented mode when accessing NULL Window.
// They cannot be called in such mode now, but we check it below anyway
// just for extra safety.
switch (Method)
{
#if !RarV2017_SFX_MODULE
case 15: // rar 1.5 compression
if (!Fragmented)
{
await Unpack15Async(Solid, cancellationToken).ConfigureAwait(false);
}
break;
case 20: // rar 2.x compression
case 26: // files larger than 2GB
if (!Fragmented)
{
await Unpack20Async(Solid, cancellationToken).ConfigureAwait(false);
}
break;
#endif
#if !RarV2017_RAR5ONLY
case 29: // rar 3.x compression
if (!Fragmented)
{
// TODO: Create Unpack29Async when ready
throw new NotImplementedException();
}
break;
#endif
case 50: // RAR 5.0 compression algorithm.
/*#if RarV2017_RAR_SMP
if (MaxUserThreads > 1)
{
// We do not use the multithreaded unpack routine to repack RAR archives
// in 'suspended' mode, because unlike the single threaded code it can
// write more than one dictionary for same loop pass. So we would need
// larger buffers of unknown size. Also we do not support multithreading
// in fragmented window mode.
if (!Fragmented)
{
Unpack5MT(Solid);
break;
}
}
#endif*/
Unpack5(Solid);
// RAR 5.0 has full async support via UnpReadBufAsync and UnpWriteBufAsync
await Unpack5Async(Solid, cancellationToken).ConfigureAwait(false);
break;
#if !Rar2017_NOSTRICT
default:

Some files were not shown because too many files have changed in this diff