mirror of
https://github.com/adamhathcock/sharpcompress.git
synced 2026-02-08 05:27:04 +00:00
Bug: Memory exhaustion when auto-detecting a specific tar.lz archive #724
Originally created by @IDXGI on GitHub (Nov 17, 2025).
Originally assigned to: @Copilot on GitHub.
Summary
When reading a specific `.tar.lz` file without providing an extension hint, the library attempts to auto-detect the format. This process incorrectly identifies the file as a `Tar` archive with a `LongLink` header, leading to an attempt to allocate a massive amount of memory (e.g., 20 GB). This causes the application to either crash or fail to open the archive. Standard compression utilities can open this same file without any issues.

The root cause appears to be a lack of validation in `TarHeader.Read()` and its helper methods.

Steps to Reproduce
1. Attempt to read the specific `.tar.lz` file.
2. Do not set `ReaderOptions.ExtensionHint`, forcing the library to auto-detect the archive type.

Root Cause Analysis
The problem occurs because the auto-detection mechanism first tries to parse the file as a standard `Tar` archive. My file is a `.tar.lz`, but a byte at a specific offset is misinterpreted.

In `TarHeader.Read()`, the code enters a loop to process headers. For my specific file, the byte at offset 157 (read as `entryType`) happens to match `EntryType.LongLink`. This triggers a call to `TarHeader.ReadLongName()`.

Inside `ReadLongName()`, the `ReadSize(buffer)` method calculates an extremely large value for `nameLength` based on the misinterpreted header data. The subsequent call to `reader.ReadBytes(nameLength)` attempts to allocate a massive array without any sanity checks.
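The two failure steps described above (a coincidental `LongLink` typeflag, then an unvalidated size field) can be sketched in a few lines. This is an illustrative Python model of the ustar header layout, not SharpCompress code; `MAX_NAME_LENGTH` is a hypothetical sanity cap of the kind a fix might add. (In the 0-based ustar layout the typeflag byte sits at offset 156; the issue's "offset 157" presumably counts the same byte from 1.)

```python
# Illustrative sketch (not SharpCompress code): why random compressed bytes
# can pass for a GNU LongLink header and yield a multi-gigabyte allocation.
# Offsets follow the standard ustar layout; MAX_NAME_LENGTH is hypothetical.

TYPEFLAG_OFFSET = 156            # ustar typeflag byte (0-based offset)
SIZE_OFFSET, SIZE_LEN = 124, 12  # ustar size field: octal ASCII digits
LONG_LINK = ord("L")             # GNU typeflag 'L': next block is a long name
MAX_NAME_LENGTH = 32 * 1024      # hypothetical upper bound for a long name

def parse_octal(field: bytes) -> int:
    """Parse a NUL/space-terminated octal field, as naive tar readers do."""
    digits = field.split(b"\0")[0].strip()
    return int(digits, 8) if digits else 0

def long_name_length(header: bytes, checked: bool = False) -> int:
    """Return the long-name size a reader would try to allocate."""
    if header[TYPEFLAG_OFFSET] != LONG_LINK:
        raise ValueError("not a LongLink entry")
    size = parse_octal(header[SIZE_OFFSET:SIZE_OFFSET + SIZE_LEN])
    if checked and not (0 <= size <= MAX_NAME_LENGTH):
        raise ValueError(f"implausible long-name length: {size}")
    return size

# 512 bytes of "compressed noise" that happens to carry 'L' at the typeflag
# offset and octal digits in the size field:
noise = bytearray(512)
noise[TYPEFLAG_OFFSET] = LONG_LINK
noise[SIZE_OFFSET:SIZE_OFFSET + SIZE_LEN] = b"230000000000"  # ~20 GB in octal

print(long_name_length(bytes(noise)))  # 20401094656 (~19 GiB)
```

With `checked=True` the same garbage header is rejected immediately instead of being handed to an eager allocation, which is essentially the validation the issue asks `ReadSize()`/`ReadLongName()` to perform.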
`BinaryReader.ReadBytes()` directly allocates memory based on the provided `count`.

Stream Corruption
After the `Tar` parsing attempt fails (likely due to an `EndOfStreamException` or an I/O error from `Stream.ReadAtLeast()`), the underlying `Stream` or `SharpCompressStream` appears to be left in a corrupted state.

When the auto-detection logic proceeds to the correct `tar.lz` format, it fails to read the header correctly. For example, it does not see the "LZIP" magic bytes at the beginning of the stream, even though debugging shows the bytes are present in the buffer. This strongly suggests that the stream's internal position or state has been irrecoverably altered by the failed read attempt.

Workaround
The issue can be avoided by explicitly setting `ReaderOptions.ExtensionHint` to guide the parser. This skips the problematic `Tar` auto-detection step. However, most users would expect the auto-detection to be robust and would not think to set this option unless they had investigated the source code.
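The stream-corruption behaviour described above suggests the general shape of a fix: snapshot the stream position before each detection probe and restore it afterwards, so a failed `Tar` probe cannot hide the "LZIP" magic bytes from the next probe. Below is a minimal Python sketch of that pattern; it is illustrative only and does not reflect the library's actual detection code.

```python
import io

def probe_tar(stream) -> bool:
    """Stand-in for a tar probe that consumes bytes and then fails."""
    stream.read(512)  # reads past the would-be header before giving up
    raise EOFError("not a valid tar header")

def probe_lzip(stream) -> bool:
    """Stand-in for an lzip probe: check the 4-byte 'LZIP' magic."""
    return stream.read(4) == b"LZIP"

def detect(stream):
    """Try each probe in turn, always rewinding so later probes see byte 0."""
    for name, probe in (("tar", probe_tar), ("lzip", probe_lzip)):
        start = stream.tell()
        try:
            if probe(stream):
                return name
        except EOFError:
            pass                  # a failed probe must not poison the stream
        finally:
            stream.seek(start)    # rewind even when the probe throws
    return None

data = io.BytesIO(b"LZIP" + bytes(60))
print(detect(data))  # lzip
```

For non-seekable sources the same effect needs a buffering wrapper that can replay the probed bytes, which is presumably the role `SharpCompressStream` is meant to play.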
@adamhathcock commented on GitHub (Nov 19, 2025):
Please make a PR or provide this crafted file.
@adamhathcock commented on GitHub (Nov 19, 2025):
Let me know if the linked issue seems like it solves your problem. Seems valid to me
@IDXGI commented on GitHub (Nov 19, 2025):
The fix looks feasible. Thanks!
Regarding the stream corruption issue mentioned earlier, I found the root cause and have described it in a new issue.