mirror of
https://github.com/adamhathcock/sharpcompress.git
synced 2026-02-04 05:25:00 +00:00
[PR #1024] [MERGED] Fix memory exhaustion in TAR header auto-detection #1447
📋 Pull Request Information
Original PR: https://github.com/adamhathcock/sharpcompress/pull/1024
Author: @Copilot
Created: 11/19/2025
Status: ✅ Merged
Merged: 11/19/2025
Merged by: @adamhathcock
Base: master ← Head: copilot/fix-memory-exhaustion-bug

📝 Commits (2)

- 0698031 Initial plan
- 7b06652 Add validation to prevent memory exhaustion in TAR long name headers

📊 Changes
3 files changed (+69 additions, -1 deletions)
- .gitignore (+2 -1)
- src/SharpCompress/Common/Tar/Headers/TarHeader.cs (+13 -0)
- tests/SharpCompress.Test/Tar/TarReaderTests.cs (+54 -0)

📄 Description
During auto-detection without extension hints, random bytes in compressed files (e.g., tar.lz) can be misinterpreted as TAR LongName/LongLink headers with multi-gigabyte sizes, causing memory exhaustion.

Changes
- Added size validation in TarHeader.ReadLongName():
  - MAX_LONG_NAME_SIZE constant (32KB) - covers real-world path limits
  - if (size < 0 || size > MAX_LONG_NAME_SIZE) throws InvalidFormatException (caught by IsTarFile, which returns false, so auto-detection continues)
- Added regression test Tar_Malformed_LongName_Excessive_Size, which creates a malformed header with an 8GB size

Example
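The validation described above can be sketched roughly as follows. This is a minimal reconstruction from the PR description, not the verbatim SharpCompress source; the stand-in InvalidFormatException class and the method signature are illustrative:

```csharp
using System;
using System.IO;
using System.Text;

// Stand-in for SharpCompress's exception type; IsTarFile catches it and
// returns false, so format auto-detection simply moves on.
class InvalidFormatException : Exception
{
    public InvalidFormatException(string message) : base(message) { }
}

static class TarLongNameSketch
{
    // 32KB upper bound, per the PR: comfortably covers real-world path limits.
    public const int MAX_LONG_NAME_SIZE = 32 * 1024;

    public static string ReadLongName(BinaryReader reader, long size)
    {
        // Validate before allocating: a misdetected header can otherwise
        // claim a multi-gigabyte name and exhaust memory.
        if (size < 0 || size > MAX_LONG_NAME_SIZE)
        {
            throw new InvalidFormatException($"Invalid TAR long name size: {size}");
        }
        var bytes = reader.ReadBytes((int)size);
        return Encoding.UTF8.GetString(bytes).TrimEnd('\0');
    }
}
```

The key design point is that the check happens before any allocation, so a corrupt size field costs nothing but a cheap comparison.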
All existing tests pass. No breaking changes.
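The malformed header that the regression test needs could be built along these lines. This is a hypothetical reconstruction; the real test lives in tests/SharpCompress.Test/Tar/TarReaderTests.cs and may differ in detail. The field offsets are the standard TAR header layout:

```csharp
using System;

// Sketch: a 512-byte TAR header whose type flag says "GNU long name"
// but whose size field claims roughly 8GB.
static class MalformedTarHeader
{
    public static byte[] Build()
    {
        var header = new byte[512];
        header[156] = (byte)'L'; // GNU long-name entry type lives at offset 156

        // Size field (offsets 124-135, octal ASCII): 0o77777777777, the
        // largest 11-digit octal value, i.e. 8GB minus one byte.
        for (var i = 0; i < 11; i++) header[124 + i] = (byte)'7';

        // Give the header a valid checksum (offsets 148-155) so that only
        // the size field is malformed: sum all bytes with the checksum
        // field treated as spaces, then store the sum as octal ASCII.
        for (var i = 148; i < 156; i++) header[i] = (byte)' ';
        var sum = 0;
        foreach (var b in header) sum += b;
        var chk = Convert.ToString(sum, 8).PadLeft(6, '0');
        for (var i = 0; i < 6; i++) header[148 + i] = (byte)chk[i];
        header[154] = 0;          // checksum terminator: NUL then space
        header[155] = (byte)' ';
        return header;
    }
}
```

A test would then wrap such a header in a MemoryStream and assert that TarArchive.IsTarFile returns false: with the fix, the oversized size raises InvalidFormatException internally instead of triggering a multi-gigabyte allocation.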
Original prompt

This section details the original issue you should resolve.
<issue_title>Bug: Memory exhaustion when auto-detecting a specific tar.lz archive</issue_title>
<issue_description>### Summary
When reading a specific .tar.lz file without providing an extension hint, the library attempts to auto-detect the format. This process incorrectly identifies the file as a Tar archive with a LongLink header, leading to an attempt to allocate a massive amount of memory (e.g., 20GB). This causes the application to either crash or fail to open the archive. Standard compression utilities can open this same file without any issues.

The root cause appears to be a lack of validation in TarHeader.Read() and its helper methods.

Steps to Reproduce

1. Use the specific .tar.lz file.
2. Do not set ReaderOptions.ExtensionHint, forcing the library to auto-detect the archive type.

Root Cause Analysis
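A minimal repro along the lines of the report might look like this. The file path is a placeholder; ReaderFactory, ReaderOptions, and MoveToNextEntry are SharpCompress APIs:

```csharp
using System.IO;
using SharpCompress.Readers;

// Hypothetical repro: open the problematic archive with no extension hint,
// forcing format auto-detection (where the misread TAR LongLink header
// used to trigger a multi-gigabyte allocation).
using var stream = File.OpenRead("sample.tar.lz"); // placeholder path
using var reader = ReaderFactory.Open(stream, new ReaderOptions());
while (reader.MoveToNextEntry())
{
    // Before the fix, execution never got this far for the affected file:
    // the TAR probe attempted the huge allocation during auto-detection.
}
```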
The problem occurs because the auto-detection mechanism first tries to parse the file as a standard Tar archive. My file is a .tar.lz, but a byte at a specific offset is misinterpreted.

1. In TarHeader.Read(), the code enters a loop to process headers.
2. For my specific file, the byte at offset 157 (read as entryType) happens to match EntryType.LongLink. This triggers a call to TarHeader.ReadLongName().
3. Inside ReadLongName(), the ReadSize(buffer) method calculates an extremely large value for nameLength based on the misinterpreted header data. The subsequent call to reader.ReadBytes(nameLength) attempts to allocate a massive array without any sanity checks.
4. The BinaryReader.ReadBytes() method directly allocates memory based on the provided count.

Stream Corruption
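The general defensive pattern that addresses this — validate an untrusted length before handing it to BinaryReader.ReadBytes() — can be sketched like this. This is a generic illustration, not SharpCompress code:

```csharp
using System;
using System.IO;

static class SafeRead
{
    // Reads exactly 'count' bytes, but only after checking the untrusted
    // length against a caller-supplied ceiling. BinaryReader.ReadBytes
    // allocates a new byte[count] up front, so the check must come first.
    public static byte[] ReadBytesChecked(BinaryReader reader, long count, int limit)
    {
        if (count < 0 || count > limit)
        {
            throw new InvalidDataException($"Implausible length in header: {count}");
        }
        var bytes = reader.ReadBytes((int)count);
        if (bytes.Length != count)
        {
            throw new EndOfStreamException("Header promised more data than the stream holds");
        }
        return bytes;
    }
}
```

Checking the actual number of bytes returned also catches truncated streams, since ReadBytes returns a shorter array rather than throwing when the stream ends early.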
After the Tar parsing attempt fails (likely due to an EndOfStreamException or I/O error from Stream.ReadAtLeast()), the underlying Stream or SharpCompressStream appears to be left in a corrupted state.

When the auto-detection logic proceeds to the correct tar.lz format, it fails to read the header correctly. For example, it does not see the "LZIP" magic bytes at the beginning of the stream, even though debugging shows the bytes are present in the buffer. This strongly suggests that the stream's internal position or state has been irrecoverably altered by the failed read attempt.

Workaround

The issue can be avoided by explicitly setting ReaderOptions.ExtensionHint to guide the parser. This skips the problematic Tar auto-detection step. However, most users would expect the auto-detection to be robust and would not think to set this option unless they have investigated the source code.</issue_description>
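The workaround from the issue would look something like the following sketch. The ExtensionHint property name comes from the issue text; its exact type and expected value format are assumptions here, and the file path is a placeholder:

```csharp
using System.IO;
using SharpCompress.Readers;

// Hypothetical workaround: supply the extension up front so the TAR
// auto-detection probe is skipped entirely. A string-valued hint is
// assumed; consult the actual ReaderOptions API for the precise shape.
var options = new ReaderOptions { ExtensionHint = "tar.lz" };
using var stream = File.OpenRead("sample.tar.lz"); // placeholder path
using var reader = ReaderFactory.Open(stream, options);
```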
Comments on the Issue (you are @copilot in this section)
@adamhathcock Please make a P...
🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.