EntryStream.Dispose() calls Flush() on Deflate/LZMA streams causing NotSupportedException on non-seekable streams #765

Closed
opened 2026-01-29 22:17:14 +00:00 by claunia · 3 comments

Originally created by @rleroux-regnology on GitHub (Jan 22, 2026).

Originally assigned to: @adamhathcock, @Copilot on GitHub.

Hi,

Since SharpCompress **0.41.0**, `EntryStream.Dispose()` calls `Flush()` on some internal decompression streams:

```cs
//Need a safe standard approach to this - it's okay for compression to overreads. Handling needs to be standardised
if (_stream is IStreamStack ss)
{
    if (ss.BaseStream() is SharpCompress.Compressors.Deflate.DeflateStream deflateStream)
    {
        deflateStream.Flush(); //Deflate over reads. Knock it back
    }
    else if (ss.BaseStream() is SharpCompress.Compressors.LZMA.LzmaStream lzmaStream)
    {
        lzmaStream.Flush(); //Lzma over reads. Knock it back
    }
}
```

This causes a `NotSupportedException` in some legitimate streaming scenarios.


### Context / real-world scenario

I'm using SharpCompress in a pure streaming pipeline in ASP.NET Core:

- Source stream: `HttpRequest.Body`
- Read via `MultipartReader` (multipart/form-data)
- Archive entries are processed sequentially using `ReaderFactory.Open(...).MoveToNextEntry()`
- Entry streams are **non-seekable by design**

In this setup, `Flush()` on `DeflateStream` / `LzmaStream` may internally try to access `Position` / `Seek` on the underlying stream stack, which is **not supported** and throws `NotSupportedException`.

This happens during `EntryStream.Dispose()`, which breaks the iteration and prevents moving to the next entry.
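
To make the failure reproducible outside ASP.NET Core, a minimal non-seekable wrapper can stand in for `HttpRequest.Body`. This is a sketch; `NonSeekableStream` is a hypothetical test helper, not part of SharpCompress:

```cs
using System;
using System.IO;

// Forward-only wrapper: CanSeek is false and Position/Seek throw,
// mirroring HttpRequest.Body / MultipartReader section streams.
public sealed class NonSeekableStream : Stream
{
    private readonly Stream _inner;
    public NonSeekableStream(Stream inner) => _inner = inner;

    public override bool CanRead => true;
    public override bool CanSeek => false;
    public override bool CanWrite => false;
    public override long Length => throw new NotSupportedException();
    public override long Position
    {
        get => throw new NotSupportedException();
        set => throw new NotSupportedException();
    }

    public override int Read(byte[] buffer, int offset, int count) => _inner.Read(buffer, offset, count);
    public override long Seek(long offset, SeekOrigin origin) => throw new NotSupportedException();
    public override void SetLength(long value) => throw new NotSupportedException();
    public override void Flush() { }
}
```

Feeding such a stream to `ReaderFactory.Open(...)` and disposing each entry stream reproduces the exception described above.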


### Why this is problematic

From a consumer point of view:

- `Dispose()` is expected to be **safe and non-throwing**
- Especially in streaming scenarios, `Dispose()` is required to advance to the next entry
- Throwing `NotSupportedException` during `Dispose()` makes SharpCompress unusable in valid non-seekable streaming pipelines


### Expected behavior / suggestion

At minimum, `EntryStream.Dispose()` should:

- Not throw if `Flush()` is not supported
- Swallow or ignore `NotSupportedException` coming from `Flush()`

Example defensive pattern:

```cs
try
{
    deflateStream.Flush();
}
catch (NotSupportedException)
{
    // ignore: underlying stream does not support required operations
}
```

Or more generally: `Dispose()` should never fail due to optional stream realignment logic.


### Workaround on consumer side

Currently I have to wrap `Dispose()` in a try/catch and manually dispose the base stream via `IStreamStack`, which works but feels like something the library should handle.
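
For reference, the consumer-side workaround looks roughly like this (a sketch of my own code, not a recommended pattern; `reader` and `destination` come from the surrounding pipeline):

```cs
var entryStream = reader.OpenEntryStream();
try
{
    entryStream.CopyTo(destination);
}
finally
{
    try
    {
        entryStream.Dispose();
    }
    catch (NotSupportedException)
    {
        // Flush() on the inner Deflate/LZMA stream touched Position/Seek
        // on the non-seekable source; nothing left to do here.
    }
}
```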


### Summary

- The new `Flush()` call in `EntryStream.Dispose()` breaks valid non-seekable streaming scenarios.
- The over-read problem is real, but the current solution is unsafe.
- `Dispose()` should not throw in this case.

claunia added the enhancement label 2026-01-29 22:17:14 +00:00

@adamhathcock commented on GitHub (Jan 22, 2026):

This should be easy to test, right? Use the ForwardOnlyStream on something with your same code? I have to confess I don't completely understand the scenario and why it's different from other streaming scenarios.
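
A test along those lines might look like this (sketch; assumes the `ForwardOnlyStream` wrapper from the SharpCompress test suite, xUnit, and a hypothetical deflate-compressed zip fixture):

```cs
using System.IO;
using SharpCompress.Readers;
using Xunit;

public class NonSeekableDisposeTests
{
    [Fact]
    public void EntryStream_Dispose_DoesNotThrow_OnNonSeekableSource()
    {
        using var fixture = File.OpenRead("Fixtures/deflate.zip"); // hypothetical fixture path
        using var forwardOnly = new ForwardOnlyStream(fixture);
        using var reader = ReaderFactory.Open(forwardOnly);

        while (reader.MoveToNextEntry())
        {
            var entryStream = reader.OpenEntryStream();
            entryStream.CopyTo(Stream.Null);
            entryStream.Dispose(); // must not throw NotSupportedException
        }
    }
}
```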


@adamhathcock commented on GitHub (Jan 22, 2026):

I think this was done as requested.


@rleroux-regnology commented on GitHub (Jan 23, 2026):

> I think this was done as requested.

Thank you very much for the quick fix. I just retested with the latest changes and I no longer get errors when calling Dispose.

Unfortunately I'm still encountering issues when `ReaderFactory` consumes my `MultipartReaderStream`.

I have a silent issue when iterating through entries; I've opened a new issue for it: https://github.com/adamhathcock/sharpcompress/issues/1155

Reference: starred/sharpcompress#765