Dumping large hard disks to Aaru format throws OverflowException when hashing image file #877
Originally created by @kkaisershot on GitHub (Apr 8, 2022).
Originally assigned to: @claunia on GitHub.
Version
5.3.0
Commit hash
No response
Tested debug version?
Which operating systems have you used?
What is the architectural bit size you're using?
What processor are you using?
Device manufacturer
Seagate
Device model
2N1AP5-500
Bus the device uses to attach to the computer
USB cable or card reader manufacturer
Unknown; cable included with drive
USB cable or card reader model
Unknown; cable included with drive
What were you doing when it failed?
Description
Dumping large hard disks to the Aaru format results in a "System.OverflowException: The input exceeds data types" at SpamSumContext.cs:119 when hashing of the image file is nearly complete. I encountered this with a 2 TB drive (see the sketch after this description).
This is probably related to #722 and #649; however, since it throws a different exception with a different drive than #722, and fails at a different point in the dumping process than #649, I filed it as a separate issue.
As in #722, the dropdown in the bug report form only lets me select 5.3.0 as the latest LTS version; however, I was also able to reproduce this in 5.3.1, in both release and debug builds.
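The sketch below is a hypothetical, minimal C# illustration of this class of failure, not the actual code from SpamSumContext.cs: a 2 TB image holds more bytes than a 32-bit unsigned integer can represent, so narrowing the running byte count to a UInt32 throws System.OverflowException. The quoted message ("The input exceeds data types") is not the default .NET overflow message, so the real check inside Aaru presumably differs in its details.

```csharp
using System;

// Hypothetical sketch only; SpamSumContext.cs may perform a different check.
class OverflowSketch
{
    static void Main()
    {
        // ~2 TB, well above uint.MaxValue (4,294,967,295 bytes).
        long bytesHashed = 2_000_000_000_000;

        try
        {
            // Convert.ToUInt32 range-checks and throws when the value does not fit in 32 bits.
            uint truncated = Convert.ToUInt32(bytesHashed);
            Console.WriteLine(truncated);
        }
        catch (OverflowException e)
        {
            // On .NET this prints "Value was either too large or too small for a UInt32."
            Console.WriteLine(e.Message);
        }
    }
}
```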
Exact command line used
./aaru media dump -O compress=False,deduplicate=False -v -d /dev/sdb /mnt/NACM2YNL.aaruf 2>&1 | tee output.txt
Expected behavior
Hashing of the image file should complete without error, and the dump should then proceed to hashing sectors and finally closing the image.
Actual behavior
Output of command execution with debug output enabled
Media details
Seagate Portable Drive
P/N: 2N1AP5-500
2TB
@TheRogueArchivist commented on GitHub (Apr 11, 2022):
Very interesting: it is a different error, but it seems to happen on the same line of code as #722. The same general advice for working around this issue should apply. Thank you for being so diligent and noticing that the errors were different!