Enhance documentation for various functions

2025-09-30 16:03:34 +01:00
parent bebdbee136
commit d5d2bb100f
13 changed files with 1513 additions and 114 deletions


@@ -29,11 +29,45 @@
* @brief Processes a data block from the image stream.
*
* Reads a data block from the image, decompresses if needed, and updates the context with its contents.
* This function handles various types of data blocks including compressed (LZMA) and uncompressed data,
* performs CRC validation, and stores the processed data in the appropriate context fields.
*
* @param ctx Pointer to the aaruformat context.
* @param entry Pointer to the index entry describing the data block.
*
* @return Returns one of the following status codes:
* @retval AARUF_STATUS_OK (0) Successfully processed the data block. This is returned when:
* - The block is processed successfully and all validations pass
* - A NoData block type is encountered (these are skipped)
* - A UserData block type is encountered (these update sector size but are otherwise skipped)
* - Block validation fails but processing continues (non-fatal errors like CRC mismatches)
* - Memory allocation failures occur (processing continues with other blocks)
* - Block reading failures occur (processing continues with other blocks)
* - Unknown compression types are encountered (block is skipped)
*
* @retval AARUF_ERROR_NOT_AARUFORMAT (-1) The context or image stream is invalid (NULL pointers).
*
* @retval AARUF_ERROR_CANNOT_READ_BLOCK (-7) Failed to seek to the block position in the image stream.
* This occurs when fseek() fails or the file position doesn't match the expected offset.
*
* @retval AARUF_ERROR_CANNOT_DECOMPRESS_BLOCK (-17) LZMA decompression failed. This can happen when:
* - The LZMA decoder returns an error code
* - The decompressed data size doesn't match the expected block length
*
* @note Most validation and reading errors are treated as non-fatal and result in AARUF_STATUS_OK
* being returned while the problematic block is skipped. This allows processing to continue
* with other blocks in the image.
*
* @note The function performs the following validations:
* - Block identifier matches the expected block type
* - Block data type matches the expected data type
* - CRC64 checksum validation (with version-specific byte order handling)
* - Proper decompression for LZMA-compressed blocks
*
* @warning Memory allocated for block data is stored in the context and should be freed when
* the context is destroyed. The function may replace existing data in the context.
*/
int32_t process_data_block(aaruformatContext *ctx, IndexEntry *entry)
{
    TRACE("Entering process_data_block(%p, %p)", ctx, entry);
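A caller-side sketch of the error-handling contract documented above: since non-fatal problems (CRC mismatches, skipped blocks) already come back as AARUF_STATUS_OK, any other code can be treated as fatal. The loop and the `process_all_blocks` name are illustrative, not the library's API; only the status values come from the documentation.

```c
#include <stdint.h>

/* Status codes as documented above. */
enum
{
    AARUF_STATUS_OK                     = 0,
    AARUF_ERROR_NOT_AARUFORMAT          = -1,
    AARUF_ERROR_CANNOT_READ_BLOCK       = -7,
    AARUF_ERROR_CANNOT_DECOMPRESS_BLOCK = -17
};

/* Walk a list of per-block results: any non-OK code is fatal and aborts the
   walk, because non-fatal conditions already surface as AARUF_STATUS_OK. */
static int32_t process_all_blocks(const int32_t *results, int32_t count)
{
    int32_t processed = 0;

    for(int32_t i = 0; i < count; i++)
    {
        if(results[i] != AARUF_STATUS_OK) return results[i]; /* fatal: propagate */
        processed++;
    }

    return processed; /* number of blocks processed */
}
```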


@@ -33,9 +33,94 @@
* @brief Closes an AaruFormat image context and frees resources.
*
* Closes the image file, frees memory, and releases all resources associated with the context.
* For images opened in write mode, this function performs critical finalization operations
* including writing cached DDT tables, updating the index, writing the final image header,
* and ensuring all data structures are properly persisted. It handles both single-level
* and multi-level DDT structures and performs comprehensive cleanup of all allocated resources.
*
* @param context Pointer to the aaruformat context to close.
*
* @return Returns one of the following status codes:
* @retval 0 Successfully closed the context and freed all resources. This is returned when:
* - The context is valid and properly initialized
* - For write mode: All cached data is successfully written to the image file
* - DDT tables (single-level or multi-level) are successfully written and indexed
* - The image index is successfully written with proper CRC validation
* - The final image header is updated with correct index offset and written
* - All memory resources are successfully freed
* - The image stream is closed without errors
*
* @retval -1 Context validation failed. This occurs when:
* - The context parameter is NULL
* - The context magic number doesn't match AARU_MAGIC (invalid context type)
* - errno is set to EINVAL to indicate an invalid argument
*
* @retval AARUF_ERROR_CANNOT_WRITE_HEADER (-16) Write operations failed. This occurs when:
* - Cannot write the initial image header at position 0 (for write mode)
* - Cannot write cached secondary DDT header or data to the image file
* - Cannot write primary DDT header or table data to the image file
* - Cannot write single-level DDT header or table data to the image file
* - Cannot write index header to the image file
* - Cannot write all index entries to the image file
* - Cannot update the final image header with the correct index offset
* - Any file I/O operation fails during the finalization process
*
* @retval Error codes from aaruf_close_current_block() may be propagated when:
* - The current writing buffer cannot be properly closed and written
* - Block finalization fails during the close operation
* - Compression or writing of the final block encounters errors
*
* @note Write Mode Finalization Process:
* - Writes the image header at the beginning of the file
* - Closes any open writing buffer (current block being written)
* - Writes cached secondary DDT tables and updates primary DDT references
* - Writes primary DDT tables (single-level or multi-level) with CRC validation
* - Writes the complete index with all block references at the end of the file
* - Updates the image header with the final index offset
*
* @note DDT Finalization:
* - For multi-level DDTs: Writes cached secondary tables and updates primary table pointers
* - For single-level DDTs: Writes the complete table directly to the designated position
* - Calculates and validates CRC64 checksums for all DDT data
* - Updates index entries to reference newly written DDT blocks
*
* @note Index Writing:
* - Creates IndexHeader3 structure with entry count and CRC validation
* - Writes all IndexEntry structures sequentially after the header
* - Aligns index position to block boundaries for optimal access
* - Updates the main image header with the final index file offset
*
* @note Memory Cleanup:
* - Frees all allocated DDT tables (userDataDdtMini, userDataDdtBig, cached secondary tables)
* - Frees sector metadata arrays (sectorPrefix, sectorSuffix, sectorSubchannel, etc.)
* - Frees media tag hash table and all associated tag data
* - Frees track entries, metadata blocks, and hardware information
* - Closes LRU caches for block headers and data
* - Unmaps memory-mapped DDT structures (Linux-specific)
*
* @note Platform-Specific Operations:
* - Linux: Unmaps memory-mapped DDT structures using munmap() if not loaded in memory
* - Other platforms: Standard memory deallocation only
*
* @note Error Handling Strategy:
* - Critical write failures return immediately with error codes
* - Memory cleanup continues even if some write operations fail
* - All allocated resources are freed regardless of write success/failure
* - File stream is always closed, even on error conditions
*
* @warning This function must be called to properly finalize write-mode images.
* Failing to call aaruf_close() on write-mode contexts will result in
* incomplete or corrupted image files.
*
* @warning After calling this function, the context pointer becomes invalid and
* should not be used. All operations on the context will result in
* undefined behavior.
*
* @warning For write-mode contexts, this function performs extensive file I/O.
* Ensure sufficient disk space and proper file permissions before calling.
*
* @warning The function sets errno to EINVAL for context validation failures
* but uses library-specific error codes for write operation failures.
*/
int aaruf_close(void *context)
{
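The -1/EINVAL validation contract described above can be sketched with a stand-in context type. `demo_close`, `demo_context`, and the magic value are hypothetical; the real function compares against AARU_MAGIC, whose value is not shown here.

```c
#include <errno.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical magic value for illustration only. */
#define DEMO_MAGIC 0x41415255u

typedef struct
{
    uint32_t magic;
} demo_context;

/* Mirrors the validation contract above: a NULL pointer or a wrong magic
   number returns -1 with errno set to EINVAL. */
static int demo_close(demo_context *context)
{
    if(context == NULL || context->magic != DEMO_MAGIC)
    {
        errno = EINVAL;
        return -1;
    }

    /* ...finalization and cleanup would happen here... */
    return 0;
}
```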


@@ -30,6 +30,9 @@
* @brief Creates a new AaruFormat image file.
*
* Allocates and initializes a new aaruformat context and image file with the specified parameters.
* This function sets up all necessary data structures including headers, DDT (deduplication table),
* caches, and index entries for writing a new AaruFormat image. It also handles file creation,
* memory allocation, and proper initialization of the writing context.
*
* @param filepath Path to the image file to create.
* @param media_type Media type identifier.
@@ -37,12 +40,63 @@
* @param user_sectors Number of user data sectors.
* @param negative_sectors Number of negative sectors.
* @param overflow_sectors Number of overflow sectors.
* @param options String with creation options (parsed for alignment and shift parameters).
* @param application_name Pointer to the application name string.
* @param application_name_length Length of the application name string (must be ≤ AARU_HEADER_APP_NAME_LEN).
* @param application_major_version Major version of the application.
* @param application_minor_version Minor version of the application.
*
* @return Returns one of the following:
* @retval aaruformatContext* Successfully created and initialized context. The returned pointer contains:
* - Properly initialized AaruFormat headers and metadata
* - Allocated and configured DDT structures for deduplication
* - Initialized block and header caches for performance
* - Open file stream ready for writing operations
* - Index entries array ready for block tracking
* - ECC context initialized for Compact Disc support
*
* @retval NULL Creation failed. The specific error can be determined by checking errno, which will be set to:
* - AARUF_ERROR_NOT_ENOUGH_MEMORY (-9) when memory allocation fails for:
* * Context allocation
* * Readable sector tags array allocation
* * Application version string allocation
* * Image version string allocation
* * DDT table allocation (userDataDdtMini or userDataDdtBig)
* * Index entries array allocation
* - AARUF_ERROR_CANNOT_CREATE_FILE (-19) when file operations fail:
* * Unable to open the specified filepath for writing
* * File seek operations fail during initialization
* * File system errors or permission issues
* - AARUF_ERROR_INVALID_APP_NAME_LENGTH (-20) when:
* * application_name_length exceeds AARU_HEADER_APP_NAME_LEN
*
* @note Memory Management:
* - The function performs extensive memory allocation for various context structures
* - On failure, all previously allocated memory is properly cleaned up
* - The returned context must be freed using appropriate cleanup functions
*
* @note File Operations:
* - Creates a new file at the specified path (overwrites existing files)
* - Opens the file in binary read/write mode ("wb+")
* - Positions the file pointer at the calculated data start position
* - File alignment is handled based on parsed options
*
* @note DDT Initialization:
* - Uses DDT version 2 format with configurable compression and alignment
* - Supports both small (16-bit) and big (32-bit) DDT entry sizes
* - Calculates optimal table sizes based on sector counts and shift parameters
* - All DDT entries are initialized to zero (indicating unallocated sectors)
*
* @note Options Parsing:
* - The options string is parsed to extract block_alignment, data_shift, and table_shift
* - These parameters affect memory usage, performance, and file organization
* - Invalid options may result in suboptimal performance but won't cause failure
*
* @warning The created context is in writing mode and expects proper finalization
* before closing to ensure index and metadata are written correctly.
*
* @warning Application name length validation is strict - exceeding the limit will
* cause creation failure with AARUF_ERROR_INVALID_APP_NAME_LENGTH.
*/
void *aaruf_create(const char *filepath, uint32_t media_type, uint32_t sector_size, uint64_t user_sectors,
                   uint64_t negative_sectors, uint64_t overflow_sectors, const char *options,
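The strict application-name length rule documented above can be sketched in isolation. The limit value and names below are stand-ins; the real constants are AARU_HEADER_APP_NAME_LEN and AARUF_ERROR_INVALID_APP_NAME_LENGTH (-20), and the actual limit value is not shown in this diff.

```c
#include <stdint.h>

/* Stand-in values for illustration; the real limit and error code are
   AARU_HEADER_APP_NAME_LEN and AARUF_ERROR_INVALID_APP_NAME_LENGTH. */
#define DEMO_APP_NAME_LEN                64u
#define DEMO_ERR_INVALID_APP_NAME_LENGTH (-20)

/* Mirrors the strict length check described above: exceeding the limit
   fails creation; anything at or below the limit passes. */
static int32_t validate_app_name_length(uint32_t application_name_length)
{
    if(application_name_length > DEMO_APP_NAME_LEN) return DEMO_ERR_INVALID_APP_NAME_LENGTH;

    return 0;
}
```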


@@ -32,11 +32,54 @@
* @brief Processes a DDT v1 block from the image stream.
*
* Reads and decompresses (if needed) a DDT v1 block, verifies its integrity, and loads it into memory or maps it.
* This function handles both user data DDT blocks and CD sector prefix/suffix corrected DDT blocks, supporting
* both LZMA compression and uncompressed formats. On Linux, uncompressed blocks can be memory-mapped for
* improved performance.
*
* @param ctx Pointer to the aaruformat context.
* @param entry Pointer to the index entry describing the DDT block.
* @param found_user_data_ddt Pointer to a boolean that will be set to true if a user data DDT was found and loaded.
*
* @return Returns one of the following status codes:
* @retval AARUF_STATUS_OK (0) Successfully processed the DDT block. This is returned when:
* - The DDT block is successfully read, decompressed (if needed), and loaded into memory
* - Memory mapping of uncompressed DDT succeeds on Linux systems
* - CD sector prefix/suffix corrected DDT blocks are processed successfully
* - Memory allocation failures occur for non-critical operations (processing continues)
* - File reading errors occur for compressed data or LZMA properties (processing continues)
* - Unknown compression types are encountered (block is skipped)
* - Memory mapping fails on Linux (sets found_user_data_ddt to false but continues)
* - Uncompressed DDT is encountered on non-Linux systems (not yet implemented, continues)
*
* @retval AARUF_ERROR_NOT_AARUFORMAT (-1) The context or image stream is invalid (NULL pointers).
*
* @retval AARUF_ERROR_CANNOT_READ_BLOCK (-7) Failed to access the DDT block in the image stream. This occurs when:
* - fseek() fails to position at the DDT block offset
* - The file position doesn't match the expected offset after seeking
* - Failed to read the DDT header from the image stream
* - The number of bytes read for the DDT header is insufficient
*
* @retval AARUF_ERROR_CANNOT_DECOMPRESS_BLOCK (-17) LZMA decompression failed. This can happen when:
* - The LZMA decoder returns a non-zero error code during decompression
* - The decompressed data size doesn't match the expected DDT block length
* - This error causes immediate function termination and memory cleanup
*
* @note The function exhibits different error handling strategies depending on the operation:
* - Critical errors (seek failures, header read failures, decompression failures) cause immediate return
* - Non-critical errors (memory allocation failures, unknown compression types) allow processing to continue
* - The found_user_data_ddt flag is updated to reflect the success of user data DDT loading
*
* @note Memory Management:
* - Allocated DDT data is stored in the context (ctx->userDataDdt, ctx->sectorPrefixDdt, ctx->sectorSuffixDdt)
* - On Linux, uncompressed DDTs may be memory-mapped instead of allocated
* - Memory is automatically cleaned up on decompression errors
*
* @note Platform-specific behavior:
* - Linux: Supports memory mapping of uncompressed DDT blocks for better performance
* - Non-Linux: Uncompressed DDT processing is not yet implemented
*
* @warning The function modifies context state including sector count, shift value, and DDT version.
* Ensure proper context cleanup when the function completes.
*/
int32_t process_ddt_v1(aaruformatContext *ctx, IndexEntry *entry, bool *found_user_data_ddt)
{
@@ -301,14 +344,44 @@ int32_t process_ddt_v1(aaruformatContext *ctx, IndexEntry *entry, bool *found_us
/**
* @brief Decodes a DDT v1 entry for a given sector address.
*
* Determines the offset and block offset for a sector using the DDT v1 table. This function performs
* bit manipulation on the DDT entry to extract the sector offset within a block and the block offset,
* and determines whether the sector was dumped or not based on the DDT entry value.
*
* @param ctx Pointer to the aaruformat context containing the loaded DDT table.
* @param sector_address Logical sector address to decode (must be within valid range).
* @param offset Pointer to store the resulting sector offset within the block.
* @param block_offset Pointer to store the resulting block offset in the image.
* @param sector_status Pointer to store the sector status (dumped or not dumped).
*
* @return Returns one of the following status codes:
* @retval AARUF_STATUS_OK (0) Successfully decoded the DDT entry. This is always returned when:
* - The context and image stream are valid
* - The DDT entry is successfully extracted and decoded
* - The offset, block_offset, and sector_status are successfully populated
* - A zero DDT entry is encountered (indicates sector not dumped)
* - A non-zero DDT entry is encountered (indicates sector was dumped)
*
* @retval AARUF_ERROR_NOT_AARUFORMAT (-1) The context or image stream is invalid (NULL pointers).
* This is the only error condition that can occur in this function.
*
* @note This is a lightweight function that performs only basic validation and bit manipulation.
* It does not perform bounds checking on the sector_address parameter.
*
* @note DDT Entry Decoding:
* - Uses a bit mask derived from ctx->shift to extract the offset within block
* - Right-shifts the DDT entry by ctx->shift bits to get the block offset
* - A zero DDT entry indicates the sector was not dumped (SectorStatusNotDumped)
* - A non-zero DDT entry indicates the sector was dumped (SectorStatusDumped)
*
* @warning The function assumes:
* - The DDT table (ctx->userDataDdt) has been properly loaded by process_ddt_v1()
* - The sector_address is within the valid range of the DDT table
* - The shift value (ctx->shift) has been properly initialized
* - All output parameters (offset, block_offset, sector_status) are valid pointers
*
* @warning No bounds checking is performed on sector_address. Accessing beyond the DDT table
* boundaries will result in undefined behavior.
*/
int32_t decode_ddt_entry_v1(aaruformatContext *ctx, uint64_t sector_address, uint64_t *offset, uint64_t *block_offset,
                            uint8_t *sector_status)
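The v1 decoding rule in the notes above (offset mask derived from the shift value, block offset from the remaining high bits, zero entry meaning not dumped) can be exercised stand-alone. The function name and the integer status flag here are illustrative, not the library's API:

```c
#include <stdint.h>

/* Stand-alone sketch of DDT v1 entry decoding as described above. */
static void decode_v1_entry(uint64_t ddt_entry, uint8_t shift,
                            uint64_t *offset, uint64_t *block_offset, int *dumped)
{
    if(ddt_entry == 0)
    {
        /* Zero entry: the sector was never dumped (SectorStatusNotDumped). */
        *offset       = 0;
        *block_offset = 0;
        *dumped       = 0;
        return;
    }

    *offset       = ddt_entry & ((1ULL << shift) - 1); /* low `shift` bits: offset within block */
    *block_offset = ddt_entry >> shift;                /* remaining high bits: block offset */
    *dumped       = 1;                                 /* non-zero entry: SectorStatusDumped */
}
```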


@@ -29,11 +29,69 @@
* @brief Processes a DDT v2 block from the image stream.
*
* Reads and decompresses (if needed) a DDT v2 block, verifies its CRC, and loads it into memory.
* This function handles both user data DDT blocks and CD sector prefix/suffix corrected DDT blocks,
* supporting both LZMA compression and uncompressed formats. It performs CRC64 validation and
* stores the processed DDT data in the appropriate context fields based on size type (small/big).
*
* @param ctx Pointer to the aaruformat context.
* @param entry Pointer to the index entry describing the DDT block.
* @param found_user_data_ddt Pointer to a boolean that will be set to true if a user data DDT was found and loaded.
*
* @return Returns one of the following status codes:
* @retval AARUF_STATUS_OK (0) Successfully processed the DDT block. This is returned when:
* - The DDT block is successfully read, decompressed (if needed), and loaded into memory
* - CRC64 validation passes for the DDT data
* - User data DDT blocks are processed and context is properly updated
* - CD sector prefix/suffix corrected DDT blocks are processed successfully
* - Memory allocation failures occur for non-critical operations (processing continues)
* - File reading errors occur for compressed data or LZMA properties (processing continues)
* - Unknown compression types are encountered (block is skipped)
*
* @retval AARUF_ERROR_NOT_AARUFORMAT (-1) The context or image stream is invalid (NULL pointers).
*
* @retval AARUF_ERROR_CANNOT_READ_BLOCK (-7) Failed to access the DDT block in the image stream. This occurs when:
* - fseek() fails to position at the DDT block offset
* - The file position doesn't match the expected offset after seeking
* - Failed to read the DDT header from the image stream
* - The number of bytes read for the DDT header is insufficient
* - CRC64 context initialization fails (internal error)
*
* @retval AARUF_ERROR_CANNOT_DECOMPRESS_BLOCK (-17) LZMA decompression failed. This can happen when:
* - The LZMA decoder returns a non-zero error code during decompression
* - The decompressed data size doesn't match the expected DDT block length
* - This error causes immediate function termination and memory cleanup
*
* @retval AARUF_ERROR_INVALID_BLOCK_CRC (-18) CRC64 validation failed. This occurs when:
* - Calculated CRC64 doesn't match the expected CRC64 in the DDT header
* - Data corruption is detected in the DDT block
* - This applies to both compressed and uncompressed DDT blocks
*
* @note Error Handling Strategy:
* - Critical errors (seek failures, header read failures, decompression failures, CRC failures) cause immediate return
* - Non-critical errors (memory allocation failures, unknown compression types) allow processing to continue
* - The found_user_data_ddt flag is updated to reflect the success of user data DDT loading
*
* @note DDT v2 Features:
* - Supports both small (16-bit) and big (32-bit) DDT entry sizes
* - Handles multi-level DDT hierarchies with tableShift parameter
* - Updates context with sector counts, DDT version, and primary DDT offset
* - Stores DDT data in size-appropriate context fields (userDataDdtMini/Big, sectorPrefixDdt, etc.)
*
* @note Memory Management:
* - Allocated DDT data is stored in the context and becomes part of the context lifecycle
* - Memory is automatically cleaned up on decompression or CRC validation errors
* - Buffer memory is reused for the final DDT data storage (no double allocation)
*
* @note CRC Validation:
* - All DDT blocks undergo CRC64 validation regardless of compression type
* - CRC is calculated on the final decompressed data
* - Uses standard CRC64 calculation (no version-specific endianness conversion like v1)
*
* @warning The function modifies context state including sector count, DDT version, and primary DDT offset.
* Ensure proper context cleanup when the function completes.
*
* @warning Memory allocated for DDT data becomes part of the context and should not be freed separately.
* The context cleanup functions will handle DDT memory deallocation.
*/
int32_t process_ddt_v2(aaruformatContext *ctx, IndexEntry *entry, bool *found_user_data_ddt)
{
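The CRC ordering described in the notes above (checksum the final decompressed data, then compare against the header's expected value) can be sketched with a toy checksum standing in for CRC64; both the checksum and the names are illustrative only.

```c
#include <stddef.h>
#include <stdint.h>

#define DEMO_ERR_INVALID_BLOCK_CRC (-18)

/* Toy stand-in checksum; the real format uses CRC64 over the same bytes. */
static uint64_t toy_checksum(const uint8_t *data, size_t len)
{
    uint64_t sum = 0;
    for(size_t i = 0; i < len; i++) sum = sum * 31 + data[i];
    return sum;
}

/* Mirrors the validation order described above: checksum the final
   (already decompressed) data and compare with the header's value. */
static int32_t validate_block(const uint8_t *decompressed, size_t len, uint64_t expected)
{
    return toy_checksum(decompressed, len) == expected ? 0 : DEMO_ERR_INVALID_BLOCK_CRC;
}
```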
@@ -418,14 +476,45 @@ int32_t process_ddt_v2(aaruformatContext *ctx, IndexEntry *entry, bool *found_us
/**
* @brief Decodes a DDT v2 entry for a given sector address.
*
* Determines the offset and block offset for a sector using the DDT v2 table(s). This function acts
* as a dispatcher that automatically selects between single-level and multi-level DDT decoding based
* on the tableShift parameter in the DDT header. It provides a unified interface for DDT v2 entry
* decoding regardless of the underlying table structure complexity.
*
* @param ctx Pointer to the aaruformat context containing the loaded DDT structures.
* @param sector_address Logical sector address to decode (will be adjusted for negative sectors).
* @param offset Pointer to store the resulting sector offset within the block.
* @param block_offset Pointer to store the resulting block offset in the image.
* @param sector_status Pointer to store the sector status (dumped, not dumped, etc.).
*
* @return Returns one of the following status codes:
* @retval AARUF_STATUS_OK (0) Successfully decoded the DDT entry. This is always returned when:
* - The context and image stream are valid
* - The appropriate decoding function (single-level or multi-level) completes successfully
* - All output parameters are properly populated with decoded values
*
* @retval AARUF_ERROR_NOT_AARUFORMAT (-1) The context or image stream is invalid (NULL pointers).
* This is the only error condition that can occur at this dispatcher level.
*
* @retval Other error codes may be returned by the underlying decoding functions:
* - From decode_ddt_single_level_v2(): AARUF_ERROR_CANNOT_READ_BLOCK (-7)
* - From decode_ddt_multi_level_v2(): AARUF_ERROR_CANNOT_READ_BLOCK (-7),
* AARUF_ERROR_CANNOT_DECOMPRESS_BLOCK (-17), AARUF_ERROR_INVALID_BLOCK_CRC (-18)
*
* @note Function Selection:
* - If tableShift > 0: Uses multi-level DDT decoding (decode_ddt_multi_level_v2)
* - If tableShift = 0: Uses single-level DDT decoding (decode_ddt_single_level_v2)
* - The tableShift parameter is read from ctx->userDataDdtHeader.tableShift
*
* @note This function performs minimal validation and primarily acts as a dispatcher.
* Most error conditions and complex logic are handled by the underlying functions.
*
* @warning The function assumes the DDT has been properly loaded by process_ddt_v2().
* Calling this function with an uninitialized or corrupted DDT will result in
* undefined behavior from the underlying decoding functions.
*
* @warning All output parameters must be valid pointers. No bounds checking is performed
* on the sector_address parameter at this level.
*/
int32_t decode_ddt_entry_v2(aaruformatContext *ctx, uint64_t sector_address, uint64_t *offset, uint64_t *block_offset,
                            uint8_t *sector_status)
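The dispatcher rule above (tableShift > 0 selects the multi-level path, tableShift = 0 the single-level path) can be sketched with stubs standing in for the two real decoders; all names here are illustrative:

```c
#include <stdint.h>

typedef int32_t (*ddt_decoder_fn)(uint64_t sector_address);

/* Stubs standing in for decode_ddt_single_level_v2() / decode_ddt_multi_level_v2(). */
static int32_t single_level_stub(uint64_t sector_address) { (void)sector_address; return 1; }
static int32_t multi_level_stub(uint64_t sector_address)  { (void)sector_address; return 2; }

/* Mirrors the selection rule above: tableShift > 0 means multi-level. */
static ddt_decoder_fn select_decoder(uint8_t table_shift)
{
    return table_shift > 0 ? multi_level_stub : single_level_stub;
}
```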
@@ -450,14 +539,61 @@ int32_t decode_ddt_entry_v2(aaruformatContext *ctx, uint64_t sector_address, uin
/**
* @brief Decodes a single-level DDT v2 entry for a given sector address.
*
* Used when the DDT table does not use multi-level indirection (tableShift = 0). This function
* performs direct lookup in the primary DDT table to extract sector offset, block offset, and
* sector status information. It handles both small (16-bit) and big (32-bit) DDT entry formats
* and performs bit manipulation to decode the packed DDT entry values.
*
* @param ctx Pointer to the aaruformat context containing the loaded DDT table.
* @param sector_address Logical sector address to decode (adjusted for negative sectors).
* @param offset Pointer to store the resulting sector offset within the block.
* @param block_offset Pointer to store the resulting block offset in the image.
* @param sector_status Pointer to store the sector status (dumped, not dumped, etc.).
*
* @return Returns one of the following status codes:
* @retval AARUF_STATUS_OK (0) Successfully decoded the DDT entry. This is always returned when:
* - The context and image stream are valid
* - The tableShift validation passes (must be 0)
* - The DDT size type is recognized (SmallDdtSizeType or BigDdtSizeType)
* - The DDT entry is successfully extracted and decoded
* - All output parameters are properly populated with decoded values
* - Zero DDT entries are handled (indicates sector not dumped)
*
* @retval AARUF_ERROR_NOT_AARUFORMAT (-1) The context or image stream is invalid (NULL pointers).
*
* @retval AARUF_ERROR_CANNOT_READ_BLOCK (-7) Configuration or validation errors. This occurs when:
* - The tableShift is not zero (should use multi-level decoding instead)
* - The DDT size type is unknown/unsupported (not SmallDdtSizeType or BigDdtSizeType)
* - Internal consistency checks fail
*
* @note DDT Entry Decoding (Small DDT - 16-bit entries):
* - Bits 15-12: Sector status (4 bits)
* - Bits 11-0: Combined offset and block index (12 bits)
* - Offset mask: Derived from dataShift parameter
* - Block offset: Calculated using blockAlignmentShift parameter
*
* @note DDT Entry Decoding (Big DDT - 32-bit entries):
* - Bits 31-28: Sector status (4 bits)
* - Bits 27-0: Combined offset and block index (28 bits)
* - Offset mask and block offset calculated same as small DDT
*
* @note Negative Sector Handling:
* - Sector address is automatically adjusted by adding ctx->userDataDdtHeader.negative
* - This allows proper indexing into the DDT table for negative sector addresses
*
* @note Zero Entry Handling:
* - A zero DDT entry indicates the sector was not dumped
* - Sets sector_status to SectorStatusNotDumped and zeros offset/block_offset
* - This is a normal condition and not an error
*
* @warning The function assumes the DDT table has been properly loaded and is accessible
* via ctx->userDataDdtMini or ctx->userDataDdtBig depending on size type.
*
* @warning No bounds checking is performed on sector_address. Accessing beyond the DDT
* table boundaries will result in undefined behavior.
*
* @warning This function should only be called when tableShift is 0. Calling it with
* tableShift > 0 will result in AARUF_ERROR_CANNOT_READ_BLOCK.
*/
int32_t decode_ddt_single_level_v2(aaruformatContext *ctx, uint64_t sector_address, uint64_t *offset,
                                   uint64_t *block_offset, uint8_t *sector_status)
@@ -531,14 +667,89 @@ int32_t decode_ddt_single_level_v2(aaruformatContext *ctx, uint64_t sector_addre
/**
* @brief Decodes a multi-level DDT v2 entry for a given sector address.
*
* Used when the DDT table uses multi-level indirection (tableShift > 0). This function handles
* the complex process of navigating a hierarchical DDT structure where the primary table points
* to secondary tables that contain the actual sector mappings. It includes caching mechanisms
* for secondary tables, supports both compressed and uncompressed secondary tables, and performs
* comprehensive validation including CRC verification.
*
* @param ctx Pointer to the aaruformat context containing the loaded primary DDT table.
* @param sector_address Logical sector address to decode (adjusted for negative sectors).
* @param offset Pointer to store the resulting sector offset within the block.
* @param block_offset Pointer to store the resulting block offset in the image.
* @param sector_status Pointer to store the sector status (dumped, not dumped, etc.).
*
* @return Returns one of the following status codes:
* @retval AARUF_STATUS_OK (0) Successfully decoded the DDT entry. This is returned when:
* - The context and image stream are valid
* - The tableShift validation passes (must be > 0)
* - The DDT size type is recognized (SmallDdtSizeType or BigDdtSizeType)
* - Secondary DDT table is successfully loaded (from cache or file)
* - Secondary DDT decompression succeeds (if needed)
* - Secondary DDT CRC validation passes
* - The DDT entry is successfully extracted and decoded
* - All output parameters are properly populated with decoded values
* - Zero DDT entries are handled (indicates sector not dumped)
*
* @retval AARUF_ERROR_NOT_AARUFORMAT (-1) The context or image stream is invalid (NULL pointers).
*
* @retval AARUF_ERROR_CANNOT_READ_BLOCK (-7) Configuration, validation, or file access errors. This occurs when:
* - The tableShift is zero (should use single-level decoding instead)
* - The DDT size type is unknown/unsupported (not SmallDdtSizeType or BigDdtSizeType)
* - Cannot read the secondary DDT header from the image stream
* - Secondary DDT header validation fails (wrong identifier or type)
* - Cannot read uncompressed secondary DDT data from the image stream
* - CRC64 context initialization fails (internal error)
* - Memory allocation fails for secondary DDT data (critical failure)
* - Unknown compression type encountered in secondary DDT
*
* @retval AARUF_ERROR_CANNOT_DECOMPRESS_BLOCK (-17) LZMA decompression failed for secondary DDT. This occurs when:
* - Memory allocation fails for compressed data or decompression buffer
* - Cannot read LZMA properties from the image stream
* - Cannot read compressed secondary DDT data from the image stream
* - The LZMA decoder returns a non-zero error code during decompression
* - The decompressed data size doesn't match the expected secondary DDT length
*
* @retval AARUF_ERROR_INVALID_BLOCK_CRC (-18) CRC64 validation failed for secondary DDT. This occurs when:
* - Calculated CRC64 doesn't match the expected CRC64 in the secondary DDT header
* - Data corruption is detected in the secondary DDT data
* - This applies to both compressed and uncompressed secondary DDT blocks
*
* @note Multi-level DDT Navigation:
* - Uses tableShift to calculate items per DDT entry (2^tableShift)
* - Calculates DDT position by dividing sector address by items per entry
* - Retrieves secondary DDT offset from primary table at calculated position
* - Converts block offset to file offset using blockAlignmentShift
*
* @note Secondary DDT Caching:
* - Maintains a single cached secondary DDT in memory (ctx->cachedSecondaryDdtSmall/Big)
* - Compares requested offset with cached offset (ctx->cachedDdtOffset)
* - Only loads from disk if the requested secondary DDT is not currently cached
* - Caching improves performance for sequential sector access patterns
*
* @note Secondary DDT Processing:
* - Supports both LZMA compression and uncompressed formats
* - Performs full CRC64 validation of secondary DDT data
* - Handles both small (16-bit) and big (32-bit) entry formats
* - Same bit manipulation as single-level DDT for final entry decoding
*
* @note Error Handling Strategy:
* - Memory allocation failures for secondary DDT loading are treated as critical errors
* - File I/O errors and validation failures cause immediate function termination
* - Unknown compression types are treated as errors (unlike the processing functions)
* - All allocated memory is cleaned up on error conditions
*
* @warning This function should only be called when tableShift > 0. Calling it with
* tableShift = 0 will result in AARUF_ERROR_CANNOT_READ_BLOCK.
*
* @warning The function assumes the primary DDT table has been properly loaded and is accessible
* via ctx->userDataDdtMini or ctx->userDataDdtBig depending on size type.
*
* @warning Secondary DDT caching means that memory usage can increase during operation.
* The cached secondary DDT is replaced when a different secondary table is needed.
*
* @warning No bounds checking is performed on sector_address or calculated DDT positions.
* Accessing beyond table boundaries will result in undefined behavior.
*/
int32_t decode_ddt_multi_level_v2(aaruformatContext *ctx, uint64_t sector_address, uint64_t *offset,
                                  uint64_t *block_offset, uint8_t *sector_status)


@@ -24,10 +24,62 @@
/**
* @brief Identifies a file as an AaruFormat image using a file path.
*
* Opens the file at the given path and determines if it is an AaruFormat image by examining
* the file header for valid AaruFormat signatures and version information. This function
* provides a simple file-based interface that handles file opening, identification, and
* cleanup automatically. It delegates the actual identification logic to aaruf_identify_stream().
*
* @param filename Path to the file to identify (must be accessible and readable).
*
* @return Returns one of the following values:
* @retval 100 Maximum confidence - Definitive AaruFormat image. This is returned when:
* - The file header contains a valid AaruFormat signature (DIC_MAGIC or AARU_MAGIC)
* - The image major version is less than or equal to the supported version (AARUF_VERSION)
* - The file structure passes all header validation checks
* - This indicates the file is definitely a supported AaruFormat image
*
* @retval 0 Not recognized - File is not an AaruFormat image. This is returned when:
* - The file header doesn't contain a recognized AaruFormat signature
* - The image major version exceeds the maximum supported version
* - The file header cannot be read completely (file too small or corrupted)
* - The file format doesn't match AaruFormat specifications
*
* @retval Positive errno values - File access errors from system calls. Common values include:
* - ENOENT (2) - File not found or path doesn't exist
* - EACCES (13) - Permission denied, file not readable
* - EISDIR (21) - Path refers to a directory, not a file
* - EMFILE (24) - Too many open files (process limit reached)
* - ENFILE (23) - System limit on open files reached
* - ENOMEM (12) - Insufficient memory to open file
* - EIO (5) - I/O error occurred during file access
* - Other platform-specific errno values from fopen()
*
* @note Identification Process:
* - Opens the file in binary read mode ("rb")
* - Delegates identification to aaruf_identify_stream() for actual header analysis
* - Automatically closes the file stream regardless of identification result
* - Returns system errno values directly if file opening fails
*
* @note Confidence Scoring:
* - Uses binary scoring: 100 (definitive match) or 0 (no match)
* - No intermediate confidence levels are returned
* - Designed for simple yes/no identification rather than probabilistic matching
*
* @note Version Compatibility:
* - Only recognizes AaruFormat versions up to AARUF_VERSION
* - Future versions beyond library support are treated as unrecognized
* - Backwards compatible with older DIC_MAGIC identifiers
*
* @warning The function opens and closes the file for each identification.
* For repeated operations on the same file, consider using aaruf_identify_stream()
* with a pre-opened stream for better performance.
*
* @warning File access permissions and availability are checked at runtime.
* Network files or files on removable media may cause variable access times.
*
* @warning The function performs minimal file content validation. A positive result
* indicates the file appears to be AaruFormat but doesn't guarantee the
* entire file is valid or uncorrupted.
*/
int aaruf_identify(const char *filename)
{
@@ -47,10 +99,64 @@ int aaruf_identify(const char *filename)
/**
* @brief Identifies a file as an AaruFormat image using an open stream.
*
* Determines if the provided stream is an AaruFormat image by reading and validating
* the file header at the beginning of the stream. This function performs the core
* identification logic by checking for valid AaruFormat signatures and version
* compatibility. It's designed to work with any FILE stream, making it suitable
* for integration with existing file handling code.
*
* @param image_stream Open FILE stream positioned at any location (will be repositioned to start).
*
* @return Returns one of the following values:
* @retval 100 Maximum confidence - Definitive AaruFormat image. This is returned when:
* - The stream is successfully repositioned to the beginning
* - The AaruFormat header is successfully read (AaruHeader structure)
* - The header identifier matches either DIC_MAGIC or AARU_MAGIC (valid signatures)
* - The image major version is less than or equal to AARUF_VERSION (supported version)
* - All validation checks pass indicating a compatible AaruFormat image
*
* @retval 0 Not recognized - Stream is not an AaruFormat image. This is returned when:
* - The stream parameter is NULL
* - Cannot read a complete AaruHeader structure from the stream (file too small)
* - The header identifier doesn't match DIC_MAGIC or AARU_MAGIC (wrong format)
* - The image major version exceeds AARUF_VERSION (unsupported future version)
* - Any validation check fails indicating the stream is not a valid AaruFormat image
*
* @note Stream Handling:
* - Always seeks to position 0 at the beginning of the function
* - Reads exactly one AaruHeader structure (size depends on format version)
* - Does not restore the original stream position after identification
* - Stream remains open and positioned after the header on function return
*
* @note Signature Recognition:
* - DIC_MAGIC: Legacy identifier from original DiscImageChef format
* - AARU_MAGIC: Current AaruFormat identifier
* - Both signatures are accepted for backwards compatibility
* - Signature validation is performed using exact byte matching
*
* @note Version Validation:
* - Only checks the major version number for compatibility
* - Minor version differences are ignored (assumed backwards compatible)
* - Future major versions are rejected to prevent compatibility issues
* - Version check prevents attempting to read unsupported format variants
*
* @note Confidence Scoring:
* - Binary result: 100 (definitive) or 0 (not recognized)
* - No probabilistic or partial matching
* - Designed for definitive identification rather than format detection
*
* @warning The function modifies the stream position by seeking to the beginning
* and reading the header. The stream position is not restored.
*
* @warning This function performs only header-level validation. A positive result
* indicates the file appears to have a valid AaruFormat header but doesn't
* guarantee the entire image structure is valid or uncorrupted.
*
* @warning The stream must support seeking operations. Non-seekable streams
* (like pipes or network streams) may cause undefined behavior.
*
* @warning No error codes are returned for I/O failures during header reading.
* Such failures result in a return value of 0 (not recognized).
*/
int aaruf_identify_stream(FILE *image_stream)
{


@@ -28,9 +28,53 @@
* @brief Processes an index block (version 1) from the image stream.
*
* Reads and parses an index block (version 1) from the image, returning an array of index entries.
* This function handles the legacy index format used in early AaruFormat versions, providing
* compatibility with older image files. It reads the IndexHeader structure followed by a
* sequential list of IndexEntry structures, validating the index identifier for format correctness.
*
* @param ctx Pointer to the aaruformat context containing the image stream and header information.
*
* @return Returns one of the following values:
* @retval UT_array* Successfully processed the index block. This is returned when:
* - The context and image stream are valid
* - The index header is successfully read from the position specified in ctx->header.indexOffset
* - The index identifier matches IndexBlock (legacy format identifier)
* - All index entries are successfully read and stored in the UT_array
* - Memory allocation for the index entries array succeeds
* - The returned array contains all index entries from the version 1 index block
*
* @retval NULL Index processing failed. This occurs when:
* - The context parameter is NULL
* - The image stream (ctx->imageStream) is NULL or invalid
* - Cannot read the IndexHeader structure from the image stream
* - The index identifier doesn't match IndexBlock (incorrect format or corruption)
* - Memory allocation fails for the UT_array structure
* - File I/O errors occur while reading index entries
*
* @note Index Structure (Version 1):
* - IndexHeader: Contains identifier (IndexBlock), entry count, and metadata
* - IndexEntry array: Sequential list of entries describing block locations and types
* - No CRC validation is performed during processing (use verify_index_v1 for validation)
* - No compression support in version 1 index format
*
* @note Memory Management:
* - Returns a newly allocated UT_array that must be freed by the caller using utarray_free()
* - On error, any partially allocated memory is cleaned up before returning NULL
* - Each IndexEntry is copied into the array (no reference to original stream data)
*
* @note Version Compatibility:
* - Supports only legacy IndexBlock format (not IndexBlock2 or IndexBlock3)
* - Compatible with early AaruFormat and DiscImageChef image files
* - Does not handle subindex or hierarchical index structures
*
* @warning The caller is responsible for freeing the returned UT_array using utarray_free().
* Failure to free the array will result in memory leaks.
*
* @warning This function does not validate the CRC integrity of the index data.
* Use verify_index_v1() to ensure index integrity before processing.
*
* @warning The function assumes ctx->header.indexOffset points to a valid index block.
* Invalid offsets may cause file access errors or reading incorrect data.
*/
UT_array *process_index_v1(aaruformatContext *ctx)
{
@@ -75,10 +119,78 @@ UT_array *process_index_v1(aaruformatContext *ctx)
/**
* @brief Verifies the integrity of an index block (version 1) in the image stream.
*
* Checks the CRC64 of the index block without decompressing it. This function performs
* comprehensive validation of the index structure including header validation, data
* integrity verification, and version-specific CRC calculation. It ensures the index
* block is valid and uncorrupted before the image can be safely used for data access.
*
* @param ctx Pointer to the aaruformat context containing image stream and header information.
*
* @return Returns one of the following status codes:
* @retval AARUF_STATUS_OK (0) Successfully verified index integrity. This is returned when:
* - The context and image stream are valid
* - The index header is successfully read from ctx->header.indexOffset
* - The index identifier matches IndexBlock (version 1 format)
* - Memory allocation for index entries succeeds
* - All index entries are successfully read from the image stream
* - CRC64 calculation completes successfully with version-specific endianness handling
* - The calculated CRC64 matches the expected CRC64 in the index header
*
* @retval AARUF_ERROR_NOT_AARUFORMAT (-1) Invalid context or stream. This occurs when:
* - The context parameter is NULL
* - The image stream (ctx->imageStream) is NULL or invalid
*
* @retval AARUF_ERROR_CANNOT_READ_HEADER (-6) Index header reading failed. This occurs when:
* - Cannot read the complete IndexHeader structure from the image stream
* - File I/O errors prevent accessing the header at ctx->header.indexOffset
* - Insufficient data available at the specified index offset
*
* @retval AARUF_ERROR_CANNOT_READ_INDEX (-19) Index format or data access errors. This occurs when:
* - The index identifier doesn't match IndexBlock (wrong format or corruption)
* - Cannot read all index entries from the image stream
* - File I/O errors during index entry reading
* - Index structure is corrupted or truncated
*
* @retval AARUF_ERROR_NOT_ENOUGH_MEMORY (-9) Memory allocation failed. This occurs when:
* - Cannot allocate memory for the index entries array
* - System memory exhaustion prevents loading index data for verification
*
* @retval AARUF_ERROR_INVALID_BLOCK_CRC (-18) CRC64 validation failed. This occurs when:
* - The calculated CRC64 doesn't match the expected CRC64 in the index header
* - Index data corruption is detected
* - Data integrity verification fails indicating potential file damage
*
* @note CRC64 Validation Process:
* - Reads all index entries into memory for CRC calculation
* - Calculates CRC64 over the complete index entries array
* - Applies version-specific endianness conversion for compatibility
* - For imageMajorVersion <= AARUF_VERSION_V1: Uses bswap_64() for byte order correction
* - Compares calculated CRC64 with the value stored in the index header
*
* @note Version Compatibility:
* - Handles legacy byte order differences from original C# implementation
* - Supports IndexBlock format (version 1 only)
* - Does not support IndexBlock2 or IndexBlock3 formats
*
* @note Memory Management:
* - Allocates temporary memory for index entries during verification
* - Automatically frees allocated memory on both success and error conditions
* - Memory usage is proportional to the number of index entries
*
* @note Verification Scope:
* - Validates index header structure and identifier
* - Verifies data integrity through CRC64 calculation
* - Does not validate individual index entry contents or block references
* - Does not check for logical consistency of referenced blocks
*
* @warning This function reads the entire index into memory for CRC calculation.
* Large indexes may require significant memory allocation.
*
* @warning The function assumes ctx->header.indexOffset points to a valid index location.
* Invalid offsets will cause file access errors or incorrect validation.
*
* @warning CRC validation failure indicates potential data corruption and may suggest
* the image file is damaged or has been modified outside of library control.
*/
int32_t verify_index_v1(aaruformatContext *ctx)
{


@@ -28,9 +28,54 @@
* @brief Processes an index block (version 2) from the image stream.
*
* Reads and parses an index block (version 2) from the image, returning an array of index entries.
* This function handles the intermediate index format used in mid-generation AaruFormat versions,
* providing compatibility with version 2 image files. It reads the IndexHeader2 structure followed
* by a sequential list of IndexEntry structures, validating the index identifier for format correctness.
*
* @param ctx Pointer to the aaruformat context containing the image stream and header information.
*
* @return Returns one of the following values:
* @retval UT_array* Successfully processed the index block. This is returned when:
* - The context and image stream are valid
* - The index header is successfully read from the position specified in ctx->header.indexOffset
* - The index identifier matches IndexBlock2 (version 2 format identifier)
* - All index entries are successfully read and stored in the UT_array
* - Memory allocation for the index entries array succeeds
* - The returned array contains all index entries from the version 2 index block
*
* @retval NULL Index processing failed. This occurs when:
* - The context parameter is NULL
* - The image stream (ctx->imageStream) is NULL or invalid
* - Cannot read the IndexHeader2 structure from the image stream
* - The index identifier doesn't match IndexBlock2 (incorrect format or corruption)
* - Memory allocation fails for the UT_array structure
* - File I/O errors occur while reading index entries
*
* @note Index Structure (Version 2):
* - IndexHeader2: Contains identifier (IndexBlock2), entry count, and enhanced metadata
* - IndexEntry array: Sequential list of entries describing block locations and types
* - No CRC validation is performed during processing (use verify_index_v2 for validation)
* - No compression support in version 2 index format
* - Compatible with mid-generation AaruFormat improvements
*
* @note Memory Management:
* - Returns a newly allocated UT_array that must be freed by the caller using utarray_free()
* - On error, any partially allocated memory is cleaned up before returning NULL
* - Each IndexEntry is copied into the array (no reference to original stream data)
*
* @note Version Compatibility:
* - Supports only IndexBlock2 format (not IndexBlock or IndexBlock3)
* - Compatible with intermediate AaruFormat image files
* - Does not handle subindex or hierarchical index structures (introduced in v3)
*
* @warning The caller is responsible for freeing the returned UT_array using utarray_free().
* Failure to free the array will result in memory leaks.
*
* @warning This function does not validate the CRC integrity of the index data.
* Use verify_index_v2() to ensure index integrity before processing.
*
* @warning The function assumes ctx->header.indexOffset points to a valid index block.
* Invalid offsets may cause file access errors or reading incorrect data.
*/
UT_array *process_index_v2(aaruformatContext *ctx)
{
@@ -75,10 +120,78 @@ UT_array *process_index_v2(aaruformatContext *ctx)
/**
* @brief Verifies the integrity of an index block (version 2) in the image stream.
*
* Checks the CRC64 of the index block without decompressing it. This function performs
* comprehensive validation of the version 2 index structure including header validation,
* data integrity verification, and version-specific CRC calculation. It ensures the index
* block is valid and uncorrupted before the image can be safely used for data access.
*
* @param ctx Pointer to the aaruformat context containing image stream and header information.
*
* @return Returns one of the following status codes:
* @retval AARUF_STATUS_OK (0) Successfully verified index integrity. This is returned when:
* - The context and image stream are valid
* - The index header is successfully read from ctx->header.indexOffset
* - The index identifier matches IndexBlock2 (version 2 format)
* - Memory allocation for index entries succeeds
* - All index entries are successfully read from the image stream
* - CRC64 calculation completes successfully with version-specific endianness handling
* - The calculated CRC64 matches the expected CRC64 in the index header
*
* @retval AARUF_ERROR_NOT_AARUFORMAT (-1) Invalid context or stream. This occurs when:
* - The context parameter is NULL
* - The image stream (ctx->imageStream) is NULL or invalid
*
* @retval AARUF_ERROR_CANNOT_READ_HEADER (-6) Index header reading failed. This occurs when:
* - Cannot read the complete IndexHeader2 structure from the image stream
* - File I/O errors prevent accessing the header at ctx->header.indexOffset
* - Insufficient data available at the specified index offset
*
* @retval AARUF_ERROR_CANNOT_READ_INDEX (-19) Index format or data access errors. This occurs when:
* - The index identifier doesn't match IndexBlock2 (wrong format or corruption)
* - Cannot read all index entries from the image stream
* - File I/O errors during index entry reading
* - Index structure is corrupted or truncated
*
* @retval AARUF_ERROR_NOT_ENOUGH_MEMORY (-9) Memory allocation failed. This occurs when:
* - Cannot allocate memory for the index entries array
* - System memory exhaustion prevents loading index data for verification
*
* @retval AARUF_ERROR_INVALID_BLOCK_CRC (-18) CRC64 validation failed. This occurs when:
* - The calculated CRC64 doesn't match the expected CRC64 in the index header
* - Index data corruption is detected
* - Data integrity verification fails indicating potential file damage
*
* @note CRC64 Validation Process:
* - Reads all index entries into memory for CRC calculation
* - Calculates CRC64 over the complete index entries array
* - Applies version-specific endianness conversion for compatibility
* - For imageMajorVersion <= AARUF_VERSION_V1: Uses bswap_64() for byte order correction
* - Compares calculated CRC64 with the value stored in the IndexHeader2
*
* @note Version 2 Enhancements:
* - Uses IndexHeader2 structure with enhanced metadata support
* - Maintains compatibility with legacy endianness handling
* - Supports improved index entry organization compared to version 1
*
* @note Memory Management:
* - Allocates temporary memory for index entries during verification
* - Automatically frees allocated memory on both success and error conditions
* - Memory usage is proportional to the number of index entries
*
* @note Verification Scope:
* - Validates index header structure and identifier
* - Verifies data integrity through CRC64 calculation
* - Does not validate individual index entry contents or block references
* - Does not check for logical consistency of referenced blocks
*
* @warning This function reads the entire index into memory for CRC calculation.
* Large indexes may require significant memory allocation.
*
* @warning The function assumes ctx->header.indexOffset points to a valid index location.
* Invalid offsets will cause file access errors or incorrect validation.
*
* @warning CRC validation failure indicates potential data corruption and may suggest
* the image file is damaged or has been modified outside of library control.
*/
int32_t verify_index_v2(aaruformatContext *ctx)
{


@@ -29,9 +29,67 @@
* @brief Processes an index block (version 3) from the image stream.
*
* Reads and parses an index block (version 3) from the image, returning an array of index entries.
* This function handles the advanced index format used in current AaruFormat versions, supporting
* hierarchical subindex structures for improved scalability. It reads the IndexHeader3 structure
* followed by index entries, recursively processing any subindex blocks encountered to create
* a flattened array of all index entries.
*
* @param ctx Pointer to the aaruformat context containing the image stream and header information.
*
* @return Returns one of the following values:
* @retval UT_array* Successfully processed the index block and all subindexes. This is returned when:
* - The context and image stream are valid
* - The index header is successfully read from the position specified in ctx->header.indexOffset
* - The index identifier matches IndexBlock3 (version 3 format identifier)
* - All index entries are successfully read and stored in the UT_array
* - Any subindex blocks (IndexBlock3 entries) are recursively processed via add_subindex_entries()
* - Memory allocation for the index entries array succeeds
* - The returned array contains all index entries from the main index and all subindexes
*
* @retval NULL Index processing failed. This occurs when:
* - The context parameter is NULL
* - The image stream (ctx->imageStream) is NULL or invalid
* - Cannot read the IndexHeader3 structure from the image stream
* - The index identifier doesn't match IndexBlock3 (incorrect format or corruption)
* - Memory allocation fails for the UT_array structure
* - File I/O errors occur while reading index entries
* - Recursive subindex processing fails (errors in add_subindex_entries())
*
* @note Index Structure (Version 3):
* - IndexHeader3: Contains identifier (IndexBlock3), entry count, and advanced metadata
* - IndexEntry array: May contain regular entries and subindex references (IndexBlock3 type)
* - Hierarchical support: Subindex entries are recursively processed to flatten the structure
* - No CRC validation is performed during processing (use verify_index_v3 for validation)
* - Supports scalable index organization for large image files
*
* @note Subindex Processing:
* - When an IndexEntry has blockType == IndexBlock3, it references a subindex
* - Subindexes are recursively processed using add_subindex_entries()
* - All entries from subindexes are flattened into the main index entries array
* - Supports arbitrary nesting depth of subindexes
*
* @note Memory Management:
* - Returns a newly allocated UT_array that must be freed by the caller using utarray_free()
* - On error, any partially allocated memory is cleaned up before returning NULL
* - Each IndexEntry is copied into the array (no reference to original stream data)
* - Memory usage scales with total entries across all subindexes
*
* @note Version Compatibility:
* - Supports only IndexBlock3 format (not IndexBlock or IndexBlock2)
* - Compatible with current generation AaruFormat image files
* - Backward compatible with images that don't use subindexes
*
* @warning The caller is responsible for freeing the returned UT_array using utarray_free().
* Failure to free the array will result in memory leaks.
*
* @warning This function does not validate the CRC integrity of the index data.
* Use verify_index_v3() to ensure index integrity before processing.
*
* @warning Recursive subindex processing may cause significant memory usage and processing
* time for images with deeply nested or numerous subindexes.
*
* @warning The function assumes ctx->header.indexOffset points to a valid index block.
* Invalid offsets may cause file access errors or reading incorrect data.
*/
UT_array *process_index_v3(aaruformatContext *ctx)
{
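The validate-then-read sequence documented above (read a fixed header, check the identifier, then copy out the entry array) can be sketched in miniature. This is a hedged illustration, not the library's code: `MiniIndexHeader`, `MiniIndexEntry`, and `MINI_INDEX_MAGIC` are simplified stand-ins for the real `IndexHeader3`/`IndexEntry` layouts and the `IndexBlock3` identifier, and it parses from memory rather than a stream.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical, simplified on-disk layout: a 2-field header followed by
 * fixed-size entries. The real IndexHeader3/IndexEntry have more fields. */
typedef struct { uint32_t identifier; uint16_t entries; } MiniIndexHeader;
typedef struct { uint32_t blockType; uint64_t offset; } MiniIndexEntry;

#define MINI_INDEX_MAGIC 0x33584449u /* stand-in for the IndexBlock3 identifier */

/* Parse an index from a memory buffer: validate the identifier, then copy
 * out up to max_out entries. Returns the entry count, or -1 on error. */
static int parse_mini_index(const uint8_t *buf, size_t len,
                            MiniIndexEntry *out, size_t max_out)
{
    MiniIndexHeader hdr;

    if(len < sizeof(hdr)) return -1;                /* truncated header     */
    memcpy(&hdr, buf, sizeof(hdr));
    if(hdr.identifier != MINI_INDEX_MAGIC) return -1; /* wrong block type   */
    if(len < sizeof(hdr) + (size_t)hdr.entries * sizeof(MiniIndexEntry))
        return -1;                                  /* truncated entry list */
    if(hdr.entries > max_out) return -1;            /* caller buffer small  */

    memcpy(out, buf + sizeof(hdr), (size_t)hdr.entries * sizeof(MiniIndexEntry));
    return (int)hdr.entries;
}
```

The same shape scales to the real function: a mismatched identifier or short read maps to the NULL return documented above.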
@@ -83,10 +141,67 @@ UT_array *process_index_v3(aaruformatContext *ctx)
* @brief Adds entries from a subindex block (version 3) to the main index entries array.
*
* Recursively reads subindex blocks and appends their entries to the main index entries array.
* This function is a critical component of the hierarchical index system in AaruFormat version 3,
* enabling scalable index organization for large image files. It performs recursive traversal
* of subindex structures, flattening the hierarchy into a single array while maintaining
* entry order and handling nested subindex references.
*
* @param ctx Pointer to the aaruformat context containing the image stream.
* @param index_entries Pointer to the UT_array of main index entries to append to.
* @param subindex_entry Pointer to the IndexEntry describing the subindex location and metadata.
*
* @note Function Behavior:
* This function returns no value (void); it operates through side effects on the index_entries array:
* - Successfully appends subindex entries when all parameters are valid and file I/O succeeds
* - Recursively processes nested subindexes when entries have blockType == IndexBlock3
* - Silently returns without modification when validation fails or errors occur
* - Does not perform error reporting beyond logging (errors are handled gracefully)
*
* @note Success Conditions - Entries are added when:
* - All input parameters (ctx, index_entries, subindex_entry) are non-NULL
* - The image stream is valid and accessible
* - File positioning succeeds to subindex_entry->offset
* - The subindex header is successfully read from the image stream
* - The subindex identifier matches IndexBlock3 (correct format)
* - All entries in the subindex are successfully read and processed
* - Recursive calls for nested subindexes complete successfully
*
* @note Failure Conditions - Function returns silently when:
* - Any input parameter is NULL (ctx, index_entries, or subindex_entry)
* - The image stream (ctx->imageStream) is NULL or invalid
* - File I/O errors prevent reading the subindex header or entries
* - The subindex identifier doesn't match IndexBlock3 (format mismatch or corruption)
* - Memory operations fail during UT_array manipulation
*
* @note Recursive Processing:
* - When an entry has blockType == IndexBlock3, it indicates another subindex
* - The function recursively calls itself to process nested subindexes
* - Supports arbitrary nesting depth limited only by stack space and file structure
* - All entries are flattened into the main index_entries array regardless of nesting level
*
* @note Memory and Performance:
* - Memory usage scales with the total number of entries across all processed subindexes
* - File I/O is performed for each subindex block accessed
* - Processing time increases with the depth and breadth of subindex hierarchies
* - No internal memory allocation - uses existing UT_array structure
*
* @note Error Handling:
* - Designed for graceful degradation - errors don't propagate to callers
* - Invalid subindexes are skipped without affecting other processing
* - Partial success is possible when some subindexes fail but others succeed
* - Logging provides visibility into processing failures for debugging
*
* @warning This function modifies the index_entries array by appending new entries.
* Ensure the array has sufficient capacity and is properly initialized.
*
* @warning Recursive processing can cause significant stack usage for deeply nested
* subindex structures. Very deep hierarchies may cause stack overflow.
*
* @warning The function assumes subindex_entry->offset points to a valid subindex block.
* Invalid offsets will cause file access errors but are handled gracefully.
*
* @warning No validation is performed on individual IndexEntry contents - only
* structural validation of subindex headers is done.
*/
void add_subindex_entries(aaruformatContext *ctx, UT_array *index_entries, IndexEntry *subindex_entry)
{
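The recursive flattening described above (leaf entries are appended in order; `IndexBlock3`-typed entries recurse into a nested subindex) can be sketched with an in-memory tree. This is an illustrative sketch only: `Node`, `LEAF`, and `SUBINDEX` are hypothetical stand-ins for the real stream-backed `IndexEntry` structures.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical in-memory model of the hierarchy: an entry is either a leaf
 * or a reference to a nested subindex (blockType == SUBINDEX). */
enum { LEAF = 0, SUBINDEX = 1 };

typedef struct Node {
    int                blockType;
    int                value;      /* leaf payload, stands in for a real IndexEntry */
    const struct Node *children;   /* used when blockType == SUBINDEX */
    size_t             childCount;
} Node;

/* Recursively append every leaf under `nodes` to `out`, mirroring how
 * add_subindex_entries() flattens nested subindexes into one array while
 * preserving entry order. Returns the running count of leaves written. */
static size_t flatten(const Node *nodes, size_t count, int *out, size_t written)
{
    for(size_t i = 0; i < count; i++)
    {
        if(nodes[i].blockType == SUBINDEX)
            written = flatten(nodes[i].children, nodes[i].childCount, out, written);
        else
            out[written++] = nodes[i].value;
    }
    return written;
}
```

As the warnings above note, recursion depth tracks the nesting depth of subindexes, so pathological hierarchies translate directly into stack usage.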
@@ -131,10 +246,90 @@ void add_subindex_entries(aaruformatContext *ctx, UT_array *index_entries, Index
/**
* @brief Verifies the integrity of an index block (version 3) in the image stream.
*
* Checks the CRC64 of the main index block without decompressing it. This function
* performs comprehensive validation of the advanced version 3 index structure including header
* validation, data integrity verification, and version-specific CRC calculation. Note that this
* function validates only the main index block's CRC and does not recursively validate subindex
* CRCs, focusing on the primary index structure integrity.
*
* @param ctx Pointer to the aaruformat context containing image stream and header information.
*
* @return Returns one of the following status codes:
* @retval AARUF_STATUS_OK (0) Successfully verified index integrity. This is returned when:
* - The context and image stream are valid
* - The index header is successfully read from ctx->header.indexOffset
* - The index identifier matches IndexBlock3 (version 3 format)
* - Memory allocation for index entries succeeds
* - All index entries are successfully read from the image stream
* - CRC64 calculation completes successfully with version-specific endianness handling
* - The calculated CRC64 matches the expected CRC64 in the index header
*
* @retval AARUF_ERROR_NOT_AARUFORMAT (-1) Invalid context or stream. This occurs when:
* - The context parameter is NULL
* - The image stream (ctx->imageStream) is NULL or invalid
*
* @retval AARUF_ERROR_CANNOT_READ_HEADER (-6) Index header reading failed. This occurs when:
* - Cannot read the complete IndexHeader3 structure from the image stream
* - File I/O errors prevent accessing the header at ctx->header.indexOffset
* - Insufficient data available at the specified index offset
*
* @retval AARUF_ERROR_CANNOT_READ_INDEX (-19) Index format or data access errors. This occurs when:
* - The index identifier doesn't match IndexBlock3 (wrong format or corruption)
* - Cannot read all index entries from the image stream
* - File I/O errors during index entry reading
* - Index structure is corrupted or truncated
*
* @retval AARUF_ERROR_NOT_ENOUGH_MEMORY (-9) Memory allocation failed. This occurs when:
* - Cannot allocate memory for the index entries array
* - System memory exhaustion prevents loading index data for verification
*
* @retval AARUF_ERROR_INVALID_BLOCK_CRC (-18) CRC64 validation failed. This occurs when:
* - The calculated CRC64 doesn't match the expected CRC64 in the index header
* - Index data corruption is detected
* - Data integrity verification fails indicating potential file damage
*
* @note CRC64 Validation Process:
* - Reads all main index entries into memory for CRC calculation
* - Calculates CRC64 over the complete main index entries array
* - Applies version-specific endianness conversion for compatibility
* - For imageMajorVersion <= AARUF_VERSION_V1: Uses bswap_64() for byte order correction
* - Compares calculated CRC64 with the value stored in the IndexHeader3
*
* @note Version 3 Specific Features:
* - Uses IndexHeader3 structure with hierarchical subindex support
* - Maintains compatibility with legacy endianness handling
* - Supports advanced index entry organization with subindex references
* - Only validates the main index block CRC - subindex CRCs are not recursively checked
*
* @note Validation Scope:
* - Validates main index header structure and identifier
* - Verifies data integrity of the primary index entries through CRC64 calculation
* - Does not validate individual IndexEntry contents or block references
* - Does not check for logical consistency of referenced blocks
* - Does not recursively validate subindex blocks (unlike hierarchical processing)
*
* @note Memory Management:
* - Allocates temporary memory for index entries during verification
* - Automatically frees allocated memory on both success and error conditions
* - Memory usage is proportional to the number of entries in the main index only
*
* @note Subindex Validation Limitation:
* - This function does not recursively validate subindex CRCs
* - Each subindex block contains its own CRC that would need separate validation
* - For complete integrity verification, subindex blocks should be validated individually
* - The main index CRC only covers the primary index entries, not the subindex content
*
* @warning This function reads only the main index into memory for CRC calculation.
* Subindex blocks are not loaded or validated by this function.
*
* @warning The function assumes ctx->header.indexOffset points to a valid index location.
* Invalid offsets will cause file access errors or incorrect validation.
*
* @warning CRC validation failure indicates potential data corruption in the main index
* and may suggest the image file is damaged or has been modified outside of library control.
*
* @warning For complete integrity verification of hierarchical indexes, additional validation
* of subindex blocks may be required beyond this function's scope.
*/
int32_t verify_index_v3(aaruformatContext *ctx)
{
@@ -196,7 +391,7 @@ int32_t verify_index_v3(aaruformatContext *ctx)
FATAL("Could not read index entries.");
free(index_entries);
TRACE("Exiting verify_index_v3() = AARUF_ERROR_CANNOT_READ_INDEX");
return AARUF_ERROR_CANNOT_READ_INDEX;
}
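The version-specific endianness handling noted above (for `imageMajorVersion <= AARUF_VERSION_V1` the calculated CRC64 is byte-swapped before comparison) can be sketched as follows. `swap64` is a portable equivalent of glibc's `bswap_64()`, and `AARUF_VERSION_V1` is used here as a stand-in constant; the comparison logic is an assumption based on the notes, not the library's exact code.

```c
#include <assert.h>
#include <stdint.h>

/* Portable equivalent of bswap_64() from <byteswap.h>: reverse byte order
 * by swapping bytes, then 16-bit halves, then 32-bit halves. */
static uint64_t swap64(uint64_t v)
{
    v = ((v & 0x00FF00FF00FF00FFULL) << 8)  | ((v >> 8)  & 0x00FF00FF00FF00FFULL);
    v = ((v & 0x0000FFFF0000FFFFULL) << 16) | ((v >> 16) & 0x0000FFFF0000FFFFULL);
    return (v << 32) | (v >> 32);
}

#define AARUF_VERSION_V1 1 /* stand-in for the real version constant */

/* Version-aware CRC comparison: v1-era images stored the index CRC64 with
 * the opposite byte order, so the calculated value is swapped first. */
static int crc_matches(uint64_t calculated, uint64_t expected, int imageMajorVersion)
{
    if(imageMajorVersion <= AARUF_VERSION_V1) calculated = swap64(calculated);
    return calculated == expected;
}
```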


@@ -32,9 +32,78 @@
* @brief Opens an existing AaruFormat image file.
*
* Opens the specified image file and returns a pointer to the initialized aaruformat context.
* This function performs comprehensive validation of the image file format, reads and processes
* all index entries, initializes data structures for reading operations, and sets up caches
* for optimal performance. It supports multiple AaruFormat versions and handles various block
* types including data blocks, deduplication tables, metadata, and checksums.
*
* @param filepath Path to the image file to open.
*
* @return Returns one of the following:
* @retval aaruformatContext* Successfully opened and initialized context. The returned pointer contains:
* - Validated AaruFormat headers and metadata
* - Processed index entries with all discoverable blocks
* - Loaded deduplication tables (DDT) for efficient sector access
* - Initialized block and header caches for performance
* - Open file stream ready for reading operations
* - Populated image information and geometry data
* - ECC context initialized for error correction support
*
* @retval NULL Opening failed. The specific error can be determined by checking errno, which will be set to:
* - AARUF_ERROR_NOT_ENOUGH_MEMORY (-9) when memory allocation fails for:
* * Context allocation
* * Readable sector tags bitmap allocation
* * Application version string allocation
* * Image version string allocation
* - AARUF_ERROR_FILE_TOO_SMALL (-2) when file reading fails:
* * Cannot read the AaruFormat header (file too small or corrupted)
* * Cannot read the extended header for version 2+ formats
* - AARUF_ERROR_NOT_AARUFORMAT (-1) when format validation fails:
* * File identifier doesn't match DIC_MAGIC or AARU_MAGIC
* * File is not a valid AaruFormat image
* - AARUF_ERROR_INCOMPATIBLE_VERSION (-3) when:
* * Image major version exceeds the maximum supported version
* * Future format versions that cannot be read by this library
* - AARUF_ERROR_CANNOT_READ_INDEX (-4) when index processing fails:
* * Cannot seek to the index offset specified in the header
* * Cannot read the index signature
* * Index signature is not a recognized index block type
* * Index processing functions return NULL (corrupted index)
* - Other error codes may be propagated from block processing functions:
* * Data block processing errors
* * DDT processing errors
* * Metadata processing errors
*
* @note Format Support:
* - Supports AaruFormat versions 1.x and 2.x
* - Automatically detects and handles different index formats (v1, v2, v3)
* - Backwards compatible with older DIC format identifiers
* - Handles both small and large deduplication tables
*
* @note Block Processing:
* - Processes all indexed blocks including data, DDT, geometry, metadata, tracks, CICM, dump hardware, and checksums
* - Non-critical block processing errors are logged but don't prevent opening
* - Critical errors (DDT processing failures) cause opening to fail
* - Unknown block types are logged but ignored
*
* @note Memory Management:
* - Allocates memory for various context structures and caches
* - On failure, all previously allocated memory is properly cleaned up
* - The returned context must be freed using aaruf_close()
*
* @note Performance Optimization:
* - Initializes block and header caches based on sector size and available memory
* - Cache sizes are calculated to optimize memory usage and access patterns
* - ECC context is pre-initialized for Compact Disc support
*
* @warning The function requires a valid user data deduplication table to be present.
* Images without a DDT will fail to open even if otherwise valid.
*
* @warning File access is performed in binary read mode. The file must be accessible
* and not locked by other processes.
*
* @warning Some memory allocations (version strings) are optional and failure doesn't
* prevent opening, but may affect functionality that depends on version information.
*/
void *aaruf_open(const char *filepath)
{
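The NULL-return-plus-errno contract documented above suggests a caller pattern like the one below. This is a hedged sketch only: `open_image` is a stand-in, not the real `aaruf_open()`, and `AARUF_ERROR_NOT_AARUFORMAT` is reproduced from the documented value rather than the library headers.

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Stand-in for the documented error code (the real constant lives in the
 * aaruformat headers). */
#define AARUF_ERROR_NOT_AARUFORMAT (-1)

/* Stand-in for aaruf_open(): on failure, return NULL and report the reason
 * through errno, as the documentation above describes. */
static void *open_image(const char *filepath)
{
    if(filepath == NULL || filepath[0] == '\0')
    {
        errno = AARUF_ERROR_NOT_AARUFORMAT;
        return NULL;
    }

    static int dummy_context; /* placeholder for a real aaruformatContext */
    return &dummy_context;
}
```

A caller would check the returned pointer, and only on NULL inspect errno to distinguish the documented failure modes.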


@@ -29,17 +29,58 @@
*
* Reads the specified media tag from the image and stores it in the provided buffer.
* Media tags contain metadata information about the storage medium such as disc
* information, lead-in/lead-out data, manufacturer-specific information, or other
* medium-specific metadata. This function uses a hash table lookup for efficient
* tag retrieval and supports buffer size querying when data pointer is NULL.
*
* @param context Pointer to the aaruformat context.
* @param data Pointer to the buffer to store the tag data. Can be NULL to query tag length.
* @param tag Tag identifier to read (specific to media type and format).
* @param length Pointer to the length of the buffer on input; updated with actual tag length on output.
*
* @return Returns one of the following status codes:
* @retval AARUF_STATUS_OK (0) Successfully read the media tag. This is returned when:
* - The context is valid and properly initialized
* - The requested media tag exists in the image's media tag hash table
* - The provided buffer is large enough to contain the tag data
* - The tag data is successfully copied to the output buffer
* - The length parameter is updated with the actual tag data length
*
* @retval AARUF_ERROR_NOT_AARUFORMAT (-1) The context is invalid. This occurs when:
* - The context parameter is NULL
* - The context magic number doesn't match AARU_MAGIC (invalid context type)
*
* @retval AARUF_ERROR_MEDIA_TAG_NOT_PRESENT (-11) The requested media tag does not exist. This occurs when:
* - The tag identifier is not found in the image's media tag hash table
* - The image was created without the requested metadata
* - The tag identifier is not supported for this media type
* - The length parameter is set to 0 when this error is returned
*
* @retval AARUF_ERROR_BUFFER_TOO_SMALL (-10) The provided buffer is insufficient. This occurs when:
* - The data parameter is NULL (used for length querying)
* - The buffer length (*length) is smaller than the required tag data length
* - The length parameter is updated with the required size for retry
*
* @note Buffer Size Querying:
* - Pass data as NULL to query the required buffer size without reading data
* - The length parameter will be updated with the required size
* - This allows proper buffer allocation before the actual read operation
*
* @note Media Tag Types:
* - Tags are media-type specific (optical disc, floppy disk, hard disk, etc.)
* - Common tags include TOC data, lead-in/out, manufacturer data, defect lists
* - Tag availability depends on what was preserved during the imaging process
*
* @note Hash Table Lookup:
* - Uses efficient O(1) hash table lookup for tag retrieval
* - Tag identifiers are integer values specific to the media format
* - Hash table is populated during image opening from indexed metadata blocks
*
* @warning The function performs a direct memory copy operation. Ensure the output
* buffer has sufficient space to prevent buffer overflows.
*
* @warning Media tag data is stored as-is from the original medium. No format
* conversion or validation is performed on the tag content.
*/
int32_t aaruf_read_media_tag(void *context, uint8_t *data, int32_t tag, uint32_t *length)
{
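The two-call buffer-size-query contract documented above (pass `data` as NULL, get the required length back with `AARUF_ERROR_BUFFER_TOO_SMALL`, allocate, retry) can be modeled with a stand-in. `read_tag` and its single hard-coded tag are hypothetical; only the return-code semantics mirror the documentation.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Stand-in status codes mirroring the documented values. */
#define AARUF_STATUS_OK                     0
#define AARUF_ERROR_BUFFER_TOO_SMALL      (-10)
#define AARUF_ERROR_MEDIA_TAG_NOT_PRESENT (-11)

/* Stand-in for aaruf_read_media_tag() with one hard-coded tag (id 1),
 * reproducing the documented NULL-data length-query contract. */
static int32_t read_tag(uint8_t *data, int32_t tag, uint32_t *length)
{
    static const uint8_t toc[4] = {0xA0, 0xA1, 0xA2, 0x01};

    if(tag != 1)
    {
        *length = 0;                             /* documented: length is zeroed */
        return AARUF_ERROR_MEDIA_TAG_NOT_PRESENT;
    }

    if(data == NULL || *length < sizeof(toc))
    {
        *length = (uint32_t)sizeof(toc);         /* tell the caller the needed size */
        return AARUF_ERROR_BUFFER_TOO_SMALL;
    }

    memcpy(data, toc, sizeof(toc));
    *length = (uint32_t)sizeof(toc);
    return AARUF_STATUS_OK;
}
```

A caller first queries with `data == NULL`, allocates `*length` bytes, then calls again to receive the tag content.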
@@ -99,24 +140,103 @@ int32_t aaruf_read_media_tag(void *context, uint8_t *data, int32_t tag, uint32_t
*
* Reads user data from the specified sector address in the image. This function
* reads only the user data portion of the sector, without any additional metadata
* or ECC/EDC information. It handles block-based deduplication, compression
* (LZMA/FLAC), and caching for optimal performance. The function supports both
* DDT v1 and v2 formats for sector-to-block mapping.
*
* @param context Pointer to the aaruformat context.
* @param sector_address The logical sector address to read from.
* @param data Pointer to buffer where sector data will be stored. Can be NULL to query length.
* @param length Pointer to variable containing buffer size on input, actual data length on output.
*
* @return Returns one of the following status codes:
* @retval AARUF_STATUS_OK (0) Successfully read the sector data. This is returned when:
* - The sector data is successfully retrieved from cache or decompressed from storage
* - Block header and data are successfully read and processed
* - Decompression (if needed) completes successfully
* - The sector data is copied to the output buffer
* - The length parameter is updated with the actual sector size
*
* @retval AARUF_STATUS_SECTOR_NOT_DUMPED (1) The sector was not dumped during imaging. This occurs when:
* - The DDT entry indicates the sector status is SectorStatusNotDumped
* - The original imaging process skipped this sector due to read errors
* - The length parameter is set to the expected sector size for buffer allocation
*
* @retval AARUF_ERROR_NOT_AARUFORMAT (-1) The context is invalid. This occurs when:
* - The context parameter is NULL
* - The context magic number doesn't match AARU_MAGIC (invalid context type)
*
* @retval AARUF_ERROR_SECTOR_OUT_OF_BOUNDS (-8) The sector address exceeds image bounds. This occurs when:
* - sector_address is greater than or equal to ctx->imageInfo.Sectors
* - Attempting to read beyond the logical extent of the imaged medium
*
* @retval AARUF_ERROR_BUFFER_TOO_SMALL (-10) The provided buffer is insufficient. This occurs when:
* - The data parameter is NULL (used for length querying)
* - The buffer length (*length) is smaller than the block's sector size
* - The length parameter is updated with the required size for retry
*
* @retval AARUF_ERROR_NOT_ENOUGH_MEMORY (-9) Memory allocation failed. This occurs when:
* - Cannot allocate memory for block header (if not cached)
* - Cannot allocate memory for uncompressed block data
* - Cannot allocate memory for compressed data buffer (LZMA/FLAC)
* - Cannot allocate memory for decompressed block buffer
*
* @retval AARUF_ERROR_CANNOT_READ_HEADER (-6) Block header reading failed. This occurs when:
* - Cannot read the complete BlockHeader structure from the image stream
* - File I/O errors prevent reading header data at the calculated block offset
* - Image file corruption or truncation
*
* @retval AARUF_ERROR_CANNOT_READ_BLOCK (-7) Block data reading failed. This occurs when:
* - Cannot read uncompressed block data from the image stream
* - Cannot read LZMA properties from compressed blocks
* - Cannot read compressed data from LZMA or FLAC blocks
* - File I/O errors during block data access
*
* @retval AARUF_ERROR_CANNOT_DECOMPRESS_BLOCK (-17) Decompression failed. This occurs when:
* - LZMA decoder returns a non-zero error code during decompression
* - FLAC decoder fails to decompress audio data properly
* - Decompressed data size doesn't match the expected block length
* - Compression algorithm encounters corrupted or invalid compressed data
*
* @retval AARUF_ERROR_UNSUPPORTED_COMPRESSION (-13) Unsupported compression algorithm. This occurs when:
* - The block header specifies a compression type not supported by this library
* - Future compression algorithms not implemented in this version
*
* @retval Other error codes may be propagated from DDT decoding functions:
* - From decode_ddt_entry_v1(): AARUF_ERROR_NOT_AARUFORMAT (-1)
* - From decode_ddt_entry_v2(): AARUF_ERROR_NOT_AARUFORMAT (-1), AARUF_ERROR_CANNOT_READ_BLOCK (-7),
* AARUF_ERROR_CANNOT_DECOMPRESS_BLOCK (-17), AARUF_ERROR_INVALID_BLOCK_CRC (-18)
*
* @note DDT Processing:
* - Automatically selects DDT v1 or v2 decoding based on ctx->ddtVersion
* - DDT decoding provides sector offset within block and block file offset
* - Handles deduplication where multiple sectors may reference the same block
*
* @note Caching Mechanism:
* - Block headers are cached for performance (ctx->blockHeaderCache)
* - Decompressed block data is cached to avoid repeated decompression (ctx->blockCache)
* - Cache hits eliminate file I/O and decompression overhead
* - Cache sizes are determined during context initialization
*
* @note Compression Support:
* - None: Direct reading of uncompressed block data
* - LZMA: Industry-standard compression with properties header
* - FLAC: Audio-optimized compression for CD audio blocks
* - Each compression type has specific decompression requirements and error conditions
*
* @note Buffer Size Querying:
* - Pass data as NULL to query the required buffer size without reading data
* - The length parameter will be updated with the block's sector size
* - Useful for dynamic buffer allocation before the actual read operation
*
* @warning This function reads only user data. For complete sector data including
* metadata and ECC/EDC information, use aaruf_read_sector_long().
*
* @warning Memory allocated for caching is managed by the context. Do not free
* cached block data as it may be reused by subsequent operations.
*
* @warning Sector addresses are zero-based. The maximum valid address is
* ctx->imageInfo.Sectors - 1.
*/
int32_t aaruf_read_sector(void *context, uint64_t sector_address, uint8_t *data, uint32_t *length)
{
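The deduplication behavior noted above (several sectors may resolve to the same stored block, and a DDT entry can also mark a sector as never dumped) can be sketched with a much-simplified table. This is an assumption-laden illustration: `MiniImage`, the flat `ddt` array, and the sentinel value are hypothetical, whereas a real DDT entry packs status bits and, in v2, lives in compressed on-disk tables.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical, much-simplified deduplication table: ddt[sector] gives the
 * backing block number, and distinct sectors may share one block. */
#define SECTOR_NOT_DUMPED UINT32_MAX

typedef struct {
    const uint32_t *ddt;     /* ddt[sector] = block number, or SECTOR_NOT_DUMPED */
    uint64_t        sectors; /* total sectors in the image */
} MiniImage;

/* Resolve a sector to its backing block. Returns 0 on success, 1 when the
 * sector was never dumped, and -8 for an out-of-bounds address (mirroring
 * AARUF_ERROR_SECTOR_OUT_OF_BOUNDS). */
static int resolve_sector(const MiniImage *img, uint64_t sector, uint32_t *block)
{
    if(sector >= img->sectors) return -8;
    if(img->ddt[sector] == SECTOR_NOT_DUMPED) return 1;
    *block = img->ddt[sector];
    return 0;
}
```

The same lookup is why the real function can serve repeated-content sectors from the block cache without touching the file again.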
@@ -418,26 +538,68 @@ int32_t aaruf_read_sector(void *context, uint64_t sector_address, uint8_t *data,
* Reads user data from the specified sector address within a particular track.
* This function is specifically designed for optical disc images where sectors
* are organized by tracks. The sector address is relative to the start of the track.
* It validates the media type, locates the specified track by sequence number,
* calculates the absolute sector address, and delegates to aaruf_read_sector().
*
* @param context Pointer to the aaruformat context.
* @param data Pointer to buffer where sector data will be stored.
* @param sector_address The sector address relative to the track start.
* @param length Pointer to variable containing buffer size on input, actual data length on output.
* @param track The track sequence number to read from.
*
* @return Returns one of the following status codes:
* @retval AARUF_STATUS_OK (0) Successfully read the sector data from the specified track. This is returned when:
* - The context is valid and the media type is OpticalDisc
* - The specified track sequence number exists in the data tracks array
* - The underlying aaruf_read_sector() call succeeds
* - The sector data is successfully retrieved and copied to the output buffer
*
* @retval AARUF_ERROR_NOT_AARUFORMAT (-1) The context is invalid. This occurs when:
* - The context parameter is NULL
* - The context magic number doesn't match AARU_MAGIC (invalid context type)
*
* @retval AARUF_ERROR_INCORRECT_MEDIA_TYPE (-5) The media is not an optical disc. This occurs when:
* - ctx->imageInfo.XmlMediaType is not OpticalDisc
* - This function is only applicable to CD, DVD, BD, and other optical disc formats
*
* @retval AARUF_ERROR_TRACK_NOT_FOUND (-12) The specified track does not exist. This occurs when:
* - No track in ctx->dataTracks[] has a sequence number matching the requested track
* - The track may not contain data or may not have been imaged
* - Only data tracks are searched; audio-only tracks are not included
*
* @retval All other error codes from aaruf_read_sector() may be returned:
* - AARUF_STATUS_SECTOR_NOT_DUMPED (1) - Sector was not dumped during imaging
* - AARUF_ERROR_SECTOR_OUT_OF_BOUNDS (-8) - Calculated absolute sector address exceeds image bounds
* - AARUF_ERROR_BUFFER_TOO_SMALL (-10) - Data buffer is NULL or insufficient size
* - AARUF_ERROR_NOT_ENOUGH_MEMORY (-9) - Memory allocation fails during sector reading
* - AARUF_ERROR_CANNOT_READ_HEADER (-6) - Block header cannot be read from image stream
* - AARUF_ERROR_CANNOT_READ_BLOCK (-7) - Block data cannot be read from image stream
* - AARUF_ERROR_CANNOT_DECOMPRESS_BLOCK (-17) - LZMA or FLAC decompression fails
* - AARUF_ERROR_UNSUPPORTED_COMPRESSION (-13) - Block uses unsupported compression
*
* @note Track-Relative Addressing:
* - The sector_address parameter is relative to the start of the specified track
* - The function calculates the absolute sector address as: track.start + sector_address
* - Track boundaries are defined by track.start and track.end values
*
* @note Data Track Filtering:
* - Only tracks in the ctx->dataTracks[] array are searched
* - Audio-only tracks without data content are not included in this search
* - The track sequence number is the logical track number from the disc TOC
*
* @note Function Delegation:
* - This function is a wrapper that performs track validation and address calculation
* - The actual sector reading is performed by aaruf_read_sector()
* - All caching, decompression, and DDT processing occurs in the underlying function
*
* @warning This function is only applicable to optical disc media types.
* Attempting to use it with block media will result in AARUF_ERROR_INCORRECT_MEDIA_TYPE.
*
* @warning The sector_address is relative to the track start, not the disc start.
* Ensure correct address calculation when working with multi-track discs.
*
* @warning Track sequence numbers may not be contiguous. Always verify track
* existence before assuming a track number is valid.
*/
int32_t aaruf_read_track_sector(void *context, uint8_t *data, uint64_t sector_address, uint32_t *length, uint8_t track)
{
@@ -489,28 +651,98 @@ int32_t aaruf_read_track_sector(void *context, uint8_t *data, uint64_t sector_ad
* Reads the complete sector data including user data, ECC/EDC, subchannel data,
* and other metadata depending on the media type. For optical discs, this returns
* a full 2352-byte sector with sync, header, user data, and ECC/EDC. For block
* media with tags, this includes both the user data and tag information. The function
* handles complex sector reconstruction including ECC correction and format-specific
* metadata assembly.
*
* @param context Pointer to the aaruformat context.
* @param sector_address The logical sector address to read from.
* @param data Pointer to buffer where complete sector data will be stored. Can be NULL to query length.
* @param length Pointer to variable containing buffer size on input, actual data length on output.
*
* @return Returns one of the following status codes:
* @retval AARUF_STATUS_OK (0) Successfully read the complete sector with metadata. This is returned when:
* - The sector data is successfully retrieved and reconstructed with all metadata
* - For optical discs: sync, header, user data, and ECC/EDC are assembled into 2352 bytes
* - For block media: user data and tags are combined according to media type
* - ECC reconstruction (if applicable) completes successfully
* - All required metadata (prefix/suffix, subheaders, etc.) is available and applied
*
* @retval AARUF_STATUS_SECTOR_NOT_DUMPED (1) The sector or metadata was not dumped. This can occur when:
* - The underlying sector data was not dumped during imaging
* - CD prefix or suffix metadata indicates NotDumped status for the sector
* - ECC reconstruction cannot be performed due to missing correction data
*
* @retval AARUF_ERROR_NOT_AARUFORMAT (-1) The context is invalid. This occurs when:
* - The context parameter is NULL
* - The context magic number doesn't match AARU_MAGIC (invalid context type)
*
* @retval AARUF_ERROR_INCORRECT_MEDIA_TYPE (-5) The media type doesn't support long sector reading. This occurs when:
* - ctx->imageInfo.XmlMediaType is not OpticalDisc or BlockMedia
* - For BlockMedia: the specific media type is not supported (not Apple, Priam, etc.)
* - Media types that don't have extended sector formats
*
* @retval AARUF_ERROR_BUFFER_TOO_SMALL (-10) The buffer is insufficient for complete sector data. This occurs when:
* - The data parameter is NULL (used for length querying)
* - For optical discs: buffer length is less than 2352 bytes
* - For block media: buffer length is less than (user data size + tag size)
* - The length parameter is updated with the required size for retry
*
* @retval AARUF_ERROR_NOT_ENOUGH_MEMORY (-9) Memory allocation failed. This occurs when:
* - Cannot allocate memory for bare user data during optical disc processing
* - Cannot allocate memory for user data during block media processing
* - Memory allocation fails in underlying aaruf_read_sector() calls
*
* @retval AARUF_ERROR_TRACK_NOT_FOUND (-12) Cannot locate the sector's track. This occurs when:
* - For optical discs: the sector address doesn't fall within any data track boundaries
* - No track contains the specified sector address (address not in any track.start to track.end range)
* - The track list is empty or corrupted
*
* @retval AARUF_ERROR_INVALID_TRACK_FORMAT (-14) The track has an unsupported format. This occurs when:
* - The track type is not recognized (not Audio, Data, CdMode1, CdMode2*, etc.)
* - Internal track format validation fails
*
* @retval AARUF_ERROR_REACHED_UNREACHABLE_CODE (-15) Internal logic error. This occurs when:
* - Required metadata structures (sectorPrefix, sectorSuffix, etc.) are unexpectedly NULL
* - Control flow reaches states that should be impossible with valid image data
* - Indicates potential image corruption or library bug
*
* @retval All error codes from aaruf_read_sector() may be propagated:
* - AARUF_ERROR_SECTOR_OUT_OF_BOUNDS (-8) - Calculated sector address exceeds image bounds
* - AARUF_ERROR_CANNOT_READ_HEADER (-6) - Block header cannot be read
* - AARUF_ERROR_CANNOT_READ_BLOCK (-7) - Block data cannot be read
* - AARUF_ERROR_CANNOT_DECOMPRESS_BLOCK (-17) - Decompression fails
* - AARUF_ERROR_UNSUPPORTED_COMPRESSION (-13) - Compression algorithm not supported
*
* @note Optical Disc Sector Reconstruction:
* - Creates full 2352-byte sectors from separate user data, sync, header, and ECC/EDC components
* - Supports different CD modes: Mode 1, Mode 2 Form 1, Mode 2 Form 2, Mode 2 Formless
* - Handles ECC correction using stored correction data or reconstructed ECC
* - Uses prefix/suffix DDTs to determine correction strategy for each sector
*
* @note Block Media Tag Assembly:
* - Combines user data (typically 512 bytes) with media-specific tags
* - Tag sizes vary by media type: Apple (12-20 bytes), Priam (24 bytes)
* - Tags contain manufacturer-specific information like spare sector mapping
*
* @note ECC Reconstruction Modes:
* - Correct: Reconstructs ECC/EDC from user data (no stored correction data needed)
* - NotDumped: Indicates the metadata portion was not successfully read during imaging
* - Stored: Uses pre-computed correction data stored separately in the image
*
* @note Buffer Size Requirements:
* - Optical discs: Always 2352 bytes (full raw sector)
* - Block media: User data size + tag size (varies by media type)
* - Pass data as NULL to query the exact required buffer size
*
* @warning For optical discs, this function requires track information to be available.
* Images without track data may not support long sector reading.
*
* @warning The function performs complex sector reconstruction. Corrupted metadata
* or missing correction data may result in incorrect sector assembly.
*
* @warning Not all AaruFormat images contain the metadata necessary for long sector
* reading. Some images may only support basic sector reading via aaruf_read_sector().
*/
int32_t aaruf_read_sector_long(void *context, uint64_t sector_address, uint8_t *data, uint32_t *length)
{


@@ -28,10 +28,74 @@
/**
* @brief Verifies the integrity of an AaruFormat image file.
*
* Checks the integrity of all blocks and deduplication tables in the image by validating CRC64
* checksums for each indexed block. This function performs comprehensive verification of data blocks,
* DDT v1 and v2 tables, tracks blocks, and other structural components. It reads blocks in chunks
* to optimize memory usage during verification and supports version-specific CRC endianness handling.
*
* @param context Pointer to the aaruformat context.
*
* @return Returns one of the following status codes:
* @retval AARUF_STATUS_OK (0) Successfully verified image integrity. This is returned when:
* - All indexed blocks are successfully read and processed
* - All CRC64 checksums match their expected values
* - Index verification passes for the detected index version
* - All block headers are readable and valid
* - Memory allocation for verification buffer succeeds
*
* @retval AARUF_ERROR_NOT_AARUFORMAT (-1) The context is invalid. This occurs when:
* - The context parameter is NULL
* - The context magic number doesn't match AARU_MAGIC (invalid context type)
*
* @retval AARUF_ERROR_CANNOT_READ_HEADER (-6) Failed to read critical headers. This occurs when:
* - Cannot read the index signature from the image stream
* - File I/O errors prevent reading header structures
*
* @retval AARUF_ERROR_CANNOT_READ_INDEX (-4) Index processing or validation failed. This occurs when:
* - Index signature is not a recognized type (IndexBlock, IndexBlock2, or IndexBlock3)
* - Index verification functions return error codes
* - Index processing functions return NULL (corrupted or invalid index)
*
* @retval AARUF_ERROR_NOT_ENOUGH_MEMORY (-9) Memory allocation failed. This occurs when:
* - Cannot allocate VERIFY_SIZE (1MB) buffer for reading block data during verification
* - System is out of available memory for verification operations
*
* @retval AARUF_ERROR_CANNOT_READ_BLOCK (-7) Block reading failed. This occurs when:
* - Cannot read block headers (DataBlock, DeDuplicationTable, DeDuplicationTable2, TracksBlock)
* - File I/O errors prevent reading block structure data
* - CRC64 context initialization fails (internal error)
*
* @retval AARUF_ERROR_INVALID_BLOCK_CRC (-18) CRC verification failed. This occurs when:
* - Calculated CRC64 doesn't match the expected CRC64 in block headers
* - Data corruption is detected in any verified block
* - This applies to DataBlock, DeDuplicationTable, DeDuplicationTable2, and TracksBlock types
*
* @note Verification Process:
* - Reads blocks in VERIFY_SIZE (1MB) chunks to optimize memory usage
* - Supports version-specific CRC64 endianness (v1 uses byte-swapped CRC64)
* - Verifies only blocks that have CRC checksums (skips blocks without checksums)
* - Unknown block types are logged but don't cause verification failure
*
* @note Block Types Verified:
* - DataBlock: Verifies compressed data CRC64 against block header
* - DeDuplicationTable (v1): Verifies DDT data CRC64 with version-specific endianness
* - DeDuplicationTable2 (v2): Verifies DDT data CRC64 (no endianness conversion)
* - TracksBlock: Verifies track entries CRC64 with version-specific endianness
* - Other block types are ignored during verification
*
* @note Performance Considerations:
* - Uses chunked reading to minimize memory footprint during verification
* - Processes blocks sequentially to maintain good disk access patterns
* - CRC64 contexts are created and destroyed for each block to prevent memory leaks
*
* @warning This function performs read-only verification and does not modify the image.
* It requires the image to be properly opened with a valid context.
*
* @warning Verification failure indicates data corruption or tampering. Images that
* fail verification should not be trusted for data recovery operations.
*
* @warning The function allocates a 1MB buffer for verification. Ensure sufficient
* memory is available before calling this function on resource-constrained systems.
*/
int32_t aaruf_verify_image(void *context)
{


@@ -30,13 +30,64 @@
* @brief Writes a sector to the AaruFormat image.
*
* Writes the given data to the specified sector address in the image, with the given status and length.
* This function handles buffering data into blocks, automatically closing blocks when necessary (sector
* size changes or block size limits are reached), and managing the deduplication table (DDT) entries.
*
* @param context Pointer to the aaruformat context.
* @param sector_address Logical sector address to write.
* @param data Pointer to the data buffer to write.
* @param sector_status Status of the sector to write.
* @param length Length of the data buffer.
*
* @return Returns one of the following status codes:
* @retval AARUF_STATUS_OK (0) Successfully wrote the sector data. This is returned when:
* - The sector data is successfully copied to the writing buffer
* - The DDT entry is successfully updated for the sector address
* - Block management operations complete successfully
* - Buffer positions and offsets are properly updated
*
* @retval AARUF_ERROR_NOT_AARUFORMAT (-1) The context is invalid. This occurs when:
* - The context parameter is NULL
* - The context magic number doesn't match AARU_MAGIC (invalid context type)
*
* @retval AARUF_READ_ONLY (-22) Attempting to write to a read-only image. This occurs when:
* - The context's isWriting flag is false
* - The image was opened in read-only mode
*
* @retval AARUF_ERROR_NOT_ENOUGH_MEMORY (-9) Memory allocation failed. This occurs when:
* - Failed to allocate memory for the writing buffer when creating a new block
* - The system is out of available memory for buffer allocation
*
* @retval AARUF_ERROR_CANNOT_WRITE_BLOCK_HEADER (-23) Failed to write block header to the image file.
* This can occur during automatic block closure when:
* - The fwrite() call for the block header fails
* - Disk space is insufficient or file system errors occur
*
* @retval AARUF_ERROR_CANNOT_WRITE_BLOCK_DATA (-24) Failed to write block data to the image file.
* This can occur during automatic block closure when:
* - The fwrite() call for the block data fails
* - Disk space is insufficient or file system errors occur
*
* @note Block Management:
* - The function automatically closes the current block when sector size changes
* - Blocks are also closed when they reach the maximum size (determined by dataShift)
* - New blocks are created automatically when needed with appropriate headers
*
* @note Memory Management:
* - Writing buffers are allocated on-demand when creating new blocks
* - Buffer memory is freed when blocks are closed
* - Buffer size is calculated based on sector size and data shift parameters
*
* @note DDT Updates:
* - Each written sector updates the corresponding DDT entry
* - DDT entries track block offset, position, and sector status
* - Uses DDT version 2 format for entries
*
* @warning The function may trigger automatic block closure, which can result in disk I/O
* operations and potential write errors even for seemingly simple sector writes.
*
* @warning No bounds checking is performed on sector_address. Writing beyond media limits
* may result in undefined behavior (TODO: implement bounds checking).
*/
int32_t aaruf_write_sector(void *context, uint64_t sector_address, const uint8_t *data, uint8_t sector_status,
                           uint32_t length)