ddn.compressor.lzo
LZO compression provider for ddn.api.compressor.
This module provides a basic LZO compression and decompression implementation using a simplified LZO1X-style format. It includes:
- Streaming `Compressor` and `Decompressor` classes implementing the ddn.api.compressor interface.
- Hash-based match finding with configurable compression levels (0-9).
- Simple block format with headers and checksums.
- Efficient compression suitable for in-memory data.
LZO (Lempel-Ziv-Oberhumer) is a lossless data compression algorithm that is focused on decompression speed. It is widely used in applications where fast decompression is critical.
Important: This implementation uses a simplified format that is NOT compatible with the lzop command-line tool or the standard LZO file format. It is designed for use within the ddn.compressor API for internal data compression where interoperability with external tools is not required.
Compression levels control the trade-off between speed and compression ratio:
- Level 0: Store-only mode (no compression, fastest).
- Levels 1-3: Fast compression with smaller hash tables.
- Levels 4-6: Default compression with medium hash tables.
- Levels 7-9: Better compression with larger hash tables.
The default compression level is 3, providing a good balance between compression ratio and speed.
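As a sketch of typical end-to-end usage via the factory functions (the `OutputSink` delegate signature and default-constructed options shown here are assumptions; only the factory, `setOutputSink`, `write`, and `finish` names come from this module's interface):

```d
import ddn.api.compressor;
import ddn.compressor.lzo : makeLzoCompressor, makeLzoDecompressor;

void roundTrip(const(ubyte)[] data)
{
    ubyte[] compressed;

    // Compress: options -> compressor -> sink -> write/finish.
    CompressionOptions copts;   // defaults assumed; level field name unknown
    auto comp = makeLzoCompressor(copts);
    comp.setOutputSink((const(ubyte)[] chunk) { compressed ~= chunk; });
    comp.write(data);
    comp.finish();              // flushes the final block

    // Decompress: the same streaming shape in reverse.
    ubyte[] restored;
    DecompressionOptions dopts;
    auto decomp = makeLzoDecompressor(dopts);
    decomp.setOutputSink((const(ubyte)[] chunk) { restored ~= chunk; });
    decomp.write(compressed);
    decomp.finish();

    assert(restored == data);
}
```

Because the stream is fed incrementally through `write`, the same shape works for chunked input; `finish` must be called exactly once to emit any buffered tail.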
Types
DecompressResult

Result structure for fast decompression operations. Contains information about bytes consumed from input and bytes produced to output, along with success/error status.

- `size_t bytesConsumed`: Number of bytes consumed from the compressed input.
- `size_t bytesProduced`: Number of bytes produced to the decompressed output.
- `bool success`: Whether decompression completed successfully.
- `ErrorCode errorCode`: Error code if `success` is false.
- `const(char)[] errorMessage`: Error message if `success` is false.

LzoCompressor

LZO compressor that implements ddn.api.compressor.Compressor.
This class provides streaming compression using the LZO1X algorithm. Internally it uses a pre-allocated malloc'd hash table and a buffer-offset tracking scheme to avoid unnecessary copies on block boundaries.
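The buffer-offset tracking idea can be illustrated with a minimal sketch (the struct and names below are hypothetical, not this module's internals; the compaction threshold is an assumption):

```d
// Sketch: instead of slicing/copying pending bytes after each block,
// keep the whole buffer and advance an offset; compact only when the
// dead prefix grows large. This avoids a copy on every block boundary.
struct PendingInput
{
    ubyte[] buffer;   // accumulated, not-yet-compressed input
    size_t offset;    // start of unconsumed data within `buffer`

    const(ubyte)[] pending() const { return buffer[offset .. $]; }

    void consume(size_t n)
    {
        offset += n;
        // Compact lazily: only copy once most of the buffer is dead.
        if (offset > buffer.length / 2)
        {
            buffer = buffer[offset .. $].dup;
            offset = 0;
        }
    }
}
```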
Fields:

- `private CompressionOptions _opts`
- `private OutputSink _sink`
- `private bool _finished`
- `private ubyte[] _buffer`
- `private size_t _bufferOffset`
- `private ulong _bytesIn`
- `private ulong _bytesOut`
- `private uint* _hashTablePtr`: Pre-allocated hash table (raw pointer, `malloc`'d).
- `private size_t _hashTableSize`: Number of entries in the pre-allocated hash table.
- `private uint _hashBits`: Number of hash bits for the pre-allocated hash table.
- `private ubyte* _outBufPtr`: Pre-allocated output buffer for compression (raw pointer, `malloc`'d).
- `private size_t _outBufCap`: Capacity of the pre-allocated output buffer.
- `private ubyte* _blockBufPtr`: Pre-allocated block buffer for emitBlock (raw pointer, `malloc`'d). This avoids GC allocations from `~=` appends.
- `private size_t _blockBufCap`: Capacity of the pre-allocated block buffer.

Methods:

- `void setOutputSink(OutputSink sink)`: Set the output sink delegate that will receive compressed chunks.
- `void setProgressCallback(ProgressCallback callback)`: Set an optional progress callback.
- `void write(const(ubyte)[] data)`: Feed more uncompressed data.
- `void finish()`: Finalize the stream. No further writes are allowed.
- `void reset()`: Reset the compression stream to initial state.
- `bool setDictionary(const(ubyte)[] dict)`: Set dictionary (not supported for LZO).
- `bool isFinished() @property const`: Returns true if finish() has been called and the stream is closed for further writes.
- `void emitBlock(const(ubyte)[] uncompressed, const(ubyte)[] compressed)`: Emit a compressed block with header.
- `this(CompressionOptions opts)`: Create a compressor with provided options.
- `~this`: Destructor – frees the `malloc`'d hash table, output buffer, and block buffer.

LzoDecompressor

LZO decompressor that implements ddn.api.compressor.Decompressor.
Fields:

- `private DecompressionOptions _opts`
- `private OutputSink _sink`
- `private bool _finished`
- `private ubyte[] _inputBuffer`
- `private ulong _bytesIn`
- `private ulong _bytesOut`
- `private size_t _inputOffset`: Offset into `inputBuffer` for the next unread byte.
- `private ubyte* _decompBuf`: Pre-allocated output buffer for decompression (`malloc`'d).
- `private size_t _decompBufSize`: Capacity of `decompBuf` in bytes.

Methods:

- `void _ensureDecompBufSize(size_t needed)`: Ensure the decompression buffer is at least `needed` bytes.
- `void setOutputSink(OutputSink sink)`: Set the output sink delegate for decompressed data.
- `void setProgressCallback(ProgressCallback callback)`: Set an optional progress callback.
- `void write(const(ubyte)[] data)`: Feed more compressed data.
- `void finish()`: Finish the stream.
- `void reset()`: Reset the decompression stream.
- `bool setDictionary(const(ubyte)[] dict)`: Set dictionary (not supported for LZO).
- `bool isFinished() @property const`: Returns true if finish() has been called and the stream is closed for further writes.
- `this(DecompressionOptions opts)`: Create a decompressor with provided options.
- `~this`: Destructor – frees the `malloc`'d decompression buffer.

Functions
- `uint hashBitsForLevel(int level) pure nothrow @nogc`: Compute the number of hash bits to use for a given compression level.
- `uint hashTableSize(uint hashBits) pure nothrow @nogc`: Compute the hash-table size from the number of hash bits.
- `uint readLE32p(const(ubyte)* p)`: Read a 32-bit little-endian word from an unaligned pointer.
- `uint lzoHashIndex(const(ubyte)* p, uint hashBits)`: Compute the LZO hash index for the 3-byte sequence at `p`.
- `size_t countMatch(const(ubyte)[] src, size_t pos1, size_t pos2, size_t maxLen)`: Count the number of matching bytes starting at two positions.
- `auto findBestMatch(const(ubyte)[] src, size_t pos, ref LzoHashTable hashTab, size_t maxOff)`: Find the best match at the current position.
- `size_t countMatchPtr(const(ubyte)* p1, const(ubyte)* p2, size_t maxLen, const(ubyte)* end1, const(ubyte)* end2) pure nothrow @nogc`: Count matching bytes between two raw pointers, with a maximum length.
- `size_t compressBlockFast(const(ubyte)* src, size_t srcLen, ubyte* outBuf, size_t outCap, int level, uint* ht = null, size_t htSize = 0, uint htBits = 0) @trusted @nogc nothrow`: Compress a single block using the fast pointer-based path.
- `ubyte[] compressBlock(const(ubyte)[] src, int level)`: Compress a single block using the LZO1X algorithm.
- `void encodeMatch(ref ubyte[] dst, size_t off, size_t len)`: Encode a match to the output buffer.
- `DecompressResult decompressBlockFast(const(ubyte)* src, size_t srcLen, ubyte* outBuf, size_t outBufLen) @system nothrow`: Fast LZO1X block decompression using a pre-allocated buffer and memcpy.
- `Compressor makeLzoCompressor(CompressionOptions opts)`: Factory function that constructs an `LzoCompressor`.
- `Decompressor makeLzoDecompressor(DecompressionOptions opts)`: Factory function that constructs an `LzoDecompressor`.

Variables
- `ubyte[3] LZO_MAGIC`: LZO magic number for the LZO1X format.
- `LZO_VERSION = 0x10`: LZO file format version.
- `LZO_LIB_VERSION = 0x09`: LZO library version we're compatible with.
- `size_t LZO_MIN_MATCH`: Minimum match length in the LZO format.
- `size_t LZO_MAX_MATCH`: Maximum match length in the LZO format.
- `size_t LZO_MAX_OFFSET`: Maximum offset for back-references (64 KB).
- `size_t LZO_BLOCK_SIZE`: Block size for LZO files (default: 256 KB).
- `LZO_FLAG_ADLER32_D = 0x0001`, `LZO_FLAG_ADLER32_C = 0x0002`: LZO file header flags.
- `LZO_HASH_MULT = 0x9E37_79B1u`: Multiplicative constant used by LZO for hashing 3-byte sequences.
- `LZO_HASH_SENTINEL = 0xFFFF_FFFFu`: Sentinel value indicating an empty hash table slot.
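The hashing scheme can be illustrated with a small sketch in the style of `lzoHashIndex` (the byte order and the top-bits shift are assumptions; only the multiplicative constant comes from the listing above):

```d
// Hypothetical sketch of a multiplicative 3-byte hash: read 3 bytes
// little-endian, multiply by LZO_HASH_MULT, and keep the top
// `hashBits` bits as the hash-table index.
enum uint LZO_HASH_MULT = 0x9E37_79B1u;

uint hashIndexSketch(const(ubyte)* p, uint hashBits) pure nothrow @nogc
{
    uint v = p[0] | (cast(uint) p[1] << 8) | (cast(uint) p[2] << 16);
    return (v * LZO_HASH_MULT) >> (32 - hashBits);
}
```

Keeping the top bits of the product is what makes a multiplicative hash spread nearby 3-byte sequences across the table; a larger `hashBits` (higher compression levels) means fewer collisions and better match finding at the cost of a bigger table.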