Documentation ¶
Index ¶
- Constants
- Variables
- func AppendDecoded(dst, src []byte) ([]byte, error)
- func AppendEncoded(dst, src []byte, level int) ([]byte, error)
- func Decode(dst, src []byte) ([]byte, error)
- func DecodedLen(src []byte) (int, error)
- func Encode(dst, src []byte, level int) ([]byte, error)
- func IndexStream(r io.Reader) ([]byte, error)
- func IsMinLZ(src []byte) (ok bool, size int, err error)
- func MaxEncodedLen(srcLen int) int
- func RemoveIndexHeaders(b []byte) []byte
- func RestoreIndexHeaders(in []byte) []byte
- func TryEncode(dst, src []byte, level int) []byte
- type ErrCantSeek
- type Index
- type OffsetPair
- type ReadSeeker
- type Reader
- func (r *Reader) DecodeConcurrent(w io.Writer, concurrent int) (written int64, err error)
- func (r *Reader) GetBufferCapacity() int
- func (r *Reader) Read(p []byte) (int, error)
- func (r *Reader) ReadByte() (byte, error)
- func (r *Reader) ReadSeeker(index []byte) (*ReadSeeker, error)
- func (r *Reader) Reset(reader io.Reader)
- func (r *Reader) Skip(n int64) error
- func (r *Reader) UserChunkCB(id uint8, fn func(r io.Reader) error) error
- func (r *Reader) WriteTo(w io.Writer) (n int64, err error)
- type ReaderOption
- type Writer
- func (w *Writer) AddUserChunk(id uint8, data []byte) (err error)
- func (w *Writer) AsyncFlush() error
- func (w *Writer) Close() error
- func (w *Writer) CloseIndex() ([]byte, error)
- func (w *Writer) EncodeBuffer(buf []byte) (err error)
- func (w *Writer) Flush() error
- func (w *Writer) ReadFrom(r io.Reader) (n int64, err error)
- func (w *Writer) Reset(writer io.Writer)
- func (w *Writer) Write(p []byte) (nRet int, errRet error)
- func (w *Writer) Written() (input, output int64)
- type WriterOption
- func WriterAddIndex(b bool) WriterOption
- func WriterBlockSize(n int) WriterOption
- func WriterConcurrency(n int) WriterOption
- func WriterCreateIndex(b bool) WriterOption
- func WriterCustomEncoder(fn func(dst, src []byte) int) WriterOption
- func WriterFlushOnWrite() WriterOption
- func WriterLevel(n int) WriterOption
- func WriterPadding(n int) WriterOption
- func WriterPaddingSrc(reader io.Reader) WriterOption
- func WriterUncompressed() WriterOption
Examples ¶
Constants ¶
const (
	// LevelFastest is the fastest compression level.
	LevelFastest = 1

	// LevelBalanced is the balanced compression level.
	// This is targeted to be approximately half the speed of LevelFastest.
	LevelBalanced = 2

	// LevelSmallest will attempt the best possible compression.
	// There is no speed target for this level.
	LevelSmallest = 3
)
const (
	IndexHeader  = "s2idx\x00"
	IndexTrailer = "\x00xdi2s"
)
const (
	// MaxBlockSize is the maximum value where MaxEncodedLen will return a valid block size.
	MaxBlockSize = 8 << 20

	// MinUserSkippableChunk is the lowest user defined skippable chunk ID.
	// All chunk IDs within this range will be ignored if not handled.
	MinUserSkippableChunk = 0x80
	// MaxUserSkippableChunk is the last user defined skippable chunk ID.
	MaxUserSkippableChunk = 0xbf

	// MinUserNonSkippableChunk is the lowest user defined non-skippable chunk ID.
	// All chunk IDs within this range will cause an error if not handled.
	MinUserNonSkippableChunk = 0xc0
	// MaxUserNonSkippableChunk is the last user defined non-skippable chunk ID.
	MaxUserNonSkippableChunk = 0xfd

	// ChunkTypePadding is a padding chunk.
	ChunkTypePadding = 0xfe
	// ChunkTypeStreamIdentifier is the Snappy/S2/MinLZ stream id chunk.
	ChunkTypeStreamIdentifier = 0xff

	// MaxUserChunkSize is the maximum possible size of a single chunk.
	MaxUserChunkSize = 1<<24 - 1 // 16777215
)
Variables ¶
var (
	// ErrCorrupt reports that the input is invalid.
	ErrCorrupt = errors.New("minlz: corrupt input")

	// ErrCRC reports that the input failed CRC validation (streams only).
	ErrCRC = errors.New("minlz: corrupt input, crc mismatch")

	// ErrTooLarge reports that the uncompressed length is too large.
	ErrTooLarge = errors.New("minlz: decoded block is too large")

	// ErrUnsupported reports that the input isn't supported.
	ErrUnsupported = errors.New("minlz: unsupported input")

	// ErrInvalidLevel is returned when an invalid compression level is requested.
	ErrInvalidLevel = errors.New("minlz: invalid compression level")
)
Functions ¶
func AppendDecoded ¶
AppendDecoded will append the decoded version of src to dst. If the decoded content cannot fit within dst, it will cause an allocation. This decoder has automatic fallback to Snappy/S2. To reject fallback check with IsMinLZ. The dst and src must not overlap. It is valid to pass a nil dst.
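A minimal sketch of block decoding with a fallback check (illustrative only, not one of the package's own examples; the sample input, level and buffer size are arbitrary):

package main

import (
	"fmt"
	"log"

	"github.com/minio/minlz"
)

func main() {
	// Encode a small block first so we have valid input.
	block, err := minlz.AppendEncoded(nil, []byte("hello hello hello hello"), minlz.LevelBalanced)
	if err != nil {
		log.Fatal(err)
	}

	// Reject Snappy/S2 fallback by checking the block type first.
	if ok, size, err := minlz.IsMinLZ(block); err != nil || !ok {
		log.Fatalf("not a MinLZ block (decoded size %d): %v", size, err)
	}

	// Append the decoded bytes to a reusable buffer.
	dst := make([]byte, 0, 1024)
	dst, err = minlz.AppendDecoded(dst, block)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("decoded %d bytes\n", len(dst))
}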
func AppendEncoded ¶
AppendEncoded will append the encoded version of src to dst. If dst has MaxEncodedLen(len(src)) capacity left it will be done without allocation. See Encode for more information.
func Decode ¶
Decode returns the decoded form of src. The returned slice may be a sub- slice of dst if dst was large enough to hold the entire decoded block. Otherwise, a newly allocated slice will be returned.
This decoder has automatic fallback to Snappy/S2. To reject fallback check with IsMinLZ.
The dst and src must not overlap. It is valid to pass a nil dst.
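An illustrative sketch (not a package example) of sizing dst with DecodedLen before calling Decode, so the decode can reuse the buffer instead of allocating:

package main

import (
	"bytes"
	"fmt"
	"log"

	"github.com/minio/minlz"
)

func main() {
	src := bytes.Repeat([]byte("0123"), 1024)
	block, err := minlz.Encode(nil, src, minlz.LevelFastest)
	if err != nil {
		log.Fatal(err)
	}

	// Size dst up front so Decode can reuse it instead of allocating.
	n, err := minlz.DecodedLen(block)
	if err != nil {
		log.Fatal(err)
	}
	dst := make([]byte, n)
	dst, err = minlz.Decode(dst, block)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("roundtrip ok:", bytes.Equal(dst, src))
}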
func DecodedLen ¶
DecodedLen returns the length of the decoded block. This length will never be exceeded when decoding a block.
func Encode ¶
Encode returns the encoded form of src. The returned slice may be a sub- slice of dst if dst was large enough to hold the entire encoded block. Otherwise, a newly allocated slice will be returned.
The dst and src must not overlap. It is valid to pass a nil dst.
Blocks require the same amount of memory to decode as to encode, and cannot be decoded concurrently. Also note that blocks do not contain CRC information, so corruption may be undetected.
If you need to encode larger amounts of data, consider using the streaming interface which gives all of these features.
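An illustrative sketch (not a package example) of preallocating dst with MaxEncodedLen so Encode can avoid allocating; the sample data and level are arbitrary:

package main

import (
	"bytes"
	"fmt"
	"log"

	"github.com/minio/minlz"
)

func main() {
	src := bytes.Repeat([]byte("compressible "), 4096)

	// MaxEncodedLen reports the worst case, so dst never needs to grow.
	maxLen := minlz.MaxEncodedLen(len(src))
	if maxLen < 0 {
		log.Fatal("input too large for a single block")
	}
	dst := make([]byte, maxLen)

	encoded, err := minlz.Encode(dst, src, minlz.LevelSmallest)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%d -> %d bytes\n", len(src), len(encoded))
}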
func IndexStream ¶
IndexStream will return an index for a stream. The stream structure will be checked, but data within blocks is not verified. The returned index can either be appended to the end of the stream or stored separately.
Example ¶
ExampleIndexStream shows an example of indexing a stream after it has been written. The index can either be appended to the stream or stored separately.
package main

import (
	"bytes"
	"fmt"
	"io"
	"math/rand"
	"os"

	"github.com/minio/minlz"
)

func main() {
	fatalErr := func(err error) {
		if err != nil {
			fmt.Println("ERR:", err)
		}
	}

	// Create a test stream without index
	var streamName = ""
	tmp := make([]byte, 5<<20)
	{
		rng := rand.New(rand.NewSource(0xbeefcafe))
		rng.Read(tmp)
		// Make it compressible...
		for i, v := range tmp {
			tmp[i] = '0' + v&3
		}
		// Compress it...
		output, err := os.CreateTemp("", "IndexStream")
		streamName = output.Name()
		fatalErr(err)

		// We use smaller blocks just for the example...
		enc := minlz.NewWriter(output, minlz.WriterBlockSize(64<<10))
		err = enc.EncodeBuffer(tmp)
		fatalErr(err)

		// Close and get index...
		err = enc.Close()
		fatalErr(err)
		err = output.Close()
		fatalErr(err)
	}

	// Open our compressed stream without an index...
	stream, err := os.Open(streamName)
	fatalErr(err)
	defer stream.Close()

	var indexInput = io.Reader(stream)
	var indexOutput io.Writer
	var indexedName string

	// Should index be combined with stream by appending?
	// This could also be done by appending to an os.File
	// If not it will be written to a separate file.
	const combineOutput = false

	// Function to easier use defer.
	func() {
		if combineOutput {
			output, err := os.CreateTemp("", "IndexStream-Combined")
			fatalErr(err)
			defer func() {
				fatalErr(output.Close())
				if false {
					fi, err := os.Stat(output.Name())
					fatalErr(err)
					fmt.Println("Combined:", fi.Size(), "bytes")
				} else {
					fmt.Println("Index saved")
				}
			}()

			// Everything read from stream will also be written to output.
			indexedName = output.Name()
			indexInput = io.TeeReader(stream, output)
			indexOutput = output
		} else {
			output, err := os.CreateTemp("", "IndexStream-Index")
			fatalErr(err)
			defer func() {
				fatalErr(output.Close())
				fi, err := os.Stat(output.Name())
				fatalErr(err)
				if false {
					fmt.Println("Index:", fi.Size(), "bytes")
				} else {
					fmt.Println("Index saved")
				}
			}()
			indexedName = output.Name()
			indexOutput = output
		}
		// Index the input
		idx, err := minlz.IndexStream(indexInput)
		fatalErr(err)

		// Write the index
		_, err = indexOutput.Write(idx)
		fatalErr(err)
	}()

	if combineOutput {
		// Read from combined stream only.
		stream, err := os.Open(indexedName)
		fatalErr(err)
		defer stream.Close()

		// Create a reader with the input.
		// We assert that the stream is an io.ReadSeeker.
		r := minlz.NewReader(io.ReadSeeker(stream))

		// Request a ReadSeeker with random access.
		// This will load the index from the stream.
		rs, err := r.ReadSeeker(nil)
		fatalErr(err)

		_, err = rs.Seek(-10, io.SeekEnd)
		fatalErr(err)

		b, err := io.ReadAll(rs)
		fatalErr(err)
		if want := tmp[len(tmp)-10:]; !bytes.Equal(b, want) {
			fatalErr(fmt.Errorf("wanted %v, got %v", want, b))
		}
		fmt.Println("last 10 bytes read")

		_, err = rs.Seek(10, io.SeekStart)
		fatalErr(err)

		_, err = io.ReadFull(rs, b)
		fatalErr(err)
		if want := tmp[10:20]; !bytes.Equal(b, want) {
			fatalErr(fmt.Errorf("wanted %v, got %v", want, b))
		}
		fmt.Println("10 bytes at offset 10 read")
	} else {
		// Read from separate stream and index.
		stream, err := os.Open(streamName)
		fatalErr(err)
		defer stream.Close()

		// Create a reader with the input.
		// We assert that the stream is an io.ReadSeeker.
		r := minlz.NewReader(io.ReadSeeker(stream))

		// Read the separate index.
		index, err := os.ReadFile(indexedName)
		fatalErr(err)

		// Request a ReadSeeker with random access.
		// The provided index will be used.
		rs, err := r.ReadSeeker(index)
		fatalErr(err)

		_, err = rs.Seek(-10, io.SeekEnd)
		fatalErr(err)

		b, err := io.ReadAll(rs)
		fatalErr(err)
		if want := tmp[len(tmp)-10:]; !bytes.Equal(b, want) {
			fatalErr(fmt.Errorf("wanted %v, got %v", want, b))
		}
		fmt.Println("last 10 bytes read")

		_, err = rs.Seek(10, io.SeekStart)
		fatalErr(err)

		_, err = io.ReadFull(rs, b)
		fatalErr(err)
		if want := tmp[10:20]; !bytes.Equal(b, want) {
			fatalErr(fmt.Errorf("wanted %v, got %v", want, b))
		}
		fmt.Println("10 bytes at offset 10 read")
	}
}
Output:

Index saved
last 10 bytes read
10 bytes at offset 10 read
func IsMinLZ ¶
IsMinLZ returns whether the block is a minlz block and returns the size of the decompressed block.
func MaxEncodedLen ¶
MaxEncodedLen returns the maximum length of a snappy block, given its uncompressed length.
It will return a negative value if srcLen is too large to encode.
func RemoveIndexHeaders ¶
RemoveIndexHeaders will trim all headers and trailers from a given index. This is expected to save 20 bytes. These can be restored using RestoreIndexHeaders. This removes a layer of security, but is the most compact representation. Returns nil if the headers contain errors. The returned slice references the provided slice.
func RestoreIndexHeaders ¶
RestoreIndexHeaders will restore index headers removed by RemoveIndexHeaders. No error checking is performed on the input. If a 0 length slice is sent, it is returned without modification.
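A brief illustrative sketch (not a package example) of trimming and restoring index headers; it assumes the index was obtained from Writer.CloseIndex and the payload is arbitrary:

package main

import (
	"bytes"
	"fmt"
	"log"

	"github.com/minio/minlz"
)

func main() {
	var buf bytes.Buffer
	w := minlz.NewWriter(&buf)
	if _, err := w.Write(bytes.Repeat([]byte("data"), 1<<16)); err != nil {
		log.Fatal(err)
	}
	idx, err := w.CloseIndex()
	if err != nil {
		log.Fatal(err)
	}

	// Store the compact form, e.g. in a separate metadata field.
	compact := minlz.RemoveIndexHeaders(idx)
	if compact == nil {
		log.Fatal("unexpected index format")
	}

	// Restore before handing the index to Reader.ReadSeeker or Index.Load.
	restored := minlz.RestoreIndexHeaders(compact)
	fmt.Println("index:", len(idx), "compact:", len(compact), "restored:", len(restored))
}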
Types ¶
type ErrCantSeek ¶
type ErrCantSeek struct {
Reason string
}
ErrCantSeek is returned if the stream cannot be seeked.
type Index ¶
type Index struct {
	// Total Uncompressed size.
	TotalUncompressed int64

	// Total Compressed size if known. Will be -1 if unknown.
	TotalCompressed int64

	// Offset pairs are pairs of Compressed -> Uncompressed positions.
	// Offsets are stream offsets from first stream byte.
	// It will be safe to start decompressing from any of these offsets.
	// The slice is sorted by offset.
	Offsets []OffsetPair
	// contains filtered or unexported fields
}
Index represents an S2/Snappy/MinLZ index.
func (*Index) Find ¶
Find the offset at or before the wanted (uncompressed) offset. If offset is 0 or positive it is the offset from the beginning of the file. If the uncompressed size is known, the offset must be within the file. If an offset outside the file is requested io.ErrUnexpectedEOF is returned. If the offset is negative, it is interpreted as the distance from the end of the file, where -1 represents the last byte. If offset from the end of the file is requested, but size is unknown, ErrUnsupported will be returned.
func (*Index) Load ¶
Load a binary index. A zero value Index can be used or a previous one can be reused.
Example ¶
package main

import (
	"bytes"
	"fmt"
	"io"
	"math/rand"
	"sync"

	"github.com/minio/minlz"
)

func main() {
	fatalErr := func(err error) {
		if err != nil {
			panic(err)
		}
	}

	// Create a test corpus
	tmp := make([]byte, 5<<20)
	rng := rand.New(rand.NewSource(0xbeefcafe))
	rng.Read(tmp)
	// Make it compressible...
	for i, v := range tmp {
		tmp[i] = '0' + v&3
	}

	// Compress it...
	var buf bytes.Buffer
	// We use smaller blocks just for the example...
	enc := minlz.NewWriter(&buf, minlz.WriterBlockSize(100<<10))
	err := enc.EncodeBuffer(tmp)
	fatalErr(err)

	// Close and get index...
	idxBytes, err := enc.CloseIndex()
	fatalErr(err)

	// This is our compressed stream...
	compressed := buf.Bytes()

	var once sync.Once
	for wantOffset := int64(0); wantOffset < int64(len(tmp)); wantOffset += 555555 {
		// Let's assume we want to read from uncompressed offset 'i'
		// and we cannot seek in input, but we have the index.
		want := tmp[wantOffset:]

		// Load the index.
		var index minlz.Index
		_, err = index.Load(idxBytes)
		fatalErr(err)

		// Find offset in file:
		compressedOffset, uncompressedOffset, err := index.Find(wantOffset)
		fatalErr(err)

		// Offset the input to the compressed offset.
		// Notice how we do not provide any bytes before the offset.
		input := io.Reader(bytes.NewBuffer(compressed[compressedOffset:]))
		if _, ok := input.(io.Seeker); !ok {
			// Notice how the input cannot be seeked...
			once.Do(func() {
				fmt.Println("Input does not support seeking...")
			})
		} else {
			panic("did you implement seeking on bytes.Buffer?")
		}

		// When creating the decoder we must specify that it should not
		// expect a stream identifier at the beginning of the frame.
		dec := minlz.NewReader(input, minlz.ReaderIgnoreStreamIdentifier())

		// We now have a reader, but it will start outputting at uncompressedOffset,
		// and not the actual offset we want, so skip forward to that.
		toSkip := wantOffset - uncompressedOffset
		err = dec.Skip(toSkip)
		fatalErr(err)

		// Read the rest of the stream...
		got, err := io.ReadAll(dec)
		fatalErr(err)
		if bytes.Equal(got, want) {
			fmt.Println("Successfully skipped forward to", wantOffset)
		} else {
			fmt.Println("Failed to skip forward to", wantOffset)
		}
	}
}
Output:

Input does not support seeking...
Successfully skipped forward to 0
Successfully skipped forward to 555555
Successfully skipped forward to 1111110
Successfully skipped forward to 1666665
Successfully skipped forward to 2222220
Successfully skipped forward to 2777775
Successfully skipped forward to 3333330
Successfully skipped forward to 3888885
Successfully skipped forward to 4444440
Successfully skipped forward to 4999995
func (*Index) LoadStream ¶
func (i *Index) LoadStream(rs io.ReadSeeker) error
LoadStream will load an index from the end of the supplied stream. ErrUnsupported will be returned if the signature cannot be found. ErrCorrupt will be returned if unexpected values are found. io.ErrUnexpectedEOF is returned if there are too few bytes. IO errors are returned as-is.
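An illustrative sketch (not a package example) of loading an index appended by WriterAddIndex and locating a position with Find; the temp file name, payload and offsets are arbitrary:

package main

import (
	"bytes"
	"fmt"
	"log"
	"os"

	"github.com/minio/minlz"
)

func main() {
	// Write a stream with the index appended at the end.
	f, err := os.CreateTemp("", "minlz-indexed")
	if err != nil {
		log.Fatal(err)
	}
	defer os.Remove(f.Name())
	defer f.Close()

	w := minlz.NewWriter(f, minlz.WriterAddIndex(true))
	if _, err := w.Write(bytes.Repeat([]byte("0123456789"), 200000)); err != nil {
		log.Fatal(err)
	}
	if err := w.Close(); err != nil {
		log.Fatal(err)
	}

	// Load the index back from the end of the stream.
	var idx minlz.Index
	if err := idx.LoadStream(f); err != nil {
		log.Fatal(err)
	}
	compOff, uncompOff, err := idx.Find(1000000)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("start decompressing at", compOff, "which is uncompressed offset", uncompOff)
}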
type OffsetPair ¶
type ReadSeeker ¶
type ReadSeeker struct {
	*Reader
	// contains filtered or unexported fields
}
ReadSeeker provides random or forward seeking in compressed content. See Reader.ReadSeeker
func (*ReadSeeker) ReadAt ¶
func (r *ReadSeeker) ReadAt(p []byte, offset int64) (int, error)
ReadAt reads len(p) bytes into p starting at offset off in the underlying input source. It returns the number of bytes read (0 <= n <= len(p)) and any error encountered.
When ReadAt returns n < len(p), it returns a non-nil error explaining why more bytes were not returned. In this respect, ReadAt is stricter than Read.
Even if ReadAt returns n < len(p), it may use all of p as scratch space during the call. If some data is available but not len(p) bytes, ReadAt blocks until either all the data is available or an error occurs. In this respect ReadAt is different from Read.
If the n = len(p) bytes returned by ReadAt are at the end of the input source, ReadAt may return either err == EOF or err == nil.
If ReadAt is reading from an input source with a seek offset, ReadAt should not affect nor be affected by the underlying seek offset.
Clients of ReadAt can execute parallel ReadAt calls on the same input source. This is however not recommended.
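An illustrative sketch (not a package example) of ReadAt over a seekable stream; it assumes the index was appended with WriterAddIndex so that ReadSeeker(nil) can load it from the stream, and the file name, payload and offset are arbitrary:

package main

import (
	"bytes"
	"fmt"
	"io"
	"log"
	"os"

	"github.com/minio/minlz"
)

func main() {
	// Prepare a seekable, indexed stream.
	f, err := os.CreateTemp("", "minlz-readat")
	if err != nil {
		log.Fatal(err)
	}
	defer os.Remove(f.Name())
	defer f.Close()

	w := minlz.NewWriter(f, minlz.WriterAddIndex(true))
	if _, err := w.Write(bytes.Repeat([]byte("abcdefghij"), 100000)); err != nil {
		log.Fatal(err)
	}
	if err := w.Close(); err != nil {
		log.Fatal(err)
	}
	if _, err := f.Seek(0, io.SeekStart); err != nil {
		log.Fatal(err)
	}

	// ReadSeeker(nil) loads the index from the stream itself.
	rs, err := minlz.NewReader(f).ReadSeeker(nil)
	if err != nil {
		log.Fatal(err)
	}
	buf := make([]byte, 10)
	if _, err := rs.ReadAt(buf, 500000); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("bytes at offset 500000: %q\n", buf)
}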
type Reader ¶
type Reader struct {
// contains filtered or unexported fields
}
Reader is an io.Reader that can read Snappy-compressed bytes.
func NewReader ¶
func NewReader(r io.Reader, opts ...ReaderOption) *Reader
NewReader returns a new Reader that decompresses from r, using the framing format described at https://github.com/google/snappy/blob/master/framing_format.txt with S2 changes.
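A minimal illustrative sketch (not a package example) of decompressing a stream with io.Copy; the payload is arbitrary:

package main

import (
	"bytes"
	"fmt"
	"io"
	"log"

	"github.com/minio/minlz"
)

func main() {
	// Compress something so we have a stream to read back.
	var compressed bytes.Buffer
	w := minlz.NewWriter(&compressed)
	if _, err := w.Write([]byte("hello stream")); err != nil {
		log.Fatal(err)
	}
	if err := w.Close(); err != nil {
		log.Fatal(err)
	}

	// Decompress by copying from the Reader.
	var out bytes.Buffer
	r := minlz.NewReader(&compressed)
	if _, err := io.Copy(&out, r); err != nil {
		log.Fatal(err)
	}
	fmt.Println(out.String())
}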
func (*Reader) DecodeConcurrent ¶
DecodeConcurrent will decode the full stream to w. This function should not be combined with reading, seeking or other operations. Up to 'concurrent' goroutines will be used. If <= 0, min(runtime.NumCPU, runtime.GOMAXPROCS, 8) will be used. On success the number of bytes decompressed is returned along with a nil error. This is mainly intended for bigger streams, since it will cause more allocations.
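An illustrative sketch (not a package example); the block size and the concurrency of 4 are arbitrary choices:

package main

import (
	"bytes"
	"fmt"
	"log"

	"github.com/minio/minlz"
)

func main() {
	// Build a compressed stream in memory.
	var compressed bytes.Buffer
	w := minlz.NewWriter(&compressed, minlz.WriterBlockSize(64<<10))
	if _, err := w.Write(bytes.Repeat([]byte("payload "), 1<<18)); err != nil {
		log.Fatal(err)
	}
	if err := w.Close(); err != nil {
		log.Fatal(err)
	}

	// Decode the full stream using up to 4 goroutines.
	var out bytes.Buffer
	r := minlz.NewReader(&compressed)
	n, err := r.DecodeConcurrent(&out, 4)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("decompressed bytes:", n)
}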
func (*Reader) GetBufferCapacity ¶
GetBufferCapacity returns the capacity of the internal buffer. This might be useful to know when reusing the same reader in combination with the lazy buffer option.
func (*Reader) ReadSeeker ¶
func (r *Reader) ReadSeeker(index []byte) (*ReadSeeker, error)
ReadSeeker will return an io.ReadSeeker and io.ReaderAt compatible version of the reader. The original input must support the io.Seeker interface. A custom index can be specified, which will be used if supplied. When using a custom index, it will not be read from the input stream. The ReadAt position will affect regular reads and the current position of Seek, so using Read after ReadAt will continue from where the ReadAt stopped. No functions should be used concurrently. The returned ReadSeeker contains a shallow reference to the existing Reader, meaning changes performed to one are reflected in the other.
func (*Reader) Reset ¶
Reset discards any buffered data, resets all state, and switches the Snappy reader to read from r. This permits reusing a Reader rather than allocating a new one.
func (*Reader) Skip ¶
Skip will skip n bytes forward in the decompressed output. For larger skips this consumes less CPU and is faster than reading output and discarding it. CRC is not checked on skipped blocks. io.ErrUnexpectedEOF is returned if the stream ends before all bytes have been skipped. If a decoding error is encountered subsequent calls to Read will also fail.
func (*Reader) UserChunkCB ¶
UserChunkCB will register a callback for chunks with the specified ID. ID must be a reserved user chunks ID, 0x80-0xfd (inclusive). For each chunk with the ID, the callback is called with the content. Any returned non-nil error will abort decompression. Only one callback per ID is supported, latest sent will be used. Sending a nil function will disable previous callbacks. You can peek the stream, triggering the callback, by doing a Read with a 0 byte buffer.
type ReaderOption ¶
ReaderOption is an option for creating a decoder.
func ReaderFallback ¶
func ReaderFallback(b bool) ReaderOption
ReaderFallback will enable/disable S2/Snappy fallback.
func ReaderIgnoreCRC ¶
func ReaderIgnoreCRC() ReaderOption
ReaderIgnoreCRC will make the reader skip CRC calculation and checks.
func ReaderIgnoreStreamIdentifier ¶
func ReaderIgnoreStreamIdentifier() ReaderOption
ReaderIgnoreStreamIdentifier will make the reader skip the expected stream identifier at the beginning of the stream. This can be used when serving a stream that has been forwarded to a specific point. Validation of EOF length is also disabled.
func ReaderMaxBlockSize ¶
func ReaderMaxBlockSize(blockSize int) ReaderOption
ReaderMaxBlockSize allows controlling allocations if the stream has been compressed with a smaller WriterBlockSize, or with the default 1MB. Blocks must be this size or smaller to decompress, otherwise the decoder will return ErrUnsupported.
For streams compressed with Snappy this can safely be set to 64KB (64 << 10).
Default is the maximum limit of 8MB.
func ReaderUserChunkCB ¶
func ReaderUserChunkCB(id uint8, fn func(r io.Reader) error) ReaderOption
ReaderUserChunkCB will register a callback for chunks with the specified ID. ID must be a reserved user chunks ID, 0x80-0xfd (inclusive). For each chunk with the ID, the callback is called with the content. Any returned non-nil error will abort decompression. Only one callback per ID is supported, latest sent will be used. Sending a nil function will disable previous callbacks. You can peek the stream, triggering the callback, by doing a Read with a 0 byte buffer.
type Writer ¶
type Writer struct {
// contains filtered or unexported fields
}
Writer is an io.Writer that can write Snappy-compressed bytes.
func NewWriter ¶
func NewWriter(w io.Writer, opts ...WriterOption) *Writer
NewWriter returns a new Writer that compresses as a MinLZ stream to w.
Users must call Close to guarantee all data has been forwarded to the underlying io.Writer and that resources are released.
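A minimal illustrative sketch (not a package example) of compressing to a buffer; the level choice and payload are arbitrary:

package main

import (
	"bytes"
	"fmt"
	"log"

	"github.com/minio/minlz"
)

func main() {
	var buf bytes.Buffer

	// Select a compression level explicitly (LevelFastest, LevelBalanced or LevelSmallest).
	w := minlz.NewWriter(&buf, minlz.WriterLevel(minlz.LevelBalanced))
	if _, err := w.Write(bytes.Repeat([]byte("example "), 1<<16)); err != nil {
		log.Fatal(err)
	}
	// Close is required to flush remaining data and mark the end of the stream.
	if err := w.Close(); err != nil {
		log.Fatal(err)
	}
	fmt.Println("compressed size:", buf.Len())
}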
func (*Writer) AddUserChunk ¶
AddUserChunk will add a (non)skippable chunk to the stream. The ID must be in the range 0x80 to 0xfe, inclusive. The length of the block must be <= MaxUserChunkSize bytes.
Example ¶
var buf bytes.Buffer
w := NewWriter(&buf)

// Add a skippable chunk
if err := w.AddUserChunk(MinUserSkippableChunk, []byte("Chunk Custom Data")); err != nil {
	log.Fatalf("w.AddUserChunk: %v", err)
}

// Write some stream data.
if _, err := w.Write([]byte("some data")); err != nil {
	log.Fatalf("w.Write: %v", err)
}
w.Close()

// Read back what we wrote.
r := NewReader(&buf)
err := r.UserChunkCB(MinUserSkippableChunk, func(sr io.Reader) error {
	var err error
	b, err := io.ReadAll(sr)
	fmt.Println("Callback:", string(b), err)
	return err
})
if err != nil {
	log.Fatal(err)
}

// Read stream data
b, err := io.ReadAll(r)
if err != nil {
	log.Fatal(err)
}
fmt.Println("Stream data:", string(b))
Output:

Callback: Chunk Custom Data <nil>
Stream data: some data
func (*Writer) AsyncFlush ¶
AsyncFlush writes any buffered bytes to a block and starts compressing it. It does not wait for the output to be written, as Flush() does.
func (*Writer) Close ¶
Close calls Flush and then closes the Writer. This is required to mark the end of the stream. Calling Close multiple times is ok, but calling CloseIndex after this will make it not return the index.
func (*Writer) CloseIndex ¶
CloseIndex calls Close and returns an index on the first call. This is not required if you are only adding the index to a stream.
func (*Writer) EncodeBuffer ¶
EncodeBuffer will add a buffer to the stream. This is the fastest way to encode a stream. However, when concurrency != 1, the input buffer must not be written to by the caller until Flush or Close has been called.
If you cannot control that, use the regular Write function.
Note that input is not buffered. This means that each write will result in discrete blocks being created. For buffered writes, use the regular Write function.
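An illustrative sketch (not a package example) of EncodeBuffer; the payload is arbitrary and is simply not touched again until Close returns:

package main

import (
	"bytes"
	"fmt"
	"log"

	"github.com/minio/minlz"
)

func main() {
	// One large buffer encoded as discrete blocks.
	payload := bytes.Repeat([]byte("0123456789abcdef"), 1<<16)

	var out bytes.Buffer
	w := minlz.NewWriter(&out)
	if err := w.EncodeBuffer(payload); err != nil {
		log.Fatal(err)
	}
	// Do not modify payload before Flush or Close has returned.
	if err := w.Close(); err != nil {
		log.Fatal(err)
	}
	fmt.Println("in:", len(payload), "out:", out.Len())
}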
func (*Writer) Flush ¶
Flush flushes the Writer to its underlying io.Writer. This does not apply padding.
func (*Writer) ReadFrom ¶
ReadFrom implements the io.ReaderFrom interface. Using this is typically more efficient since it avoids a memory copy. ReadFrom reads data from r until EOF or error. The return value n is the number of bytes read. Any error except io.EOF encountered during the read is also returned.
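An illustrative sketch (not a package example) of streaming input through ReadFrom; the source data is arbitrary:

package main

import (
	"bytes"
	"fmt"
	"log"
	"strings"

	"github.com/minio/minlz"
)

func main() {
	src := strings.NewReader(strings.Repeat("some text to compress ", 50000))

	var out bytes.Buffer
	w := minlz.NewWriter(&out)
	// ReadFrom pulls directly from src, avoiding an intermediate copy.
	if _, err := w.ReadFrom(src); err != nil {
		log.Fatal(err)
	}
	if err := w.Close(); err != nil {
		log.Fatal(err)
	}
	fmt.Println("compressed size:", out.Len())
}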
func (*Writer) Reset ¶
Reset discards the writer's state and switches the Snappy writer to write to w. This permits reusing a Writer rather than allocating a new one.
type WriterOption ¶
WriterOption is an option for creating an encoder.
func WriterAddIndex ¶
func WriterAddIndex(b bool) WriterOption
WriterAddIndex will append an index to the end of a stream when it is closed.
func WriterBlockSize ¶
func WriterBlockSize(n int) WriterOption
WriterBlockSize allows overriding the default block size. Blocks will be this size or smaller. Minimum size is 4KB and maximum size is 4MB.
Bigger blocks may give bigger throughput on systems with many cores, and will increase compression slightly, but it will limit the possible concurrency for smaller payloads for both encoding and decoding. Default block size is 1MB.
When writing Snappy compatible output using WriterSnappyCompat, the maximum block size is 64KB.
func WriterConcurrency ¶
func WriterConcurrency(n int) WriterOption
WriterConcurrency will set the concurrency, meaning the maximum number of encoders to run concurrently. The value supplied must be at least 1. By default this will be set to GOMAXPROCS.
func WriterCreateIndex ¶
func WriterCreateIndex(b bool) WriterOption
WriterCreateIndex allows disabling the default index creation. This can be used when no index will be needed - for example on network streams.
func WriterCustomEncoder ¶
func WriterCustomEncoder(fn func(dst, src []byte) int) WriterOption
WriterCustomEncoder allows overriding the encoder for blocks on the stream. The function must compress 'src' into 'dst' and return the number of bytes used in dst. The block size (initial varint) should not be added by the encoder. Returning 0 indicates the block could not be compressed. Returning a negative value indicates that compression should be attempted. The function should expect to be called concurrently.
func WriterFlushOnWrite ¶
func WriterFlushOnWrite() WriterOption
WriterFlushOnWrite will compress blocks on each call to the Write function.
This is quite inefficient as block sizes will depend on the write size.
Use WriterConcurrency(1) to also make sure that output is flushed when Write calls return; otherwise blocks will be written when compression is done.
func WriterPadding ¶
func WriterPadding(n int) WriterOption
WriterPadding will add padding to all output so the size will be a multiple of n. This can be used to obfuscate the exact output size or make blocks of a certain size. The contents will be a skippable frame, so it will be invisible by the decoder. n must be > 0 and <= 4MB. The padded area will be filled with data from crypto/rand.Reader. The padding will be applied whenever Close is called on the writer.
func WriterPaddingSrc ¶
func WriterPaddingSrc(reader io.Reader) WriterOption
WriterPaddingSrc will get random data for padding from the supplied source. By default, crypto/rand is used.
func WriterUncompressed ¶
func WriterUncompressed() WriterOption
WriterUncompressed will bypass compression. The stream will be written as uncompressed blocks only. If concurrency is > 1, CRC and output will still be done asynchronously.
Source Files ¶
Directories ¶

Path | Synopsis
---|---
cmd |
internal/filepathx | Package filepathx adds double-star globbing support to the Glob function from the core path/filepath package.
internal/readahead | Package readahead will do asynchronous read-ahead from an input io.Reader and make the data available as an io.Reader.
internal |