Documentation ¶
Overview ¶
Package tessera provides an implementation of a tile-based logging framework.
Index ¶
- Constants
- Variables
- func InMemoryDedupe(delegate func(ctx context.Context, e *Entry) IndexFuture, size uint) func(context.Context, *Entry) IndexFuture
- func NewCertificateTransparencySequencedWriter(s Storage) func(context.Context, *ctonly.Entry) IndexFuture
- func WithBatching(maxSize uint, maxAge time.Duration) func(*options.StorageOptions)
- func WithCTLayout() func(*options.StorageOptions)
- func WithCheckpointInterval(interval time.Duration) func(*options.StorageOptions)
- func WithCheckpointSigner(s note.Signer, additionalSigners ...note.Signer) func(*options.StorageOptions)
- func WithPushback(maxOutstanding uint) func(*options.StorageOptions)
- type Entry
- type IndexFuture
- type IntegrationAwaiter
- type Storage
Constants ¶
const (
	// DefaultBatchMaxSize is used by storage implementations if no WithBatching option is provided when instantiating it.
	DefaultBatchMaxSize = 256
	// DefaultBatchMaxAge is used by storage implementations if no WithBatching option is provided when instantiating it.
	DefaultBatchMaxAge = 250 * time.Millisecond
	// DefaultCheckpointInterval is used by storage implementations if no WithCheckpointInterval option is provided when instantiating it.
	DefaultCheckpointInterval = 10 * time.Second
)
Variables ¶
var ErrPushback = errors.New("too many unintegrated entries")
ErrPushback is returned by underlying storage implementations when there are too many entries with indices assigned but which have not yet been integrated into the tree.
Personalities encountering this error should apply back-pressure to the source of new entries in an appropriate manner (e.g. for HTTP services, return a 503 with a Retry-After header).
Functions ¶
func InMemoryDedupe ¶
func InMemoryDedupe(delegate func(ctx context.Context, e *Entry) IndexFuture, size uint) func(context.Context, *Entry) IndexFuture
InMemoryDedupe wraps an Add function to prevent duplicate entries being written to the underlying storage by keeping an in-memory cache of recently seen entries. Where an existing entry has already been `Add`ed, the previous `IndexFuture` will be returned. When no entry is found in the cache, the delegate method will be called to store the entry, and the result will be registered in the cache.
Internally this uses an in-memory cache whose maximum size is configured by the size parameter; if the entry being `Add`ed is not found in the cache, the delegate is called.
This object can be used in isolation, or in conjunction with a persistent dedupe implementation. When using this with a persistent dedupe, the persistent layer should be the delegate of this InMemoryDedupe. This allows recent duplicates to be deduplicated in memory, reducing the need to make calls to a persistent storage.
func NewCertificateTransparencySequencedWriter ¶
func NewCertificateTransparencySequencedWriter(s Storage) func(context.Context, *ctonly.Entry) IndexFuture
NewCertificateTransparencySequencedWriter returns a function which knows how to add a CT-specific entry type to the log.
This entry point MUST ONLY be used for CT logs participating in the CT ecosystem. It should not be used as the basis for any other/new transparency application, as this protocol:

a) embodies some techniques which are not considered best practice (it does this to retain backwards compatibility with RFC 6962)
b) is not compatible with the https://c2sp.org/tlog-tiles API, which we _very strongly_ encourage you to use instead.
Users of this MUST NOT call `Add` on the underlying storage directly.
Returns a future, which resolves to the assigned index in the log, or an error.
func WithBatching ¶
func WithBatching(maxSize uint, maxAge time.Duration) func(*options.StorageOptions)
WithBatching configures the batching behaviour of leaves being sequenced. A batch will be allowed to grow in memory until either:
- the number of entries in the batch reach maxSize
- the first entry in the batch has reached maxAge
At this point the batch will be sent to the sequencer.
Configuring these parameters allows the personality to tune for the desired balance of sequencing latency and cost. In general, larger batches lower the cost of operation, while more frequent batches reduce the time taken for entries to be included in the log.
If this option isn't provided, storage implementations will use the DefaultBatchMaxSize and DefaultBatchMaxAge consts above.
func WithCTLayout ¶
func WithCTLayout() func(*options.StorageOptions)
WithCTLayout instructs the underlying storage to use a Static CT API compatible scheme for layout.
func WithCheckpointInterval ¶
func WithCheckpointInterval(interval time.Duration) func(*options.StorageOptions)
WithCheckpointInterval configures the frequency at which Tessera will attempt to create & publish a new checkpoint.
Well behaved clients of the log will only "see" newly sequenced entries once a new checkpoint is published, so it's important to set this value to something which works well within your ecosystem.
Regularly publishing new checkpoints:
- helps show that the log is "live", even if no entries are being added.
- enables clients of the log to reason about how frequently they need to have their view of the log refreshed, which in turn helps reduce work/load across the ecosystem.
Note that this option probably only makes sense for long-lived applications (e.g. HTTP servers).
If this option isn't provided, storage implementations will use the DefaultCheckpointInterval const above.
func WithCheckpointSigner ¶
func WithCheckpointSigner(s note.Signer, additionalSigners ...note.Signer) func(*options.StorageOptions)
WithCheckpointSigner is an option for setting the note signer and verifier to use when creating and parsing checkpoints.
A primary signer must be provided:
- the primary signer is the "canonical" signing identity which should be used when creating new checkpoints.
Zero or more additional signers may also be provided. This enables cases like:
- a rolling key rotation, where checkpoints are signed by both the old and new keys for some period of time,
- using different signature schemes for different audiences, etc.
When providing additional signers, their names MUST be identical to the primary signer name, and this name will be used as the checkpoint Origin line.
Checkpoints signed by these signer(s) will be standard checkpoints as defined by https://c2sp.org/tlog-checkpoint.
func WithPushback ¶
func WithPushback(maxOutstanding uint) func(*options.StorageOptions)
WithPushback allows configuration of when the storage should start pushing back on add requests.
maxOutstanding is the number of "in-flight" add requests - i.e. the number of entries with sequence numbers assigned, but which are not yet integrated into the log.
Types ¶
type Entry ¶
type Entry struct {
// contains filtered or unexported fields
}
Entry represents an entry in a log.
func (Entry) Identity ¶
Identity returns an identity which may be used to de-duplicate entries as they are being added to the log.
func (Entry) Index ¶
Index returns the index assigned to the entry in the log, or nil if no index has been assigned.
func (Entry) LeafHash ¶
LeafHash is the Merkle leaf hash which will be used for this entry in the log. Note that in almost all cases, this should be the RFC6962 definition of a leaf hash.
func (*Entry) MarshalBundleData ¶
MarshalBundleData returns this entry's data in a format ready to be appended to an EntryBundle.
Note that MarshalBundleData _may_ be called multiple times, potentially with different values for index (e.g. if there's a failure in the storage when trying to persist the assignment), so index should not be considered final until the storage Add method has returned successfully with the durably assigned index.
type IndexFuture ¶
IndexFuture is the signature of a function which can return an assigned index or error.
Implementations of this func are likely to be "futures", or a promise to return this data at some point in the future, and as such will block when called if the data isn't yet available.
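The blocking future/promise shape described above can be sketched in plain Go. The `indexFuture` type and `newFuture` helper below are illustrative stand-ins (tessera's IndexFuture is just the function type; how implementations resolve it is up to them): calling the future before the index is assigned blocks, and repeated calls return the cached result.

```go
package main

import "fmt"

// indexFuture mirrors the shape of tessera.IndexFuture: a func which blocks
// until an index has been assigned (or an error occurred), then returns it.
type indexFuture func() (uint64, error)

// newFuture returns a future and a resolve callback. NOTE: the returned
// future is not safe for concurrent callers; a real implementation would
// guard resolution with sync.Once or similar.
func newFuture() (indexFuture, func(uint64, error)) {
	type result struct {
		idx uint64
		err error
	}
	ch := make(chan result, 1)
	var res result
	done := false
	f := func() (uint64, error) {
		if !done {
			res = <-ch // block until resolved
			done = true
		}
		return res.idx, res.err
	}
	resolve := func(idx uint64, err error) { ch <- result{idx, err} }
	return f, resolve
}

func main() {
	f, resolve := newFuture()
	go resolve(42, nil) // e.g. the sequencer assigns index 42 asynchronously
	idx, err := f()     // blocks until resolve has been called
	fmt.Println(idx, err)
}
```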
type IntegrationAwaiter ¶
type IntegrationAwaiter struct {
// contains filtered or unexported fields
}
IntegrationAwaiter allows client threads to block until a leaf is both sequenced and integrated. A single long-lived IntegrationAwaiter instance should be reused for all requests in the application code as there is some overhead to each one; the core of an IntegrationAwaiter is a poll loop that will fetch checkpoints whenever it has clients waiting.
The expected call pattern is:
i, cp, err := awaiter.Await(ctx, storage.Add(myLeaf))
When used this way, it requires very little code at the point of use to block until the new leaf is integrated into the tree.
func NewIntegrationAwaiter ¶
func NewIntegrationAwaiter(ctx context.Context, readCheckpoint func(ctx context.Context) ([]byte, error), pollPeriod time.Duration) *IntegrationAwaiter
NewIntegrationAwaiter provides an IntegrationAwaiter that can be cancelled using the provided context. The IntegrationAwaiter will poll every `pollPeriod` to fetch checkpoints using the `readCheckpoint` function.
func (*IntegrationAwaiter) Await ¶
func (a *IntegrationAwaiter) Await(ctx context.Context, future IndexFuture) (uint64, []byte, error)
Await blocks until the IndexFuture is resolved, and this new index has been integrated into the log, i.e. the log has made a checkpoint available that commits to this new index. When this happens, Await returns the index at which the leaf has been added, and a checkpoint that commits to this index.
This operation can be aborted early by cancelling the context. In this event, or in the event that there is an error getting a valid checkpoint, an error will be returned from this method.
type Storage ¶
type Storage interface {
	// Add should durably assign an index to the provided Entry, returning a future to access that value.
	//
	// Implementations MUST call the MarshalBundleData method on the entry before persisting/integrating it.
	Add(context.Context, *Entry) IndexFuture
}
Storage describes the expected functions from Tessera storage implementations.
Directories ¶
Path | Synopsis
---|---
api | Package api contains the tiles definitions from the [tlog-tiles API].
api/layout | Package layout contains routines for specifying the path layout of Tessera logs, which is really to say that it provides functions to calculate paths used by the [tlog-tiles API].
client | Package client provides client support for interacting with logs that use the [tlog-tiles API].
cmd |
cmd/conformance/aws | aws is a simple personality allowing to run conformance/compliance/performance tests and showing how to use the Tessera AWS storage implementation.
cmd/conformance/gcp | gcp is a simple personality allowing to run conformance/compliance/performance tests and showing how to use the Tessera GCP storage implementation.
cmd/conformance/mysql | mysql is a simple personality allowing to run conformance/compliance/performance tests and showing how to use the Tessera MySQL storage implementation.
cmd/conformance/posix | posix runs a web server that allows new entries to be POSTed to a tlog-tiles log stored on a posix filesystem.
cmd/examples/posix-oneshot | posix-oneshot is a command line tool for adding entries to a local tlog-tiles log stored on a posix filesystem.
ctonly | Package ctonly has support for the CT Tiles API.
internal |
internal/hammer | hammer is a tool to load test a Tessera log.
internal/parse | Package parse contains internal methods for parsing data structures quickly, if unsafely.
storage |
storage/aws | Package aws contains an AWS-based storage implementation for Tessera.
storage/gcp | Package gcp contains a GCP-based storage implementation for Tessera.
storage/internal | Package storage provides implementations and shared components for tessera storage backends.
storage/mysql | Package mysql contains a MySQL-based storage implementation for Tessera.