Documentation
Overview
Package benchmarks benchmarks the performance of the Fleet controllers against an existing Fleet installation. Each experiment is aligned with a bundle's lifecycle: it creates a resource and waits for Fleet to reconcile it. Experiments may have requirements, such as the number of clusters present in the installation. Each experiment collects multiple metrics: the number and duration of reconciliations, the overall duration of the experiment, the number of Kubernetes resources created, and the CPU and memory usage of the controllers.
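A minimal sketch of what one such experiment might look like, assuming a Ginkgo/Gomega suite (suggested by the "eventually" timeout input documented under TestBenchmarkSuite below); createBundle, bundleReady, the timeout variable and the group value are hypothetical stand-ins for illustration, not APIs of this package.

package benchmarks_test

import (
    "time"

    . "github.com/onsi/ginkgo/v2"
    . "github.com/onsi/gomega"
)

// Hypothetical helpers standing in for however the suite creates a bundle and
// checks that Fleet has reconciled it; they are not part of this package.
func createBundle(name string, labels map[string]string) string { return name }
func bundleReady(name string) bool                              { return true }

// In the real suite the timeout is taken from an env var.
var timeout = 5 * time.Minute

var _ = Describe("bundle experiment (sketch)", func() {
    It("creates a labeled bundle and waits for Fleet to reconcile it", func() {
        // Created bundles carry the benchmark group label so they, and the
        // bundledeployments derived from them, can be found and cleaned up.
        bundle := createBundle("benchmark-bundle", map[string]string{
            "fleet.cattle.io/benchmark-group": "sketch",
        })

        // Metrics such as reconcile counts and experiment duration would be
        // collected around this wait.
        Eventually(func() bool {
            return bundleReady(bundle)
        }, timeout).Should(BeTrue())
    })
})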
Index
Constants
const (
    // GroupLabel is used on bundles. One cannot
    // use v1alpha1.RepoLabel because fleet 0.9 deletes bundles with an
    // invalid repo label. However, bundle labels are propagated to
    // bundledeployments.
    GroupLabel = "fleet.cattle.io/benchmark-group"

    // BenchmarkLabel is set to "true" on clusters that should be included
    // in the benchmark.
    BenchmarkLabel = "fleet.cattle.io/benchmark"
)
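These label keys are ordinary Kubernetes labels, so they can be turned into selectors with standard apimachinery helpers. The following is a self-contained sketch: the constants are copied locally and the "example-group" value is made up for illustration; this is not code from the package.

package main

import (
    "fmt"

    "k8s.io/apimachinery/pkg/labels"
)

// Local copies of the package constants so the sketch compiles on its own.
const (
    GroupLabel     = "fleet.cattle.io/benchmark-group"
    BenchmarkLabel = "fleet.cattle.io/benchmark"
)

func main() {
    // Only clusters labeled fleet.cattle.io/benchmark=true take part in a
    // benchmark run; a selector like this can be used when listing them.
    clusterSel := labels.SelectorFromSet(labels.Set{BenchmarkLabel: "true"})

    // Bundles carry the group label, and because bundle labels are propagated
    // to bundledeployments, the same selector finds everything an experiment
    // created when it is time to clean up.
    cleanupSel := labels.SelectorFromSet(labels.Set{GroupLabel: "example-group"})

    fmt.Println(clusterSel.String())
    fmt.Println(cleanupSel.String())
}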
Variables
This section is empty.
Functions
func TestBenchmarkSuite
TestBenchmarkSuite runs the benchmark suite for Fleet.
Inputs for this benchmark suite are provided via env vars (a sketch of reading them follows the list):

  * the cluster registration namespace, which contains the clusters
  * the timeout used for "eventually" assertions
  * whether metrics should be recorded
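A sketch of how those inputs might be read at suite start-up; the env var names (FLEET_BENCH_NAMESPACE, FLEET_BENCH_TIMEOUT, FLEET_BENCH_METRICS) and the defaults are placeholders, not the names this package actually uses.

package main

import (
    "fmt"
    "os"
    "strconv"
    "time"
)

type suiteConfig struct {
    namespace     string        // cluster registration namespace containing the clusters
    timeout       time.Duration // timeout used for "eventually" style assertions
    recordMetrics bool          // whether controller metrics are recorded during the run
}

func configFromEnv() (suiteConfig, error) {
    cfg := suiteConfig{
        namespace:     os.Getenv("FLEET_BENCH_NAMESPACE"), // placeholder name
        timeout:       5 * time.Minute,                    // placeholder default
        recordMetrics: true,                               // placeholder default
    }
    if v := os.Getenv("FLEET_BENCH_TIMEOUT"); v != "" { // placeholder name
        d, err := time.ParseDuration(v)
        if err != nil {
            return cfg, fmt.Errorf("invalid timeout %q: %w", v, err)
        }
        cfg.timeout = d
    }
    if v := os.Getenv("FLEET_BENCH_METRICS"); v != "" { // placeholder name
        b, err := strconv.ParseBool(v)
        if err != nil {
            return cfg, fmt.Errorf("invalid metrics flag %q: %w", v, err)
        }
        cfg.recordMetrics = b
    }
    return cfg, nil
}

func main() {
    cfg, err := configFromEnv()
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    fmt.Printf("%+v\n", cfg)
}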
Types
This section is empty.