Documentation ¶
Overview ¶
Package agent deals with High Availability tasks in a cluster.

Tasks include:

* Marking nodes that have lost quorum as tainted, to repel new Pods
* Force-deleting Pods and VolumeAttachments on a node that has lost quorum, triggering failover
* Reconfiguring DRBD to report IO errors instead of suspending IO when Pods should be stopped
* Stopping Pods that are running on force-io-error resources
Index ¶
Constants ¶
const (
	PersistentVolumeByResourceDefinitionIndex    = "pv-by-rd"
	PersistentVolumeByPersistentVolumeClaimIndex = "pv-by-pvc"
	PodByPersistentVolumeClaimIndex              = "pod-by-pvc"
	SatellitePodIndex                            = "satellite-pod"
	VolumeAttachmentByPersistentVolumeIndex      = "va-by-pv"
)
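These constants name cache indexes used by the agent. The following is a minimal sketch of how such an index could be queried from outside the package; the indexer wiring, the semantics of the index, and the agent import path are assumptions, not part of the documented API.

package haexample

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/tools/cache"

	agent "example.com/ha/pkg/agent" // hypothetical import path for this package
)

// persistentVolumesForRD looks up PersistentVolumes by LINSTOR resource
// definition name, assuming an indexer was configured elsewhere with an index
// function registered under PersistentVolumeByResourceDefinitionIndex.
func persistentVolumesForRD(pvIndexer cache.Indexer, rdName string) ([]*corev1.PersistentVolume, error) {
	objs, err := pvIndexer.ByIndex(agent.PersistentVolumeByResourceDefinitionIndex, rdName)
	if err != nil {
		return nil, err
	}

	pvs := make([]*corev1.PersistentVolume, 0, len(objs))
	for _, obj := range objs {
		pv, ok := obj.(*corev1.PersistentVolume)
		if !ok {
			return nil, fmt.Errorf("unexpected object of type %T in PV index", obj)
		}
		pvs = append(pvs, pv)
	}
	return pvs, nil
}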
Variables ¶
This section is empty.
Functions ¶
func TaintNode ¶
func TaintNode(ctx context.Context, client kubernetes.Interface, node *corev1.Node, taint corev1.Taint, disableNodeTaints bool) (bool, error)
TaintNode adds the specified taint to the node.

Returns false, nil if the taint was already present or if applying node taints has been disabled.
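A minimal sketch of calling TaintNode from outside the package. The taint key and effect, the node name, and the agent import path are illustrative assumptions; only the function signature comes from this package.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"

	agent "example.com/ha/pkg/agent" // hypothetical import path for this package
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	ctx := context.Background()
	node, err := client.CoreV1().Nodes().Get(ctx, "worker-1", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Taint the node so new Pods avoid it; key and effect chosen for illustration.
	taint := corev1.Taint{
		Key:    "example.com/quorum-lost",
		Effect: corev1.TaintEffectNoSchedule,
	}

	added, err := agent.TaintNode(ctx, client, node, taint, false)
	if err != nil {
		panic(err)
	}
	fmt.Println("taint added:", added) // false if already present or taints disabled
}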
Types ¶
type DrbdConnection ¶
type DrbdDevice ¶ added in v1.1.1
type DrbdDevice struct {
Quorum bool `json:"quorum"`
}
type DrbdResourceState ¶
type DrbdResourceState struct {
	Name            string           `json:"name"`
	Role            string           `json:"role"`
	Suspended       bool             `json:"suspended"`
	ForceIoFailures bool             `json:"force-io-failures"`
	Devices         []DrbdDevice     `json:"devices"`
	Connections     []DrbdConnection `json:"connections"`
}
DrbdResourceState is the parsed output of "drbdsetup status --json".
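Given the JSON tags above, a state slice could in principle be produced by decoding the command output directly, as in the sketch below. How the package actually invokes drbdsetup is not specified here, and the agent import path is an assumption.

package haexample

import (
	"encoding/json"
	"fmt"
	"os/exec"

	agent "example.com/ha/pkg/agent" // hypothetical import path for this package
)

// currentDrbdState runs drbdsetup and decodes its JSON output. It assumes the
// command prints a single JSON array describing all local resources.
func currentDrbdState() ([]agent.DrbdResourceState, error) {
	out, err := exec.Command("drbdsetup", "status", "--json").Output()
	if err != nil {
		return nil, fmt.Errorf("running drbdsetup: %w", err)
	}

	var states []agent.DrbdResourceState
	if err := json.Unmarshal(out, &states); err != nil {
		return nil, fmt.Errorf("parsing drbdsetup output: %w", err)
	}
	return states, nil
}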
func (*DrbdResourceState) HasQuorum ¶
func (d *DrbdResourceState) HasQuorum() bool
HasQuorum returns true if all local devices have quorum.
func (*DrbdResourceState) MayPromote ¶
func (d *DrbdResourceState) MayPromote() bool
MayPromote returns the best local approximation of the "may promote" flag from "drbdsetup events2".
func (*DrbdResourceState) Primary ¶
func (d *DrbdResourceState) Primary() bool
Primary returns true if the local resource is primary.
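Based only on the documented behaviour and the struct fields above, HasQuorum and Primary amount to roughly the following stand-alone equivalents; the real methods may differ, and MayPromote is omitted because its exact logic is only described as an approximation.

package haexample

import (
	agent "example.com/ha/pkg/agent" // hypothetical import path for this package
)

// hasQuorum mirrors DrbdResourceState.HasQuorum: true only if every local
// device reports quorum.
func hasQuorum(d *agent.DrbdResourceState) bool {
	for _, dev := range d.Devices {
		if !dev.Quorum {
			return false
		}
	}
	return true
}

// isPrimary mirrors DrbdResourceState.Primary by checking the DRBD role
// string reported by drbdsetup (assumed to be "Primary" for a primary resource).
func isPrimary(d *agent.DrbdResourceState) bool {
	return d.Role == "Primary"
}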
type DrbdResources ¶
type DrbdResources interface {
	// StartUpdates starts the process of updating the current state of DRBD resources.
	StartUpdates(ctx context.Context) error
	// Get returns the resource state at the time the last update was made.
	Get() []DrbdResourceState
}
DrbdResources keeps track of DRBD resources.
func NewDrbdResources ¶
func NewDrbdResources(resync time.Duration) DrbdResources
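A usage sketch, under the assumption that StartUpdates blocks until the context is cancelled (the documentation does not say); intervals and logging are illustrative, and the agent import path is an assumption.

package main

import (
	"context"
	"log"
	"time"

	agent "example.com/ha/pkg/agent" // hypothetical import path for this package
)

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	// Refresh the DRBD state roughly every 10 seconds.
	resources := agent.NewDrbdResources(10 * time.Second)

	// Assumption: StartUpdates keeps running until ctx is cancelled,
	// so it gets its own goroutine.
	go func() {
		if err := resources.StartUpdates(ctx); err != nil {
			log.Printf("DRBD updates stopped: %v", err)
		}
	}()

	for range time.Tick(10 * time.Second) {
		for _, state := range resources.Get() {
			log.Printf("resource %s: primary=%t quorum=%t", state.Name, state.Primary(), state.HasQuorum())
		}
	}
}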
type Options ¶
type Options struct {
	// NodeName is the name of the local node, as used by DRBD and Kubernetes.
	NodeName string
	// RestConfig is the config used to connect to Kubernetes.
	RestConfig *rest.Config
	// DeletionGraceSec is the number of seconds to wait for graceful pod termination in eviction/deletion requests.
	DeletionGraceSec int64
	// ReconcileInterval is the maximum interval between reconciliation runs.
	ReconcileInterval time.Duration
	// ResyncInterval is the maximum interval between resyncs of internal caches with Kubernetes.
	ResyncInterval time.Duration
	// DrbdStatusInterval is the maximum interval between DRBD state updates.
	DrbdStatusInterval time.Duration
	// OperationTimeout is the timeout used for reconcile operations.
	OperationTimeout time.Duration
	// FailOverTimeout is the minimum wait between noticing quorum loss and starting the fail-over process.
	FailOverTimeout time.Duration
	// FailOverUnsafePods indicates if Pods using other, unknown volume types should be failed over as well.
	FailOverUnsafePods bool
	// SatellitePodSelector selects the Pods that should be considered LINSTOR Satellites.
	// If the DRBD connection name matches one of these Pods, the Kubernetes Node name is taken from these Pods.
	SatellitePodSelector labels.Selector
	// DisableNodeTaints prevents the nodes in a cluster from being tainted.
	DisableNodeTaints bool
}
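An illustrative way to fill in Options. Every value below is an example rather than a default recommended by this package, and the satellite Pod label and agent import path are guesses.

package haexample

import (
	"time"

	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/client-go/rest"

	agent "example.com/ha/pkg/agent" // hypothetical import path for this package
)

func exampleOptions() (*agent.Options, error) {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		return nil, err
	}

	// Select LINSTOR satellite Pods by a label; the key/value pair is a guess.
	satelliteSelector := labels.SelectorFromSet(labels.Set{
		"app.kubernetes.io/component": "linstor-satellite",
	})

	return &agent.Options{
		NodeName:             "worker-1",
		RestConfig:           cfg,
		DeletionGraceSec:     30,
		ReconcileInterval:    10 * time.Second,
		ResyncInterval:       5 * time.Minute,
		DrbdStatusInterval:   5 * time.Second,
		OperationTimeout:     30 * time.Second,
		FailOverTimeout:      45 * time.Second,
		FailOverUnsafePods:   false,
		SatellitePodSelector: satelliteSelector,
		DisableNodeTaints:    false,
	}, nil
}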
type ReconcileRequest ¶
type ReconcileRequest struct {
	RefTime     time.Time
	Resource    *DrbdResourceState
	Volume      *corev1.PersistentVolume
	Pods        []*corev1.Pod
	Attachments []*storagev1.VolumeAttachment
	Nodes       []*corev1.Node
}
type Reconciler ¶
type Reconciler interface {
RunForResource(ctx context.Context, req *ReconcileRequest, recorder events.EventRecorder) error
}
func NewFailoverReconciler ¶
func NewFailoverReconciler(opt *Options, client kubernetes.Interface, pvIndexer cache.Indexer) Reconciler
NewFailoverReconciler creates a reconciler that "fails over" pods that are on storage without quorum.
The reconciler recognizes storage without quorum by:

* Having the local copy be promotable
* Having Pods running on the node
* Having Pods that mount the volume read-write (otherwise the promotable information is useless)
* Having a connection to the peer node that is not connected

If all of these are true, it waits for a short timeout before starting the actual "fail over" process. The process involves:

* Adding a taint to the node, causing new Pods to avoid it.
* Evicting all Pods using that volume from the failed node, creating new Pods to replace them.
* Deleting the VolumeAttachment, informing Kubernetes that attaching the volume to a new node is fine.
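A wiring sketch for the failover reconciler. The indexer, event recorder, and the cluster objects in the request are assumed to be prepared elsewhere (for example by the agent's own caches); only the call shape follows from the signatures above, and the agent import path is an assumption.

package haexample

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	storagev1 "k8s.io/api/storage/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/events"

	agent "example.com/ha/pkg/agent" // hypothetical import path for this package
)

// failOverOnce runs the failover reconciler for a single DRBD resource.
// All inputs are assumed to have been looked up elsewhere.
func failOverOnce(
	ctx context.Context,
	opt *agent.Options,
	client kubernetes.Interface,
	pvIndexer cache.Indexer,
	recorder events.EventRecorder,
	state *agent.DrbdResourceState,
	pv *corev1.PersistentVolume,
	pods []*corev1.Pod,
	attachments []*storagev1.VolumeAttachment,
	nodes []*corev1.Node,
) error {
	reconciler := agent.NewFailoverReconciler(opt, client, pvIndexer)

	req := &agent.ReconcileRequest{
		RefTime:     time.Now(),
		Resource:    state,
		Volume:      pv,
		Pods:        pods,
		Attachments: attachments,
		Nodes:       nodes,
	}

	return reconciler.RunForResource(ctx, req, recorder)
}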
func NewForceIoErrorReconciler ¶
func NewForceIoErrorReconciler(opt *Options, client kubernetes.Interface) Reconciler
NewForceIoErrorReconciler creates a reconciler that evicts pods if a volume is reporting IO errors.
If DRBD is in "force IO failures" mode, all opener processes see IO errors. This lasts until all openers have closed the DRBD device, at which point DRBD starts behaving normally again. For all openers to be closed, all local Pods need to be forced to stop. That is what this reconciler does:

* Adding a taint to the node, causing new Pods to avoid it.
* Evicting all Pods using that volume from the failed node, creating new Pods to replace them.
func NewSuspendedPodReconciler ¶
func NewSuspendedPodReconciler(opt *Options) Reconciler
NewSuspendedPodReconciler creates a reconciler that gets suspended Pods to resume termination.
While DRBD is suspending IO, all processes (including Pods and filesystems) using the device are stuck. To resume, one can force DRBD to report IO errors instead. The reconciler does just that when a local Pod should be stopped while its IO is suspended by DRBD. This enables a (relatively) clean shutdown of the resource without a node reboot.