This is a work in progress. There are still important features missing.
Turandot

Orchestrate and compose Kubernetes workloads using TOSCA. Turandot supports policy-based service composition using service templates stored in an inventory. Workloads can comprise both standard and custom Kubernetes resources, as well as their operators. They can be deployed on a single cluster or on multi-cluster clouds. Virtual machines are supported via KubeVirt. External orchestrators (e.g. Ansible) are supported via custom artifacts (e.g. Ansible playbooks) encapsulated as TOSCA types. See the included examples.
Turandot targets complex, large-scale workloads. Moreover, it intends to handle the orchestration aspect of NFV (Network Function Virtualization) MANO (Management and Orchestration), which is a crucial component for deploying heterogeneous network services on clouds at scale. Included is a comprehensive example of a multi-cluster telephony network service modeled entirely in TOSCA.
Get It

Rationale
Design-time: TOSCA's extensibility via an object-oriented grammar is analogous to Kubernetes's
extensibility via custom resource definitions and operators. TOSCA's added value is in providing a
composable and validated graph of resource interrelations, effectively imbuing Kubernetes resources
with architectural intent.
Run-time: Turandot manages resources together as single, coherent workloads—whether we call
them "applications" or "services"—even across cluster boundaries, ensuring consistency and
integration as well as allowing for cascading policies for allocation, composition, networking,
security, etc.
How It Works
The core is a Kubernetes operator that:
1. Can work with an internal (built-in) inventory or with external inventories to retrieve CSAR-packaged service templates. A CSAR (Cloud Service Archive) is a zip file containing a TOSCA service template, TOSCA profiles, and other files ("artifacts") required for orchestration (see #4, below).
2. Uses Puccini to compile the CSAR-packaged service templates into the Clout intermediary format (a minimal sketch of such a template follows this list).
3. Renders the Clout to Kubernetes resources and schedules them as integrated workloads.
4. Deploys and activates artifacts packaged in the CSAR file. These include container images (as well as KubeVirt virtual machine images), cloud-native configuration tools such as scripts, playbooks, and recipes, and Kubernetes operators. The configuration tools have access to the entire workload topology, allowing them to essentially configure themselves.
5. Can delegate orchestration to Turandot operators in remote clusters (see multi-cluster workloads, below).
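For illustration, the service template inside a CSAR might look roughly like the following minimal sketch. The profile import path and the node type name are assumptions made for this example, not necessarily the exact names used by the included Kubernetes profile:

```yaml
tosca_definitions_version: tosca_simple_yaml_1_3

# Minimal sketch of a CSAR-packaged service template.
# The import path and the "Deployment" node type are illustrative
# stand-ins, not necessarily the names used by the included profiles.
imports:
- profiles/kubernetes/profile.yaml

topology_template:
  node_templates:
    hello-world:
      type: Deployment # hypothetical type from a Kubernetes profile
      properties:
        metadata:
          name: hello-world
        spec:
          replicas: 1
```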
The Turandot operator can be controlled using the turandot utility, e.g.:

```
turandot service deploy myservice --file=myservice.csar
```
⮕ Documentation
The Cycle of Life
Day -1: Modeling. TOSCA is used to create "profiles" of reusable, composable types, which
together provide a validated and validating model for the target domain. TOSCA profiles vastly
simplify the work of the service template designer. For example, our telephony network service
example uses profiles for Kubernetes, KubeVirt, network services (including data planes), and
telephony.
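For a rough sense of what a profile provides, here is a sketch of a reusable, validating node type; the type name, parent type, and property are invented for illustration and are not taken from the included profiles:

```yaml
tosca_definitions_version: tosca_simple_yaml_1_3

# Rough sketch of what a domain profile might declare: a reusable node type
# that both documents and validates the model. Names and constraints here
# are illustrative only.
node_types:
  telephony.Switch:
    derived_from: tosca.nodes.Root
    properties:
      max-calls:
        type: integer
        constraints:
        - greater_than: 0
```

Service template designers then compose node templates of such types rather than writing raw resource specs by hand.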
Day 0: Design. Solution architects compose the models provided by the TOSCA profiles into
service templates, either by writing the TOSCA manually, or by using a wonderful graphical TOSCA IDE
(that is yet to be created!). The templates are tested in lab and staging environments using
CI/CD-style automation.
Day 1: Operations Handoff. The service templates are ready to be instantiated in production.
A ticket from an operations support system (OSS) initiates the transfer to a managed multi-cluster
cloud. Turandot is installed on the target clusters (or available as a delegate from central
clusters) and takes it from there.
Day 2+: Cloud-native Operations. Once they are up and running, the services should orchestrate themselves by adapting to changing internal and external conditions, as well as to triggered and manual actions from operations. Changes include scaling and healing, as well as more elaborate transformations. The Turandot operator will continue to monitor these changes and update the Clout. Components can refer to the Clout as the "single source of truth" to see the complete topology in order to make self-orchestration decisions, as well as to check against policies to which they must or can adhere. Machine learning and AI can be applied to the Clout in order to make the best possible runtime orchestration decisions.
Multi-Cluster Workloads
What if your workload crosses the boundaries of a single Kubernetes cluster?
Each cluster will have its own Turandot operator that manages resources only for that cluster; however, the Clout will always contain a view of all resources, ensuring workload integration.
Each operator can delegate work to specific other operators, according to composition policy.
This network of operators essentially turns your multi-cluster environment into a single cloud.
Note that allowing operators to network with each other across cluster boundaries is beyond the scope of Turandot; however, you can definitely use Turandot to orchestrate this control plane itself. Often this will be an SDN solution, such as shared virtual LANs across SD-WAN connections, using a combination of Kubernetes CNI providers, Multus, Cilium, Network Service Mesh, custom proxies, etc. Indeed, one size does not fit all, which is why Turandot insists on not having an opinion.
Namespaced or Cluster Mode
The Turandot operator can work in either "namespaced mode", in which it can only manage resources in
the namespace in which it is installed, or "cluster mode", in which it can manage all namespaces.
Cluster mode requires elevated permissions, and as such may not be applicable in multi-tenancy
scenarios. A more secure configuration is to have Turandot installed only in supported namespaces
within a cluster and to allow secure delegation between them, in effect treating it like the
multi-cluster scenario (see above).
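The distinction is essentially Kubernetes's usual RBAC scoping. Very roughly (this is not the operator's actual install manifest, just the general shape of namespace-scoped versus cluster-scoped permissions):

```yaml
# Namespaced mode: a Role grants access only within a single namespace.
# Names and the namespace below are illustrative.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: turandot-namespaced
  namespace: workspace
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
---
# Cluster mode: a ClusterRole grants access across all namespaces,
# which is why it may be unsuitable for multi-tenant clusters.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: turandot-cluster
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
```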
FAQ
Is this a lifecycle manager (LCM) for Kubernetes workloads?
No, or not exactly. In Kubernetes, LCM is hardcoded behind the scheduling paradigm. Of course work
is done by built-in and custom controllers to provision containers, wire up the networking, run init
containers and sidecars, attach storage blocks, etc., but from an orchestration perspective LCM is
largely reduced to a simple binary: either the resource is scheduled or it isn't.
Individual resources can be updated, and this can have cascading effects on other resources, but
these effects are event-driven, not necessarily sequential, and are certainly not "workflows" or
atomic transactions that can be rolled back. Changes are expected to be dynamic, asynchronous, and
"eventual". In other words: the total state of the workload is emergent rather than imposed.
This is so different from "legacy" LCM that it's probably best not to use that term in this
scenario. Kubernetes introduces a new, cloud-native orchestration paradigm.
Why is there a built-in inventory? Shouldn't the inventory be managed externally?
Certainly, for production systems a robust inventory is necessary. Turandot can work with various inventory backends, as well as any container image registry adhering to the OCI or Docker standards, e.g. Quay and Harbor. Indeed, the internal inventory is a simple Docker registry. Note that Turandot can store and retrieve CSAR files from such registries even though they are not container images.
The built-in inventory does not have to be used in production, but it can be useful as a local cache if the registries are slow to access or if access is unreliable, e.g. at cloud edge datacenters.
Why use TOSCA and CSARs instead of packaged Helm charts?
A Helm chart is essentially a collection of text templates for low-level Kubernetes YAML resource specs stored in a bespoke repository format. Prior to version 3, Helm had an in-cluster controller named Tiller; it was removed in version 3, leaving Helm entirely devoted to text templating.
Text templating is a rather miserable method for generating YAML, and it's hard to use it to model
reusable types. By contrast, TOSCA is a strictly-typed object-oriented language that supports
inheritance and topological composition, making it vastly superior for modeling complex cloud
workloads. It's an industry-supported standard created exactly for this purpose. Note that TOSCA
does not have to introduce "abstraction"; indeed, the included Kubernetes TOSCA profile is a
one-to-one mapping of raw Kubernetes resources. TOSCA can and should precisely model the target
domain.
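To make the contrast concrete, here is a rough sketch of the typing and composition that TOSCA grammar allows; the WebServer and HardenedWebServer names are invented for illustration:

```yaml
tosca_definitions_version: tosca_simple_yaml_1_3

# Illustrative only: invented type names showing inheritance and
# topological composition, which plain text templating cannot express.
node_types:
  WebServer:
    derived_from: tosca.nodes.Root
    requirements:
    - database:
        capability: tosca.capabilities.Node
        relationship: tosca.relationships.DependsOn

  HardenedWebServer:
    derived_from: WebServer # inherits properties and requirements

topology_template:
  node_templates:
    frontend:
      type: HardenedWebServer
      requirements:
      - database: backend # composition: wired to another node in the graph

    backend:
      type: tosca.nodes.Root
```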
That said, in the future Turandot will support Helm charts in order to allow users to leverage
existing packaging efforts. The current design goal is to have Helm charts modeled as a single type
in TOSCA and have them packaged into the CSAR as artifacts, or referred to in external Helm chart
repositories.
Why is it called "Turandot"?
"Turandot" is the last opera by composer
Giacomo Puccini, likely inspired by Count Carlo
Gozzi's commedia dell'arte
play of the same name. Its aria Nessun Dorma is one of the most well-known of all arias. Puccini is also famous for his opera Tosca. See, everything is connected.
Turandot, the name of the protagonist of the opera, comes from Persian Turandokht, meaning
"daughter of Turan", Turan being an older name for
much of what we now call Central Asia. Turan in turn is named
for its legendary ruler, Tūr (meaning "brave"), a prince of the ancient
Shahnameh epic.
There is some disagreement over whether the final "t" should be pronounced or not, as it likely
wasn't pronounced by Puccini himself. All you should know is that if you pronounce it incorrectly
this software will not work well for you.