Deployments

A service is a running activity capable of responding to a specified set of requests on behalf of its clients.

Kumori’s Service Applications specify the "code" to be executed in order to obtain a service.

The Service Application is the program; the Service is the running program. To obtain a service out of a service application, the service application must be deployed on a Kumori platform.

To actually deploy a service application, we need to make sure that all configuration parameters of all its roles are set to concrete values. When this happens, we can obtain a JSON representation of the service application in which all roles are fully configured, which, in turn, implies that all components of those roles are fully configured.

Kumori Platform provides a specific element to represent a deployment action of a service application, the #Deployment, specified through its own manifest, defined as follows:

#Deployment: {
  name:      string          // name of this deployment
  meta:      {...}           // arbitrary information exposed to deployments linking to this one
  artifact:  #Artifact       // the artifact to deploy (a component or a service application)
  config:    #Configurable   // configuration for the artifact being deployed
  up:        string|*null    // optional reference to an existing deployment in the same cluster
}

We can see that a Deployment specification is, in essence, the same as the specification for a role within a service application.

Whereas in a role the up field is left unused, in a deployment it can be used to reference a previously existing deployment within the same cluster. The lifecycle of the up deployment then governs the lifecycle of the present deployment.

Notice that, as with roles, any kind of artifact can be referred to via the artifact field. This means a component can be deployed just as a service app can. This is accomplished by using the natural mapping from components to service applications.

The config field has the same meaning as for Components, Services and roles.
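
As an illustration, the following sketch shows what a deployment manifest could look like. The artifact reference and the concrete names and values are hypothetical; only the top-level fields come from the #Deployment definition above.

mydeployment: #Deployment & {
  name:     "webshop-front"
  meta:     {owner: "team-web"}     // arbitrary data exposed to linked deployments
  artifact: webshopapp              // a component or service application definition
  config: {
    parameter: {loglevel: "info"}   // plain data; the config subsections are described later
  }
  up: null                          // or a reference (e.g. the name) to an existing deployment
}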

How deployment specifications are processed

Two processes are applied to a deployment: the spread of the configuration, and the extraction of a solution.

Configuration spread

The configuration passed in a deployment is unified with the config specification for the artifact being deployed, exactly the same as for roles.

The config section of the configuration must be pure data, i.e., it must be representable as a JSON file.

If the artifact is a component, references to the component config fields are properly substituted within the rest of the component definition.

If the artifact is itself a service application, a recursive process ensures that the result of this unification is picked up by the references in the config fields of the service application's roles. Those, in turn, are unified with the config fields of their respective artifacts, in exactly the same way as for the deployment.

The end result of this spread, when all required configuration has been provided, is a concrete CUE data structure (i.e., JSON-representable) with all config for all components and builtins filled in.
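
A minimal CUE sketch of the spread, with hypothetical field names and values: the component's definition references its own config, and unifying it with a deployment's concrete config makes the whole artifact concrete. For a service application artifact, the same unification is applied recursively through the roles' config references.

webcomponent: {
  config: parameter: {
    image:    string
    replicas: int
  }
  // The rest of the definition refers to config fields; it becomes
  // concrete once the config does.
  spec: container: image: config.parameter.image
}

// The deployment's config is unified with the component's config,
// making spec.container.image concrete as well.
webdeployment: webcomponent & {
  config: parameter: {
    image:    "registry.example.com/web:1.0"
    replicas: 3
  }
}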

Solution extraction

Kumori Platform introduces the concept of a Solution, which is a collection of deployments and links between deployments. At this moment a solution must contain exactly one top deployment, and all other deployments must descend (via their up field) from that top deployment.

The second process applied to a deployment specification extracts such a solution, in which each deployment refers to a flat artifact: either a component, or a service whose roles are all implemented by components.

Currently, solutions cannot be specified directly; they can only be derived from deployments.
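
For instance, two deployments from which such a solution would be derived might look as follows (names are hypothetical, and the up value is assumed to be the name of the referenced deployment):

shop: #Deployment & {
  name:     "shop"
  artifact: shopapp
  up:       null             // the single top deployment of the solution
}

shopmetrics: #Deployment & {
  name:     "shop-metrics"
  artifact: metricscomponent
  up:       "shop"           // descends (via up) from the top deployment
}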

Referring to resources

Whereas concrete resource references MUST never be used in an artifact specification designed for portability and reuse (they are defined on a particular cluster), in a deployment manifest they should actually be used to refer to resources registered on the cluster where the deployment is to be launched.
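
A hedged sketch of this, assuming the resource subsection of config (described below) is where such references live; the resource names are hypothetical and only meaningful on a concrete cluster:

webdeployment: #Deployment & {
  config: resource: {
    // References to resources already registered in the target cluster.
    servercert: "webshop-tls-cert"
    dbvolume:   "webshop-data-volume"
  }
}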

The role of the meta field

As for roles, the meta field can contain arbitrary information that is made available to deployments that link to this deployment, whether within the same solution or from some other solution.

Expressing QoS conditions

In the previous version of the model, the deployment manifest could also carry with it a specification of the SLA. Even though such a specification is still allowed in this version, it is taken as informational only.

The config section of the deployment is in charge of supplying the data dealing with how to scale an artifact as a function of the load on the service.

In the simplest case (when deploying just a component) it is possible to simply state the number of instances the role should have.

When the artifact is a service app whose structure is visible/known (i.e., not a builtin), the detailed field makes it possible to specify scaling data to be consumed by each of the service app roles. This can be done recursively, making it possible to specify precisely the number of instances of each component role in the solution being deployed.
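
A hedged sketch of both cases, assuming the scaling data lives in the scale subsection of config; the nesting, the instances field and the role names below are illustrative:

// Simplest case: deploying a single component, just fix its instance count.
cachedeployment: #Deployment & {
  config: scale: instances: 3
}

// Service app case: the detailed field carries scaling data per role,
// recursively when a role is itself implemented by a service app.
shopdeployment: #Deployment & {
  config: scale: detailed: {
    front: instances: 2
    back:  instances: 5
  }
}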

Finally, through the qos field, it is possible to indicate that the service must prepare itself for a concrete load, committing to a maximum response time. The spread process mentioned earlier takes care of converting such a specification into scaling data for each of the roles involved, which is ultimately used within fields of the role (hsize) and component (sine, mainly) specifications.

To actually convert a qos deployment data field into further qos data fields (or detailed or hsize fields) down the line, a developer is constrained to the tools CUE makes available, including extra packages. Future versions of the model will include additional useful functions for manipulating qos specifications.

The config section in artifacts, roles and deployments, and setting up defaults for them

The config section of an artifact (service or component) represents those pieces of information that must be supplied by a deployer to fine-tune the behavior of the software to be run as a result of a deployment.

The config section must satisfy the following property:

Rule 1
When the config section is concrete (i.e., representable by a JSON object), the rest of the
artifact structure becomes concrete too.

That is, the rest of the artifact specification must contain either concrete values, or references to the config. Those references will be resolved when the config acquires concrete values.

Config of a role

A corollary of rule 1 above is that the config section of any role in a service artifact will become concrete when the config section of the service it is part of is itself concrete.

Besides this general rule, the config section of a role must also satisfy the following constraint:

Rule 2
The config section of a role MUST match the config section of the artifact implementing the role.
The match consists of providing ALL non-optional and non-CUE-defaulted fields in the artifact, and
only fields specified in the artifact's config.
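
A minimal CUE sketch of this matching rule, with hypothetical field names; the role provides every non-optional, non-CUE-defaulted field of the component's config, and nothing that is not declared there:

// Component config: one mandatory field, one CUE-defaulted, one optional.
webcomponent: config: parameter: {
  image:    string            // mandatory: neither optional nor defaulted
  loglevel: string | *"info"  // CUE-defaulted
  banner?:  string            // optional
}

// A role implemented by this component must at least set "image", and may
// only set fields declared in the component's config.
webrole: config: parameter: {
  image: "registry.example.com/web:1.0"
}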

Config of a deployment

A deployment is a specialized form of a role, where the config must be concrete data. As with a role, the deployment’s config must match the config of the artifact being deployed.

Default values

The config section contains four distinct subsections.

The scale subsection is fully structured, providing no freedom as to what it can contain. It makes sense, on occasion, to provide CUE-default values for it.

The resilience section is actually just one numeric field. In this case it also makes sense to provide a CUE-default value.

The resource section contains references to registered resources in the cluster where the service is going to be deployed. It makes NO sense whatsoever to provide default values for resources, as those "values" are just references to registered resources in a particular cluster, and they cannot be reused from cluster to cluster.

The parameter section contains just data, and it makes sense to provide default values for it, even at different levels (e.g., at the component or even the service level).
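
Putting the four subsections together, the config of a deployment could look like the following sketch; the concrete fields and values inside each subsection are hypothetical:

webdeployment: #Deployment & {
  config: {
    scale:      {detailed: {front: instances: 2}}   // fully structured scaling data
    resilience: 1                                    // a single numeric field (value illustrative)
    resource:   {servercert: "webshop-tls-cert"}     // references to cluster-registered resources
    parameter:  {loglevel: "debug"}                  // plain data for the artifact
  }
}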

Specifying default values

A field in the parameter section can be given a default value in one of two ways: using the CUE notation (| *value), or declaring the field as optional and, later on, in other parts of the artifact specification, checking whether the field has been provided and supplying a value in case it has not.

Either way makes it possible to unify a concrete configuration with the config without providing the defaulted fields (providing their values is optional).
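
Both mechanisms, sketched in CUE with hypothetical field names:

webcomponent: {
  config: parameter: {
    // 1) CUE-default notation: if the deployer provides nothing, "info" is used.
    loglevel: string | *"info"

    // 2) Optional field: other parts of the artifact's specification must check
    //    whether it was provided and supply a fallback value when it was not.
    banner?: string
  }
}

// Unifying with a concrete config that omits both fields remains valid:
// loglevel takes its default and banner simply stays absent.
webdeployment: webcomponent & {
  config: parameter: {}
}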

Order of consideration of defaults

Defaults at any level are made concrete at that level. That is, if a field has a default value in a service and one of its roles uses it as the value to pass to the role’s artifact, whatever value the field got at the service level (be it by direct assignment or by default) is passed to the role’s artifact.

Thus defaults declared in the service containing an artifact trump those declared in the artifact.
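
A sketch of this precedence, with hypothetical names: the service declares its own default for a parameter and passes the service-level value down to the role, so the component sees that value rather than its own default.

// The component declares a default for a parameter.
webcomponent: config: parameter: loglevel: string | *"info"

// The containing service declares a different default and passes the
// service-level value down to the role. Since defaults are made concrete
// at the level where they are declared, the component ends up with "debug"
// unless the deployer explicitly sets loglevel.
webapp: {
  config: parameter: loglevel: string | *"debug"
  roles: front: {
    artifact: webcomponent
    config: parameter: loglevel: webapp.config.parameter.loglevel
  }
}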