1 Introduction

Today’s software systems are distributed systems: sets of agents collaborating among themselves to produce a service that is transparently accessed by its clients, be they human users or other software systems.

Writing distributed systems is complex, as many issues related to coordination and the maintenance of a globally consistent state must be addressed.

Although nowadays many supporting middleware platforms facilitate much of the work that must be carried out, the developer must still take care of properly architecting the system being built so that the desired properties (typically high availability and scalability) can be achieved.

All too often, a crucial aspect of turning a design into a working product is overlooked, jeopardizing the productization of an otherwise perfectly sound design and implementation: the deployment configuration and its life-cycle maintenance.

1.1 Current standards

While platforms like Docker Compose and Kubernetes offer native models for deploying microservices, these are inherently technology-specific. They lack a higher-level abstraction that captures a complete service, including its communication topology and the interdependencies between configuration parameters, and, furthermore, they do not offer a way to reuse and compose services as parts of larger, more complex services.

2 The Kumori Service Model

The Kumori Service Model is a formal framework designed to describe, deploy, and manage cloud-native applications on the KClusters managed by the Axebow Platform. It provides a structured way to define modular, reusable services made of containerized microservices, abstracting away much of the complexity of Kubernetes while retaining a large degree of flexibility.

By formalizing service definitions, it enables developers to focus on business logic rather than infrastructure, while operators retain control over security, scalability, and resilience thanks to the information provided through the model.

Chapter @ch-overview provides a high-level overview of the Kumori Service Model.

2.1 Goals

The goal of these documents is to provide the foundations and working knowledge that enable users to take advantage of the capabilities of the Axebow platform and of the expressive power of the Service Model it is based on.

3 Overview of the Kumori Service Model

3.1 Artifacts

The Kumori Service Model is mainly concerned with defining those aspects of the implementation of a service that are essential to its deployment, whether in isolation, ready to be linked as a client or a server of other services, or as part of a larger service, where its relationship with the other sub-services of that larger service must be established.

We refer to that essential specification of the implementation of a service as an Artifact.

Artifacts are the fundamental units in the Kumori Service Model; they describe a service implementation in a way that allows Axebow both to manage its deployments and to integrate it within larger service specifications. Artifacts are designed to be reusable and composable, enabling developers to build scalable and maintainable applications by assembling various artifacts together.

The Kumori Service Model distinguishes two types of artifacts:

Component Artifact
A Component artifact is a deployable unit that encapsulates the code of a service (within a container image), its configurability, and its service endpoints.
Service Artifact
A Service artifact is a collection of interconnected artifacts. Service artifacts are the primary mechanism for composing complex application topologies, enabling abstraction and reuse.

Artifact definition is split between its interface, which declares the artifact’s public contract (inputs, outputs, configuration options), and its implementation, which details how the artifact is actually defined (container images, resource requirements, deployment strategies, interconnections).

The common interface exposed by any artifact includes the following elements (a hypothetical sketch follows the list):

Channels
Represent communication paths to/from a microservice.

We distinguish three kinds of channels.

Server Channel
A server channel defines a port where the software binds and waits for connections from other agents. A server channel represents specific functionality provided by the artifact (e.g. an API endpoint).
Client Channel
Represents a dependency on another service. The declaration can be used by the platform to set up communication paths (links) at deployment time, without adding extra configuration to the artifact software, which simply resolves the channel name to reach the dependency. Client channels are the mechanism for dependency injection in the platform.
Duplex Channel
Allows clients of the channel to distinguish among the various instances of the service exposing the channel. A duplex channel is typically used as a combination of a client and a server channel, ready to support stateful service coordination.
Configuration
The set of knobs that can be set at deployment time.
Configuration Parameters
Data configuration that must be supplied when the artifact is actually deployed.
Configuration Resources
Elements understood by the platform, which handles them in specific ways (secrets, volumes, domains…); they typically require prior registration with the platform. Registration associates an id with the element, and it is this id that must be specified in a configured resource.
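
To make these notions concrete, the following JSON-like sketch shows the kind of information an artifact interface conveys. It is purely illustrative: the srv name echoes the model’s own term for the channel structure, but every other field name and value is an assumption made for this example and does not reproduce the actual Kumori syntax.

{
  "srv": {                                                        # <1>
    "server": { "api": { "port": 8080, "protocol": "http" } },
    "client": { "database": { "protocol": "tcp" } },
    "duplex": { "peers": { "protocol": "tcp" } }
  },
  "config": {
    "parameter": { "loglevel": "string", "replicas": "number" },  # <2>
    "resource": { "tlscert": "certificate", "data": "volume" }    # <3>
  }
}
  1. Channel declarations: a server channel exposing functionality, a client channel declaring a dependency, and a duplex channel for instance-aware coordination.
  2. Configuration parameters to be supplied at deployment time.
  3. Configuration resources handled by the platform (secrets, volumes, domains…), referenced at deployment time through their registered ids.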

3.1.1 Component Artifact

A component artifact must specify at least the following items (a hypothetical sketch follows the list):

  • The software to be run
    Currently in the form of one or more Docker images. When images reside in private registries, the needed credentials must also be supplied. On deployment, the Docker images become containers sharing the same network stack.
  • The channels it supports
    They show the expected functionality and the dependencies of the software.
  • The configuration schema
    Parameters needed to properly configure a deployment of the component’s software. In addition, some configuration must be controlled by the platform where the artifact is finally deployed. Such configuration should be registered on the platform itself, and its registration names should be provided so that, on deployment, the platform can supply the objects referred to. Examples are secrets, volumes, and controlled resources such as domains and ports.
  • The configuration mappings
    The declared configuration must ultimately be mapped to entities the software can reach. This could have been done through a platform API that applications would be required to use. Instead, to maximize compatibility and minimize lock-in, we provide a mechanism for mapping the supplied configuration to elements software already understands, such as environment variables and files. This approach also makes it easy to include legacy software as components in Kumori.
  • The vertical scalability parameters
    The software needs CPU/GPU/memory resources in order to run. These should be declared, and their amounts derived from the rest of the configuration. Ideally, in production, a component should have, for a given quality of service, a vertical resource configuration.
  • Observability setup
    In the form of probes that detect readiness and liveness, so that the platform can inform users and even take measures to correct anomalous behavior. Additional setup may include log capture and metrics collection endpoints.
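
The following JSON-like sketch gathers these items for a hypothetical component. It is illustrative only: the field names, image reference, paths, resource amounts, and probe endpoints are assumptions made for this example and do not reproduce the actual Kumori manifest syntax.

{
  "code": {
    "frontend": { "image": "registry.example.com/acme/frontend:1.4.2" }   # <1>
  },
  "srv": {
    "server": { "http": { "port": 8080 } },
    "client": { "backend": {} }
  },
  "config": {
    "parameter": { "loglevel": "string" },
    "resource": { "tlscert": "certificate" }
  },
  "mapping": {                                                            # <2>
    "env": { "LOG_LEVEL": "loglevel" },
    "file": { "/etc/app/tls.crt": "tlscert" }
  },
  "size": { "cpu": "500m", "memory": "256Mi" },                           # <3>
  "probes": {
    "liveness": { "http": { "path": "/healthz", "port": 8080 } },
    "readiness": { "http": { "path": "/ready", "port": 8080 } }
  }
}
  1. The software to run: one or more container images; images in private registries also require credentials.
  2. Configuration mappings: how the declared configuration reaches the software as environment variables and files.
  3. Vertical scalability: the resources each instance of the component needs.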

3.1.2 Service Artifact

A service artifact definition should include the following aspects:

  • Composition
    Specifies which collection of other artifacts composes the service artifact. Each artifact included in a service artifact represents its own deployment and is therefore referred to as a role of the service artifact.
  • Configuration Spread
    How the configuration of the service artifact must be transformed into configuration for the composed artifacts in the service.
  • Dependency injection
    By means of links and connectors, specifies how client channels of artifacts in the service contact server channels of other artifacts in the service, and also how the channels declared at the service artifact level connect to the channels of the composed artifacts.

We often refer to a service artifact as a topology, since its dependency injection is represented as a graph that provides a very good picture of the deployment architecture of the whole service.
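
The JSON-like sketch below illustrates such a topology for a hypothetical service composed of two roles. Again, every name and the overall shape are assumptions made for this example, not the actual Kumori syntax.

{
  "role": {                                                   # <1>
    "frontend": { "artifact": "acme.com/shop/frontend" },
    "backend": { "artifact": "acme.com/shop/backend" }
  },
  "srv": {
    "server": { "web": { "port": 443 } }                      # <2>
  },
  "link": [                                                   # <3>
    { "from": "self.web", "to": "frontend.http" },
    { "from": "frontend.backend", "to": "backend.api" }
  ],
  "spread": {                                                 # <4>
    "loglevel": ["frontend.loglevel", "backend.loglevel"]
  }
}
  1. Composition: each included artifact becomes a role, that is, its own deployment within the service.
  2. Channels exposed at the service artifact level.
  3. Dependency injection: links connect client channels to server channels, including the service-level ones.
  4. Configuration spread: how the service configuration is transformed into configuration for each role.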

3.1.3 Deployments

A deployment specification consists of a reference to the artifact being deployed and the set of configuration values the deployment carries.

If the artifact being deployed is a component, the configuration is mapped to environment variables and files. If the artifact being deployed is a service, the configuration is spread to all its roles. Each role, in turn, behaves as a deployment of the artifact it refers to, and the spread rules of the service derive the configuration of the role’s artifact.

Deployments are thus essentially data, plus a reference to what must be deployed.
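
A deployment specification could therefore look like the following JSON-like sketch. It is illustrative only: the field names and values are assumptions and do not reproduce the actual deployment manifest syntax.

{
  "artifact": "acme.com/shop/frontend",            # <1>
  "config": {
    "parameter": { "loglevel": "debug" },          # <2>
    "resource": { "tlscert": "shop-cert" }         # <3>
  }
}
  1. Reference to the artifact (component or service) being deployed.
  2. Values for the configuration parameters declared by the artifact.
  3. Registered ids of the platform-managed resources the artifact requires.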

3.1.4 Modules, Versioning and Registries

Artifacts are versioned. The unit of versioning is a Kumori module. A module can carry with it many different related artifacts and even deployment specifications. All artifacts in the same module share the module’s version.

Modules are designed to be accessed through module registries. Kumori implements a simple mechanism to access multiple registries, with both public and private access mechanisms.

Modules can take dependencies on each other. Dependencies are strongly versioned, following a semantic versioning approach.

4 Developing artifacts

Developing for the Axebow platform mainly involves writing manifests for those artifacts (components and service applications) and distributing them within versioned modules so that they can be used on any cluster, either on their own or as part of other service applications.

In its first implementation, the Kumori Service Model used CUE (https://cuelang.org/) as the language to define those manifests. CUE is fundamentally a data validation and configuration language, often described as a superset of JSON, offering powerful features well suited to defining structured data with constraints and schemas. It allows defining schemas and data, and then validating and exporting the data in JSON format.

While CUE provided a robust framework for defining and validating artifact manifests, we have identified some limitations in terms of ecosystem support and developer familiarity. Therefore, we are currently revising the implementation language for artifact definitions, exploring alternatives that may offer better integration with existing tools and a smoother learning curve for developers.

The set of concepts introduced by Kumori is quite small, and their usage fairly intuitive. This led us to define a Domain Specific Language that can be used to build text-based authoring tools to help developers create the manifests, instead of relying on a generic format, which is always difficult to fit to the needs of development.

The result is the Kumori DSL, which is used to describe how Kumori applications are specified for their deployment on Axebow.

4.1 Modules

Artifacts are always defined within modules. Modules encapsulate and version many artifacts that can be employed in the deployment of Kumori services.

Modules are the units of distribution of Kumori artifacts. Thus all artifacts within a module share the same version: the version of the module.

Modules are also named. Module names follow the simple scheme <domain>/<module name>/@<version>, where <module name> follows a hierarchical structure and is scoped by the <domain> name. For example, kumori.systems/builtins/@1.3.0 identifies version 1.3.0 of the builtins module published under the kumori.systems domain.

Modules also declare dependencies on other modules. We will see later how dependencies are handled, and how they are found and made available to a module.

Modules are typically published within module registries. Registries accepting modules for their distribution must be able to ensure that publishers control the <domain> part of the module id.

Registries can be public or private, and dependency resolution can be configured so that modules can be found by their ID.

Kumori publishes its own registry, referred to as the Axebow Marketplace. Access to the marketplace is open to anyone, but it is subject to verification of control over the namespace (<domain>).

4.1.1 Module structure

A module is represented by a JSON file, kumori.mod.json (see Listing 1). A folder containing such a file is a Kumori module, and there is considerable freedom in how to structure its contents within subfolders.

Listing 1: Example module
{
  "spec": "kumori/module/v1",    # <1>
  "kumori": "0.0.1",
  "version": "1.3.0",
  "module": "kumori.systems/builtins",
  "requires": []
}
  1. Spec for the actual module format
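
Listing 1 declares no dependencies. The variant below sketches how a dependency on another module might be expressed through the requires field. The exact shape of a requires entry shown here is an assumption for illustration, as are the acme.com/shop module name and its version; the referenced module (kumori.systems/builtins) comes from Listing 1, and the version range follows the semantic versioning approach described above.

{
  "spec": "kumori/module/v1",
  "kumori": "0.0.1",
  "version": "2.0.0",
  "module": "acme.com/shop",
  "requires": [
    { "module": "kumori.systems/builtins", "version": "1.3.x" }   # <1>
  ]
}
  1. Dependencies are strongly versioned, following a semantic versioning approach.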

4.2 Interface and implementation

As explained in the introduction, every artifact defines a deployable service, exposing the aspects relevant to its deployment and reusability: its srv (channel) structure and its configuration structure.

The DSL representation of an artifact requires these aspects to be exposed as its interface. Furthermore, interfaces must reside in their own source files, with extension h.kumori.

Artifact implementations contain the details of an artifact, and their files use the extension .kumori.
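
As an illustration, a hypothetical module defining one component and one service artifact might be laid out as follows. Apart from kumori.mod.json, the file and folder names are made up for this example, since the model leaves the internal structure of a module largely free.

shop/
  kumori.mod.json        # module manifest (name, version, dependencies)
  frontend.h.kumori      # component interface: channels and configuration contract
  frontend.kumori        # component implementation: images, mappings, resources
  shop.h.kumori          # service artifact interface
  shop.kumori            # service artifact implementation: roles, links, spread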

This separation enhances modularity and reusability, allowing developers to focus on defining the artifact’s deployability and composability aspects. It also makes it possible to publish the interface in Axebow’s Marketplace, accessible to anyone, while hiding the actual “implementation” details. Implementation details can, however, be accessed by Axebow so that it can properly deploy the artifact. The Marketplace can thus help artifact authors monetize their artifacts without exposing their implementation or the means of access to their code, which remain restricted and proprietary.

In the following sections we further explore the Kumori DSL and the tools available to handle it.