Kumori Platform
Kumori Platform is a Platform as a Service built on top of the open source Kumori Kubernetes distribution. As a platform, it simplifies and streamlines many aspects of building, deploying and managing a service application. The platform automatically manages the application's scalability based on a series of indications provided by the service integrator/developer, chiefly the SLA, expressed as an expected response time within a range of load.
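To make the idea of SLA-driven scaling concrete, the following is a minimal sketch of a proportional scaling rule, where the replica count grows or shrinks with the ratio between the observed response time and the SLA target. This is purely illustrative: the function name, parameters and policy are assumptions, not Kumori's actual scaling algorithm.

```python
# Illustrative sketch (not Kumori's actual algorithm): a proportional
# scaling rule driven by an SLA expressed as a target response time.
# All names and parameters here are hypothetical.
import math

def desired_replicas(current: int, observed_ms: float, target_ms: float,
                     min_replicas: int = 1, max_replicas: int = 10) -> int:
    """Scale the replica count in proportion to how far the observed
    response time deviates from the SLA target, clamped to a range."""
    ratio = observed_ms / target_ms
    return max(min_replicas, min(max_replicas, math.ceil(current * ratio)))

print(desired_replicas(current=3, observed_ms=400, target_ms=200))  # -> 6 (scale up)
print(desired_replicas(current=6, observed_ms=100, target_ms=200))  # -> 3 (scale down)
```

A real autoscaler would of course smooth these decisions over time to avoid oscillation; the point here is only that the SLA turns observed load into a scaling signal.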
Kumori Platform introduces a series of basic, easy-to-understand concepts, which we present in what follows. These concepts are supported by a set of mechanisms and tools that enable developers and integrators to easily put non-trivial services into production.
1. The Basics
Modern services built for the cloud are based on the concept of separating functionalities into many "independent" units referred to as microservices. The intent is twofold. On the one hand, we want to decouple functionalities: this is the well-established approach of separating functional concerns.
On the other hand, there is the realization that different tasks may need different amounts of resources and, thus, different scaling strategies. It follows that it is best to separate functions into their own autonomous microservices, so that we also separate operational concerns.
In Kumori Platform, microservices are ultimately implemented as Components. A Component is just a program that can be run autonomously within its own environment. When run, such a program becomes a microservice. Moreover, in a microservices environment, running a Component may actually result in the execution of several autonomous instances of that program. This makes sense when we need the aggregated power of those instances to carry out a heavy computational task.
Another scenario where such multiplicity of executions makes sense is when we need the aggregated storage/bandwidth resources of such a collection of instances to satisfy the storage demand made by other parts of a service.
Yet another scenario where multiple instances of a program are needed is when we must tolerate failures while providing continuity of service. Multiple instances, when well thought out and properly placed, make it possible for an instance of a component to take over the job of another, failed instance without disruption to the service.
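The failover scenario above can be sketched in a few lines: requests are routed to the first healthy instance, so a surviving instance transparently takes over when another fails. The instance data and routing policy are hypothetical, chosen only to illustrate the idea.

```python
# Illustrative sketch of failover across multiple instances: route each
# request to the first healthy instance. The addresses and health data
# below are hypothetical.
def route_request(instances: list[dict]) -> str:
    """Return the address of the first healthy instance, or raise."""
    for inst in instances:
        if inst["healthy"]:
            return inst["address"]
    raise RuntimeError("no healthy instance available")

instances = [
    {"address": "10.0.0.5:8080", "healthy": False},  # failed instance
    {"address": "10.0.0.6:8080", "healthy": True},   # takes over its work
]
print(route_request(instances))  # -> 10.0.0.6:8080
```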
1.1. Services, Service Applications, Artifacts, Deployments
Services are thus formed by running multiple microservices within the constraints of a service deployment architecture, which describes the intercommunication patterns among those microservices.
When a service is first activated, one of the problems that must be solved is ensuring that each microservice can find the microservices with which it needs to communicate. This is commonly referred to as service discovery.
If a service were static, i.e., its configuration never changed (no increase or decrease in the number of instances of a microservice) and no failures ever required launching new instances, then we could simply provide each instance of each microservice with the IP addresses of the instances it needs to communicate with.
The above scenario is, however, unrealistic in current cloud computing conditions, where loads are dynamic and services must adapt dynamically to all sorts of changes.
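The contrast between hard-coded addresses and service discovery can be illustrated with a toy in-memory registry: instances are looked up by a stable logical name, so the set of addresses behind that name can change as instances come and go. This sketch is a generic illustration of the concept; the class, names and addresses are assumptions, not part of Kumori Platform's API.

```python
# Hypothetical illustration of service discovery: instead of hard-coding
# peer IP addresses, instances resolve a stable logical service name,
# and the registry changes underneath them as instances come and go.
class ServiceRegistry:
    def __init__(self) -> None:
        self._endpoints: dict[str, list[str]] = {}

    def register(self, service: str, address: str) -> None:
        self._endpoints.setdefault(service, []).append(address)

    def deregister(self, service: str, address: str) -> None:
        self._endpoints.get(service, []).remove(address)

    def resolve(self, service: str) -> list[str]:
        return list(self._endpoints.get(service, []))

registry = ServiceRegistry()
registry.register("backend", "10.0.0.5:8080")
registry.register("backend", "10.0.0.6:8080")
print(registry.resolve("backend"))            # both instances are visible
registry.deregister("backend", "10.0.0.5:8080")
print(registry.resolve("backend"))            # only the surviving instance
```

In practice, platforms such as Kubernetes implement this with DNS names and virtual service addresses rather than an application-level registry, but the principle of resolving by name at request time is the same.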
To facilitate managing changes in the structure of a running service, Kumori Platform introduces the concept of a Service Application. In a nutshell, a service application brings together the set of components that will form a service, together with the interconnections that must be established so that their instances can discover each other.
In addition, a service application also declares the set of configuration parameters it expects, as well as the set of resources it needs to be provided with.
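The kinds of information a service application brings together can be sketched as a plain data structure: its roles (components), the connections between them, the configuration parameters it expects, and the resources it needs. The concrete Kumori specification syntax is detailed later in this document; every field and name below is an illustrative assumption, not a real Kumori identifier.

```python
# Hypothetical sketch of what a service application declares. The actual
# Kumori specification syntax is covered later in this document; all
# field names below are illustrative assumptions.
service_application = {
    "roles": {                         # the components forming the service
        "frontend": {"component": "web", "instances": 2},
        "backend":  {"component": "api", "instances": 3},
    },
    "connectors": [                    # how instances discover each other
        {"from": "frontend", "to": "backend", "channel": "http"},
    ],
    "configuration": ["log_level", "feature_flags"],  # expected parameters
    "resources": ["tls_certificate", "persistent_volume"],  # required resources
}

# A deployment must supply concrete values for the declared parameters;
# here one expected parameter is deliberately left out.
deployment_config = {"log_level": "info"}
missing = [p for p in service_application["configuration"]
           if p not in deployment_config]
print(missing)  # -> ['feature_flags']
```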
In fact, a Component can be seen as a particular case of a Service Application with just one role, implemented by that single Component. This will become apparent when the specification structure for both is detailed later in this document. Whenever we do not need to distinguish between a Service Application and a Component, we will refer to either of them as an Artifact.
Finally, services are the result of running deployment configurations of artifacts. Kumori Platform provides very simple mechanisms for service discovery, allowing all deployed instances of microservices to access other microservice instances deployed within the same service (and in different deployments too, through the mechanism of dynamic deployment linking).
Kumori Platform brings these important functionalities to the table:
- A service model and formalism to express deployable services on the platform.
- A Kubernetes distribution, composed of open source modules, tested to work harmoniously.
- A cluster management framework, allowing the dynamic reconfiguration of a cluster, including the ability to monitor the cluster and dynamically define various alarm conditions, enabling cluster operations to proceed effortlessly and smoothly.
- A service management framework, permitting service owners to manage their deployments and their life cycle, including updates, rollbacks, monitoring and scaling decisions.
- An automation framework, capable of carrying out scaling decisions and automating adaptation to changing loads.
In this document, we explain Kumori's service model, the formalism required to define a Kumori Kubernetes service, and how such a definition is transformed into a running cloud service.