for Stateful Applications
ROBIN Solves These Challenges by Providing
- Hyper-converged Kubernetes with native integration of the entire infrastructure stack
- Simple app-store experience for any app, anywhere
- Built-in software-defined network that provides persistent IPs
- Built-in software-defined block storage with enterprise-grade features
- Role-based access control to manage user access and consumption quotas
- Dynamic scaling for distributed data-heavy applications
How Does This Work?
To understand how ROBIN hyper-converged Kubernetes works, let us, for simplicity, consider application deployment as a three-step process.
When an application deployment request is made through ROBIN CLI or GUI, the Plan Manager service accepts the request. The Plan Manager considers application requirements, decides how many pods are needed, takes into account affinity and anti-affinity policies, and submits a detailed plan to the Overlay Scheduler service.
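As an illustration, the plan the Plan Manager submits can be thought of as a simple data structure that expands an application spec into per-pod requirements. The `build_plan` helper and field names below are hypothetical, not ROBIN's actual API:

```python
def build_plan(app_spec):
    """Expand an application spec into a deployment plan (illustrative sketch).

    Each role in the spec becomes one pod per replica, carrying its resource
    requirements and anti-affinity policies forward so the Overlay Scheduler
    can honor them when mapping pods to nodes.
    """
    plan = {"app": app_spec["name"], "pods": []}
    for role, cfg in app_spec["roles"].items():
        for i in range(cfg["replicas"]):
            plan["pods"].append({
                "name": f"{app_spec['name']}-{role}-{i}",
                "cpu": cfg["cpu"],
                "memory": cfg["memory"],
                # pods listing each other here should land on different nodes
                "anti_affinity": cfg.get("anti_affinity", []),
            })
    return plan
```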
The Overlay Scheduler service is aware of all available resources in the cluster. Based on resource availability on each node and on the resource requirements and affinity policies defined for pods in the plan, it maps pods to nodes. From this mapping, the Overlay Scheduler creates a manifest that enforces strict node affinity for each pod. The default Kubernetes Scheduler then uses the manifest to place pods on the nodes it specifies. In this way, ROBIN controls pod creation and placement without disrupting the K8s workflow, because the default K8s scheduler still schedules the pods.
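The strict node affinity in such a manifest corresponds to a standard Kubernetes construct. A minimal sketch of what the Overlay Scheduler might emit for one pod (the `pin_pod_to_node` helper is hypothetical; the affinity fields are the standard Kubernetes pod spec):

```python
def pin_pod_to_node(pod_name, node_name):
    """Build a pod manifest that the default K8s scheduler can only satisfy
    by placing the pod on the chosen node (illustrative sketch)."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": pod_name},
        "spec": {
            "affinity": {
                "nodeAffinity": {
                    # "required" makes the affinity strict: no other node qualifies
                    "requiredDuringSchedulingIgnoredDuringExecution": {
                        "nodeSelectorTerms": [{
                            "matchExpressions": [{
                                "key": "kubernetes.io/hostname",
                                "operator": "In",
                                "values": [node_name],
                            }],
                        }],
                    },
                },
            },
        },
    }
```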
The K8s Scheduler notifies the API Server that it has scheduled pods for creation. The API Server notifies the Kubelets on the corresponding worker nodes, which create the pods.
In addition to scheduling pod placement, the ROBIN Overlay Scheduler service also schedules the assignment of Persistent Volumes to pods, based on the data-locality and affinity policies defined for the application. The Storage Management service takes the Persistent Volume allocation information and relays it to the Storage Coordinator service running inside the ROBIN Agent on the worker nodes where the relevant disks are located.
On the worker node, the Storage Coordinator service uses the K8s CSI interface to interact with ROBIN's built-in storage and create a Persistent Volume. It also creates a Persistent Volume Claim, which the associated pod uses to attach the Persistent Volume to itself.
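A claim that pre-binds to an already-created Persistent Volume can be sketched as follows; the `make_pvc` helper and the `robin` storage class name are assumptions for illustration, while the claim fields are the standard Kubernetes PVC spec:

```python
def make_pvc(claim_name, volume_name, size):
    """Build a PersistentVolumeClaim bound to a specific, pre-created
    Persistent Volume by name (illustrative sketch)."""
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": claim_name},
        "spec": {
            "accessModes": ["ReadWriteOnce"],
            "storageClassName": "robin",  # assumed name of the CSI-backed class
            "volumeName": volume_name,    # pre-bind the claim to the created PV
            "resources": {"requests": {"storage": size}},
        },
    }
```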
The Overlay Scheduler passes the range of available IP addresses to the IP Address Management (IPAM) service running inside the ROBIN Agent.
The IPAM service interacts with the software-defined network (Open vSwitch) to assign IPs to the pods running on the worker node. The IPAM service also relays the IP-to-pod bindings to the ROBIN Master, so that if the node dies and the pods are recreated on a new node, they retain the same IP addresses.
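The sticky-IP behavior can be sketched as a small allocator that records pod-to-IP bindings so a recreated pod gets its previous address back. The `StickyIpam` class and its methods are hypothetical, not ROBIN's actual IPAM API:

```python
import ipaddress

class StickyIpam:
    """Toy IP allocator that keeps pod-to-IP bindings so a restarted or
    relocated pod gets its previous address back (illustrative sketch)."""

    def __init__(self, cidr):
        # All usable host addresses in the range handed down by the scheduler
        self.free = [str(ip) for ip in ipaddress.ip_network(cidr).hosts()]
        self.bindings = {}  # pod name -> IP; relayed to the ROBIN Master

    def assign(self, pod):
        if pod in self.bindings:   # pod restarted or moved: reuse its IP
            return self.bindings[pod]
        ip = self.free.pop(0)      # otherwise hand out the next free IP
        self.bindings[pod] = ip
        return ip
```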
ROBIN extends Kubernetes networking with both Calico-based and Open vSwitch-based CNI drivers. This offers the flexibility of either using overlay networks to create L3 subnets that span multiple datacenters or cloud environments, or using bridged networking to get wire-speed network access for high-performance applications. In both modes, ROBIN enhances the CNI driver to retain a pod's IP address when it is restarted or moved from one host to another, which provides greater flexibility during scaling, migration, and high availability. Many Big Data and database applications predate both Docker and Kubernetes and make strong assumptions about how network and storage persistence is preserved across pod restarts. ROBIN's handling of both network and storage ensures that these applications function correctly when running on a Kubernetes cluster.