Robin.io’s new scalable and agile Kubernetes HCI platform aims to take the pain out of deploying and managing complex data-intensive systems in a containerized environment.
Robin.io recently introduced its Robin Platform for supporting containerized applications, a unique approach in an industry rife with container-based options. The platform promises to address many of the deployment and lifecycle-management challenges that come with implementing complex data-intensive systems in a Kubernetes environment.
By applying the principles of hyper-convergence to Kubernetes, Robin takes advantage of the best of both technologies to provide an agile and scalable platform to meet the demands of today’s workloads and services. Let’s explore in detail how the new system works and what it promises to deliver.
Kubernetes is an open-source container orchestration system that automates lifecycle operations such as deployment and scaling. It offers a portable and extensible platform that facilitates both automated and declarative configurations, while providing a container-centric management environment for orchestrating compute, network and storage resources to support varying workloads.
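A declarative configuration is simply a manifest describing the desired state of a workload; Kubernetes continually reconciles the cluster toward that state. A minimal sketch of a Deployment manifest makes the idea concrete (all names and values here are illustrative):

```yaml
# Declarative manifest: the operator states *what* should run,
# and Kubernetes works out *how* to make the cluster match it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                  # illustrative name
spec:
  replicas: 3                # desired state: three copies, always
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        resources:
          requests:
            cpu: "250m"      # compute is declared alongside the workload
            memory: "128Mi"
```

Applying this with `kubectl apply -f deployment.yaml` hands scheduling, restarts and scaling over to the orchestrator; if a node fails, Kubernetes recreates the missing replicas elsewhere without operator intervention.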
Based on technologies developed by Google, Kubernetes has its origins in microservices and stateless applications, running production workloads at scale in massive data centers.
Unfortunately, stateful applications came as an afterthought to Kubernetes, and they have been retrofitted into the environment. For this reason, it can still be difficult to deploy and manage certain workloads in a Kubernetes environment. For example, most database systems require a high degree of state, storage and network management, which are difficult to achieve with Kubernetes alone.
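To illustrate the extra machinery stateful workloads require: a database typically has to run as a StatefulSet so that each replica keeps a stable identity and its own persistent volume, neither of which a plain Deployment provides. A minimal sketch, with illustrative names and sizes:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db                        # illustrative
spec:
  serviceName: db                 # a headless Service gives each pod a stable DNS name
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: postgres
        image: postgres:15
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:           # each replica gets its own PersistentVolumeClaim
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```

Even with this, the operator must still arrange storage provisioning, data locality and network identity underneath the StatefulSet, which is precisely the gap a platform like Robin aims to close.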
Hyper-convergence can help address these challenges by eliminating many of the complexities that come with integrating infrastructure components. A hyper-converged infrastructure (HCI) provides a software-defined platform that consolidates compute, storage and, to a growing degree, network resources into a tightly integrated system that can be implemented on off-the-shelf x86 hardware.
An HCI system is made up of a cluster of compute and storage nodes that serve as a foundation for the software-driven architecture. The HCI system consolidates the hardware components from across the cluster into shared resource pools accessible by any of the cluster’s workloads. This approach can help streamline deployment, simplify management and better use resources, while making it easier to scale the platform by simply adding nodes.
The Robin Platform utilizes Kubernetes and HCI technologies to create a software-defined application orchestration framework that combines Kubernetes’ agility with HCI’s simplicity. The platform natively integrates the components from both systems into a unified approach with Kubernetes at its core. Because the containerized applications share resources and data, performance is more predictable and resources are better utilized. At the same time, applications remain unaware of the underlying infrastructure, allowing quick and easy deployments.
Robin HCI Kubernetes features
The Robin platform is a software-only offering that can run in private data centers or public clouds, such as AWS, Google Cloud Platform and Microsoft Azure. Robin uses container technology to consolidate applications and better use resources, while providing runtime isolation for each container. The platform extends Kubernetes to support data-intensive workloads, such as AI, machine learning, database operations, indexing and search.
By integrating Kubernetes and HCI technologies, Robin can offer a number of features that simplify lifecycle-management operations. For example, the hyper-converged platform provides one-click cluster and application deployment, along with snapshot and cloning capabilities. The platform also provides dynamic quality of service for all its resources, with bidirectional insight for troubleshooting I/O issues.
In addition, Robin offers a software-defined storage mechanism that can be used with commodity DAS to create shared storage resource pools. Most importantly, the storage services can support stateful applications, such as relational database management systems, while offering data locality constraints to ensure the performance of data-heavy applications.
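In Kubernetes terms, a software-defined storage layer like this is typically surfaced through a StorageClass that dynamically provisions volumes from the pooled DAS. The following is a hypothetical sketch of that pattern; the provisioner identifier and parameters are assumptions for illustration, not Robin's documented values:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: pooled-das                       # illustrative name
provisioner: example.com/robin-storage   # hypothetical provisioner identifier
parameters:
  replication: "2"                       # hypothetical vendor parameter
---
# Applications then request storage abstractly via a claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  storageClassName: pooled-das
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 20Gi
```

The claim keeps the application unaware of where the underlying disks live; the storage layer carves a volume out of the shared pool on demand.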
The Kubernetes platform can also assign persistent IP addresses to resources to further support stateful applications, as well as provide network routing capabilities to facilitate communications between pods and with external services. In addition, Robin provides mechanisms for delivering high availability, such as application-aware failover and health monitoring across the infrastructure, including containers and applications.
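For comparison, vanilla Kubernetes approximates stable network identity with a headless Service, which gives each pod a stable DNS name rather than a persistent IP; Robin's persistent-IP support goes a step further. The standard headless pattern looks like this (names are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: db                # matches a StatefulSet's serviceName
spec:
  clusterIP: None         # headless: no virtual IP; DNS resolves to pod IPs
  selector:
    app: db
  ports:
  - port: 5432
```

Each replica is then reachable at a predictable name such as `db-0.db.<namespace>.svc.cluster.local`, even though its pod IP may change across restarts.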
The components of Robin’s Kubernetes platform can generally be divided into those specific to Kubernetes and those that are part of the Robin application management layer. The platform implements the Robin components on top of the Kubernetes components to create an integrated infrastructure for delivering a hyper-converged container system.
Kubernetes provides both master node and worker node components. The master node components include the API server that exposes the Kubernetes API, the scheduler that allocates nodes to pods, the controller manager that runs the controllers on the Kubernetes cluster and the etcd store for storing cluster data.
The worker node components include the kubelet agent for ensuring that the pods are running correctly, kube-proxy for handling network routing and the container runtime that executes the containers. Kubernetes components also include the Container Storage Interface, Container Network Interface, Container Runtime Interface, Persistent Volumes and DaemonSets, which ensure that all nodes or subsets of nodes can run copies of specific pods.
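A DaemonSet, for instance, declares that every node, or a labeled subset of nodes, should run one copy of a given pod, which is the standard pattern for per-node agents. A minimal sketch with illustrative names:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent              # illustrative
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      nodeSelector:
        role: storage           # optional: restrict to a labeled subset of nodes
      containers:
      - name: agent
        image: example.com/node-agent:1.0   # hypothetical image
```

When a new node joins the cluster (or gains the matching label), Kubernetes automatically schedules a copy of the agent pod onto it, which is why DaemonSets suit infrastructure services that must be present on every node.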
The Robin components add software-defined block storage, software-defined networking and several components for managing the infrastructure, such as DaemonSets, controllers and switchers. For example, the Robin Master DaemonSet manages the cluster topology and runs the logic to place application pods on nodes, and the Robin Agent DaemonSet executes Robin Master service tasks on the nodes.
Together, the Robin components make up the application management layer that orchestrates the infrastructure and controls application lifecycle operations. The management layer also continuously monitors the infrastructure and application stack to assign resources and carry out failover operations.
Because the application management layer is so tightly integrated with Kubernetes technologies, Robin is able to provide a unified product that facilitates simple and fast application roll-outs of both stateful and stateless applications, reducing the amount of time that IT and DevOps must spend to deploy and manage their workloads. The Robin platform also offers better resource use, while supporting complex enterprise applications such as Cloudera, Elastic Stack, Hortonworks and SAP HANA.
For organizations looking to Kubernetes to deploy their stateful applications, Robin’s Kubernetes offering could prove a useful alternative, especially for teams without the resources necessary to manage their containerized applications. For everyone else, Robin’s HCI Kubernetes system points to the potential of using hyper-convergence to achieve greater flexibility when implementing a container-based infrastructure, a trend that could well gain momentum as hyper-convergence and container technologies continue to make their mark in today’s data centers.