eBook: Delivering Enterprise Apps as-a-Service

Intellyx: Three Architectural Capabilities Critical for Financial Services Leaders

Charles Araujo from Intellyx examines the seismic shift occurring in the financial services sector. The pace and depth of this transformation will challenge financial services teams, demanding that they build and develop three fundamental architectural capabilities to power them through it.

How to Bring Cloud-Native Computing to the Enterprise

This Cloud-Native Computing white paper covers:

– Rise of Containers & Kubernetes

– Dealing with State in a Stateless Environment

– Modernizing Legacy Assets the Cloud-Native Way

– How Cloud-Native Architectures Help with Legacy

Robin Hyperconverged Kubernetes Platform

Author: Jason Bloomberg

Jason Bloomberg is a leading IT industry analyst, author, keynote speaker, and globally recognized expert on multiple disruptive trends in enterprise technology and digital transformation. He is founder and president of Digital Transformation analyst firm Intellyx. He is ranked #5 on Onalytica’s list of top Digital Transformation influencers for 2018 and #15 on Jax’s list of top DevOps influencers for 2017, the only person to appear on both lists. Mr. Bloomberg is the author or coauthor of four books, including The Agile Architecture Revolution (Wiley, 2013). His next book, Low-Code for Dummies, is due later this year.

How to Bring Cloud-Native Computing to the Enterprise White Paper

As the complexity of modern IT continues to explode with no limits in sight, organizations are increasingly leveraging cloud-native computing to bring the best practices of the cloud to their entire IT organization.

An essential enabling technology for this next-generation approach to architecting IT infrastructure is the use of containers in conjunction with the Kubernetes container orchestration platform.

Containers bring dramatically improved flexibility to the applications that enterprises put in front of customers but lack a sufficiently comprehensive way to maintain persistent information over time because containers are inherently stateless.

Managing state information in such a stateless environment, therefore, is one of the primary challenges facing enterprise deployments of Kubernetes. Both the Robin platform and Robin Storage solve this challenge by enabling stateful workloads that follow cloud-native principles.

What is ‘Cloud-Native’?

For all its transformative power and business value, cloud computing has unquestionably been a lightning rod for hype.

This buzzwordiness continues with little hope of abating, and today it swirls around terms such as cloud-native. In common parlance, cloud-native refers to software that developers have built in, and for, the cloud.

This concept is strikingly important to how enterprises take advantage of the cloud – and even more so, extend the value of the cloud to their IT organizations at large.

From the enterprise perspective, this definition of cloud-native might apply to some of the new software they’re building, but the cloud-native world would forever be separated from the on-premises context for enterprise IT that has been with us for generations.

Fortunately, this definition is shifting. Today, ‘cloud-native’ is more than ‘cloud only.’ It means bringing cloud-centric best practices to software and IT generally, whether that be in the cloud or on-premises – or both.

Robin Storage for Containers: Enabling Stateful Applications on Kubernetes

– ESG validated how Robin Storage simplifies application management, protection, and portability

– ESG validated that Robin Storage can bring a new set of applications into the containerized world

– Robin Storage brings advanced data management capabilities to Kubernetes

Learn more – Robin Storage for GKE and OpenShift


This report describes how Robin Storage delivers bare-metal performance and enterprise data management for stateful containerized applications on Kubernetes.

The Challenges

As organizations continue to pursue digital transformation initiatives, many have adopted container technologies to streamline application needs, get applications to market faster, and make them more portable. At the same time, Kubernetes has become the orchestrator of choice for deploying, managing, and scaling containers. While development remains a key container target, more organizations are deploying containers in production applications. When ESG asked IT managers about their production container usage in 2018, 56% reported having already deployed applications in production, 24% reported testing with a plan to deploy within a year, and another 16% reported that they expected to start testing production containers in the next year.1

Why the increased interest? Container technologies abstract applications from hardware by virtualizing the operating system, which is a lightweight design that makes them efficient, reliable, scalable, and portable. Containers enable development autonomy and agility, as developers can do more on their own without IT provisioning or management. The infrastructure and staffing efficiency of containers result in lower costs and streamlined processes.

Stateless containers have no need to keep data persistent once the processes they are executing have finished. A key challenge for running enterprise-class, container-based production applications is that they are most often stateful: that is, the applications maintain data from each compute session, even when the container terminates. As a result, running applications such as databases, artificial intelligence/machine learning (AI/ML), or custom-built applications on Kubernetes requires external storage that outlasts the container. When running mission-critical processes, these applications need swift storage provisioning, predictable performance, full data protection and security, easy data sharing, and the flexibility to leverage hybrid/multi-cloud deployments.
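The standard Kubernetes mechanism for storage that outlasts a container is a PersistentVolumeClaim bound to a PersistentVolume. A minimal sketch of such a claim for a containerized database is shown below; the claim name, StorageClass name, and capacity are illustrative assumptions that vary by installation:

```yaml
# Hypothetical PVC for a containerized database.
# The StorageClass "fast-block" is an assumption, not a vendor-specific name.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data
spec:
  accessModes:
    - ReadWriteOnce        # single-node read/write, typical for databases
  storageClassName: fast-block
  resources:
    requests:
      storage: 50Gi
```

A container that mounts this claim can be rescheduled or restarted while the data volume persists independently of the container's lifecycle.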

There are numerous external storage solutions that support containers through Container Storage Interface (CSI)-compliant APIs to manage interactions between container orchestrators such as Kubernetes and storage arrays. However, these solutions do not provide the performance and data management capabilities that enterprise production applications demand. They provide storage at the volume level but cannot deliver application-level data services.

The Solution: Robin Storage

Robin Storage is a CSI-compliant, container-native, software-defined block storage solution that offers enterprise-class performance and data management capabilities for Kubernetes-orchestrated containers. It provides resilient storage (supporting HDD, SSD, and NVMe) with bare-metal performance, and has built-in data rebalancing, disk and I/O error-detection, volume rebuilds, and hotspot detection.

Read the ESG Technical Review for Robin Storage: /wp-content/uploads/2019/07/esg-technical-review-robin-storage-technical-review.pdf

Application Aware Storage For GKE and OpenShift


As more and more enterprises move to Kubernetes for their cloud-native needs, they are looking for high-performance storage options for their stateful and data-intensive workloads. Beyond a seamless workflow to provision storage, they are also looking for complete data lifecycle management features as part of their storage platform. Robin Storage is a perfect candidate for this need. With its application awareness, high performance, and scale, Robin Storage is well suited to meet enterprise cloud-native storage needs. Robin has already partnered with Google Cloud and Red Hat OpenShift to deliver high-performance storage to their customers, and is well positioned to work with any Kubernetes platform.


As Kubernetes gains increasing enterprise adoption, supporting stateful applications becomes critical. The Kubernetes community is working on supporting stateful applications on the platform, but enterprise needs go beyond running basic stateful apps. Many enterprises are migrating their mission-critical legacy applications to container environments so that both their legacy and modern applications are hosted on Kubernetes clusters. This brings into focus the need for a comprehensive storage platform that meets critical enterprise needs such as:

  • Ease of use without developers having to manage additional operational overhead.
  • Provisioning and managing storage should be integrated into the developer workflow
  • Bare-metal like performance
  • High availability
  • Comprehensive data management capabilities

While most Kubernetes platforms meet the compute and networking needs of enterprise workloads, very few are capable of meeting the performance and data management needs of the enterprise. According to Rishidot Research's analysis of market trends and our discussions with enterprise stakeholders, more than half of enterprises are expected to deploy container-based workloads by 2020, and a large majority of our research participants planning to use containerized workloads also said they plan to deploy stateful workloads on Kubernetes. There is a clear market need for storage solutions that work seamlessly with Kubernetes. As more big data and machine learning workloads move to Kubernetes, high performance becomes the core requirement for Kubernetes storage. Robin's platform features meet these requirements, and the company recently announced partnerships with both Google Cloud and Red Hat OpenShift. In this short white paper, we will discuss the state of stateful applications in Kubernetes and how Robin Storage meets the enterprise needs of Google Cloud and OpenShift customers.

The state of stateful applications in Kubernetes

Running stateful applications in Kubernetes is challenging even with the progress the Kubernetes community has made on StatefulSets and support for PersistentVolumes. According to the CNCF survey, the critical challenges for running containers in production are security, complexity, storage, and networking. Enterprise needs differ from those of purely modern applications because legacy and data-intensive applications are in the mix. Even with StatefulSets, bringing up clusters, handling node failures, managing routing, and ensuring HA for databases or big data workloads requires a great deal of coding and operational overhead. Using storage volumes with Kubernetes means managing volume-level primitives, which adds further operational overhead. As developers push for ease of access to resources, managing multiple configuration settings to deploy stateful applications is a non-starter for developers in the modern enterprise.
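The volume-level primitives mentioned above show up concretely in a StatefulSet definition, where each replica gets its own PersistentVolumeClaim via a template and a separately created headless Service handles stable network identity. A minimal sketch follows; the image, names, and sizes are illustrative assumptions:

```yaml
# Minimal StatefulSet sketch for a database; names and sizes are illustrative.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql          # a matching headless Service must exist separately
  replicas: 3
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:8.0
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
  volumeClaimTemplates:       # Kubernetes creates one PVC per replica
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 20Gi
```

Even this small example leaves cluster formation, failover, and replica recovery to the operator, which is the operational overhead the text describes.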

Critical Enterprise Needs

– Application-aware storage
– Ease of use without developers having to manage additional operational overhead
– Provisioning and managing storage integrated into the developer workflow
– Bare-metal-like performance
– High availability
– Comprehensive data management capabilities on Kubernetes

High Performance Kubernetes Platform For Stateful Workloads


Enterprises need high-performance storage for their stateful applications. Their needs vary from running relational and NoSQL databases to big data and AI/ML workloads, and they want near bare-metal performance along with scale. In this white paper, we compare the Robin platform with open source GlusterFS; our analysis shows that the Robin platform provides near bare-metal performance and scales well to meet the needs of big data and AI/ML applications.


As containers gain traction inside the enterprise and Kubernetes emerges as the standard for container orchestration, users are looking to leverage the technology for mission-critical stateful workloads such as databases, big data, and AI/ML applications. Unlike stateless applications, these applications have demanding storage and networking requirements. Containers and Kubernetes are helping modern enterprises embrace agile and DevOps practices while also bringing better efficiency to IT operations. The key to enterprise IT transformation lies in leveraging a stack such as Kubernetes for both stateless and stateful workloads, including big data and AI/ML applications.

The Kubernetes community has focused its attention on the need to support stateful workloads – the work done around StatefulSets is a good indicator of the progress. But this effort is far from mature, and there is operational overhead in provisioning the clusters needed for persistent volumes. Many IT organizations spend multiple cycles getting Kubernetes set up for stateful workloads, leading to friction and delays. The problem gets bigger when big data and other data-intensive workloads become part of the equation. Beyond the operational overhead, performance is also a critical criterion for these workloads. Enterprise decision makers are torn between a DIY approach to running stateful workloads on Kubernetes and finding the right platform that is suitable for data-intensive workloads.

If faster time to market is the driver for the adoption of cloud and IT modernization strategies, building a platform for stateful workloads from vanilla Kubernetes or one of the platforms available for stateless applications is a waste of resources leading to undifferentiated heavy lifting. The key to successful digital transformation lies in picking a platform that provides a competitive edge in the market without incurring high operational overhead.

Key evaluation criteria

– Does the Kubernetes platform give bare metal like performance?
– Are the performance guarantees available at scale?
– Is the performance predictable?

Advanced Data Management for OpenShift – Powered by Robin Storage | White Paper

Robin Storage for OpenShift White Paper:

Manage App+Data as a Single Entity

Robin Storage is a purpose-built container-native storage solution that brings advanced data management capabilities to Kubernetes. It is a CSI-compliant block storage solution with bare-metal performance that seamlessly integrates with Kubernetes-native administrative tooling such as kubectl, Helm Charts, and Operators through standard APIs. Robin Storage is application-aware. The “Application” construct, as defined above, provides the context for all Robin Storage operations. All lifecycle operations are performed by treating app+data as a single entity.

For example, when you snapshot a MongoDB application, Robin Storage captures the entire application topology and its configuration (i.e., the specs of Pods, Services, StatefulSets, Secrets, ConfigMaps, etc.) and all data volumes (PersistentVolumeClaims) to create a point-in-time application checkpoint.
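For contrast, the upstream Kubernetes CSI snapshot API operates on one volume at a time, so an application-wide checkpoint like the one described above would otherwise mean coordinating many such objects plus the surrounding Kubernetes specs by hand. A volume-level sketch using the standard API (class and claim names are hypothetical):

```yaml
# Standard CSI VolumeSnapshot (snapshot.storage.k8s.io/v1).
# This captures a single PVC only, not the application topology.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: mongodb-data-snap
spec:
  volumeSnapshotClassName: csi-snapclass   # assumed snapshot class name
  source:
    persistentVolumeClaimName: mongodb-data # assumed claim name
```

A multi-volume, multi-resource application would need one such object per claim, plus separate capture of Secrets, ConfigMaps, and workload specs, which is the gap an app+data checkpoint aims to close.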

Key Features

Data Protection and Security

  • Protect app+data with replication, snapshots, backup & recovery to run always-on applications
  • Secure data with encryption at rest and in motion
  • Safeguard against data corruption with checksum error-detection

Automated Application Management

  • Bring automated management of app+data (not just storage) to kubectl, Helm, and Operators
  • Enable quick and easy deployment of enterprise workloads on any Kubernetes distribution

High Performance at Scale and QoS Guarantee

  • Get high-performance enterprise-grade storage trusted and validated by Google
  • Experience bare-metal performance with the flexibility and scale of software-defined storage
  • Guarantee QoS for high-priority applications by setting IOPS limits per application

DevOps Collaboration for Stateful Applications

  • Enable collaboration across geos and teams by cloning app+data in minutes
  • Quickly share app+data among Dev, QA, and Production teams to shorten release cycles

Hybrid and Multi-Cloud Flexibility

  • Enable easy movement of app+data between on-prem and cloud(s)
  • Avoid infrastructure lock-in; run your applications on the most cost-effective infrastructure

Advanced Data Management – Robin Storage for OpenShift White Paper

  • The Need for Data Management on Kubernetes
  • Defining and Managing An Application
  • Robin Storage: Manage App+Data as a Single Entity
  • Registering Helm Releases as Applications
  • App+Data Time-Travel with Snapshots
  • DevOps Collaboration using App+Data Clones
  • Backup & Restore App+Data to Recover from System Failures
  • App+Data Portability across Clouds

Learn more – Advanced Data Management for Kubernetes

Advanced Data Management for Kubernetes – Powered by Robin Storage

Robin Storage White Paper

The Need for Data Management on Kubernetes

Kubernetes is gaining rapid adoption, and enterprise customers are demanding the ability to run broader sets of workloads, including stateful applications. Running stateful applications such as PostgreSQL, MySQL, MongoDB, Elastic Stack, Kafka, and MariaDB requires advanced data management capabilities in order to:

  • Release new products and features faster: Automated lifecycle management for app+data (not just the storage) is required to save valuable time at each stage of the lifecycle.
  • Collaborate quickly across teams: Multiple teams (Dev/Test/Ops) need a mechanism to collaborate without procedural delays. CI/CD pipelines solve a part of the problem with automating the collaboration for code changes, but data is usually left out.
  • Recover from system failures and user errors: App+data protection capabilities such as point-in-time snapshots, backup, and restore are required to recover from system failures and user errors.
  • Avoid infrastructure lock-in: The ability to migrate from on-prem to cloud and vice versa, and among the public clouds is needed to avoid infrastructure lock-in.
  • Deliver predictable performance: To guarantee QoS and to ensure high priority applications do not miss SLAs, you need the ability to set IOPS limits per app.
  • Eliminate security vulnerabilities: Enterprise-grade security is required with authentication and encryption to ensure your data is safe.

Defining and Managing An Application

Kubernetes provides many useful constructs such as Pods, Controllers, PersistentVolumes etc. to help you manage your applications. However, there is no construct for an “Application”, i.e. a single entity that consists of all the resources that form an application. Users have to manually map the resources to an application and manage each resource individually for any lifecycle operation. The lack of a proper Application construct in Kubernetes poses a problem when it comes to performing operations that encompass a group of resources.

Frameworks such as Helm and Operators try to solve this problem by packaging resources together, but they do not solve it beyond the initial deployment. For example, how would one snapshot, clone, or back up an entire Helm release that spans PersistentVolumeClaims, Secrets, ConfigMaps, StatefulSets, Pods, Services, etc.? Or how about snapshotting a web tier, app tier, and database tier each deployed separately using three different kubectl manifest files?
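A common partial workaround for the missing Application construct is to tag every resource of an application with the recommended shared labels so they can at least be selected together. A sketch (application and resource names are hypothetical):

```yaml
# Shared labels let you query an application's resources as a group,
# e.g. `kubectl get all,configmap,secret -l app.kubernetes.io/part-of=shop`.
# This groups resources for selection, but lifecycle operations such as
# snapshot, clone, or backup are still not atomic across them.
apiVersion: v1
kind: ConfigMap
metadata:
  name: web-config
  labels:
    app.kubernetes.io/name: web
    app.kubernetes.io/part-of: shop   # hypothetical application name
data:
  LOG_LEVEL: info
---
apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    app.kubernetes.io/name: web
    app.kubernetes.io/part-of: shop
spec:
  selector:
    app.kubernetes.io/name: web
  ports:
    - port: 80
```

Labels answer "which resources belong to this application?" but not "how do I operate on them as one unit?", which is the gap described above.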

  • The Need for Data Management on Kubernetes
  • Defining and Managing An Application
  • Robin Storage: Manage App+Data as a Single Entity
  • Registering Helm Releases as Applications
  • App+Data Time-Travel with Snapshots
  • DevOps Collaboration using App+Data Clones
  • Backup & Restore App+Data to Recover from System Failures
  • App+Data Portability across Clouds

Learn more – Advanced Data Management for Kubernetes

White Paper – Elastic Stack as a Service on Kubernetes

Elastic as a Service on Kubernetes White Paper

The Elastic Stack, also known as the ELK stack, is one of the most popular DevOps tools. DevOps teams rely on the ELK stack to serve critical use cases such as Log Analysis, Metrics Analysis, Application Performance Monitoring, and Security Analytics. As a result, DevOps teams need the ability to provision and manage the ELK stack on-demand.

Kubernetes is revolutionizing how enterprises manage their IT infrastructure. Kubernetes brings automated deployment, effortless scaling, and cloud portability to workloads that run on top of it. These benefits make DevOps teams more productive, especially when applied to frequently used services such as the ELK stack. Containerizing the ELK stack on Kubernetes will help DevOps teams to find insights and resolve issues faster, reduce infrastructure costs, and be ready for the cloud-native world.

In this white paper, we will discuss the benefits of running the ELK stack on Kubernetes and the most common challenges DevOps teams face while doing so. We will address these challenges and explain how the Robin Hyperconverged Kubernetes platform simplifies provisioning and management of the ELK stack with 1-click operations.

This complimentary white paper covers how you can deploy ELK Stack as-a-Service in a Kubernetes-based environment:

  • Deploy, manage, and consolidate across any stage of your ELK deployment
  • 1-Click self-service deployment of ELK on Kubernetes
  • Multi-tenancy, consolidation, and cost savings

Learn more – Robin Hyperconverged Kubernetes for Elastic

Hyperconverged Kubernetes

Executive Summary – Hyperconverged Kubernetes White Paper

Kubernetes is the de-facto standard for container orchestration for microservices and applications. However, enterprise adoption of big data and databases using containers and Kubernetes is hindered by challenges such as the complexity of persistent storage, networking, and application lifecycle management. Kubernetes provides the agility and scale modern enterprises need; however, it provides the building blocks for infrastructure, not a turnkey solution.

On the other hand, Hyper-converged Infrastructure (HCI) provides a turnkey solution by combining virtualized compute (hypervisor), storage, and network in a single system. It eliminates the complexity of integrating infrastructure components by providing an out-of-the-box solution that runs enterprise applications.

We believe combining Kubernetes and the principles of HCI brings simplicity to Kubernetes and creates a turnkey solution for data-heavy workloads. Hyper-converged Kubernetes technology with built-in enterprise-grade container storage and flexible overlay networking extends Kubernetes’ multi-cloud portability to big data, databases, and AI/ML.

Introducing: Hyper-Converged Kubernetes

What is hyper-convergence? Hyper-converged Infrastructure is a software-defined IT framework that combines compute, storage, and networking in a single system. HCI virtualizes all components of the traditional hardware-defined IT infrastructure. Typically, HCI systems consist of a hypervisor for virtualized computing, a software-defined storage (SDS) component, and a software-defined networking (SDN) component.

Hyper-converged Infrastructure software runs on x86-based commodity hardware. It provides a complete environment for running enterprise applications, which means IT teams do not have to stitch together the various pieces needed to run their applications. All the required components are provided out of the box.

What is Kubernetes?

Kubernetes (also commonly referred to as K8s) is a container orchestration system that automates lifecycle operations such as deployment, scaling, and management for containerized applications. It was initially developed by Google and later open-sourced; it is now managed by the Cloud Native Computing Foundation (CNCF).

Kubernetes groups containers into logical units called Pods. A Pod is a collection of containers that belong together and should run on the same node. Kubernetes provides a Pod-centric management environment, orchestrating compute, storage, and networking resources for workloads defined as Pods. Kubernetes can be used as a platform for containers, microservices, and private clouds.
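A minimal Pod makes this concrete: the containers below are scheduled onto the same node and share the Pod's network namespace and lifecycle. Names and images are illustrative:

```yaml
# Two-container Pod sketch: a web server plus a sidecar sharing
# the same node, network namespace, and lifecycle.
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: app
      image: nginx:1.25
      ports:
        - containerPort: 80
    - name: log-shipper            # hypothetical sidecar placeholder
      image: busybox:1.36
      command: ["sh", "-c", "tail -f /dev/null"]
```

In practice, Pods are rarely created directly; controllers such as Deployments and StatefulSets manage them, which is what makes the orchestration automatic.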