ROBIN Platform

ROBIN Platform Datasheet

Automate Enterprise Applications on Kubernetes

Extend Kubernetes for data-intensive applications such as Oracle, Cloudera, the Elastic Stack, RDBMS, NoSQL, and other stateful applications.

ROBIN Platform

ROBIN is a software platform for automating the deployment, scaling, and lifecycle management of enterprise applications on Kubernetes. ROBIN provides a self-service App-store experience and combines containerized storage, networking, compute (Kubernetes), and the application management layer into a single system.

Robin.io helps enterprises increase productivity, lower CAPEX and OPEX, and achieve always-on automation with technology solutions for big data, databases, indexing and search, and industry solutions for financial services and telco.

This software-only solution runs on-premises in your private data center or in public-cloud (AWS, Azure, GCP) environments and enables 1-click deployment of any application. ROBIN enables 1-click simplicity for lifecycle management operations such as snapshot, clone, patch, upgrade, backup, restore, scale, and QoS control of the entire application. ROBIN solves the fundamental challenges of running big data and databases in Kubernetes and enables deployment of an agile and flexible Kubernetes-based infrastructure for enterprise applications.

Key Benefits

  • Increase Productivity
  • Lower Cost – CAPEX and OPEX
  • Gain Always-on Availability
  • Run data-heavy applications on Kubernetes

ROBIN Platform Stack Components

Application Management Layer – Manage applications and configure Kubernetes, storage, and networking with application workflows.

Kubernetes – Run big data and databases in extended Kubernetes, eliminating limitations that restrict Kubernetes to microservices applications.

Built-in Storage – Allocate storage while deploying an application or cluster, share storage among apps and users, get SLA guarantees when consolidating, support data locality, affinity, anti-affinity, and isolation constraints, and handle storage for applications that modify the root filesystem.

Built-in Networking – Set networking options while deploying apps and clusters in Kubernetes and preserve IP addresses during restarts.
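
To make the storage component above concrete, here is a minimal sketch of requesting a volume at deployment time with the official Kubernetes Python client. The StorageClass name `robin`, the namespace, and the size are assumptions for illustration; the actual class name exposed by the platform may differ on your cluster.

```python
from kubernetes import client, config

# Assumes kubectl access to the cluster (kubeconfig on the local machine).
config.load_kube_config()
core_v1 = client.CoreV1Api()

# Request a 100Gi volume from an assumed "robin" StorageClass.
# Check `kubectl get storageclass` for the class name actually provided.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="demo-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="robin",  # assumption, not a documented default
        resources=client.V1ResourceRequirements(requests={"storage": "100Gi"}),
    ),
)
core_v1.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```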

ROBIN Platform Features and Benefits

Feature: Rapid Deployment – Self-service 1-click App-store experience.
Benefit: Slash deployment and management times from weeks and hours to minutes. Deploy and manage data-heavy apps and services in Kubernetes.

Feature: Control QoS – Dynamically control QoS for every resource: CPU, memory, network, and storage.
Benefit: Get complete visibility into the underlying infrastructure, set min and max IOPS, eliminate noisy-neighbor issues, and gain performance guarantees.

Feature: Rapid Clones – Clone the entire application along with its data: thick, thin, or deferred.
Benefit: No performance penalties; back up data with ease; share data among users and applications, and among dev, test, and prod, with no additional storage.

Feature: Application Snapshots – Take unlimited full application cluster snapshots, which include application configuration plus data.
Benefit: Restore or refresh a cluster to any point in time using snapshots. Roll back easily with 1-click to the last snapshot in case of data corruption.

Feature: Scale – Decouple compute and storage and scale them independently.
Benefit: Scale out by adding nodes. Scale up by increasing CPU, memory, and IOPS.

Feature: High Availability – No single point of failure; get reliable crossover and detect failures.
Benefit: Get automatic app-aware data failover for complex distributed applications on bare metal – ROBIN is the only product to provide HA for apps that persist state inside Docker images.

Feature: Upgrade – Automated rolling upgrade of application containers, integrated with the CI/CD pipeline.
Benefit: Safe-Upgrade technology guarantees that failed upgrades can be rolled back without disrupting the application.
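
As a concrete illustration of the QoS controls listed above, the snippet below shows how the standard per-container CPU and memory guarantees are expressed with the Kubernetes Python client; min/max IOPS control is a Robin-specific capability configured through the platform itself and is not shown here. The container name and image are illustrative.

```python
from kubernetes import client

# "requests" is the reserved minimum, "limits" is the hard ceiling.
# Robin layers storage IOPS min/max on top of these standard Kubernetes
# fields through its own application configuration (not shown).
db_container = client.V1Container(
    name="postgres",                     # illustrative container name
    image="postgres:14",
    resources=client.V1ResourceRequirements(
        requests={"cpu": "2", "memory": "8Gi"},
        limits={"cpu": "4", "memory": "16Gi"},
    ),
)
```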

Enterprise Data Apps-as-a-Service – Sample Customer Deployments

Fortune 500 Financial Services Leader

  • 11 billion security events ingested and analyzed in a day
  • DevOps simplicity for Elasticsearch, Logstash, Kibana, Kafka

Global Networking and Security Leader

  • 6 Petabytes under active management in a single Robin cluster
  • Agility, consolidation for Cloudera, Impala, Kafka, Druid

Global Technology Company – Travel Industry

  • 400 Oracle RAC databases managed by a single Robin cluster
  • Self-service environment for Oracle, Oracle RAC

Provision Oracle RAC Database as a Service with ROBIN Platform

See how easy it is for anybody to stand up an entirely new Oracle RAC environment, including the grid infrastructure installation, the ASM configuration, and finally the creation of the RAC database itself.

Log into the Robin Hyperconverged Kubernetes Platform console and go straight to the application bundle screen. In this case, we have a couple of simple bundles, one of which is our Oracle RAC bundle, so we simply click on that to provision Oracle RAC. On clicking it, we are immediately presented with the provisioning workflow associated with this application.

We will name our application; we'll call this one Oracle RAC demo. Now we have a couple of network interfaces to consider, because for Oracle RAC both the public and private IP address ranges are available here. This is where we specify the public address, because this is how the application will receive connection requests. We also have the ability to specify the size of the cluster, both in terms of the number of nodes and the amount of compute and memory capacity.

This gives us the ability to shape the way in which the database will be laid out. In this case, we change the default from flash to spinning disk because we don't have enough flash capacity available for this particular deployment. We then move down to specify our private interconnect IP address and the Single Client Access Name (SCAN) for RAC. Scrolling further down, we find a number of other environment variables that may be passed through Robin for this deployment.

We can define the ASM disk group redundancy and various credentials, and then we have our placement rules, where we control how these resources will be deployed on the physical Robin cluster. In this case, we need to allow multiple RAC instances on the same physical node because we only have two nodes in our demo environment.

Simply click on Provision Application from that screen to kick off the deployment of our RAC environment. The provisioning process goes through a number of phases, beginning with the deployment of the vnodes (the virtual nodes, or pods, in the cluster) and then running a variety of scripts to complete the Oracle-side configuration of the RAC environment, all visible through the UI. After this, it is a matter of minutes before our entirely fresh new RAC environment is up and running.
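
The walkthrough above is UI-driven, but the same inputs could in principle be supplied programmatically. The sketch below is purely illustrative: the endpoint, payload fields, and bundle name are assumptions, not a documented Robin API; it simply mirrors the choices made in the workflow (application name, node count and shape, public and private networks, SCAN name, disk media, and a placement rule that allows two RAC instances to share a physical node).

```python
import requests

ROBIN_API = "https://robin.example.com/api"    # hypothetical endpoint
HEADERS = {"Authorization": "Bearer <token>"}  # auth obtained out of band

# Hypothetical payload mirroring the UI inputs from the walkthrough above.
payload = {
    "bundle": "oracle-rac",
    "name": "oracle-rac-demo",
    "nodes": 2,
    "cpu_per_node": 8,
    "memory_per_node": "64Gi",
    "public_network": "pub-net",
    "private_network": "priv-net",
    "scan_name": "rac-demo-scan",
    "media": "hdd",                               # spinning disk instead of flash
    "placement": {"allow_colocated_instances": True},
}

resp = requests.post(f"{ROBIN_API}/apps", json=payload, headers=HEADERS)
resp.raise_for_status()
print("Provisioning started:", resp.json())
```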

View Provision Oracle RAC demo to learn more.

Scale Out Oracle RAC Database as a Service with ROBIN Platform

Oracle RAC Database as a Service – How to Scale

You have seen how easy it is to deploy a fresh new Oracle RAC database environment. But what if we want to know how our workload responds when a third node is added to the cluster? In other words, we want to test the scalability of that particular workload.

It's really easy. We click on "Scale Out" for the application, and here we can define the number of nodes by which we want to extend the cluster. This is done simply by sliding across the bar; for this demonstration we add a single node. We can also explicitly call out a hostname for the new node, and we can go back and tweak some of the environment variables as input for this operation, but for this demo we don't need to make any of these changes.

So let's close these out and simply click on the "Scale Out" button to begin the process of extending our RAC cluster. Behind the scenes, Robin is making all the necessary calls to Oracle to effect the extension of the cluster, in much the same way as you would through conventional means for any other installation, ensuring that from an Oracle perspective everything is agreeable with the configuration. You can see the success of the operation in this window. We close this window, and now we are back on our application screen with a newly refreshed view showing that our third node has been added.

We can see the new IP addresses and the physical host on which the new container has been deployed. Let's jump into the new container and do a similar verification to confirm that we have successfully reshaped the RAC database environment from two nodes to three. We log into Oracle, set our environment, and use srvctl to check the status of our database. We can see that the third instance has been added and is now running on the new vnode.

In the new container in the Robin cluster, we can see that the new VIP has been added and is up and running. The resources have been successfully configured across the new node. If we go back into SQL*Plus, log back into the database, and once again query gv$instance, we can see that the database is up and fully available across all three instances of the cluster. We exit out of that and return to the UI. Now, what if we want to scale back in? Testing is completed, so we need to shrink the cluster back to two nodes.
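
For reference, the verification steps shown in the demo can be scripted. `srvctl status database -d <db>` and the `gv$instance` query are standard Oracle RAC tooling; the database name below and the assumption that the Oracle environment (ORACLE_HOME, PATH) is already set in the container are illustrative.

```python
import subprocess

DB_NAME = "racdemo"  # illustrative db_unique_name; use your database's name

# Confirm all RAC instances are running (standard Oracle clusterware command).
subprocess.run(["srvctl", "status", "database", "-d", DB_NAME], check=True)

# Query gv$instance across all nodes to confirm every instance is open.
sql = "select inst_id, instance_name, status from gv$instance;\nexit;\n"
subprocess.run(["sqlplus", "-s", "/", "as", "sysdba"],
               input=sql, text=True, check=True)
```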

Watch the demo to understand how to scale back.

Clone Oracle RAC Database as a Service with ROBIN Platform

We have a database application that is up and running. Now let’s take a look at how easy it is to take snapshots of that application and then subsequently perform cloning operations.

Create Snapshot

Creating a snapshot is quite easy with Robin. We have the option to provide a name for the snapshot or just use the default – which is what we’ll do here. We can look at some of the operations behind the scenes that are going to occur with respect to freezing IO and quiescing the application to maintain consistency. We will then see the newly created snapshot.

From here we have the option of restoring back to that point in time or, in this case, performing a thin clone operation based on that snapshot. Here we name the clone. It is essentially an entirely new application stack that will be stood up as part of this operation, so we need to give it a name just as we did for the original application when it was provisioned.

We also need to specify both the public and the private IP addresses because, again, this is a RAC database application. We could tweak the capacity for this app, but we'll leave it the same, specify the private IP address, and launch the operation by clicking on Clone. This takes a few minutes.

We can again take a look at some of the operations that are occurring behind the scenes with respect to deploying the application. It’s relatively quick and at this point, we can close out this window.
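
A programmatic equivalent of this snapshot-then-clone flow might look like the sketch below. The endpoints and field names are hypothetical illustrations rather than a documented Robin API; the flow they model (snapshot the running application, then create a thin clone from that snapshot with its own name and networks) follows the UI steps described above.

```python
import requests

ROBIN_API = "https://robin.example.com/api"    # hypothetical endpoint
HEADERS = {"Authorization": "Bearer <token>"}
APP = "oracle-rac-demo"

# 1. Snapshot the whole application (configuration + data). Robin quiesces
#    the app and freezes IO around the snapshot to keep it consistent.
snap = requests.post(f"{ROBIN_API}/apps/{APP}/snapshots",
                     json={"name": "pre-clone"}, headers=HEADERS)
snap.raise_for_status()
snapshot_id = snap.json()["id"]

# 2. Create a thin clone from that snapshot. The clone is an entirely new
#    application stack, so it needs its own name and IP settings.
clone = requests.post(f"{ROBIN_API}/apps/{APP}/clones",
                      json={"snapshot": snapshot_id,
                            "name": "oracle-rac-demo-clone",
                            "thin": True,
                            "public_network": "pub-net",
                            "private_network": "priv-net"},
                      headers=HEADERS)
clone.raise_for_status()
```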

View Oracle RAC Clone and the original application

Now we are presented with the application screen for the newly cloned app, with all the related information about the new nodes that have been provisioned, IP addresses, and so on. If we then go back to the general application screen, we get a summary showing the original application, the newly cloned deployment, and the snapshot on which it was based.

Hyperconverged Kubernetes

Executive Summary – Hyperconverged Kubernetes White Paper

Kubernetes is the de facto standard for container orchestration for microservices and applications. However, enterprise adoption of big data and databases on containers and Kubernetes is hindered by challenges such as the complexity of persistent storage, networking, and application lifecycle management. Kubernetes provides the agility and scale modern enterprises need, but it provides only the building blocks for infrastructure, not a turnkey solution.

On the other hand, Hyper-converged Infrastructure (HCI) provides a turnkey solution by combining virtualized compute (hypervisor), storage, and network in a single system. It eliminates the complexity of integrating infrastructure components by providing an out-of-the-box solution that runs enterprise applications.

We believe combining Kubernetes and the principles of HCI brings simplicity to Kubernetes and creates a turnkey solution for data-heavy workloads. Hyper-converged Kubernetes technology with built-in enterprise-grade container storage and flexible overlay networking extends Kubernetes’ multi-cloud portability to big data, databases, and AI/ML.

Introducing: Hyper-Converged Kubernetes

What is hyper-convergence? Hyper-converged Infrastructure is a software-defined IT framework that combines compute, storage, and networking in a single system. HCI virtualizes all components of the traditional hardware-defined IT infrastructure. Typically, HCI systems consist of a hypervisor for virtualized computing, a software-defined storage (SDS) component, and a software-defined networking (SDN) component.

Hyper-converged Infrastructure software runs on x86-based commodity hardware. It provides a complete environment for running enterprise applications, which means IT teams do not have to stitch together the various pieces needed to run the applications. All the required components are provided out of the box.

What is Kubernetes?

Kubernetes (also commonly referred to as K8s) is a container orchestration system that automates lifecycle operations such as deployment, scaling, and management for containerized applications. It was initially developed by Google and later open-sourced. It is now managed by the Cloud Native Computing Foundation (CNCF).

Kubernetes groups containers into logical units called Pods. A Pod is a collection of containers that belong together and should run on the same node. Kubernetes provides a Pod-centric management environment. It orchestrates compute, storage, and networking resources for workloads defined as Pods. Kubernetes can be used as a platform for containers, microservices, and private clouds.
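
As a minimal illustration of the Pod concept, the snippet below defines and submits a single-container Pod with the official Kubernetes Python client; the image, names, and namespace are placeholders.

```python
from kubernetes import client, config

config.load_kube_config()  # assumes kubectl access to the cluster

# A Pod is the smallest deployable unit: one or more containers that are
# scheduled together on the same node and share network and storage.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="demo-pod", labels={"app": "demo"}),
    spec=client.V1PodSpec(containers=[
        client.V1Container(
            name="web",
            image="nginx:1.25",
            ports=[client.V1ContainerPort(container_port=80)],
        ),
    ]),
)
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```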

Kubernetes for Stateful Applications: Running Databases, Big Data, and AI/ML Workloads in the Enterprise

Enterprise cloud-native requirements demand a robust platform that can support stateless and stateful workloads along with the necessary performance and SLA guarantees. The ROBIN Hyper-Converged Kubernetes platform is built from the ground up to deploy enterprise applications. With an App-Store model for deploying stateful applications, Robin gives DevOps teams agility with enterprise-grade performance.

Introduction

In today's competitive market, enterprise IT faces the unenviable task of supporting innovation while enabling support for a variety of complex applications. Whether it is new applications with stateless architectures or existing stateful, data-intensive applications, IT is expected to be a core part of the innovation team by empowering developers with the right abstractions and enabling an agile workflow from developer laptop to production. To meet the demands of the modern enterprise, IT has embraced cloud-native as the core pillar of its modernization strategy.

Kubernetes is the standard for container orchestration in the cloud-native ecosystem. Kubernetes, developed by Google and now part of the Cloud Native Computing Foundation (CNCF), is an open source container orchestration engine used for the deployment, scaling, and management of containers. Growing market demand is cementing Kubernetes as the standard for container orchestration, and a vibrant ecosystem has emerged around it, increasing the momentum of the project.

In the past two years, more organizations have begun using Kubernetes in production. According to a recent CNCF survey, 58% of respondents are using Kubernetes in production, and this number will increase in the coming years as more enterprises go cloud-native. This trend is further highlighted by the report released by Dice and Indeed.com, which claims Kubernetes was the top job search term in 2018 and that this trend will grow further in 2019. The advantage of Kubernetes lies in its low operational overhead, easier DevOps, and better abstractions for developers deploying their applications. Kubernetes supports both on-premises and cloud-based deployments, and its support for hybrid/multi-cloud deployments makes it attractive for enterprises.

Get the White Paper – Kubernetes for Stateful Workloads

White Paper – Deploy, Manage, Consolidate NoSQL Apps with ROBIN Hyperconverged Kubernetes Platform

NoSQL White Paper

NoSQL database applications like Cassandra, MongoDB, CouchDB, ScyllaDB, and others are popular tools in a modern application stack. However, deploying NoSQL databases typically starts with weeks of careful infrastructure planning to ensure good performance, the ability to scale to meet anticipated growth, and continued fault tolerance and high availability of the service. Post-deployment, the rigidity of the infrastructure also poses operational challenges in adjusting resources to meet changing needs; in patching, upgrades, and backup; and in snapshotting and cloning the database to create test and dev copies.

The ROBIN hyper-converged Kubernetes platform takes an innovative approach in which application lifecycle workflows are natively embedded into a tightly converged storage, network, and Kubernetes stack, enabling a 1-click self-service experience for both deployment and lifecycle management of big data, database, and AI/ML applications. Enterprises using Robin gain simpler and faster roll-out of critical IT and LoB initiatives such as containerization, cloud migration, cost consolidation, and developer productivity.

This complimentary NoSQL white paper shows how to bring 1-click simplicity to deploy, snapshot, clone, patch, upgrade, backup, restore, and control QoS of any Kubernetes-based NoSQL app:

  • Deploy, manage, and consolidate any NoSQL App in your environment
  • Self-service deployment of NoSQL Apps with 1-click
  • Infrastructure consolidation and cost savings

Infographic: Building Stateful Cloud Applications With Containers

Tips From Top Thinkers

Building Stateful Cloud Applications With Containers

The continued expansion of the cloud, growing end-user application performance demands, and an explosion in database needs are all stacking up fast against enterprise IT teams. When it comes to building enterprise database and big data applications, many are finding that container technology solves for at least a few of these problems. Here are stats and tips from top thinkers on how to best use containers when building stateful cloud applications.

Persistent Storage is a Top Challenge

26% of IT professionals cited “persistent storage” as a top challenge when it comes to leveraging containers.

Streamline Until It Hurts
“Some of the best writers have said they refine their work by cutting till it hurts. Containers are the same way.”
– Eric Vanderburg, Vice President, Cybersecurity | TCDI

Isolate Containers & Hosts
“Maintaining isolation between the container and hosts system by separating the file systems is vital towards management of the stateful application.”
– Craig Brown, PhD, Senior Big Data Architect & Data Science Consultant

Select an Intelligent Orchestrator
“An intelligent orchestrator along with software-defined storage and software-defined networking is essential for running a cloud-based application.”
– Deba Chatterjee, Senior Engineering Program Manager | Apple

A Majority of Enterprises are Investing in Containers

69% of IT pros reported their companies are investing in containers.

Validate All States
“What they all (containerized stateful apps) have in common is the requirement to reliably validate all possible states and state transitions when changes are made to the application.”
– Marc Hornbeek, Principal Consultant – DevOps | Trace3

Ensure You Can Monitor All Containers
“Containerised applications are addictive. They can be created, tested and deployed very quickly when compared to traditional VMs. The infrastructure to begin monitoring a potentially vast and varying number of new containers is essential.”
– Stephen Thair, Co-Founder | DevOpsGuys

Offset Workloads with Containers
“Stateful applications often reside in 1 or 2 geographical locations and take heavy loads … and at different times during peak and off-peak periods. Understanding these variables will enable an operations team to determine how to best design the use of container applications.”
– Steve Brown, Director, DevOps Solutions N.A. | Lenovo

Top Container Orchestrators Now More Popular Than DevOps Tools

When choosing a platform, 35% felt Docker was the best fit for them among all DevOps tools.

Get Infrastructure Pros Excited
“A lot of people focus too much on the fact that “those application guys” are coming to mess with our infrastructure, instead of thinking that maybe we can elevate our own jobs and start working more closely with applications.”
– Stephen Foskett, Proprietor | Foskett Services

Follow Microservices Design Principles
“One of the fundamental aspects of containers is moving to immutable application infrastructure, which means that you cannot store state and application in the same container.”
– JP Morgenthal, CTO Application Ser

Don’t Use Containers for Data Storage
“When dealing with stateful applications, precautions need to be taken to ensure that you are not compromising or losing data.”
– Sylvain Kalache, Co-Founder | Holberton School

Looking for more advice on building your stateful cloud application with containers? Download our full eBook today for more exclusive advice from top cloud, DevOps, and container technology pioneers.

Taming the Cassandra-DataStax Dev/Test Challenge in Production Ecosystems

Cassandra-DataStax has a huge impact on customer-facing solutions at scale. However, like most technologies, it presents unique challenges that are often first felt in the development and testing of the application. Containers, Docker in particular, have become a leading tool in addressing some of those challenges. However, Docker alone does not solve all the really hard and time-consuming problems.

RAPID CLUSTER DEPLOYMENT

  • Simplified
  • Repeatable and rapid cluster deployment
  • Node placement logic
  • Cluster scaling
  • Guarantee quality of service (QoS)

CLUSTER CLONING

  • Maintain cluster configuration during clone process
  • How to speed up the cluster cloning process
  • Space efficiency when duplicating clusters

TIME TRAVEL FOR CLUSTERS

  • Cluster snapshots in Robin
  • Point in Time capabilities using Robin
  • Role of cloning in Point in Time operations

CARY BOURGEOIS, Systems Engineer, Robin Systems

Cary has 20+ years of experience working with applications, databases, and analytics. Prior to joining Robin he worked at DataStax in their field organization. Before moving to DataStax, Cary worked at SAP supporting their In-Memory Database (SAP HANA), Big Data solutions, and analytic applications. Cary also has experience in the Consumer Packaged Goods industry, having developed several commercial applications for ACNielsen.

Robin Solution – Simple Application and Data Lifecycle Management

The ROBIN Hyper-Converged Kubernetes Platform brings bare-metal-like performance, retains virtualization benefits, and enables significant cost savings, all from the same management layer.
The platform transforms commodity hardware into a compute, storage, and data continuum where multiple applications can be deployed per machine to ensure the best possible hardware utilization.

The Robin containerization platform provides 1-click deployment for traditional relational databases such as Oracle, PostgreSQL, and MySQL, and for modern NoSQL databases such as MongoDB and Cassandra. DBAs, DevOps engineers, or developers simply choose which database to deploy, while Robin completely automates infrastructure and data provisioning, monitoring and tracking of the application topology through its lifecycle, and day-2 operations, with Robin's application-to-spindle QoS guarantee.

Users interact with Robin entirely at the database level; the underlying infrastructure is provisioned and managed transparently.

  • Get bare-metal-like performance
  • Retain virtualization benefits
  • Enable cost savings
  • Manage everything from a single management layer

NoSQL Databases – Simple Lifecycle Management & Database Consolidation

Robin Systems on Vimeo

On-demand Webinar: Are Containers Ready to Run NoSQL Databases?

Are Containers Ready to Run NoSQL Databases?

The database is the quintessential data dependency for any application. Databases in production environments tend to be performance sensitive and expect consistent and predictable performance from their underlying infrastructure. On the other hand, databases in dev/test environments need to be fast, agile and portable.

Due to this paradox, production databases are typically deployed on bare metal servers for maximum performance and predictability. This often leads to underutilization of hardware, idle capacity, and poor isolation. Dev/test databases, on the other hand, are deployed on VMs, which are fast to deploy, improve hardware utilization and consolidation, are fully isolated, and are easy to move across data centers and clouds, but which suffer from poor performance, hypervisor overhead, and unpredictability. It is a challenge to run NoSQL databases with great performance and no hypervisor overhead while moving data seamlessly from dev and test to prod without any downtime.

In this webinar, learn about:

  • How NoSQL databases like Cassandra can benefit from container technology
  • If the current storage systems can support containerized databases
  • How to alleviate data management challenges for large databases
  • How to run NoSQL databases on RCP
  • How ROBIN Hyper-Converged Kubernetes Platform can deliver bare-metal-like performance while retaining all virtualization benefits

Robin for NoSQL Databases

Robin Systems on Vimeo