ROBIN Storage Video – Advanced Data Management for Kubernetes

ROBIN Storage

Protect app+data with replication, snapshots, backup & recovery, and enterprise-grade security, and get hybrid and multi-cloud portability with ROBIN Storage today!

As part of digital transformation initiatives, organizations across the globe are increasingly adopting containers, and Kubernetes has emerged as the leading orchestration platform.

However, running mission-critical, stateful enterprise workloads on Kubernetes is still complex and challenging. Stateful applications such as PostgreSQL, MySQL, MongoDB, Elastic Stack, Kafka, and MariaDB require advanced data management capabilities in order to recover from system failures, collaborate effectively across DevOps teams, and deliver hybrid and multi-cloud flexibility.

Introducing ROBIN Storage, a cloud-native storage solution with advanced data management that enables stateful workloads on Kubernetes.

ROBIN Storage was born out of a partnership between Google and Robin.io that entails:

  • Engineering-to-engineering collaboration to design standardized APIs for running data-centric workloads in Google Kubernetes Engine.
  • ROBIN Storage as the preferred storage for enterprise workloads in GKE.

ROBIN Storage is a CSI-compliant block storage solution with bare-metal performance and powerful data management capabilities, exposed through standard APIs that integrate seamlessly with Kubernetes-native tooling such as kubectl, Helm charts, and the Operator framework.
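To give a concrete sense of what CSI compliance means in practice, here is a minimal sketch of provisioning a block volume through a standard Kubernetes StorageClass using the official Kubernetes Python client. The StorageClass name "robin" and the claim name are assumptions for illustration only; use whatever class your ROBIN installation actually registers.

```python
# A minimal sketch (not ROBIN's documented API): provisioning a block volume
# through a CSI StorageClass with the official Kubernetes Python client.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() inside a pod
core = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="pg-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="robin",  # hypothetical StorageClass name
        resources=client.V1ResourceRequirements(requests={"storage": "50Gi"}),
    ),
)

core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```

Because provisioning goes through the standard CSI path, the same claim could equally be created with kubectl or templated into a Helm chart.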

It provides automated provisioning, point-in-time snapshots, backup and recovery, enterprise-grade data security, application cloning, QoS guarantees, and multi-cloud migration for stateful applications on Kubernetes.

ROBIN Storage enables powerful hybrid cloud use cases, such as cloning a snapshot and rehydrating it in multiple Google Cloud Platform availability zones. It also offers the flexibility to leverage existing investments in storage infrastructure such as DAS, NAS, and SAN from leading vendors, and provides a single plane for advanced data management across hybrid cloud implementations.
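As a rough illustration of the snapshot-and-rehydrate workflow described above, the sketch below takes a point-in-time snapshot of a claim and restores it into a new volume using the standard snapshot.storage.k8s.io API that CSI drivers expose. The class and object names ("robin", "robin-snapshotclass", "pg-data") are hypothetical placeholders, not ROBIN's documented defaults.

```python
# A minimal sketch of snapshot-and-clone through the standard
# snapshot.storage.k8s.io/v1 API exposed by CSI drivers.
from kubernetes import client, config

config.load_kube_config()
custom = client.CustomObjectsApi()
core = client.CoreV1Api()

# 1. Take a point-in-time snapshot of an existing claim.
snapshot = {
    "apiVersion": "snapshot.storage.k8s.io/v1",
    "kind": "VolumeSnapshot",
    "metadata": {"name": "pg-data-snap"},
    "spec": {
        "volumeSnapshotClassName": "robin-snapshotclass",  # hypothetical
        "source": {"persistentVolumeClaimName": "pg-data"},
    },
}
custom.create_namespaced_custom_object(
    group="snapshot.storage.k8s.io", version="v1",
    namespace="default", plural="volumesnapshots", body=snapshot,
)

# 2. Rehydrate the snapshot into a fresh volume, e.g. to clone the app's data.
clone = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="pg-data-clone"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="robin",  # hypothetical StorageClass name
        data_source=client.V1TypedLocalObjectReference(
            api_group="snapshot.storage.k8s.io",
            kind="VolumeSnapshot",
            name="pg-data-snap",
        ),
        resources=client.V1ResourceRequirements(requests={"storage": "50Gi"}),
    ),
)
core.create_namespaced_persistent_volume_claim(namespace="default", body=clone)
```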

Protect app+data with replication, snapshots, backup & recovery, and enterprise-grade security, and get hybrid and multi-cloud portability with ROBIN Storage today!

ROBIN Hyper-Converged Kubernetes Platform in 2 Minutes

Robin Systems Unveils Hyper-converged Kubernetes Platform for Big Data, Databases and AI/ML Applications 

Using its unique hyper-converged Kubernetes technology, with built-in enterprise-grade container storage and flexible overlay networking, ROBIN eliminates the challenges of running big data and databases on Kubernetes and extends Kubernetes’ multi-cloud portability to big data, databases, and AI/ML.

ROBIN Explainer Video – ROBIN Hyper-Converged Kubernetes Platform 2 Min Video

ROBIN Explainer Video – Hyper-Converged Kubernetes Platform – ROBIN solves the fundamental challenges of running big data and databases in Kubernetes and enables the deployment of an agile and flexible infrastructure for your enterprise applications.

As the only purpose-built Kubernetes-based solution, ROBIN embeds entire application lifecycle management natively into the compute, storage, and network infrastructure stack for any application anywhere – on premises and in the public cloud.

ROBIN is the first implementation of hyper-converged Kubernetes in the market. Using ROBIN, users can do self-service deployment of big data, NoSQL databases, RDBMS, and AI/ML applications, share entire experiments among team members, quickly run what-if trials, scale resources including GPUs and IOPS, and migrate as well as recreate entire application environments across data centers and clouds.

ROBIN offers a self-service app-store experience that simplifies deployment and lifecycle management with 1-click functions that shorten DevOps and IT tasks from hours and weeks to minutes. It makes applications truly agnostic of infrastructure choices and enables them to share resources and data with predictable performance, leading to significant cost savings.

Big Data, Artificial Intelligence & Machine Learning EcoCast – Partha Seetala, CTO

ROBIN is a Software Platform for Automating Deployment, Scaling and Life Cycle Management of Enterprise Applications on Kubernetes


Robin Systems Videos

Big Data EcoCast – Partha Seetala, CTO

Big data and artificial intelligence/machine learning are technology trends for which we’re just scratching the surface of the long-term potential. In such environments, storage isn’t just about capacity, but about how to use that data in the most expedient way possible. Today, as organizations consider the potential of these technologies, they’re struggling to determine how to store, manage, and protect this data. Moreover, they’re identifying key use cases for their burgeoning datasets. Increasingly, organizations are collecting data to train artificial intelligence and machine learning models in order to bring these powerful capabilities into their operations, get ahead of the competition, and make the world a better place.

For example, PAIGE.AI, a spinout of Memorial Sloan Kettering Cancer Center (MSKCC), is using advanced technology to accelerate and optimize cancer research. The goal of PAIGE.AI is to develop and deliver a series of AI/ML modules that allow pathologists to improve the scalability of their work, enabling them to provide better care at lower cost. By analyzing petabytes of data from tens of thousands of digital slides of anonymized patient data, PAIGE.AI has developed deep learning algorithms, based on convolutional and recurrent neural networks and generative models, that are able to learn efficiently and help improve the accuracy and speed of cancer diagnosis.

This entire world brings with it new challenges and a whole new terminology that has to be learned. You need to figure out the ups, the downs, the ins, and the outs of designing a big data architecture, as well as identify and deploy the tools that will manage and consume this data.

In this Big Data, Artificial Intelligence & Machine Learning EcoCast, you will learn how big data, AI, and ML all come together and be exposed to solutions that can help you rein in the madness while also harnessing their potential power.

On This Big Data EcoCast Event, You’ll Discover

  • Learn about the critical challenges imposed by big data needs
  • Identify the use cases that drive decisions around when to choose which architecture
  • Discover how AI & ML critically intersect with big data and what you need to do to keep that intersection from becoming the scene of an accident
  • Understand how you can leverage next-generation infrastructure to accelerate AI/ML model development

On-premise and Multi-Cloud support for AWS, Microsoft Azure, SAP HANA, MS-SQL, IBM DB2 & Packaged Enterprise Applications

Application-aware compute, network, and storage layers decouple applications and infrastructure so that applications can be easily moved, scaled, cloned, and managed with 1-click lifecycle operations regardless of the infrastructure model (on-premises, cloud, hybrid cloud, multi-cloud), which can technically be anywhere.

Robin Explainer Video – ROBIN Hyper-converged Kubernetes Platform in Two Minutes

First Container Solution – Partha Seetala, CTO, Robin Systems | DataWorks Summit 2018 The CUBE Video

ROBIN Hyper-Converged Kubernetes Platform – the First Container Solution certified to run Hortonworks Data Platform (HDP)

On day two at DataWorks Summit 2018, Rebecca Knight and James Kobielus spoke with Partha Seetala, Chief Technology Officer (CTO) of Robin Systems, on theCUBE to discuss the first container solution certified to run Hortonworks Data Platform (HDP).

Tell us about Robin Systems

Robin Systems is a venture-backed company headquartered in San Jose, in Silicon Valley. The focus is on allowing applications such as big data, databases, NoSQL, and AI/ML to run within the Kubernetes platform. What we have built (the first container solution certified to run HDP) is a product that converges storage, networking, and application workflow management, along with Kubernetes, to create a one-click experience where users get a managed-services kind of feel when they’re deploying these applications. They can also do one-click lifecycle management on these apps. Our thesis has from the beginning been to look at it from the applications down and say, “Let the applications drive the underlying infrastructure to meet the user’s requirements,” instead of looking at the problem from the infrastructure up into the applications.

Is this the differentiating factor for Robin Systems?

Yes, it is, because most of the folks out there today are looking at it as if it’s a component-based play: they want to bring storage to Kubernetes or networking to Kubernetes, but the challenges are not really around storage and networking.

If you talk to the operations folks, they say, “You know what? Those are underlying problems, but my challenge is more along the lines of: when my CIO says the initiative is to make my applications mobile, management wants to go across different clouds. That’s my challenge.” The line-of-business user says, “I want a managed-services experience.” Yes, storage is the thing that you want to manage underneath, but I want to go and click and create my, let’s say, Oracle database or distributed log.

In terms of the developer experience here, from the application down, give us a sense for how Robin Systems’ tooling enables that degree of specification of the application logic that will then get containerized.

Absolutely. Like I said, we want applications to drive the infrastructure. What that means is that Robin is a software platform – the first container solution certified by Hortonworks to run HDP. We layer ourselves on top of the machines that we sit on, whether they are bare-metal machines on premises, VMs, or instances in Azure, Google Cloud, or AWS. Then we make the underlying compute, storage, and network resources almost invisible; we treat them as a pool of resources. Once you have this pool of resources, they can be attached to the applications that are being deployed inside containers. It’s a software platform installed on machines. Once it’s installed, the experience moves away from infrastructure into applications. You log in, you see a portal, and you have a lot of applications in that portal. We ship support for about 25 applications.

So are these templates that the developer can then customize to their specific requirements? Or no?

Yes, absolutely. We ship reference templates for a wide variety of the most popular big data, NoSQL, database, and AI/ML applications today. But again, as I said, it’s a reference implementation. Typically, customers take the reference recommendation and enhance it, or use it to onboard their custom apps, for example, or the apps that we don’t ship out of the box.

So it’s a very open, extensible platform, and the goal is that whatever the application might be, in fact we keep saying that if it runs somewhere else, it runs on Robin. So the idea here is that you can bring any app or database, and with the flip of a switch, you can make it one-click deploy, one-click manage, and one-click mobile across clouds.

You keep mentioning this one click and this idea of it being so easy, so convenient, so seamless. Is that what you’d say is the biggest concern of your customers, this ease and speed? Or what are some other things on their minds that you want to deliver?

So one click, of course, is the user experience part, but what is the real challenge? The real challenge is that there is a wide variety of tools being used by enterprises today; even in the data analytics pipeline, there is a lot across the data stores and the processing pipeline. Users don’t want to deal with setting it all up and keeping it up and running. They don’t want the management, but they want to get the job done. When you just want to get the job done, you really want to hide the underlying details of those platforms, and the best way to give that experience is to make it a single-click experience from the UI. So I keep calling it all one click, because that is the experience that hides the underlying complexity of these apps with the first container solution certified to run HDP.

Does your environment actually compile executable code based on that one-click experience? Or where do the compilation and containerization actually happen in your distributed architecture?

Alright, I think the simplest way to explain it is this: we work on all three big public clouds, whether it is Azure, AWS, or Google. Your entire application is itself containerized for deployment into these clouds. So the idea here is to simplify it significantly. You have Kubernetes today; it can run anywhere, on premises, in the public cloud, and so on. Kubernetes is a great platform for orchestrating containers, but it is largely inaccessible to a certain class of data-centric applications. Robin makes that possible.

But the Robin take is, just onboarding those applications on Kubernetes does not solve your CXO or your line of business user’s problems. You ought to manage the environment from an application point of view, not from a container management point of view. From an application point of view, management is a lot easier and that is where we create this one-click experience.

We’re here at DataWorks, and it’s the Hortonworks show. Tell us about your partnership with Hortonworks. We’ve heard the announcement of HDP 3.0 and containerization support; just give us a rough sense for how you align or partner with Hortonworks in this area.

Absolutely. It’s kind of interesting, because Hortonworks is a data management platform, if you think about it from that point of view. When we first engaged with them, some of our customers had been using Hortonworks on top of Robin, orchestrating Hortonworks and making it a lot easier to use. One of the requirements was, “Are you certified with Hortonworks?” And the challenge that Hortonworks had is that they had never certified a container-based deployment of Hortonworks before. They were actually very skeptical: “You guys are saying all these things. Can you actually containerize and run Hortonworks?”

So we worked with Hortonworks, and if you go to the Hortonworks website, you’ll see that we are the first in the entire industry to be certified as a container-based platform that can deploy and manage Hortonworks. They certified us by running a wide variety of tests, which they call the Q80 Test Suite, and when we got certified, the only other players in the market with that stamp of approval were Microsoft with Azure and EMC with Isilon.

So you’re in good company?

I think we are in great company.

Are you certified to work with HDP 3.0 or the prior version or both?

When we got certified, we were still on the 2.x version of Hortonworks; HDP 3.0 is a relatively newer version. But our plan is to continue working with Hortonworks to get certified as they release new versions, and also to help them, because HDP 3.0 also has some container-based orchestration and deployment. We want to help them provide the underlying infrastructure so that it becomes easier for users to spin up more containers.

The higher-level security and governance and all these things you’re describing have to sit above the Kubernetes layer. Hortonworks supports that in their data plane services portfolio. Does the Robin Systems solutions portfolio tap into any of that, or do you provide your own layer of security and metadata management and so forth?

We don’t want to take away the security model that the application itself provides, because the user might have set it up so that they are doing governance; it’s not just logins and access control and things like that, some governance is built in. We don’t want to change that. We want to keep the same experience and the same workflow that customers have, so we just integrate with whatever security the application has. We, of course, provide security in terms of isolating the different apps that are running on the Robin platform, but security and access into the application itself are left to the apps themselves. When I say apps, I’m talking about Hortonworks or any other databases.

Moving forward, as you think about ways you’re going to augment, enhance, and alter the Robin platform, what are some of the biggest trends driving your decision making? As we know, companies are living with a deluge of data; how are you helping them manage it better?

I think there are a few trends that we are closely watching. One is around cloud mobility. CIOs want their applications, along with their data, to be available where their end users are. It’s almost like a follow-the-sun model, where you might have generated the data in one cloud, and at a different time, in a different time zone, you’ll basically want to keep the app as well as the data moving. So we are following that very closely: how we can make the mobility of data and apps a lot easier in that world.

The other one is around the general AI/ML workflow. You have great toolkits like TensorFlow, Theano, or Caffe, but one of the challenges that people face is that they are buying very expensive hardware, let’s say an NVIDIA DGX box, which costs about $150,000 each. How do you keep these boxes busy so that you’re getting a good return on investment? That requires you to better manage the resources these boxes offer. We are also monitoring that space, looking at how we can take the Robin platform and enable better utilization and sharing of GPUs for running your AI/ML kinds of workloads.

We’ll be discussing these trends at the next DataWorks Summit, I’m sure, at some other time in the future.

Learn more about ROBIN Hyper-Converged Kubernetes Platform – the First Container Solution Certified to run Hortonworks Data Platform (HDP) for big data, NoSQL databases, and RDBMS applications.