Global Fortune 500 Financial Services Leader Gains Efficiency and Agility on Robin


With over 12 million customers and $125 billion in assets under management, this Fortune 500 financial services organization is America’s leading
homeowner and auto insurance company.

Offering a full range of financial products and services to its constituents, this company uses technology platforms and solutions to enable its customers to access services any way they like, including by telephone, Internet, mail, fax, any bank’s ATM, and their own mobile devices. To provide this level of access and flexibility, the company maintains an IT infrastructure that processes petabytes of data, and has moved its data center architecture from hardware-defined to software-defined in order to increase business agility.

This financial company processes billions of security events each day and leverages the Elasticsearch, Logstash, and Kibana (ELK) stack for event aggregation, monitoring, and visualization for cybersecurity threat detection. The company also operates an IBM Db2 data warehouse for business analytics and a Kafka cluster for stream processing.



Cloud-Native Financial Services Applications

Robin enables financial institutions to automate deployment, scaling, and lifecycle management of enterprise applications on Kubernetes. Robin simplifies the containerization of critical application pipelines, including fraud analytics, real-time risk detection, and deep learning, which are composed of multiple stateless and stateful applications.


  • Define and deploy application stacks or data pipelines as a bundle on Kubernetes, on-prem or in the cloud
  • Enable self-service provisioning and management capabilities for the entire stack
  • Accelerate and enhance Dev/Test collaboration with application-aware cloning
  • Monitor the health of infrastructure, containers, and entire application stacks
  • Dynamically scale up/scale out in minutes, without interrupting application operations
  • Consolidate multiple databases, such as Oracle RAC clusters, to reduce hardware and licensing costs
  • Migrate your customized and legacy application stacks to the cloud without refactoring
  • Protect your critical application stacks with application-aware snapshots and backups

Digital Transformation Demands Fast-Paced Innovation

Digital transformation requires IT services to be delivered in a fast, agile, and streamlined manner across the entire organization. Enterprises in the financial services industry must constantly innovate to attract and retain customers who demand a rich digital and mobile experience. It is also critical to analyze security threats across diverse systems and applications in real time, while meeting all compliance requirements and achieving continuous availability for critical applications.

The industry is looking to containerization and Kubernetes to achieve IT agility. However, several challenges significantly impact the ability of technology leaders to innovate:

Infrastructure silos – Owing to years of organic growth, the application infrastructure landscape is very diverse.

Managing legacy applications and modern cloud-native applications at the same time can be challenging. Traditional methods take weeks to provision legacy applications or to provide dev/test refreshes. With release cycles shrinking due to the DevOps culture and modern architecture, developers need much faster turnaround times for their application pipeline that often include legacy applications, as many modern applications depend on legacy applications.

High licensing and Infrastructure costs – Creating dedicated clusters for individual “tenants” (teams, workloads, applications, etc.) is required due to challenges with performance isolation. Each cluster is deployed for peak capacity, leading to significant licensing and hardware costs.

Infrastructure lock-in – Migrating customized applications to the cloud is not easy. Being locked into an infrastructure choice limits your ability to scale and experiment with new ideas.

Robin Platform Enables “As-a-Service” Experience

Robin is a software platform for automating deployment, scaling, and lifecycle management of enterprise applications on Kubernetes. Robin automates provisioning and day-2 operations so that you can deliver a “self-service” experience with 1-click deployment simplicity for developers, DBAs, and data scientists.


Robin Platform


Robin Platform Datasheet

Automate Enterprise Applications on Kubernetes

Extend Kubernetes for data-intensive applications such as Oracle, Cloudera, Elastic stack, RDBMS, NoSQL, and other stateful applications.

Robin Platform

Robin is a software platform for automating deployment, scaling, and lifecycle management of enterprise applications on Kubernetes. Robin provides a self-service App-Store experience and combines containerized storage, networking, compute (Kubernetes), and the application management layer into a single system. It helps enterprises increase productivity, lower CAPEX and OPEX, and achieve always-on automation, with technology solutions for big data, databases, indexing and search, and industry solutions for financial services and telco.

This software-only solution runs on-premises in your private data center or in public-cloud (AWS, Azure, GCP) environments and enables 1-click deployment of any application. Robin brings 1-click simplicity to lifecycle management operations such as snapshot, clone, patch, upgrade, backup, restore, scale, and QoS control for the entire application. Robin solves the fundamental challenges of running big data and databases in Kubernetes, enabling an agile, flexible Kubernetes-based infrastructure for enterprise applications.

Key Benefits

  • Increase Productivity
  • Lower Cost – CAPEX and OPEX
  • Gain Always-on Availability
  • Run data-heavy applications on Kubernetes

Robin Platform Stack Components

Application Management Layer – Manage Applications and configure Kubernetes, Storage & Networking with Application workflows.

Kubernetes – Run big data and databases in extended Kubernetes, eliminating limitations that restrict Kubernetes to micro-services applications.

Built-in Storage – Allocate storage while deploying an application or cluster, share storage among apps and users, get SLA guarantees when consolidating, support data locality, affinity, anti-affinity, and isolation constraints, and handle storage for applications that modify the root filesystem.

Built-in Networking – Set networking options while deploying apps and clusters in Kubernetes and preserve IP addresses during restarts.

Robin Platform Features and Benefits



Rapid Deployment – Self-service 1-click App-store experience.

Slash deployment and management times from weeks and hours to minutes. Deploy and manage data-heavy apps and services in Kubernetes.

Control QoS – Dynamically control QoS for every resource – CPU, memory, network, and storage.

Get complete visibility into the underlying infrastructure, set min and max IOPS, eliminate noisy-neighbor issues, and gain performance guarantees.

Rapid clones – Clone the entire application along with its data – thick, thin, or deferred.

Clone with no performance penalty, back up data with ease, and share data among users and applications – across dev, test, and prod – with no additional storage.

Application Snapshots – Take unlimited full application cluster snapshots, which include application configuration and data.

Restore or refresh a cluster to any point-in-time using snapshots. Roll back easily with 1-click to the last snapshot in case of data corruption.
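The snapshot-and-rollback pattern described above can be sketched generically. The toy Python class below is illustrative only (it is not the Robin API; all names are hypothetical), but it shows why capturing configuration and data together makes point-in-time restore a single operation:

```python
import copy

# Conceptual sketch: an application "snapshot" captures configuration and
# data together, so a restore rolls both back to the same point in time.
class AppState:
    def __init__(self, config, data):
        self.config = config
        self.data = data
        self._snapshots = []

    def snapshot(self):
        """Record config + data together; returns the snapshot id."""
        self._snapshots.append(
            (copy.deepcopy(self.config), copy.deepcopy(self.data))
        )
        return len(self._snapshots) - 1

    def restore(self, snap_id):
        """Roll both config and data back to the chosen point in time."""
        config, data = self._snapshots[snap_id]
        self.config = copy.deepcopy(config)
        self.data = copy.deepcopy(data)

app = AppState(config={"replicas": 3}, data={"rows": 100})
snap = app.snapshot()
app.data["rows"] = 0     # simulate data corruption
app.restore(snap)        # roll back to the last snapshot
print(app.data["rows"])  # 100
```

Because configuration and data are versioned as one unit, a restore never leaves the application with new data under an old configuration, or vice versa.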

Scale – Decouple compute and storage, and scale each independently.

Scale out – add nodes. Scale up – increase CPU, memory, and IOPS.

High Availability – No single point of failure – get reliable crossover and detect failures.

Get automatic app-aware data failover for complex distributed applications on bare metal – Robin is the only product to provide HA for apps that persist state inside Docker images.

Upgrade – Automated rolling upgrades of application containers, integrated with the CI/CD pipeline.

Safe-Upgrade technology guarantees that failed upgrades can be rolled back without disrupting the application.

Enterprise Data Apps-as-a-Service – Sample Customer Deployments

Fortune 500 Financial Services Leader

  • 11 billion security events ingested and analyzed in a day
  • DevOps simplicity for Elasticsearch, Logstash, Kibana, Kafka

Global Networking and Security Leader

  • 6 Petabytes under active management in a single Robin cluster
  • Agility, consolidation for Cloudera, Impala, Kafka, Druid

Global Technology Company – Travel Industry

  • 400 Oracle RAC databases managed by a single Robin cluster
  • Self-service environment for Oracle, Oracle RAC

Robin Platform Datasheet

Hyperconverged Kubernetes


Executive Summary – Hyperconverged Kubernetes White Paper

Kubernetes is the de-facto standard for container orchestration for microservices and applications. However, enterprise adoption of containers and Kubernetes for big data and databases is hindered by challenges such as the complexity of persistent storage, networking, and application lifecycle management. Kubernetes provides the agility and scale modern enterprises need, but it provides the building blocks for infrastructure, not a turnkey solution.

On the other hand, Hyper-converged Infrastructure (HCI) provides a turnkey solution by combining virtualized compute (hypervisor), storage, and networking in a single system. It eliminates the complexity of integrating infrastructure components by providing an out-of-the-box solution that runs enterprise applications.

We believe combining Kubernetes and the principles of HCI brings simplicity to Kubernetes and creates a turnkey solution for data-heavy workloads. Hyper-converged Kubernetes technology with built-in enterprise-grade container storage and flexible overlay networking extends Kubernetes’ multi-cloud portability to big data, databases, and AI/ML.

Introducing: Hyper-Converged Kubernetes

What is hyper-convergence? Hyper-converged Infrastructure is a software-defined IT framework that combines compute, storage, and networking in a single system. HCI virtualizes all components of the traditional hardware-defined IT infrastructure. Typically, HCI systems consist of a hypervisor for virtualized computing, a software-defined storage (SDS) component, and a software-defined networking (SDN) component.

Hyper-converged Infrastructure software runs on x86-based commodity hardware. It provides a complete environment for running enterprise applications, which means IT teams do not have to stitch together the various pieces needed to run the applications. All the required components are provided out of the box.

What is Kubernetes?

Kubernetes (also commonly referred to as K8s) is a container orchestration system that automates lifecycle operations such as deployment, scaling, and management for containerized applications. It was initially developed by Google and later open-sourced; it is now managed by the Cloud Native Computing Foundation (CNCF).

Kubernetes groups containers into logical units called “Pods”. A Pod is a collection of containers that belong together and should run on the same node. Kubernetes provides a Pod-centric management environment, orchestrating compute, storage, and networking resources for workloads defined as Pods. Kubernetes can be used as a platform for containers, microservices, and private clouds.
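As a concrete illustration of the Pod concept, a minimal Pod is described by a manifest with one container. The sketch below (the name `web` and image `nginx:1.25` are hypothetical examples, not from this document) builds such a manifest as a Python dict and serializes it to JSON, the form in which it could be submitted to the Kubernetes API:

```python
import json

# A minimal Kubernetes Pod manifest, expressed as a Python dict.
# The name ("web") and image ("nginx:1.25") are illustrative only.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "web", "labels": {"app": "web"}},
    "spec": {
        "containers": [
            {
                "name": "web",
                "image": "nginx:1.25",
                "ports": [{"containerPort": 80}],
            }
        ]
    },
}

# Serialize to JSON, the wire format accepted by the Kubernetes API server.
manifest = json.dumps(pod, indent=2)
print(manifest)
```

Every container listed under `spec.containers` is scheduled onto the same node and shares the Pod's network identity, which is what "belong together" means in practice.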

Kubernetes for Stateful Applications – Running Databases, Big Data, and AI/ML Workloads in the Enterprise


Enterprise cloud-native requirements demand a robust platform that can support stateless and stateful workloads with the necessary performance and SLA guarantees. The Robin Hyper-Converged Kubernetes platform is built from the ground up to deploy enterprise applications. With an App-Store model for deploying stateful applications, Robin gives DevOps teams agility with enterprise-grade performance.

Introduction

In today’s competitive market, enterprise IT faces the unenviable task of supporting innovation while maintaining support for a variety of complex applications. Whether for new applications with stateless architectures or existing stateful, data-intensive applications, IT is expected to be a core part of the innovation team, empowering developers with the right abstractions and enabling an agile workflow from developer laptop to production. To meet the demands of the modern enterprise, IT has embraced cloud-native as the core pillar of its modernization strategy.

Kubernetes is the standard for container orchestration in the cloud-native ecosystem. Developed by Google and now part of the Cloud Native Computing Foundation (CNCF), Kubernetes is an open-source container orchestration engine used for the deployment, scaling, and management of containers. Growing market demand is reinforcing the platform as the standard for container orchestration, and a vibrant ecosystem has emerged around Kubernetes, increasing the momentum of the project.

In the past two years, more organizations have been using Kubernetes in production. According to a recent CNCF survey, 58% of respondents are using Kubernetes in production, and this number will increase in the coming years as more enterprises go cloud-native. The trend is further highlighted by a report released by Dice, which claims Kubernetes was the top job search term in 2018 and predicts the trend will grow further in 2019. The advantage of Kubernetes lies in its low operational overhead, easier DevOps, and a better abstraction for developers to deploy their applications. Kubernetes supports both on-premises and cloud-based deployments, and its support for hybrid/multi-cloud deployments makes it attractive for enterprises.

Get the White Paper – Kubernetes for Stateful Workloads

Big Data-as-a-Service with Kubernetes – Solution Brief


Automate your Big Data infrastructure using cloud-native architecture and Robin big data-as-a-service. Improve the agility and efficiency of your Data Scientists, Data Engineers, and Developers.

Highlights – Big Data-as-a-Service with Robin

  • Decouple compute and storage and scale independently to achieve public cloud flexibility
  • Migrate big data clusters to public cloud or leverage public cloud to off-load compute
  • Provision/Decommission compute-only clusters within minutes for ephemeral workloads
  • Provide self-service experience to improve developer and data scientist productivity
  • Eliminate planning delays, start small and dynamically scale-up/out nodes to meet demand
  • Consolidate multiple workloads on shared infrastructure to reduce hardware footprint
  • Trade resources among big data clusters to manage surges and periodic compute requirements

Top 5 Challenges for Big Data Management

Big data has transformed how we store and process data. However, the following challenges keep organizations from unlocking the full potential of big data and maximizing ROI:

»Provisioning agility for ephemeral workloads: Certain workloads, such as ad-hoc analysis, require significant compute resources for a short period of time. Developers need the ability to quickly provision and decommission compute-only clusters for such workloads.

»Separation of compute and storage: Big data needs converged nodes with both compute and storage for data locality. However, compute is significantly more expensive than storage, and with ever-increasing data volumes, infrastructure costs are rising.

»Dynamic scaling to meet sudden demands: If critical services such as the NameNode run out of resources, it is not easy to scale-up nodes on the fly to add more memory or CPU.

»Cluster sprawl and hardware underutilization: Due to lack of reliable multi-tenancy and performance isolation, Hadoop Admins often deploy separate clusters for critical workloads, resulting in cluster sprawl and poor utilization of server resources.

»Cloud migration: There is no easy way to migrate big data clusters to public clouds, or leverage public cloud compute and storage as needed for on-prem clusters.
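To make the compute-versus-storage cost trade-off above concrete, here is an illustrative Python sketch. All prices, capacities, and node counts are hypothetical assumptions for the sake of the example, not figures from this document:

```python
# Illustrative cost model (all prices hypothetical) comparing a converged
# cluster, where every node added for capacity carries full compute, with a
# decoupled design that adds cheap storage-only nodes as data grows.

COMPUTE_NODE_COST = 10_000   # assumed: CPU+RAM-heavy node
STORAGE_NODE_COST = 3_000    # assumed: disk-heavy node with minimal compute
TB_PER_NODE = 50             # assumed usable capacity per node

def converged_cost(data_tb: int) -> int:
    """Every node holding data must be a full compute node."""
    nodes = -(-data_tb // TB_PER_NODE)  # ceiling division
    return nodes * COMPUTE_NODE_COST

def decoupled_cost(data_tb: int, compute_nodes: int) -> int:
    """A fixed compute tier plus storage-only nodes sized to the data."""
    storage_nodes = -(-data_tb // TB_PER_NODE)
    return compute_nodes * COMPUTE_NODE_COST + storage_nodes * STORAGE_NODE_COST

# Growing from 500 TB to 2000 TB with a steady 10-node compute tier:
for tb in (500, 1000, 2000):
    print(tb, converged_cost(tb), decoupled_cost(tb, compute_nodes=10))
```

Under these assumed numbers, the decoupled design costs more at small scale but wins as data volume grows (at 2000 TB: 400,000 converged versus 220,000 decoupled), which is the rising-data-volume dynamic the challenge above describes.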

Robin Hyper-converged Kubernetes Platform

Robin platform extends Kubernetes with built-in storage, networking, and application management to deliver a production-ready solution for big data. Robin automates the provisioning and management of big data clusters so that you can deliver an “as-a-service” experience with 1-click simplicity to data engineers, data scientists, and developers.

Get big data-as-a-service with Robin

Solution Benefits and Business Impact

Robin brings together the simplicity of hyper-convergence and the agility of Kubernetes for big data-as-a-service.

Deliver Insights Faster

Self-service experience

Robin provides self-service provisioning and management capabilities to developers, data engineers, and data scientists, significantly improving their productivity. It saves valuable time at each stage of the application lifecycle.

Provision clusters in minutes

Robin has automated the end-to-end cluster provisioning process for Hortonworks, Cloudera, Spark, Kafka, and custom stacks. The entire provisioning process takes only a few minutes.

Provision compute-only clusters

You can create and decommission compute-only clusters for Hortonworks, Cloudera, and your custom big data stacks. Perfect for ephemeral workloads, these clusters simply point to an existing data lake cluster in your organization, do the required processing, and store the data in the target systems.

Eliminate “right-size” planning delays

DevOps and IT teams can start with small deployments and add more resources as applications grow. Robin runs on commodity hardware, making it easy to scale out by adding commodity servers to existing deployments.

Scale on-demand during surges

No need to create IT tickets and wait for days to scale up NameNodes or to add more DataNodes. Cut the response time to a few minutes with 1-click scale-up and scale-out.

Reduce Costs with Robin Big Data-as-a-Service

Decouple compute and storage

Enjoy cost efficiencies by decoupling compute (CPU and memory) from storage. Store massive data volumes on inexpensive storage-only hardware, and use compute efficiently to process the data when needed. Simply turn on data locality with 1-click when you really need it.

Improve hardware utilization

Robin provides multi-tenancy and role-based access controls (RBAC) to consolidate multiple big data and database workloads without compromising SLAs and QoS, increasing hardware utilization.

Simplify lifecycle operations

Native integration between Kubernetes, storage, network, and the application management layer enables 1-click operations to scale, snapshot, clone, back up, and migrate applications, reducing the administrative cost of your big data infrastructure.

Trade resources among clusters

Reduce your hardware cost by sharing compute between clusters. If a cluster runs the majority of its batch jobs at night, it can borrow resources from an adjacent application cluster with daytime peaks, and vice versa.

Future-Proof Your Enterprise

Migrate or extend to public cloud

Robin provides 1-click lift-and-shift for big data clusters. Simply clone your entire cluster and migrate it to the public cloud of your choice. You can also scale out your on-prem clusters to the public cloud to create a hybrid cloud environment.

Standardize on Kubernetes

Modernize your data infrastructure using cloud-native technologies such as Kubernetes and Docker. Robin solves the storage and network persistence challenges in Kubernetes, enabling its use for the provisioning, management, high availability, and fault tolerance of mission-critical Hadoop deployments.

No vendor lock-in

Kubernetes-based architecture gives you complete control of your infrastructure. With the freedom to move your workloads across private and public clouds, you avoid vendor lock-in.

Get Robin Solution Brief – Big Data-as-a-Service with Kubernetes

Big Data, Artificial Intelligence & Machine Learning EcoCast – Partha Seetala, CTO

Robin is a Software Platform for Automating Deployment, Scaling and Life Cycle Management of Enterprise Applications on Kubernetes

Robin Systems Videos

Big Data EcoCast – Partha Seetala, CTO

Big data and artificial intelligence/machine learning are technology trends for which we’re just scratching the surface of the long-term potential. In such environments, storage isn’t just about capacity, but about how to use that data in the most expedient way possible. Today, as organizations consider the potential of these technologies, they’re struggling to determine how to store, manage, and protect this data, while also identifying key use cases for their burgeoning datasets. Increasingly, organizations are collecting data to train artificial intelligence and machine learning models in order to bring these powerful capabilities into their operations, get ahead of the competition, and make the world a better place.

For example, PAIGE.AI, a spinout of Memorial Sloan Kettering Cancer Center (MSKCC), is using advanced technology to accelerate and optimize cancer research. The goal of PAIGE.AI is to develop and deliver a series of AI/ML modules that allow pathologists to improve the scalability of their work, enabling them to provide better care at lower cost. By analyzing petabytes of data from tens of thousands of digital slides of anonymized patient data, PAIGE.AI has developed deep learning algorithms, based on convolutional and recurrent neural networks and generative models, that are able to learn efficiently and help improve the accuracy and speed of cancer diagnosis.

This entire world brings with it new challenges and whole new terminology that has to be learned. You need to figure out the ups, the downs, the ins, and the outs of designing a big data architecture as well as help to identify and deploy the tools that will manage and consume this data.

In this Big Data, Artificial Intelligence & Machine Learning EcoCast you will learn about how big data, AI, and ML all come together and will be exposed to solutions that can help you rein in the madness while also harnessing their potential power.

On This Big Data EcoCast Event, You’ll Discover

  • Learn about the critical challenges imposed by big data needs
  • Identify the use cases that drive decisions around when to choose which architecture
  • Discover how AI & ML critically intersect with big data and what you need to do to keep that intersection from becoming the scene of an accident
  • Understand how you can leverage next-generation infrastructure to accelerate AI/ML model development

On-premise and Multi-Cloud support for AWS, Microsoft Azure, SAP HANA, MS-SQL, IBM DB2 & Packaged Enterprise Applications

Application-aware compute, network, and storage layers decouple applications from infrastructure, so that applications can be easily moved, scaled, cloned, and managed with 1-click lifecycle operations regardless of the infrastructure model – on-premises, cloud, hybrid cloud, or multi-cloud.

Robin Explainer Video – Robin Hyper-converged Kubernetes Platform in Two Minutes

Robin Systems Videos

First Container Solution – Partha Seetala, CTO, Robin Systems | DataWorks Summit 2018 – theCUBE Video

Robin Hyper-Converged Kubernetes Platform announced as the First and Only Container Solution certified to run Hortonworks Data Platform (HDP)

Robin Hyper-Converged Kubernetes Platform – the First Container Solution certified to run Hortonworks Data Platform (HDP)


On day two at DataWorks Summit 2018, Rebecca Knight and James Kobielus spoke with Partha Seetala, Chief Technology Officer (CTO) of Robin Systems, on theCUBE to discuss the first container solution certified to run the Hortonworks Data Platform (HDP).

Tell us about Robin Systems

Robin Systems, a venture-backed company, is headquartered in San Jose, in Silicon Valley. Our focus is on allowing applications such as big data, databases, NoSQL, and AI/ML to run within the Kubernetes platform. What we have built (the first container solution certified to run HDP) is a product that converges storage, networking, and application workflow management along with Kubernetes to create a one-click experience, where users get a managed-services kind of feel when they’re deploying these applications. They can also do one-click lifecycle management on these apps. Our thesis has been to look at it from the applications down and say, “Let the applications drive the underlying infrastructure to meet the user’s requirements,” instead of looking at the problem from the infrastructure up into the application.

Is this the differentiating factor for Robin Systems?

Yes, it is, because most of the folks out there today are looking at it as a component-based play – they want to bring storage to Kubernetes or networking to Kubernetes – but the challenges are not really around storage and networking.

If you talk to the operations folks, they say, “You know what? Those are underlying problems, but my challenge is more along the lines of: my CIO says the initiative is to make my applications mobile. Management wants to go across different clouds. That’s my challenge.” The line-of-business user says, “I want a managed-service experience.” Yes, storage is the thing you want to manage underneath, but I want to go and click and create, let’s say, an Oracle database.

In terms of the developer experience here, from the application down, give us a sense for how Robin Systems tooling your product in a certain way enables that degree of specification of the application logic that will then get containerized within?

Absolutely. Like I said, we want applications to drive the infrastructure. What that means is that Robin is a software platform – the first container solution certified by Hortonworks to run HDP. We layer ourselves on top of the machines we sit on – whether bare-metal machines on premises, VMs, or Azure, Google Cloud, and AWS. Then we make the underlying compute, storage, and network resources almost invisible – we treat them as a pool of resources. Once you have this pool of resources, they can be attached to the applications being deployed inside containers. It’s a software plane installed on machines. Once it’s installed, the experience moves away from infrastructure into applications. You log in, you see a portal, and you have a lot of applications in that portal. We ship support for about 25 applications.

So are these templates that the developer can then customize to their specific requirements? Or no?

Yes, absolutely. We ship reference templates for a wide variety of the most popular big data, NoSQL, database, and AI/ML applications today. But again, as I said, it’s a reference implementation. Typically, customers take the reference recommendation and enhance it, or use it to onboard their custom apps – for example, apps that we don’t ship out of the box.

So it’s a very open, extensible platform – but the goal is that whatever the application might be – in fact, we keep saying that if it runs somewhere else, it runs on Robin – you can bring any app or database, and with the flip of a switch, make it 1-click deploy, 1-click manage, and 1-click mobile across clouds.

You keep mentioning one click and this idea of it being so easy, so convenient, so seamless. Is that what you would say is the biggest concern of your customers – this ease and speed? Or what are some other things on their minds that you want to deliver?

So one click, of course, is the user experience part – but what is the real challenge? There is a wide variety of tools being used by enterprises today. Even in the data analytics pipeline, there is a lot across the data store and processing pipeline. Users don’t want to deal with setting it up and keeping it up and running. They don’t want the management, but they want to get the job done. And when you just want to get the job done, you really want to hide the underlying details of those platforms, and the best way to give that experience is to make it a single-click experience from the UI. So I keep calling it one click, because that is the experience that hides the underlying complexity of these apps with the first container solution certified to run HDP.

Does your environment actually compile executable code based on that one click experience? Or where do the compilation and containerization actually happen in your distributed architecture?

Alright, I think the simplest way to explain it is this: we work on all three big public clouds – Azure, AWS, and Google. The entire application is containerized for deployment into these clouds. The idea is to simplify things significantly. You have Kubernetes today; it can run anywhere – on premises, in the public cloud, and so on. Kubernetes is a great platform for orchestrating containers, but it is largely inaccessible to a certain class of data-centric applications. Robin makes that possible.

But the Robin take is that just onboarding those applications onto Kubernetes does not solve your CXO’s or your line-of-business user’s problems. You ought to manage the environment from an application point of view, not from a container-management point of view. From an application point of view, management is a lot easier, and that is where we create this one-click experience.

Give us a sense – we’re here at DataWorks, and it’s the Hortonworks show. Discuss your partnership with Hortonworks; we’ve heard the announcement of HDP 3.0 and containerization support. Give us a rough sense of how you align or partner with Hortonworks in this area.

Absolutely. It’s kind of interesting, because Hortonworks is a data management platform, if you think about it from that point of view. When we first engaged with them – some of our customers had been using Hortonworks on top of Robin, orchestrating Hortonworks and making it a lot easier to use – one of the requirements was, “Are you certified with Hortonworks?” And the challenge Hortonworks had is that they had never certified a container-based deployment of Hortonworks before. They were actually very skeptical: “You guys are saying all these things. Can you actually containerize and run Hortonworks?”

So we worked with Hortonworks, and if you go to the Hortonworks website, you’ll see that we are the first in the entire industry to be certified as a container-based play that can deploy and manage Hortonworks. They certified us by running a wide variety of tests, which they call the Q80 Test Suite, and when we got certified, the only other players in the market with that stamp of approval were Microsoft with Azure and EMC with Isilon.

So you’re in good company?

I think we are in great company.

Are you certified to work with HDP 3.0 or the prior version or both?

When we got certified, we were still on the 2.x version of Hortonworks; HDP 3.0 is a relatively newer version. But our plan is to continue working with Hortonworks to get certified as they release new versions, and also to help them, because HDP 3.0 includes some container-based orchestration and deployment. We want to provide the underlying infrastructure so that it becomes easier for users to spin up more containers.

The higher-level security and governance and all these things you’re describing have to sit above the Kubernetes layer. Hortonworks supports them in its data plane services portfolio. Does the Robin Systems solutions portfolio tap into any of that, or do you provide your own layer of security and metadata management?

We don’t want to take away the security model that the application itself provides, because the user might have set it up so that they are doing governance – it’s not just logins and access control and things like this. Some governance is built in, and we don’t want to change that. We want to keep the same experience and the same workflow that customers have, so we just integrate with whatever security the application has. We do, of course, provide security in terms of isolating the different apps that run on the Robin platform, but security and access into the application itself is left to the apps themselves. When I say apps, I’m talking about Hortonworks or any other databases.

Moving forward, as you think about ways you’re going to augment, enhance, and alter the Robin platform, what are some of the biggest trends driving your decision making? As we know, companies are living with a deluge of data – how are you helping them manage it better?

I think there are a few trends that we are closely watching. One is around cloud mobility. CIOs want their applications, along with their data, to be available where their end users are. It’s almost like a follow-the-sun model: you might have generated the data in one cloud, and at a different time, in a different time zone, you want to keep the app as well as the data moving. So we are following that very closely, looking at how we can make the mobility of data and apps much easier in that world.

The other one is around the general AI/ML workflow. You have great toolkits like TensorFlow, Theano, or Caffe, but one of the challenges people face is that they are buying very expensive hardware – say, an NVIDIA DGX box, which costs about $150,000 each. How do you keep these boxes busy so that you get a good return on investment? That requires you to better manage the resources these boxes offer. We are watching that space as well, looking at how we can use the Robin platform to enable better utilization and sharing of GPUs for running AI/ML workloads.
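To illustrate the kind of GPU scheduling this involves in generic Kubernetes terms (a sketch, not Robin’s own configuration – the pod name, image, and entry point are hypothetical), the NVIDIA device plugin exposes GPUs as a schedulable `nvidia.com/gpu` resource that a pod can request:

```yaml
# Hypothetical pod spec: requests one GPU via the NVIDIA device plugin,
# so the scheduler places the job on a node with a free device.
apiVersion: v1
kind: Pod
metadata:
  name: tf-training-job                       # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: trainer
    image: tensorflow/tensorflow:latest-gpu   # any GPU-enabled image
    command: ["python", "train.py"]           # hypothetical entry point
    resources:
      limits:
        nvidia.com/gpu: 1                     # whole-GPU granularity
```

Note that the stock device plugin allocates whole GPUs; finer-grained sharing of the sort discussed above requires additional mechanisms on top of this.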

We’ll be discussing these trends at the next DataWorks Summit, I’m sure, at some other time in the future.

Learn more about Robin Hyper-Converged Kubernetes Platform – the First Container Solution Certified to run Hortonworks Data Platform (HDP) for big data, nosql databases and RDBMS applications.

Hortonworks Data Platform Optimized for Docker Containers – Get Started Today


Robin Hortonworks Webinar

HDP on Robin Hyper-Converged Kubernetes Platform

Although the Docker revolution has made containers mainstream, containerizing big data is challenging because many containerization platforms do not support stateful applications. With the first and only out-of-the-box container-based solution certified by Hortonworks to run HDP, Robin Systems helps you build an Application-Defined Infrastructure.
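The stateful-storage gap can be seen in plain Kubernetes terms. As a generic sketch (not Robin’s implementation – the claim name and size are illustrative), a PersistentVolumeClaim gives a containerized data service storage that survives pod restarts and rescheduling:

```yaml
# Hypothetical claim: a database or HDFS DataNode pod mounts this
# volume, so its data outlives any individual container instance.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hdp-datanode-data     # illustrative name
spec:
  accessModes:
  - ReadWriteOnce             # mounted read-write by a single node
  resources:
    requests:
      storage: 100Gi
```

Plain Docker containers lose their writable layer when they are removed; claims like this are what make stateful workloads such as HDP practical on a container platform.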

Containerizing big data brings many benefits:

  1. Improved utilization and reduced licensing costs with shared hardware resources.
  2. Decreased administration costs and reduced time-to-market for big data apps with simplified operations.

Join this Robin Hortonworks Webinar to learn about:

  1. App-store experience; 1-click deployment of HDP: Deploying HDP is now as easy as installing an app from the App Store. Robin enables self-service big data deployment with 1-click cluster provisioning to deploy complex distributed applications in minutes.
  2. Doesn’t get any simpler: Scaling up and scaling out is now as easy as adjusting the brightness on your phone – Use sliders to configure compute, network, and storage layers.
  3. Meet critical SLAs: See how Robin’s multi-tenant architecture enables IT teams to meet the most demanding SLAs and handle performance isolation between HDP services even in a shared infrastructure environment while letting development and scientific user teams enjoy the simplicity of an app-store experience.

Ali Bajwa, Principal Partner Solutions Engineer, Hortonworks

Ali Bajwa is a seasoned engineer with extensive experience architecting and developing complex Big Data, CRM, and mobile software infrastructure projects. He has delivered customer success by leading architectural workshops and proof-of-concept engagements with key customers. He has broad development experience across Hadoop, Web, Mobile, and Desktop applications.

Ankur Desai, Director of Products, Robin Systems

Ankur Desai is a Director of Products at Robin Systems. He brings over 12 years of experience in software development, product management, and product marketing for enterprise software. Ankur holds an MBA from Dartmouth College, and a Bachelor of Engineering in Information Technology from University of Mumbai.


Infographic: Building Stateful Cloud Applications With Containers


Tips From Top Thinkers

Building Stateful Cloud Applications With Containers

The continued expansion of the cloud, growing end-user application performance demands, and an explosion in database needs are all stacking up fast against enterprise IT teams. When it comes to building enterprise database and big data applications, many are finding that container technology solves for at least a few of these problems. Here are stats and tips from top thinkers on how to best use containers when building stateful cloud applications.

Persistent Storage is a Top Challenge

26% of IT professionals cited “persistent storage” as a top challenge when it comes to leveraging containers.

Streamline Until It Hurts
“Some of the best writers have said they refine their work by cutting till it hurts. Containers are the same way.”
Eric Vanderburg, Vice President, Cybersecurity | TCDI

Isolate Containers & Hosts
“Maintaining isolation between the container and host system by separating the file systems is vital to management of the stateful application.”
Craig Brown, PhD, Senior Big Data Architect & Data Science Consultant

Select an Intelligent Orchestrator
“An intelligent orchestrator along with software-defined storage and software-defined networking is very essential for running a cloud-based application.”
Deba Chatterjee, Senior Engineering Program Manager | Apple

A Majority of Enterprises are Investing in Containers

69% of IT pros reported their companies are investing in containers.

Validate All States
“What they all (containerized stateful apps) have in common is the requirement to reliably validate all possible states and state transitions when changes are made to the application.”
Marc Hornbeek, Principal Consultant, DevOps | Trace3

Ensure You Can Monitor All Containers
“Containerised applications are addictive. They can be created, tested and deployed very quickly when compared to traditional VMs. The infrastructure to begin monitoring a potentially vast and varying number of new containers is essential.”
Stephen Thair, Co-Founder | DevOpsGuys

Offset Workloads with Containers
“Stateful applications often reside in 1 or 2 geographical locations and take heavy loads … and at different times during peak and off-peak periods. Understanding these variables will enable an operations team to determine how to best design the use of container applications.”
Steve Brown, Director, DevOps Solutions N.A. | Lenovo

Top Container Orchestrators Now More Popular Than DevOps Tools

When choosing a platform, 35% felt Docker was the best fit for them among all DevOps tools.

Get Infrastructure Pros Excited
“A lot of people focus too much on the fact that ‘those application guys’ are coming to mess with our infrastructure, instead of thinking that maybe we can elevate our own jobs and start working more closely with applications.”
Stephen Foskett, Proprietor | Foskett Services

Follow Microservices Design Principles
“One of the fundamental aspects of containers is moving to immutable application infrastructure, which means that you cannot store state and application in the same container.”
JP Morgenthal, CTO Application Ser

Don’t Use Containers for Data Storage
“When dealing with stateful applications, precautions need to be taken to ensure that you are not compromising or losing data.”
Sylvain Kalache, Co-Founder | Holberton School

Looking for more advice on building your stateful cloud application with containers? Download our full eBook today for more exclusive advice from top cloud, DevOps, and container technology pioneers.