Big Data-as-a-Service with Kubernetes – Solution Brief

Automate your Big Data infrastructure using cloud-native architecture and Robin big data-as-a-service. Improve the agility and efficiency of your Data Scientists, Data Engineers, and Developers.

Highlights – Big Data-as-a-Service with Robin

  • Decouple compute and storage and scale independently to achieve public cloud flexibility
  • Migrate big data clusters to public cloud or leverage public cloud to off-load compute
  • Provision/Decommission compute-only clusters within minutes for ephemeral workloads
  • Provide self-service experience to improve developer and data scientist productivity
  • Eliminate planning delays, start small and dynamically scale-up/out nodes to meet demand
  • Consolidate multiple workloads on shared infrastructure to reduce hardware footprint
  • Trade resources among big data clusters to manage surges & periodic compute requirements

Top 5 Challenges for Big Data Management

Big data has transformed how we store and process data. However, the following challenges keep organizations from unlocking the full potential of big data and maximizing ROI:

»Provisioning agility for ephemeral workloads: Certain workloads, such as ad-hoc analysis, require significant compute resources for a short period of time. Developers need the ability to quickly provision and decommission compute-only clusters for such workloads.

»Separation of compute and storage: Big data needs converged nodes with both compute and storage for data locality. However, compute is significantly more expensive than storage, and with ever-increasing data volumes, infrastructure costs are rising.

»Dynamic scaling to meet sudden demands: If critical services such as the NameNode run out of resources, it is not easy to scale-up nodes on the fly to add more memory or CPU.

»Cluster sprawl and hardware underutilization: Due to lack of reliable multi-tenancy and performance isolation, Hadoop Admins often deploy separate clusters for critical workloads, resulting in cluster sprawl and poor utilization of server resources.

»Cloud migration: There is no easy way to migrate big data clusters to public clouds, or leverage public cloud compute and storage as needed for on-prem clusters.

Robin Hyper-converged Kubernetes Platform

Robin platform extends Kubernetes with built-in storage, networking, and application management to deliver a production-ready solution for big data. Robin automates the provisioning and management of big data clusters so that you can deliver an “as-a-service” experience with 1-click simplicity to data engineers, data scientists, and developers.

Get big data-as-a-service with Robin

Solution Benefits and Business Impact

Robin brings together the simplicity of hyper-convergence and the agility of Kubernetes for big data-as-a-service.

Deliver Insights Faster

Self-service experience

Robin provides self-service provisioning and management capabilities to developers, data engineers, and data scientists, significantly improving their productivity. It saves valuable time at each stage of the application lifecycle.

Provision clusters in minutes

Robin has automated the end-to-end cluster provisioning process for Hortonworks, Cloudera, Spark, Kafka, and custom stacks. The entire provisioning process takes only a few minutes.

Provision compute-only clusters

You can create and decommission compute-only clusters for Hortonworks, Cloudera, and your custom big data stacks. Perfect for ephemeral workloads, these clusters simply point to an existing data lake cluster in your organization, do the required processing, and store the data in the target systems.
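
The lifecycle of such an ephemeral cluster can be sketched as follows. This is an illustrative Python model, not Robin's API; the class, its fields, and the HDFS URI are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class ComputeOnlyCluster:
    """Hypothetical model of an ephemeral compute-only cluster.

    The cluster holds no data of its own; it only points at an
    existing data lake (here, an example HDFS URI) for reads/writes.
    """
    name: str
    data_lake_uri: str            # the organization's existing data lake
    nodes: int = 4
    state: str = "provisioned"

    def run_job(self, input_path: str, output_path: str) -> str:
        # A real system would submit a Spark/MapReduce job here;
        # we just record where data is read from and written to.
        assert self.state == "provisioned"
        return (f"{self.name}: read {self.data_lake_uri}{input_path} "
                f"-> write {self.data_lake_uri}{output_path}")

    def decommission(self) -> None:
        # Releasing compute leaves the data lake untouched.
        self.state = "decommissioned"

cluster = ComputeOnlyCluster("adhoc-analysis", "hdfs://lake-nn:8020")
plan = cluster.run_job("/raw/events", "/curated/daily")
cluster.decommission()
```

The key property is that `decommission()` only releases compute; the data lake referenced by `data_lake_uri` is never touched.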

Eliminate “right-size” planning delays

DevOps and IT teams can start with small deployments, and as applications grow, they can add more resources. Robin runs on commodity hardware, making it easy to scale-out by adding commodity servers to existing deployments.

Scale on-demand during surges

No need to create IT tickets and wait for days to scale up NameNodes or to add more DataNodes. Cut the response time to a few minutes with 1-click scale-up and scale-out.
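
The decision behind such a 1-click scale-up can be illustrated with a toy threshold policy. This is hypothetical, not Robin's actual algorithm; the function name, thresholds, and step size are invented for illustration:

```python
def scale_up_decision(heap_used_pct, cpu_used_pct, step_gb=8, threshold=0.85):
    """Toy policy: if a critical service such as the NameNode crosses a
    memory or CPU pressure threshold, grow the node by a fixed step."""
    if heap_used_pct >= threshold or cpu_used_pct >= threshold:
        return {"scale_up": True, "add_heap_gb": step_gb}
    return {"scale_up": False, "add_heap_gb": 0}

# A NameNode at 92% heap triggers a scale-up; a healthy one does not.
assert scale_up_decision(0.92, 0.40) == {"scale_up": True, "add_heap_gb": 8}
assert scale_up_decision(0.50, 0.40) == {"scale_up": False, "add_heap_gb": 0}
```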

Reduce Costs with Robin Big Data-as-a-Service

Decouple compute and storage

Enjoy cost efficiencies by decoupling compute (CPU and memory) from storage. Store massive data volumes on inexpensive storage-only hardware, and use compute efficiently to process the data when needed. Simply turn on data locality with 1-click when you really need it.

Improve hardware utilization

Robin provides multi-tenancy and role-based access controls (RBAC) to consolidate multiple big data and database workloads without compromising SLAs and QoS, increasing hardware utilization.

Simplify lifecycle operations

Native integration between Kubernetes and the storage, network, and application management layers enables 1-click operations to scale, snapshot, clone, backup, and migrate applications, reducing the administrative cost of your big data infrastructure.

Trade resources among clusters

Reduce your hardware cost by sharing compute between clusters. If a cluster runs the majority of its batch jobs during the night, it can borrow resources from an adjacent application cluster with daytime peaks, and vice versa.
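
A minimal sketch of such a time-windowed lending policy, assuming a fixed safety reserve (all names, windows, and numbers here are hypothetical):

```python
def borrowable_cores(lender_peak_hours, hour, total_cores, reserve=0.2):
    """Cores a cluster can lend to a neighbor at a given hour.

    Outside its own peak window the lender offers everything above a
    safety reserve; inside the window it lends nothing. (Toy policy,
    not Robin's actual resource-trading mechanism.)
    """
    if hour in lender_peak_hours:
        return 0
    return int(total_cores * (1 - reserve))

# A day-peaking app cluster lends to a night-time batch cluster:
day_peaks = set(range(8, 18))                            # busy 08:00-17:59
night_gain = borrowable_cores(day_peaks, hour=2, total_cores=100)
day_gain = borrowable_cores(day_peaks, hour=10, total_cores=100)
```

At 02:00 the batch cluster can borrow 80 of the app cluster's 100 cores; at 10:00, in the middle of the app cluster's peak, it gets none.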

Future-Proof Your Enterprise

Migrate or extend to public cloud

Robin provides 1-click lift-and-shift for big data clusters. Simply clone your entire cluster and migrate to the public cloud of your choice. You can also scale out your on-prem clusters to the public cloud to create a hybrid cloud environment.

Standardize on Kubernetes

Modernize your data infrastructure using cloud-native technologies such as Kubernetes and Docker. Robin solves the storage and network persistency challenges in Kubernetes to enable its use in the provisioning, management, high availability and fault tolerance of mission-critical Hadoop deployments.

No vendor lock-in

Kubernetes-based architecture gives you complete control of your infrastructure. With the freedom to move your workloads across private and public clouds, you avoid vendor lock-in.

Robin Systems Unveils Hyper-converged Kubernetes Platform for Big Data, Databases and AI/ML Applications 

Using its unique hyper-converged Kubernetes technology, with built-in enterprise-grade container storage and flexible overlay networking, Robin eliminates these challenges and extends Kubernetes multi-cloud portability to big data, databases, and AI/ML.

Robin Explainer Video – Hyper-Converged Kubernetes Platform

Robin solves the fundamental challenges of running big data and databases in Kubernetes, and enables the deployment of an agile and flexible infrastructure for your enterprise applications.

As the only purpose-built Kubernetes-based solution, Robin embeds entire application lifecycle management natively into the compute, storage, and network infrastructure stack, for any application, anywhere, on premises or in the public cloud.

Robin is the first implementation of hyper-converged Kubernetes in the market. Using Robin, users can self-service deploy big data, NoSQL databases, RDBMS, and AI/ML applications; share entire experiments among team members; quickly run what-if trials; scale resources, including GPUs and IOPS; and migrate as well as recreate entire application environments across data centers and clouds.

Robin offers a self-service app-store experience that simplifies deployment and lifecycle management with 1-click functions that shorten DevOps and IT tasks from hours and weeks to minutes. It makes applications truly agnostic of infrastructure choices and enables them to share resources and data with predictable performance, leading to significant cost savings.

Big Data, Artificial Intelligence & Machine Learning EcoCast – Partha Seetala, CTO

Robin is a Software Platform for Automating Deployment, Scaling and Life Cycle Management of Enterprise Applications on Kubernetes

Robin Systems Videos

Big data and artificial intelligence/machine learning are technology trends for which we’re just scratching the surface of the long-term potential. In such environments, storage isn’t just about capacity, but about how to use that data in the most expedient way possible. Today, as organizations consider the potential of these technologies, they’re struggling to determine how to store, manage, and protect this data. Moreover, they’re identifying key use cases for their burgeoning datasets. Increasingly, organizations are collecting data to train artificial intelligence and machine learning models in order to bring these powerful capabilities into their operations, to get ahead of the competition and to make the world a better place.

For example, PAIGE.AI, a spinout of Memorial Sloan Kettering Cancer Center (MSKCC) is using advanced technology to accelerate and optimize cancer research. The goal of PAIGE.AI is to develop and deliver a series of AI/ML modules that allow pathologists to improve the scalability of their work, enabling them to provide better care at lower cost. By analyzing petabytes of data from tens of thousands of digital slides of anonymized patient data, PAIGE.AI has developed deep learning algorithms based on convolutional and recurrent neural networks and generative models that are able to learn efficiently and help improve the accuracy and speed of cancer diagnosis.

This entire world brings with it new challenges and whole new terminology that has to be learned. You need to figure out the ups, the downs, the ins, and the outs of designing a big data architecture as well as help to identify and deploy the tools that will manage and consume this data.

In this Big Data, Artificial Intelligence & Machine Learning EcoCast you will learn about how big data, AI, and ML all come together and will be exposed to solutions that can help you rein in the madness while also harnessing their potential power.

On This Big Data EcoCast Event, You’ll Discover

  • Learn about the critical challenges imposed by big data needs
  • Identify the use cases that drive decisions around when to choose which architecture
  • Discover how AI & ML critically intersect with big data and what you need to do to keep that intersection from becoming the scene of an accident
  • Understand how you can leverage next-generation infrastructure to accelerate AI/ML model development

On-premise and Multi-Cloud support for AWS, Microsoft Azure, SAP HANA, MS-SQL, IBM DB2 & Packaged Enterprise Applications

Application-aware compute, network and storage layers decouple applications and infrastructure so that the applications can be easily moved, scaled, cloned and managed with 1-click lifecycle operations regardless of the infrastructure model (on-premise, cloud, hybrid-cloud, multi-cloud), which can technically be anywhere.

Robin Explainer Video – Robin Hyper-converged Kubernetes Platform in Two Minutes

White Paper – Deploy, Manage, Consolidate NoSQL Apps with Robin Hyperconverged Kubernetes Platform

NoSQL White Paper

NoSQL database applications like Cassandra, MongoDB, CouchDB, ScyllaDB, and others are popular tools in a modern application stack. However, deploying NoSQL databases typically starts with weeks of careful infrastructure planning to ensure good performance, the ability to scale to meet anticipated growth, and continued fault tolerance and high availability of the service. Post-deployment, the rigidity of the infrastructure also poses operational challenges in adjusting resources to meet changing needs, patching, upgrades, backup, and the ability to snapshot and clone the database to create test and dev copies.

Robin hyper-converged Kubernetes platform takes an innovative new approach where application lifecycle workflows are natively embedded into a tightly converged storage, network, and Kubernetes stack, enabling a 1-click self-service experience for both deployment and lifecycle management of big data, database, and AI/ML applications. Enterprises using Robin gain simpler and faster roll-out of critical IT and LoB initiatives, such as containerization, cloud migration, cost consolidation, and developer productivity.

This complimentary NoSQL white paper shows how to bring 1-click simplicity to deploy, snapshot, clone, patch, upgrade, backup, restore, and control QoS of any Kubernetes-based NoSQL App:

  • Deploy, manage, and consolidate any NoSQL App in your environment
  • Self-service deployment of NoSQL Apps with 1-click
  • Infrastructure consolidation and cost savings
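
The snapshot and clone semantics described above can be illustrated with a toy in-memory model. This is not Robin's storage layer (which does this at the copy-on-write volume level), just the behavioral contract: snapshots freeze state, and clones diverge independently:

```python
import copy

class VolumeSnapshotStore:
    """Toy snapshot/clone semantics: a snapshot freezes the current
    state, and a clone starts from a snapshot and then diverges
    independently of production. (Illustrative only; real platforms
    implement this with copy-on-write storage, not deep copies.)"""

    def __init__(self):
        self.live = {}          # the "database" contents
        self.snapshots = {}     # frozen states by name

    def snapshot(self, name):
        self.snapshots[name] = copy.deepcopy(self.live)

    def clone(self, snapshot_name):
        clone = VolumeSnapshotStore()
        clone.live = copy.deepcopy(self.snapshots[snapshot_name])
        return clone

db = VolumeSnapshotStore()
db.live["users"] = 100
db.snapshot("pre-upgrade")            # freeze state before a risky change
db.live["users"] = 250                # production keeps changing
test_copy = db.clone("pre-upgrade")   # dev/test copy at the old state
```

After cloning, `test_copy` still sees the pre-upgrade state (100 users) even though production has moved on to 250, which is exactly the property test/dev copies need.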

First Container Solution – Partha Seetala, CTO, Robin Systems | DataWorks Summit 2018 – theCUBE Video

Robin Hyper-Converged Kubernetes Platform announced as the First and Only Container Solution certified to run Hortonworks Data Platform (HDP)

Robin Hyper-Converged Kubernetes Platform – the First Container Solution certified to run Hortonworks Data Platform (HDP)

On day two at DataWorks Summit 2018, Rebecca Knight and James Kobielus spoke with Partha Seetala, Chief Technology Officer (CTO), Robin Systems, at theCUBE to discuss the first container solution certified to run Hortonworks Data Platform (HDP).

Tell us about Robin Systems

Robin Systems, a venture-backed company, is headquartered in San Jose in Silicon Valley. The focus is on allowing applications, such as big data, databases, NoSQL, and AI/ML, to run within the Kubernetes platform. What we have built (the first container solution certified to run HDP) is a product that converges storage, networking, and application workflow management, along with Kubernetes, to create a 1-click experience where users get a managed-services kind of feel when they’re deploying these applications. They can also do 1-click lifecycle management on these apps. Our thesis has been to look at it from the applications down and say, “Let the applications drive the underlying infrastructure to meet the user’s requirements,” instead of looking at this problem from the infrastructure up into the application.

Is this the differentiating factor for Robin Systems?

Yes, it is, because most of the folks out there today are looking at it as if it’s a component-based play. They want to bring storage to Kubernetes or networking to Kubernetes, but the challenges are not really around storage and networking.

If you talk to the operations folks, they say, “You know what? Those are underlying problems, but my challenge is more along the lines of: my CIO says the initiative is to make my applications mobile. Management wants to go across different clouds. That’s my challenge.” The line-of-business user says, “I want a managed-service experience.” Yes, storage is the thing that you want to manage underneath, but I want to go and click and create, let’s say, an Oracle database.

In terms of the developer experience here, from the application down, give us a sense for how Robin Systems tooling your product in a certain way enables that degree of specification of the application logic that will then get containerized within?

Absolutely. Like I said, we want applications to drive the infrastructure. What it means is that Robin is a software platform – the first container solution certified by Hortonworks to run HDP. We layer ourselves on top of the machines that we sit on, whether it is bare-metal machines on premises, VMs, or Azure, Google Cloud, as well as AWS. Then we make the underlying compute, storage, and network resources almost invisible. We treat it as a pool of resources. Once you have this pool of resources, they can be attached to the applications that are being deployed inside containers. It’s a software plane installed on machines. Once it’s installed, the experience moves away from infrastructure into applications. You log in, you see a portal, and you have a lot of applications in that portal. We ship support for about 25 applications.

So are these templates that the developer can then customize to their specific requirements? Or no?

Yes, absolutely. We ship reference templates for a wide variety of the most popular big data, NoSQL, database, and AI/ML applications today. But again, as I said, it’s a reference implementation. Typically, customers take the reference implementation and enhance it, or they use it to onboard their custom apps, for example, or the apps that we don’t ship out of the box.

So it’s a very open, extensible platform, but the goal is that whatever the application might be – in fact, we keep saying that if it runs somewhere else, it runs on Robin. So the idea here is that you can bring any app or database, and with the flip of a switch, you can make it 1-click deploy, 1-click manage, and 1-click mobile across clouds.

You keep mentioning one click and this idea of it being so easy, so convenient, so seamless. Is that the biggest concern of your customers – this ease and speed? Or what are some other things on their minds that you want to deliver?

So one click, of course, is the user experience part – but what is the real challenge? The real challenge is that there is a wide variety of tools being used by enterprises today. Even in the data analytics pipeline, there is a lot across the data store and processing pipeline. Users don’t want to deal with setting it up and keeping it up and running. They don’t want the management; they want to get the job done. Now, when you just want to get the job done, you really want to hide the underlying details of those platforms, and the best way to give that experience is to make it a single-click experience from the UI. So I keep calling it all one click, because that is the experience that hides the underlying complexity for these apps with the first container solution certified to run HDP.

Does your environment actually compile executable code based on that one click experience? Or where do the compilation and containerization actually happen in your distributed architecture?

All right, so I think the simplest way to explain it is this: we work on all three big public clouds, whether it is Azure, AWS, or Google. Your entire application is containerized for deployment into these clouds. So the idea here is to simplify it significantly. You have Kubernetes today; it can run anywhere, on premises, in the public cloud, and so on. Kubernetes is a great platform for orchestrating containers, but it is largely inaccessible to a certain class of data-centric applications. Robin makes that possible.

But the Robin take is that just onboarding those applications on Kubernetes does not solve your CXO’s or your line-of-business user’s problems. You ought to manage the environment from an application point of view, not from a container-management point of view. From an application point of view, management is a lot easier, and that is where we create this one-click experience.

Give us a sense – we’re here at DataWorks, and it’s the Hortonworks show. Discuss with us your partnership with Hortonworks; we’ve heard the announcement of HDP 3.0 and containerization support. Just give us a rough sense for how you align or partner with Hortonworks in this area.

Absolutely. It’s kind of interesting, because Hortonworks is a data management platform, if you think about it from that point of view. When we engaged with them first, some of our customers had been using the product, Hortonworks, on top of Robin – so orchestrating Hortonworks, making it a lot easier to use. One of the requirements was, “Are you certified with Hortonworks?” And the challenge Hortonworks had is that they had never certified a container-based deployment of Hortonworks before. They were actually very skeptical, you know: “You guys are saying all these things. Can you actually containerize and run Hortonworks?”

So we worked with Hortonworks, and if you go to the Hortonworks website, you’ll see that we are the first in the entire industry to be certified as a container-based play that can actually deploy and manage Hortonworks. They certified us by running a wide variety of tests, which they call the Q80 Test Suite, and when we got certified, the only other players in the market with that stamp of approval were Microsoft with Azure and EMC with Isilon.

So you’re in good company?

I think we are in great company.

Are you certified to work with HDP 3.0 or the prior version or both?

When we got certified, we were still on the 2.x version of Hortonworks; HDP 3.0 is a relatively newer version. But our plan is to continue working with Hortonworks to get certified as they release the program, and also to help them, because HDP 3.0 also has some container-based orchestration and deployment. So we want to help them provide the underlying infrastructure so that it becomes easier for users to spin up more containers.

The higher-level security and governance capabilities you’re describing have to sit above the Kubernetes layer. Hortonworks supports them in its data plane services portfolio. Does Robin Systems’ solution portfolio tap into any of that, or do you provide your own layer of security and metadata management?

We don’t want to take away the security model that the application itself provides, because the user might have set it up so that they are doing governance; it’s not just login and access control and things like this. Some governance is built in. We don’t want to change that. We want to keep the same experience and the same workflow that customers have, so we just integrate with whatever security the application has. We, of course, provide security in terms of isolating the different apps that are running on the Robin platform, but the security and access into the application itself is left to the apps themselves. When I say apps, I’m talking about Hortonworks or any other databases.

Moving forward, as you think about ways you’re going to augment and enhance and alter the Robin platform, what are some of the biggest trends that are driving your decision making around that in the sense of, as we know that companies are living with this deluge of data, how are you helping them manage it better?

I think there are a few trends that we are closely watching. One is around cloud mobility. CIOs want their applications, along with their data, to be available where their end users are. It’s almost like a follow-the-sun model, where you might have generated the data in one cloud, and at a different time, in a different time zone, you’ll want to keep the app as well as the data moving. So we are following that very closely: how we can make the mobility of data and apps a lot easier in that world.

The other one is around the general AI/ML workflow. One of the challenges there: you have great toolkits like TensorFlow, Theano, or Caffe, but people are buying very expensive hardware, let’s say an NVIDIA DGX box, which costs about $150,000 each. How do you keep these boxes busy so that you’re getting a good return on investment? It requires you to better manage the resources offered by these boxes. We are also monitoring that space, and we’re looking at how we can take the Robin platform and enable better utilization of GPUs, or the sharing of GPUs, for running AI/ML kinds of workloads.

We’ll be discussing these trends at the next DataWorks Summit, I’m sure, at some other time in the future.

Learn more about Robin Hyper-Converged Kubernetes Platform – the first container solution certified to run Hortonworks Data Platform (HDP) for big data, NoSQL databases, and RDBMS applications.

Hortonworks Data Platform Optimized for Docker Containers – Get Started Today

Robin Hortonworks Webinar

HDP on Robin Hyper-Converged Kubernetes Platform

Although the Docker revolution has made containers mainstream, containerizing big data is challenging because many containerization platforms do not support stateful applications. With the first and only out-of-the-box container-based solution that is certified by Hortonworks to run HDP, Robin Systems helps to build an Application-Defined Infrastructure.

Containerizing big data brings many benefits, such as:

  • Improved utilization and reduced licensing costs with shared hardware resources
  • Decreased administration costs and reduced time-to-market for big data apps with simplified operations

Join this Robin Hortonworks Webinar to learn about:

  1. App-store experience; 1-click deployment of HDP: Deploying HDP is now as easy as installing an app from the App Store. Robin enables self-service big data deployment with 1-click cluster provisioning to deploy complex distributed applications in minutes.
  2. Doesn’t get any simpler: Scaling up and scaling out is now as easy as adjusting the brightness on your phone – Use sliders to configure compute, network, and storage layers.
  3. Meet critical SLAs: See how Robin’s multi-tenant architecture enables IT teams to meet the most demanding SLAs and handle performance isolation between HDP services even in a shared infrastructure environment while letting development and scientific user teams enjoy the simplicity of an app-store experience.

Ali Bajwa, Principal Partner Solutions Engineer, Hortonworks

Ali Bajwa is a seasoned engineer with extensive experience architecting and developing complex big data, CRM, and mobile software infrastructure projects. He has delivered customer success by leading architectural workshops and proof-of-concept engagements with key customers. He has broad development experience across Hadoop, web, mobile, and desktop applications.

Ankur Desai, Director of Products, Robin Systems

Ankur Desai is a Director of Products at Robin Systems. He brings over 12 years of experience in software development, product management, and product marketing for enterprise software. Ankur holds an MBA from Dartmouth College and a Bachelor of Engineering in Information Technology from the University of Mumbai.

Infographic: Building Stateful Cloud Applications With Containers

Tips From Top Thinkers

Building Stateful Cloud Applications With Containers

The continued expansion of the cloud, growing end-user application performance demands, and an explosion in database needs are all stacking up fast against enterprise IT teams. When it comes to building enterprise database and big data applications, many are finding that container technology solves for at least a few of these problems. Here are stats and tips from top thinkers on how to best use containers when building stateful cloud applications.

Persistent Storage is a Top Challenge

26% of IT professionals cited “persistent storage” as a top challenge when it comes to leveraging containers.

Streamline Until It Hurts: “Some of the best writers have said they refine their work by cutting till it hurts. Containers are the same way.” – Eric Vanderburg, Vice President, Cybersecurity | TCDI

Isolate Containers & Hosts: “Maintaining isolation between the container and host system by separating the file systems is vital towards management of the stateful application.” – Craig Brown, PhD, Senior Big Data Architect & Data Science Consultant

Select an Intelligent Orchestrator: “An intelligent orchestrator, along with software-defined storage and software-defined networking, is very essential for running a cloud-based application.” – Deba Chatterjee, Senior Engineering Program Manager | Apple

A Majority of Enterprises are Investing in Containers

69% of IT pros reported their companies are investing in containers.

Validate All States: “What they all (containerized stateful apps) have in common is the requirement to reliably validate all possible states and state transitions when changes are made to the application.” – Marc Hornbeek, Principal Consultant, DevOps | Trace3

Ensure You Can Monitor All Containers: “Containerised applications are addictive. They can be created, tested and deployed very quickly when compared to traditional VMs. The infrastructure to begin monitoring a potentially vast and varying number of new containers is essential.” – Stephen Thair, Co-Founder | DevOpsGuys

Offset Workloads with Containers: “Stateful applications often reside in 1 or 2 geographical locations and take heavy loads … at different times during peak and off-peak periods. Understanding these variables will enable an operations team to determine how to best design the use of container applications.” – Steve Brown, Director, DevOps Solutions N.A. | Lenovo

Top Container Orchestrators Now More Popular Than DevOps Tools

When choosing a platform, 35% felt Docker was the best fit for them among all DevOps tools

Get Infrastructure Pros Excited: “A lot of people focus too much on the fact that ‘those application guys’ are coming to mess with our infrastructure, instead of thinking that maybe we can elevate our own jobs and start working more closely with applications.” – Stephen Foskett, Proprietor | Foskett Services

Follow Microservices Design Principles: “One of the fundamental aspects of containers is moving to immutable application infrastructure, which means that you cannot store state and application in the same container.” – JP Morgenthal, CTO, Application Services

Don’t Use Containers for Data Storage: “When dealing with stateful applications, precautions need to be taken to ensure that you are not compromising or losing data.” – Sylvain Kalache, Co-Founder | Holberton School

Looking for more advice on building your stateful cloud application with containers? Download our full eBook today for more exclusive advice from top cloud, DevOps, and container technology pioneers.

Taming the Cassandra-DataStax Dev/Test Challenge in Production Ecosystems

Cassandra-DataStax has a huge impact on customer-facing solutions at scale. However, like most technologies, it presents unique challenges that are often first felt in the development and testing of the application. Containers, Docker in particular, have become a leading tool in addressing some of those challenges. However, Docker alone does not solve all the really hard and time-consuming problems.

RAPID CLUSTER DEPLOYMENT

  • Simplified, repeatable, and rapid cluster deployment
  • Node placement logic
  • Cluster scaling
  • Guarantee quality of service (QoS)

CLUSTER CLONING

  • Maintain cluster configuration during clone process
  • How to speed up the cluster cloning process
  • Space efficiency when duplicating clusters

TIME TRAVEL FOR CLUSTERS

  • Cluster snapshots in Robin
  • Point in Time capabilities using Robin
  • Role of cloning in Point in Time operations

CARY BOURGEOIS, Systems Engineer, Robin Systems

Cary has 20+ years of experience working with applications, databases, and analytics. Prior to joining Robin, he worked at DataStax in their field organization. Before moving to DataStax, Cary worked at SAP supporting their in-memory database (SAP HANA), big data solutions, and analytic applications. Cary also has experience in the Consumer Packaged Goods industry, having developed several commercial applications for ACNielsen.

Robin Solution – Simple Application and Data Lifecycle Management

The Robin Hyper-Converged Kubernetes Platform delivers bare-metal-like performance, retains the benefits of virtualization, and enables significant cost savings, all from a single management layer.
The platform transforms commodity hardware into a compute, storage, and data continuum in which multiple applications can be deployed per machine to ensure the best possible hardware utilization.

The Robin containerization platform provides one-click deployment for traditional relational databases such as Oracle, PostgreSQL, and MySQL, as well as for modern NoSQL databases such as MongoDB and Cassandra. DBAs, DevOps engineers, and developers simply choose which database to deploy, while Robin completely automates infrastructure and data provisioning, monitoring, tracking of application topology through its lifecycle, and day-2 operations, backed by Robin’s application-to-spindle QoS guarantee.

Users interact with Robin entirely at the database level, while Robin provisions and manages the underlying infrastructure transparently.

  • Get bare-metal-like performance
  • Retain virtualization benefits
  • Enable cost savings
  • Manage everything from a single management layer

NoSQL Databases – Simple Lifecycle Management & Database Consolidation


Core Elements to Running an Oracle Database Using Docker

Want to Try Running Oracle Database Using Docker Yourself?

Connect with our solutions team. You will be running production ready Oracle clusters using Docker in no time.




In the last few years, Docker has been widely adopted in the stateless application space. Large enterprises have now started to explore using Docker to run stateful database and big data applications as well. Docker containers are lightweight and portable, and provide a great alternative to VM-based virtualization while ensuring bare-metal-grade performance. Running Oracle using Docker is now a breeze with the Robin Hyper-Converged Kubernetes Platform.

Running Oracle Using Docker

See Docker Benefits in Action – Consolidation, Agility, QoS

In this joint webinar by Robin Systems and Oracle Corporation, we will go over the essentials that you need to run the Oracle database inside a Docker container. We will also explore the core elements required to use containers to consolidate databases without compromising performance while guaranteeing isolation and no manageability changes.

During the demonstration, you will see how various database lifecycle management tasks can be performed with just the click of a button on the Robin Hyper-Converged Kubernetes Platform.

Deba Chatterjee, Director of Products, Robin Systems

Deba Chatterjee is the Director of Product management at Robin Systems. Prior to his current role, he was the product manager at Oracle Corporation responsible for the Oracle Multitenant option and Oracle Diagnostics and Tuning packs.

Before product management, Deba worked for the performance services team in Oracle Product Development IT, where he was responsible for the performance of large data warehouses.

He has previously worked at Oracle Consulting, Oracle India, Michelin Tires in Clermont-Ferrand, France, and Tata Consultancy Services. Deba has a Master’s in Technology Management – a joint program by Penn Engineering and Wharton Business School.

Gerald Venzl, Senior Principal Product Manager, Oracle

Gerald Venzl is a Senior Principal Product Manager for Oracle. During his career he has worked as a Developer, DBA, Performance Tuner, Software Architect, Consultant and Enterprise Architect prior to his current role. This allowed Gerald to live several different lives in the IT sector, providing him with a solid understanding of the concerns in those individual areas while gaining a holistic view overall. In his current role, Gerald focuses on evangelizing how to build systems that provide flexibility yet still meet the business’ needs.

Robin for Relational Databases – Oracle


451 Research – Containers: economically, they appear to be a better option than hardware virtualization

Is the hype around containers justified, or are they simply an alternative form of virtual machine? 451 Research believes containers are better placed, at least theoretically, to achieve lower TCO than traditional hardware virtualization. In fact, we have found that double-digit resource savings are achievable even with relatively simple implementations.

By reducing duplication, server resources are freed to be allocated to other requirements. In other words, container technology is likely to be more efficiently ‘sweated’ – resources being shared, with the asset used to the fullest – than its hardware-virtualized counterparts. The asset-sweating stretches beyond just servers: bandwidth, time, bits, bytes, and labor are all likely to be better utilized with containers, according to our research.

THE 451 TAKE
Rarely are decisions in IT based purely on cost. Cost is, of course, a factor, but it is balanced against the value achieved for that cost. Virtual machines are unlikely to be as cost-efficient as containers, but they do provide value in other ways (which we’ll cover in a follow-up report). However, the economic advantage of containers suggests they’re not slowing down anytime soon – by their very nature, they have an economic edge over hardware virtualization, and this is likely to be taken advantage of by vendors, providers, and end users. Over time, software vendors will seek to improve containers, so their value proposition will only increase against virtual machines.

VIRTUALIZATION ECONOMICS

The primary economic benefit of traditional server virtualization is the ‘sweating of assets’ through the consolidation of hardware – it roughly means ‘getting as much use as possible out of what you already possess.’ Originally, one server meant one operating system, typically delivering one workload. Through virtualization, one server can hold multiple operating systems, each one operating a logically separated workload. Before virtualization, perhaps just a tiny fraction of the asset (the server and its resources) might be used at any one time. Through virtualization, we can multiplex multiple applications together, so that resources are shared and the asset is fully used.


If a server at a total cost S was previously able to hold just one workload, but can now support n workloads, the cost per workload plummets from S to S/n. If n is 16, a fairly reasonable level of consolidation, that’s roughly a 94% cost saving per workload. The greater the value of n, the greater the savings. It is clear why virtualization is so commonplace today.
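The consolidation arithmetic above is easy to check with a quick calculation; the helper name below is ours, not the report's:

```python
def savings_per_workload(n_workloads: int) -> float:
    """Fractional cost saving per workload when a server of total cost S
    goes from hosting 1 workload to hosting n: 1 - (S/n)/S = 1 - 1/n."""
    return 1.0 - 1.0 / n_workloads

# n = 16 workloads: cost per workload drops from S to S/16,
# a saving of roughly 94% per workload.
print(f"{savings_per_workload(16):.1%}")  # 93.8%
```

As the formula shows, the savings curve flattens quickly: going from 16 to 32 workloads only improves the per-workload saving by about three percentage points.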

The cloud was the next step up from virtualization, providing the benefits of consolidation with the flexibility of being able to dynamically create and move resources to suit different business requirements. None of this is rocket science, and most in IT have theoretical and practical experience in the subject. But containers have seriously rocked the boat. They’re the new kids on the block, but are they all they’re cracked up to be?

A CONSOLIDATED NEW WORLD


Hardware virtualization means that operating systems (and their applications) share hardware resources such as compute, storage and memory from a single asset, be it a server or even a pool of servers. Container technology, specifically system containers, is essentially operating system virtualization – workloads share operating system resources such as libraries and code.

Containers have the same consolidation benefits as any virtualization technology, but with one major benefit – there is less need to reproduce operating system code. Hardware virtualization means each workload must have all its underlying operating system technology. If the operating system takes up 10% of a workload’s footprint, then in a hardware virtualized platform, 10% of the whole asset is spent on operating system code. This is regardless of the number of workloads, n, being run on the asset.

In the same environment utilizing containers, the operating system only takes up 10% divided by the number of workloads, n. In a nutshell, our server is running 10 workloads but only one operating system in our container environment; in the virtualized environment, the server would be running 10 workloads and 10 operating systems.
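Using the report's figures (a 10% operating-system share per workload and 10 workloads), the overhead gap can be sketched as follows; the function is an illustrative model of the argument, not anything published by 451 Research:

```python
def os_overhead_fraction(os_share: float, n_workloads: int,
                         containers: bool) -> float:
    """Fraction of the asset spent on operating-system code.
    Hardware virtualization duplicates the OS for every workload, so the
    share stays constant; containers share one OS across all workloads."""
    return os_share / n_workloads if containers else os_share

# 10 workloads, OS taking 10% of each workload's footprint:
print(os_overhead_fraction(0.10, 10, containers=False))  # 0.1  (10 OS copies)
print(os_overhead_fraction(0.10, 10, containers=True))   # 0.01 (one shared OS)
```

In this toy model the container environment spends a tenth as much of the asset on operating-system code, and the advantage grows with every additional workload packed onto the server.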

