Provision Oracle RAC Database as a Service with ROBIN Platform

Oracle RAC Database as a Service – Provision with ROBIN Platform

See how easy it is for anybody to stand up an entirely new Oracle RAC environment, including the grid infrastructure installation, the ASM configuration, and finally the creation of the RAC database itself.

Log into the Robin Hyperconverged Kubernetes Platform console and go straight to the application bundle screen. In this case, we have a couple of simple bundles, one of which is our Oracle RAC bundle, so we simply click on it to provision Oracle RAC. On clicking, we are immediately presented with the provisioning workflow associated with this application.

We will name our application; we’ll just call it Oracle RAC demo. Next, we have a couple of network interfaces to consider, because for Oracle RAC both the public and private IP address ranges are specified here. This is where we set the public address, because this is how the application will receive connection requests. We also have the ability to specify the size of the cluster, both in terms of the number of nodes and the amount of compute and memory capacity.

This gives us the ability to shape the way in which the database will be laid out. Here, we change the default from flash to spinning disk because we don’t have enough flash storage available for this particular deployment. We then move down to specify our private interconnect IP address and our Single Client Access Name (SCAN) for RAC. Scrolling further down, we find a number of other environment variables which may be passed through Robin for this deployment.

We have the ability to define ASM disk group redundancy and various credentials, and then we have our placement rules, where we control how these resources will be deployed on the physical Robin cluster. In this case, we need to allow multiple RAC instances on the same physical node because we only have two nodes in our demo environment.

Simply click Provision Application from that screen to kick off the deployment of our RAC environment. The provisioning process goes through a number of phases, beginning with the deployment of the vnodes (the virtual nodes, or pods, in the cluster) and then running a variety of scripts to complete the configuration of the RAC environment itself from an Oracle perspective, all visible through the UI. After this, in a matter of minutes we have an entirely fresh RAC environment up and running.
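If you also have kubectl-level access to the underlying Kubernetes cluster, you can watch the RAC pods come up as these phases complete. Below is a minimal sketch using the official Kubernetes Python client; the namespace name is a hypothetical placeholder, not something defined by the Robin bundle.

```python
# Minimal sketch: list the pods backing the new RAC application and the hosts they landed on.
# Assumes the official `kubernetes` Python client and kubeconfig access to the Robin cluster;
# the namespace "oracle-rac-demo" is a hypothetical value, not one defined by the bundle.
from kubernetes import client, config

config.load_kube_config()                      # or config.load_incluster_config()
core = client.CoreV1Api()

for pod in core.list_namespaced_pod(namespace="oracle-rac-demo").items:
    print(f"{pod.metadata.name:<40} phase={pod.status.phase:<10} "
          f"node={pod.spec.node_name} ip={pod.status.pod_ip}")
```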

View Provision Oracle RAC demo to learn more.

Scale Out Oracle RAC Database as a Service with ROBIN Platform

Oracle RAC Database as a Service – How to Scale

See how easy it is to scale an existing Oracle RAC environment out to an additional node, verify the new instance, and then shrink the cluster back, all from the Robin console.

You have seen how easy it is to deploy a fresh new Oracle RAC database environment. But what if we want to know how our workload might respond when a third node is added to the cluster? In other words, we want to test the scalability of that particular workload.

It’s really easy. We just click on “Scale Out” for the application, and here we can define the number of nodes by which we want to extend this cluster. This is done simply by sliding across the bar; for this demonstration, we add a single node. We can also explicitly call out a hostname for the new node, and we can go back and tweak some of the environment variables as input for this operation, but for this demo we don’t need to make any of these changes.

So let’s close these out and simply click the “Scale Out” button to begin the process of extending our RAC cluster. Behind the scenes, Robin is making all the necessary calls to Oracle to effect the extension of the cluster, in much the same way as you would through conventional means for any other installation, ensuring that the configuration is valid from an Oracle perspective. You can see the success of the operation in this window. We close the window, and back on our application screen the refreshed view shows that our third node has been added.

We can see the new IP addresses and the physical host on which the new container has been deployed. Let’s jump into the new container and do a similar verification to confirm that we have successfully reshaped the RAC database environment from two nodes to three. We log into Oracle, set our environment, and use srvctl to check the status of our Robin database again; we can see that our third Robin instance has been added and is now running on our new vnode.

In the new container in the Robin cluster, we can see that the new VIP has been added and is up and running. The resources have been successfully configured across the new node, and if we go back into SQL*Plus, log back into the database, and once again query gv$instance, we can see that the database is up and fully available across all three instances of the cluster.
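The same gv$instance check can also be scripted outside SQL*Plus. Below is a minimal sketch using the python-oracledb driver; the user, password placeholder, and SCAN/service DSN are assumptions for this environment, not values defined by the demo.

```python
# Minimal sketch: confirm all RAC instances are OPEN by querying gv$instance.
# Assumes the python-oracledb driver; the user, password, and SCAN/service DSN below
# are placeholders, not values taken from the demo environment.
import oracledb

conn = oracledb.connect(user="system", password="<password>",
                        dsn="rac-scan.example.com:1521/robin")
cur = conn.cursor()
cur.execute("SELECT inst_id, instance_name, status FROM gv$instance ORDER BY inst_id")
for inst_id, name, status in cur:
    print(inst_id, name, status)   # expect three rows with status OPEN after the scale-out
conn.close()
```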

We then exit out of that and return to the UI. Now, what if we want to scale back in? Testing is complete, so we need to shrink the cluster back to two nodes. Watch the demo to understand how to scale back in.

Clone Oracle RAC Database as a Service with ROBIN Platform

We have a database application that is up and running. Now let’s take a look at how easy it is to take snapshots of that application and then subsequently perform cloning operations.

Create Snapshot

Creating a snapshot is quite easy with Robin. We have the option to provide a name for the snapshot or just use the default – which is what we’ll do here. We can look at some of the operations behind the scenes that are going to occur with respect to freezing IO and quiescing the application to maintain consistency. We will then see the newly created snapshot.

From here we have the option of restoring back to that point in time, or, as in this case, performing a thin clone operation based on that snapshot. Here we want to name the clone. It’s essentially an entirely new application stack that will be stood up as part of this operation, so we need to give it a name, just as we did for the original application when it was provisioned.

We also need to specify both the public and the private IP addresses because, again, this is a RAC database application. We could tweak the capacity for this app, but we’ll leave it the same, specify the private IP address, and simply launch the operation by clicking Clone. This takes a few minutes.

We can again take a look at some of the operations that are occurring behind the scenes with respect to deploying the application. It’s relatively quick and at this point, we can close out this window.

View Oracle RAC Clone and the original application

Now we are presented with the application screen for the newly cloned app, with all the related information about the new nodes that have been provisioned, IP addresses, etc. If we then go back and click on the general application screen, we get a summary showing the original application, the newly cloned deployment, and the snapshot on which it was based.

Postgres Clone Database – ROBIN Storage

Postgres Clone – ROBIN Storage PostgreSQL Demo

Application cloning improves the collaboration across Dev/Test/Ops teams. Teams can share app+data quickly, reducing the procedural delays involved in re-creating environments. Each team can work on their clone without affecting other teams. In this demo, we will:

  • Use a PostgreSQL database Snapshot to create a clone
  • Verify the clone reflects the data captured in the snapshot
  • Modify the cloned database and verify the original database remains unaffected

We will see how we can clone an entire PostgreSQL database, including all Kubernetes resources such as Pods, StatefulSets, ConfigMaps, PersistentVolumeClaims, etc. with a single command.
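As a quick way to confirm that the clone really does carry its own Kubernetes resources, you can enumerate them with the official Kubernetes Python client. This is only a verification sketch; the clone’s namespace name is an assumption, since the demo does not specify it.

```python
# Minimal sketch: list the Pods, StatefulSets, ConfigMaps, and PVCs that make up the cloned app.
# Assumes the official `kubernetes` Python client; the namespace "pg-clone" is hypothetical.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()
apps = client.AppsV1Api()
ns = "pg-clone"

print("Pods:        ", [p.metadata.name for p in core.list_namespaced_pod(ns).items])
print("StatefulSets:", [s.metadata.name for s in apps.list_namespaced_stateful_set(ns).items])
print("ConfigMaps:  ", [c.metadata.name for c in core.list_namespaced_config_map(ns).items])
print("PVCs:        ", [v.metadata.name for v in core.list_namespaced_persistent_volume_claim(ns).items])
```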

Postgres Deploy, Snapshot, and Rollback – ROBIN Storage

Postgres Deploy – ROBIN Storage PostgreSQL Demo

Snapshots allow you to restore your application’s state to a point-in-time. If you make a mistake, such as unintentionally deleting important data, you can simply undo it by restoring a snapshot. In this demo, we will:

  • Deploy a Postgres database on Kubernetes using Helm and ROBIN Storage
  • Register our Postgres database with ROBIN as an “app”
  • Incrementally add data to our database and take snapshots
  • Simulate a user error or database fault by deleting some data
  • Recover the lost data from a snapshot using the ROBIN Rollback feature

We will see how we can roll back an entire PostgreSQL database, including all Kubernetes resources such as Pods, StatefulSets, ConfigMaps, PersistentVolumeClaims, etc. with a single command.
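One simple way to verify the recovery step is to count rows before the simulated fault and again after the rollback. The sketch below uses psycopg2; the host, credentials, and table name are hypothetical placeholders for illustration only.

```python
# Minimal sketch: check that data deleted during the simulated fault is visible again
# after the rollback. Assumes psycopg2 and a reachable Postgres service; the host,
# credentials, and table name "demo_events" are hypothetical placeholders.
import psycopg2

conn = psycopg2.connect(host="postgres.example.com", port=5432,
                        dbname="postgres", user="postgres", password="<password>")
with conn, conn.cursor() as cur:
    cur.execute("SELECT count(*) FROM demo_events;")
    print("rows visible after rollback:", cur.fetchone()[0])
conn.close()
```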

Elastic – Dynamic Scaling with ROBIN Hyperconverged Kubernetes Platform

Scale on-demand

No need to create IT tickets and wait for days to scale up Data Nodes by adding more memory, CPU, or storage, or to scale out by adding more Data Nodes.

Dynamic scaling to meet sudden demands

If a Data Node runs out of resources, end users can simply scale up by adding more CPU/RAM with no need for IT tickets. Adding more Data Nodes to an existing ELK cluster is also a simple one-click operation.

Elastic – Deploy ELK Clusters with ROBIN Hyperconverged Kubernetes Platform

Deliver ELK (Elasticsearch, Logstash, Kibana) Stack-as-a-Service

Turbocharge your DevOps productivity with the Elastic Stack on Kubernetes. Improve the agility and efficiency of your Developers, Operations teams, and Data Scientists.

Self-service experience

ROBIN provides self-service provisioning and management capabilities to developers, operations teams, and data scientists, significantly improving their productivity.

Provision custom Elastic stacks in minutes

ROBIN has automated the end-to-end cluster provisioning process for the Elastic Stack, including custom stacks with different versions and combinations of Elasticsearch, Logstash, Kibana, Beats, and Kafka. The entire provisioning process takes only a few minutes.

Consolidate ELK clusters with ROBIN Hyperconverged Kubernetes Platform

Improve hardware utilization

ROBIN provides performance isolation and RBAC to consolidate multiple ELK workloads without compromising SLAs and QoS.

Get more out of your hardware

Consolidating multiple ELK workloads while ensuring data locality for Data Nodes improves performance and reduces the hardware footprint. You can also reduce hardware cost by sharing compute resources between clusters: if an ELK cluster runs the majority of its batch jobs at night, it can borrow resources from an adjacent ELK cluster with daytime peaks, and vice versa.

Controlling IOPS in a Shared Environment

In this video, we demonstrate how easily we can throttle IOPS from an application to address the noisy neighbor problem with the ROBIN Hyper-Converged Kubernetes Platform.

Input/output operations per second (IOPS, pronounced eye-ops) is an input/output performance measurement used to characterize computer storage devices like hard disk drives (HDD), solid-state drives (SSD), and storage area networks (SAN). Like benchmarks, IOPS numbers published by storage device manufacturers do not directly relate to real-world application performance.[1][2]

Controlling IOPS – Background

To meaningfully describe the performance characteristics of any storage device, it is necessary to specify a minimum of three metrics simultaneously: IOPS, response time, and (application) workload. Absent simultaneous specification of response time and workload, IOPS figures are essentially meaningless. In isolation, IOPS can be considered analogous to the “revolutions per minute” of an automobile engine: an engine capable of spinning at 10,000 RPM with its transmission in neutral does not convey anything of value, whereas an engine capable of developing a specified torque and horsepower at a given number of RPM fully describes the capabilities of the engine.

In 1999, recognizing the confusion created by industry abuse of IOPS numbers following Intel’s release of IOmeter, a performance benchmarking tool, the Storage Performance Council developed an industry-standard, peer-reviewed and audited benchmark that has been widely recognized as the only meaningful measurement of storage device I/O performance: the SPC-1 benchmark suite. The SPC-1 requires storage vendors to fully characterize their products against a standardized workload closely modeled on ‘real-world’ applications, reporting both IOPS and response times, with explicit prohibitions and safeguards against ‘cheating’ and ‘benchmark specials’. As such, an SPC-1 benchmark result provides users with complete information about IOPS, response times, sustainability of performance over time, and data integrity checks. Moreover, SPC-1 audit rules require vendors to submit a complete bill of materials, including pricing of all components used in the benchmark, to facilitate SPC-1 “Cost-per-IOPS” comparisons among vendor submissions.

Single-dimension IOPS tools created explicitly for benchmark marketing, such as Iometer (originally developed by Intel), as well as IOzone and FIO,[3] have frequently been used to grossly exaggerate IOPS. A notable example is Sun (now Oracle) promoting its F5100 flash array as purportedly capable of delivering “1 million IOPS in 1 RU” (rack unit); when subsequently tested on the SPC-1, the same storage device was capable of delivering only 30% of the IOmeter value.[4][5]

The specific number of IOPS possible in any system configuration will vary greatly depending upon the variables the tester enters into the program, including the balance of read and write operations, the mix of sequential and random access patterns, the number of worker threads and the queue depth, and the data block sizes.[1] Other factors can also affect the IOPS results, including the system setup, storage drivers, and OS background operations. When testing SSDs in particular, there are also preconditioning considerations that must be taken into account.[6]
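For reference, those variables map directly onto the knobs of a synthetic I/O tool such as FIO (mentioned above). The following sketch drives fio from Python with a parameter set modeled on the 4 KB random, 70/30 read/write, queue-depth-4 profile described later in this article; the test file path and runtime are arbitrary example choices.

```python
# Minimal sketch: run an fio job whose parameters cover the variables listed above
# (read/write mix, random vs. sequential pattern, worker threads, queue depth, block size).
# Assumes fio is installed; the test file path and runtime are arbitrary example values.
import subprocess

params = {
    "name": "randrw-4k",
    "filename": "/tmp/fio-testfile",
    "size": "1G",
    "bs": "4k",          # data block size
    "rw": "randrw",      # random mixed read/write pattern
    "rwmixread": "70",   # 70/30 read/write balance
    "iodepth": "4",      # queue depth
    "numjobs": "1",      # worker threads
    "runtime": "60",
    "time_based": None,  # flag option with no value
}
args = ["fio"] + [f"--{k}" if v is None else f"--{k}={v}" for k, v in params.items()]
subprocess.run(args, check=True)
```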

Performance characteristics and Controlling IOPS

Figure: Random access compared to sequential access.

The most common performance characteristics measured are sequential and random operations. Sequential operations access locations on the storage device in a contiguous manner and are generally associated with large data transfer sizes, e.g. 128 kB. Random operations access locations on the storage device in a non-contiguous manner and are generally associated with small data transfer sizes, e.g. 4kB.

The most common performance characteristics are as follows:

Measurement – Description
Total IOPS – Total number of I/O operations per second (when performing a mix of read and write tests)
Random Read IOPS – Average number of random read I/O operations per second
Random Write IOPS – Average number of random write I/O operations per second
Sequential Read IOPS – Average number of sequential read I/O operations per second
Sequential Write IOPS – Average number of sequential write I/O operations per second

For HDDs and similar electromechanical storage devices, the random IOPS numbers are primarily dependent upon the storage device’s random seek time, whereas for SSDs and similar solid-state storage devices, the random IOPS numbers are primarily dependent upon the storage device’s internal controller and memory interface speeds. On both types of storage devices, the sequential IOPS numbers (especially when using a large block size) typically indicate the maximum sustained bandwidth that the storage device can handle.[1] Often sequential IOPS are reported as a simple MB/s number as follows:

IOPS × TransferSizeInBytes = BytesPerSec (with the result typically converted to megabytes per second)
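As a worked example of that conversion (the IOPS figure and transfer size below are illustrative values, not vendor numbers):

```python
# Worked example of IOPS × TransferSizeInBytes = BytesPerSec, reported as MB/s.
# The IOPS figure and the 128 kB sequential transfer size are illustrative values only.
iops = 2000
transfer_size_bytes = 128 * 1024           # 128 kB sequential transfers
bytes_per_sec = iops * transfer_size_bytes
print(f"{bytes_per_sec / 1_000_000:.1f} MB/s")   # prints "262.1 MB/s"
```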

Some HDDs will improve in performance as the number of outstanding I/Os (i.e. the queue depth) increases. This is usually the result of more advanced controller logic on the drive performing command queuing and reordering, commonly called either Tagged Command Queuing (TCQ) or Native Command Queuing (NCQ). Most commodity SATA drives either cannot do this, or their implementation is so poor that no performance benefit can be seen. Enterprise-class SATA drives, such as the Western Digital Raptor and Seagate Barracuda NL, will improve by nearly 100% with deep queues.[7] High-end SCSI drives, more commonly found in servers, generally show much greater improvement, with the Seagate Savvio exceeding 400 IOPS, more than doubling its performance.

While traditional HDDs have about the same IOPS for read and write operations, most NAND flash-based SSDs are much slower at writing than reading due to the inability to rewrite directly into a previously written location, which forces a procedure called garbage collection.[8][9][10] This has caused hardware test sites to start providing independently measured results when testing IOPS performance.

Newer flash SSDs, such as the Intel X25-E, have much higher IOPS than traditional HDDs. In a test done by Xssist using IOmeter with 4 KB random transfers, a 70/30 read/write ratio, and a queue depth of 4, the IOPS delivered by the Intel X25-E 64 GB G1 started around 10,000 IOPS, dropped sharply after 8 minutes to 4,000 IOPS, and continued to decrease gradually for the next 42 minutes. IOPS varied between 3,000 and 4,000 from around the 50th minute onwards for the rest of the 8+ hour test run.[11] Even with the drop in random IOPS after the 50th minute, the X25-E still has much higher IOPS than traditional hard disk drives. Some SSDs, including the OCZ RevoDrive 3 X2 PCIe using the SandForce controller, have shown much higher sustained write performance that more closely matches the read speed.[12]

Self-service deployment of a Cloudera cluster on the Robin platform demo video

In this demo video, we demonstrate how you can set up a Cloudera cluster with the click of a button on the ROBIN Hyper-Converged Kubernetes Platform.

Robin’s application-aware manager simplifies deployment and lifecycle management using container-based “virtual clusters.” Each cluster node is deployed within a container. The collection of containers running across servers makes the “virtual cluster.” This allows Robin to automate all tasks pertaining to the creation, scheduling, and operation of these virtual application clusters, to the extent that an entire data pipeline can be provisioned or cloned with a single click and minimal upfront planning or configuration.

Robin Platform has three components

Robin Application-aware compute

Robin platform aggregates the existing compute – proprietary or commodity servers – and creates a single layer of all compute resources that are available to each application that the enterprise uses.

Container Technology

Robin leverages container technology to consolidate applications with complete runtime isolation. A container is a lightweight, OS-level virtualization technology that allows the creation of compartmentalized and isolated application environments on top of a standard OS.

Performance-Sensitive Workloads

Robin is the first and only product in the industry that brings application lifecycle management benefits to all types of enterprise applications – including highly performance-sensitive workloads such as NoSQL databases, RDBMS, and Big Data.

Appropriate Container Configuration

Robin’s adaptive container technology picks the appropriate container configuration depending on the application type. Traditional applications are deployed within “system containers” to provide VM-like semantics, and Robin also supports the deployment of stateless microservices applications as Docker containers.

Zero Performance Impact

When used with bare-metal servers, Robin enables “zero-performance-impact” consolidation of data-heavy databases and other distributed applications such as Elasticsearch, along with application lifecycle management features, resulting in significant operational efficiency gains and cost reduction.

Robin Application-aware storage

Robin Application-aware manager

Agile Provisioning

  • Simplify cluster deployment using an application-aware fabric controller – provision an entire operational data pipeline within minutes
  • Deploy container-based “virtual clusters” running across commodity servers
  • Automate tasks – create, schedule, and operate virtual application clusters
  • Scale up or scale out instantaneously to meet application performance demands
