Forrester – Taking Enterprise Apps to the Cloud


Join Forrester and Robin Systems for a joint webinar: Taking Enterprise Apps to the Cloud – Challenges & Benefits Blended with Containers.

Enterprise Applications Challenges – While the cloud seems great for saving on CAPEX (subscription vs. hardware) and optimizing OPEX (greater agility and flexibility), you – like many others – might find that it is not always as easy as it sounds. Users report that the ease of spinning up resources in the cloud does not reduce server and resource sprawl; rather, it makes sprawl harder to track and manage.

Looking at numerous cloud usage patterns, it is clear that stateless, web-scale or web-facing apps are best suited for the cloud – when a VM goes down, simply bring up a new one in its place. When it comes to onboarding stateful distributed or clustered applications, however, on-demand cloud resources alone are not sufficient, and significant planning and architecture adaptations are required.

Modern enterprise apps are often data-heavy and data-centric in nature, relying on Big Data pipelines or NoSQL databases, and the architectural implications of this are not easily resolved in the cloud.

Learn what others are attempting – and what some are already doing – to make the cloud work despite these challenges. The session will include a discussion of the latest trends and best practices, as well as guiding points to consider.

Dave Bartoletti is a Principal Analyst at Forrester Research.   Dave has developed, delivered, supported, and marketed game-changing technologies for more than 25 years as a software executive at several high-profile technology and financial services leaders. He was at the forefront of the middleware, web, virtualization, automation, and cloud computing tech disruptions as both vendor and consumer.

Razi Sharir is VP of Products and Marketing at Robin Systems. Razi is a veteran product management executive who joined Robin from CA Technologies, where he led the SaaS Center of Excellence and Product Management for the team that developed a container-based enterprise PaaS geared for the modern application economy.

How I Stopped Worrying & Learned to Love Data – Hortonworks (HDP)

Hortonworks (HDP) and Robin Systems Webinar

Deploying and right-sizing clusters, meeting seasonal peaks without disrupting availability, and supporting multiple clusters on a shared platform are often seen as the most difficult challenges in a Hadoop deployment.

In this webinar, we discuss these operational complexities that are often associated with Hadoop deployments and how they adversely impact the business. We will then look at the Robin Hyper-Converged Kubernetes Platform and see how it can help address these challenges.

Eric Thorsen is VP, Industry Solutions at Hortonworks, with a specialty in Retail and Consumer Products.

Eric has over 25 years of technology experience. Prior to joining Hortonworks, Eric was a VP with SAP, managing strategic customers in the Retail and CP industries. Focusing on business value and the impact of technology on business imperatives, Eric has counseled grocers, e-commerce companies, durables and hardline manufacturers, as well as fashion and specialty retailers.

Eric’s focus on open source big data provides strategic direction for revenue and margin gain, greater consumer loyalty, and cost-takeout opportunities.

Deba Chatterjee, Director of Products at Robin Systems, has worked on data-intensive applications for more than 15 years. In his previous position as Senior Principal Product Manager, Oracle Multi-Tenant, Deba worked on delivering mission-critical solutions for one of the biggest enterprise databases.

At Robin Systems, Deba has brought that experience to building the Robin Hyper-Converged Kubernetes Platform, which delivers bare-metal performance and application-level quality of service across all applications, helping companies meet peak workloads while maximizing off-peak utilization.

Meeting seasonal data peaks

Today, organizations are struggling to cobble together disparate open source software to manage Big Data environments such as Hadoop, or to build an effective data pipeline that can withstand both the volume and the speed of data ingestion and analysis.

The applications used within the big data pipeline differ from case to case and almost always present multiple challenges. Organizations are looking to do the following:

  • Harness data from multiple sources to ingest, clean, & transform Big Data
  • Achieve agile provisioning of applications & clusters
  • Scale elastically for seasonal spikes and growth
  • Simplify Application & Big Data lifecycle management
  • Manage all processes with lower OPEX costs
  • Share data among Dev, Test, Prod environments easily

Robin Solution – Simple Big Data Application & Pipeline Management

The Robin Hyper-Converged Kubernetes Platform provides a complete out-of-the-box solution for hosting Big Data environments such as Hadoop in your big data pipeline on a shared platform, built from your existing hardware – proprietary or commodity – or from cloud components.

The container-based Robin Hyper-Converged Kubernetes Platform helps you manage Big Data and rapidly build an elastic, agile, high-performance Big Data pipeline:

  • Deploy on bare metal or on virtual machines
  • Rapidly deploy multiple instances of data-driven applications
  • No need to make additional copies of data

ESG Lab Review – Robin Hyper-Converged Kubernetes Platform

Abstract

This ESG Lab Report highlights the recent testing of the container-based Robin Hyper-Converged Kubernetes Platform. Using a combination of guided demos and audited performance results, ESG Lab validated the ease of use, performance, scalability, and efficiency of Robin Systems’ container-based architecture.

The Challenges

Containers optimize application deployment by bundling all of the application’s required components into a single package, including supporting libraries and configuration files. Containers only require a supported Linux kernel to operate, making it easy to move them between environments, e.g., between hosts, from dev to test, or from test to production. Organizations are discovering that existing data center infrastructure is not capable of dealing with a large number of containerized applications, since a single modern microservices-based web application can easily span hundreds or more containers. Organizations run many applications and often find their systems administration teams overwhelmed attempting to match resources with containers.

Containers improve server utilization by allowing multiple applications to run on the same server. But since all applications share the same storage, storage performance can be erratic, which impacts overall application performance. To combat this, some organizations deploy critical applications on siloed infrastructure to ensure good performance, which leads to overprovisioned hardware and poor resource utilization.

Figure 1. Plans for Deploying Container Management Framework Technology (Source: Enterprise Strategy Group, 2017)

As shown in Figure 1, recent ESG research indicates that 68% of organizations are testing or using containers today, and another 16% are planning to start using them soon.1 The benefits of containers – including easy, consistent application deployment and light overhead when compared with virtual machines and hypervisors – make them appealing for a variety of applications.

Read more – ESG Lab Report

Robin Systems Videos

Controlling IOPS in a Shared Environment

In this video, we demonstrate how easily an application’s IOPS can be throttled to address the noisy neighbor problem with the Robin Hyper-Converged Kubernetes Platform.

Input/output operations per second (IOPS, pronounced eye-ops) is an input/output performance measurement used to characterize computer storage devices like hard disk drives (HDD), solid state drives (SSD), and storage area networks (SAN). Like benchmarks, IOPS numbers published by storage device manufacturers do not directly relate to real-world application performance.[1][2]

Controlling IOPS – Background

To meaningfully describe the performance characteristics of any storage device, it is necessary to specify a minimum of three metrics simultaneously: IOPS, response time, and (application) workload. Absent simultaneous specification of response time and workload, IOPS figures are essentially meaningless. In isolation, IOPS can be considered analogous to the “revolutions per minute” of an automobile engine: an engine capable of spinning at 10,000 RPM with its transmission in neutral does not convey anything of value; however, an engine capable of developing a specified torque and horsepower at a given number of RPM fully describes its capabilities.

In 1999, recognizing the confusion created by industry abuse of IOPS numbers following Intel’s release of Iometer, a performance benchmarking tool, the Storage Performance Council developed an industry-standard, peer-reviewed, and audited benchmark that has been widely recognized as the only meaningful measurement of storage device I/O performance: the SPC-1 benchmark suite. SPC-1 requires storage vendors to fully characterize their products against a standardized workload closely modeled on real-world applications, reporting both IOPS and response times, with explicit prohibitions and safeguards against cheating and ‘benchmark specials’. As such, an SPC-1 benchmark result provides users with complete information about IOPS, response times, sustainability of performance over time, and data integrity checks. Moreover, SPC-1 audit rules require vendors to submit a complete bill of materials, including pricing of all components used in the benchmark, to facilitate ‘Cost-per-IOPS’ comparisons among vendor submissions.

Among the single-dimension IOPS tools created explicitly by and for benchmarketers, applications such as Iometer (originally developed by Intel), IOzone, and FIO[3] have frequently been used to grossly exaggerate IOPS. A notable example is Sun (now Oracle) promoting its F5100 flash array as purportedly capable of delivering “1 million IOPS in 1 RU” (rack unit). When subsequently tested on SPC-1, the same device delivered only 30% of its Iometer value.[4][5]

The specific number of IOPS possible in any system configuration will vary greatly depending upon the variables the tester enters into the program, including the balance of read and write operations, the mix of sequential and random access patterns, the number of worker threads, the queue depth, and the data block sizes.[1] Other factors can also affect the results, including the system setup, storage drivers, and OS background operations. When testing SSDs in particular, there are also preconditioning considerations that must be taken into account.[6]
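As a rough illustration of how these variables shape a measured IOPS figure, here is a minimal Python sketch of a random-read test against a scratch file. It is a toy under stated assumptions – buffered reads (so page-cache hits inflate the numbers), a hypothetical file path, and POSIX-only os.pread – not a substitute for purpose-built tools such as fio or Iometer.

```python
# Toy random-read IOPS measurement (POSIX only; illustrative, not a benchmark).
# Buffered reads are used, so page-cache hits will inflate the reported figure.
import os
import random
import time

PATH = "scratch.bin"            # hypothetical scratch-file path
FILE_SIZE = 256 * 1024 * 1024   # 256 MiB test file
BLOCK_SIZE = 4 * 1024           # 4 kB blocks, typical for random I/O tests
RUNTIME = 10.0                  # seconds to run

# Create a sparse scratch file to read from.
with open(PATH, "wb") as f:
    f.truncate(FILE_SIZE)

fd = os.open(PATH, os.O_RDONLY)
num_blocks = FILE_SIZE // BLOCK_SIZE

ops = 0
start = time.perf_counter()
while time.perf_counter() - start < RUNTIME:
    # Random access pattern: one aligned 4 kB read at a random offset.
    offset = random.randrange(num_blocks) * BLOCK_SIZE
    os.pread(fd, BLOCK_SIZE, offset)
    ops += 1
elapsed = time.perf_counter() - start
os.close(fd)

print(f"Random read IOPS ({BLOCK_SIZE // 1024} kB blocks): {ops / elapsed:.0f}")
```

Changing BLOCK_SIZE, switching to sequential offsets, or adding worker threads would change the reported number substantially, which is exactly the point made above.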

Performance characteristics and Controlling IOPS

Random access compared to sequential access.

The most common performance characteristics measured are sequential and random operations. Sequential operations access locations on the storage device in a contiguous manner and are generally associated with large data transfer sizes, e.g., 128 kB. Random operations access locations on the storage device in a non-contiguous manner and are generally associated with small data transfer sizes, e.g., 4 kB.

The most common performance characteristics are as follows:

Measurement – Description
Total IOPS – Total number of I/O operations per second (when performing a mix of read and write tests)
Random Read IOPS – Average number of random read I/O operations per second
Random Write IOPS – Average number of random write I/O operations per second
Sequential Read IOPS – Average number of sequential read I/O operations per second
Sequential Write IOPS – Average number of sequential write I/O operations per second

For HDDs and similar electromechanical storage devices, the random IOPS numbers are primarily dependent upon the storage device’s random seek time, whereas for SSDs and similar solid state storage devices, the random IOPS numbers are primarily dependent upon the storage device’s internal controller and memory interface speeds. On both types of storage devices, the sequential IOPS numbers (especially when using a large block size) typically indicate the maximum sustained bandwidth that the storage device can handle.[1] Often sequential IOPS are reported as a simple MB/s number, as follows:

IOPS × TransferSizeInBytes = BytesPerSec (with the answer typically converted to megabytes per second)
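As a worked instance of this relationship, the short Python helper below (illustrative figures only) converts a sequential IOPS number at a given transfer size into MB/s:

```python
# Worked example of: IOPS * TransferSizeInBytes = BytesPerSec

def iops_to_mb_per_sec(iops: float, transfer_size_bytes: int) -> float:
    """Convert an IOPS figure at a given transfer size into MB/s."""
    return iops * transfer_size_bytes / 1_000_000

# e.g., 2,000 sequential IOPS at 128 kB per transfer:
print(iops_to_mb_per_sec(2000, 128 * 1024))  # about 262 MB/s
```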

Some HDDs will improve in performance as the number of outstanding I/Os (i.e., the queue depth) increases. This is usually the result of more advanced controller logic on the drive performing command queuing and reordering, commonly called either Tagged Command Queuing (TCQ) or Native Command Queuing (NCQ). Most commodity SATA drives either cannot do this, or their implementation is so poor that no performance benefit can be seen. Enterprise-class SATA drives, such as the Western Digital Raptor and Seagate Barracuda NL, will improve by nearly 100% with deep queues.[7] High-end SCSI drives more commonly found in servers generally show much greater improvement, with the Seagate Savvio exceeding 400 IOPS, more than doubling its performance.

While traditional HDDs have about the same IOPS for read and write operations, most NAND flash-based SSDs are much slower at writing than reading due to the inability to rewrite directly into a previously written location, which forces a procedure called garbage collection.[8][9][10] This has caused hardware test sites to start providing independently measured results when testing IOPS performance.

Newer flash SSDs, such as the Intel X25-E, have much higher IOPS than traditional HDDs. In a test done by Xssist using Iometer (4 KB random transfers, 70/30 read/write ratio, queue depth 4), the IOPS delivered by the Intel X25-E 64GB G1 started at around 10,000, dropped sharply after 8 minutes to 4,000, and continued to decrease gradually for the next 42 minutes. IOPS varied between 3,000 and 4,000 from around the 50th minute onwards for the rest of the 8+ hour test run.[11] Even with the drop in random IOPS after the 50th minute, the X25-E still delivered much higher IOPS than traditional hard disk drives. Some SSDs, including the OCZ RevoDrive 3 x2 PCIe using the SandForce controller, have shown much higher sustained write performance that more closely matches the read speed.[12]

On-demand Webinar: Containerizing your Existing Enterprise Applications

Robin Hyper-Converged Kubernetes Platform for containerizing your existing enterprise applications

Traditional enterprise applications are the lifeblood of any business. However, most companies still struggle to deploy and operate these applications using existing data center infrastructure and tooling. This puts an enormous burden on application developers and IT administrators, who must manually deploy applications and their dependencies, manage the data lifecycle, and deliver the desired quality of service, all while meeting business SLAs.

Containers are lightweight, fast, and agile, and they solve the dependency issues that complicate application deployment. Unfortunately, container technology such as Docker has seen adoption mostly among modern applications – stateless, cloud-native, mobile, etc. Why should your existing enterprise applications continue to suffer their current fate rather than benefit from the advantages of containers?

Learn & get slides

Robin Hyper-Converged Kubernetes Platform Resources

Managing IOPS with Robin Hyper-Converged Kubernetes Platform

Learn More – Robin Hyper-Converged Kubernetes Platform for big data & databases

Managing IOPS with Robin Hyper-Converged Kubernetes Platform for Big Data & Databases

Allocate the right amount of IOPS to each application in your data center. Make sure no single application hogs all, or the majority of, the available IOPS. Set minimum and maximum IOPS for each application and change them dynamically with the Robin Hyper-Converged Kubernetes Platform for big data and databases.
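Robin’s actual QoS machinery is not documented here, but the general idea behind a per-application IOPS ceiling can be sketched with a token bucket: each I/O must claim a token, and tokens refill at the configured maximum rate. The class and figures below are hypothetical illustrations, not Robin’s implementation.

```python
# Illustrative token-bucket IOPS cap (hypothetical; not Robin's implementation).
import time


class IopsLimiter:
    """Cap a client's I/O rate at max_iops using a token bucket."""

    def __init__(self, max_iops, burst=None):
        self.rate = max_iops                        # tokens refilled per second
        self.capacity = burst if burst is not None else max_iops
        self.tokens = self.capacity                 # start with a full bucket
        self.last = time.monotonic()

    def acquire(self):
        """Block until one I/O token is available, then consume it."""
        while True:
            now = time.monotonic()
            # Refill tokens for the elapsed time, up to bucket capacity.
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return
            # Sleep just long enough for the next token to arrive.
            time.sleep((1 - self.tokens) / self.rate)


# Usage sketch: a noisy tenant capped at 500 IOPS would call
#   limiter = IopsLimiter(max_iops=500)
#   limiter.acquire()   # before issuing each I/O request
```

Raising or lowering max_iops at runtime corresponds to the dynamic adjustment described above; a real scheduler would also need a mechanism to guarantee the minimum, not just cap the maximum.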