by Umasankar Mukkara on May 9, 2013
Whew. What a day! The storage world has been sent into overdrive with EMC throwing itself into the software-defined storage (SDS) race. We are excited by this development and welcome the industry leader as it recognizes the tremendous benefits of SDS and how SDS solutions are better suited to meet the cloud storage demands of performance, flexibility and scale. We are hopeful that EMC’s entry into the fray will accelerate the move towards a software-defined datacenter, with an open, standards-based approach to defining the API requirements for storage provisioning and management.
With its ViPR announcement, EMC brings a management layer on top of its legacy storage systems, effectively bringing storage provisioning and management into its software-definition stack. This opens up an interesting opportunity for other storage vendors like CloudByte, which have well-defined APIs that can be used to integrate easily with EMC platforms. Service providers can now mix storage vendor technologies in their datacenters and yet have the flexibility to manage storage volumes in a unified way from their own cloud management portals.
One of the key challenges that service providers face today is defining a storage platform that can compete with, or improve on, Amazon EBS. So, is a software-defined storage management layer enough to build an EBS-style platform?
Quite obviously, the answer is no!
To build a storage platform that competes with EBS, service providers need performance control over their software-defined storage volumes. That is what next-generation storage companies like CloudByte bring to the table with their QoS-aware storage platforms. CloudByte has an easy-to-implement, comprehensive API layer that already integrates well with cloud orchestration layers like OpenStack Cinder and CloudStack storage plugins, and with server virtualization SDKs like the VMware VI SDK. By integrating the CloudByte management layer with the newly announced EMC management layer, we believe service providers will soon be able to bring CloudByte ElastiStor’s performance control to EMC iSCSI block storage.
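To make the integration idea concrete, here is a minimal sketch of how a service provider’s portal might create a QoS-aware volume through a storage vendor’s REST API. The endpoint path, payload fields and authentication header are hypothetical; the point is simply that capacity and performance are provisioned together in a single call.

```python
# Hypothetical example: provisioning a QoS-aware volume through a vendor REST API.
# The endpoint, payload fields, and authentication scheme below are illustrative only.
import json
import urllib.request

API_URL = "https://storage.example.com/api/v1/volumes"   # hypothetical endpoint
API_KEY = "replace-with-real-key"

payload = {
    "name": "tenant42-db-vol",
    "capacity_gb": 500,
    "qos": {                      # performance is provisioned alongside capacity
        "iops": 2000,
        "throughput_mbps": 100,
        "latency_ms": 5,
    },
    "protocol": "iscsi",
}

request = urllib.request.Request(
    API_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json", "X-Api-Key": API_KEY},
    method="POST",
)

with urllib.request.urlopen(request) as response:
    volume = json.loads(response.read())
    print("Provisioned volume:", volume.get("id"))
```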
It remains to be seen whether EMC’s entry will push everyone towards a common set of APIs for storage volume management at the array level, which would be a great development for the entire storage community!
by Team CloudByte on March 18, 2013
We’ve seen a lot of confusion around all-flash storage arrays and the claim that they are the only solution to the storage QoS problem. Far from being the only solution, all-flash storage arrays cannot even be counted among the few storage QoS solutions available today. In short, all-flash storage ≠ storage QoS. This confusion in service providers’ minds is obviously seeded by (a few) all-flash storage vendors, and we think it’s time to take a deep breath and delve deeper into the issue.
If you’re wondering why there is such a fuss about storage QoS, here’s a quick intro to storage QoS from the Forrester blogs and why it is a must-have feature for enterprises and the cloud. Also, read this excellent post by Arun Taneja, where he discusses automating QoS provisioning.
“Storage QoS allows performance provisioning at a granular level. This functionality should provide controls on transaction (IOPS) and throughput (GB per second) performance – typically set on a LUN basis, but will ideally enforce policies at a VM level in the future”
Henry Baltazar, Senior Analyst, Forrester
Storage QoS needs a re-architecture of the legacy monolithic storage controller
Now, how does an all-flash array help deliver storage QoS to every application/VM? Hint: it does not. Delivering QoS to every application within a shared storage platform requires a re-architecture of the monolithic storage controller and has nothing to do with adding fancy-sounding (and expensive) storage media. The storage controller must be architected from the ground up for multi-tenancy, i.e., the controller should be able to isolate storage boundaries for every application within shared storage and dedicate a set of resources according to its performance demands. These resources should then be continuously monitored and tweaked to achieve the desired QoS for applications with disparate workloads. In short, a multi-tenant storage controller completely cures the noisy-neighbor problem and delivers tailored QoS to every application from a shared storage platform. Read more about CloudByte technology and how it delivers storage QoS here and here.
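As a rough illustration of the kind of per-tenant isolation such a controller must perform (this is not CloudByte’s actual implementation), the sketch below caps each tenant’s IOPS with a simple token bucket, so a spike from one tenant cannot consume another tenant’s budget.

```python
# Illustrative sketch only: per-tenant IOPS isolation with a token bucket.
# This is not CloudByte's implementation; it just shows the isolation idea.
import time


class TenantIopsLimiter:
    """Admits I/O requests for one tenant at no more than `iops_limit` per second."""

    def __init__(self, iops_limit):
        self.iops_limit = iops_limit
        self.tokens = float(iops_limit)
        self.last_refill = time.monotonic()

    def try_admit(self):
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at one second's worth.
        self.tokens = min(self.iops_limit,
                          self.tokens + (now - self.last_refill) * self.iops_limit)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True          # request may be issued to the backend now
        return False             # request must wait; tenant is at its IOPS cap


# One limiter per tenant: a noisy neighbour only exhausts its own bucket.
limiters = {"tenant-a": TenantIopsLimiter(1000), "tenant-b": TenantIopsLimiter(250)}
print(limiters["tenant-b"].try_admit())
```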
All-flash arrays are an inefficient and expensive workaround to deliver QoS
So, why do all-flash storage vendors claim QoS? As we’ll see, it’s a workaround, and an inefficient and expensive one at that. By deploying an expensive all-flash array, you can be assured of hundreds of thousands of IOPS, and this helps muzzle the noisy neighbors to a certain extent. Even if an application sees a spike in its workload, the other applications are still guaranteed some IOPS thanks to the sheer performance capability of all-flash storage. This is neither ideal (it still does not guarantee QoS) nor efficient (all-flash arrays are too expensive for generic workloads). This is precisely why Forrester’s Henry calls the all-flash approach “indiscriminately throwing performance at the QoS problem” and “not a good (or fiscally responsible) long term answer”.
We still believe all-flash arrays are an efficient solution for a specific set of workloads, as are other storage media, whether SATA or SAS. For a comprehensive storage solution, service providers and enterprises should demand storage QoS solutions that are device agnostic.
by Team CloudByte on March 4, 2013
For organizations today, running applications in the cloud has become a matter of “when will we deploy,” and not “should we deploy.” Even conservative IT shops that may have shied away from the public cloud have at least embraced the cost and efficiency advantages of deploying utility computing.
Large “retail-class” infrastructure as a service (IaaS) providers, like Amazon Web Services, work well for applications that require scaling to high aggregate processing throughput or near-infinite storage capacity. But, there are significant challenges in hosting performance-sensitive applications. A significant business opportunity exists for Cloud Service Providers (CSPs) that can support QoS-sensitive workloads, like Oracle, SAP, SAS, OLTP, ERP, etc.
Challenges in Hosting Performance-Sensitive Applications
Hosting performance-sensitive enterprise applications requires delivery of guaranteed QoS, which has been the Achilles heel of large cloud service providers. In fact, without much effort, one can find many horror stories from many organizations trying to get databases running—and keep them running—in these environments.
So, what stops legacy solutions from delivering guaranteed QoS? Noisy neighbors! Within a shared storage platform, legacy solutions cannot isolate and dedicate a specific set of resources to any application. As a result, applications are in a constant struggle for the shared storage resources. An application’s IOPS, throughput, and latency are determined by the current state of the system (i.e., whatever resources happen to be available), rather than being in sync with its workload characteristics. An obvious solution is to dedicate physical storage per application, but this implies a huge waste of resources and an exorbitant cost structure for the CSP. The only efficient solution is to virtually isolate applications and dedicate/control the resources allotted to them right from the shared storage platform, i.e., to provide a truly multi-tenant solution.
CloudByte ElastiStor Guarantees QoS right from Shared Storage
ElastiStor, with its patented TSM™ architecture, is specifically designed for hosting multiple disparate workloads on a single system. For the first time, ElastiStor delivers guaranteed QoS to every application within shared storage, resolving the noisy-neighbor problem. In addition to industry-first multi-tenant capabilities, ElastiStor provides all the standard storage features that CSPs need. Software-only and software-defined, ElastiStor also frees CSPs from proprietary lock-in, eliminating large upfront and ongoing investments.
View the whitepaper below to understand how ElastiStor cures the noisy neighbor issues!
by Umasankar Mukkara on February 20, 2013
CloudByte takes another logical step forward toward being the storage leader for new-age datacenters. Close on the heels of the Citrix Ready certification, CloudByte ElastiStor has now been certified by VMware. With the VMware NAS certification under ElastiStor’s belt, VMware users can confidently deploy ElastiStor in staging and production environments and realize the benefits of our groundbreaking technology.
What’s more, CloudByte has developed a VMware vCenter plugin that allows admins to provision NFS datastores for virtual machines and configure their QoS parameters (IOPS, throughput, latency) right from the vCenter console.
Download your eval copy of ElastiStor and discover how we make storage easy, affordable and predictable in virtualized environments!
by Umasankar Mukkara on February 18, 2013
Recently, I was invited to spend some time with the CloudStack Bangalore user group, and I loved the energy in the room. Giles Sirett (from ShapeBlue) talked about the ecosystem needed to build an AWS-style cloud and how CloudStack can help. His experiences with CloudStack were truly enlightening.
I shared my thoughts around the evolution of multi-tenancy in various software layers and the main requirements of multi-tenant storage. Here is the deck I presented:
by Guest on January 22, 2013
Guest post by George Crump, President and Founder of Storage Switzerland. With 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS and SAN. Prior to founding Storage Switzerland he was CTO at one of the nation’s largest storage integrators, where he was in charge of technology testing, integration and product selection.
Originally posted at Storage Switzerland
Cloud Service Providers (CSPs) and Managed Service Providers (MSPs) face a unique storage challenge that many other organizations don’t have to deal with: large-scale provisioning of storage resources. Thanks to server virtualization, these organizations provision and manage server resources data center wide, but they struggle to do the same, at least data center wide, with their storage resources. As a result, storage is either massively over-provisioned, wasting money, or intensely and manually monitored, wasting time and personnel while slowing down the deployment of new servers and applications.
The Problem With Legacy Storage Provisioning
This need for better storage provisioning capabilities has led storage suppliers to add storage virtualization capabilities to their legacy storage systems. But this virtualization is often internal, meaning it is isolated to a single system and a single manufacturer. Internal storage virtualization has simplified the provisioning process to a degree, by allowing an administrator to simply select the size of the partition and letting the storage system do more of the work. But with internal virtualization the administrator must still receive every storage request, analyze it, and know where to provision that storage from, all of which becomes a bottleneck to service delivery.
It also leads to multiple storage virtualization software instances running, as each system from each manufacturer has its own software that must be learned and interacted with. The CSP/MSP typically has a wide collection of storage hardware. This would be similar to having a different brand of hypervisor loaded on every server and having to manage each of those separately.
Legacy provisioning, as provided by internal storage virtualization, also requires that the administrator know which type of storage, and which storage system, the provisioning will come from. The administrator needs to make the connection between the performance needs of the application and the storage media types available in the environment. They must know which media types and systems are best suited to each type of request.
Amazon EC2 has solved this problem, but EC2 is not a technology; it is a service to end customers. If another MSP/CSP wants to provide services similar to, or better than, EC2, legacy technologies cannot come to the rescue.
Provisioning is More Than Capacity
The internal storage virtualization capabilities found in legacy systems today are limited to the provisioning of capacity. Storage, like servers, has more than just one resource, and applications use those resources differently depending on the situation. Storage resources include the storage CPU, storage controller memory, internal cache management and network bandwidth, in addition to the physical capacity required. The combination and control of these resources determine the IOPS (input/output operations per second), throughput and storage latency that a storage system can deliver. But, as is the case with storage capacity, not all servers or applications need the same amount of IOPS or the same storage latency. Legacy storage systems simply don’t provide a granular way to allocate performance within a storage system.
The inability to provision performance plagues even more modern storage systems, as well as storage virtualization software that claims to be designed for the highly virtualized data center. The reality is that these systems may be appropriate for those situations but are not able to meet the provisioning needs of the CSP/MSP.
Provisioning Requirements of the CSP/MSP
The CSP/MSP foreshadows what the enterprise will become in the near future: a data center that is judged on its ability to respond rapidly to an ever-growing and ever more demanding user base. In the case of a CSP/MSP these “users” are accounts that pay a monthly fee and have specific service-level agreement (SLA) requirements of the CSP/MSP. The speed at which provisioning can be performed, and the ability for that provisioning rule to be maintained over time, is the foundational component in meeting those SLAs.
For the CSP/MSP to be profitable, they cannot afford to hire administrators every time a new account is brought on, or even after every 100 accounts. Instead they need to be able to safely delegate provisioning to the account while maintaining oversight. This means allocating a certain amount of capacity and IOPS/throughput/latency per account and then allowing the account to divide up those resources based on need.
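A hedged sketch of what “delegating provisioning while maintaining oversight” could look like in code: the provider grants an account a capacity and IOPS budget, and the account carves volumes out of that budget without being able to exceed it. The class and field names are invented for illustration.

```python
# Illustrative only: an account subdivides a capacity/IOPS budget granted by the provider.
class AccountAllocation:
    def __init__(self, capacity_gb, iops):
        self.capacity_gb = capacity_gb   # total budget granted by the CSP/MSP
        self.iops = iops
        self.volumes = {}                # name -> (capacity_gb, iops)

    def provision_volume(self, name, capacity_gb, iops):
        used_gb = sum(c for c, _ in self.volumes.values())
        used_iops = sum(i for _, i in self.volumes.values())
        # Self-service provisioning is allowed only within the delegated budget.
        if used_gb + capacity_gb > self.capacity_gb or used_iops + iops > self.iops:
            raise ValueError("request exceeds the account's delegated budget")
        self.volumes[name] = (capacity_gb, iops)


account = AccountAllocation(capacity_gb=10000, iops=20000)
account.provision_volume("oltp-db", capacity_gb=2000, iops=8000)
account.provision_volume("file-share", capacity_gb=5000, iops=2000)
```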
Self-monitoring may also be needed in many cases. The account holder wants to know how much they are using, on which application, at what time. This helps them better manage the applications they run at a CSP/MSP.
Managing Users’ Expectations
A key challenge with not being able to provision IOPS in legacy systems is that the performance experience cannot be controlled. This creates an expectations problem, because users that sign up for a bronze service level get the same performance experience that a gold service level gets.
Even if different classes of system are used to allocate the performance resources, the first set of users on a system will experience a higher than promised level of performance and then see their performance degrade as more accounts are added. The CSP/MSP needs the ability to guarantee a certain level of performance, no more, no less, so that users’ expectations can be managed.
This level of performance needs to remain constant, so the performance that the user sees from their assigned storage is the same today as it will be a year from now. Changes to the environment and even the storage system itself should not impact the user nor jeopardize the SLA.
The MSP/CSP needs to balance the cost advantages of maximizing storage resources with the customer satisfaction risks associated with extending a system too far. They need a storage system that will allow them to granularly assign capacity and performance resources so that these systems can be taken to their maximum capabilities without risking customer satisfaction.
Essentially, each available GB and IOPS needs to be bought and paid for before investing in an additional system. This allows new storage investment to be trended based on the rate at which resources are being consumed on present systems. In short, the storage environment needs to scale the way the CSP/MSP’s business scales.
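One way to read “trended based on the rate that resources are being consumed” is a simple run-rate projection, as in the sketch below (illustrative arithmetic only): given current utilization and a weekly consumption rate, estimate how many weeks remain before the next system must be ordered.

```python
# Illustrative run-rate projection: when must the next storage system be ordered?
def weeks_until_exhausted(total, used, weekly_consumption):
    """Return how many weeks of headroom remain for one resource (GB or IOPS)."""
    return (total - used) / weekly_consumption


capacity_headroom = weeks_until_exhausted(total=500000, used=350000, weekly_consumption=8000)
iops_headroom = weeks_until_exhausted(total=200000, used=140000, weekly_consumption=2500)

# Ordering lead time must be planned around whichever resource runs out first.
print("Weeks until a new system is needed:", min(capacity_headroom, iops_headroom))
```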
When a customer or account demands storage with varying levels of performance and capacity, volumes may need to be provisioned from different storage systems. The administrator needs to know how much capacity and performance each storage system has left. When this is managed manually by the administrator, storage fragmentation usually occurs.
Storage fragmentation is a phenomenon in which many storage systems each retain some free resources, yet no single system can satisfy a particular type of request on its own. For example, if there are 10 storage systems in the infrastructure and the CSP/MSP admin provisions 5TB/1,000 IOPS volumes evenly across all of them, then once the systems are 70% full it may no longer be possible to provision a 5TB/20,000 IOPS volume, because such a volume needs to be written across a large number of disks, and those disks are already 70% full. Intelligent and automated provisioning guidelines help avoid such a scenario.
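The fragmentation scenario above can be made concrete with a small sketch (the numbers and data structure are invented for illustration): every array still has free capacity and free IOPS, yet no single array can satisfy a large request, so the request is unplaceable even though the aggregate free resources would be enough.

```python
# Illustrative sketch of storage fragmentation across several arrays.
# Each array still has free capacity and IOPS, but no single array can host the request.
arrays = [
    {"name": "array-%d" % i, "free_tb": 1.5, "free_iops": 3000}   # systems ~70% full
    for i in range(10)
]

request = {"size_tb": 5, "iops": 20000}

fits = [a for a in arrays
        if a["free_tb"] >= request["size_tb"] and a["free_iops"] >= request["iops"]]

total_free_tb = sum(a["free_tb"] for a in arrays)
total_free_iops = sum(a["free_iops"] for a in arrays)

print("Aggregate free resources: %.1f TB, %d IOPS" % (total_free_tb, total_free_iops))
print("Arrays able to host the 5TB/20000 IOPS request:", len(fits))   # -> 0
```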
Multi-Vendor, Multi-Tier Provisioning
CSPs/MSPs also need the storage system to provide this provisioning along with other storage services, like thin provisioning, snapshots, cloning and replication, across multiple storage platforms, even those from different vendors. This allows the CSP/MSP to manage their entire storage environment from a single interface, regardless of the manufacturer of the individual platform. Performance can then be allocated intelligently across platforms by finding the storage system with the resources that best match the IOPS requirement. It also saves the MSP/CSP from the vendor lock-in associated with buying a single vendor’s system, giving them the flexibility to select storage systems based on suitability to the task at hand.
Introducing Elastic Provisioning
Elastic provisioning is the ability to provision both capacity and performance resources data center wide from a single interface. It models the server virtualization concept by deploying a series of off-the-shelf servers to act as physical storage controllers. The storage in the environment is then assigned to these storage controllers. Since these controllers are abstracted from the physical storage, they can manage a mixed storage vendor environment.
Elastic storage provides the ability to spawn virtual controllers, similar to how a server host spawns virtual servers. Each of these virtual controllers is assigned to an account. Capacity and IOPS/throughput/latency are then assigned to the virtual controller based on the needs of the account. The account can then subdivide that capacity and those SLA parameters based on the needs of each of its applications.
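A minimal data-model sketch of this idea follows; node names, vendors and numbers are invented. A per-account virtual controller is carved out of a mixed-vendor pool of physical nodes, with its capacity and IOPS reserved rather than shared.

```python
# Illustrative data model for elastic provisioning: virtual controllers carved
# out of a mixed-vendor pool of physical nodes. Names and numbers are invented.
physical_pool = [
    {"node": "node-1", "vendor": "vendor-a", "free_iops": 50000, "free_tb": 40},
    {"node": "node-2", "vendor": "vendor-b", "free_iops": 30000, "free_tb": 80},
]


def spawn_virtual_controller(account, iops, capacity_tb):
    """Place a per-account virtual controller on any node with enough headroom."""
    for node in physical_pool:
        if node["free_iops"] >= iops and node["free_tb"] >= capacity_tb:
            node["free_iops"] -= iops          # resources are reserved, not shared
            node["free_tb"] -= capacity_tb
            return {"account": account, "node": node["node"],
                    "iops": iops, "capacity_tb": capacity_tb}
    raise RuntimeError("no physical node can honour this SLA")


vc = spawn_virtual_controller("acme-corp", iops=20000, capacity_tb=10)
print(vc)
```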
This virtual controller functionality ensures that a misbehaving application at one account won’t impact the capacity or performance needs of another account. There is complete isolation. It also ensures that data can be segregated between accounts, another common concern for the CSP/MSP.
From CSP to the Enterprise
It is easy to see how the enterprise could leverage these capabilities as well. Instead of accounts, different lines of business or application groups could be assigned virtual storage controllers. Those groups could then manage their own storage without risk to the other groups. As is the case with CSP/MSPs the enterprise also has a mix of storage systems and could benefit from a centralized controller cluster.
Provisioning of storage remains a key challenge in data centers of all types and sizes but it is especially problematic for the CSP/MSP. It becomes THE bottleneck in rapidly responding to customer requests and its limitations make it difficult to guarantee long term adherence to SLAs. Elastic provisioning is a viable solution to this problem. It provides for multi-vendor provisioning of both capacity and performance resources.
Comments from CloudByte
Learn more about how CloudByte ElastiStor addresses this provisioning challenge with its QoS-configurable storage endpoints and on-demand provisioning at http://www.cloudbyte.com/products_features.aspx
by Team CloudByte on January 2, 2013
Wish you all a very happy new year from CloudByte! A new year beckons a new introduction from us – so, here we go.
by Umasankar Mukkara on December 20, 2012
We’re delighted to inform you that CloudByte ElastiStor has successfully passed the rigorous Citrix Ready certification tests for XenServer 6.0 and CloudPlatform 2.x and 3.x. We’re now fully Citrix Ready!
Citrix Ready certification is an important milestone on our path to a leadership position in the cloud storage space, where delivering tailored storage performance to each customer/application is a definite need. This certification was achieved on CloudByte ElastiStor 1.0, which can host up to 50 TSMs on a single physical node. This means that a single HA pair of CloudByte ElastiStor 1.0 can serve the storage needs of 50 customers hosted on XenServer- and CloudStack-based virtualization setups.
You can now confidently deploy ElastiStor as your storage infrastructure if you are a Citrix XenServer and/or Citrix CloudPlatform customer. To get your evaluation copy of CloudByte ElastiStor 1.0, please visit http://cloudbyte.com/eap.aspx today!
by Felix Xavier on December 6, 2012
It is now widely acknowledged that software-defined datacenters are the future, with their benefits well understood. Everything appears to be software-defined these days, from servers and switches to storage. With many storage vendors now jumping on this bandwagon, the real question that CIOs and IT managers face is: “Is it really software-defined?” Just as placing a Facebook link on your website doesn’t make your business “social”, placing a software layer on top, or being “software-only”, doesn’t make your storage “software-defined”.
Software-Defined Servers and Network
Abstracting hardware from the software layer is one of the most important revolutions delivered by server virtualization vendors like VMware, Hyper-V and Xen. With this abstraction, servers could be defined in terms of CPU, memory and network cards, and provisioned from a pool of underlying hardware. This revolution seeded the idea of software-defined datacenters. The networking world quickly responded to this shift with software-defined networking (SDN), where new standards (OpenFlow) emerged and network paths could be dynamically defined from the software layer. OpenFlow started off with software-based switches and was later adopted by hardware vendors as well.
Storage: The Missing Piece in your Software-Defined Datacenter
Being a conservative component, storage has been the odd man out in the software-defined datacenter. Legacy storage still requires you to hardwire a dedicated set of drives and/or a dedicated set of storage controllers to configure the required amount of storage performance in terms of IOPS, throughput and latency.
Unfortunately, in its current form, legacy storage cannot be software-defined. In a legacy storage controller, storage endpoints can only be defined in terms of capacity, making it impossible to configure storage performance from a software layer. Legacy storage controllers need to be completely re-architected to deliver software-defined storage. Hence, it is imperative for IT managers to be wary of vendors trying to pass off solutions that merely have a software-only delivery model as SDS.
CloudByte Storage Controllers are Built from the Ground-up to be Software-Defined
By building a new class of storage controllers from the ground up, CloudByte eliminates the need to hardwire storage controllers and disks to achieve a desired level of storage performance. CloudByte has built intelligent storage nodes, where each storage endpoint/volume is defined in terms of IOPS, throughput, and latency, in addition to capacity. CloudByte’s automated provisioning helps you determine the right storage node and storage pool to deliver the required storage performance and capacity. With CloudByte ElastiStor, storage is truly software-defined, with complete abstraction of the underlying hardware infrastructure.
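As a hedged illustration of what “automated provisioning picks the right node and pool” can mean (this is not CloudByte’s actual algorithm), the sketch below chooses the pool whose remaining headroom most tightly fits the requested capacity and IOPS, keeping the fastest pools free for requests that really need them.

```python
# Illustrative best-fit placement: choose the pool that most tightly fits the request.
# Not CloudByte's algorithm; it only demonstrates performance-aware placement.
pools = [
    {"name": "ssd-pool-1", "free_iops": 80000, "free_gb": 20000},
    {"name": "sas-pool-1", "free_iops": 15000, "free_gb": 60000},
    {"name": "sata-pool-1", "free_iops": 4000, "free_gb": 200000},
]


def place(request_iops, request_gb):
    candidates = [p for p in pools
                  if p["free_iops"] >= request_iops and p["free_gb"] >= request_gb]
    if not candidates:
        return None
    # Best fit: waste as little performance headroom as possible.
    return min(candidates, key=lambda p: p["free_iops"] - request_iops)


print(place(request_iops=12000, request_gb=500))   # -> sas-pool-1, not the SSD pool
```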
by Felix Xavier on November 26, 2012
This article was originally published in Cloudstory.in on Nov 23, 2012.
With its recent announcement of IOPS provisioning, Amazon has once again leapfrogged ahead of its competition in the cloud computing industry. Few may have realized that this particular announcement is backed by a breakthrough technology. Historically, storage performance has been considered unpredictable due to the various factors involved and the technology’s limitations in controlling them. By offering guaranteed IOPS for its storage volumes, AWS appears to have solved this complex problem, albeit in a limited way (we’ll elaborate on this a little later).
Legacy Solutions Limit Cloud Service Providers’ Offerings
Storage performance unpredictability is the key factor limiting the seamless integration of storage in a cloud environment. Legacy storage solutions can define their storage endpoints only in terms of capacity and cannot offer predictable storage performance. As a result, performance-sensitive applications are not deployed in a shared storage environment. Rather, these applications are given a dedicated set of storage controllers and disks that can deliver the required performance. Unable to leverage a shared storage platform, CSPs do not realize any economies of scale in storage, resulting in exorbitantly high storage costs. Typically, CSPs either do not host performance-sensitive applications or charge much more for them.
AWS has Leapfrogged Competition with its IOPS-guaranteed Storage Performance
Amazon is targeting precisely these performance-sensitive applications by offering IOPS-guaranteed performance. Cost-effective, and with the flexibility of on-demand performance and capacity, AWS provides an ideal platform for such applications. More importantly, this is the largest revenue market within cloud storage, with industry estimates of a US$4bn global market that is rapidly chipping away at on-premise enterprise storage. With its technology breakthrough, AWS definitely seems to hold an unfair advantage over its competition in attracting performance-sensitive applications to its shared storage environment.
So, how can CSPs effectively compete with AWS?
It’s actually pretty simple: CSPs must equip themselves with better technology than even AWS has. While AWS provides only IOPS-guaranteed performance, with next-generation storage solutions CSPs can now get storage performance that is guaranteed in terms of IOPS, throughput, and latency. While the AWS offering suits only IOPS-sensitive applications, other CSPs can now do better by hosting even throughput- and latency-sensitive applications. In other words, CSPs can now host a database application, a video streaming application and an email application on the same storage platform, providing both the cost-effectiveness and the flexibility of on-demand performance that enterprises desire.
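To illustrate what “guaranteed in terms of IOPS, throughput, and latency” might look like when expressed as provisioning policy (the field names and numbers below are invented), disparate workloads on the same platform simply carry different SLA triplets:

```python
# Illustrative SLA definitions for disparate workloads on one shared platform.
# Field names and values are invented; they only show the three performance dimensions.
sla_policies = {
    "oltp-database":   {"iops": 15000, "throughput_mbps": 80,  "latency_ms": 2},
    "video-streaming": {"iops": 500,   "throughput_mbps": 400, "latency_ms": 50},
    "corporate-email": {"iops": 1200,  "throughput_mbps": 20,  "latency_ms": 20},
}

for app, sla in sla_policies.items():
    print("%s -> %d IOPS, %d MB/s, %d ms" % (app, sla["iops"],
                                             sla["throughput_mbps"], sla["latency_ms"]))
```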
Selecting the Next-Generation Storage Solution
I believe it is important to carefully screen the next-generation storage solutions to avoid any future embarrassments. Of course, they must all satisfy the basic requirement of superior technology i.e., guaranteed storage performance, in terms of IOPS, throughput, and latency. In addition, it is best to look for solutions that are seamlessly scalable, avoid any proprietary lock-in (both in terms of hardware and software), and meet enterprise-class security standards.