How Do You Get the Benefits of Shared Resources in a Private Cloud?

In a private cloud, where does the sharing of resources come from?

I was recording a podcast last week on the subject of cloud with an emphasis on security and of course we talked in general about cloud and definitions. During the discussion the subject of “private cloud” computing was raised and one of the participants asked a very good question:

Some of the core benefits of cloud computing come from shared resources. In a private cloud, where does the sharing of resources come from?

I had to stop and think about that one for a second, because it’s not something I’ve really thought about before. But it was a valid point; without sharing of resources the reduction in operating costs is not as easily realized. But even in an enterprise data center there is a lot more sharing that could be going on than perhaps people realize.


SHARING IN THE ENTERPRISE


There is a plethora of ways in which sharing of resources can be accomplished in the enterprise. That's because there are just as many divisions within an organization for which resources are often dedicated as there are outside the organization. Sometimes the separation is maintained only in the financial ledger, but just as frequently it manifests physically in the data center as dedicated resources.

Individual initiatives. Departmental-level applications. Lines of business. Subsidiaries. Organizations absorbed, mostly via mergers and acquisitions.

Each of these "entities" can, and often does, have its own budget and thus dedicated resources. Some physical resources in the data center are dedicated to specific projects, departments, or lines of business, and it is often the case that the stakeholders of applications deployed on those resources "do not play well with others": they aren't about to compromise the integrity and performance of their application by sharing what might be perfectly good compute resources with other folks across the organization.

Thus it is perfectly reasonable to believe that there are quite a few "dedicated" resources in any large data center which could be shared across the organization. And given chargeback and project portfolio management methods, this results in savings in much the same manner as it would were the applications deployed to a public cloud.

But a good deal of compute resources also goes to waste in the data center due to constraints placed on hardware utilization by organizational operating policies. Many organizations still limit the total utilization of resources on any given machine to somewhere between 60% and 80%. Beyond that, administrators get nervous and begin thinking about deploying a second machine from which resources can be drawn. This is often out of consideration for performance and a fear of over-provisioning that could result in the dreaded "d" word: downtime.

Cloud computing models, however, are supposed to ensure availability and scalability through on-demand provisioning of resources. Thus if a single instance of an application begins to perform poorly or approaches capacity limits, another instance should be provisioned. The models themselves assume full utilization of all compute resources across available hardware, which means those pesky utilization limits should disappear.
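The on-demand provisioning rule described above can be sketched in a few lines. This is a minimal, hypothetical model, not any vendor's actual autoscaling logic; the threshold value and function names are assumptions made for illustration:

```python
# Hypothetical provisioning rule: add instances whenever average
# utilization across current instances meets or exceeds a capacity
# threshold. Cloud models can push this threshold toward full
# utilization instead of the traditional 60-80% comfort zone.

CAPACITY_THRESHOLD = 0.90  # illustrative; near-full utilization

def instances_needed(current_utilizations, threshold=CAPACITY_THRESHOLD):
    """Return how many additional instances to provision so that
    average utilization falls back below the threshold."""
    total_load = sum(current_utilizations)  # load in "instance units"
    count = len(current_utilizations)
    needed = count
    while total_load / needed >= threshold:
        needed += 1
    return needed - count

# Two instances both running hot at 95% utilization:
print(instances_needed([0.95, 0.95]))  # 1 more instance drops avg to ~63%
```

The point of the model is the direction of the decision: capacity triggers provisioning of another instance rather than a permanently reserved second machine sitting mostly idle.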

Imagine if you had twenty or thirty servers all capped at 60% utilization that were suddenly freed to run up to 90% (or higher). That 30-point gain per server adds up to the equivalent of 6 to 9 additional servers' worth of capacity in the data center. The increase in utilization offers the ability to share the resources that otherwise sat idle.


INCREASING VM DENSITY


If you need even more resources available to share across the organization, then it's necessary to increase the density of virtual machines within the data center. Instead of a 5:1 VM-to-physical-server ratio, you might try for 7:1 or 8:1. To do that, you're going to have to tune those virtual servers and ensure they are as efficient as possible so you don't compromise application availability or performance.
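Those density ratios translate directly into freed hardware. The sketch below uses a hypothetical fleet of 200 VMs to show how moving from 5:1 to 8:1 shrinks the number of hosts the same workload requires:

```python
# Illustrative VM density math: a higher VM-to-host ratio means the
# same set of virtual machines runs on fewer physical servers,
# freeing the remainder to be shared. Numbers are hypothetical.

def hosts_required(vm_count, density):
    """Physical servers needed at a given VM:host ratio (rounded up)."""
    return -(-vm_count // density)  # ceiling division

vms = 200
for ratio in (5, 7, 8):
    print(f"{ratio}:1 density -> {hosts_required(vms, ratio)} hosts")
# 5:1 needs 40 hosts; 8:1 needs only 25, freeing 15 servers to share
```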

Sounds harder than it is, trust me. The same technology, unified application delivery, that offloads compute-intensive operations from physical servers can do the same for virtual machines, because what these solutions are really doing in the physical case is optimizing the application, not the server. The offload techniques that provide such huge improvements in server efficiency come from optimizing applications and the network stack, neither of which is tied to the physical hardware; both are peculiar to the operating system and/or the application or web server on which an application is deployed. By optimizing the heck out of those, the benefits of offload technologies can be applied to all servers, virtual or physical.

That means lower utilization of resources on a per virtual machine basis, which allows an organization to increase the VM density in their data center and frees up resources across physical servers that can be “shared” by the entire organization.


CHANGE ATTITUDES AND ARCHITECTURES


The hardest thing about sharing resources in a private cloud implementation is going to be changing the attitudes of business stakeholders toward the sharing of resources. IT will have to assure those stakeholders that the sharing of resources will not adversely affect the performance of applications for which those stakeholders are responsible. IT will need to prove to business stakeholders that the resulting architecture may actually lower costs of deploying new applications in the data center because they’ll only be “paying” (at least on paper and in accounting ledgers) for what they actually use rather than what is available.

By sharing compute resources across all business entities in the data center, organizations can, in fact, realize the benefits of cloud computing models that come from the sharing of systems. It may take a bit more thought about which solutions are deployed as a foundation for that cloud computing model, but with the right solutions, ones that enable greater efficiencies and higher VM densities, the sharing of resources in a private cloud implementation can certainly be achieved.


Read the original blog entry...

More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
