


@CloudExpo: Interview

Cloud Datacenters: 4 Questions for Uptime Institute

Digital Infrastructure VP Steve Carter Tracks Progress of Public & Private Clouds

With the global growth of Cloud Computing solutions and the datacenters that support them, it seemed like a good time to fire a few questions to the Uptime Institute, which leads the global conversation about datacenters through its certifications, consulting, research, and educational programs.

So here are four questions for Steve Carter, VP of Digital Infrastructure Services at the Institute.

1. How are datacenters becoming more efficient? What are the major strategies being used to maximize processing power while trying to keep cooling costs under control?

Steve Carter: There are two strategies that should be part of any good datacenter efficiency improvement effort.

The first is reducing IT's electrical load by improving utilization of IT systems on a per-server basis. The second is improving, that is, lowering, the amount of overhead power required for the mechanical and electrical systems that support the IT load.

Our Digital Infrastructure Services clients that currently average 30% virtualization across their distributed systems can realize a 2:1 payback on money spent for three-year transformation projects that significantly reduce future total IT electrical loads.
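The two levers Steve describes can be sketched numerically. The figures below are illustrative assumptions, not Uptime Institute data: they show how raising per-server utilization (which shrinks the IT load) and lowering facility overhead (commonly expressed as PUE, Power Usage Effectiveness) combine to cut total facility power.

```python
# Illustrative sketch only; all figures below are assumptions,
# not Uptime Institute data.

def total_facility_power_kw(it_load_kw: float, pue: float) -> float:
    """Total power drawn: IT load times Power Usage Effectiveness."""
    return it_load_kw * pue

# Assumed baseline: 1,000 kW of IT load in a facility running at PUE 2.0.
before = total_facility_power_kw(1000, 2.0)   # 2,000 kW total draw

# After virtualization raises utilization (IT load assumed to drop 40%)
# and mechanical/electrical improvements bring PUE down to 1.5.
after = total_facility_power_kw(600, 1.5)     # 900 kW total draw

savings_pct = 100 * (before - after) / before
print(f"Facility power cut by {savings_pct:.0f}%")  # 55%
```

Note that the two improvements multiply: each lever alone saves less than the two applied together.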

2. How important will latency and related issues be to datacenters? That is, what is the potential for datacenters to serve customers beyond their national and even continental borders?

Steve: Several large global companies have successfully consolidated datacenters into single geographical regions. Significant effort was required to test and deploy application environments that are more tolerant of global latency. Many legacy application environments must be replaced by web-services-style environments before global consolidation is possible.

Consolidation allowed these global clients to reduce the total number of datacenter sites requiring global network connectivity. With fewer connectivity concentration sites, the savings can be redirected into higher bandwidth on the remaining circuits. In practice, cutting the total number of global circuits often let these companies dramatically increase bandwidth on the circuits that remained, at a lower total global cost.
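The circuit-consolidation economics above can be illustrated with a small arithmetic sketch. The site counts, circuit sizes, and per-Mbps rates are hypothetical; the only real assumption is the common pattern that per-Mbps pricing falls at higher capacities.

```python
# Hypothetical numbers illustrating the trade-off: many thin circuits
# versus a few fat ones, with assumed per-Mbps monthly rates.

# Before consolidation: 10 sites, each with a 100 Mbps circuit
# at an assumed $8 per Mbps per month.
before_bandwidth = 10 * 100          # 1,000 Mbps aggregate
before_cost = 10 * 100 * 8           # $8,000 per month

# After consolidation: 3 sites, each with a 1 Gbps circuit
# at an assumed $2 per Mbps per month (volume pricing).
after_bandwidth = 3 * 1000           # 3,000 Mbps aggregate
after_cost = 3 * 1000 * 2            # $6,000 per month

print(f"Bandwidth: {before_bandwidth} -> {after_bandwidth} Mbps")
print(f"Monthly cost: ${before_cost} -> ${after_cost}")
```

Under these assumed rates, aggregate bandwidth triples while the monthly spend drops by a quarter, which is the dynamic Steve describes.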

3. I've been guilty of equating "datacenter hosting" with "cloud computing," even though that's not always the case. What percentage of hosted datacenter services will be focused on cloud computing over the next few years?

Steve: Public, outsourced cloud services will be adopted at different rates depending on the industry sector and its maturity. Startups without legacy infrastructure show very high percentages of public cloud deployment. On the other end of the spectrum, financial services organizations will be much slower to implement public cloud services.

I believe that public cloud adoption will follow the trends we observed for virtualization from 2006 to the present. Areas such as application development environments were among the first to be virtualized in quantity. I think we are seeing the same trend develop for public cloud.

Applying cloud technologies within private datacenters is a trend that is gaining momentum. We have clients already in the development and test phases of transforming their client-facing web services environments from traditional architectures and infrastructures to private cloud environments.

4. To what degree do economies of scale start to apply to datacenters? That is, even with so much offsite cloud computing, there will be local, company-owned datacenters for many more decades, I would assume. Most of these would be smaller than large, hosted plants, right? So how important are economies of scale, and what can companies do to ensure their local datacenter is as optimized and efficient as possible?

Steve: I believe that private datacenters can benefit significantly by adopting the basic approaches used by datacenter service providers.

Service providers clearly understand their infrastructure CAPEX and OPEX costs for every square foot of space, every kW of power added and every BTU of cooling required.  This is not always true of private datacenter owners.  Understanding the true costs associated with any new added infrastructure requirement is necessary to effectively manage a datacenter at any scale.

Private datacenters benefit greatly from a clear understanding of how every additional kW of IT load impacts their CAPEX and OPEX performance. Service providers must understand these basic financial facts if they are to remain in business.
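The per-kW cost accounting Steve recommends can be sketched as a simple model. All unit costs here (build-out CAPEX per kW, O&M per kW, PUE, and power price) are assumptions for illustration, not industry benchmarks; a real datacenter would substitute its own measured figures.

```python
# Sketch of per-kW cost accounting for added IT load.
# All unit costs below are assumed, illustrative values.

def cost_of_added_it_load(added_kw, capex_per_kw, opex_per_kw_year,
                          pue, power_price_per_kwh):
    """Return (one-time CAPEX, annual OPEX) for newly added IT load."""
    capex = added_kw * capex_per_kw
    # Annual energy cost: added load, grossed up by PUE for cooling
    # and electrical overhead, running 8,760 hours per year.
    energy = added_kw * pue * 8760 * power_price_per_kwh
    # Plus fixed operations & maintenance per kW.
    opex = energy + added_kw * opex_per_kw_year
    return capex, opex

# Assumed: adding 50 kW at $12,000/kW build-out, $300/kW/yr O&M,
# PUE 1.6, and $0.10 per kWh.
capex, opex = cost_of_added_it_load(
    added_kw=50, capex_per_kw=12000, opex_per_kw_year=300,
    pue=1.6, power_price_per_kwh=0.10)
print(f"One-time CAPEX ${capex:,.0f}, annual OPEX ${opex:,.0f}")
```

Tracking these two numbers for every added kW is exactly the discipline that service providers apply and that private datacenter owners often lack.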

More Stories By Roger Strukhoff

Roger Strukhoff (@IoT2040) is Executive Director of the Tau Institute for Global ICT Research, with offices in Illinois and Manila. He is Conference Chair of @CloudExpo & @ThingsExpo, and Editor of SYS-CON Media's CloudComputing BigData & IoT Journals. He holds a BA from Knox College & conducted MBA studies at CSU-East Bay.
