How to diagnose cloud performance issues

Is your public cloud-based workload too slow? Not sure where to look first? Here are some quick guidelines for diagnosing the root cause of most performance issues.

I’ve found that many people in IT who can quickly diagnose issues with traditional systems have trouble diagnosing cloud-based systems. Why? Because they don’t have a deep understanding of what’s in a public cloud, such as Amazon Web Services or Microsoft Azure, so they treat it as a black box.

That’s really not the case. Plus, the system management tools and APIs that most public clouds provide are first-rate. However, you do have to understand where to look first, and what tools to use.

Cloud performance is hard to diagnose because, at the end of the day, a cloud is a complex distributed system. However, follow the five diagnostic steps below to find and fix root causes. If you find performance issues at one step, don’t stop there! You may have more than one issue affecting performance.

1. Check the infrastructure that supports the workloads, both application and data

Using system monitoring and log analysis tools, you can determine CPU and storage utilization, which are the most likely culprits.
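For example, on AWS you can pull utilization history with a few lines of Python against the CloudWatch API via boto3. This is a minimal sketch, not a full monitoring setup: the instance ID is a placeholder, and the 90 percent threshold is just an assumed rule of thumb for flagging saturation.

    import boto3
    from datetime import datetime, timedelta, timezone

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    end = datetime.now(timezone.utc)
    start = end - timedelta(hours=24)

    # Hourly CPU utilization for one instance over the past day.
    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder ID
        StartTime=start,
        EndTime=end,
        Period=3600,
        Statistics=["Average", "Maximum"],
    )

    for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
        flag = "  <-- possible saturation" if point["Maximum"] > 90 else ""  # assumed threshold
        print(f"{point['Timestamp']:%Y-%m-%d %H:%M}  "
              f"avg={point['Average']:5.1f}%  max={point['Maximum']:5.1f}%{flag}")

The same pattern works for storage metrics such as volume throughput, and Azure and Google Cloud expose equivalent monitoring APIs if that’s where your workloads live.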

Many IT pros using clouds fail to allocate more CPUs and storage as an application’s and database’s sizes expand over time. Although you might assume that a public cloud automatically expands to meet your needs, that’s not the case. You need to configure and provision more servers to handle the additional workload before they are needed.

2. Look at the applications themselves

There are many monitoring tools that can peer into applications, and I strongly recommend that you use one or more of them.

Applications are the culprit for poor performance almost as often as the infrastructure is, because they may not have been refactored or modified to use cloud-native features. Thus, they can become very inefficient at using the infrastructure, which falsely puts the performance blame on the infrastructure.
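If you don’t yet have a full application performance monitoring (APM) suite in place, even lightweight instrumentation can show where an application wastes the infrastructure’s time. Here’s a minimal Python sketch; the 500ms threshold and the fetch_orders function are hypothetical stand-ins for your own code.

    import functools
    import logging
    import time

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("perf")

    def timed(threshold_ms=500):
        # Log any call slower than threshold_ms; a crude stand-in for the
        # per-transaction tracing a real APM tool provides.
        def decorator(fn):
            @functools.wraps(fn)
            def wrapper(*args, **kwargs):
                start = time.perf_counter()
                try:
                    return fn(*args, **kwargs)
                finally:
                    elapsed_ms = (time.perf_counter() - start) * 1000
                    if elapsed_ms > threshold_ms:
                        log.warning("%s took %.0f ms", fn.__name__, elapsed_ms)
            return wrapper
        return decorator

    @timed(threshold_ms=500)
    def fetch_orders(customer_id):
        # Hypothetical data-access call: a chatty, row-at-a-time query loop
        # here would show up as slow calls even when the database is healthy.
        time.sleep(0.7)  # simulate an inefficient call

    fetch_orders(42)

Calls that are consistently slow while the infrastructure metrics from step 1 look healthy point at the application, not the cloud.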

3. Look at other less likely root causes of performance issues

Now it’s time to check other components. Check the security system: Encryption services can saturate storage and compute. Check the governance services—even the monitoring services that will tell you about performance issues in the first place. I’ve found that all such tools can oversaturate the infrastructure.
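One quick way to spot a heavyweight agent is to sample per-process CPU on the instance itself. This sketch assumes the third-party psutil library, and assumes your security, governance, and monitoring agents run as ordinary processes you can see.

    import time
    import psutil  # third-party: pip install psutil

    # Prime the per-process CPU counters, then sample over an interval.
    for p in psutil.process_iter():
        try:
            p.cpu_percent(None)
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            pass

    time.sleep(5)

    samples = []
    for p in psutil.process_iter(["name"]):
        try:
            samples.append((p.cpu_percent(None), p.info["name"]))
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            pass

    # The top consumers; an encryption or monitoring agent near the top
    # of this list is worth a closer look.
    for cpu, name in sorted(samples, reverse=True)[:10]:
        print(f"{cpu:6.1f}%  {name}")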

4. Move to the network, including bandwidth checks inside and outside the cloud

Because you consume public cloud services over the open internet, you’re often competing with lots of other packets. To see if that’s a cause of your poor performance, run ping tests, as well as upload and download tests that approximate the data your cloud-based workloads transmit and consume.
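A rough first pass needs nothing more than the Python standard library: ping the endpoint for latency, then time a download sized like your real traffic. The host name and test URL below are placeholders for whatever your workload actually talks to.

    import subprocess
    import time
    import urllib.request

    host = "workload.example.com"  # placeholder: your cloud endpoint

    # Round-trip latency over the open internet (assumes a Unix-style ping).
    subprocess.run(["ping", "-c", "5", host], check=False)

    # Rough download throughput, using a test object sized like real traffic.
    url = f"https://{host}/test-object"  # hypothetical test payload
    start = time.perf_counter()
    data = urllib.request.urlopen(url).read()
    elapsed = time.perf_counter() - start
    print(f"Downloaded {len(data) / 1e6:.1f} MB in {elapsed:.1f} s "
          f"({len(data) * 8 / elapsed / 1e6:.1f} Mbit/s)")

Run the same tests from inside the cloud (for example, from another instance in the same region) to separate internet congestion from problems inside the provider’s network.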

5. Examine the users’ browsers and computers

Finally, there are often issues with the users’ browsers that interact with the cloud-based application.

I’ve found that malware, encryption issues, and basically all the other stuff that can go wrong with Windows PCs and Macs can slow cloud performance at the client side. Have tech support run those down if the first four steps come up clean.


Why cloud adoption isn’t slowing datacenter growth

I’m always interested in datacenters because I live in Northern Virginia, where a new one opens about once a month, leveraging a huge bundle of fiber coming out of the ground near Dulles Airport and cheap power sources. Indeed, they now call my region “Datacenter Alley.” 

A report by JLL shows that the strong movement of data from private corporate servers to cloud services, coupled with a growing corporate interest in internet of things (IoT) initiatives, is pushing the demand for these new datacenters. With data usage skyrocketing, major cloud providers expect to triple their infrastructure by 2020, so they are building or renting datacenter space to keep up with the growth.

But at the same time, enterprises are not giving up their private or hosted traditional datacenters. That’s a natural part of the process, because you just can’t shut down the legacy systems before bringing their functionality to public cloud platforms. That duality is a redundancy cost of cloud migration.

So, we’re going to have more datacenters in the short term. However, as we share pools of platforms on public cloud providers, and do so far more efficiently, we should end up with fewer datacenters, right?

Not for at least the next ten years.

There are a few factors driving this delay in dumping the corporate datacenter:

First, enterprises have no plans to give up their datacenters. Although some companies have very publicly reduced their own datacenters, most of the companies that have datacenters now will have them five years from now. They simply don’t seem to believe their increased use of the cloud means they will eventually decrease their private datacenter usage.      

Second, enterprises have tax and business reasons to hang on to their datacenters. I’ve worked with many enterprises that have datacenter leases that run for another ten years. Moreover, the CFOs often find that owning the hardware and software provides tax advantages that they are not willing to give up.

My bet is that keeping the legacy datacenters is both expensive and getting old fast, so the factors keeping all those corporate datacenters in use will change. Modernization is in order, and the most effective way to modernize is as part of a migration to the cloud and the modern datacenters and services already there.

Sooner or later, enterprises will truly understand that the measure of success is how many effective services you have working for your business, not whether you can physically touch the gear they run on. Those services will run in datacenters—but fewer and fewer should run on ones you own.


You want innovation? You’ll have to go to the cloud

Are we at the tipping point with cloud computing? As more technology comes out on public clouds, cloud technology seems to be pushing the limits of innovation. It’s still an emerging approach, yet the degree of innovation in the public cloud seems to have surpassed the innovation of technologies that remain on premises. 

A case in point is the abundance of machine learning technology that’s now based in the public cloud. But the trend does not stop there. Intelligent databases, internet of things, advanced identity-based security, and containers and container operations are more examples of where the innovation is in the cloud. 

Of course, traditional on-premises providers have footholds in the public cloud as well. Most enterprise databases, middleware, applications, operations, and management systems have both on-premises and cloud-based versions.

But I don’t consider these traditional providers part of the tipping point. Traditional on-premises providers are innovating largely for on-premises platforms and using platform analogs to run in the cloud as hosted software. It’s an afterthought more than a strategy; they are not yet cloud-native.

What continues to emerge in the public cloud is new technology that never had an on-premises version—nor ever will. Just a few years ago, the new technologies that arose in the cloud were interesting, but the cloud offerings didn’t provide feature parity with on-premises systems—a critical requirement for enterprise customers. But today, new cloud-based technologies lead their market, such as the machine learning offerings from Microsoft and Amazon Web Services, serverless development, and advanced security services.

In the past, enterprises looked to the cloud only to support systems that were moving from their premises. They wanted what they already had, just deployed in the cloud. These days, the best tools and technology for both cloud and on-premises systems are cloud-delivered.

The new essential technology is in the cloud, not so much on-premises. The market has tipped.
