Understand the multicloud management trade-off

One of the trends I’ve been seeing for a while is the use of multiple clouds, or multicloud. This typically means having two or three public clouds in the mix, leveraged at the same time. Sometimes you’re mixing in private clouds and traditional systems as well.

In some cases, applications and data even span two or more public clouds, mixing and matching cloud services. Why? Enterprises are seeking to leverage the best and most cost-effective cloud services, and sometimes that means picking and choosing from different cloud providers.

To make multicloud work best for an enterprise, you need to place a multicloud management tool, such as a CMP (cloud management platform) or a CSB (cloud services broker), between you and the multiple clouds. This spares you from having to deal with the complexities of the native cloud services from each cloud provider.

Instead, you deal with an abstraction layer, sometimes called a “single pane of glass,” where you use a single user interface, and sometimes a single set of APIs, to perform common tasks across the cloud providers you’re leveraging. Tasks may include provisioning storage or compute, auto-scaling, and data movement.

While many consider this a needed approach when dealing with complex multicloud solutions, there are some looming issues. The abstraction layers carry a trade-off when it comes to cloud service utilization: By not using the native interfaces from each cloud provider, you’re in essence not accessing the true power of that provider, but instead leveraging just a subset of its services.

Case in point: cloud storage. Say you’re provisioning storage through a CMP or CSB, and thus you’re leveraging an abstraction layer that has to use a least-common-denominator approach when managing the back-end cloud storage services. This means that you’re taking advantage of some storage services but not all. Although you do gain access to the storage services that the clouds have in common, you may miss out on storage services that are specific to one cloud, such as advanced caching or systemic encryption.
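
To make that trade-off concrete, here is a minimal Python sketch of a least-common-denominator abstraction. The interface and adapter names are hypothetical, not any real CMP’s API; the point is that only operations every back-end cloud supports can appear in the shared interface, so provider-specific features have nowhere to live.

```python
from abc import ABC, abstractmethod

class StorageProvider(ABC):
    """Least-common-denominator interface: only operations that every
    back-end cloud supports can be expressed here. Provider-specific
    features (advanced caching tiers, systemic encryption options)
    cannot be reached through this layer."""

    @abstractmethod
    def create_bucket(self, name: str) -> None: ...

    @abstractmethod
    def put_object(self, bucket: str, key: str, data: bytes) -> None: ...

class AwsStorage(StorageProvider):
    def create_bucket(self, name: str) -> None:
        print(f"[aws] create bucket {name}")        # would call the AWS SDK

    def put_object(self, bucket: str, key: str, data: bytes) -> None:
        print(f"[aws] put {key} into {bucket}")

class AzureStorage(StorageProvider):
    def create_bucket(self, name: str) -> None:
        print(f"[azure] create container {name}")   # would call the Azure SDK

    def put_object(self, bucket: str, key: str, data: bytes) -> None:
        print(f"[azure] put {key} into {bucket}")

# The management tool provisions through the shared interface, so the same
# call works against either cloud -- but only for the common subset.
for provider in (AwsStorage(), AzureStorage()):
    provider.create_bucket("inventory-data")
```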

The point here is that there is a trade-off. You can’t gain simplicity without sacrificing power. This may leave you with a much weaker solution than one that leverages all cloud-native features. No easy choices here.

Cloud app slow? Blame the app, not the cloud

It’s 7:00 a.m., and you’re in the office early. You’re hoping that nobody else is accessing the public cloud the company uses and that the inventory application will perform well for a change. However, even with just a handful of users on the cloud at that time of the morning, performance is still lackluster. 

The knee-jerk reaction is to blame the cloud provider. The provider is, of course, the host of the application and data, so any performance problems fall on its shoulders, right? Wrong.

Nine times out of ten I’m finding that performance issues are due to application design and the selection of enabling technology, rather than issues with the cloud infrastructure. Keep in mind that if you’re at capacity in a public cloud, you can simply add more. You can even scale on-demand as needed.

But in the case of our slow inventory app, those tricks just aren’t working. 

Back to a rule that I’ve repeated many times:

Crappy on-premises applications moved to the public cloud are just crappy applications in a public cloud.  

What’s happening is that enterprises moving to the public cloud are not looking at the application design, or the use of databases, middleware, or other enabling technology, before they push the application into the cloud. It compiles, it links to the database, data is flowing, good to go.

The reality is that the application will not only perform poorly, but will likely increase your cloud bill by 50 or 60 percent as the public cloud struggles to deal with an application that is not designed properly. Common issues are inefficient I/O, chatty applications, and non-optimized database queries—and those are only a few of the dozens of things that typically go wrong. 

The resolution to this problem is something that most folks in enterprise IT don’t want to hear: The application needs to be refactored. This will include making tweaks to the design and having parts of the application leverage cloud-native features, such as native I/O, database caches, and a bunch of other tricks to make your applications run well on a cloud, or on any platform for that matter.
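
To show what one of those fixes looks like, here is a hedged sketch of the “chatty application” problem, using Python’s built-in sqlite3 module as a stand-in for whatever database the application actually uses. Against a cloud database, the first pattern pays a network round trip per row; the second fetches everything in one.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inventory (sku TEXT, qty INTEGER)")
conn.executemany("INSERT INTO inventory VALUES (?, ?)",
                 [("a1", 5), ("b2", 0), ("c3", 12)])

skus = ["a1", "b2", "c3"]

# Chatty: one query per SKU. Cheap locally, but over a network each
# iteration is a full round trip to the database.
for sku in skus:
    conn.execute("SELECT qty FROM inventory WHERE sku = ?", (sku,)).fetchone()

# Refactored: one batched query retrieves all rows in a single round trip.
placeholders = ",".join("?" * len(skus))
rows = conn.execute(
    f"SELECT sku, qty FROM inventory WHERE sku IN ({placeholders})",
    skus).fetchall()
print(rows)
```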

I hate to be the cloud buzz-kill here. But make sure you budget time to redo poorly designed applications as they migrate to the cloud, or no matter how early you get to the office to use that crappy cloud app, it will never be early enough. 

Think twice before using bare-metal clouds

A bare-metal cloud allows you to rent hardware resources from a public cloud service provider, or sometimes a managed service provider. With a bare-metal cloud, you get direct access to the hardware platform without having to go through tenant management systems. Therefore, one of the benefits of bare-metal cloud, as it is sold to the public, is the ability to better support high-transaction workloads that do not tolerate latency. 

I’ve found that bare-metal is often used by tier 2 cloud providers, and managed services providers, as a selling point of their “cloud.” Indeed, enterprises that are still attempting to maintain control over their hardware and software often pick bare-metal to keep that control, typically without considering costs and workload requirements.

If you are thinking of leveraging a bare-metal cloud, keep these points in mind. 

First, make sure to compare costs to actual bare-metal, meaning hardware and software you can buy and install in a datacenter, or under your desk. In doing many of these cost models for clients, I’ve found that it is usually much cheaper to continue to buy your own hardware and software, including operations and maintenance.     

Second, the performance does not seem to be much better than traditional, multi-tenant cloud services. While you would think that bare-metal will “kill it” in terms of I/O performance and lower latency, public cloud providers have done such a good job of managing access to underlying physical resources that the difference is not that dramatic. However, do your own benchmarking. 
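
“Do your own benchmarking” can start very simply. Here is an illustrative Python micro-benchmark for synchronous disk-write latency; run the same script on a bare-metal instance and on a comparable multi-tenant VM and compare the numbers. It’s a rough proxy under stated assumptions (4KB blocks, fsync after every write), not a substitute for testing your actual workload.

```python
import os
import statistics
import time

def write_latency_ms(path="bench.tmp", block=4096, rounds=200):
    """Time synchronous 4 KB writes and report latency in milliseconds."""
    samples = []
    data = os.urandom(block)
    with open(path, "wb", buffering=0) as f:
        for _ in range(rounds):
            start = time.perf_counter()
            f.write(data)
            os.fsync(f.fileno())  # force the write through to disk
            samples.append((time.perf_counter() - start) * 1000)
    os.remove(path)
    samples.sort()
    return {"median": statistics.median(samples),
            "p99": samples[int(rounds * 0.99) - 1]}

print(write_latency_ms())
```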

Finally—and this is the deal breaker for me—it takes much longer to spin up servers on bare-metal clouds than on traditional clouds. This means that you are trading your ability to expand on demand and change on demand for the marginal benefit of running on bare metal. Considering that agility accounts for most of the value that cloud computing provides, moving to cloud without gaining it seems downright dumb.

Now, there certainly are some applications for bare-metal—I get that. My point here is that the majority of workloads I see ending up on bare-metal cloud instances get no benefit from being there, other than the ability for IT to claim proudly that they are on bare-metal clouds. Let’s work in reality, shall we?

Cloud portability is still science fiction

Enterprises want cloud portability. Why? Well, they want to hedge bets in case a public cloud provider “breaks bad” in terms of consistently poor service, or performance issues, or, more likely, jacking up the subscription fees to obnoxious levels.

As many CIOs I have talked to put it: “We need to have choices. Choices mean leverage.” I get that.

However, to have true choices, the workloads, including applications and data, need to be easily moveable from public cloud to public cloud. This means that the code will move, the data will move, and it’s a matter of recompiling, configuring, and testing on the new cloud platform. 

However, it’s never that easy. Indeed, if you’ve ported applications and data to public clouds, you’ve had to refactor them to leverage some cloud-native features. These include spinning up native compute and storage servers, leveraging native security and governance, etc. It’s impractical not to leverage these cloud-native services to support your applications; otherwise you pay far more in cloud service consumption, or you fail to meet the requirements of the business, such as security.

Being cloud native is good. However, it also greatly limits portability. The cloud-native services used on one public cloud must be rewritten against the native services of another public cloud. They are not compatible, and although everything is portable if you have enough time and money, these workloads would not be considered “pragmatically portable.”
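
As a small illustration of why native services resist porting, compare uploading a single object with AWS’s boto3 SDK and Google’s google-cloud-storage SDK for Python. The bucket and object names are hypothetical, and configured credentials are assumed; the clients, calls, and auth models all differ, and object storage is about the simplest service there is.

```python
# AWS S3, via boto3 (pip install boto3)
import boto3

s3 = boto3.client("s3")
s3.put_object(Bucket="my-bucket", Key="report.csv", Body=b"sku,qty\n")

# Google Cloud Storage, via google-cloud-storage
# (pip install google-cloud-storage)
from google.cloud import storage

bucket = storage.Client().bucket("my-bucket")
bucket.blob("report.csv").upload_from_string("sku,qty\n")
```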

Of course, many enterprises believe that new technology will save us, namely containers and serverless computing. Although serverless computing is great for net-new applications, meaning we’ve designed them from the ground up for a serverless architecture, there will be little hope for public cloud portability here. After all, the public cloud providers have their own cloud-native serverless capabilities, and those are mostly unique to each public cloud.

Containers have more promise, but it takes a great deal of work to shove old workloads into new containers. Again, the benefit of containers is mostly around net-new applications. You can “containerize” most applications, and, indeed, they would be easily ported from one cloud to another, but the amount of work and money needed would typically prohibit many enterprises from moving in that direction for most existing applications. 

So, is practical cloud portability still science fiction? For all practical purposes, it is for now. Sorry. 

Will your cloud smarts be rewarded?

Back on the topic of organizational impact around the implementation of cloud computing, I’ve had many questions about who’s going to be affected. Better put: Are IT organizations going to promote the better people to the top around the use of cloud computing?

As I tell those who will listen, this is an opportunity for IT leaders to improve things. However, I suspect that won’t occur. Here’s why.

As a pattern, I see executive leadership promote those in IT who have the least vision and the greatest political instincts. Case in point: Those who have been advocating internally for cloud computing for years are not getting promoted or even recognized. In some cases, it has even hurt their careers.

Indeed, those in IT who have pushed back on cloud computing have generally held their position in the company’s IT department, or have even been promoted. In some cases, they have gotten credit for the movement to cloud, when they were actually an impediment.     

This isn’t really a surprise. Unfortunately, we have a few core realities in corporate IT management. Pay is more a matter of how well an employee can negotiate a salary than of merit or talent. Moreover, and more important, those who are hired or promoted into management are often the more politically astute, rather than those who have a vision of what IT needs to be. This includes understanding the potential value of new technology, such as cloud computing or whatever is next.

What can you do? Not a lot if you’re not in executive leadership. However, I would recommend two ideas to consider:

  • First, use metrics for promotions and raises that value vision and innovation more than the ability to keep the executives above you happy.
  • Second, focus on value delivered by IT. This means reducing costs, but at the same time increasing productivity and agility and decreasing time to market. This typically means using cloud computing in strategic ways. 

There’s no easy way to fix this, considering that this is about people and not technology. It’s harder to change hearts and minds than platforms.   

How will the cloud change IT? Look at Microsoft

Microsoft is planning a global sales reorganization to better focus on selling cloud software, according to Microsoft insiders. This comes as no surprise, considering that Microsoft did the same for its ailing phone business last year.

What does this mean? Well, cloud in, software out—at least from the Microsoft business standpoint. However, count on Microsoft soaking you for more operating system and office automation money for years to come. So, that’s still a thing. 

What does this mean to you, if you’re not a cloud or software provider? Microsoft is a good use case for what the cloud is likely to do to your IT shop in the next two years. As cloud becomes more of a common enterprise platform, a few things will become apparent, including:

  • Those who work in the enterprise data center will get pink slips at some point in the next few years. The data centers that are owned and operated by enterprises are quickly becoming cost centers that boards of directors are no longer willing to fund. Either move to the cloud or move to a managed service provider. Enterprises have been exiting the data center business for the past several years, and the cloud will only accelerate that.
  • For that matter, anyone associated with the procurement of hardware and software will get the heave-ho as well. These are typically large layers of middle management who have been VPs of saying “no” for the past 20 years as enterprise lines of business attempted to set up systems for much-needed automation.  
  • Executives focused on the “traditional” systems will also find themselves out the door. Although some will attempt to reinvent themselves as cloud knowledgeable, most were pushing back hard against the use of cloud just a few years ago. I talk to executives every day who I think must be alien clones based on their quick change of attitude about cloud. Career survival, I guess.

It will be interesting to see how the cloud changes the landscape of IT, including the jobs needed and not needed. Indeed, this will likely be the most dramatic change we’ve experienced in the past 30 years of technology evolving.

The latest cyber attacks show why the cloud is safer

Computer systems from Ukraine to the United States were affected last week by the Petya cyber attack. It’s similar to last month’s WannaCry ransomware attack.

The WannaCry ransomware took advantage of vulnerabilities in older versions of Windows that allowed the infection to spread. All someone needed to do was click a malicious link and—bang!—they were infected. That is, if they hadn’t installed the patches and updates.

These attacks are a reminder of why the cloud is a safer place to do your computing.

The parade of attacks in recent years has forced enterprise IT to become more diligent about holistic security. These attacks succeed when security is not holistic, such as when patches and fixes are not applied.

But the generalized security fears have also caused many IT organizations to delay the adoption of new technologies, such as cloud computing. There’s a sense that something new, especially something managed by others, will make things more vulnerable.

Actually, the opposite is true.

Using the public cloud makes you less likely to get attacked and breached. The layers of security in the cloud are more than a deterrent for most attacks. Cloud providers proactively monitor their clouds, quickly spotting and blocking attacks. And they automatically apply operating system, application, and service patches and fixes behind the scenes.

Extremely few IT organizations do the same. The cost of security is just too much for most enterprises to bear, and most can’t keep up with all that needs to be done to keep their systems and users safe from WannaCry, Petya, and other malware that shut down systems.

Enterprises should not run in place when these attacks occur, but instead do a “look in the mirror” assessment around the state of systems and security. You’re likely to find deep issues that can’t be solved overnight. From there, you’ll need to plan the “to be” state of things, including how data, processes, PCs, mobile devices, IoT devices, and other elements are going to be secure.

As you undertake that effort, you’ll find that using the cloud is becoming the best fit for security. It may be counterintuitive to those who equate hands-on control with effective control, but it’s simply true. The cloud has had outages, yes, just like enterprise IT systems. But no major cloud provider has fallen victim to all the malware attacks of the last few years. What does that tell you?

3 tricks to better manage your public cloud services

Some people call them “cloud hacks,” which is perhaps more accurate than “cloud tricks,” but the enterprises I work with don’t like the term “hack.”

Whatever you prefer to call them, here are three shortcuts that will help you better manage your public cloud services.

Cloud trick No. 1: Customize your console

Both Amazon Web Services and Microsoft have consoles that provide a master control view of resources on their clouds. With them, you can see what’s available and what you have already provisioned.

Most public cloud IaaS consoles let you configure the view via drag and drop, so the services you access most are at the top. This customized view makes you much more productive.

Cloud trick No. 2: Learn and use the CLIs

Most of us use the GUIs that public IaaS cloud providers offer. However, most IaaS cloud providers offer CLIs (command-line interfaces) as well.

When using a CLI, you can launch scripts more easily to do such operations as provisioning a group of resources, or shutting those resources down. Indeed, most people who start with the CLI rarely go back to the GUI. Myself included.

Although you do need to memorize commands to use a CLI, you’ll find that you’re much more productive once you’ve got the commands down. No longer do you need to navigate pages of a GUI to find what you need. (However, the potential for error also increases when using a CLI, since you’re operating without training wheels.)
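
As a taste of the scripting payoff, here is a minimal sketch that drives the AWS CLI from Python to shut down a group of instances in one shot. It assumes the aws CLI is installed and configured, and the instance IDs are hypothetical stand-ins for a dev environment you stop nightly.

```python
import subprocess

# Hypothetical instance IDs for a dev environment shut down after hours.
dev_instances = ["i-0abc123def456789a", "i-0fedcba9876543210"]

# One scripted command stops the whole group; no GUI navigation needed.
subprocess.run(
    ["aws", "ec2", "stop-instances", "--instance-ids", *dev_instances],
    check=True)
```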

Cloud trick No. 3: Automate billing information

Most IaaS cloud providers let you set up budgets for your cloud costs, to keep you out of trouble by setting maximums that trigger alerts as your usage gets close to reaching them. However, it’s really better to also keep an eye on the bill daily. To do that, set up a daily email that shows the costs for that day, down to the resources and the activities on those resources.

This does more than help you keep an eye on costs. It also lets you proactively manage the usage of those resources. I can’t tell you how many times I’ve spotted issues by seeing their cost, rather than in the monitoring console.
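
Here’s a hedged sketch of such a daily report on AWS, using boto3’s Cost Explorer client and its get_cost_and_usage call. It assumes Cost Explorer is enabled on the account; you would wire the output into whatever email or chat alerting you already use.

```python
from datetime import date, timedelta

import boto3

ce = boto3.client("ce")  # Cost Explorer
yesterday = date.today() - timedelta(days=1)

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": yesterday.isoformat(),
                "End": date.today().isoformat()},
    Granularity="DAILY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

# Print yesterday's spend per service; feed this into your daily email.
for group in resp["ResultsByTime"][0]["Groups"]:
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    if amount > 0:
        print(f"{group['Keys'][0]}: ${amount:.2f}")
```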

Don’t bet too soon on the hot cloud technologies

We all know what’s cool now in the cloud: microservices, devops, containers, and machine learning. It’s what guys like me are writing and speaking about. However, the overapplication of these technologies could end up hurting you greatly. Here’s why.

On one hand, I want to promote the use of new technology, such as cloud computing and containers. But, on the other hand, I need to have a good understanding of what business problems my clients are looking to solve, to determine the correct application of any technology, new, old, hyped, taken for granted, whatever. 

What typically happens is that the people looking to move into cloud are up on all the hyped technologies. It’s like shopping for a new car: You can have a pretty long list of what you think you need: self-parking, heated seats, bending lights, voice assistance, childproof seating, maybe short-range flight.

But unlike the case with a car, you have legacy to deal with when you move to the cloud. Your applications are old, and many are so poorly structured that they have no hope of running in containers or using a microservices architecture. Moreover, they typically present a huge layer of security and performance issues to solve before you can even think of moving them to the cloud.

This situation is the reality at about half the enterprises out there. So, those companies need to spend the first few years focusing on the fundamentals, such as application design, database design, security, and performance—that old-timey boring stuff. But many companies jump right into whatever new technology they view as their savior and end up face-planting in just a year or so.

That “prepare first” path to the cloud is easy to understand, but often much harder to accept—and harder to do for three reasons:

  • First, you need to understand your own business and technology requirements, both now and into the future.
  • Second, you need to understand your current state.
  • Third, you need to define your future desired state and the path to get there, including any enabling technology you’ll need (whatever that technology is, and no matter whether it’s been featured in the latest tech pubs).

The reality is that most of the technologies hyped today won’t become standard for years. That’s okay—you can start thinking about how you might take advantage of them one day. But in the meantime, you need to move forward with clear purpose. First crawl, then walk, then run, then think about competing in the Olympics.

‘Pick that cloud, lose our business’: What to do

Here’s a shocker: Wal-Mart is telling some technology companies that if they want Wal-Mart’s business, they can’t use Amazon Web Services. (Wal-Mart says it simply doesn’t want customers storing Wal-Mart’s sensitive info on AWS.) That’s a tall order for technology companies that may have invested millions in their tech running on AWS.

However, if you see it from Wal-Mart’s point of view, Amazon.com’s retail business is costing it billions a year in lost sales, so why not fight back by reducing Amazon’s AWS income from not just Wal-Mart but Wal-Mart’s customers? After all, Amazon.com refuses to sell products from Apple and Google that compete with its own streaming devices and services. 

The larger lesson here for enterprises is that politics can be part of the price of your public cloud choice. You’ll find that both existing and potential customers are concerned about the platform you use, including your selection of AWS, Microsoft, or Google as your cloud provider.

All three major cloud providers have businesses that compete with other companies outside the cloud platform business. Amazon.com, for example, competes with pretty much any retailer. But it’s not just direct competition that opens up business conflicts. I’ve seen companies push back on Google due to Google’s extensive data collection practices, and I’ve seen companies push back on Microsoft due to issues with Microsoft’s enterprise software licenses.

The potential political conflicts are particularly acute if you’re a technology-driven company — and who isn’t these days? — and need to pick a cloud as your platform. Or you’ve already picked one, and now its owner is in conflict with a key customer and wants you to end that established, expensive relationship.

You of course want to pick a public cloud provider based on how its capabilities match up with your requirements and budget. Having to worry about losing revenue from a partner or customer because of that choice shouldn’t be a concern — but it increasingly is. In fact, I’m seeing clauses in contracts these days that specify “no-fly clouds,” where enterprises don’t want their data stored. They have nothing to do with the technology; it’s all perception, including risk, and, yes, spite.

Smaller enterprises, such as those doing business with Wal-Mart, are going to feel the brunt of this. They simply have less leverage, so they can more easily be bullied.

One constructive recommendation I can make is to work the multicloud angle. I’ve repeatedly recommended a multicloud strategy to gain redundancy and resiliency, but another benefit is that it defuses the politics (you haven’t chosen sides) and makes any imposed migrations easier to accomplish.
