The cloud can’t fix poor application performance

Have you heard the fairy tale that application performance on the cloud is automatically optimized, without any effort from developers or administrators?

Too many people believe it’s reality, and not a fairy tale.

I blame the confusion on early cloud hype, when “elasticity” was often pitched as if it were a performance feature. Elasticity does let you scale on demand by provisioning servers, or these days automatically through serverless computing, but the concept by itself does not guarantee well-performing applications.

There are three reasons the elasticity reality doesn’t live up to the performance fairy tale:

First, performance issues typically stem from the design, development, and deployment of the application itself. Poorly performing applications do not benefit from faster or more numerous virtual processors to the extent that some people might assume.

Application performance is engineered into the application by those who designed and built it.
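
To put rough numbers on that point, here is a minimal Amdahl’s-law sketch (all of the figures are illustrative assumptions, not measurements): if most of an application’s wall-clock time is serialized in chatty database calls, lock contention, or single-threaded code, piling on vCPUs barely moves the needle.

```python
# Illustrative Amdahl's-law estimate: the speedup from adding vCPUs when only
# part of an application's work can actually run in parallel.
def amdahl_speedup(parallel_fraction: float, processors: int) -> float:
    """Theoretical speedup when `parallel_fraction` of the runtime benefits
    from extra processors and the rest stays serial."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / processors)

if __name__ == "__main__":
    # Assumed profile of a poorly designed app: 80 percent of its time is
    # serialized (chatty database calls, lock contention, single-threaded code).
    parallel_fraction = 0.2
    for vcpus in (2, 4, 8, 16):
        print(f"{vcpus:>2} vCPUs -> ~{amdahl_speedup(parallel_fraction, vcpus):.2f}x speedup")
    # Even 16 vCPUs yields only about 1.23x; the bottleneck is the design, not the hardware.
```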

Second, you’ll spend more money for less return. You can squeeze marginal gains for unoptimized software out of cloud platforms’ virtual hardware and services, but you’ll end up paying more in cloud fees for a minimal improvement in performance.

There are public clouds that provide auto-scaling and auto-provisioning services, and it can be tempting to use them if application performance is an issue. But turning them on means that you’ve pushed control to the cloud provider to try to solve the applications’ intrinsic performance problems. In many instances, you’re giving the cloud provider a blank check. Some of my clients have received huge and unexpected cloud bills as a result of their use of auto-scaling and auto-provisioning services.
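
If you do turn on auto-scaling, at least put a ceiling on it. Here is a minimal sketch, assuming an AWS ECS service managed through boto3’s Application Auto Scaling API; the cluster and service names are hypothetical, and the point is the explicit MaxCapacity, which keeps the provider from scaling against an open-ended budget.

```python
# Minimal sketch: cap auto-scaling so the provider can't run up an open-ended
# bill compensating for an application's intrinsic inefficiency.
# Assumes an AWS ECS service; the cluster and service names are hypothetical.
import boto3

autoscaling = boto3.client("application-autoscaling", region_name="us-east-1")

autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId="service/prod-cluster/orders-api",   # hypothetical
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=6,   # hard ceiling: the "blank check" stops here
)

autoscaling.put_scaling_policy(
    PolicyName="orders-api-cpu-target",
    ServiceNamespace="ecs",
    ResourceId="service/prod-cluster/orders-api",
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,   # aim for roughly 60 percent average CPU
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
    },
)
```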

Third, you’ll likely forget about security and governance, which are performance killers if not done correctly. For example, if you encrypt everything per government regulations, you could reduce performance by as much as 25 percent. The good news is that the penalty was closer to 50 percent just a few years ago. The developer of a well-engineered application will have thought through the encryption overhead in how it manages data in the first place, to minimize that penalty.
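
The only way to know what encryption actually costs your application is to measure it against your own data. Here is a rough micro-benchmark sketch using the Python cryptography package’s AES-GCM primitive; the 64 MB payload size is an arbitrary assumption.

```python
# Rough micro-benchmark sketch: measure what AES-256-GCM encryption costs on
# your own hardware and payload sizes, instead of trusting rule-of-thumb
# percentages. Requires the `cryptography` package (pip install cryptography).
import os
import time

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

payload = os.urandom(64 * 1024 * 1024)   # 64 MB of random data (arbitrary assumption)

key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)
nonce = os.urandom(12)                   # AES-GCM requires a unique nonce per message

start = time.perf_counter()
ciphertext = aesgcm.encrypt(nonce, payload, None)   # None = no associated data
elapsed = time.perf_counter() - start

megabytes = len(payload) / (1024 * 1024)
print(f"AES-256-GCM encrypted {megabytes:.0f} MB in {elapsed:.3f}s "
      f"(~{megabytes / elapsed:.0f} MB/s on this machine)")
```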

What to do instead

The answer is not to simply turn the performance problem over to your cloud provider. Instead, you have to do the design, development, and testing work to get the best performance.

As you “lift and shift” applications to the cloud, think through how you’ll address systemic performance issues before the move. That’s the only way.

Watch out for serverless computing’s blind spot

Serverless computing is an exciting aspect of public cloud computing: You no longer have to provision virtual servers in the cloud; that’s done automatically to meet the exact needs of your application.

Although the value of serverless computing is not in dispute, it’s my job to find potential downsides in new technologies so that my clients—and you—can avoid them. In the case of serverless computing, we may find that cloud architecture as a discipline suffers. Here’s why.

When building applications for server-oriented architectures (where the virtual servers need to be provisioned, including storage and compute), you have built-in policies around the use of resources, including the virtual server itself. After all, you have to provision servers before the workloads can access them. That means you’re well aware that they’re there, that they cost money, and that they’re configured for your workloads.

The serverless approach means you get what you need when you need it, which exempts the cloud architect from thinking critically about the resources the applications will require. There’s no need for server sizing, and budgets become a gray area because you’re basically in a world where resources are a function call away.

The danger is that cloud architects, along with application designers and developers, become easily removed from the process of advanced resource planning. As a result, applications use more resources than they should, leading to much higher costs and poor application design practices.

In other words, you’ve put yourself in a position where you don’t know what’s happening and can’t optimize for the best outcome or calculate what you’re spending. You’ve made yourself blind because the system will take care of it.

How do you get the advantages of serverless computing without falling into this blindness trap? Application designers and cloud architects need to set up best practices and guidelines in terms of the use of serverless cloud resources.

Unfortunately, there is little in the way of methodologies for doing that, and few tools are available right now. But you have to do what you can:

  • The first step is to understand this blindness risk.
  • The next step is to continue to do real resource planning up front, so serverless computing’s automation won’t have to compensate for wasteful workloads; a minimal guardrail sketch follows this list.
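
To make that planning concrete in code rather than leaving it to defaults, here is a minimal sketch, assuming an AWS Lambda function managed through boto3; the function name and the numbers are hypothetical placeholders for whatever your upfront planning produces.

```python
# Minimal guardrail sketch: make serverless resource decisions explicit
# instead of leaving them to defaults. Assumes an AWS Lambda function;
# the function name and limits below are hypothetical planning outputs.
import boto3

lambda_client = boto3.client("lambda", region_name="us-east-1")

# Right-size memory and cap runtime based on upfront profiling, not guesswork.
lambda_client.update_function_configuration(
    FunctionName="orders-ingest",   # hypothetical
    MemorySize=512,                 # MB, sized from the measured working set
    Timeout=30,                     # seconds, so runaway invocations can't inflate the bill
)

# Reserve a fixed slice of concurrency so one function can't consume the
# whole account's capacity (and budget) on its own.
lambda_client.put_function_concurrency(
    FunctionName="orders-ingest",
    ReservedConcurrentExecutions=50,
)
```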

The 3 big speed bumps to devops in the cloud

Devops and cloud—both concepts are hot, for good reason. Let’s take a look at the current state of devops and cloud, and how they fit into today’s technology sets.

Devops provides an approach and a group of technologies that help enterprise developers do a better, faster job of creating applications. It also eliminates the barriers between development and operations (thus the name “devops”).

The cloud, meaning the public cloud, provides the platform for devops. Although you can certainly do devops on premises, most enterprises want to reduce costs and increase speed. The cloud is where you look for those benefits.

All you have to do is mix devops and the cloud, like mixing chocolate and peanut butter, right? Well, no. Enterprises have made big blunders with devops and the cloud. Here are three elements you should understand to avoid making those blunders yourself.

1. You need a hybrid solution to devops

Today’s public clouds do not provide one-stop-shopping for devops. Although they have application development management, including support for devops, it’s still a world where you’ll have to cobble together a solution from a mix of products that includes public cloud services and, yes, traditional software.

For example, although you can have pipeline management and continuous integration services on most public clouds, you’ll have to go old-school for continuous testing and continuous deployment. The degree to which your services are cloud-centric versus local-platform-centric will make a big difference in that mix.

2. Devops isn’t as cheap as the cloud

Because you must use traditional platforms along with public clouds, the costs are higher than you’d expect. Many organizations budget the devops solution assuming it’s all cloud-based. But it isn’t. As a result, there are cost overruns all over the place when it comes to devops and the cloud.

3. The devops tools aren’t all here yet

Although vendors and IT organizations both continue to learn about the continuous development, testing, integration, and deployment that are fundamental to devops, we’re nowhere near nirvana. The super tools that automate everything, cloud or not, aren’t here yet.

The sales pitch for devops often promises a superhighway, but in reality that highway has lots of stoplights. You still have to stop and perform manual processes as part of devops automation. There’s no getting around it right now.

One day, we will get a true superhighway. The technology is getting there. But right now both devops and the cloud are works in progress. You should do devops, but understand the road you’ll actually be traveling.

How to keep multicloud complexity under control

“Multicloud” means that you use multiple public cloud providers, such as Google plus Amazon Web Services, AWS plus Microsoft, or all three; you get the idea. Although this seems to provide the best flexibility, there are trade-offs to consider.

The drawbacks I see at enterprise clients relate to added complexity. Dealing with multiple cloud providers does give you a choice of storage and compute solutions, but you must still deal with two or more clouds, two or more companies, two or more security systems … basically, two or more ways of doing anything. It can quickly get confusing.

For example, one client confused security systems and thus inadvertently left portions of its database open to attack. It’s like locking the back door of your house but leaving the front door wide open. In another case, storage was allocated on two clouds at once, when only one was needed. The client did not find out until a very large bill arrived at the end of the month.

Part of the problem is that public cloud providers are not built to work together. Although they won’t push back if you want to use public clouds other than their own, they don’t actively support this usage pattern. Therefore, you must come up with your own approaches, management technology, and cost accounting.
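
Even a simple scripted inventory counts as one of those approaches. Here is a DIY sketch, assuming AWS and Google Cloud accounts with the boto3 and google-cloud-storage client libraries installed and credentials already configured; the project ID is a hypothetical placeholder. It lists storage buckets side by side so duplicate allocations like the one above surface before the bill does.

```python
# DIY multicloud inventory sketch: list storage buckets on AWS and Google Cloud
# side by side so duplicate or forgotten allocations are visible before the
# bill arrives. Assumes credentials are already configured for both providers.
import boto3
from google.cloud import storage  # pip install google-cloud-storage

def aws_buckets() -> list[str]:
    s3 = boto3.client("s3")
    return [bucket["Name"] for bucket in s3.list_buckets()["Buckets"]]

def gcp_buckets(project_id: str) -> list[str]:
    client = storage.Client(project=project_id)
    return [bucket.name for bucket in client.list_buckets()]

if __name__ == "__main__":
    inventory = {
        "aws": aws_buckets(),
        "gcp": gcp_buckets("my-analytics-project"),  # hypothetical project ID
    }
    for cloud, buckets in inventory.items():
        print(f"{cloud}: {len(buckets)} buckets")
        for name in sorted(buckets):
            print(f"  - {name}")
```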

The good news is that there are ways to reduce the multicloud burden.

For one, managed services providers (MSPs) can manage your multicloud deployments for you. They provide gateways to public clouds and out-of-the-box solutions for management, cost accounting, governance, and security. They will also be happy to take your money to host your applications, as well as provide access to public cloud services.

If you lean more toward the DIY approach, you can use cloud management platforms (CMPs). These place a layer of abstraction between you and the complexity of managing multiple public clouds. As a result, you use a single mechanism to provision storage and compute, as well as for security and management no matter how many clouds you are using.
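
To make the abstraction idea concrete, here is a sketch using Apache Libcloud, an open source library that puts a common driver API in front of many providers; the credentials, region, and project below are hypothetical placeholders, and a commercial CMP layers governance and cost accounting on top of the same basic idea.

```python
# Sketch of the abstraction-layer idea: one code path lists compute nodes on
# two different public clouds through Apache Libcloud's common driver API.
# The credentials, region, and project below are hypothetical placeholders.
from libcloud.compute.providers import get_driver
from libcloud.compute.types import Provider

# Same driver interface, different providers.
ec2 = get_driver(Provider.EC2)("AWS_ACCESS_KEY", "AWS_SECRET_KEY",
                               region="us-east-1")
gce = get_driver(Provider.GCE)("svc-account@my-project.iam.gserviceaccount.com",
                               "key.json", project="my-project")

for label, driver in (("aws", ec2), ("gcp", gce)):
    for node in driver.list_nodes():
        print(f"{label}: {node.name} ({node.state})")
```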

I remain a fan of the multicloud approach. But you’ll get its best advantage if you understand the added complexity up front and the ways to reduce it.

Serverless computing will drive out OpenStack private clouds

By now we all (should) know the benefits of serverless computing in the public cloud. InfoWorld’s Eric Knorr provides a good summary of serverless computing’s advantages, so I won’t go into the details here.

What’s most interesting is that as Amazon Web Services, Google, and Microsoft get better and better, the private cloud providers are still moving at a snail’s pace. The public cloud is where we see new technologies take off, such as machine learning, big data, and now serverless computing. By contrast, the private cloud seems like the redheaded stepchild.

What went wrong? Private clouds have been largely tied to OpenStack and other open cloud standards. Although there are huge advantages to using open source, the fact is that all those open-source-based private cloud efforts can’t move as fast as a single company such as AWS. New technologies take forever to get through the open source process, then forever again to get adopted by all the vendors once they are formally developed and approved. That open source process explains the glacial pace of private cloud technology.

Only a few years ago, enterprise IT organizations looked to private clouds as a strategy to both do the cloud and maintain control of the hardware. As an excuse to do it themselves, IT organizations cited security and compliance issues—ironically, tasks that the public cloud providers ended up doing better. Indeed, security on public cloud-based systems is typically twice as good as that of any on-premises systems I deal with these days.

The enterprises that banked on private clouds a few years ago are now having second thoughts, given the public cloud’s core advantages, including its fast, on-demand support for serverless computing, machine learning, and big data.

That’s why I see not only the expected migrations of workloads from traditional systems to a public cloud, but also migrations from private clouds to public clouds picking up.

Private clouds will continue to grow, but their pace of growth will be a small fraction of the public cloud’s. That essentially flat growth will eventually turn into decline. 2017 is the inflection point, the beginning of the end of the short-lived private cloud phenomenon.
