Make sense of edge computing vs. cloud computing

The internet of things is real, and it’s a real part of the cloud. A key challenge is how to process the data coming from so many devices. Cisco Systems predicts that cloud traffic is likely to rise nearly fourfold by 2020, increasing from 3.9 zettabytes (ZB) per year in 2015 (the latest full year for which data is available) to 14.1ZB per year by 2020.

As a result, the growth of IoT could create a perfect storm for cloud computing. After all, IoT is about processing meaningful device-generated data, and cloud computing is about using centralized computing and storage to work with that data. The growth rates of both can easily become unmanageable.

So what do we do? The answer is something called “edge computing.” Edge computing pushes most of the data processing out to the edge of the network, close to the source of the data. Then it’s a matter of dividing the processing between the edge and the centralized system, meaning a public cloud such as Amazon Web Services, Google Cloud, or Microsoft Azure.

That may sound like a client/server architecture, which also involved figuring out what to do at the client versus at the server. For IoT and any highly distributed application, you’ve essentially got a client/network edge/server architecture going on, or, if your devices can’t do any processing themselves, a network edge/server architecture.

The goal is to process the data a device needs quickly, such as data it must act on, close to that device. There are hundreds of use cases where reaction time is the key value of the IoT system, and consistently sending the data back to a centralized cloud destroys that value.

You would still use the cloud for processing that is either not as time-sensitive or is not needed by the device, such as for big data analytics on data from all your devices.
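To make that split concrete, here’s a minimal Python sketch of the idea (mine, not from the column): an edge node reacts to time-critical readings locally and only batches everything else up to a central cloud endpoint for analytics. The threshold and the ingest URL are hypothetical placeholders.

```python
import json
import urllib.request

ALARM_THRESHOLD = 90.0                                    # act locally above this value
CLOUD_ENDPOINT = "https://analytics.example.com/ingest"   # hypothetical ingest URL
batch = []

def trigger_local_alarm(reading):
    # Time-sensitive path: handled right at the edge, no round trip to the cloud
    print(f"ALARM: sensor {reading['sensor_id']} reported {reading['value']}")

def handle_reading(reading):
    if reading["value"] >= ALARM_THRESHOLD:
        trigger_local_alarm(reading)   # react in milliseconds, near the device
    batch.append(reading)              # keep everything for later analytics

def flush_to_cloud():
    # Non-time-sensitive path: batched and shipped to the centralized cloud
    if not batch:
        return
    payload = json.dumps(batch).encode("utf-8")
    req = urllib.request.Request(CLOUD_ENDPOINT, data=payload,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)        # placeholder endpoint; swap in a real one
    batch.clear()
```

Only the alarm path needs millisecond-level latency; the analytics path can tolerate a delay of seconds or minutes, which is why it can live in the public cloud.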

There’s another dimension to this: edge computing and cloud computing are two very different things. One does not replace the other. But too many articles confuse IT pros by suggesting that edge computing will displace cloud computing. It’s no more true than saying PCs would displace the datacenter.

It makes perfect sense to create purpose-built edge computing-based applications, such as an app that places data processing in a sensor to quickly process reactions to alarms. But you’re not going to place your inventory-control data and applications at the edge — moving all compute to the edge would result in a distributed, unsecured, and unmanageable mess.

All the public cloud providers have IoT strategies and technology stacks that include, or will include, edge computing. Edge and cloud computing can and do work well together, but edge computing is for purpose-built systems with special needs. Cloud computing is a more general-purpose platform that also can work with purpose-built systems in that old client/server model.

Good news: CIOs have stopped fighting the cloud

I call them the “folded-arm gang”: those CIOs who invite the “cloud guy” into a meeting and then push back on everything he says, for no good technical reason. It’s frustrating.

But things are changing. CIOs who once pushed back on cloud computing have either changed their minds or have been fired. Either reason is fine with me.

You can see that shift in a study by Trustmarque, in which more than nine in ten U.K. CIOs and IT decision-makers polled said they plan to migrate their organizations’ on-premises workloads to the cloud within five years. The study polled 200 CIOs and senior IT decision-makers in enterprises with more than 1,000 employees.

Most surprising is that public-sector U.K. CIOs were more likely to move quickly compared to their private-sector counterparts. That’s not the case in the U.S., where public-sector CIOs are way behind the private sector.

The stated driver for the shift was mostly cost savings, cited by 61 percent. A close second was scalability, at 60 percent. Solving that pesky business-agility problem came in at 51 percent. A bit less than half (49 percent) said that replacing existing infrastructure (such as storage and compute) was the primary driver for migrating to the cloud. Indeed, more than half of the CIOs said the complexity of their existing IT infrastructure was causing too much latency.

When it comes to technology deployments, the U.S. tends to be a bit more aggressive than the U.K., so add 10 percent to these numbers to get American CIOs’ take on cloud computing.

For the last decade, CIOs have been a big barrier to cloud adoption. That’s partly because maintaining the status quo meant being employed another year; deployment disasters rivaled security breaches as a sure path to the exit door. So avoiding risk was considered a victory.

These days, CEOs and boards of directors are wise to the value of IT, and thus the value of cloud computing, as a strategic business advantage. They ask much more of their CIOs than they did in the past. This forces everyone from the top down to understand more about cloud, and for CIOs to actually do the work. I’ll take it.

The cloud can’t fix poor application performance

Have you heard the fairy tale that application performance on the cloud is automatically optimized, without any effort from developers or administrators?

Too many people believe it’s reality, and not a fairy tale.

I blame the confusion on early cloud hype, when “elasticity” was often pitched as something related to cloud performance. Although elasticity does let you scale on demand by provisioning servers, or perhaps automatically these days using serverless computing technology, elasticity by itself does not guarantee well-performing applications.

There are three reasons the elasticity reality doesn’t live up to the performance fairy tale:

First, performance issues typically lie in the design, development, and deployment of the application itself. Poorly performing applications do not benefit from faster or more numerous virtual processors to the extent that some people assume.

Application performance is engineered into the application by those who designed and built it.
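As an illustration (mine, not the article’s), consider the same task written two ways in Python. No amount of extra or faster virtual CPUs rescues the quadratic version once the record count gets large; the fix is in the design, not the infrastructure.

```python
def has_duplicates_slow(records):
    # O(n^2): compares every record with every other record
    for i in range(len(records)):
        for j in range(i + 1, len(records)):
            if records[i] == records[j]:
                return True
    return False

def has_duplicates_fast(records):
    # O(n): one pass with a set; this is performance engineered in
    seen = set()
    for r in records:
        if r in seen:
            return True
        seen.add(r)
    return False
```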

Second, you’ll spend more money for less return. Although you can get marginal performance benefits for unoptimized software from cloud platforms’ virtual hardware and services, the fact is you’ll end up spending more on cloud services for a minimal return on performance gains.

There are public clouds that provide auto-scaling and auto-provisioning services, and it can be tempting to use them if application performance is an issue. But turning them on means that you’ve pushed control to the cloud provider to try to solve the applications’ intrinsic performance problems. In many instances, you’re giving the cloud provider a blank check. Some of my clients have received huge and unexpected cloud bills as a result of their use of auto-scaling and auto-provisioning services.

Third, you’ll likely forget about security and governance, which are performance killers if not done correctly. For example, if you encrypt everything per government regulations, you could reduce performance by as much as 25 percent. The good news is that it was 50 percent just a few years ago. The developer of a well-engineered application will have thought through the encryption overhead in how it manages the data in the first place, to minimize the encryption penalty.
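One hedged sketch of what “thinking through the encryption overhead” can look like: encrypt only the fields that actually require protection rather than every byte that moves. This example uses the third-party Python cryptography package (Fernet); the field names and record shape are invented, and whether field-level encryption satisfies your regulations is a question for your compliance team.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice the key comes from a key manager
fernet = Fernet(key)

SENSITIVE_FIELDS = {"ssn", "card_number"}   # invented field names

def protect(record: dict) -> dict:
    # Encrypt only the fields that require protection, not the whole record
    out = {}
    for field, value in record.items():
        if field in SENSITIVE_FIELDS:
            out[field] = fernet.encrypt(str(value).encode()).decode()
        else:
            out[field] = value       # bulky, non-sensitive data stays untouched
    return out

print(protect({"order_id": 42, "ssn": "123-45-6789", "total": 99.50}))
```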

What to do instead. The answer is not to simply turn the performance problem over to your cloud provider. Instead, you have to do the design, development, and testing work to get the best performance.

As you “lift and shift” applications to the cloud, think about how you’ll address systemic performance issues before you move them. That’s the only way.

Watch out for serverless computing’s blind spot

Serverless computing is an exciting aspect of public cloud computing: You no longer have to provision virtual servers in the cloud; that’s done automatically to meet the exact needs of your application.

Although the value of serverless computing is not in dispute, it’s my job to find potential downsides in new technologies so that my clients—and you—can avoid them. In the case of serverless computing, we may find that cloud architecture as a discipline suffers. Here’s why.

When building applications for server-oriented architectures (where the virtual servers need to be provisioned, including storage and compute), you have built-in policies around the use of resources, including the virtual server itself. After all, you have to provision servers before the workloads can access them. That means you’re well aware that they’re there, that they cost money, and that they’re configured for your workloads.

The serverless approach means you get what you need when you need it, which exempts the cloud architect from thinking critically about the resources the applications will require. There’s no need for server sizing; as a result, budgets become a gray area because you’re basically in a world where resources are available from a function call.

The danger is that cloud architects, along with application designers and developers, become easily removed from the process of advanced resource planning. As a result, applications use more resources than they should, leading to much higher costs and poor application design practices.

In other words, you’ve put yourself in a position where you don’t know what’s happening and can’t optimize for the best outcome or calculate what you’re spending. You’ve made yourself blind because the system will take care of it.

How do you get the advantages of serverless computing without falling into this blindness trap? Application designers and cloud architects need to set up best practices and guidelines in terms of the use of serverless cloud resources.

Unfortunately, there are few methodologies and few tools available for doing that right now. But you have to do what you can:

  • The first step is to understand this blindness risk.
  • The next step is to continue to do real resource planning upfront, so serverless computing’s automation won’t have to handle wasteful tasks; a rough cost sketch follows this list.
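Here’s one rough way to keep serverless budgets out of that gray area: a back-of-envelope estimate built from expected invocations, duration, and memory. The per-GB-second and per-request rates below are placeholder assumptions, not any provider’s published pricing, so substitute your own numbers.

```python
# Back-of-envelope planning for a function-as-a-service workload.
PRICE_PER_GB_SECOND = 0.0000166667   # assumed rate; check your provider
PRICE_PER_MILLION_REQUESTS = 0.20    # assumed rate; check your provider

def monthly_estimate(invocations, avg_duration_ms, memory_mb):
    gb_seconds = invocations * (avg_duration_ms / 1000.0) * (memory_mb / 1024.0)
    compute_cost = gb_seconds * PRICE_PER_GB_SECOND
    request_cost = (invocations / 1_000_000) * PRICE_PER_MILLION_REQUESTS
    return round(compute_cost + request_cost, 2)

# Example: 10 million calls a month, 300 ms each, 512 MB functions
print(monthly_estimate(10_000_000, 300, 512))   # rough dollars per month
```

Even a crude estimate like this forces the sizing conversation that server provisioning used to force automatically.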

The 3 big speed bumps to devops in the cloud

Devops and cloud—both concepts are hot, for good reason. Let’s take a look at the current state of devops and cloud, and how they fit into today’s technology sets.

Devops provides an approach and a group of technologies that help enterprise developers do a better, faster job of creating applications. It also eliminates the barriers between development and operations (thus the name “devops”).

The cloud, meaning the public cloud, provides the platform for devops. Although you can certainly do devops on premises, most enterprises want to reduce costs and increase speed. The cloud is where you look for those benefits.

All you have to do is mix devops and the cloud, like mixing chocolate and peanut butter, right? Well, no. Enterprises have made big blunders with devops and the cloud. Here are three elements you should understand to avoid making those blunders yourself.

1. You need a hybrid solution to devops

Today’s public clouds do not provide one-stop shopping for devops. Although they offer application development management, including support for devops, it’s still a world where you’ll have to cobble together a solution from a mix of products that includes public cloud services and, yes, traditional software.

For example, although you can have pipeline management and continuous integration services on most public clouds, you’ll have to go old-school for continuous testing and continuous deployment. The degree to which your services are cloud-centric versus local-platform-centric will make a big difference in that mix.

2. Devops isn’t as cheap as the cloud

Because you must use traditional platforms along with public clouds, the costs are higher than you’d expect. Many organizations budget the devops solution assuming it’s all cloud-based. But it isn’t. As a result, there are cost overruns all over the place when it comes to devops and the cloud.

3. The devops tools aren’t all here yet

Although vendors and IT organizations both continue to learn about the continuous development, testing, integration, and deployment that are fundamental to devops, we’re nowhere near nirvana. The super tools that automate everything, cloud or not, aren’t here yet.

The sales pitch for devops is often like that of getting a superhighway, but in reality that highway has lots of stoplights. You still have to stop and perform manual processes as a part of devops automation. There’s no getting around it right now.
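To picture those stoplights, here’s a hedged Python sketch of a pipeline runner in which the automated stages are punctuated by a manual approval gate; the stage names and the gate are invented for illustration, not taken from any particular devops tool.

```python
def build():
    print("build: compiled and unit-tested")

def integrate():
    print("integrate: merged and integration-tested on the cloud CI service")

def deploy():
    print("deploy: pushed to production")

def manual_gate(prompt):
    # The stoplight: a human must sign off before the pipeline continues
    answer = input(f"{prompt} [y/N] ").strip().lower()
    if answer != "y":
        raise SystemExit("Pipeline halted at manual gate.")

def run_pipeline():
    build()
    integrate()
    manual_gate("Release manager approval for the production deploy?")
    deploy()

if __name__ == "__main__":
    run_pipeline()
```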

One day, we will get a true superhighway. The technology is getting there. But right now both devops and the cloud are works in progress. You should do devops, but understand the road you’ll actually be traveling.

How to keep multicloud complexity under control

“Multicloud” means that you use multiple public cloud providers, such as Google and Amazon Web Services, AWS and Microsoft, or all three—you get the idea. Although this seems to provide the best flexibility, there are trade-offs to consider.

The drawbacks I see at enterprise clients relate to added complexity. Using multiple cloud providers does give you a choice of storage and compute solutions, but you must still deal with two or more clouds, two or more companies, two or more security systems … basically, two or more ways of doing anything. It can quickly get confusing.

For example, one client confused security systems and thus inadvertently left portions of its database open to attack. It’s like locking the back door of your house but leaving the front door wide open. In another case, storage was allocated on two clouds at once, when only one was needed. The client did not find out until a very large bill arrived at the end of the month.

Part of the problem is that public cloud providers are not built to work together. Although they won’t push back if you want to use public clouds other than their own, they don’t actively support this usage pattern. Therefore, you must come up with your own approaches, management technology, and cost accounting.

The good news is that there are ways to reduce the multicloud burden.

For one, managed services providers (MSPs) can manage your multicloud deployments for you. They provide gateways to public clouds and out-of-the-box solutions for management, cost accounting, governance, and security. They will also be happy to take your money to host your applications, as well as provide access to public cloud services.

If you lean more toward the DIY approach, you can use cloud management platforms (CMPs). These place a layer of abstraction between you and the complexity of managing multiple public clouds. As a result, you use a single mechanism to provision storage and compute, as well as for security and management no matter how many clouds you are using.
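Conceptually, a CMP gives you something like the following Python sketch: one provisioning interface, with provider-specific adapters hidden behind it. The classes and calls here are illustrative stubs (the comments mark where real vendor SDK calls would go), not any actual CMP’s API.

```python
from abc import ABC, abstractmethod

class StorageProvider(ABC):
    @abstractmethod
    def create_bucket(self, name: str) -> str: ...

class AwsStorage(StorageProvider):
    def create_bucket(self, name: str) -> str:
        # a real adapter would call the AWS SDK here
        return f"aws://{name}"

class AzureStorage(StorageProvider):
    def create_bucket(self, name: str) -> str:
        # a real adapter would call the Azure SDK here
        return f"azure://{name}"

class CloudManager:
    """The single mechanism: callers never touch provider-specific APIs."""
    def __init__(self, providers):
        self.providers = providers

    def provision_storage(self, cloud: str, name: str) -> str:
        return self.providers[cloud].create_bucket(name)

manager = CloudManager({"aws": AwsStorage(), "azure": AzureStorage()})
print(manager.provision_storage("aws", "sales-archive"))
```

The trade-off is the usual one: you give up a little per-provider power in exchange for a single point of control over provisioning, security, and cost accounting.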

I remain a fan of the multicloud approach. But you’ll get its best advantage if you understand the added complexity up front and the ways to reduce it.

Serverless computing will drive out OpenStack private clouds

By now we all (should) know the benefits of serverless computing in the public cloud. InfoWorld’s Eric Knorr provides a good summary of serverless computing’s advantages, so I won’t go into the details here.

What’s most interesting is that as Amazon Web Services, Google, and Microsoft get better and better, the private cloud providers are still moving at a snail’s pace. The public cloud is where we see new technologies take off, such as machine learning, big data, and now serverless computing. By contrast, the private cloud seems like the redheaded stepchild.

What went wrong? Private clouds have been largely tied to OpenStack and other open cloud standards. Although there are huge advantages of using open source, the fact is that all those open-source-based private cloud efforts can’t move as fast as a single company, such as AWS. New technologies take forever to get through the open source process, then forever again to get adopted by all the vendors once formally developed and approved. The open source process explains the glacial pace of private cloud technology.

Only a few years ago, enterprise IT organizations looked to private clouds as a strategy to both do the cloud and maintain control of the hardware. As an excuse to do it themselves, IT organizations cited security and compliance issues—ironically, areas that the public cloud providers ended up handling better. Indeed, security on public cloud-based systems is typically twice as good as that of any on-premises systems I deal with these days.

The enterprises that banked on private clouds a few years ago are now having second thoughts, given the public cloud’s core advantages, including fast support for serverless computing, machine learning, and big data, all on demand.

That’s why I see not only the expected migrations of workloads from traditional systems to the public cloud, but also migrations from private clouds to public clouds picking up.

Private clouds will continue to grow, but their growth will be far slower than that of public clouds. That essentially flat growth will turn into decline. 2017 is the inflection point, the beginning of the end of the short-lived private cloud phenomenon.

The 3 biggest mistakes to avoid in cloud migrations

I’ve heard many times that if you’re not making mistakes, you’re not making progress. If that’s true, we’re seeing a lot of progress made this year in cloud migrations!

Here are the three errors that I see enterprises repeatedly committing.

Mistake 1: Moving the wrong apps for the wrong reasons to the cloud. Enterprises continue to pick applications that are wrong for the cloud as the ones they move first. These applications are often tightly coupled to the database and have other issues that are not easily fixed.

As a result, after they’re moved, they don’t work as expected and need major surgery to work correctly. That’s a bad way to start your cloud migration.

Mistake 2: Signing SLAs not written for the applications you’re moving to the cloud. When I’m asked what the terms of service-level agreements should be, the answer is always the following: It depends on the applications that are moving to the cloud or the net new applications that you’re creating. Easy, right?

However, there are many—I mean many—enterprises today that sign SLAs with terms that have nothing to do with their requirements. Their applications use the cloud services in ways that neither the cloud provider nor the application owner expected. As a result, the cloud provider does not meet expectations in terms of resources and performance, and the enterprises have no legal recourse.

Mistake 3: Not considering operations. News flash—when you’re done migrating to the cloud, somebody should maintain that application in the cloud.

This fact comes as a surprise to many; in fact, I get a call a week about applications that are suffering in the cloud. Those callers’ organizations assumed that somehow, someway the cloud would magically maintain the application. Of course it won’t.

Remember that you have ops with on-premises systems, and you should have ops with cloud-based systems. The good news: The tasks are pretty much the same.

I hope you won’t make any of these mistakes, but chances are good that you will. If you must make them, I hope you’ll recognize them more quickly thanks to this list and recover sooner.

Think again: Data integration is different in the cloud

It’s been nearly 20 years since I wrote the book “Enterprise Application Integration,” yet after all that time data integration remains an afterthought when it comes to cloud deployments. I guess that’s par for the course, since security, governance, monitoring, and other core services are often afterthoughts as well.

When moving to the cloud, enterprises focus on the move itself, rather than on what they need after they get there. Although this may be a common plan, it’s not a best practice.

Data integration is essential because you’ve rehosted some of your data on a remote cloud service. The inventory system that’s still running on a mainframe in the datacenter needs to share data with the sales order system that’s now on AWS. In other words, your data-integration problem domain is now bigger and more complex.

The trouble is that traditional approaches to data integration, including traditional data-integration technology providers, are typically no longer a fit. Even data-integration technologies that I’ve built in the past as a CTO would no longer be on my short list of data-integration technologies that I would recommend today.

That’s because the use of the public cloud changes how you do data integration. For example, you need a much more lightweight approach that can deal with more types of data.

Also, having the data-integration engine in the enterprise datacenter is no longer efficient; for the same reason, it should not be placed at a cloud provider that has centralized access to all systems that are being integrated.

Cloud-based data integration also requires different types of security and governance services. Although most data that moves from system to system in an enterprise is not encrypted, you need to encrypt pretty much everything moving to and from systems in the cloud.
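For a sense of the lighter-weight pattern this implies, here’s a small Python sketch (mine, with invented field names): map records from the on-premises system of record into the cloud application’s format, and move the resulting payload only over an encrypted channel.

```python
# Sketch only: field names, record shape, and the transport note are illustrative.
import json

def to_cloud_format(mainframe_row: dict) -> dict:
    """Map a legacy inventory record into the sales-order system's JSON shape."""
    return {
        "sku": mainframe_row["ITEM_NO"].strip(),
        "on_hand": int(mainframe_row["QTY_OH"]),
        "warehouse": mainframe_row["WHSE_CD"],
    }

def ship(records: list) -> bytes:
    """Serialize the mapped records; in practice this payload travels only over
    an encrypted channel (TLS) to the cloud application's ingest API."""
    return json.dumps([to_cloud_format(r) for r in records]).encode("utf-8")

print(ship([{"ITEM_NO": "A100 ", "QTY_OH": "42", "WHSE_CD": "EAST"}]))
```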

The list goes on.

The result is that cloud data integration is not your father’s data integration. It requires different approaches and different technologies. Although the old guard has done a pretty good job of cloud-washing their datacenter-centric solutions, you need to look beyond them, at data-integration technology that was built specifically for the cloud.

Don’t let cloud providers kick you off like United

As everyone knows, last week a United Airlines passenger was asked to deplane because the airline overbooked and needed his seat for a staff member, then was dragged off the plane by Chicago airport cops when he refused to leave. Yes, the passenger didn’t follow the rules, but the situation ultimately was United’s fault.

Believe it or not, what happened at United is an object lesson for any business that signs up for cloud services. I’ll explain shortly.

Back in 2007, I boarded a United flight that was overbooked, and I was asked to deplane as a result. It was inconvenient and humiliating. However, I didn’t go limp, and the cops didn’t drag me bleeding off the flight.

Most airline employees, whether at United or another carrier, robotically follow procedures and rules. In the case of last week’s passenger, who didn’t believe he could be forced off the flight because he had a paid ticket, the employees didn’t try to solve the problem, such as by asking for a volunteer or addressing the passenger’s concerns (he had patients to treat the next day back home). They did what the procedures said and called the cops.

The airline adhered to the contract of the ticket purchase, which basically gives passengers no rights. But being legally correct isn’t the point. It’s all about how you treat customers when the system stops working correctly for them, even if that unwanted behavior is “legal” or within the contract.

The contracts you sign with public cloud providers are similar to the contracts in an airline ticket: They’re one-sided in favor of the provider, with many limitations and the right for the cloud provider to kick you off its cloud. When you operate automated systems at such scale, you can’t deal with all the desires and special circumstances of each customer. At least, you don’t think you can, which is why cloud and airline contracts are so one-sided.

IT organizations haven’t yet experienced the cloud equivalent of being asked to deplane. But wait until enterprises have migrated 25 to 40 percent of their workloads to the cloud—and begin to stress the resources of the public cloud.

At that point, we’ll see enterprises make more demands on their public cloud providers, and we’ll see the providers push back, citing the contracts and even kicking some enterprises off the public cloud per those contracts’ terms.

But as in the case of United, cloud providers that mindlessly implement their contract terms (and kick enterprises off their cloud services for whatever reasons the contract permits) won’t be in the right. It makes no difference what the rules say: Public perception will play a huge role, and the cloud provider will lose. The backlash, and major stock price hit, that United experienced last week is a cautionary example.

Enterprises need to understand they have leverage, even with the one-sided cloud contracts they’ve signed. An enterprise’s opinion of its cloud provider is powerful in and of itself, and enterprises that have issues with providers can go public with those issues—usually they find that the issues quickly go away as the provider does damage control, no matter what the contract says.

The new world order is one of perception. Cloud providers can try to fight it all they want, but even if they win, they’ll lose in the end. Enterprises should be aware of their new power and use it when needed.
