What the ‘versatilist’ trend means for IT staffing

According to Gartner, by 2021, 40 percent of IT staff will be “versatilists,” holding multiple roles. Moreover, most of these roles will be business-related, rather than technology-related, it predicts.

Furthermore, by 2019, IT technical specialist hires will fall by more than 5 percent. Gartner predicts that 50 percent of enterprises will formalize IT versatilist profiles and job descriptions, and that 20 percent of IT organizations will hire versatilists to scale their digital business. As a result, IT technical specialist employees will fall to 75 percent of 2017 levels, it predicts.

I agree with Gartner that this versatilist shift is real, and cloud computing is a big reason why. Cloud computing is changing how you staff IT; I’m seeing more people in cloud-enabled IT organizations who hold more than one role.

However, if you think this means that things will become less technical, you’re in for a bit of a surprise by 2021. In fact, they will become much more technical.

There are two major trends that I’m seeing in enterprises adopting the cloud for a significant portion of their infrastructure:

The shift to the cloud is causing a duality of skills

IT staff who once focused only on systems in the datacenter now focus on systems in the public cloud as well. This means that while they understand how to operate the LAMP stacks in their enterprise datacenters, as well as virtualization, they also understand how to do the same things in a public cloud.

As a result, they have moved from one role to two roles, or even more. However, the intention is that eventually the traditional systems will go away completely, and they will focus only on the cloud-based systems. I agree with Gartner on that, too.

The cloud shift is putting more focus on technology, not less

While I understand where Gartner is coming from, the more automation that sits between us and the latest technology, the more technology specialists we need, not fewer. So, I’m not convinced that IT versatilists will gain new business roles to replace the loss of the traditional datacenter roles, as Gartner suggests will happen.

Think about it: Look at the tidal wave of new technologies now being provided with public clouds, such as machine learning, IoT, big data, advanced monitoring, and governance. You need the “extreme geeks” to figure that stuff out—not just now but well past 2021.

I’ve never seen a machine learning system that designs and builds its own learning model, an IoT system that sets up data integration on its own, or a cloud that monitors and manages itself. Thus, highly skilled and technical people will still run the show.

There’s nothing wrong with IT specialists taking on business roles—in fact, that’s often a good thing. I just don’t believe that IT pros will need to do so because the need for technology skills will be reduced. There’ll actually be more demand for technology skills, just not the same ones we have today.


Rethinking cloud ROI: Come for cost savings but stay for agility

Companies are moving away from the traditional operations-oriented ROI model, and now look toward agility as the core metric to determine value. That’s clear in a new report called “How Enterprises Are Calculating Cloud ROI—And Why Some Enterprises Are Moving Ahead Without It,” from ISACA.

Although this is new to many enterprises and analysis firms, it’s not new to me. I’ve written many blog posts since 2011 about the reasons to use business agility as a primary metric for calculating the real cloud ROI. It wasn’t just me, of course: Cloud experts were clearly talking about agility and ROI. But enterprises were still focused on ops costs and capital cost avoidance as the primary metric.

As I’ve said many times, enterprises come to the cloud for cost savings but stay for the agility. Finally, that slogan seems to be gaining wider acceptance in the Global 2000 enterprises.

We’re going through the turning point right now. That’s very exciting, considering that the cloud is not that disruptive when it’s used just for ops savings.

There are good tools and models for figuring out the ROI of the agility that cloud computing can bring. I’ve done a ton of these models, and I can tell you that this is a very different measurement of ROI. Factors such as the vertical market, the size of the business, and the degree of innovation need to be understood before you can gauge the ROI of agility.

But you can build reusable algorithms that you can take from domain to domain and dial in historical metrics. For example, you could draw on companies similar to yours that have used cloud computing and reached a given level of ROI thanks to the agility the cloud has brought.
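
To make that concrete, here’s a minimal sketch of what such a reusable model might look like. Everything in it is an assumption for illustration: the factor names, the weights, and the sample numbers are hypothetical, not benchmarks pulled from real engagements.

```python
# Illustrative agility-ROI model -- all factors, weights, and sample
# numbers below are hypothetical placeholders, not real benchmarks.

def agility_roi(annual_revenue, time_to_market_gain, innovation_share,
                vertical_multiplier, annual_cloud_cost):
    """Estimate yearly value created by faster delivery, net of cloud spend.

    time_to_market_gain: fraction of a year shaved off new releases (e.g., 0.25)
    innovation_share:    share of revenue tied to new products (e.g., 0.10)
    vertical_multiplier: how strongly the vertical market rewards speed
    """
    agility_value = (annual_revenue * innovation_share
                     * time_to_market_gain * vertical_multiplier)
    return (agility_value - annual_cloud_cost) / annual_cloud_cost

# Example: a $500M company, 10% of revenue from new products, releases
# shipped a quarter earlier, $2M per year in cloud costs.
print(f"Estimated agility ROI: {agility_roi(500e6, 0.25, 0.10, 1.5, 2e6):.1f}x")
```

The point of a sketch like this isn’t the specific formula; it’s that the same structure can be reused across domains once you dial in historical metrics for your vertical and company size.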

Still, it’s difficult to find public case studies to prove ROI assumptions. So your ROI calculations are difficult to verify upfront. But you should proceed nonetheless. It’s important to understand that this agility-based ROI approach is a much more effective way to look at the value of cloud computing technology.


The evidence is in: The cloud’s advantages are now clear to business

One of the likely outcomes of moving to the public cloud is altering how products are designed, a recent Harvard Business Review article shows. With cloud, there is closer collaboration between corporate IT departments and business units—sales, finance, forecasting, and even customer interaction. In fact, the HBR article shows that many IT departments have jointly developed products with their customers. 

Many report that new ways of writing and deploying software in the cloud encourage new, faster organizational designs. The feedback loops enabled by cloud computing seem to allow direct interaction with the product producer, whether the product is a physical thing or software, and with the ultimate customers.

As the cloud technology advances, it’s becoming easier for companies to design and build products and services in cloud-based systems. This extends to sales and marketing as well. The cloud, in essence, becomes a common repository for the collection and analysis of new data. And it lets you take full advantage of the possibilities of tools such as machine learning, chatbots, internet of things, and other cool technologies that many in corporate America view as disrupters. 

What’s significant about this finding is that it’s in a mainstream business journal, and not from yours truly or other cloud pundits. It means the cloud has likely crossed the chasm between IT promises and actual results, and now produces real value for the business. Most important, the businesses know it.

As technologists, we quickly find the value of new technologies. We will deal with the next shiny objects because we’re trained to do that to stay relevant in our careers. However, there is often a huge gap between what the technology actually does and its proven value to the business. Only when the business sees the value can the technology be used for its full potential.

The cloud has proven itself to the business, for the most part. It’s now a systemic technology that is part of many business systems, and it now can move companies from followers to innovators—thanks to the agility and speed of cloud computing rather than any other aspect of this technology. 

The fact that businesses have started to use the cloud as a means of incorporating customers and partners into design, production, and sales processes means that customers feel integrated with the company’s systems and so are more likely to stay on as customers, as well as spend more money. We technologists saw that coming, but now the business does too. Prepare for the next wave of deepened cloud adoption as a result.


SaaS-ifying your enterprise application? A quick-and-dirty guide

Lots of people call it SaaS-enablement; some call it SaaS-ification of software. Whatever you call it, more and more enterprises are looking to turn an enterprise application into a SaaS cloud application.

There are several reasons to SaaS-enable an internal application. Enterprises may need to expose a software system to their partners and/or customers to better automate the business. Or they may be looking to monetize applications they view as having value to other companies.

Whatever the reasons, there are a few things to consider first. I call this the SaaS-ification reality check:

  1. Can you handle the SaaS? Many enterprises don’t understand what’s needed to manage a SaaS cloud service. You have, in essence, created a product, so you need a roadmap of the improvements you’re going to make, plus product management, product marketing, product support, and so on for the SaaS service to be any kind of success. If you’re not willing to invest that much, rethink this venture.
  2. Is the application in good enough shape to be made into a SaaS service? The truth is that when applications and databases are designed for enterprise use, they are typically not built with SaaS in mind. So, they may need to undergo significant refactoring, meaning rewriting significant portions of the application code or restructuring the database.
  3. What about tenant management? Enterprise applications are written to support many users, but not many tenants. Having many users means that you’re standing up one instance of the application and database, even if thousands of users connect to that instance. Being multitenant means that you’re running many application instances, each in its own application space, separated virtually but sharing hardware resources at the same time. This takes additional thinking because each new customer requires its own tenant space, including its own part of the database, while sharing hardware resources with the other tenants (see the sketch after this list).
  4. What about security and liability? If you choose to get into this business, you can’t be half pregnant. You need to provide sufficient security so hackers won’t run off with your customers’ data. That brings up another issue: liability. There is a risk that your new SaaS service could be hacked, lose data, or have an outage that puts your customers’ business in the red. So, you need to ensure that you’re protecting both your customers and yourself.
  5. What about ops costs? SaaS cloud services are rarely built on the enterprise’s premises; they are built and run from a public IaaS cloud. IaaS providers don’t give away their cloud services for free just so you can charge for yours. So, make sure you understand the costs of the public cloud that will host your service. Typically, they’re much higher than my clients expect. Also make sure you understand the all-in ops costs, including the people you need to operate the service, do the troubleshooting, and provide customer service.
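
To illustrate the tenant-management point in item 3, here is a minimal sketch of tenant-scoped data access. The table and column names are hypothetical, and a real SaaS design would add separate schemas or databases per tenant, quotas, and per-tenant authentication on top of this.

```python
import sqlite3

# Minimal multitenancy sketch: every row carries a tenant_id, and every
# query filters on it, so tenants share hardware but never see each
# other's data. Table and column names here are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (tenant_id TEXT, item TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", [
    ("acme", "widget", 10.0),
    ("globex", "gadget", 25.0),
])

def orders_for_tenant(conn, tenant_id):
    # The tenant_id filter is the isolation boundary in this sketch.
    return conn.execute(
        "SELECT item, amount FROM orders WHERE tenant_id = ?", (tenant_id,)
    ).fetchall()

print(orders_for_tenant(conn, "acme"))    # [('widget', 10.0)]
print(orders_for_tenant(conn, "globex"))  # [('gadget', 25.0)]
```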

Good luck!


Capacity alone won’t assure good cloud performance

Many people believe that workloads in the cloud always perform better because public clouds have access to an almost unlimited amount of resources. Although you can provision the resources you need—and even use serverless computing so the allocation of resources is done for you—the fact is that having the right amount of resources is only half the battle.

To get good cloud performance, you have to be proactive in testing for performance, not reactive, waiting for an issue to arise in production. After all, performance depends on much more than raw capacity.

I strongly encourage testing. If you’re using devops to build and deploy your cloud application workloads, your testing for security, stability, and so on is typically done with continuous testing tools as part of the devops process.

But what about performance testing?

Truth be told, performance testing is often an afterthought that typically comes up only when there is a performance problem that users see and report. Moreover, performance usually becomes an issue when user loads surpass a certain level, which can be anywhere from 5,000 to 100,000 concurrent sessions, depending on the application. So you discover a problem only when you’ve got high usage. At that point you can’t escape the blame.

An emerging best practice is to build performance testing into your devops or cloud migration process. This means adding performance tests to the testing mix and looking at how the application workload and connected database deal with loads well beyond what you would expect.
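
As a rough illustration of what “loads well beyond what you would expect” can look like in practice, here is a minimal load-test sketch. The URL, concurrency levels, and request counts are placeholders; a real pipeline would use a dedicated load-testing tool wired into the continuous-testing stage rather than a hand-rolled script.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "https://example.com/"  # placeholder -- point at your own workload

def timed_request(_):
    # Measure wall-clock latency of a single request.
    start = time.perf_counter()
    with urlopen(URL, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

def load_test(concurrency, total_requests):
    # Fire requests from a thread pool and report latency percentiles.
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = sorted(pool.map(timed_request, range(total_requests)))
    p95 = latencies[int(0.95 * len(latencies)) - 1]
    print(f"concurrency={concurrency} "
          f"median={statistics.median(latencies):.3f}s p95={p95:.3f}s")

# Step up the load and watch where latency starts to climb.
for level in (10, 50, 100):
    load_test(concurrency=level, total_requests=level * 5)
```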

It also means looking for a performance-testing tool that is compatible with your application, the other devops tools you have, and the target cloud platform where the application is to be deployed. Of course, a “cool tool” itself is not the complete answer; you need testing engineers to design the right set of testing processes in the first place.

Ironically, although devops itself (as both a process and a tool set) is all about being proactive in terms of testing, most devops processes that I’ve seen do little or no performance testing.

Without that testing, you can’t answer the question “When will my cloud workload hit the performance wall?” Instead, your users find out for you, and you may discover it’s time to look for a new job.


Don’t worry about selecting the ‘wrong’ public cloud

When I speak in public about cloud architecture, I’m often asked a question with no right answer: “Which public cloud should we use?”

Not knowing much about who “we” is, there is no right answer. While I can list the top two players, they may be wrong for “we’s” problem domain once you take into account special issues such as performance requirements, security, and compliance. No matter which public cloud you pick, it will have upsides and downsides, depending on whom you work for and your specific needs.

What you need to do to answer that question is simple: Get your requirements in order first before you even start exploring the public cloud market. 

However, enterprises are not likely to do that in the real world. Partnerships are formed in the early stages, and enterprises often have a ton of credits that they can use only with a single public cloud brand. Whatever the reason, it’s not stretching the truth to say that most enterprises select a cloud provider based on factors other than business and technical requirements.

So, given the reality that your public cloud selection very likely will be based on factors other than your requirements, how concerned should you be? The good news is that how you use that cloud makes more of a difference than the cloud brand.

IaaS clouds, for example, boil down to storage and compute. You can compare cloud services and see that they offer various shiny objects such as machine learning, big data, and internet of things. But if your public cloud supports these basic storage and compute features, you’re usually more than halfway home.

Where cloud projects fail is not so much in the cloud selection but in picking the wrong workloads to migrate and not paying attention to security, governance, and other core services. The problem is seldom that the public cloud fails to live up to expectations or lacks the appropriate technology.

So, while picking the optimal public cloud based on requirements is the best practice, that selection won’t determine your success or failure. What you do with the cloud you pick matters more than which cloud you pick.


One cloud accounting dilemma will soon be fixed

You know cloud computing is here to stay when the accountants take notice. The Financial Accounting Standards Board’s Emerging Issues Task Force plans to propose new rules for how to deal with cloud computing service costs.

The updated guidance means that a customer under contract with a cloud computing provider would follow the existing guidance for internal-use software to determine how to recognize implementation costs as an asset. Moreover, the new guidance recognizes implementation costs as an asset that may be expensed over the term of the contract with the cloud computing provider, as long as the arrangement is not terminated during that term.
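
As a simple, hypothetical illustration of that treatment: an implementation cost capitalized at the start of a cloud contract would be expensed evenly over the contract term, much like internal-use software.

```python
# Hypothetical straight-line treatment of capitalized implementation costs;
# the dollar amount and contract length are made-up examples, not guidance.
implementation_cost = 300_000   # one-time setup and integration cost
contract_years = 3              # term of the cloud service contract

annual_expense = implementation_cost / contract_years
print(f"Expense recognized per year: ${annual_expense:,.0f}")  # $100,000
```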

This is good news both for the enterprises that use cloud computing, whose accountants need to figure out how to treat these costs, and for the cloud computing providers, which now have a way to explain to enterprises how those costs should be treated. Enterprises have struggled to find best practices, as well as to sort out legal issues, in determining how to treat cloud computing costs, which are significant for many enterprises.

The reality is that, no matter if you use internal systems or public cloud systems, you’re getting the advantage of using systems. So the treatment of those costs should be aligned. Thanks to the FASB, they soon will be more aligned.

But not fully aligned. I’ve also been struggling with how cloud computing should be treated considering that enterprises must give up the depreciation of capital expenses in most cases. So holding onto old on-premises equipment to gain the benefit from the depreciation could outweigh any benefit an enterprise would get from cloud computing. Although this new FASB rule does not overcome that issue, the trend is that accounting groups are beginning to see better and fairer ways to deal with cloud computing costs. It’s about time. 


How likely is it that terrorists, nuclear attacks, or hackers could take down the cloud?

For all you Chicken Littles out there: A cyber problem that shuts down a top US cloud computing provider for three to six days could trigger between $5.3 billion and $19 billion in business losses for its clients, of which only $1.1 billion to $3.5 billion would be covered by insurance, insurer Lloyd’s of London said in a report. (A “cyber problem” could include hacking, lightning strikes, bombing of datacenters, and human errors that make a public cloud service provider take a dirt nap.)

I don’t doubt those numbers. But if one or more major cloud providers are disabled for some reason, we’ll have more important problems than not being able to log into the inventory system. 

And the chances are slim, anyhow: The truth is that public cloud providers are pretty resilient. Although we’ve seen regional outages in the past, typically due to human error, taking down a public cloud provider through a cyberattack would be a bit like playing Whack-a-Mole with 800-pound moles. 

Public cloud providers have set up many redundant systems in their clouds. Although you could bring down a single datacenter, perhaps even a whole region, you won’t disrupt all the cloud datacenters and regions. Kill one, and the others take over. 

Of course, there could be a major event, such as an atomic attack, that takes out most or all of a cloud provider. However, even then I doubt that all public cloud capabilities would be offline. Keep in mind that TCP/IP was designed by the US Defense Dept. to route around missing pieces of the network after a nuclear attack. 

And, in the event of a nuclear attack, would you care about your cloud services all that much?

For less-world-ending scenarios, one of the good things about cloud computing is that the cloud providers are not the sitting ducks that enterprise datacenters have been (and many still are). The cloud providers have a wide geographical distribution, and they are redundant. So, your cloud data is actually safer than your on-premises data. It’s a good thing cloud data redundancy is almost foolproof, because it looks like you won’t get much help from insurance. 


No, edge computing will not replace cloud computing

The press is still having a field day with the relatively new tech term edge computing and how it will soon displace cloud computing. I’ve seen more than a half dozen articles in just the last two months advancing the perception that edge computing will displace, not complement, cloud computing.

It’s sad to see such naive discussions continue around edge computing, which I’ve previously tried to debunk in my posts “Make sense of edge computing vs. cloud computing” and “Edge computing: What you need to know before you deploy.” But let me try again!

Extreme positions on technology never come true. Even the prediction that cloud computing would replace all on-premises computing was far-fetched. Although a good deal of on-premises systems can be moved to the public cloud, a good portion of them cannot, either because they have no platform analogs in the public cloud or, more likely, because they are just too expensive to relocate.

The reality is always somewhere between where the technology is now and the grandiose predictions. You need to take all the technology hype with a grain of salt.

Even cloud computing is still a murky term that describes way too many things. So it is understandable that the press, pundits, and analysts have run amuck with its redefinition based on new technology or approaches showing up, such as edge computing.

So, here is the skinny: Cloud computing is about centralization of processing and storage to provide a more efficient and scalable platform for computing. Edge computing is simply about pushing some of that processing and storage out near the devices that produce and consume the data—that is, to the edge. Edge computing will be one of the approaches we use to deploy in the cloud to support specific use cases, with the internet of things being the most applicable.
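
As a toy illustration of that complement, the sketch below has an edge node reduce raw sensor readings to a small summary locally and push only the summary to a cloud endpoint. The endpoint URL and field names are placeholders, not a real service.

```python
import json
import statistics
from urllib import request

CLOUD_INGEST_URL = "https://example.com/ingest"  # placeholder endpoint

def summarize_readings(readings):
    # The edge node boils many raw samples down to a small summary,
    # saving bandwidth and leaving fleet-wide analysis to the cloud.
    return {"count": len(readings),
            "mean": statistics.mean(readings),
            "max": max(readings)}

def push_to_cloud(summary):
    # Forward only the aggregate upstream for centralized storage/analytics.
    body = json.dumps(summary).encode()
    req = request.Request(CLOUD_INGEST_URL, data=body,
                          headers={"Content-Type": "application/json"})
    return request.urlopen(req, timeout=10)

readings = [21.4, 21.9, 22.3, 22.1, 21.7]  # e.g., local temperature samples
print(summarize_readings(readings))
# push_to_cloud(summarize_readings(readings))  # enable with a real endpoint
```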

But edge computing replacing cloud computing? That’s like a toe replacing a body. Cloud computing is a big, broad concept that spans all types of computing approaches and technology; you can consider it a macro technology pattern. Edge computing is simply a micro pattern, where you can do new tactical things with public and private clouds.

Edge computing is an approach within the large corpus of cloud computing. Enough said?


Cloud portability: Why you’ll never really get there

Portability means that you can move an application from one host environment to another, including cloud to cloud such as from Amazon Web Services to Microsoft Azure. The work needed to complete the porting of an application from one platform to another depends upon the specific circumstances.

Containers are one technology meant to make such porting easier, by encapsulating the application and its operating system dependencies into a bundle that can run on any platform that supports the container standard, such as Docker or Kubernetes. But containers are no silver bullet.

The reality is that porting applications, whether they’re in containers or not, requires a great deal of planning to deal with the compatibility issues of the different environments. The use of containers does not guarantee that your containerized applications will be portable from platform to platform, cloud to cloud. For example, you can’t take a containerized application meant for Linux and run it on Windows, or the other way around.

Indeed, containers are really just a cool way of bundling applications with the operating system pieces they depend on. You do get enhanced portability with containers, but you don’t get the “any platform to any platform” portability that many believe them to provide.

Of course, enterprises want portability. And you can have it. All that’s needed is a greater planning effort when it comes to creating the applications in the first place.   

The fact is that all applications are portable if you have enough time and money. The game here is to create applications where the least amount of work is required to move them from platform to platform, cloud or not. Using containers or other technology can help you provide cross-platform application compatibility, but they are just part of the equation.
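
One common way to keep that work small, sketched below purely as an illustration, is to hide provider-specific services behind a thin interface of your own; the class and method names here are assumptions, not a standard API.

```python
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """Portability seam: application code depends only on this interface."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(ObjectStore):
    """Stand-in for tests; a real adapter would wrap S3, Blob Storage, etc."""
    def __init__(self):
        self._blobs = {}
    def put(self, key, data):
        self._blobs[key] = data
    def get(self, key):
        return self._blobs[key]

def archive_report(store: ObjectStore, name: str, content: bytes) -> None:
    # Business logic never touches a cloud SDK directly, so moving clouds
    # means writing one new adapter rather than rewriting the application.
    store.put(f"reports/{name}", content)

store = InMemoryStore()
archive_report(store, "q3.pdf", b"quarterly report bytes")
print(store.get("reports/q3.pdf"))
```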

So, portability is not binary, meaning that it either exists or not. Instead, it’s shades of gray—the “it depends” answer that so many people in IT leadership hate.

Perhaps the most critical thing to understand about portability is that it comes at a big cost: reduced functionality, due to using the lowest common denominator of capabilities supported across all environments. The more your applications use native platform or cloud features, the less likely it is that they will be easily portable. The reason is simple: Many desirable capabilities are tied to a specific operating system, language, cloud platform, or other technology, and those just can’t be moved as is. And sometimes not at all. 

The only way to mitigate this is through planning and design. Even then, the technology will always be changing. Portability will never be binary, always shades of gray.
