Why you should not worry about cross-tenant cloud attacks

We’ve all heard the concerns: While public clouds do a good job protecting our cloud-based systems from outside attackers, what about attacks that may come from other public cloud users? These are known as cross-tenant attacks (sometimes called side-channel attacks), where other tenants on the same public cloud somehow access your data. Should you pay more attention to this fear?

No, you should not pay more attention to cross-tenant attack fears. Here’s why.

First, there are much easier attack vectors to exploit when it comes to public clouds, such as human error. The cloud breaches that I hear about are caused almost 100 percent by human error. Often, people misconfigure their cloud machine instances and thus expose data that was not meant to be exposed. If enterprises want to improve their cloud security, that’s where they should focus.

Second, most enterprises encrypt data on public clouds, both in flight and at rest. Even if one tenant could access server instances held in other tenants’ accounts, that miscreant wouldn’t be able to read the data. Encryption also protects against hacking that comes from outside the cloud.
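
For a sense of how little effort at-rest encryption takes, here’s a minimal sketch using Python and the AWS boto3 SDK. The bucket and object names are hypothetical placeholders; the API call itself travels over HTTPS, which covers the in-flight side:

```python
import boto3  # AWS SDK for Python

# All boto3 calls to S3 go over HTTPS, which handles in-flight encryption.
s3 = boto3.client("s3")

# Ask S3 to encrypt the object at rest with an AWS-managed KMS key.
# The bucket and key names are hypothetical placeholders.
with open("q4-revenue.csv", "rb") as f:
    s3.put_object(
        Bucket="example-enterprise-data",
        Key="reports/q4-revenue.csv",
        Body=f,
        ServerSideEncryption="aws:kms",
    )
```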

Third, the public cloud providers have the security systems in place to ensure that a cross-tenant attack is unlikely. It’s true that the tenant-management systems manage resources for many tenants at the same time, which is why enterprises get nervous. But there are well-thought-out virtual demarcation lines between tenants, which is a fundamental aspect of a multitenant system. Each public cloud provider has its own way of accomplishing these separation goals, and while you have no way of understanding every aspect of the approaches they use, you need to trust them at the end of the day.

With all of that said, this is a legitimate concern, and enterprises should always have a healthy level of skepticism about any type of provider services. However, you have more pressing concerns right now. Don’t let this one take more time than needed and divert you from those more serious issues.

Don’t fall for the ‘pluggable cloud’ siren call

People once made requests for hybrid cloud because of the perception of flexibility. Now they make multicloud requests, for the same reasons. Multicloud is simply a cloud architecture that uses more than one cloud, private and/or public. However, most multicloud deployments involve two or more public clouds, typically AWS and Microsoft, and sometimes a third, such as Google.

Although the concept of “pluggable clouds” is not at all new, I get more and more inquiries about multicloud patterns that promote the notion.

A pluggable cloud is a multicloud setup where you can swap out the public or private clouds without having to change much of the underlying application dependencies. The term is often used to describe any multicloud architecture where changing out clouds is something that enterprises do to deal with price and functionality changes.

Is this type of architecture even feasible? Consider the facts.

First, this can only work if you use a cloud service broker (CSB), a cloud management platform (CMP), or other tools that provide abstraction away from the native cloud services. Otherwise it becomes too complex to manage the native cloud services of each public cloud provider, because you have to deal with each native cloud service on its own terms.

Second, you need to understand the “pluggable” requirement. If the expectation is that you can unplug AWS and plug in Alibaba, for example, without significant alterations to how the applications and data storage systems use those services, you’re smoking something. In reality, there are vast differences between how AWS does storage and how Microsoft or Google or Alibaba does storage. Even if you do a great job creating abstraction and orchestration layers, a great deal of work remains to make the swap actually function. I’m not sure “pluggable” would be the word I’d use.

Third, while it might be possible to make your multicloud setup pluggable, you would do so at the expense of services. You would be forced into a least-common-denominator approach, which means that you’ll only use the basic functions to make your workloads work across cloud providers. In other words, you’ll only use a fraction of what the clouds can offer in terms of services such as storage and compute.
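
To make that tradeoff concrete, here’s a minimal sketch of what a least-common-denominator abstraction layer looks like in Python. The interface and class names are my own illustration, not any particular CSB or CMP product:

```python
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """Least-common-denominator contract: only operations that every
    cloud provider supports make it into the interface."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class S3Store(ObjectStore):
    """AWS-backed implementation. Provider-specific features (storage
    classes, object lock, event triggers) have nowhere to plug in."""

    def __init__(self, bucket: str):
        import boto3
        self._s3 = boto3.client("s3")
        self._bucket = bucket

    def put(self, key: str, data: bytes) -> None:
        self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)

    def get(self, key: str) -> bytes:
        return self._s3.get_object(Bucket=self._bucket, Key=key)["Body"].read()

# "Unplugging" AWS still means writing, testing, and operating a whole new
# implementation of this interface for the next provider.
```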

Keep in mind that both CSBs and CMPs are proven tools that can manage multicloud complexity. Just be aware that the use of these tools does not mean you can add and remove public clouds without significant remapping of cloud services to your workloads.

Lack of cloud skills and training begins to take a toll

According to a recent report by cloud and datacenter vendor Rackspace, “Nearly three quarters of IT decision-makers (71 percent) believe their organizations have lost revenue due to a lack of cloud expertise. On average, this accounts for 5 percent of total global revenue, or $258,188,279 per organization.”

That’s a pretty good hunk of cheddar! This is a real issue, and it’s starting to get noticed by enterprise leadership, and even by the stockholders.

Truth be told, these sorts of opportunity costs are rarely considered. Think of the cost of using subpar data analytics, substandard networks, even bad automation itself. There is always a difference between what enterprises do and what they should do to maximize revenue. Or, more to the point in this case, what they can’t do because their staff lacks the skills.

While my personal experiences are not exact metrics, I’ve been getting about four requests per week from enterprises that want a training plan, skilled people, or me.  This was a once-a-month occurrence just two years ago.

Nobody in these organizations saw this kind of demand coming, and when the lost-revenue numbers are put next to the inability to fill this demand, enterprises begin to react. Although some enterprises were proactive about acquiring cloud computing skills, most have been reactive and are now in a panic, with 25 to 50 open positions chasing one qualified candidate.

There’s not much you can do about the shortage, or its impact on the bottom line. Years ago, I was one of many voices that urged Global 2000 companies to begin training and hiring for cloud computing. On the other hand, having consulted with those Global 2000 companies, I do understand the many priorities that businesses have to weigh and why preparing for emerging trends tends to get deferred until later.

But my advice today is the same as it was years ago: Get a training plan in place ASAP to coach the current IT staffers on cloud computing technology, from specific technology to general configuration and architectural concepts. Hire when an opportunity presents itself, but don’t lower standards just to put butts in seats and fill positions.

Enterprise IoT threatens to undermine cloud and IT security

The internet of things, or IoT, is pervasive these days in your personal life. However, the technology is only now getting into the Global 2000 companies, and most of those companies are unaware of the risks that their IoT adoption brings to IT and cloud security.

How did this happen? Well, for example, as thermostats and sensors fail in buildings’ HVAC systems, they are often replaced with smart devices, which can process information at the device. These new IoT sensor devices are often computers unto themselves; many have their own operating systems and maintain internal data storage. IT is largely unaware that they exist, because they are often placed on the company’s networks without IT’s knowledge.

Besides the devices that IT is unaware of, there are devices that it does know about but are just as risky. Upgrades to printers, copiers, Wi-Fi hubs, factory robots, etc. all come with systems that are light-years more sophisticated in intelligence and capabilities than what came before, but they also have the potential of being turned against you—including attacking the cloud-based systems where your data now resides.

Worse, many of these IoT devices are easily hacked, and so can easily become agents for hackers lying in wait to grab network data and passwords, and even breach cloud-based systems whose security may not take into account access from within the company firewall.
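
A modest first step is simply finding these devices. The sketch below uses only the Python standard library (the subnet is a placeholder for your own address range) and sweeps a local network for open Telnet ports, a classic sign of an insecure embedded device:

```python
import socket

SUBNET = "192.168.1."  # placeholder: substitute your own address range
TELNET_PORT = 23       # an open Telnet port is a classic insecure-IoT red flag

for host in range(1, 255):
    addr = f"{SUBNET}{host}"
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(0.2)  # keep the sweep fast; tune for your network
    try:
        if sock.connect_ex((addr, TELNET_PORT)) == 0:
            print(f"Telnet open on {addr}: investigate this device")
    finally:
        sock.close()
```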

And don’t let price be a proxy for security level: I’m finding that the more specialized and expensive the devices are, the more likely they are to have crappy security.

This is going to be a huge issue in 2018 and 2019; many companies will need to get burned before they take corrective action.

The corrective action for this is obvious: If the IoT device—no matter what it is—cannot provide the same level of security as your public cloud provider or have security systems enabled that you trust, it should not be used.

Most IoT companies are improving their security, even supporting security management by some public clouds. However, such secure IoT devices are very slow to appear, so most companies deploy what is available in the market: IoT devices without the proper security systems bundled in.

Sadly, I suspect that IoT security will be mostly a game of Whack-a-Mole over the next several years, as these things pop up on the corporate network regularly.

That’s really too bad. We finally just got cloud security right, and now we’re screwing it up with new thermostats and copiers that make all that good security worthless.

4 surprise cloud computing trends for 2018

First of all, I hate doing yearly predictions. Also, this is the time of year that every PR firm in the country asks me to read the cloud computing predictions of their clients, which are all pretty much wrong and self-serving.

So, I’ve put together four cloud predictions for 2018 that you won’t see coming but that should help shape your cloud strategy for the new year.

2018 cloud prediction No. 1:
Microsoft or Oracle buys Salesforce.com

Microsoft and Oracle can afford it, and both are looking to accelerate their cloud computing cred. In terms of SaaS dominance, it does not get better than Salesforce, and that cash cow can be milked by one of the two mega enterprise players for the next 20 years.

2018 cloud prediction No. 2:
A rash of data breaches caused by idiots

We’ve seen the NSA and others leave sensitive data exposed because of public cloud misconfigurations. There’s been no real damage done yet, but in 2018 we’ll see an explosion of breaches caused not because hackers exploited some unknown vulnerability, but because somebody forgot to lock the virtual door—you just need to know the URL, and you’re in.

2018 cloud prediction No. 3:
More cloud categories are coming

While hybrid, public, and private clouds, as well as now multicloud, are how we define cloud deployments, the terms are often misapplied. This semantic confusion is caused by big enterprise technology providers cloud-washing the heck out of the commonly used buzzwords, perverting the cloud terminology defined by NIST in 2008.

For example, vendors’ versions of hybrid clouds are often traditional systems paired with a public cloud, not the paired private and public clouds that NIST defined. Moreover, virtualized sets of servers are often called private clouds, even though they are not.

We’ll have to make up new terms for these other patterns, and stop calling them what they are not. Watch this space for my suggestions. 

2018 cloud prediction No. 4: 
Non-US cloud providers get more traction

We’re now seeing several new public cloud providers, such as Alibaba, that are beginning to show up in deals. Although most of the Global 2000, as well as the US government, will turn up their noses at these new providers, enterprises and governments outside the US, as well as small to medium US businesses, will look at these providers with interest, considering their low costs.

Indeed, depending on what analyst firm you’re paying attention to, Alibaba has already surpassed Google in IaaS revenue.  

So, be ready for these four cloud developments in 2018.               

3 New Year’s resolutions for the cloud in 2018

I’m one of those people who takes time at the new year to define personal objectives for the forthcoming year, some of which I actually achieve. Enterprise IT should be doing the same thing for cloud computing.

Here are my three suggestions for IT’s cloud resolutions for 2018.

2018 cloud resolution No. 1:
Look at your cloud security approach and technology

When I find issues with enterprise cloud deployments in my consulting work, it’s most often around security. Clients often leave aspects of their cloud deployments unprotected or underprotected, and things that should be encrypted are not, while things that should not be encrypted are.  

While I’m not recommending that you gut your cloud security and replace it with what’s cool and new, I am recommending that you take some time to walk through the security solution architecture and ask yourself where you can improve. Moreover, consider all the security technology in place: What needs to be updated? What should be replaced?
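
As a concrete starting point for that walkthrough, here’s a small sketch that uses the AWS boto3 SDK to list S3 buckets with no default encryption configured; the same kind of audit applies to whichever cloud you run:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

# Flag every bucket that lacks a default server-side encryption rule.
for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        s3.get_bucket_encryption(Bucket=name)
    except ClientError as err:
        if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
            print(f"No default encryption on bucket: {name}")
        else:
            raise
```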

2018 cloud resolution No. 2:
Look at your cloud training plan

There are two categories of cloud training:

  • Provider training that’s focused on a specific provider such as Amazon Web Services, Microsoft, or Google.
  • General training that provides a good overview of how to make cloud work in enterprises, and all that is involved with that.

You should have a mix of both, as well as defined paths for your staff to gain the skills of a cloud architect, cloud developer, cloud operations specialist, and cloud devops specialist, just to name a few roles. There should be training paths through both vendor and nonvendor courses to get your staff members the skills they need to perform their duties (which of course must be clearly defined).

2018 cloud resolution No. 3:
Evaluate your databases

Databases are sticky, and once enterprises have used a specific database, they are not likely to change it. Indeed, what many enterprises have done is just rehost their data on public clouds using the same database they used on premises.

Today we have many options in the cloud, including SQL and NoSQL databases. While there are native databases in public clouds, such as AWS’s Redshift and DynamoDB, there are many other options from database providers that support both the public cloud and traditional platforms. Are you using the optimal solution?
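
Part of that evaluation is understanding how different the cloud-native options are from what you run on premises. For instance, DynamoDB trades SQL queries for a key-value API; this minimal Python sketch (the table and key names are hypothetical) shows the shape of the change:

```python
import boto3

# DynamoDB swaps SQL's "SELECT ... WHERE order_id = ?" for a key-value call.
# The table and attribute names are hypothetical.
dynamodb = boto3.resource("dynamodb")
orders = dynamodb.Table("Orders")

response = orders.get_item(Key={"order_id": "A-1001"})
item = response.get("Item")  # absent key simply returns no "Item" entry
print(item)
```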

These are just a few suggestions; I suspect that you can name more. Whatever they are, pick a few and follow up. Have a great new year!

Cloudops automation is the only answer to cloud complexity

Let’s face facts: We often look at cloud computing as a single place to put our workloads and our data, where magic pixie dust takes care of what’s needed. But today we increasingly understand that the reality of the cloud—like the datacenter before it—is complexity, labor intensity, and more costs than we expected.

This should not surprise anyone. Cloud computing, including the hybrid and multicloud approaches, is a complex distributed system. Although using the public cloud means you don’t have to purchase and maintain hardware and software, you still have to monitor, manage, and deal with cloud-based systems as if they were down the hall.

In 2018, I think that this is a problem that most enterprises can solve. And we’ll do so using automation, as we have in so many business and manufacturing processes before.

The fact is that most enterprises deal with cloud operations—aka cloudops—using the native tools of their cloud providers. Although that works when you’re just using one public cloud for everything, the reality is that you have to manage traditional systems built within the last 20 years, multiple public clouds, perhaps a private cloud, IoT devices, and data that exists everywhere (with no single source of truth). In other words, a huge mess.

Automation does not save you from having this mess, but it helps a great deal. 

First, you need to consider the concept. When you automate cloudops, you’re really looking to remove the complexity by placing an abstraction layer between the complex array of systems and you, the person who needs to operate the technology.

Second, consider the enabling technology. This means using a cloud management platform or other management systems that can automate most of the back-end operations tasks, including backing up systems, putting servers in and out of production, and security operations. The trick is focusing on the broader management technology, and the automation that it provides, versus the cloud-native tools that won’t help you beyond a single public cloud.
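
For a flavor of what automating those back-end tasks means in practice, here’s a minimal sketch that snapshots every tagged EBS volume in an AWS account using the boto3 SDK. A CMP would wrap this class of task across all of your clouds; the tag key and value here are just placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Snapshot every EBS volume carrying the backup tag; tag key/value are placeholders.
volumes = ec2.describe_volumes(
    Filters=[{"Name": "tag:backup", "Values": ["daily"]}]
)["Volumes"]

for vol in volumes:
    snap = ec2.create_snapshot(
        VolumeId=vol["VolumeId"],
        Description="automated daily backup",
    )
    print(f"Started snapshot {snap['SnapshotId']} for volume {vol['VolumeId']}")
```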

That may seem simple, but it’s not. Today’s cloud management technology does have limits, and there is no magic pixie dust there, either. However, it will get you further down the road by removing the need for people to touch everything all the time, via the use of better and better automation.

Don’t leave your Amazon S3 buckets exposed

As long as they knew the right URL, anyone with access to the internet could retrieve all the data left online by marketing analytics company Alteryx. This is the second major exposure of data stored and improperly managed in the Amazon Web Services S3 storage service.

In the Alteryx case, it was apparent that the firm had purchased the information from Experian, as part of a data set called ConsumerView. Alteryx uses this data to provide marketing and analytics services. It put the data in AWS S3—and forgot to lock the door.

In November, files detailing a secret US intelligence collection program were leaked in the same manner, also stored in S3. The program, led by US Army Intelligence and Security Command, a division of the National Security Agency, was supposed to help the Pentagon get real-time information about what was happening on the ground in Afghanistan in 2013 by collecting data from US computer systems there. Much as in the Alteryx case, the data was exposed by a misconfigured S3 bucket.

Here’s the deal: AWS defaults to closing access to data in S3, so in both cases someone had to configure S3 to expose the data. Indeed, S3 has the option to provide data over the web, if configured to do so. So, this is not an AWS issue, but one of stupidity, naïveté, or ignorance by people running their S3 instances.
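
Checking for this kind of exposure takes only a few lines. This sketch uses the AWS boto3 SDK to flag any bucket whose ACL grants access to all users; it assumes credentials with permission to read the ACLs:

```python
import boto3

s3 = boto3.client("s3")
ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

# Flag any bucket whose ACL grants access to the entire internet.
for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    for grant in s3.get_bucket_acl(Bucket=name)["Grants"]:
        if grant["Grantee"].get("URI") == ALL_USERS:
            print(f"PUBLIC: {name} grants {grant['Permission']} to everyone")
```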

Public cloud providers often say that they are not responsible for ineffective, or in these cases nonexistent, security configurations that leave data exposed. You can see why.

In these cases, white hat hackers informed those in charge about the exposure. But I suspect that many other such mistakes have been uncovered by people who quietly collect the data and move on into the night.

The fix for this is really common sense: Don’t actively expose data that should not be exposed. You need to learn about security configurations and processes before you bring the public cloud into your life. Otherwise, this kind of avoidable stuff will keep happening.

How the end of net neutrality will affect enterprise cloud computing

I hate writing about politics because the topic is so polarizing. However, I’ve had enough questions about the net neutrality issue that I felt InfoWorld readers needed some preliminary guidance about its effect on enterprise-grade cloud computing.

The U.S. Federal Communications Commission has repealed the net neutrality rules it passed just two and a half years ago. This move has sent a lot of people over the edge, in terms of its potential impact on consumers, small businesses, and small websites. Moreover, there is a lot of speculation that the prices for internet-delivered media services, such as Netflix and Amazon Prime Video, could significantly increase.

The FCC’s 2015 rules prohibited broadband providers from selectively blocking or slowing web traffic. However, they never covered enterprise internet services, which are typically offered through customized arrangements. The 2015 regulations did protect small businesses’ access to the internet.

Republicans, including FCC Chairman Ajit Pai, have long criticized net neutrality rules as needless, costly regulations on internet service providers. Indeed, Republicans have argued that the rules discourage investment in broadband networks, based on the assertion that the regulations limit the kinds of business models ISPs can deploy.

Although many tech companies supported the now-gone net neutrality rules, there are a few that didn’t. Technology providers such as Oracle and Cisco Systems promoted the 2017 FCC plan to repeal net neutrality. The 2015 regulations discouraged investment in broadband, Oracle senior vice president Kenneth Glueck wrote in a letter to the FCC.

So, who’s right? Who’s wrong? And how will this affect the use of cloud for enterprises?

First, if you’re a company with more than $1 billion in revenue, the end of net neutrality is unlikely to affect you. You typically have custom, negotiated agreements in place with ISPs that limit or eliminate any throttling they can do. Enterprises that use a particular cloud provider more than others can typically get dedicated lines installed from the enterprise to the cloud provider’s datacenter, which bypasses the effects of net neutrality’s elimination altogether.

Businesses with less than $1 billion in revenue that place websites on cloud providers and do most of their IT in the cloud have more reason to be concerned. Most ISPs have said that they won’t throttle traffic for small customers, and they won’t limit access based on what you pay for. Even the notion of packet prioritization has been raised as a concern, because it could tilt the scale toward favored businesses, though the ISPs have not made moves in that direction either—yet.

So, not much changes today. However, if I were in an IT shop at a small business, I would start network monitoring as soon as possible to see whether cloud performance or access is being limited by bandwidth throttling. I suspect you won’t find anything unusual right after the rules are lifted, but it’s better to trust and verify than to blindly trust.
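
A simple place to start is a periodic probe of your own cloud endpoints. The sketch below uses only the Python standard library, with the URL as a placeholder for a test file you host, and logs download throughput so a sustained drop stands out:

```python
import time
import urllib.request

# Placeholder: point this at a test object you host with your cloud provider.
TEST_URL = "https://example-bucket.s3.amazonaws.com/probe-10mb.bin"

while True:
    start = time.time()
    data = urllib.request.urlopen(TEST_URL).read()
    elapsed = time.time() - start
    mbps = (len(data) * 8 / 1_000_000) / elapsed
    print(f"{time.ctime()}: {mbps:.1f} Mbit/s")  # watch for sustained drops
    time.sleep(300)  # probe every five minutes
```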

Although the big companies that use the cloud are mostly immune from the effects of the net neutrality changes, I suggest they keep an eye out as well. Remember, the customers who use the cloud have the ultimate authority: the ability to vote with their dollars.

Cloud migration: How to know if you’re going too fast or too slow

Companies have decreased their spending on traditional system deployments to fund their cloud migration activities. Indeed, the IaaS market—including Amazon Web Services, Microsoft, and Google—has been exploding, with revenue growth of more than 40 percent per year since 2011, according to Gartner. And Gartner forecasts 300 percent growth for IaaS between 2016 and 2020.

What’s most astounding is the shift in IT spending. The on-premises portion of IT infrastructure budgets will fall from 70.2 percent in 2016 to 57 percent by 2018, according to IDC—a relative decline of 18.8 percent. In other words, the IaaS portion of IT infrastructure spending is rising from 29.8 percent to 43 percent—a relative increase of 44.3 percent.

Although a few enterprises are slow to start—and some have yet to start—their migrations to the cloud, many enterprises are blasting forward, with the funding and support to cloud-enable most of their enterprise IT by 2020.

While it may appear there’s a party going on and you’ve not been invited, my advice to enterprises is to proceed to the cloud at your own measured pace. Indeed, while the growth numbers are impressive, I can’t help but think that some enterprises are moving to the cloud so fast that they are bound to make some very costly mistakes, such as not dealing properly with security, governance, and operations for cloud-based systems. I’ve been making a nice living over the last year fixing these mistakes.

But the larger danger is not taking advantage of what public cloud services can offer enterprise IT—and your business. Enterprises that are sitting on the fence are probably losing money because they are missing out on the cost and strategic benefits of the cloud. Most don’t bother to do the ROI analysis and planning, so they have no idea how they are damaging their business.

So, at what pace should you move to the cloud? The answer lies within your enterprise. Don’t go faster or slower to match the pace of the enterprises down the street. Instead, look at your own requirements and business problems first, then examine the best approaches and technologies to meet those requirements and solve those problems.
