Are you cloud-savvy? Take this test to find out

There are a few people who are truly savvy about cloud computing, but most are just posers.

Here is a quick test to see if you’ve got the right stuff.

  1. Do you understand that serverless does not actually mean there are no servers? If yes, give yourself 10 points.
  2. Do you believe “cloud formation” is actually a formation of rain clouds? If yes, take away 5 points.
  3. Do you believe “cloud governance” means Amazon Web Services is obeying the law? If yes, take away 5 points.
  4. Do you think that “identity management” is inside your wallet or purse? If yes, take away 5 points.
  5. Do you know that Kubernetes is a container-orchestration tool, and not a new Eastern religion? If yes, give yourself 15 points.
  6. Do you think cloudops is the opposite of cloud computing? If yes, take away 5 points.
  7. Can you name 20 AWS services, including 10 APIs for those services? If yes, give yourself 20 points.
  8. Have you used the term “cloud” more than 100 times today? If yes, take away 10 points.

So, what’s your score?
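If you would rather let a machine do the tallying, here is a minimal Python sketch of the scoring above; the yes/no answers in the dictionary are placeholders for your own.

```python
# Each entry: question number -> (points applied if you answered "yes", your answer).
answers = {
    1: (10, True),    # serverless still runs on servers
    2: (-5, False),   # "cloud formation" is not weather
    3: (-5, False),   # cloud governance is not AWS obeying the law
    4: (-5, False),   # identity management is not in your wallet
    5: (15, True),    # Kubernetes orchestrates containers
    6: (-5, False),   # cloudops is not the opposite of cloud computing
    7: (20, False),   # 20 AWS services and 10 APIs, anyone?
    8: (-10, False),  # said "cloud" 100+ times today
}

score = sum(points for points, said_yes in answers.values() if said_yes)
print(f"Your cloud-savvy score: {score}")
```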


Database Dilemma: Selecting The Best Database For Your Organization

The term “cloud” is so ubiquitous that it means different things to everyone. For these purposes, we will define it in relation to infrastructure; the cloud is the ability to auto-provision a subset of available compute/network/storage to meet a specific business need via virtualization (IaaS).

As for applications, the cloud means browser-based access to an application (SaaS) and, importantly, a utility-based consumption model for paying for these services, which has caused a major disruption to the traditional models of technology.

This has led to a paradigm shift in client-server technology. Just as the mainframe morphed into minicomputing, which led in turn to the client-server model and then to cloud computing and Amazon Web Services (AWS), the ubiquitous cloud is the next phase in the evolution of IT. In this phase, applications, data and services are being moved to the edge of the enterprise data center.

A CIO wanting to lower IT spending and mitigate risk has many options:

  • Move budget and functionality directly to the business (shadow IT) and empower the use of public cloud options
  • Move to a managed service — private cloud for the skittish
  • Create a private cloud with the ability to burst to a public cloud (i.e., hybrid cloud)
  • Move 100% to a public cloud provider managed by a smaller IT department

Each one of the options listed above comes with pros and cons. With all the available database options, it can be difficult to determine which one is the best solution for an enterprise.

The three issues most central to an organization’s database needs are performance, security, and compliance. So what are the best database management practices for each deployment option to address those priorities?

Let’s briefly examine five use cases for deploying your enterprise database strategy: on-premise/private cloud, hybrid cloud, public cloud, appliance-based, and virtualized.

On-Premise/Private Cloud

One of the main pros of this type of database deployment scenario is that an enterprise will have control over its own environment, which can be customized to its specific business needs and use cases. This boosts trust in the security of the solution, as IT and CIOs own and control it.

Where customers are located relative to where the data resides can impact legacy applications. Latency becomes an issue when users in a different part of the globe from the company access data via mobile devices, resulting in an overall poor user experience.

Another con is capital expenditure (CAPEX). Traditionally, the break-even point for an on-premise deployment, counting hardware, software and all required components, is between 24 and 36 months, which can be too long for some organizations. Storage costs also can get expensive.
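To make the break-even arithmetic concrete, here is a minimal sketch with entirely hypothetical cost figures, comparing cumulative on-premise spend (up-front CAPEX plus modest running costs) against a pay-as-you-go cloud bill:

```python
# Hypothetical figures; substitute your own quotes and invoices.
onprem_capex = 250_000         # hardware, software licenses, installation
onprem_monthly_opex = 4_000    # power, cooling, support contracts
cloud_monthly_cost = 12_000    # comparable managed database capacity

def breakeven_month(capex, monthly_opex, cloud_monthly, horizon_months=60):
    """First month where cumulative on-premise spend drops below cumulative
    cloud spend, or None if it never happens within the horizon."""
    for month in range(1, horizon_months + 1):
        if capex + monthly_opex * month < cloud_monthly * month:
            return month
    return None

month = breakeven_month(onprem_capex, onprem_monthly_opex, cloud_monthly_cost)
print(f"On-premise breaks even at month {month}")   # month 32 with these numbers
```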

A feature that could be a pro or a con, depending on how one looks at it, is that IT will be more heavily involved. This sometimes can impact an enterprise’s ability to go to market quickly.

Before moving to an on-premise/private cloud database, it’s important to examine the expected ROI: if the organization can wait two to three years or more to break even, this option can be justified, but that timeline may not work for every organization.

Perceived security and compliance are other considerations. Some industries have security regulations that require strict compliance, such as financial services and health care. Countries like Canada, Germany, and Russia are drafting stricter data residency and sovereignty laws that require data to remain in the country to protect their citizens’ personal information. Doing business in those countries, while housing data in another, would be in violation of those laws.

Security measures and disaster recovery both must be architected into a solution as well.

Hybrid Cloud

A hybrid cloud is flexible and customizable, allowing managers to pick and choose elements of either public or private cloud as needs arise. The biggest advantage of hybrid cloud is the ability to do “cloud bursting.” A business running an application on premise may experience a spike in data volume during a given time of month or year. With hybrid, it can “burst” to the cloud to access more capacity only when needed, without purchasing extra capacity that would normally sit unused.
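As a rough illustration of the bursting decision, here is a minimal sketch; the orchestrator object and its provision/release methods are hypothetical stand-ins for whatever tooling actually manages capacity:

```python
ON_PREM_CAPACITY_RPS = 1000    # requests/sec the private cloud can absorb
BURST_THRESHOLD = 0.85         # start bursting above 85% of that capacity

def place_load(current_rps, orchestrator):
    """Keep load on premise until it nears capacity, then overflow the excess
    to the public cloud; scale the burst back down when the spike passes."""
    utilization = current_rps / ON_PREM_CAPACITY_RPS
    if utilization > BURST_THRESHOLD:
        overflow_rps = current_rps - ON_PREM_CAPACITY_RPS * BURST_THRESHOLD
        orchestrator.provision_public_capacity(overflow_rps)   # hypothetical hook
    else:
        orchestrator.release_public_capacity()                  # hypothetical hook
```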

A hybrid cloud lets an enterprise self-manage its environment without relying too much on IT, and it gives the flexibility to deploy workloads wherever business demands dictate.

More importantly, disaster recovery is built into a hybrid solution, which removes a key concern. An organization can also mitigate some constraints of data sovereignty and security laws with a hybrid cloud; some data can stay local and some can go into the cloud.

One con of a hybrid cloud is that integration is complicated; trying to fold an on-premise environment into a public cloud adds complexity that may lead to security issues. Hybrid cloud also can lead to sprawl, where the growth of the computing resources underlying IT services is uncontrolled and exceeds what the number of users actually requires.

While hybrid gives the flexibility to leverage the current data center environment with some best-of-breed SaaS offerings, it’s important to have a way to govern and manage sprawl. Equally as important is having a data migration strategy architected into a hybrid cloud. This helps reduce complexity while enhancing security.

Public Cloud

The main advantage with public cloud is its almost infinite scalability. Its cost model, too, is an advantage, with pay-as-you-go benefits. It offers faster go-to-market capability and gives an enterprise the ability to utilize newer applications, as using legacy applications in the cloud can be challenging.

As in a hybrid cloud, sprawl can also be a problem in the public cloud. Without a strategy to manage and control a public cloud platform, costs can spiral and negate the savings and efficiency. Also keep in mind that the public cloud may open the door to shadow IT, creating a security issue.

Data visibility is another downside; once data goes into a cloud, it can be hard to determine where it actually resides, and sovereignty laws can come into play for global enterprises. Trust in the public cloud is an issue for CIOs and decision makers, which is why hybrid — the best of both worlds — is such a popular deployment option.

Public clouds also are often homogeneous by nature; they are meant to satisfy many different enterprises’ needs (vs. on premise, which is designed just for one company), so customization can be a challenge.

While a public cloud is operational expenditure (OPEX) friendly, it can get expensive after the first 36 months. Keep total cost of ownership (TCO) in mind when deploying a workload: its lifecycle, its overall cost benefit, and how the true cost of that application will be tracked.

Latency issues can occur, depending on how an enterprise has architected its public cloud and how it has deployed applications or infrastructure, which can greatly affect the quality of user experience. To improve performance, distributing apps and data close to a user base is a better solution than the traditional approach, where everything is in one data zone.
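As an illustration of the “distribute apps and data close to users” point, here is a rough sketch that picks the closest of several hypothetical deployment regions for a given user instead of routing everyone to a single data zone; real deployments would use measured latency rather than coordinates:

```python
import math

# Hypothetical regions and rough data-center coordinates (latitude, longitude).
REGIONS = {
    "us-east":  (38.9, -77.0),
    "eu-west":  (53.3, -6.3),
    "ap-south": (19.1, 72.9),
}

def nearest_region(user_lat, user_lon):
    """Pick the closest region by simple coordinate distance; a crude proxy
    for latency, but enough to show the placement idea."""
    return min(
        REGIONS,
        key=lambda name: math.hypot(REGIONS[name][0] - user_lat,
                                    REGIONS[name][1] - user_lon),
    )

print(nearest_region(48.8, 2.3))   # a user in Paris lands in "eu-west"
```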

Disaster recovery will be built in, so there is no need for the enterprise to architect it on its own. Security with a public cloud is always a challenge, but it can be mitigated with proper measures such as at-rest encryption and well-thought-out access management tools and processes.
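For the at-rest encryption point, here is a minimal sketch using the widely available cryptography package for Python; key storage and access control are deliberately left out, since in practice they belong to whatever key-management and IAM services your provider offers:

```python
from cryptography.fernet import Fernet

# In practice the key lives in a key-management service, never in code.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b'{"customer_id": 42, "email": "user@example.com"}'

encrypted = cipher.encrypt(record)     # what actually lands in cloud storage
decrypted = cipher.decrypt(encrypted)  # only holders of the key can read it back
assert decrypted == record
```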

Appliance Database

Traditionally, this is an on-premise solution — either managed by a vendor or in an enterprise’s own data center. There are many popular vendors that provide this solution, and using one vendor to control the complete solution can offer performance and support gains.

However, this also can be a disadvantage, because it locks an enterprise into a single vendor, and appliance-based databases tend to be a niche, use-case-specific option. Vendor selection is an essential process to make certain that the partnership works both in the present and the future.

Appliance databases, because of their specialized, task-specific nature, are expensive. They can be cost-effective over time if they are deployed properly.

Virtualized Database

One advantage of virtualization is the ability to consolidate multiple applications onto a given piece of hardware, which leads to lower costs and more efficient use of resources.

The ability to scale is built into a virtualized environment, and administration is straightforward, with a number of existing tools for managing virtualized environments.

With virtualization, patching can sometimes be an issue; each OS sits on top of a hypervisor, and IT may have to patch each virtual machine (VM) separately on each piece of hardware.
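To illustrate that per-VM overhead, here is a rough sketch of the loop an administrator (or, more realistically, a configuration-management tool) ends up running; the hostnames and the apt-based update command are assumptions for the example:

```python
import subprocess

# Hypothetical inventory of guest VMs spread across the virtualization hosts.
vms = ["db-vm-01", "db-vm-02", "app-vm-01", "app-vm-02"]

for vm in vms:
    # Each guest OS gets patched individually; there is no single
    # hypervisor-level switch that updates every VM at once.
    subprocess.run(
        ["ssh", vm, "sudo apt-get update && sudo apt-get -y upgrade"],
        check=True,
    )
```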

It’s best to plan for a higher initial CAPEX, because the cost of installing a database needs to be accounted for. An enterprise can opt for an open-source solution such as the kernel-based virtual machine (KVM), but this route often requires additional setup expenses.

A con is that the underlying hardware becomes a single point of failure; if a host fails, the VMs on it go down. Fault-tolerant disaster recovery is a major concern and must be well architected.

There can be network traffic issues because multiple applications will be trying to use the same network card. The server an enterprise employs must be purpose-built for the virtualized environment.

Virtualization is ideal for repurposing older hardware to some extent, because IT can consolidate many applications onto hardware that might otherwise have been written off. It is well suited to clustering; being able to cluster multiple VMs across multiple servers is a key benefit for disaster recovery.

It comes with an up-front CAPEX, but over time OPEX is reduced because of consolidation (many processes are automated), so lower operational expenses lead to a quicker return and a lower total cost of ownership. However, licensing costs can get expensive.

An enterprise can achieve better data center resource utilization because of the smaller footprint, which saves on the cost of running servers and allows the enterprise to host multiple virtual databases on the same physical machine while maintaining complete isolation at the operating system layer.

Selecting the Right Database

As you can see, selecting a deployment option is not a trivial matter. So how can a CIO or systems integrator (SI) mitigate the risk of choosing one over another? Cost can’t be the only driver.

Just as the mainframe eventually led to the cloud, enterprises may find success if they can enable a simple path from legacy on-prem databases to a private cloud with APIs into the public cloud. That path connects a legacy architecture to mobile, the Internet of Things (IoT), and artificial intelligence (AI), and can serve as a launching pad for a hybrid cloud architecture built on best-of-breed public cloud services: storage, applications, and so on.

Every enterprise has its own challenges, goals and needs, and there is no one-size-fits-all recommendation when selecting a database. Carefully examine your own infrastructure as well as ROI expectations, long-term business goals, sovereignty laws, IT capabilities, and resource allocation to determine which of these deployment options is the right one for your enterprise, now and years down the line.


Doing cloud computing? You need devops, too

I’ve gone from recommending that you should have devops to you must have devops. At least, if a public cloud is in your future.

There are two fundamental reasons why cloud computing and devops are necessary complements.

First, the public clouds are all about automation, and so is devops. Devops takes advantage of orchestration systems that can autoprovision and autoscale, as well as proactively monitor the application workloads and data sets in the public clouds.


5 Common Issues Hindering Cloud Instancing Optimization

Public cloud environments such as AWS, Microsoft® Azure, and Google® Cloud Platform have been promoted as a means of saving money on IT infrastructure resources. Unfortunately, this is often not the case. The increasing complexity of cloud offerings and the lack of visibility most organizations have into these environments make it difficult to control costs effectively. Many organizations unwittingly overprovision in the public cloud, an error that’s too costly to ignore. By avoiding five of the most common mistakes, teams can maximize cloud resource efficiency and reduce performance risk in these newer environments.

Mistake #1: Not Understanding the Detailed Application Workload Patterns

Not all workloads are created equal and regardless of which public cloud you’re leveraging, the devil is in the details when it comes to cloud instance selection. It’s important to understand both the purpose of the workload and the detailed nature of the workload utilization pattern.

The economics of running a batch-job workload in the public cloud that wakes up to do its work once at the end of every month are very different from those of apps that are constantly busy, with varying peaks and valleys throughout the day. To select the right resources and cloud instance, you really need to understand the intra-day workload pattern and how that pattern changes over a business cycle.

Unfortunately, many organizations take a simplistic approach to analyzing their workloads, looking only at daily averages or percentiles instead of taking a thorough, in-depth dive into the specific patterns. The result is an inaccurate picture of resource requirements, which can cause both over-provisioning and performance issues. Rarely do these simple approaches get it right. When you are looking for a solution to help you select the right cloud instance, choose one that truly understands the detailed utilization patterns of the workloads.
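To see why daily averages mislead, here is a small sketch with made-up hourly CPU samples for a workload that idles most of the day and spikes hard for a few hours; the average points to a much smaller instance than the spike actually needs:

```python
# Made-up hourly CPU utilization (%) for one day: quiet overnight, busy mid-day.
hourly_cpu = [5, 5, 4, 6, 5, 7, 10, 15, 30, 70, 95, 98,
              97, 90, 60, 30, 20, 15, 10, 8, 6, 5, 5, 4]

daily_average = sum(hourly_cpu) / len(hourly_cpu)
peak = max(hourly_cpu)

print(f"daily average: {daily_average:.0f}%  peak: {peak}%")
# Sizing to the ~29% average starves the workload at its 98% peak;
# sizing to the peak alone ignores roughly 20 near-idle hours.
```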

Mistake #2: Not Leveraging Benchmarks to Normalize Data Between Platforms

A common approach to sizing resource allocations for the cloud is to size “like for like” when moving from one virtual or cloud environment to another, meaning a workload is allocated the same resources it had in the old environment. But not every environment runs the same hardware with the same specifications. If you don’t use benchmarks to normalize workload data and account for the performance differences in the underlying hardware between environments, you won’t get an accurate picture of how that workload will perform in the new environment.

Newer environments often run more powerful hardware, giving you more bang for your buck; as a result, workloads often don’t require the same resource allocation. This is key when transforming servers and also when optimizing your public cloud use, as providers are constantly offering updated cloud instance types running on new hardware. To avoid leaving money on the table, you need to be able to compare “apples to apples,” and the only way to do that is by normalizing the data.
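A minimal sketch of that normalization, with made-up per-core benchmark scores: scale the observed CPU demand by the ratio of old-platform to new-platform performance before choosing a target size:

```python
# Hypothetical per-core benchmark scores for the two platforms.
old_platform_score = 120   # aging on-premise hosts
new_platform_score = 210   # current-generation cloud instance family

observed_peak_cores = 8    # cores the workload consumes at peak today

# The same work needs fewer of the faster cores on the new platform.
normalized_cores = observed_peak_cores * old_platform_score / new_platform_score
print(f"~{normalized_cores:.1f} cores needed on the new platform")   # ~4.6
```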

Mistake #3: Focusing on Right-Sizing and Ignoring Modernizing Workloads

Modernizing workloads to newer cloud instance offerings running on newer, more powerful hardware can be a very effective means of reducing costs. In fact, we have found that right-sizing instances alone typically delivers 20% savings on a public cloud bill, whereas modernizing and right-sizing together increase savings to 41% on average.

With the dizzying number of services and instance types that public cloud vendors offer, it is difficult for organizations to choose the right instance, let alone keep up with the new options. The potential savings though, make it a worthwhile effort. As mentioned, to do this properly requires a detailed understanding of the workloads, the cloud instance catalogs, costs, and the ability to normalize the data to account for performance differences between environments. This isn’t something that can be done manually and requires a thorough analysis to find the right combinations to save money and ensure performance. It’s also something that should be done regularly as apps deployed even a few months ago may be great modernization candidates.
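A toy sketch of that selection step, with an entirely made-up instance catalog: given the normalized requirements, pick the cheapest instance, including newer generations, that still fits the workload:

```python
# Made-up catalog: name -> (vCPUs, memory GiB, $/hour). Newer generations often
# cost less per unit of capacity, which is where the modernization win comes from.
catalog = {
    "gen1.large":  (4, 16, 0.38),
    "gen1.xlarge": (8, 32, 0.76),
    "gen2.large":  (4, 16, 0.30),
    "gen2.xlarge": (8, 32, 0.60),
}

def cheapest_fit(needed_vcpus, needed_mem_gib):
    """Cheapest catalog entry that satisfies the normalized requirements."""
    fits = [(price, name) for name, (vcpus, mem, price) in catalog.items()
            if vcpus >= needed_vcpus and mem >= needed_mem_gib]
    return min(fits)[1] if fits else None

print(cheapest_fit(4.6, 20))   # -> "gen2.xlarge" with these made-up numbers
```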

Mistake #4: Getting Caught in the ‘Bump-up Loop’

The “bump-up loop” is an insidious cycle that leads to over-provisioning and overspending. Let’s say a workload is running and you see CPU at 100 percent. A simple tool would look at this, deem the workload under-provisioned and recommend bumping up the CPU resources (and the cost of your cloud instance). The problem is that some workloads will use as much resource as they’re given. If you provision more CPU, these apps will take it and still run at 100 percent, perhaps just for a shorter time. The cycle repeats itself, and you’re stuck in the costly bump-up loop.

To avoid this resource-sucking loop, you need to understand exactly what a workload does and how it behaves. Again, we come back to the need to understand the individual workload patterns and nature of the workload. This is particularly important as you look at memory, which is a major driver of cloud cost.
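One hedged way to spot a bump-up-loop candidate: if the total CPU-hours a workload consumes stay roughly flat after a size increase, it is doing a fixed amount of work and will peg whatever it is given, so another bump won't help. A rough sketch of that check with made-up utilization samples:

```python
def cpu_hours(hourly_pct, vcpus):
    """Approximate CPU-hours consumed from hourly utilization samples (%)."""
    return sum(pct / 100 * vcpus for pct in hourly_pct)

# Made-up samples before and after bumping the instance from 4 to 8 vCPUs.
before = [100] * 6 + [5] * 18    # pegged for 6 hours a day on 4 vCPUs
after  = [100] * 3 + [5] * 21    # pegged for 3 hours a day on 8 vCPUs

work_before = cpu_hours(before, 4)   # ~27.6 CPU-hours
work_after  = cpu_hours(after, 8)    # ~32.4 CPU-hours -- roughly the same job

if abs(work_after - work_before) / work_before < 0.3:
    print("Fixed-work batch job: more vCPUs just shortens the peak; it will still hit 100%")
```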

Mistake #5: Letting Idle Zombie Instances Go Unmanaged

Most organizations don’t have an effective process for identifying idle “zombie” instances, which slip under the radar and pile up over time. They usually result from someone hastily deploying an instance for a short-term need and forgetting to shut it down. Zombie instances do nothing but waste budget. To avoid this unnecessary cost, organizations must look at the workload pattern across a full business cycle, which requires weeks or months of history. Identifying and eliminating this deadwood can easily save thousands a year, but it requires longer-term visibility into the workload than most tools provide.
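A minimal sketch of that check, assuming you can export per-instance CPU history covering a full business cycle; instances whose peak never clears a small threshold become candidates for review:

```python
IDLE_THRESHOLD_PCT = 5   # a peak below this over a full cycle looks like a zombie

# Hypothetical weekly peak CPU (%) per instance, pulled from your provider's
# monitoring tools over an entire business cycle.
cpu_history = {
    "web-prod-1":    [12, 45, 80, 33, 25, 60],
    "test-leftover": [1, 2, 1, 0, 1, 1],
    "batch-eom":     [2, 1, 1, 98, 3, 2],   # busy only at month end -- not a zombie
}

zombies = [name for name, samples in cpu_history.items()
           if max(samples) < IDLE_THRESHOLD_PCT]
print("Review and likely terminate:", zombies)   # ['test-leftover']
```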

Most organizations don’t realize how much money they are leaving in their public cloud. Getting that money back requires paying much closer attention to understanding how your workloads utilize resources and what they truly need to work as efficiently as possible without compromising performance. Having the ability to understand the detail is the only way to avoid a hemorrhaging cloud budget.


SecureDoc CloudVM from WinMagic

WinMagic announced the immediate availability of SecureDoc CloudVM in the Amazon Web Services Marketplace. WinMagic, an award-winning provider of encryption and intelligent key management security solutions, is now featured in the Marketplace’s Software Infrastructure + Security category, bringing key management and volume and full-disk encryption to customers operating on AWS cloud computing infrastructure. By combining WinMagic’s SecureDoc CloudVM data security solution with the agility and elasticity of AWS public cloud services, enterprises can focus on driving business value rather than trying to manage multiple security solutions.

SecureDoc CloudVM leverages all the strengths built into WinMagic’s SecureDoc endpoint encryption to provide enterprises with a flexible, high-performing data security solution that protects virtualized and public, private and hybrid cloud workloads and data. SecureDoc CloudVM is offered in the marketplace on both bring-your-own-license (BYOL) and subscription models, including hourly subscriptions, with several features that apply intelligence to encryption, speed time to market, increase efficiency and reduce risk. Some of the powerful performance and security features include:

  • In-guest encryption and enterprise-controlled key management that detaches encryption management from the hypervisor, so keys and data are never exposed to persons outside your organization
  • Cloud IaaS workload protection against data breaches, undisclosed government access, or privileged-user mishandling
  • Industry-leading conversion times that allow quick crypt, snapshot and replicated workloads to be managed without having to decrypt and re-encrypt, plus the industry’s only online and offline conversion for both Windows- and Linux-based VMs
  • Network Pre-Boot that ensures all workloads are authorized before opening, by authenticating workloads against SecureDoc, or authorizing through pre-boot for new workloads
  • Intelligent Key Management & Encryption engine policies that enable granular administration rights such as time, geography, and cloning restrictions to enforce compliance and security requirements like GDPR and data sovereignty

http://www.winmagic.com/
