Say Hello To Hyperscale

Hyperscale computing is a set of architectural patterns for delivering scale-out IT capabilities at massive, industrialized scale. Many enterprises that are pressured to develop innovations quickly and “hyper” scale those innovations to millions of users worldwide are now looking to hyperscale cloud.

Hyperscale computing has, until now, involved abstracting data centers into software running on low-cost servers and standard storage drives. These large-scale data centers are located near low-cost power sources and achieve availability through a massive build-out of redundant components. Hyperscale computing usually involves at least half a million servers, virtual machines (VMs), or containers (the bragging rights seem to be about how much real estate one owns).

Lesson from Netflix

The poster child for hyperscale cloud is Netflix, whose architecture is documented on the Netflix technology blog. What is interesting is that, per the Netflix blog, “Failures are unavoidable in any large-scale distributed system, including a cloud-based one. However, the cloud allows one to build highly reliable services out of fundamentally unreliable but redundant components.”

This raises certain questions. Can enterprise companies use hyperscale computing technology in their own data centers to deliver mission-critical applications? Think of these key applications like electricity: you expect them to work at all times, and things really fall apart without them. If the public hyperscale cloud is built with an expectation of failure on top of unreliable, low-cost components, can hyperscale capabilities be created in private or hybrid data centers that meet the service-level agreements (SLAs) mission-critical workloads require?

The answer is, “possibly.” Using commodity servers involves a massive investment in huge data center footprints and associated power management — plus hiring lots of people to do what Netflix is doing. For example, they say: “By incorporating the principles of redundancy and graceful degradation in our architecture, and being disciplined about regular production drills using Simian Army, it is possible to survive failures in the cloud infrastructure and within our own systems without impacting the member experience.”
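
To make the principle concrete, here is a minimal Python sketch of redundancy with graceful degradation. The replica hosts, the recommendation call, and the canned fallback list are hypothetical stand-ins, not Netflix’s implementation.

    import random

    REPLICAS = ["recs-1.internal", "recs-2.internal", "recs-3.internal"]
    FALLBACK_RECOMMENDATIONS = ["popular-title-1", "popular-title-2"]

    def call_replica(host: str, member_id: str) -> list:
        """Stand-in for a real RPC; fails randomly to simulate unreliable nodes."""
        if random.random() < 0.3:
            raise ConnectionError(f"{host} unavailable")
        return [f"title-for-{member_id}-from-{host}"]

    def get_recommendations(member_id: str) -> list:
        # Redundancy: try each replica in turn instead of trusting any single node.
        for host in REPLICAS:
            try:
                return call_replica(host, member_id)
            except ConnectionError:
                continue
        # Graceful degradation: serve a canned list rather than an error page.
        return FALLBACK_RECOMMENDATIONS

    if __name__ == "__main__":
        print(get_recommendations("member-42"))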

Hyperscale: A Possibility for All?

This does not seem feasible for many enterprises, which are often trying to get out of the business of managing large data centers as a core competency. And not all workloads are created equal: businesses run a combination of mission-critical, mission-essential, and differentiating services for customers, partners, and employees, all of which require enterprise scale and reliability.

7 Characteristics of a Mainframe That Deliver Hyperscale Cloud

So, the real question is: what happens if one builds the next-gen hyperscale architecture using highly reliable, high-security components and systems? From the outset, the mainframe has been architected to support high-performance transactional systems with the highest security for “electricity-like” workloads. But can z systems be part of a hyperscale computing environment and deliver on its promise? Here are seven key characteristics of a mainframe that support hyperscale cloud infrastructure:

  1. Software-defined: All compute, storage, middleware, and networking for mission-critical application/services are enabled through a software-defined architecture. That is, all elements of the infrastructure are virtualized and delivered as a service.
  2. Available and elastic: Hyperscale data centers with z systems bring the best of both hyperscale and z systems to deliver the availability and elasticity required by enterprise-scale workloads. Hyperscale excels at scale-out (many jobs spread across 500K servers or VMs), while z systems perform best in scale-up mode and deliver five 9s of availability through high-availability approaches like Sysplex. The result: a much more available, reliable, and elastic infrastructure that can handle different types of enterprise workloads.
  3. Open: Despite what you may have heard, the z systems ecosystem includes many open source elements, including support for enterprise-grade, native distributions of the Apache Spark in-memory analytics engine. And with Linux on z, almost all open source software available for Linux is accessible, including Docker containers, artificial intelligence (AI) and machine learning frameworks (Google TensorFlow), and modern languages (Go, Python, etc.).
  4. Highly secure: Security in a hyperscale data center with next-gen mainframes is improved because they offer enterprise-grade security and compliance capabilities that no other server can match (e.g., EAL5 certification, crypto containers).
  5. Energy-sustainable: Low power consumption and a small footprint are hallmarks of z systems; their power-management and density characteristics provide significant efficiencies compared with arrays of commodity servers.
  6. Intelligent automation: Workloads can be orchestrated across the data center based on their unique hardware and software requirements and the SLAs agreed with the business (a minimal placement sketch follows this list). Advances in AI and machine learning can make intelligent automation a reality and bring IT closer to the vision of “NoOps.”
  7. Vendor-neutral: An industry ecosystem has grown up around IBM and the world’s largest solution providers, who are building hyperscale data centers to support enterprise customers.
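
Here is the placement sketch promised in item 6: a minimal, hedged illustration of SLA-aware workload placement in Python. The resource pools, availability figures, and selection rule are illustrative assumptions, not an actual z systems or cloud orchestrator.

    from dataclasses import dataclass

    @dataclass
    class Pool:
        name: str
        availability: float   # e.g., 0.99999 for five 9s
        free_capacity: int    # arbitrary capacity units

    @dataclass
    class Workload:
        name: str
        required_availability: float
        required_capacity: int

    def place(workload: Workload, pools: list) -> str:
        """Pick the pool with the most headroom that still meets the workload's SLA."""
        eligible = [p for p in pools
                    if p.availability >= workload.required_availability
                    and p.free_capacity >= workload.required_capacity]
        if not eligible:
            raise RuntimeError(f"no pool satisfies the SLA for {workload.name}")
        chosen = max(eligible, key=lambda p: p.free_capacity)
        chosen.free_capacity -= workload.required_capacity
        return chosen.name

    pools = [
        Pool("commodity-scaleout", availability=0.999, free_capacity=500),
        Pool("z-sysplex", availability=0.99999, free_capacity=50),
    ]
    print(place(Workload("payments", 0.99999, 10), pools))        # -> z-sysplex
    print(place(Workload("batch-reporting", 0.99, 100), pools))   # -> commodity-scaleout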

What does all this mean? It means that with a hyperscale cloud that includes next-gen mainframes in the data center, your business can capture all the advantages of hyperscale computing: you can leverage high-performance block solutions, support mission-critical workloads with complete reliability, and facilitate end-to-end security and compliance.

Effective and Economical Hyperscale Computing

When I talk to companies about hyperscale, two concerns typically arise. First, managers assume they will need to learn new skill sets to program and code in a z systems environment compared to the way they program and code in the cloud. Simply put: this is not true. For example, application developers are increasingly able to develop in Java for mobile-to-mainframe applications, without ever having to touch a green screen. The vendor ecosystem has continued to deliver new tools to new developers on the mainframe.

The second concern is cost. But think of it this way: instead of having a gigantic hyperscale industrial data center, you can have one system sitting on the floor using less power than a coffee machine, and one person operating the equivalent of a state’s department of motor vehicles portal. That’s hard to beat!

As your workloads increasingly demand electricity-like, always-on IT services, you will need the power of hyperscale computing. Hyperscale data centers that rely on z systems can deliver that for you — and improve your total cost of ownership (TCO), security, platform stability, and business agility as well.


Cloud Management Platform from IndependenceIT

IndependenceIT’s flagship cloud management software solution offers service providers, independent software vendors (ISVs) and enterprises the most powerful solution for Software Defined Data Centers (SDDCs), workspace automation and app services enablement.

Cloud Workspace® has been recognized by leading market research firms for technology innovations in cloud management, WaaS, and application workload delivery. In a report by International Data Corporation (IDC) titled IndependenceIT’s Cloud Workspace Suite 5.0 – Enabling Choice in Virtual Client Computing Deployments, analysts noted the solution’s ability to enable enterprise IT organizations and service providers to leverage a mix of public or private cloud resources to best implement and manage virtualized desktops and/or applications. The company has also been named a Gartner Cool Vendor for its cloud management platform.

IndependenceIT's Cloud Management Platform

http://www.independenceIT.com


Intel IoT Gateway Technology-based Platforms from Advantech

Advantech has launched a full range of IoT gateways to serve a wide array of application environments. Powered by Intel® IoT Gateway Technology, the line comprises fanless box PCs, embedded automation computers, and video surveillance, fleet management, and in-vehicle series. These gateways provide a foundation for connecting devices seamlessly and securely, delivering trusted data to the cloud, and adding value through analytics. They enable Machine-to-Machine (M2M) communication, Integrated Services Router (ISR) functionality, and cellular connectivity for areas such as industry, smart buildings, retail, and transportation.

The Intel IoT Gateway Technology solution is built on the Wind River Intelligent Device Platform XT to speed innovation. Developers can quickly develop, prototype, and deploy intelligent gateways that meet emerging IoT market requirements while maintaining interoperability with legacy systems, from sensors to datacenter servers. The solution comes fully preconfigured and pre-validated with hardware, software, and security capabilities.
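
To make the gateway’s role concrete, here is a minimal Python sketch that reads a local sensor and forwards the reading to a cloud broker over MQTT with TLS. The broker address, topic, and sensor read are hypothetical, and the snippet assumes the paho-mqtt 1.x client library; it is not Advantech or Wind River code.

    import json
    import random
    import time

    import paho.mqtt.client as mqtt  # assumes the paho-mqtt 1.x client API

    BROKER = "mqtt.example-cloud.com"     # hypothetical cloud ingest endpoint
    TOPIC = "factory/line-3/temperature"

    def read_sensor() -> float:
        """Stand-in for a real Modbus/serial sensor read."""
        return 20.0 + random.random() * 5

    def main() -> None:
        client = mqtt.Client(client_id="gateway-demo")
        client.tls_set()                  # TLS, for delivering trusted data to the cloud
        client.connect(BROKER, port=8883)
        client.loop_start()
        try:
            while True:
                payload = json.dumps({"ts": time.time(), "celsius": read_sensor()})
                client.publish(TOPIC, payload, qos=1)
                time.sleep(10)            # report every 10 seconds
        finally:
            client.loop_stop()
            client.disconnect()

    if __name__ == "__main__":
        main()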

These kits are designed to simplify integration, minimizing cost and accelerating time-to-market with a complete solution that includes a fully configured board, chassis, power supply, antennas, and software.

www.advantech.com/


Cloud Backup from Acronis

Acronis Backup Cloud has added Plesk, cPanel, and generic website backup functionality to its complete hybrid cloud backup-as-a-service platform.

By way of an extension added to existing cPanel and Plesk servers, Acronis Backup Cloud blends into Plesk and cPanel’s native multi-tier, multi-tenant architecture. It displays an Acronis widget in the administrator and user control panels, provides full image-based server backup and recovery for administrators, and offers granular self-service website recovery for hosting customers. Acronis Backup Cloud also includes support for Microsoft Office 365 mailbox backup and recovery.

Microsoft Office 365 has become a cornerstone product for service provider customers. If Office 365 data is lost — for example, if a user accidentally deletes their data — the customer’s business communications can be severely disrupted. Acronis ensures that both local and cloud copies are maintained for granular recovery and long-term storage.

Acronis Active Protection™, an advanced ransomware protection technology proven by independent tests, will be added to Acronis Backup Cloud in the coming month, keeping data stored in the Acronis Cloud out of reach of ransomware crooks.

http://www.acronis.com


Storage Acceleration from SoftNAS

SoftNAS’s UltraFast™ is an intelligent, self-tuning storage accelerator for the WAN and cloud.

UltraFast delivers performance improvements for use cases such as:

  • Faster offsite backups and retrieval of data via optimized S3 object storage
  • High-performance S3 object storage access from on-premises environments
  • Replication between data centers over an enterprise WAN
  • Replication optimization for Remote Office/Branch Office (ROBO)
  • Master copy data management for global content publishing
  • Disaster recovery to the cloud for quick synchronization of data from any source

SoftNAS UltraFast will be an add-on feature to the SoftNAS product portfolio. Benefits include:

  • Migrate data to the cloud: Optimize data streams for large-scale data transfers between geographically dispersed IT environments.
  • Replicate data for disaster recovery: Replicate data to the cloud for cost-effective failover.
  • Integrate islands of data: Quickly transfer data among remote offices, factories, and corporate data centers for big data analytics.
  • Scheduled bandwidth throttling: Automatically regulate and prioritize network traffic and performance according to administrator-defined bandwidth limits and schedules (see the sketch after this list).
  • Leverage existing environment: Work with existing applications, storage (hardware and software) and networks to improve network efficiency without additional hardware.
  • Increase bulk data transfer rates: Speed transfers by up to 5 times (based on 1-4 GB file sizes across high-quality networks with <0.1% packet loss).
  • Improve WAN link utilization: Achieve up to 16 times the link utilization of TCP alone, with 95% link efficiency, overcoming bandwidth degradation caused by long round trips over large geographic distances and packet loss from multiple routers (based on 1-4 GB file sizes across high-quality networks with <0.1% packet loss).
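
As a rough illustration of the scheduled bandwidth throttling item above, the Python sketch below paces a bulk transfer against a time-of-day bandwidth schedule. The schedule, limits, and transfer loop are assumptions for illustration only, not the UltraFast implementation.

    import datetime
    import time

    # (start_hour, end_hour, limit in megabits per second)
    SCHEDULE = [
        (8, 18, 50),     # business hours: leave headroom for interactive traffic
        (18, 24, 400),   # evening: open the link up for bulk replication
        (0, 8, 400),     # overnight: same
    ]

    CHUNK_BYTES = 4 * 1024 * 1024   # send data in 4 MiB chunks

    def current_limit_mbps(now: datetime.datetime) -> int:
        for start, end, limit in SCHEDULE:
            if start <= now.hour < end:
                return limit
        return 50   # conservative default

    def throttled_send(total_bytes: int) -> None:
        """Pace a bulk transfer so it never exceeds the scheduled limit."""
        sent = 0
        while sent < total_bytes:
            limit_bps = current_limit_mbps(datetime.datetime.now()) * 1_000_000 / 8
            chunk = min(CHUNK_BYTES, total_bytes - sent)
            # ...the actual network write of `chunk` bytes would go here...
            sent += chunk
            time.sleep(chunk / limit_bps)   # sleep long enough to respect the cap

    if __name__ == "__main__":
        throttled_send(64 * 1024 * 1024)    # pace a 64 MiB transfer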

SoftNAS UltraFast storage accelerator

https://www.softnas.com/


The 3 dumbest things enterprises do in the cloud

You’re going to make mistakes. I tell my enterprise clients that every week.

However, there are mistakes, and then there are mistakes that are more like self-inflicted wounds. Here are three of the dumbest mistakes I’m now seeing enterprises make in their cloud efforts.

Dumbest mistake No. 1: Keeping the data on premises but the compute in the cloud

When helping clients plan their cloud efforts, I regularly hear, “My data is sacred, so we don’t want to put our data in the cloud. However, we’re paying too much for compute and datacenter space, so let’s place that on some public cloud.”

That is not a good move, for a couple of reasons. First, you’re going to hit a great deal of latency; I’ve never seen this kind of hybrid architecture work well because of the lag. Second, security becomes much more difficult, and you typically end up with more vulnerabilities.
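
Some back-of-the-envelope arithmetic shows why. The round-trip times and query count below are illustrative assumptions, not measurements from any particular provider.

    LAN_RTT_MS = 0.5           # app server and database in the same datacenter
    WAN_RTT_MS = 30.0          # cloud compute reaching back to an on-premises database
    QUERIES_PER_REQUEST = 25   # a typically chatty business transaction

    same_site = QUERIES_PER_REQUEST * LAN_RTT_MS
    split_site = QUERIES_PER_REQUEST * WAN_RTT_MS

    print(f"all on premises : {same_site:>7.1f} ms of round trips per request")
    print(f"compute in cloud: {split_site:>7.1f} ms of round trips per request")
    # ~12.5 ms vs ~750 ms: the same application feels fine locally and broken when split.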

Dumbest mistake No. 2: Firing staff working on the legacy systems too soon

Enterprises typically change their budgets around the use of public clouds, and publicly traded companies don’t want any upticks in expenses, even during transitions. For cloud migrations, they budget for a zero-sum game, and to do that they get rid of the staff that looks after the legacy systems—before moving the workloads to the public clouds.

That’s a huge mistake. Typically, significant cloud migrations take a year or more. You’re going to need your legacy systems during that time to run the business. So you still need your legacy staff for a good while. Moreover, you’re never going to completely get all your applications on the public cloud. Many applications should not move due to their economics, and others can’t move due to some limitations in the technology. So you still need some of your legacy staff for the long term. You’ll still save, but only over the longer run.

Dumbest mistake No. 3: Overselling the benefits of the cloud

My, how the pendulum has swung! The people in enterprise IT who pushed back on the cloud just a few years ago are now aggressively embracing it. They see the writing on the wall.

But in a hype-driven frenzy, they are overstating the ROI that public cloud computing will bring. As a result, they are falling short in the eyes of the enterprise’s leadership. 

The truth is that the mileage you get from cloud computing varies a great deal. That’s why I spend a great deal of time on the business case to tell enterprises exactly what they can expect. You have to do that, too.


Focusing on the cheapest cloud price could cost you more

Amazon Web Services really leads the way in determining market price for cloud services, and the second-, third-, and lower-tier cloud providers try to price their cloud services below that of AWS to steal its business. That is, until AWS drops prices—again.

Enterprises that focus only on cloud usage prices are missing the big—and more important—picture.

For example, say that you move 100 applications and their linked data to a public cloud provider. It charges you a certain usage price for compute and storage, which is set at the time you provision those resources. 

Your applications use those resources under specific patterns of use. That means they take a specific amount of compute for a specific amount of storage, and do so consistently until things change. The pattern of use includes network usage, security services, and other services that those applications also use. 

No matter what your initial prices are for these services, your monthly bill will go up to run the same applications. What gives? The answer is that things are never consistent. There’s the natural organic growth of data, there are functional changes demanded by users, and there are the tweaks developers make while maintaining the applications. All those factors mean the pattern of use slowly changes, and your costs follow along as you use services more than expected, use the services in different ratios, and/or add new services.
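
Here is a quick, hedged illustration of how small monthly drift compounds; the starting bill and growth rates are assumptions, not any provider’s actual pricing.

    compute = 10_000.0        # dollars per month at migration time
    storage = 4_000.0
    compute_drift = 0.02      # +2% per month from tweaks and new features
    storage_drift = 0.04      # +4% per month from organic data growth

    for month in range(1, 13):
        compute *= 1 + compute_drift
        storage *= 1 + storage_drift

    print(f"month 12 bill: ${compute + storage:,.0f} vs ${14_000:,} at the start")
    # Roughly $19,100 vs $14,000: about 36% higher for "the same" applications.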

That tendency toward cost creep might seem like a good reason to focus on getting a lower price upfront. But a sole focus on price typically means you’re not doing the necessary work to establish cost metrics and cost governance. Those are how you control costs over the long term; focusing on the initial price and then letting the services just run will save you money only for a short time, and you’ll lose those savings as costs increase.

Indeed, I get calls all the time from cloud users who are surprised by their cloud bills. Something changed, and they have no idea what changed or how it affected their bills. They would have avoided that surprise had they established a cloud governance and metrics program. Such a program includes software that monitors usage patterns and their direct impact on costs in real time. It also requires controls, which these systems often provide through limits on usage and automation that applies predefined cost policies to adjust the service mix on the fly and curb cost growth.
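
As a rough sketch of what such a control can look like, the snippet below projects month-end spend against a predefined budget policy. The services, budgets, thresholds, and actions are hypothetical, not any specific cost-management product’s API.

    # Compare the current run rate against a predefined policy and act before
    # the month-end surprise. Budgets, services, and actions are hypothetical.
    POLICY = {
        "compute": {"monthly_budget": 12_000, "alert_at": 0.80, "cap_at": 1.10},
        "storage": {"monthly_budget": 5_000,  "alert_at": 0.80, "cap_at": 1.10},
    }

    def check_spend(service: str, month_to_date: float, fraction_of_month: float) -> str:
        policy = POLICY[service]
        projected = month_to_date / fraction_of_month   # naive end-of-month projection
        ratio = projected / policy["monthly_budget"]
        if ratio >= policy["cap_at"]:
            return f"{service}: ENFORCE - projected ${projected:,.0f}, throttle non-critical usage"
        if ratio >= policy["alert_at"]:
            return f"{service}: ALERT - projected ${projected:,.0f} vs budget ${policy['monthly_budget']:,}"
        return f"{service}: OK - projected ${projected:,.0f}"

    # Halfway through the month, storage is already at 70% of its budget.
    print(check_spend("compute", month_to_date=5_500, fraction_of_month=0.5))
    print(check_spend("storage", month_to_date=3_500, fraction_of_month=0.5))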

By all means, look for the best pricing you can get. Just don’t stop there. Otherwise, you’ll get a nasty surprise later on.
