Why Cloud Architecture Matters

Choosing an enterprise cloud platform is a lot like choosing between living in an apartment building or a single-family house. Apartment living can offer conveniences and cost-savings on a month-by-month basis. Your rent pays the landlord to handle all ongoing maintenance and renovation projects — everything from fixing a leaky faucet to installing a new central A/C system. But there are restrictions that prevent you from making customizations. And a fire that breaks out in a single apartment may threaten the safety of the entire building. You have more control and autonomy with a house. You have very similar choices to consider when evaluating cloud computing services.

The first public cloud computing services, which went live in the late 1990s, were built on a legacy construct called multi-tenant architecture. Their database systems were originally designed for making airline reservations, tracking customer service requests, and running financial systems, and they featured centralized compute, storage, and networking that served all customers. As user counts grew, the multi-tenant architecture made it easy for these services to accommodate that rapid growth.

In a multi-tenant architecture, all customers are forced to share the same software and infrastructure. That presents three major drawbacks:

  1. Data co-mingling: Your data sits in the same database as everyone else’s, so you rely on software alone for separation and isolation. This has major implications for government, healthcare, and financial regulations. Further, a security breach at the cloud provider could expose your data along with that of everyone else co-mingled in the same multi-tenant environment.
  2. Excessive maintenance leads to excessive downtime: Multi-tenant architectures rely on large and complex databases that require hardware and software maintenance on a regular basis, resulting in availability issues for customers. Departmental applications in use by a single group, such as the sales or marketing teams, can tolerate weekly downtime after normal business hours or on the weekend. But that’s becoming unacceptable for users who need enterprise applications to be operational as close to 24/7/365 as possible.
  3. One customer’s issue is everyone’s issue: Any action that affects the multi-tenant database affects every customer sharing it. A software or hardware problem in the multi-tenant database can cause an outage for all customers, and an upgrade of that database upgrades all customers at once. Your availability and upgrade schedule are tied to every other customer sharing your tenancy. Organizations cannot tolerate this shared approach for applications that are critical to their success. They need software and hardware issues isolated and resolved quickly, and upgrades that follow their own schedules.

With its inherent data co-mingling and availability issues, multi-tenancy is a legacy cloud computing architecture that cannot stand the test of time.

The multi-instance cloud architecture is not built on large, centralized database software and infrastructure. Instead, it allocates a unique database to each customer. This prevents data co-mingling, simplifies maintenance, and makes delivering upgrades and resolving issues much easier, because they can be handled on a per-customer basis. It also provides safeguards against hardware failures and other unexpected outages that a multi-tenant system cannot match.

The provider can replicate the application logic and database for each customer instance between two paired, geographically diverse data centers in each of its eight regions around the world. This replication happens in near real time, with both sides of the paired data centers fully operational and active, and automation can quickly move customer instances between the replicated pairs.
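
To make the contrast concrete, here is a minimal sketch, in Python, of how a multi-instance provider might model per-customer instances and fail them over between a replicated data center pair. The class, customer names, and data center labels are hypothetical illustrations, not ServiceNow’s or any vendor’s actual tooling.

    from dataclasses import dataclass

    @dataclass
    class CustomerInstance:
        """One isolated application and database stack per customer (illustrative)."""
        customer: str
        primary_dc: str    # active data center
        secondary_dc: str  # continuously replicated peer in the same region pair

        def fail_over(self) -> None:
            """Promote the replicated peer when the active side needs maintenance."""
            self.primary_dc, self.secondary_dc = self.secondary_dc, self.primary_dc

    # Each customer's instance is moved independently: an outage or an upgrade
    # touches only that instance, never a shared multi-tenant database.
    fleet = [
        CustomerInstance("acme", primary_dc="iad-1", secondary_dc="iad-2"),
        CustomerInstance("globex", primary_dc="fra-1", secondary_dc="fra-2"),
    ]

    for inst in fleet:
        if inst.primary_dc == "iad-1":  # e.g., a maintenance window on iad-1
            inst.fail_over()            # only instances active on iad-1 move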

It’s important to emphasize that multi-instance is not the same as single-tenant, where the cloud provider deploys a separate hardware and software stack for each customer. Some infrastructure is shared, such as network architecture, load balancers, and common network components, but these are segmented into distinct zones so that the failure of one or more devices affects no more than a few customers. This enables redundancy at every layer. For example, at the internet borders, a vendor might have multiple border routers connecting to several tier-one providers over many different private circuits, direct connections, and separate pieces of fiber.

This leads to another important difference between multi-tenant and multi-instance architectures: the approach to disaster recovery. Permanent data loss is a risk inherent to multi-tenant architectures, and external disaster recovery sites are not a viable safeguard.

True, these are sites that a vendor can fail over to if the active side goes down. But they are tested only a few times a year and used only when an extreme situation arises. If (or when) that happens, they risk failing under load, and when they do, data is lost forever.

That risk virtually disappears in a multi-instance environment. Again, there is not one master file system that services all customers. You can scale out pieces of hardware — stack them on top of each other like LEGO blocks. Each block services no more than a few customers, so one hardware crash cannot affect all the blocks. And because replication is automatic, the secondary side is immediately accessible.

When you partner with a cloud provider that bases its platform on a multi-instance architecture, you’re moving into your own house. Your data is isolated, a fully replicated environment provides extremely high availability, and upgrades happen on the schedule you set, not the provider’s. Cloud architecture matters because you’re in control, and you’re better protected when disaster strikes.


Transitioning To An Agile IT Organization

If you have even a passing interest in software development, you’re likely familiar with the premise of agile methods and processes: keep the code simple, test often, and deliver functional components as soon as they’re ready. It’s more efficient to tackle projects using small changes, rapid iterations, and continuous validation, and to allow both solutions and requirements to evolve through collaboration between self-organizing, cross-functional teams. All in all, agile development carves a path to software creation with faster reaction times, fewer problems, and better resilience.

The agile model has been closely associated with startups that are able to eschew the traditional approach of “setting up walls” between groups and departments in favor of smaller, more focused teams. In a faster-paced and higher-risk environment, younger companies must reassess priorities more frequently than larger, more established ones; they must recalibrate in order to improve their odds of survival. It is for this reason that startups have also successfully managed to extend agile methods throughout the entire service lifecycle — e.g., DevOps — and streamline the process from development all the way through to operations.

Many enterprises have been able to carve out agile practices for the build portion of IT, or even adopt DevOps on a small scale. However, most larger companies have struggled to replicate agility through the entire lifecycle for continuous build, continuous deployment, and continuous delivery. Scaling agility across a bimodal IT organization presents some serious challenges, with significant implications for communication, culture, resources, and distributed teams — but without doing so, enterprises risk being outrun by smaller, nimbler companies.

If large enterprises were able to start from scratch, they would surely build their IT systems in an entirely different way — that’s how much the market has changed. Unfortunately, starting over isn’t an option when you have a business operating at a global, billion-dollar scale. There needs to be a solution that allows these big companies to adapt and transform into agile organizations.

So what’s the solution for these more mature businesses? Ideally, to create space within their infrastructure for software to be continuously built, tested, released, deployed, and delivered. The traditional structure of IT has been mired in ITIL dogma, siloed teams, poor communication, and ineffective collaboration. Enterprises can tackle these problems by constructing modern toolchains that shake things up and introduce the cultural changes needed to bring a DevOps mindset in house.

I like to think of the classic enterprise technology environments as forests. There are certainly upsides to preserving a forest in its entirety. Its bountiful resources — e.g., sophisticated tools and talented workers — offer seemingly endless possibilities for development. Just as the complex canopy of the forest helps shield and protect the life within, the infrastructure maintained by the operations team can help protect the company from instability.

But the very structure that protects the software is also its greatest hindrance. It prevents the company from making the rapid-fire changes necessary to keep up with market trends. The size and scale of the infrastructure, which were once strengths, become enormous obstacles during deployment and delivery. Running at high speed through a forest is a bad idea — you will almost certainly trip over roots, get whacked by branches, and find your progress slowed as you weave through a mix of legacy technology, complex processes, regulatory concerns, compliance overhead, and much more.

By making a clearing in the forest, enterprises can create a realm where it’s possible to run without the constraints of so many trees. This gives them the ability to mimic the key advantage of smaller companies by creating the freedom to quickly build, deploy, and deliver what they want — without the tethers of legacy infrastructure.

For example, I have worked with a multinational retailer that, in addition to operating 7,800 stores across 12 markets, manages 4,500 IT employees around the world — which translates to 7 million emails and 300 phone calls per day from distributed operation centers in nine different countries. The major issue was that notification processes were inconsistent on a global level, and frequently failed to get relevant information to the right people at the right time. This, of course, translated into slower response times to issues affecting their customers.

In order to modernize its IT force, the company reorganized into a service-oriented architecture (SOA), featuring separate service groups that owned the design, development, and operation of their respective systems. This meant many IT staff took on new roles and responsibilities; most had worked on developing systems, but few had worked on supporting them. The company also integrated tools to enable automation and self-service for end users. Today, it has a more consistent and collaborative digital work environment, and the result is greater efficiency, happier customers, and more growth opportunities for the future.

Similarly, I worked with a retail food chain that faced a challenge in improving the communication and collaborative capabilities of its food risk management teams. Prior to IT modernization, in-store staff manually monitored freezer temperatures every four hours, a complex and time-consuming task that was highly prone to human error. If an incident arose, the escalation process couldn’t identify the correct team member to address the temperature issue, so a mass email would be blasted out. There was no way of knowing whether the correct team member had been made aware of the issue and had addressed it.

The company tackled this challenge by creating a more robust process for incident management involving SMS messages to identified staff, emails and phone calls to management, and automated announcements over the in-store system. In addition, they implemented an Internet of Things (IoT) program to completely automate and monitor refrigerator and frozen food temperature management. The result has been significantly increased efficiency, transparency, and accountability — not to mention a safer experience for their customers.
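
As a rough illustration of how such an escalation chain can be expressed in software, the Python sketch below walks a temperature incident through successive notification tiers. The thresholds, timings, and channel names are assumptions made up for the example, not the retailer’s actual configuration, and the notify callback stands in for whatever SMS, telephony, or in-store announcement integration is used.

    import time

    # Hypothetical escalation tiers: each channel fires once the incident has
    # been open (and unacknowledged) for the given number of seconds.
    ESCALATION_POLICY = [
        ("sms_on_call_staff", 0),            # text the identified in-store staff at once
        ("call_store_manager", 15 * 60),     # phone management after 15 minutes
        ("in_store_announcement", 30 * 60),  # automated announcement as a last resort
    ]

    FREEZER_MAX_C = -15.0  # assumed safe threshold, for illustration only

    def handle_reading(freezer_id, temp_c, opened_at, acknowledged, notify):
        """Escalate a temperature incident until someone acknowledges it."""
        if temp_c <= FREEZER_MAX_C or acknowledged:
            return
        elapsed = time.time() - opened_at
        for channel, after_seconds in ESCALATION_POLICY:
            if elapsed >= after_seconds:
                notify(channel, "%s reading %.1f C" % (freezer_id, temp_c))

    # Example: a sensor reading 20 minutes old and still unacknowledged fires
    # the first two tiers but not yet the in-store announcement.
    handle_reading("freezer-07", temp_c=-9.5, opened_at=time.time() - 20 * 60,
                   acknowledged=False, notify=lambda ch, msg: print(ch, msg))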

As you can see, these companies were able to identify target areas and problems, and create new spaces within their existing infrastructures to allow them to communicate better, and ultimately become faster, nimbler, and more responsive. Any enterprise looking to move toward agile software development and operations should look at technology-based projects and initiatives that will be most impactful in enhancing team focus and culture. Before you even start thinking about the problems you want to solve with agile and DevOps, you should identify and initiate the conversations that will provide the starting points for adoption. Without a detailed map of your infrastructure and the activities within it, you cannot clear a path to complete, end-to-end DevOps adoption.


Summertime And Living In The Cloud Is Easy

Welcome to Cloud Strategy’s 2016 Summer issue! We really outdid ourselves this time.

To begin, Allan Leinwand of ServiceNow is here with an in-depth look at cloud architecture for our cover story. But there is more! Kiran Bondalapati from ZeroStack writes about the commoditization of infrastructure; Sumeet Sabharwal of NaviSite writes on the opportunities available to independent software vendors in the cloud; Mark Nunnikhoven of Trend Micro talks about the trend of the everywhere data center and the danger of dismissing the hybrid cloud; Alan Grantham of Forsythe writes about the cloud conversations companies should be having; Peter Matthews of CA Technologies, Anthony Shimmin of AIMES Grid Services, and Balazs Somoskoi of Lufthansa Systems share their tips for selecting the right cloud services provider; Adam Stern, founder and CEO of Infinitely Virtual, writes about the importance of cloud storage speed; Shea Long of TierPoint tackles the hot topic of DRaaS; and Steve Hebert, CEO of Nimbix, writes on the challenges CIOs face in balancing public, private, and hybrid clouds.

In addition, we have a case study from Masergy on its successful implementation of a high-speed network to support Big Data analytics.

Another great issue, if we say so ourselves.


Hyper-scale data center eliminates IT risk and uncertainty

In June 2016, CyrusOne completed the Sterling II data center at its Northern Virginia campus. A custom facility featuring 220,000 sq ft of space and 30 MW of power, Sterling II was built from the ground up and completed in only six months, shattering all previous data center construction records.

The Sterling II facility represents a new standard in the building of enterprise-level data centers, and confirms that CyrusOne can use the streamlined engineering elements and methods used to build Sterling II to build customized, quality data centers anywhere in the continental United States, with a similarly rapid time to completion.

CyrusOne’s quick-delivery data center product provides a solution for cloud technology, social media, and enterprise companies that have trouble building or obtaining data center capacity fast enough to support their information technology (IT) infrastructure. In trying to keep pace with overwhelming business growth, these companies often find it hard to predict their future capacity needs. A delay in obtaining data center space can also delay or stop a company’s revenue-generating initiatives, and have significant negative impact on the bottom line.

The record completion time of the Sterling II facility was the result of numerous data center construction principles developed by CyrusOne, including:

  • standardized data center design techniques that enable CyrusOne and its build partners to customize the facility to optimize space, power, and cooling according to customer needs;
  • effective project management in all phases of design and construction, thanks to CyrusOne’s established partnerships with data center architects, engineers, and contractors;
  • advanced supply-chain techniques that enable CyrusOne to manufacture or pre-fabricate data center components and equipment without disrupting work at the construction site; and
  • the use of Massively Modular® electrical units and chillers to enable rapid deployment of power and cooling at the facility according to customers’ IT capacity needs.

Introduction

In late December 2015, CyrusOne broke ground on the Sterling II data center, the second facility at its Northern Virginia campus. Built for specific customers, the Sterling II facility is a 220,000-sq-ft data center with 30 MW of critical power capacity. The facility was completed and commissioned in mid-June 2016. Its under-six-month construction time frame is the shortest known time to completion ever achieved by CyrusOne for an enterprise-scale data center of its size. The 180-day build time shattered all known industry construction records.

CyrusOne had previously set another industry record by delivering a 120,000-sq-ft, 6-MW facility in Phoenix, Arizona, in 107 days, or just over three months. The Sterling II facility is almost twice the size of the Phoenix facility, offers five times the power capacity, and took less than twice as long to deliver. Its record time to market represents a new industry standard in the construction and deployment of built-to-suit enterprise data centers.

The Challenge

Many large-scale cloud, internet, social media and enterprise companies are growing at an unprecedented and unpredictable rate, with their IT footprints often doubling or tripling in size in just a few years. But rapid growth makes it harder for these companies to predict or plan for future IT infrastructure expansion.

“When enterprises determine how much IT capacity they will require to handle future business growth, it often turns out that they needed it ‘yesterday,’” explains John Hatem, CyrusOne’s executive vice president of data center design, construction, and operations. “But they can’t build new data centers or buy colocation space fast enough to meet their skyrocketing IT infrastructure demands. In addition, the quest to build or obtain new data center space is a distraction from the company’s core business, whether that’s software development, cloud technology, social media, or other business applications.”

The Solution

CyrusOne Solutions™ build-to-suit IT deployments can deliver a completed, high-quality data center product, often in the same amount of time it takes enterprises to order and receive the computing equipment that will operate inside the facility. This rapid time to delivery helps relieve the customer’s risk of not having adequate IT capacity to support their key business growth, or the infrastructure demands of new initiatives. Significantly, CyrusOne is typically able to deliver this data center product with lower construction, engineering and operational costs to the customer.

The Sterling II and Phoenix enterprise data centers were completed in record time thanks to CyrusOne Solutions’ streamlined construction and IT deployment approach, which includes:

  • CyrusOne’s signature Massively Modular engineering disciplines, which employ standardized data center design using pre-fabricated components and template construction techniques.
  • Effective project management by the CyrusOne Solutions team through productive and collaborative relationships with experienced data center architects, engineers and contractors involved in the project.
  • Advanced supply-chain techniques that enable CyrusOne to manufacture or pre-fabricate data center components with time-saving efficiency.
  • CyrusOne’s Massively Modular approach, which uses modular electrical units and chillers to provide flexible power and cooling deployments for the facility.

Massively Modular Construction 

“We think of building our data centers as a manufacturing process, not a construction process,” Hatem says. “We deliver the same high-quality product to all of our customers, which is a reliable data center with space, power and cooling. Using a standardized data center design and components enables us to deploy a similar product anywhere in the continental United States, with the fastest time to market available.”

Through its Massively Modular construction/engineering methods, CyrusOne builds data centers in standardized building blocks with 60,000 sq ft of infrastructure and 4.5 MW of power. For customized data center projects, CyrusOne builds as many blocks as the customer requires. The Phoenix data center consists of two building blocks, while the Sterling II data center consists of five building blocks (with additional power capacity added). Using this standardized layout as a basis, CyrusOne can then customize the design of a built-to-suit data center to optimize space, power and cooling according to the individual customer’s IT needs.

Effective Project Management through Industry Partnerships

To build the Sterling II facility, CyrusOne Solutions put together a project-management team that included outside architects, engineers, and contractors who had worked with CyrusOne on previous data center builds. By working with these industry experts, CyrusOne was able to plan and execute the Sterling II project so the facility could be built in a very short time.

“I can’t say enough about the entire team that worked on the project,” says Laramie Dorris, CyrusOne’s vice president of design and construction. “That includes the architect and engineering team, general contractors, third-party consultants, structural and civil engineers, and local contractors in Northern Virginia, who all pulled together to manage and execute this project. A project like this runs 24/7 for the entire duration, and it was incredible to watch everyone working together in a collaborative, cohesive effort to meet the project requirements and finish the facility within the established six-month time frame.”

Corgan, a Dallas firm, is the architect of record for the Sterling II facility. According to Mike Connell, who served as Corgan’s project manager on Sterling II, “One reason for CyrusOne’s success is they don’t try to micromanage a data center project from the top down. Instead, they hire the right people, build the right teams and empower project managers to make important decisions based on their roles. It makes their construction projects run more smoothly and efficiently.

“For Sterling II, CyrusOne provided Corgan with the basis of design, a budget and a time frame for building the data center, and let our engineers take care of the rest. We were able to give them several design options and tell them the impact on construction, schedule and cost for each option. The confidence that CyrusOne showed in our engineers enabled them to use their creativity to meet the challenge and solve the problems of building a facility in just six months. Our engineers are able to work smarter and harder when they aren’t being overly managed by the client.”

Advanced Supply-Chain Techniques

“In Northern Virginia, CyrusOne made an educated decision to go with an all-precast structural concrete building with modular power and cooling units,” Dorris explains. “This enabled us to set up advanced supply-chain operations to manufacture or pre-fabricate the components we needed for the data center, which gave us significant savings in time and costs.

“For example, a normal data center building has tilt-up concrete walls, which are cast on-site at the construction site. But for the Sterling II data center, we set up a separate off-site facility where we could cast pre-fabricated concrete wall panels. We then brought those panels to the construction site on trucks and used them to set up the data center building. It saved time because we didn’t have to stop work at the building site while the concrete walls were being cast.

“Also, we decided to use pre-fabricated concrete supports in the data center building, which we could also cast off-site. This saved additional time and money because we didn’t have to buy a reinforced steel framework for the building or wait for it to be delivered to us. Using pre-cast concrete walls and supports shaved a couple of months off our time to market for Sterling II.”

Modular Power and Cooling

“To provide power and cooling to the Sterling II facility, we used CyrusOne’s Massively Modular engineering approach,” Dorris says. “We set up another off-site facility where we could assemble modular power units. Each unit included an uninterruptible power supply (UPS), a backup generator, and a utility transformer, all housed in weatherproof containers. We brought the modular units to the Sterling II site and set them up in ‘lineups’ outside the facility. Using modular power units speeds up construction, saves money and reduces the building’s footprint because we don’t have to build additional rooms inside the data center to house power equipment. Also, we used modular cooling units from Stulz at the Sterling II facility, which saved us from having to build a large centrifugal cooling plant on-site.

“The Massively Modular approach provides flexible power and cooling options for Sterling II. If our customer needs to change their IT deployment within the facility, we can bring in additional power units and chillers, and increase power density and cooling with no negative impact or downtime on their current environment. The modular cooling units help lower operating costs because they’re cheaper to operate and maintain than a regular on-site cooling plant. Also, the Massively Modular approach provides redundancy. If a power or cooling unit breaks down, the others will take up the slack until the broken unit can be repaired or replaced.”

Conclusion

CyrusOne Solutions’ built-to-suit data center product is the best solution for cloud, internet, or enterprise customers who need quality data center facilities built in the shortest time possible. The standardized construction approach is a repeatable process employable in multiple locations to ensure rapid speed to market for data center projects, with significant cost savings for customers.

By delivering data centers like the Sterling II and Phoenix facilities in record times, CyrusOne is continuously setting the bar higher for the data center industry. Additionally, CyrusOne is helping ensure its customers are able to scale at hyper-speed to meet their data center capacity needs by removing the risks of running out of space or power.

“CyrusOne has a culture of dedication to client service that starts with their executives and permeates throughout their company,” Connell adds. “When a customer asks them to do something, instead of saying no, they try to figure out ways to make it happen.”

*This case study first appeared on the CyrusOne website.


With vCloud Air sale, VMware clears cloud computing path

With the sale of its long-languishing vCloud Air offering this week, VMware found a way to step away from a product whose future had been uncertain for quite some time.

The company sold its vCloud Air business to OVH, Europe’s largest cloud provider, for an undisclosed sum, handing off its vCloud Air operations, sales team and data centers to add to OVH’s existing cloud services business.

But VMware isn’t exactly washing its hands of the product. The company will continue to direct research and development for vCloud Air, supplying the technology to OVH – meaning VMware still wants to control the technical direction of the product. It also will assist OVH with various go-to-market strategies, and jointly support VMware users as they transfer their cloud operations to OVH’s 20 data centers spread across 17 countries.

The sale of vCloud Air should lift the last veil of mist that has shrouded VMware’s cloud computing strategy for years. VMware first talked about its vCloud initiative in 2008, and six years later re-launched the product as vCloud Air, a hybrid IaaS offering for its vSphere users. It never gained measurable traction among IT shops, getting swallowed up by a number of competitors, most notably AWS and Microsoft.

The company quickly narrowed its early ambitions for vCloud Air to a few specific areas, such as disaster recovery, acknowledged Raghu Raghuram, VMware’s chief operating officer for products and cloud services, in a conference call to discuss the deal.

Further obscuring VMware’s cloud strategy was EMC’s $1.2 billion purchase of Virtustream in 2015, a company whose offering had every appearance of being a competitor to vCloud Air. This froze the purchasing decisions of would-be vCloud Air buyers, who waited to see how EMC and VMware would position the two offerings.

Even a proposed joint venture between VMware and EMC, called the Virtustream Cloud Services Business, an attempt to deliver a more cohesive technical strategy, collapsed when VMware pulled out of the deal. Dell’s acquisition of EMC, and by extension VMware, didn’t do much to clarify what direction the company’s cloud computing strategy would take.

But last year VMware recognized the level of competition it was up against and made peace with the cloud giant AWS, signing a deal that makes it easier for corporate shops to run VMware both on their own servers and on servers in AWS’ public cloud. Announced last October and due in mid-2017, the upcoming product, VMware Cloud on AWS, will let users run applications across vSphere-based private, hybrid and public clouds.

With the sale of vCloud Air, the company removes another distraction for both itself and its customers. Perhaps now the company can focus fully on its ambitious cross-cloud architecture, announced at VMworld last August, which promises to help users manage and connect applications across multiple clouds. VMware delivered those offerings late last year, but the products haven’t created much buzz since.

VMware officials, of course, don’t see the sale as the removal of an obstacle, but rather “the next step in vCloud Air’s evolution,” according to CEO Pat Gelsinger, in a prepared statement. He added the deal is a “win” for users because it presents them with greater choice — meaning they can now choose to migrate to OVH’s data centers, which both companies claim can deliver better performance.

Hmm, well that’s an interesting spin. But time will tell if this optimism has any basis in reality.

After the sale is completed, which should be sometime this quarter, OVH will run the service under the name vCloud Air Powered by OVH. Whether it is wise to keep the vCloud brand, given the product’s less-than-stellar track record, again remains to be seen.

Ed Scannell is a senior executive editor with TechTarget. Contact him at escannell@techtarget.com.



Awareness of shared-responsibility model is critical to cloud success

When companies move to the cloud, it’s paramount that they know where the provider’s security role ends and where the customer’s begins.

The shared-responsibility model is one of the fundamental underpinnings of a successful public cloud deployment. It requires vigilance by the cloud provider and customer—but in different ways. Amazon Web Services (AWS), which developed the philosophy as it ushered in public cloud, describes it succinctly as knowing the difference between security in the cloud versus the security of the cloud.

And that model, which can be radically different from how organizations are used to securing their own data centers, often creates a disconnect for newer cloud customers.

“Many organizations are not asking the right question,” said Ananda Rajagopal, vice president of products at Gigamon, a network-monitoring company based in Santa Clara, Calif. “The right question is not, ‘Is the cloud secure?’ It’s, ‘Is the cloud being used securely?’”

And that’s a change from how enterprises are used to operating behind the firewall, said Abhi Dugar, research director at IDC. The security of the cloud refers to all the underlying hardware and software:

  • compute, storage and networking
  • AWS global infrastructure

That leaves everything else—including the configuration of those foundational services—in the hands of the customer:

  • customer data
  • apps and identity and access management
  • operating system patches
  • network and firewall configuration
  • data and network encryption

Public cloud vendors and third-party vendors offer services to assist in these areas, but it’s ultimately up to the customers to set policies and track things.
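
To make the customer-side half of that list concrete, here is a minimal sketch of the kind of configuration check that remains the customer’s job on AWS: auditing security groups for firewall rules left open to the internet, using boto3. The region and the list of “sensitive” ports are assumptions for the example, and a real audit program would cover far more than this.

    import boto3

    # Security-group (firewall) configuration is the customer's responsibility
    # under the shared-responsibility model; AWS secures the layers beneath it.
    ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption
    SENSITIVE_PORTS = {22, 3389}  # SSH and RDP, for illustration

    for sg in ec2.describe_security_groups()["SecurityGroups"]:
        for rule in sg.get("IpPermissions", []):
            open_to_world = any(r.get("CidrIp") == "0.0.0.0/0"
                                for r in rule.get("IpRanges", []))
            if open_to_world and rule.get("FromPort") in SENSITIVE_PORTS:
                print("%s exposes port %s to the internet"
                      % (sg["GroupId"], rule["FromPort"]))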

The result is a balancing act, said Jason Cradit, senior director of technology at TRC Companies, an engineering and consulting firm for the oil and gas industry. TRC, which uses AWS as its primary public cloud provider, turns to companies like Sumo Logic and Trend Micro to help segregate duties and fill the gaps. And it also does its part to ensure it and its partners are operating securely.

“Even though it’s a shared responsibility, I still feel like with all my workloads I have to be aware and checking [that they] do their part, which I’m sure they are,” Cradit said. “If we’re going to put our critical infrastructure out there, we have to live up to standards on our side as much as we can.”

Trevor Jones is a news writer with SearchCloudComputing and SearchAWS. Contact him at tjones@techtarget.com.



Plan to kill Cisco public cloud highlights the investment needed to compete

The graveyard of public clouds is littered with traditional IT vendors, and it’s about to get a bit more crowded.

Cisco has confirmed a report by The Register that it will shut down its Cisco Intercloud Services public cloud early next year. The company rolled out Intercloud in 2014 with plans to spend $1 billion to create a global interconnection among data center nodes targeted at IoT and software as a service offerings.

The networking giant never hitched its strategy to being a pure infrastructure as a service provider, instead focusing on a hybrid model based on its Intercloud Fabric. The goal was to connect to other cloud providers, both public and private. Those disparate environments could then be coupled with its soon-to-be shuttered OpenStack-based public cloud, which includes a collection of compute, storage and networking.

“The end of Cisco’s Intercloud public cloud is no surprise,” said Dave Bartoletti, principal analyst at Forrester. “We’re long past the time when any vendor can construct a public cloud from some key technology bits, some infrastructure, and a whole mess of partners.”

Cisco will help customers migrate existing workloads off the platform. In a statement, the company indicated it expects no “material customer issues as a result of the transition” – a possible indication of the limited customer base using the service. Cisco pledged to continue to act as a connector for hybrid environments despite the dissolution of Intercloud Services.

Cisco is hardly the first big-name vendor to enter this space with a bang and exit with a whimper. AT&T, Dell, HPE — twice — and Verizon all planned to be major players only to later back out. Companies such as Rackspace and VMware still operate public clouds but have deemphasized those services and reconfigured their cloud strategy around partnerships with market leaders.

Of course, legacy vendors are not inherently denied success in the public cloud, though clearly the transition to an on-demand model involves some growing pains. Microsoft Azure is the closest rival to Amazon Web Services (AWS) after some early struggles. IBM hasn’t found the success it likely expected when it bought bare metal provider SoftLayer, but it now has some buzz around Watson and some of its higher-level services. Even Oracle, which famously derided cloud years ago, is seen as a dark horse by some after it spent years on a rebuilt public cloud.

To compete in the public cloud means a massive commitment to resources. AWS, which essentially created the notion of public cloud infrastructure a decade ago and still holds a sizable lead over its nearest competitors, says it adds enough server capacity every day to accommodate the entire Amazon.com data center demand from 2005. Google says it spent $27 billion over the past three years to build Google Cloud Platform — and is still seen as a distant third in the market.

Public cloud also has become much more than just commodity VMs. Providers continue to extend infrastructure and development tools. AWS alone has 92 unique services for customers.

“We don’t expect any new global public clouds to emerge anytime soon,” Bartoletti said. “The barriers to entry are way too high.”

Intercloud won’t be alone in its public flogging on the way to the scrap heap, but high-profile public cloud obits will become fewer and farther between in 2017 and beyond — simply because there’s no room left to try and fail.

Trevor Jones is a news writer with SearchCloudComputing and SearchAWS. Contact him at tjones@techtarget.com.



Google cloud consulting service a two-way street

Google received plenty of attention when it reshuffled its various cloud services under one business-friendly umbrella, but tucked within that news was a move that also could pay big dividends down the road.

The rebranded Google Cloud pulls together various business units, including Google Cloud Platform (GCP), the renamed G Suite set of apps, machine learning tools and APIs and any Google devices that connect to the cloud. Google also launched a consulting services program called Customer Reliability Engineering, which may have an outsized impact compared to the relatively few customers that will ever get to participate in it.

Customer Reliability Engineering isn’t a typical professional services contract in which a vendor guides its customer through the various IT operations processes for a fee, nor is it aimed at partnering with a forward-leaning company to develop new features. Instead, this is focused squarely on ensuring reliability — and perhaps most notably, there’s no charge for participating.

The reliability focus is not on the platform, per se, but rather the customers’ applications that are run on the platform. It’s a response to uncertainty about how those applications will behave in these new environments, and the fact that IT operations teams are no longer in the war room making decisions when things go awry.

“It’s easy to feel at 3 in the morning that the platform you’re running on doesn’t care as much as you do because you’re one of some larger number,” said Dave Rensin, director of the Customer Reliability Engineering initiative.

Here’s the idea behind the CRE program: a team of Google engineers shares responsibility for the uptime and health of a customer’s system, including its service level objectives, monitoring and paging. They inspect all elements of an application to identify gaps and determine the best ways to move from four nines of availability to five or six.
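
For a sense of what moving between those availability targets means in practice, the short calculation below converts each SLO level into a monthly error budget, the allowed downtime that a CRE-style engagement tracks. The 30-day month is a simplifying assumption.

    # Each additional "nine" shrinks the monthly error budget by a factor of ten.
    MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes, assuming a 30-day month

    for nines in (3, 4, 5, 6):
        slo = 1 - 10 ** -nines                        # e.g., 4 nines -> 99.99%
        budget_minutes = MINUTES_PER_MONTH * (1 - slo)
        print("%.4f%% SLO -> %.3f minutes of downtime budget per month"
              % (slo * 100, budget_minutes))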

There are a couple ways Google hopes to reap rewards from this new program. While some customers come to Google just to solve a technical problem such as big data analytics, this program could prove tantalizing for another type of user Rensin describes as looking to “buy a little bit of Google’s operational culture and sprinkle it into some corners of their business.”

Of course, Google’s role here clearly isn’t altruistic. One successful deployment likely begets another, and that spreads to other IT shops as they learn what some of their peers are doing on GCP.

It also doesn’t do either side any favors when resources aren’t properly utilized and a new customer walks away dissatisfied. It’s in Google’s interest to make sure customers get the most out of the platform and to be a partner rather than a disinterested supplier that’s just offering up a bucket of different bits, said Dave Bartoletti, principal analyst with Forrester Research.

“It’s clear people have this idea about the large public cloud providers that they just want to sell you crap and they don’t care how you use it, that they just want you to buy as much as possible — and that’s not true,” Bartoletti said.

Rensin also was quick to note that “zero additional dollars” is not the same as “free” — CRE will cost users effort and organizational capital to change procedures and culture. Google also has instituted policies for participation that require the system to pass an inspection process and not routinely blow its error budget, while the customer must actively participate in reviews and postmortems.

You scratch my back, I’ll scratch yours

Customer Reliability Engineering also comes back to the question of whether Google is ready to handle enterprise demands. It’s one of the biggest knocks against Google as it attempts to catch Amazon and Microsoft in the market, and an image the company has fought hard to reverse under the leadership of Diane Greene. So not only does this program aim to bring a little Google operations to customers, it also aims to bring some of that enterprise know-how back inside the GCP team.

It’s not easy to shift from building tools that focus on consumer life to a business-oriented approach, and this is another sign of how Greene is guiding the company to respond to that challenge, said Sid Nag, research director at Gartner.

“They’re getting a more hardened enterprise perspective,” he said.

There’s also a limit to how many users can participate in the CRE program. Google isn’t saying exactly what that cap is, but it does expect demand to exceed supply — only so many engineers will be dedicated to a program without direct correlation to generating revenues.

Still, participation won’t be selected purely by which customer has the biggest bill. Those decisions will be made by the business side of the GCP team, but with a willingness to partner with teams doing interesting things, Rensin said. To that end, it’s perhaps telling that the first customer wasn’t a well-established Fortune 500 company, but rather Niantic, a gaming company behind the popular Pokémon Go mobile game.

Trevor Jones is a news writer with TechTarget’s Data Center and Virtualization Media Group. Contact him at tjones@techtarget.com.



Google’s Stackdriver taps into growing multicloud trend

A clear trend has emerged around public cloud adoption in the enterprise: organizations increasingly employ a mix of different cloud services, rather than go all in with one. As that movement continues, cloud providers who support integration with platforms outside their own – and especially with public cloud titan Amazon Web Services – have the most to gain.

Google seems to have that very thought in mind with the recent rollout of its Stackdriver monitoring tool.

Stackdriver, originally built for Amazon Web Services (AWS) but bought by Google in 2014, became generally available this month, providing monitoring, alerting and a number of other capabilities for Google Cloud Platform. Most notably, though, it hasn’t shaken its AWS cloud roots.

Google’s continued support for AWS shouldn’t come as a big surprise for legacy Stackdriver users, said Dan Belcher, product manager for Google Cloud Platform and co-founder of Stackdriver. His team has attempted for the past two years to assuage any customer concerns about AWS support falling by the wayside.

“[Customers were] looking for assurances that, at the time, we were going to continue to invest in support for Amazon Web Services,” Belcher said. “And I think we have addressed those in many ways.”

Mark Annati, VP of IT at Extreme Reach, an advertising firm in Needham, Mass., has used Stackdriver since 2013 and still relies on the tool to monitor his company’s cloud deployment, which spans Google, AWS and Azure. He said his company is still evaluating the full impact of Stackdriver’s migration onto Google’s internal infrastructure, but so far it appears to be business as usual.

And, considering his need for AWS monitoring support, that’s a relief.

“I have had no indication from Stackdriver that they would stop monitoring AWS,” Annati said. “If they did, that would cause us significant pain.”

There are a few changes, however, for legacy Stackdriver users post-acquisition. Now that Stackdriver is hosted on Google’s own infrastructure, for example, users need a Google cloud account to access the tool, and to manage user access and billing. In addition, a few features that existed in the tool pre-acquisition — such as chart annotations, on-premises server monitoring and integration with AWS CloudTrail — are unsupported, at least for now, as part of the migration to Google.

Stackdriver pricing options are slightly different, depending on whether you use the tool exclusively for Google, or for both Google and AWS. All Google Cloud Platform (GCP) users, for example, have access to a free Basic tier and a Premium tier, while users who require the AWS integration only have access to the Premium tier. That higher-level tier costs $8 per monitored cloud resource per month and, in addition to the AWS support, offers more advanced monitoring, as well as a larger allotment for log data.
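
As a back-of-the-envelope illustration of that Premium-tier price, the one-liner below estimates a monthly Stackdriver bill from a resource count. It ignores log-data overages and any pricing changes since this was written, and the resource count is a made-up example.

    PREMIUM_PER_RESOURCE_USD = 8  # per monitored cloud resource per month, per the article

    def stackdriver_premium_estimate(monitored_resources):
        """Rough monthly Premium-tier bill, ignoring log-volume overages."""
        return PREMIUM_PER_RESOURCE_USD * monitored_resources

    print(stackdriver_premium_estimate(250))  # e.g., 250 monitored resources -> $2,000/month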

In general, since the Google acquisition, Stackdriver’s feature set has expanded beyond the tool’s traditional monitoring features, such as alerts and dashboards, to now offer logging, error reporting and debugging tools to both AWS and Google users, Belcher said.

“As an AWS-only customer, your experience using Stackdriver is just as good,” he said.

Moving to a multicloud world

This cross-platform support – particularly for market leader AWS, whose public cloud revenue climbed 55% year over year in the third quarter, totaling over $3 billion — is going to become table stakes for cloud providers, explained Dave Bartoletti, principal analyst at Forrester Research.

“When you are offering a tool that is great for your platform, you’d better support AWS,” Bartoletti said. “What Google recognizes is that it would be stupid to say, ‘We’re going to release a management tool that is only good for our platform.’”

Google stands to gain from this AWS integration in other ways, too. For example, Stackdriver may eventually prompt more AWS users to evaluate Google’s homegrown data analytics tools, such as BigQuery, as a supplement to Stackdriver itself, Bartoletti said.

“It lets Google show off what else it has to offer,” he said.

While he didn’t offer any specifics, Belcher said Google will consider broadening Stackdriver to support other cloud platforms, such as Azure, and potentially on-premises deployments as well.

“There are more than enough customers on AWS and GCP that are running in some hybrid mode with some unsupported platform, so you can imagine we get requests every day to extend the support,” he said.

Annati, for one, would welcome the move.

“It would be great if Stackdriver covered it all,” he said. “That would be an easy decision for us.”



Three IT nightmares that haunted cloud admins in 2016

The cloud doesn’t hand enterprise IT teams treats all the time; in fact, it occasionally throws out a few tricks. While there are many benefits to cloud, sometimes a cloud deployment can go terribly awry, prompting real-life IT nightmares — ranging from spooky security breaches to pesky platform as a service implementations.

We asked the SearchCloudComputing Advisory Board to share the biggest cloud-related IT nightmares they faced, or saw others face, so far in 2016. Here’s a look at their tales of terror:

Bill Wilder

Halloween nightmares came ten days early this year for DNS provider Dyn, as it was hit with a massive DDoS attack. The Internet simply can’t function without reliable DNS, and most cloud applications and services outsource that to companies like Dyn. Among the parties impacted by the attack on Dyn is a “who’s who” of consumer sites, such as Twitter, Spotify and Netflix, and developer-focused cloud services, such as Amazon Web Services, Heroku and Github. This news comes about a month after security researcher and journalist Brian Krebs had his own web site attacked by one of the largest ever DDoS attacks, reportedly reaching staggering levels exceeding a half terabit of data per second.

Both attacks appear to have been powered by bot armies with significant firepower from unwitting internet-connected internet of things (IoT) devices. This is truly frightening, considering that there are billions of IoT devices in the wild already, from video cameras, DVRs and door locks to refrigerators and Barbie dolls. Since internet-exposed IoT devices are easily found through specialized search engines, and IoT device exploit code is readily available for download, we can be sure of one thing: we are only seeing the early wave of this new brand of DDoS attack.

Gaurav “GP” Pal

My biggest cloud computing nightmare was the first-hand experience of implementing a custom platform as a service (PaaS) on an infrastructure as a service (IaaS) platform. Many large organizations are pushing the innovation envelope in search of cloud nirvana, including hyper-automation, cloud-platform independence and container everything. Sounds great! But with the lines between IaaS, managed IaaS and PaaS constantly blurring, the path to nirvana is not a straight one. It took way longer to create the plumbing than anticipated, the platform was unable to pass security audits and getting the operational hygiene in place was challenging.

Adding to the cup of woes is the lack of qualified talent that truly has experience with custom PaaS, given that it has been around only for a short period of time. On top of that you have a constantly changing technology foundation on the container orchestration side. All of this made for a ghoulish mix. Only time will tell whether a custom PaaS on an IaaS platform is a trick or a treat.

Alex Witherspoon

The trend I keep seeing repeat is off-base cost expectations and the risk of operating non-cloud-architected applications in a private or public cloud environment that is not ideal for them.

Cloud environments should essentially be the automated abstraction and utilization of physical resources. Public cloud charges you for that value on top of the physical servers the cloud runs on, without giving you input into the buying decisions. For some businesses, the public cloud of choice and its cost model align well, and the tradeoffs work out. For many others, Dropbox being a public example, public cloud quickly reaches an inflection point where it turns from a savings into an operational cost that only grows with the business, never providing the stable, controlled operational expenditure (OPEX) or capital expenditure (CAPEX) that a private cloud can. Given modern financial mechanisms that convert CAPEX investments in private clouds into flexible OPEX arrangements, the financial models for private cloud are often more economically feasible, at the expense of some additional complexity in managing the private cloud. Often, though, that complexity is justified by the control one gains in shaping the private cloud’s architecture to align with the business’s needs, both technologically and economically.

These optimizations can be numerous, one of them being support for non-cloud-architected applications. This is important to consider because not all clouds are built alike: public cloud providers like AWS, Azure and Google suggest that the minimum viable architecture is a widely distributed application that can survive random outages at any single node. Many modern applications do provide for that, but the majority of software in use today still operates on the expectation that the infrastructure underneath it will be 100% reliable, and such applications can be dangerous in a public or private cloud environment that isn’t designed to give them that level of availability.

To this end, it’s critically important to consider the risks, and the return on investment (ROI) picture, throughout the lifecycle of the service. Clouds of all types carry diverse ROI profiles, and being able to quantify how each offering fits the business’s needs can avert technological and economic disaster for your business.
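
One way to quantify that ROI picture is a simple cumulative-cost comparison between a public cloud bill and a private cloud build-out. The figures in the sketch below are entirely hypothetical, and a real model would add discounting, demand growth, hardware refresh cycles, and staffing, but it shows the inflection-point dynamic described above.

    def breakeven_month(public_monthly, private_capex, private_monthly, horizon_months=60):
        """First month when cumulative private-cloud spend drops below public-cloud spend."""
        public_total = 0.0
        private_total = float(private_capex)  # private cloud pays its CAPEX up front
        for month in range(1, horizon_months + 1):
            public_total += public_monthly
            private_total += private_monthly
            if private_total < public_total:
                return month
        return None  # public cloud stays cheaper over the horizon considered

    # Hypothetical numbers: an $80k/month public cloud bill versus a $1.5M
    # private build-out that costs $30k/month to run -> crossover around month 31.
    print(breakeven_month(80_000, 1_500_000, 30_000))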

