5 Ways Data Centers Must Adapt To Support IoT

Recently, the Internet of Things (IoT) has been a hot topic, and it’s easy to see why. Did you know that there are more smart devices and electronic gadgets on earth than people? According to IDC, the digital universe will reach 44 trillion gigabytes of data by 2020, fed by a variety of “things” such as medical implants, wearable technology, and even vending machines. Today, according to CloudTweaks, more than 2.5 billion gigabytes of data are generated every day.

If that doesn’t make you take notice, Cisco says that IoT will reach an economic value of $14.4 trillion by 2022. To take advantage of this growth, many companies will jump on the IoT bandwagon, creating software-defined IoT widgets and the supporting management applications and making them available around the globe. This IoT explosion strains data center capacity and accessibility for many companies, requiring data center service providers to support ever-increasing data demands.

To understand how the IoT paradigm will affect technologies, we first must have an appreciation for what IoT is. The fundamental concept behind IoT is a network of physical devices (or things) embedded with technology that gives them the ability to sense or measure their environment, and then to store and/or programmatically send that data through network connectivity. Once the data is sent or stored, programmatic actions can be taken on it. Applications of IoT include smart homes, wearable technology, parking meters, equipment sensors, and refrigerators, to name a few. IoT taps into data that allows us to make smarter decisions, quicker.

IoT brings with it a significantly higher demand for storing and processing data, and requires smarter systems and data center infrastructure tailored to handle the increase. At the current rate of IoT growth, now is the time to plan for a scalable data future.

With the influx of data expected in the coming years, IT and tech decision makers need to keep their data operations top of mind, and data centers need to prepare themselves for increased scale, density, and security. It won’t be just one aspect of the infrastructure that needs to be augmented to support IoT; it will impact the whole technology stack, including the networks, facilities, cabinets, technology platforms, and system administration. Companies and data centers are already starting to see the effects of IoT and must ensure they are capable of handling future data requirements.

According to Dr. Deepak Kumar, CTO at Adaptiva, “In the coming decade, the IoT will cause the bandwidth gap to balloon out of control. Enterprises will see enormous amounts of traffic coming from a massive number of sources. In addition to greater bandwidth, enterprises must plan for bandwidth optimization and enforce stricter traffic management policies. IT departments will need to ensure they have mechanisms that prioritize internet and intranet access to business-critical applications and devices first.”

In order to prepare for the influx of data, data centers must enhance their current capabilities in infrastructure, scalability, services, storage, and security. IoT producers will be looking for data center-as-a-service providers that understand IoT and are making plans to support it.

SCALABILITY

Data centers will have to be flexible to meet the growing and changing needs of IoT devices and demands with limited to no impact on the customer. Not only will we see an increase in products, but we can also expect devices to change, be updated, or even be replaced, much as Apple comes out with newer models each year.

The impact on data center scalability is one reason why an outsourced model is a smart decision. It is nearly impossible to adequately plan for what the next several years hold without risk of under-building or over-building data centers — both of which carry huge costs and downsides.

THINGS IN THE DATA CENTER

IoT transformation isn’t just for new consumer devices. Data centers themselves are also embracing IoT to gain insights into their own infrastructure and operations. The following enhancements are helping to make data centers the most sophisticated and secure places for businesses to host their data.

  • Real-time asset management with RFID – Radio frequency identification (RFID) tags can be added to equipment or devices inside a data center. RFID tags use an electromagnetic field to uniquely identify devices, allowing data centers to operate more efficiently with less manpower. Without RFID, employees would have to manually check each piece of equipment to maintain inventories; with RFID, those checks are automated.

  • Environmental sensors – Data centers are now being equipped with sensors that monitor a wide variety of environmental factors. The data is captured and sent to a system that can then adjust the climate of the data center, which is important when weather conditions or compute demands fluctuate (a minimal sketch of this pattern follows this list).

  • Infrastructure sensors – Infrared scanning is used to see what the naked eye cannot. While current technologies require a human to scan and assess the circuitry, it wouldn’t be a stretch to envision data centers having smart infrared IoT scanners that monitor cables and electrical circuits for anomalies in real time and either suggest corrective action or instantly resolve issues.

  • Biometric scanners – Biometric scanners allow data centers to ensure that only people who have clearance are able to enter. These devices also make it easy to automatically track every person who enters and exits the data center.

  • Network enhancements – IoT is heavily dependent on having reliable networks in place to support the data produced by IoT devices. Many companies are looking for direct connect solutions between data centers and also channeling their cloud traffic through dedicated secure services.
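
The environmental-sensor item above is, at its core, a telemetry-and-threshold control loop: read the sensors, compare the readings against a setpoint, and nudge the cooling plant when they drift. The Python sketch below illustrates only that pattern; the sensor and cooling functions are hypothetical stand-ins for whatever DCIM or building-management API a facility actually uses.

```python
# Minimal sketch of a sensor-driven cooling adjustment loop. The functions
# read_sensor() and adjust_cooling() are hypothetical stand-ins for a real
# DCIM or building-management API.
import time

TEMP_SETPOINT_C = 24.0    # assumed cold-aisle target temperature
TEMP_TOLERANCE_C = 1.5    # allowed drift before acting

def read_sensor(sensor_id):
    """Return the current temperature for one rack-level sensor (stub)."""
    raise NotImplementedError("replace with the monitoring system's API")

def adjust_cooling(zone, delta_c):
    """Ask the cooling plant to shift a zone's setpoint (stub)."""
    raise NotImplementedError("replace with the BMS/DCIM API")

def control_loop(sensors_by_zone, interval_s=60):
    """Poll every sensor in every zone and react to sustained drift."""
    while True:
        for zone, sensors in sensors_by_zone.items():
            readings = [read_sensor(s) for s in sensors]
            drift = sum(readings) / len(readings) - TEMP_SETPOINT_C
            if abs(drift) > TEMP_TOLERANCE_C:
                # Nudge cooling in the opposite direction of the drift.
                adjust_cooling(zone, -drift)
        time.sleep(interval_s)
```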

DATA CENTER-AS-A-SERVICE

With the recent onslaught of data and the need for flexibility around scale and changing requirements, an increasing number of companies are looking to Data Center-as-a-Service (DCaaS) to meet their needs.

For those who are evaluating their options, here are a few points to consider:

  • DCaaS is a good solution for companies who aren’t exactly sure what option is right for their business, or what data center size will be best five years from now.

  • DCaaS gives companies the ability to focus on their core business competency, saving cash for building their business.

  • Running a data center requires an investment not only in the facility, but also in the people, the process, and the equipment. By working with a data center colocation provider, you can add scale as needed and only pay for what you need at that moment.

Given today’s evolving technological advancements and data demands, it makes sense for more companies to start embracing the notion of DCaaS and protect against rapidly changing requirements related to scale, security, and infrastructure.

STORAGE

Assuming the predicted 44-zettabyte increase is correct, and if we agree that current storage demand is about a tenth of that amount, it’s safe to say that major storage advancements will be needed to support IoT. An influx of users simply means an influx of data that needs to be stored. As millions of IoT devices collect and transmit data every day, all of their information will need to pass through a data center at some point. Data center owners must ask themselves if their current infrastructure can handle all of the data these devices will generate each day. With the proliferation of IoT devices, data centers will have to dramatically increase their storage options and capacity to meet demand.

SECURITY

Years ago, businesses would turn to data centers and typically their only expectation was a cold facility with network and power. But now, as IoT evolves and a growing number of IoT devices enter the network, the focus is quickly shifting to increased security. The reason? The more endpoints that exist within a network, the greater the likelihood of the network’s security being compromised, and each IoT device is an endpoint.

In addition, data center security has been heavily emphasized as legislation surrounding personal information and credit card information continues to develop, especially on the global stage. Businesses with a U.S.-based website may also have customers in Europe or South America, so their data centers should provide a level of compliance and security that safeguards their assets and data in every country.

Look for an increase in data centers obtaining the ISO 27001 certification, which ensures greater protection of data. This certification tests the overall effectiveness of a data center’s information security management system (ISMS). The ISMS is a framework of policies and procedures that include all legal, physical, and technical controls involved in an organization’s information risk management processes. It’s a systematic approach to managing private and sensitive information so it remains secure.

THE FUTURE IS NOW

It’s easy to see why IoT brings both excitement and trepidation to those who take the time to think about its ramifications. This growing trend will affect organizations at all levels as they try to figure out the best way to benefit and adapt. For data centers, it’s important that they are flexible in order to prepare for the future and ensure their infrastructure is ready for the oncoming blitz of devices and data.

This article appeared in the 2016 September/October issue of Mission Critical Magazine.


Why Cloud Architecture Matters

Choosing an enterprise cloud platform is a lot like choosing between living in an apartment building or a single-family house. Apartment living can offer conveniences and cost-savings on a month-by-month basis. Your rent pays the landlord to handle all ongoing maintenance and renovation projects — everything from fixing a leaky faucet to installing a new central A/C system. But there are restrictions that prevent you from making customizations. And a fire that breaks out in a single apartment may threaten the safety of the entire building. You have more control and autonomy with a house. You have very similar choices to consider when evaluating cloud computing services.

The first public cloud computing services that went live in the late 1990s were built on a legacy construct called multi-tenant architecture. Their database systems were originally designed for making airline reservations, tracking customer service requests, and running financial systems, and they featured centralized compute, storage, and networking that served all customers. As the number of users grew, the multi-tenant architecture made it easy for the services to accommodate that rapid growth.

All customers are forced to share the same software and infrastructure. That presents three major drawbacks:

  1. Data co-mingling: Your data is in the same database as everyone else’s, so you rely on software for separation and isolation. This has major implications for government, healthcare, and financial regulations. Further, a security breach at the cloud provider could expose your data along with that of everyone else co-mingled in the same multi-tenant environment.
  2. Excessive maintenance leads to excessive downtime: Multi-tenant architectures rely on large and complex databases that require hardware and software maintenance on a regular basis, resulting in availability issues for customers. Departmental applications in use by a single group, such as the sales or marketing teams, can tolerate weekly downtime after normal business hours or on the weekend. But that’s becoming unacceptable for users who need enterprise applications to be operational as close to 24/7/365 as possible.
  3. One customer’s issue is everyone’s issue: Any action that affects the multi-tenant database affects all shared customers. When software or hardware issues are found on a multi-tenant database, they may cause an outage for all customers, and an upgrade of the multi-tenant database upgrades all customers. Your availability and upgrades are tied to every other customer that shares your multi-tenancy. Organizations do not want to tolerate this shared approach for applications that are critical to their success. They need software and hardware issues isolated and resolved quickly, and upgrades that meet their own schedules.

With its inherent data co-mingling and availability issues, multi-tenancy is a legacy cloud computing architecture that cannot stand the test of time.

The multi-instance cloud architecture is not built on large centralized database software and infrastructure. Instead, it allocates a unique database to each customer. This prevents data co-mingling, simplifies maintenance, and makes delivering upgrades and resolving issues much easier because it can be done on a one-on-one basis. It also provides safeguards against hardware failures and other unexpected outages that a multi-tenant system cannot.
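
To make the contrast concrete, here is a minimal Python sketch of the routing idea behind a multi-instance platform: every customer resolves to its own database, so an upgrade or repair touches exactly one instance. The registry, endpoints, and customer names are hypothetical illustrations, not any provider’s actual topology.

```python
# Sketch: per-customer database routing in a multi-instance architecture.
# Every customer resolves to its own database, so maintenance, upgrades,
# and restores touch exactly one instance. The registry and endpoints are
# hypothetical illustrations, not any vendor's real topology.
INSTANCE_REGISTRY = {
    "acme":   {"primary": "db://dc-east/acme",   "replica": "db://dc-west/acme"},
    "globex": {"primary": "db://dc-east/globex", "replica": "db://dc-west/globex"},
}

def connection_for(customer, prefer_replica=False):
    """Resolve the database endpoint for one customer's instance."""
    entry = INSTANCE_REGISTRY[customer]
    return entry["replica"] if prefer_replica else entry["primary"]

def upgrade(customer, version):
    """Upgrade a single customer's instance without touching anyone else's."""
    target = connection_for(customer)
    print(f"Applying {version} to {target}; other instances are unaffected.")

# In a multi-tenant system the equivalent operation would hit one shared
# database, and therefore every customer, at once.
upgrade("acme", "release-2016.2")
```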

The provider is able to replicate application logic and the database for each customer instance between two paired, geographically diverse data centers in each of its eight regions around the world. This can be done in near real time, with each side of the paired data centers fully operational and active. Automation technology can quickly move customer instances between these replicated data center pairs.

It’s important to emphasize that multi-instance is not the same as single-tenant, where the cloud provider actually deploys separate hardware and software stacks for each customer. There is some sharing of infrastructure pieces, such as network architecture, load balancers, and common network components. But these are segmented into distinct zones so that the failure of one or more devices does not affect more than a few customers. This enables the creation of redundancy at every layer. For example, at the internet borders, a vendor might have multiple border routers that connect to several tier-one providers over many different private circuits, direct connections, and separate pieces of fiber.

This leads to another important difference between multi-tenant and multi-instance architectures: the approach to disaster recovery. Permanent data loss is a risk inherent to all multi-tenant architectures, which rely on external disaster recovery sites, an approach that is no longer viable.

True, these are sites that a vendor can fail over to if the active side goes down. But they are only tested a few times a year and only used if an extreme situation arises. If (when) that happens, they risk failing under load, and data is lost forever.

That risk virtually disappears in a multi-instance environment. Again, there is not one master file system that services all customers. You can scale out pieces of hardware — stack them on top of each other like LEGO blocks. Each block services no more than a few customers, so one hardware crash cannot affect all the blocks. And because replication is automatic, the secondary side is immediately accessible.

When you partner with a cloud provider that bases its platform on a multi-instance architecture, you’re moving into your own house. Your data is isolated, a fully replicated environment provides extremely high availability, and upgrades happen on the schedule you set, not the provider’s. Cloud architecture matters because you’re in control, and better protected when disaster strikes.


Transitioning To An Agile IT Organization

If you have even a passing interest in software development, you’re likely familiar with the premise of agile methods and processes: keep the code simple, test often, and deliver functional components as soon as they’re ready. It’s more efficient to tackle projects using small changes, rapid iterations, and continuous validation, and to allow both solutions and requirements to evolve through collaboration between self-organizing, cross-functional teams. All in all, agile development carves a path to software creation with faster reaction times, fewer problems, and better resilience.

The agile model has been closely associated with startups that are able to eschew the traditional approach of “setting up walls” between groups and departments in favor of smaller, more focused teams. In a faster-paced and higher-risk environment, younger companies must reassess priorities more frequently than larger, more established ones; they must recalibrate in order to improve their odds of survival. It is for this reason that startups have also successfully managed to extend agile methods throughout the entire service lifecycle — e.g., DevOps — and streamline the process from development all the way through to operations.

Many enterprises have been able to carve out agile practices for the build portion of IT, or even adopt DevOps on a small scale. However, most larger companies have struggled to replicate agility through the entire lifecycle for continuous build, continuous deployment, and continuous delivery. Scaling agility across a bimodal IT organization presents some serious challenges, with significant implications for communication, culture, resources, and distributed teams — but without doing so, enterprises risk being outrun by smaller, nimbler companies.

If large enterprises were able to start from scratch, they would surely build their IT systems in an entirely different way — that’s how much the market has changed. Unfortunately, starting over isn’t an option when you have a business operating at a global, billion-dollar scale. There needs to be a solution that allows these big companies to adapt and transform into agile organizations.

So what’s the solution for these more mature businesses? Ideally, to create space within their infrastructure for software to be continuously built, tested, released, deployed, and delivered. The traditional structure of IT has been mired in ITIL dogma, siloed teams, poor communication, and ineffective collaboration. Enterprises can tackle these problems by constructing modern toolchains that shake things up and introduce the cultural changes needed to bring a DevOps mindset in house.

I like to think of the classic enterprise technology environments as forests. There are certainly upsides to preserving a forest in its entirety. Its bountiful resources — e.g., sophisticated tools and talented workers — offer seemingly endless possibilities for development. Just as the complex canopy of the forest helps shield and protect the life within, the infrastructure maintained by the operations team can help protect the company from instability.

But the very structure that protects the software is also its greatest hindrance. It prevents the company from making the rapid-fire changes necessary to keep up with market trends. The size and scale of the infrastructure, which were once strengths, become enormous obstacles during deployment and delivery. Running at high speed through a forest is a bad idea — you will almost certainly trip over roots, get whacked by branches, and find your progress slowed as you weave through a mix of legacy technology, complex processes, regulatory concerns, compliance overhead, and much more.

By making a clearing in the forest, enterprises can create a realm where it’s possible to run without the constraints of so many trees. This gives them the ability to mimic the key advantage of smaller companies by creating the freedom to quickly build, deploy, and deliver what they want — without the tethers of legacy infrastructure.

For example, I have worked with a multinational retailer that, in addition to operating 7,800 stores across 12 markets, manages 4,500 IT employees around the world — which translates to 7 million emails and 300 phone calls per day from distributed operation centers in nine different countries. The major issue was that notification processes were inconsistent on a global level, and frequently failed to get relevant information to the right people at the right time. This, of course, translated into slower response times to issues affecting their customers.

In order to modernize its IT force, the company reorganized into a service-oriented architecture (SOA), featuring separate service groups that owned the design, development, and run of their respective systems. This meant that many IT members were given new roles and responsibilities; though most had worked on developing systems, they hadn’t worked on supporting them. The company also integrated tools to enable automation and self-service for end users. Today, it has a more consistent and collaborative digital work environment, and the result is greater efficiency, happier customers, and more growth opportunities for the future.

Similarly, I worked with a retail food chain that faced a challenge in improving the communication and collaborative capabilities of its food risk management teams. Prior to IT modernization, in-store staff manually monitored freezer temperatures every four hours — a complex and time-consuming task that was highly prone to human error. If an incident arose, the escalation process couldn’t identify the correct team member to address the temperature issue, so a mass email would be blasted out. There was no way of knowing whether the correct team member had been made aware of the issue and had addressed it.

The company tackled this challenge by creating a more robust process for incident management involving SMS messages to identified staff, emails and phone calls to management, and automated announcements over the in-store system. In addition, they implemented an Internet of Things (IoT) program to completely automate and monitor refrigerator and frozen food temperature management. The result has been significantly increased efficiency, transparency, and accountability — not to mention a safer experience for their customers.
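
An escalation chain like the one described above can be modeled as a short sequence of notification steps with fallbacks. The following Python sketch is a hypothetical illustration of that flow; the threshold, contact roles, and notification helpers are assumptions, not the retailer’s actual implementation.

```python
# Sketch of a freezer-temperature escalation chain: page the identified
# on-duty staff member first, then management, then fall back to the
# in-store announcement system. The helpers are hypothetical stand-ins
# for real SMS/email/PA integrations.
FREEZER_MAX_C = -18.0  # assumed safe upper bound for frozen-food storage

def send_sms(person, message):
    """Text the recipient; return True once they acknowledge (stub)."""
    return False

def send_email(person, message):
    """Email the recipient; return True once they acknowledge (stub)."""
    return False

def announce(store_id, message):
    """Play an automated announcement over the in-store system (stub)."""
    print(f"[{store_id}] ANNOUNCEMENT: {message}")

def escalate(store_id, freezer_id, temp_c, on_duty, manager):
    """Walk the escalation chain until someone acknowledges the incident."""
    if temp_c <= FREEZER_MAX_C:
        return  # temperature is in range; nothing to do
    msg = f"Freezer {freezer_id} at store {store_id} reads {temp_c:.1f} C"
    if send_sms(on_duty, msg):      # step 1: identified on-duty staff
        return
    if send_email(manager, msg):    # step 2: escalate to management
        return
    announce(store_id, msg)         # step 3: automated in-store announcement

escalate("store-042", "freezer-7", temp_c=-12.4,
         on_duty="tech-on-duty", manager="shift-manager")
```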

As you can see, these companies were able to identify target areas and problems, and create new spaces within their existing infrastructures to allow them to communicate better, and ultimately become faster, nimbler, and more responsive. Any enterprise looking to move toward agile software development and operations should look at technology-based projects and initiatives that will be most impactful in enhancing team focus and culture. Before you even start thinking about the problems you want to solve with agile and DevOps, you should identify and initiate the conversations that will provide the starting points for adoption. Without a detailed map of your infrastructure and the activities within it, you cannot clear a path to complete, end-to-end DevOps adoption.


Summertime And Living In The Cloud Is Easy

Welcome to Cloud Strategy’s 2016 Summer issue! We really outdid ourselves this time.

To begin, Allan Leinwand of ServiceNow is here with an in-depth look at cloud architecture for our cover story. But there is more! Kiran Bondalapati from ZeroStack writes about the commoditization of infrastructure; Sumeet Sabharwal of NaviSite writes on the opportunities available to independent software vendors in the cloud; Mark Nunnikhoven of Trend Micro talks about the trend of the everywhere data center and the danger of dismissing the hybrid cloud; Alan Grantham of Forsythe writes about the cloud conversations companies should be having; Peter Matthews of CA Technologies, Anthony Shimmin of AIMES Grid Services, and Balazs Somoskoi of Lufthansa Systems share their tips for selecting the right cloud services provider; Adam Stern, founder and CEO of Infinitely Virtual, writes about the importance of cloud storage speed; Shea Long of TierPoint tackles the hot topic of DRaaS; and Steve Hebert, CEO of Nimbix, writes on the challenges CIOs face in balancing public, private, and hybrid clouds.

In addition, we have a case study from Masergy on its successful implementation of a high-speed network to support Big Data analytics.

Another great issue, if we say so ourselves.


Hyper-scale data center eliminates IT risk and uncertainty

In June 2016, CyrusOne completed the Sterling II data center at its Northern Virginia campus. A custom facility featuring 220,000 sq ft of space and 30 MW of power, Sterling II was built from the ground up and completed in only six months, shattering all previous data center construction records.

The Sterling II facility represents a new standard in the building of enterprise-level data centers, and confirms that CyrusOne can use the streamlined engineering elements and methods used to build Sterling II to build customized, quality data centers anywhere in the continental United States, with a similarly rapid time to completion.

CyrusOne’s quick-delivery data center product provides a solution for cloud technology, social media, and enterprise companies that have trouble building or obtaining data center capacity fast enough to support their information technology (IT) infrastructure. In trying to keep pace with overwhelming business growth, these companies often find it hard to predict their future capacity needs. A delay in obtaining data center space can also delay or stop a company’s revenue-generating initiatives, and have significant negative impact on the bottom line.

The record completion time of the Sterling II facility was the result of numerous data center construction principles developed by CyrusOne. These include standardized data center design techniques that enable CyrusOne and its build partners to customize the facility to optimize space, power, and cooling according to customer needs; effective project management in all phases of design and construction, thanks to CyrusOne’s established partnerships with data center architects, engineers, and contractors; advanced supply-chain techniques that enable CyrusOne to manufacture or pre-fabricate data center components and equipment without disrupting work at the construction site; and the use of Massively Modular® electrical units and chillers to enable rapid deployment of power and cooling at the facility according to customers’ IT capacity needs.

Introduction

In late December 2015, CyrusOne broke ground on the Sterling II data center, the second facility at its Northern Virginia campus. Built for specific customers, the Sterling II facility is a 220,000-sq-ft data center with 30 MW of critical power capacity. The facility was completed and commissioned in mid-June 2016. Its under six-month construction time frame is the shortest known time to completion ever achieved by CyrusOne for an enterprise-scale data center of its size. The 180-day build time shattered all known industry construction records.

CyrusOne had previously set another industry record by delivering a 120,000-sq-ft, 6MW facility in Phoenix, Arizona, in 107 days, or just over three months. The Sterling II facility is almost twice the size of the Phoenix facility, offers five times more power capacity, and took only twice as long to deliver. Its record time to market represents a new industry standard in the construction and deployment of built-to-suit enterprise data centers.

The Challenge

Many large-scale cloud, internet, social media and enterprise companies are growing at an unprecedented and unpredictable rate, with their IT footprints often doubling or tripling in size in just a few years. But rapid growth makes it harder for these companies to predict or plan for future IT infrastructure expansion.

“When enterprises determine how much IT capacity they will require to handle future business growth, it often turns out that they needed it ‘yesterday,’” explains John Hatem, CyrusOne’s executive vice president of data center design, construction, and operations. “But they can’t build new data centers or buy colocation space fast enough to meet their skyrocketing IT infrastructure demands. In addition, the quest to build or obtain new data center space is a distraction from the company’s core business, whether that’s software development, cloud technology, social media, or other business applications.”

The Solution

CyrusOne Solutions™ build-to-suit IT deployments can deliver a completed, high-quality data center product, often in the same amount of time it takes enterprises to order and receive the computing equipment that will operate inside the facility. This rapid time to delivery helps relieve the customer’s risk of not having adequate IT capacity to support their key business growth, or the infrastructure demands of new initiatives. Significantly, CyrusOne is typically able to deliver this data center product with lower construction, engineering and operational costs to the customer.

The Sterling II and Phoenix enterprise data centers were completed in record time thanks to CyrusOne Solutions’ streamlined construction and IT deployment approach, which includes:

  • CyrusOne’s signature Massively Modular engineering disciplines, which employ standardized data center design using pre-fabricated components and template construction techniques.
  • Effective project management by the CyrusOne Solutions team through productive and collaborative relationships with experienced data center architects, engineers and contractors involved in the project.
  • Advanced supply-chain techniques that enable CyrusOne to manufacture or pre-fabricate data center components with time-saving efficiency.
  • CyrusOne’s Massively Modular approach, which uses modular electrical units and chillers to provide flexible power and cooling deployments for the facility.

Massively Modular Construction 

“We think of building our data centers as a manufacturing process, not a construction process,” Hatem says. “We deliver the same high-quality product to all of our customers, which is a reliable data center with space, power and cooling. Using a standardized data center design and components enables us to deploy a similar product anywhere in the continental United States, with the fastest time to market available.”

Through its Massively Modular construction/engineering methods, CyrusOne builds data centers in standardized building blocks with 60,000 sq ft of infrastructure and 4.5 MW of power. For customized data center projects, CyrusOne builds as many blocks as the customer requires. The Phoenix data center consists of two building blocks, while the Sterling II data center consists of five building blocks (with additional power capacity added). Using this standardized layout as a basis, CyrusOne can then customize the design of a built-to-suit data center to optimize space, power and cooling according to the individual customer’s IT needs.

Effective Project Management through Industry Partnerships

To build the Sterling II facility, CyrusOne Solutions put together a project-management team that included outside architects, engineers, and contractors who had worked with CyrusOne on previous data center builds. By working with these industry experts, CyrusOne was able to plan and execute the Sterling II project so the facility could be built in a very short time.

“I can’t say enough about the entire team that worked on the project,” says Laramie Dorris, CyrusOne’s vice president of design and construction. “That includes the architect and engineering team, general contractors, third-party consultants, structural and civil engineers, and local contractors in Northern Virginia, who all pulled together to manage and execute this project. A project like this runs 24/7 for the entire duration, and it was incredible to watch everyone working together in a collaborative, cohesive effort to meet the project requirements and finish the facility within the established six-month time frame.”

Corgan, a Dallas firm, is the architect of record for the Sterling II facility. According to Mike Connell, who served as Corgan’s project manager on Sterling II, “One reason for CyrusOne’s success is they don’t try to micromanage a data center project from the top down. Instead, they hire the right people, build the right teams and empower project managers to make important decisions based on their roles. It makes their construction projects run more smoothly and efficiently.

“For Sterling II, CyrusOne provided Corgan with the basis of design, a budget and a time frame for building the data center, and let our engineers take care of the rest. We were able to give them several design options and tell them the impact on construction, schedule and cost for each option. The confidence that CyrusOne showed in our engineers enabled them to use their creativity to meet the challenge and solve the problems of building a facility in just six months. Our engineers are able to work smarter and harder when they aren’t being overly managed by the client.”

Advanced Supply-Chain Techniques

“In Northern Virginia, CyrusOne made an educated decision to go with an all-precast structural concrete building with modular power and cooling units,” Dorris explains. “This enabled us to set up advanced supply-chain operations to manufacture or pre-fabricate the components we needed for the data center, which gave us significant savings in time and costs.

“For example, a normal data center building has tilt-up concrete walls, which are cast on-site at the construction site. But for the Sterling II data center, we set up a separate off-site facility where we could cast pre-fabricated concrete wall panels. We then brought those panels to the construction site on trucks and used them to set up the data center building. It saved time because we didn’t have to stop work at the building site while the concrete walls were being cast.

“Also, we decided to use pre-fabricated concrete supports in the data center building, which we could also cast off-site. This saved additional time and money because we didn’t have to buy a reinforced steel framework for the building or wait for it to be delivered to us. Using pre-cast concrete walls and supports shaved a couple of months off our time to market for Sterling II.”

Modular Power and Cooling

“To provide power and cooling to the Sterling II facility, we used CyrusOne’s Massively Modular engineering approach,” Dorris says. “We set up another off-site facility where we could assemble modular power units. Each unit included an uninterruptible power supply (UPS), a backup generator, and a utility transformer, all housed in weatherproof containers. We brought the modular units to the Sterling II site and set them up in ‘lineups’ outside the facility. Using modular power units speeds up construction, saves money and reduces the building’s footprint because we don’t have to build additional rooms inside the data center to house power equipment. Also, we used modular cooling units from Stulz at the Sterling II facility, which saved us from having to build a large centrifugal cooling plant on-site.

“The Massively Modular approach provides flexible power and cooling options for Sterling II. If our customer needs to change their IT deployment within the facility, we can bring in additional power units and chillers, and increase power density and cooling with no negative impact or downtime on their current environment. The modular cooling units help lower operating costs because they’re cheaper to operate and maintain over a regular on-site cooling plant. Also, the Massively Modular approach provides redundancy. If a power or cooling unit breaks down, the others will take up the slack until the broken unit can be repaired or replaced.”

Conclusion

CyrusOne Solutions’ built-to-suit data center product is the best solution for cloud, internet, or enterprise customers who need quality data center facilities built in the shortest time possible. The standardized construction approach is a repeatable process employable in multiple locations to ensure rapid speed to market for data center projects, with significant cost savings for customers.

By delivering data centers like the Sterling II and Phoenix facilities in record times, CyrusOne is continuously setting the bar higher for the data center industry. Additionally, CyrusOne is helping ensure its customers are able to scale at hyper-speed to meet their data center capacity needs by removing the risks of running out of space or power.

“CyrusOne has a culture of dedication to client service that starts with their executives and permeates throughout their company,” Connell adds. “When a customer asks them to do something, instead of saying no, they try to figure out ways to make it happen.”

*This case study first appeared on the CyrusOne website.


With vCloud Air sale, VMware clears cloud computing path

With the sale of its long languishing vCloud Air offering this week, VMware found a way to step away from the product that has had an uncertain future for quite some time.

The company sold its vCloud Air business to OVH, Europe’s largest cloud provider, for an undisclosed sum, handing off its vCloud Air operations, sales team and data centers to add to OVH’s existing cloud services business.

But VMware isn’t exactly washing its hands of the product. The company will continue to direct research and development for vCloud Air, supplying the technology to OVH – meaning VMware still wants to control the technical direction of the product. It also will assist OVH with various go-to-market strategies, and jointly support VMware users as they transfer their cloud operations to OVH’s 20 data centers spread across 17 countries.

The sale of vCloud Air should lift the last veil of mist that has shrouded VMware’s cloud computing strategy from the start. VMware first talked about its vCloud initiative in 2008, and six years later re-launched the product as vCloud Air, a hybrid IaaS offering for its vSphere users. It never gained any measurable traction among IT shops, getting swallowed up by a number of competitors, most notably AWS and Microsoft.

The company quickly narrowed its early ambitions for vCloud Air to a few specific areas, such as disaster recovery, acknowledged Raghu Raghuram, VMware’s chief operating officer for products and cloud services, in a conference call to discuss the deal.

Further obscuring VMware’s cloud strategy was EMC’s $1.2 billion purchase of Virtustream in 2015, a business that had every appearance of being a competitor to vCloud Air. This froze the purchasing decisions of would-be buyers of vCloud Air, who waited to see how EMC-VMware would position the two offerings.

Even a proposed joint venture between VMware and EMC, called the Virtustream Cloud Services Business, an attempt to deliver a more cohesive technical strategy, collapsed when VMware pulled out of the deal. Dell’s acquisition of EMC, and by extension VMware, didn’t do much to clarify what direction the company’s cloud computing strategy would take.

But last year VMware realized the level of competition it was up against with AWS and made peace with the cloud giant, signing a deal that makes it easier for corporate shops to run VMware both on their own servers and on servers running in AWS’ public cloud. Announced last October and due in mid-2017, the upcoming product, called VMware Cloud on AWS, will let users run applications across vSphere-based private, hybrid, and public clouds.

With the sale of vCloud Air, the company removes another distraction for both itself and its customers. Perhaps now the company can focus fully on its ambitious cross-cloud architecture, announced at VMworld last August, which promises to help users manage and connect applications across multiple clouds. VMware delivered those offerings late last year, but the products haven’t created much buzz since.

VMware officials, of course, don’t see the sale as the removal of an obstacle, but rather “the next step in vCloud Air’s evolution,” according to CEO Pat Gelsinger, in a prepared statement. He added the deal is a “win” for users because it presents them with greater choice — meaning they can now choose to migrate to OVH’s data centers, which both companies claim can deliver better performance.

Hmm, well that’s an interesting spin. But time will tell if this optimism has any basis in reality.

After the sale is completed, which should be sometime this quarter, OVH will run the service under the name vCloud Air Powered by OVH. Whether it is wise to keep the vCloud brand, given the product’s less-than-stellar success, again remains to be seen.

Ed Scannell is a senior executive editor with TechTarget. Contact him at escannell@techtarget.com.

The post With vCloud Air sale, VMware clears cloud computing path appeared first on The Troposphere.


Awareness of shared-responsibility model is critical to cloud success

When companies move to the cloud, it’s paramount that they know where the provider’s security role ends and where the customer’s begins.

The shared-responsibility model is one of the fundamental underpinnings of a successful public cloud deployment. It requires vigilance by the cloud provider and customer—but in different ways. Amazon Web Services (AWS), which developed the philosophy as it ushered in public cloud, describes it succinctly as knowing the difference between security in the cloud versus the security of the cloud.

And that model, which can be radically different from how organizations are used to securing their own data centers, often creates a disconnect for newer cloud customers.

“Many organizations are not asking the right question,” said Ananda Rajagopal, vice president of products at Gigamon, a network-monitoring company based in Santa Clara, Calif. “The right question is not, ‘Is the cloud secure?’ It’s, ‘Is the cloud being used securely?’”

And that’s a change from how enterprises are used to operating behind the firewall, said Abhi Dugar, research director at IDC. The security of the cloud refers to all the underlying hardware and software:

  • compute, storage and networking
  • AWS global infrastructure

That leaves everything else—including the configuration of those foundational services—in the hands of the customer:

  • customer data
  • apps and identity and access management
  • operating system patches
  • network and firewall configuration
  • data and network encryption

Public cloud vendors and third-party vendors offer services to assist in these areas, but it’s ultimately up to the customers to set policies and track things.
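
To make the customer’s half of the model concrete, the sketch below uses boto3, the AWS SDK for Python, to apply two settings that fall on the customer’s side of the line: default server-side encryption and versioning on an S3 bucket. The bucket name is a placeholder, and this is an illustrative fragment rather than a complete security baseline.

```python
# Sketch: two customer-side controls under the shared-responsibility model.
# AWS secures the underlying infrastructure ("of the cloud"); settings like
# these are the customer's job ("in the cloud"). Requires boto3 and valid
# AWS credentials; the bucket name is a placeholder.
import boto3

BUCKET = "example-company-data"  # hypothetical bucket name

s3 = boto3.client("s3")

# Default server-side encryption for new objects written to the bucket.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)

# Versioning helps recover from accidental deletes or overwrites.
s3.put_bucket_versioning(
    Bucket=BUCKET,
    VersioningConfiguration={"Status": "Enabled"},
)
```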

The result is a balancing act, said Jason Cradit, senior director of technology at TRC Companies, an engineering and consulting firm for the oil and gas industry. TRC, which uses AWS as its primary public cloud provider, turns to companies like Sumo Logic and Trend Micro to help segregate duties and fill the gaps. And it also does its part to ensure it and its partners are operating securely.

“Even though it’s a shared responsibility, I still feel like with all my workloads I have to be aware and checking [that they] do their part, which I’m sure they are,” Cradit said. “If we’re going to put our critical infrastructure out there, we have to live up to standards on our side as much as we can.”

Trevor Jones is a news writer with SearchCloudComputing and SearchAWS. Contact him at tjones@techtarget.com.

The post Awareness of shared-responsibility model is critical to cloud success appeared first on The Troposphere.


Plan to kill Cisco public cloud highlights the investment needed to compete

The graveyard of public clouds is littered with traditional IT vendors, and it’s about to get a bit more crowded.

Cisco has confirmed a report by The Register that it will shut down its Cisco Intercloud Services public cloud early next year. The company rolled out Intercloud in 2014 with plans to spend $1 billion to create a global interconnection among data center nodes targeted at IoT and software as a service offerings.

The networking giant never hitched its strategy to being a pure infrastructure as a service provider, instead focusing on a hybrid model based on its Intercloud Fabric. The goal was to connect to other cloud providers, both public and private. Those disparate environments could then be coupled with its soon-to-be shuttered OpenStack-based public cloud, which includes a collection of compute, storage and networking.

“The end of Cisco’s Intercloud public cloud is no surprise,” said Dave Bartoletti, principal analyst at Forrester. “We’re long past the time when any vendor can construct a public cloud from some key technology bits, some infrastructure, and a whole mess of partners.”

Cisco will help customers migrate existing workloads off the platform. In a statement, the company indicated it expects no “material customer issues as a result of the transition” – a possible indication of the limited customer base using the service. Cisco pledged to continue to act as a connector for hybrid environments despite the dissolution of Intercloud Services.

Cisco is hardly the first big-name vendor to enter this space with a bang and exit with a whimper. AT&T, Dell, HPE — twice — and Verizon all planned to be major players only to later back out. Companies such as Rackspace and VMware still operate public clouds but have deemphasized those services and reconfigured their cloud strategy around partnerships with market leaders.

Of course, legacy vendors are not inherently denied success in the public cloud, though clearly the transition to an on-demand model involves some growing pains. Microsoft Azure is the closest rival to Amazon Web Services (AWS) after some early struggles. IBM hasn’t found the success it likely expected when it bought bare metal provider SoftLayer, but it now has some buzz around Watson and some of its higher-level services. Even Oracle, which famously derided cloud years ago, is seen as a dark horse by some after it spent years on a rebuilt public cloud.

To compete in the public cloud means a massive commitment to resources. AWS, which essentially created the notion of public cloud infrastructure a decade ago and still holds a sizable lead over its nearest competitors, says it adds enough server capacity every day to accommodate the entire Amazon.com data center demand from 2005. Google says it spent $27 billion over the past three years to build Google Cloud Platform — and is still seen as a distant third in the market.

Public cloud also has become much more than just commodity VMs. Providers continue to extend infrastructure and development tools. AWS alone has 92 unique services for customers.

“We don’t expect any new global public clouds to emerge anytime soon,” Bartoletti said. “The barriers to entry are way too high.”

Intercloud won’t be alone in its public flogging on the way to the scrap heap, but high-profile public cloud obits will become fewer and farther between in 2017 and beyond — simply because there’s no room left to try and fail.

Trevor Jones is a news writer with SearchCloudComputing and SearchAWS. Contact him at tjones@techtarget.com.

The post Plan to kill Cisco public cloud highlights the investment needed to compete appeared first on The Troposphere.


Google cloud consulting service a two-way street

Google received plenty of attention when it reshuffled its various cloud services under one business-friendly umbrella, but tucked within that news was a move that also could pay big dividends down the road.

The rebranded Google Cloud pulls together various business units, including Google Cloud Platform (GCP), the renamed G Suite set of apps, machine learning tools and APIs, and any Google devices that connect to the cloud. Google also launched a consulting services program called Customer Reliability Engineering, which may have an outsized impact compared to the relatively few customers that will ever get to participate in it.

Customer Reliability Engineering isn’t a typical professional services contract in which a vendor guides its customer through the various IT operations processes for a fee, nor is it aimed at partnering with a forward-leaning company to develop new features. Instead, this is focused squarely on ensuring reliability — and perhaps most notably, there’s no charge for participating.

The reliability focus is not on the platform, per se, but rather the customers’ applications that are run on the platform. It’s a response to uncertainty about how those applications will behave in these new environments, and the fact that IT operations teams are no longer in the war room making decisions when things go awry.

“It’s easy to feel at 3 in the morning that the platform you’re running on doesn’t care as much as you do because you’re one of some larger number,” said Dave Rensin, director of the Customer Reliability Engineering initiative.

Here’s the idea behind the CRE program: a team of Google engineers shares responsibility for the uptime and health operations of a system, including service level objectives, monitoring, and paging. They inspect all elements of an application to identify gaps and determine the best ways to move from four nines of availability to five or six nines.
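
Those “nines” translate directly into a yearly downtime budget, which is what a reliability engagement like CRE manages against. A quick back-of-the-envelope calculation:

```python
# Yearly downtime budget implied by a given number of "nines" of availability.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(nines):
    availability = 1 - 10 ** (-nines)
    return (1 - availability) * MINUTES_PER_YEAR

for n in (3, 4, 5, 6):
    print(f"{n} nines -> {downtime_minutes_per_year(n):8.2f} minutes/year")
# 3 nines -> ~526 min, 4 nines -> ~53 min, 5 nines -> ~5 min, 6 nines -> ~0.5 min
```

Moving from four nines to five shrinks the annual allowance from roughly 53 minutes to about five, which is why shared monitoring, paging, and error budgets become central at that level.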

There are a couple ways Google hopes to reap rewards from this new program. While some customers come to Google just to solve a technical problem such as big data analytics, this program could prove tantalizing for another type of user Rensin describes as looking to “buy a little bit of Google’s operational culture and sprinkle it into some corners of their business.”

Of course, Google’s role here clearly isn’t altruistic. One successful deployment likely begets another, and that spreads to other IT shops as they learn what some of their peers are doing on GCP.

It also doesn’t do either side any favors when resources aren’t properly utilized and a new customer walks away dissatisfied. It’s in Google’s interest to make sure customers get the most out of the platform and to be a partner rather than a disinterested supplier that’s just offering up a bucket of different bits, said Dave Bartoletti, principal analyst with Forrester Research.

“It’s clear people have this idea about the large public cloud providers that they just want to sell you crap and they don’t care how you use it, that they just want you to buy as much as possible — and that’s not true,” Bartoletti said.

Rensin also was quick to note that “zero additional dollars” is not the same as “free” — CRE will cost users effort and organizational capital to change procedures and culture. Google also has instituted policies for participation that require the system to pass an inspection process and not routinely blow its error budget, while the customer must actively participate in reviews and postmortems.

You scratch my back, I’ll scratch yours

Customer Reliability Engineering also comes back to the question of whether Google is ready to handle enterprise demands. It’s one of the biggest knocks against Google as it attempts to catch Amazon and Microsoft in the market, and an image the company has fought hard to reverse under the leadership of Diane Greene. So not only does this program aim to bring a little Google operations to customers, it also aims to bring some of that enterprise know-how back inside the GCP team.

It’s not easy to shift from building tools that focus on consumer life to a business-oriented approach, and this is another sign of how Greene is guiding the company to respond to that challenge, said Sid Nag, research director at Gartner.

“They’re getting a more hardened enterprise perspective,” he said.

There’s also a limit to how many users can participate in the CRE program. Google isn’t saying exactly what that cap is, but it does expect demand to exceed supply — only so many engineers will be dedicated to a program without direct correlation to generating revenues.

Still, participation won’t be selected purely by which customer has the biggest bill. Those decisions will be made by the business side of the GCP team, but with a willingness to partner with teams doing interesting things, Rensin said. To that end, it’s perhaps telling that the first customer wasn’t a well-established Fortune 500 company, but rather Niantic, a gaming company behind the popular Pokémon Go mobile game.

Trevor Jones is a news writer with TechTarget’s Data Center and Virtualization Media Group. Contact him at tjones@techtarget.com.

The post Google cloud consulting service a two-way street appeared first on The Troposphere.
