Taking Stock Of Today’s Trends To Set Tomorrow’s Cloud Strategies

Looking at the state of the cloud computing market, we see the coming year as one in which organizations will be thinking more strategically about the cloud. There will still be growth as organizations continue to invest in the advantages of cloud computing, but it’s necessary for everyone to evaluate the lessons learned and make sure that those investments are, in fact, strategic.

This is already happening to a degree and can be seen in two cloud trends reported in the Wall Street Journal. The first, a study by the tech research firm IDC, reports that cloud infrastructure spending is not only on an upward trend but is also outpacing spending on traditional IT. The second report, based on a CompTIA survey, found that the adoption of enterprise cloud applications was trending down.

This contradiction in trends can be explained as a symptom of where we are in the maturation process of the cloud computing market.

No one questions that cloud adoption comes with a host of clear benefits in terms of cost, accessibility, and flexibility. These have been and will continue to be major drivers for cloud computing. Lingering questions related to security are, in large part, fear, uncertainty, and doubt generated by segments of the industry that find themselves well behind the curve. Of course, it’s normal to have concerns about the security of data in the cloud, but security is an issue that spans all aspects of technology, not just the cloud. We all grapple with enterprise security.

As for the IDC and CompTIA findings, how do you reconcile the increase in adoption of cloud infrastructure and the migration of workloads to the cloud with a slowdown in the adoption of cloud applications?

The answer is that, as organizations examine their experience with the cloud, they recognize that while it is great for some things, the cloud may not be the technical panacea they’d hoped. It may be that existing investments in back-office systems are simply not ready for cloud integration and that day is farther down the road than first thought. New IT projects may well be cloud-centric, but for legacy IT that is already in place and operating satisfactorily, the ROI may not make sense.

That is an opportunity for vendors whose portfolios span cloud and traditional offerings. They can leverage existing goodwill to maintain recurring revenue from the maintenance of existing systems while capturing new revenue from sales of hybrid and cloud products that make sense for both existing and new customers.

In my experience, the IDC and CompTIA trends make perfect sense. We’ve seen our customers engaged in the migration to the leading cloud infrastructure service providers like Microsoft Azure, Amazon Web Services (AWS), Google Cloud, and others whose Infrastructure-as-a-Service (IaaS) offerings represent the pinnacle of cloud ROI. They are taking advantage of the cloud’s cost savings by shifting the responsibility of managing equipment and the capital costs of hardware to the IaaS provider, while maintaining management-level control of their operations and, in particular, their mission-critical systems.

In other cases — and especially for companies operating in or expanding into today’s global markets — the cloud can offer advantages associated with the flexibility of being able to establish a local footprint in countries where local control is necessary because of regulation. In such cases, the right cloud strategy can give the organization the ability to focus on compliance in an increasingly regulated business environment without being distracted with the hassles of standing up a new server farm. Consider the changing environment in the EU where the future of the recently adopted EU-US Privacy Shield agreement is already in question, and where the UK’s looming exit from the European Union may have further implications on cross-border data management. For organizations active in Pacific markets, that can also mean responding to changes to the APEC Cross Border Privacy Rules.

It makes sense for any organization to respond to changing circumstances and adjust plans accordingly. You may be three years into a five-year cloud migration plan and, if you haven’t been correcting course along the way, you may find yourself a long way from your destination. Just as cloud consumers must take stock of where they are today in order to adapt strategy, cloud vendors must also recognize how their product development and sales strategies need to change to meet the needs of their customers and of the market as a whole.


Business Access, The Cloud, And Security

Access governance continues to be a surging market in many different industries across the globe, and organizations are investing resources in technology that can efficiently improve processes and keep their networks secure. But while the cloud has become a standard for organizations, governing access to cloud solutions has not yet become a standard part of the toolset. So the question remains: How does access governance apply to the cloud?

Access governance helps organizations of all sizes in every industry by ensuring that each employee has the correct access to the systems needed to perform his or her job while keeping the company’s network secure. Access management then allows organizational leaders to easily manage accounts and access, and to monitor that access remains correct for security reasons.

This works by setting up a model of exactly the access rights each role in the organization requires. Access rights are defined for specific roles in each relevant department. An IT department manager, for example, needs access rights to systems, applications, and resources beyond what other employees need. The model allows the person creating an account to do so without accidentally making access mistakes, such as giving the employee either too many or too few rights.
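
To make the idea concrete, here is a minimal sketch of what such a role model might look like in code. The role names and rights are illustrative assumptions (loosely based on examples later in this article), not drawn from any particular product.

```python
# Illustrative role model: each role maps to exactly the access rights it needs.
ROLE_MODEL = {
    "it_department_manager": {"helpdesk_admin", "network_monitoring", "email"},
    "senior_recruiter": {"coupa_user", "peoplesoft_access", "shared_drive_hr", "email"},
}


def provision_account(employee: str, role: str) -> set[str]:
    """Create an account with exactly the rights defined for the role.

    Because the rights come from the model, the person creating the account
    cannot accidentally grant too many or too few rights.
    """
    if role not in ROLE_MODEL:
        raise ValueError(f"No access model defined for role '{role}'")
    print(f"Provisioning {employee} as {role}")
    return set(ROLE_MODEL[role])


if __name__ == "__main__":
    rights = provision_account("j.smith", "senior_recruiter")
    print(sorted(rights))
```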

Separation of Duties

Access governance allows organizations to assign access rights according to a model that leadership has established, but that only works if there are no errors or omissions in the model. Large organizations have many types of positions, and the responsibilities of the employees in those positions can overlap in ways that would let one person both initiate a request and approve it. Separation of duties ensures that such combinations of permissions are identified and kept apart.

Reconciliation is another way to ensure access rights remain accurate. It compares how access rights are set up in the model with how they actually are, and creates a report on any differences found. Anything that is not accurate can then be easily corrected.
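
As a rough sketch of how reconciliation might work, the function below compares the rights each user actually holds against what the model says they should hold and reports every difference. The data shapes and sample values are assumptions for illustration.

```python
# Reconciliation sketch: compare modeled rights with actual rights and
# report every difference so it can be corrected.
def reconcile(modeled: dict[str, set[str]], actual: dict[str, set[str]]) -> list[str]:
    report = []
    for user in sorted(set(modeled) | set(actual)):
        should_have = modeled.get(user, set())
        has = actual.get(user, set())
        for right in sorted(has - should_have):
            report.append(f"{user}: has '{right}', but the model does not grant it")
        for right in sorted(should_have - has):
            report.append(f"{user}: is missing '{right}', which the model requires")
    return report


if __name__ == "__main__":
    modeled = {"j.smith": {"coupa_user", "peoplesoft_access"}}
    actual = {"j.smith": {"coupa_user", "legacy_crm_admin"}}
    for finding in reconcile(modeled, actual):
        print(finding)
```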

Attestation is still another form of checking access and helps verify all information. A report is forwarded to the managers of a department so they can verify that all users and their rights are accounted for and that everything in the log is correct. The manager of the department being verified reviews the report and either marks access rights for deletion, changes access rights immediately, or creates a helpdesk ticket to change them. After examining all of the rights, the manager gives final approval for the proposed set of changes to ensure that everything is correct.

Why is Access Governance Important in the Cloud?

As the number of employees working remotely increases, so does the number of users of cloud applications. Access governance is a way of ensuring security for these applications and for employees who are not working in the physical office.

When an employee is first hired by an organization, it is extremely common for the employee to receive too many rights, or to acquire them while working on projects and never have them revoked even when the projects have ended. Access rights, unfortunately, are frequently overlooked and not considered important enough to revoke, especially in regard to cloud applications. Access governance ensures that access is correct across the entire organization, from in-house applications and cloud applications to physical resources such as cell phones.

Organizational access can be easily monitored through the use of access governance. Here’s why this is important: The typical process goes a little something like this — a new employee is hired in the human resources department as a senior recruiter and needs accounts and resources created so he or she can begin work. The employee then automatically receives, for example, a Coupa cloud account, PeopleSoft access, access to the department’s shared drive, and an email address. He or she is ready for work.

For those who participate in such practices, the process looks a little like this: Rules are established so that once a quarter (or at whatever interval) the business manager receives a report of all of the employees in his or her department and the access rights of those individuals. When new employees are added to the roles, the list is updated. Then, two quarters later, the manager sees that the senior recruiter has access to an application that he or she had been using for a project that is now complete, or that the individual never needed at all. Because of advanced access governance protocols, the business manager, or other departmental leader, can easily tag the access to be revoked and ensure that it is done right away. No multi-level manual processes; with the click of a button, the employee’s access to a specific system or to all systems can be revoked. That’s the added value of a security measure.

Business leaders have many types of applications to manage, employees in many working situations — traveling, working offsite, or working onsite in the office — and varying resources, all of which affect access governance and technology. Leaders who invest in access governance solutions improve security while allowing employees to remain productive, saving their organizations time and money.


So You’ve Transitioned To The Cloud – Now What?

I’m willing to bet that when Chinese philosopher Lao-Tzu coined his famous phrase around 500 B.C., “The journey of a thousand miles begins with a single step,” he wasn’t thinking about the time it takes to migrate legacy data center operations to the public cloud. But it couldn’t be more applicable.

For many IT departments, shifting operations to the public cloud can be a long, daunting, and frustrating process. However, it doesn’t have to be. Understanding where the public cloud migration journey begins and where it will ultimately end allows IT professionals to ensure that the first step — and all subsequent steps — are taken in the right direction. And well before the cloud journey actually begins, it’s critical that all stakeholders involved understand the value of moving some or all their IT operations to the public cloud. No one wants to walk a thousand miles in the wrong direction.

While the enormous potential of the public cloud has been well documented, realizing that potential in terms of both quantitative ROI and measurable qualitative benefits requires a plan developed and implemented to achieve specific desired results. The reality for most companies embarking on this path is that they can’t do it alone. They require a partner with experience; they need a “Cloud Sherpa” — a partner who can ensure that their journey into uncharted IT territory will be safe and successful. By moving some or all of your applications to a third-party expert’s management and care, IT departments can better focus on their specific objectives, which can translate into significant bottom-line results for the organization.

The main benefits of transitioning to the cloud are agility, increased scalability, reduced total cost of ownership (TCO), and improved security. To reach those results, below are five main steps for companies implementing the public cloud, and thoughts as to how a third-party provider’s management and care could aid the process.



Begin with the end in mind.

It’s key to keep the long game in mind when planning the move to the public cloud. It starts with identifying challenges to be solved and opportunities to be pursued. Make sure all stakeholders are kept in the loop and involve them in the process. CEOs are typically more open to new applications that increase sales and improve customer satisfaction. CFOs, however, often put more emphasis on cost containment and profit-building, and CIOs usually want service-level improvements. By keeping these individuals in the loop, you increase your chances of success because you will have executive leadership buy-in.

Take stock.

Once you have pinpointed the company’s IT goals, it’s important to conduct a high-level inventory or a refresh of the current list of all the apps being used across your enterprise. The appropriate teams and departments conducting this inventory may uncover utilities, databases, and websites you may have missed. Include information about the purpose of the application, who uses it, and the sensitivity and importance of the data to the business. In order to chart your path to the cloud, you need to know the current state of all apps being utilized.
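
One lightweight way to capture that inventory is a structured record per application. The sketch below is only a suggestion; the fields mirror the ones mentioned above (purpose, users, data sensitivity, business importance), and the sample entry is invented.

```python
from dataclasses import dataclass


@dataclass
class AppInventoryEntry:
    """One row of the application inventory described above."""
    name: str                  # application or service name
    purpose: str               # what business function it serves
    users: str                 # departments or teams that depend on it
    data_sensitivity: str      # e.g., "public", "internal", "regulated"
    business_criticality: str  # e.g., "low", "medium", "mission-critical"


inventory = [
    AppInventoryEntry(
        name="Expense reporting",
        purpose="Employee expense submission and approval",
        users="Finance; all employees",
        data_sensitivity="internal",
        business_criticality="medium",
    ),
]

for entry in inventory:
    print(entry)
```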

Map demand.

Mapping demand is crucial in strategizing your move to the public cloud. Ask managers to project growth for existing apps over the next three years, and include new apps the company will take online. By identifying and anticipating future traffic levels and spikes, the team can plan accordingly and be ready for increased growth.

Determine the best cloud candidates.

Review your inventory of applications and data and find the best candidates for cloud migration and implementation. You can pick and choose which apps to move to the cloud and which can stay running on-premises. Apps that experience spiky demand or involve parallel processing (e.g., batch) are naturally a better fit for the public cloud. This is also true for apps requiring DR or needing broad geographic placement.

Decide how far you’ll go.

The beauty of the cloud is that you don’t have to go 100% in all at once. Usually, legacy apps should be left where they are, as they typically can’t benefit from cloud scalability. On the other hand, if you have a pending capital expenditure (CAPEX) investment to refresh infrastructure providing legacy apps, it may make sense to move them to the cloud. Scenarios such as this explain the growing popularity of the hybrid cloud amongst enterprises. Low-risk operations such as project management, file sharing, and any other non-revenue generating applications are all low-hanging fruit that can be moved into the public cloud. With cloud, you can start small and grow at the pace that suits your business.

Any IT trek into new territory is bound to encounter unforeseen issues and challenges. With all the factors to consider when migrating to the cloud, it’s beneficial to have seasoned experts that have successfully managed the transition before. The process is much more streamlined with a guide walking you through it, step by step. Once you’ve narrowed down the list of possible managed public cloud partners, the journey begins.


Vertical vs. Horizontal Sourcing In The Cloud Era

When Thomas Friedman wrote his bestselling book The World is Flat in 2005, he was not just talking about level playing fields in terms of commerce. He also touched upon two key trends which were helping the world market to flatten, namely outsourcing and offshoring.

Outsourcing meant segregating manufacturing and services into “components” that could be performed in the most cost-effective and efficient manner. This, coupled with offshoring of these components, gave rise to phrases such as “outsource manufacturing to China” and “outsource services to India.” These outsourcing strategies allowed corporations to rearrange their supply chains and orchestrate greater value across the enterprise, which resulted in enhanced market expansion and profitability.

Componentization of Services Built on Horizontal Layers 

In this instance, componentization of services was built on horizontal layers. Enterprises were using technology “services components,” typically layered on top of one another.

Various models were created to unbundle these service components into distinct services that could then be outsourced to one or many third parties. One of the most successful initiatives in the last decade has been modular sourcing, where each service component was delivered by an entity that had built the expertise and efficiency for that component. Outsourcing companies became good at “remote infrastructure management,” at “application management,” or at acting as full-scale service providers. This model worked well, as long as the service component stayed within these layers.

Advent of Cloud Computing & Data Brings Need for Vertical Sourcing

The advent of cloud computing has blurred the lines between these layers. The cloud has not only collapsed the first layer, the distributed data center, but has also added a third and critical dimension to the service component — namely, applications and data that lead to greater insights.

This unique phenomenon makes it possible to process data along with applications to generate real-time insights, so businesses can make decisions faster. In the cloud world, these service components will need to be organized differently.

Use Case: Financial Services

As an example, a credit card issuer can launch new products only every two to three months (at best), because the insight it generates carries an inherent feedback delay when the sourcing is bundled across multiple entities.

For example, if users stop calling to dispute transactions, it might mean that there are fewer fraudulent transactions. However, it could also mean that the users move on and simply abandon the credit card if they are not able to seamlessly dispute a transaction on the card. The card issuer needs this insight quickly. They must have the ability to respond in a timely manner to a dispute — for a particular age group, on a mobile device, in a particular geography — and offer a different method of resolving the dispute.

Competing products from “born in the cloud” companies are able to respond quickly because their data and applications reside together in the cloud.

For companies to respond quickly in today’s cloud world, their services components must change from horizontal to vertical: process, application, data, API, and cloud together for a related function. As an example, the credit card issuer should consider outsourcing the entire service component together.

Some companies have already employed this vertical model of outsourcing in the older world and have reaped tremendous benefits as it ensured sustenance and change worked hand in hand. With the advent of the cloud and its ability to handle large volumes of data and generate insights, it is extremely important that outsourcing is handled as a vertical function.

Table 1. Horizontal Sourcing

Layers | Service Components | Speed of change
Distributed Data Centers | Physical data center management; remote infrastructure management | 12-18 months
Application Management (custom built and packaged software) | First line of support (incident management); second line of support (problem management and long-term fixes) | 0-3 months
Application Development | Routine changes, related business-as-usual activities, compliance; changes to support the business for new products and services | 3-12 months
Large Development Programs | Build net new systems and applications | 3-24 months

Table 2. Vertical Sourcing

Layers | Service Components | Speed of change
Cloud | Physical data center management; remote infrastructure management | 0-4 weeks
Application Management (custom built and packaged software) | First line of support (incident management); second line of support (problem management and long-term fixes) | 0-4 weeks
Application Development | Routine changes, related business-as-usual activities, compliance; changes to support the business for new products and services | 0-12 weeks
Large Development Programs | Build net new systems and applications | 0-12 months*

* Due to change of technology and processes, it’s uncommon to see programs over a year.


Why FPGAs, Hyperconvergence, And DevOps Matter To Your Network

The race to innovate in cloud networking has increased to a sprint. Most recently, Microsoft announced the latest coming out of Project Catapult — the decades-old field programmable gate arrays (FPGAs) that last burst onto the scene in a meaningful way with Bing. FPGAs are being brought to the forefront again as a way to increase the speed and efficiency of Azure while decreasing its cost. The effects of these cloud networking innovations will be felt in myriad ways, so it’s important to explore how these technologies — using Microsoft FPGAs as an example — will start to take shape in network environments.

Decoding the Acronyms

With all of the different custom server technologies at its disposal, it’s important to explore why Microsoft is going in the direction of FPGA vs. central processing unit (CPU), graphics processing unit (GPU), or application-specific integrated circuit (ASIC).

The CPU is a general purpose processor with a broad published instruction set, and while not a speed demon, it can do everything from IP address resolution to analog decoding and graphics. That is why these are ubiquitous in nearly every device type, from phones to computers to embedded devices. On the other hand, a GPU has hundreds, or even thousands, of cores, each performing only a handful of tasks, but doing so very quickly, thanks to custom silicon, programming, and parallelism (think Bitcoin and NSA data centers). Finally, ASIC — the network equivalent of a GPU — is a custom chip that knows how to route network traffic without all of the reporting fuss. It moves packets efficiently and quickly, but slows considerably for non-routine tasks. Each of these chips requires an element of custom-building with tradeoffs in speed and efficiency.

What Microsoft did with Bing, and how Bing was able to catch up to Google, was to look for ways to achieve neural net processing and machine learning, knowing that Bing would need the kind of performance dedicated chip processors deliver while also being able to adapt over time. So Microsoft turned to a slightly old-school technology: FPGAs. Instead of building specialized compute nodes, it started putting FPGAs on its servers, distributing the programmable chips in each one of its servers to localize task-specific compute power that was much more efficient for certain workloads than the servers themselves.

Perhaps Bing’s legacy won’t be that it became a respectable challenger to Google search, but that it launched the architecture behind Azure Project Catapult, the distributed FPGA network. With its FPGA-based network, instead of building custom chips, Microsoft built a distributed network of reprogrammable chips designed for machine learning and other capabilities, including software-defined networking (SDN) and routing, as part of a beneficial standard infrastructure that is unique to Azure.

It’s important to note that alongside recent announcements about FPGAs, Azure also lowered its pricing, further accelerating the price and efficiency war between cloud goliaths Azure and Amazon Web Services (AWS), and in the process, making the networking in the cloud race a bit more interesting.

But while this is all a very high-level examination of the industry, the question remains: how will this actually affect IT professionals?

Reality for IT Professionals: Hyperconvergence and DevOps in a Hybrid IT World

Hyperconvergence is where we are likely to see this all come into play for the IT professional. For example, Azure Stack is a Microsoft version of enterprise hyperconvergence and essentially allows one to deploy Azure in a data center. It’s blurring the lines between enterprise and cloud technologies and making Azure increasingly attractive for the enterprise. With Azure Stack everything works like it does in the cloud, but on-premises. Microsoft is essentially pushing highly converged capabilities into a rack of homogenous systems side-by-side, supporting a common management and monitoring toolset and transitioning administrators, finally, from Infrastructure-as-a-Service (IaaS) to Platform-as-a-Service (PaaS).

With Azure in the data center, more IT professionals will be moving on-premises workloads to Azure Stack. This is because it will allow them to programmatically manage enterprise infrastructure, and ultimately hit a button and move elements of the infrastructure to Azure. Talk about hybrid IT.

This introduces another approach to hybrid IT and an expansion of the DevOps function in two ways: via cloud networking technologies being applied to enterprise environments (and on-premises businesses seeking to hire Azure developers, for example), and the natural adoption of DevOps as a necessary function for anyone managing apps by APIs and not GUIs.

Best Practices

With the explosion of cloud networking innovations leading to hyperconvergence and an increased blending of traditional and cloud technologies in the enterprise, IT professionals need to be armed with best practices to keep pace with the changing landscape. They should consider the following:

  • Expand understanding of monitoring. Effective network monitoring today means looking at elements from components of the application stack (databases, servers, storage, routers, and switches) to internal network firewalls, internet path, and Software-as-a-Service (SaaS) provider internal network monitoring. Although it’s necessary to be able to get information about the components of application delivery for detailed troubleshooting, from a monitoring perspective, it’s more important than ever to do user experience monitoring across all elements of the delivery chain, including the internet and service provider networks.
  • Learn the intricacies of virtual private cloud (VPC) networking. This involves security policy management, policy group assignment, and security policy auditing. In short, IT professionals can no longer get by with just knowing how to secure internal networks; they must understand how to replicate this process in their VPC. (A small auditing sketch follows this list.)
  • Focus on understanding how bulk traffic travels. When running backups in on-premises environments, the only concern is whether offline analytics processing runs at the same time as backups, and whether the two should be separated to avoid overloading storage. But in cloud environments, this is much more complex and involves understanding where backups are going and where processes are happening. IT professionals should keep an eye on the evolving nature of network traffic across LAN, WAN, and VPC networks.
  • Hit the books. All of these technologies will require a burst of education to get caught up. And IT professionals shouldn’t wait! These innovations are coming fast and furious and it’s important to keep skillsets fresh to adopt the DevOps mentality.
  • Re-evaluate services regularly. Technology is evolving quickly, and the services offered by cloud providers are highly differentiated. Vendors are constantly adding capabilities and catching up with one another, as with FPGAs and the blockchain services from AWS and Azure. Understanding these ever-evolving service offerings is important because the business will look to IT professionals to be experts in these services just as they would with enterprise technologies.
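
As one example of the VPC security policy auditing mentioned above, here is a minimal sketch that assumes an AWS environment, the boto3 SDK, and already-configured credentials. It simply flags inbound rules open to the entire internet; a real audit would check far more.

```python
import boto3


def find_open_ingress_rules() -> list[str]:
    """Flag security group rules that allow inbound traffic from 0.0.0.0/0."""
    ec2 = boto3.client("ec2")
    findings = []
    for sg in ec2.describe_security_groups()["SecurityGroups"]:
        for perm in sg.get("IpPermissions", []):
            for ip_range in perm.get("IpRanges", []):
                if ip_range.get("CidrIp") == "0.0.0.0/0":
                    findings.append(
                        f"{sg['GroupId']} ({sg['GroupName']}): "
                        f"ports {perm.get('FromPort', 'all')}-{perm.get('ToPort', 'all')} "
                        "open to the internet"
                    )
    return findings


if __name__ == "__main__":
    for finding in find_open_ingress_rules():
        print(finding)
```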

It can be very difficult to keep up with all of the changes to cloud networking and how these will begin to affect IT professionals in hybrid environments. But understanding the practical viewpoint of these technologies’ predicted effects on IT will enable IT professionals to think about this in a level-headed manner and approach the future of the business with confidence.


Securing Hybrid IT

Part of the problem with securing hybrid IT is that many people are confused about what that even means. Furthermore, even those who understand what it means are unsure of how security policies should account for hybrid IT.

Why? Well, it’s complicated, literally.

At its core, hybrid IT is complex — IT infrastructure and applications running on-premises (in your own or a hosted data center) combined with anything in the cloud. It’s a mix of services completely owned and managed by an internal team plus services completely owned and managed by a third-party vendor.

In the most recent SolarWinds IT Trends Report (2016), 92% of IT professionals said adopting cloud technologies is important to their organizations’ long-term business success. While that may point to an all-cloud future, the reality is that you will be leveraging cloud as just part of your overall IT strategy, but not moving all your infrastructure to the cloud for some time, if ever. In fact, according to the report, 60% also said it’s unlikely that all of their organizations’ infrastructure will ever be migrated to the cloud.

This means you need to understand and develop security policies that account for a world with a mixed ownership model.

One of the key pieces of this mixed ownership model is Software-as-a-Service (SaaS). SaaS is a way of delivering software. It simply means that the consumer of the software doesn’t have to worry about the underlying details of the application or infrastructure; they just consume the business service, such as email or CRM. Similarly, in the enterprise, IT usually delivers applications as a service, often with a monthly or quarterly bill back to the department or business unit consuming the service. And the past 10 years have seen the mass market adoption of public SaaS, which means we in IT now have even less to worry about in getting applications to users. Of course, there are some challenges that come along with this.

When there’s a problem with the infrastructure or applications required to deliver a service that we don’t own or manage, we’re stuck opening a ticket and waiting to hear back like everyone else. Sure, there are a few things we can check — we can ensure our internal infrastructure is operating or that our ISP isn’t experiencing any problems — but that’s about it.

This is the core challenge of hybrid IT — responsibility without control.

And of course, this isn’t just a problem for ensuring availability. The classic security model of confidentiality, integrity, and availability looks different in a hybrid IT world. By definition, hybrid IT takes data that was in your data center and spreads it out across the internet. How do you ensure confidentiality if your data is entered into a vendor’s application and that data is then shipped across the world to data centers with different local regulations on data security? Application-level encryption in transit, typically TLS, can help, but just because the data was transported securely doesn’t mean it will be stored securely.

The same thing applies to the integrity of your data. How do you ensure that the data stored out of your control doesn’t get modified? Even in complete on-premises deployments, I rarely see IT departments have a program in place to ensure and audit the integrity of the data they store. To be fair, it’s much easier to find news about data breaches from on-premises deployments than from public cloud or SaaS vendors. The point isn’t to argue that private is more secure than public or hybrid, but that as a supplier or consumer of these services, you need to understand how the confidentiality and integrity of your data is being managed.

Another security issue related to hybrid IT has to do with where certain components of an application are deployed in the cloud. For example, a database or message queue service. This is how many IT departments start when they want to migrate their existing applications to the cloud, particularly web services. Of course, net new applications also follow this path as well.

Whenever you do this, you need to ensure that you not only follow your internal security processes, but that those processes are updated to take into account the unique deployment nature of cloud-based services and how that changes your design. For example, it’s easy to spin up a Database-as-a-Service (DBaaS) instance and simply start using it. But just as you wouldn’t put your database server directly on the public internet, you need to ensure your network policies are in place such that only the required servers can access that service.
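
To make that concrete, here is a minimal sketch assuming an AWS-hosted database service and the boto3 SDK. The VPC ID, security group IDs, and port are placeholders, not values from any real deployment; the point is that the database port is reachable only from the application tier’s security group, never from an open CIDR.

```python
import boto3

ec2 = boto3.client("ec2")

# Create a security group for the database service (IDs are placeholders).
db_sg = ec2.create_security_group(
    GroupName="dbaas-app-tier-only",
    Description="Database reachable only from the app tier",
    VpcId="vpc-0123456789abcdef0",
)

# Allow the database port (PostgreSQL's 5432 here) only from the app tier's
# security group, instead of exposing it to the public internet.
ec2.authorize_security_group_ingress(
    GroupId=db_sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 5432,
        "ToPort": 5432,
        "UserIdGroupPairs": [{"GroupId": "sg-0fedcba9876543210"}],
    }],
)
```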

This is where I see a lot of people get tripped up. If you are using DBaaS, understand that it’s just one component. Remember, you still have to solve the connectivity and security problems just as you did when you deployed a database in your own data center. Complicating matters is that when it comes to anything “as-a-service,” there is often the expectation of very fast deployment, often at the expense of security. Although this speed vs. security issue has always been a problem, it’s exacerbated by the very nature of the cloud — easy deployment and sitting outside the existing security perimeter.

Whether you’re just getting started down the cloud path or are fully involved in a hybrid IT environment, your security policies and controls should clearly reflect the reality of a distributed, mixed ownership IT world. Wherever you’re at, it’s never too early or too late to ensure your hybrid IT plans position you to deliver secure and reliable services; just be sure to take the necessary time to fully understand how it changes your infrastructure, your team, and your approach to security.

This article appeared in the 2016 September/October issue of Mission Critical Magazine.


5 Ways Data Centers Must Adapt To Support IoT

Recently, the Internet of Things (IoT) has been a hot topic, and it’s easy to see why. Did you know that there are more smart devices and electronic gadgets on earth than people? According to IDC, the digital universe will reach 44 trillion gigabytes of data by 2020, drawn from a variety of “things” such as medical implants, wearable technology, and even vending machines. And according to CloudTweaks, more than 2.5 billion gigabytes of data are already generated every day.

If that doesn’t make you take notice, Cisco says that IoT will reach an economic value of $14.4 trillion by 2022. To take advantage of this growth, many companies will jump on the IoT bandwagon by creating software-defined IoT widgets and the supporting management applications, and make them available around the globe. This IoT explosion strains data center capacity and accessibility for many companies, requiring data center service providers to support increasing data demands.

To understand how the IoT paradigm will affect technologies, we first must have an appreciation for what IoT is. The fundamental concept behind IoT is a network of physical devices (or things) embedded with technology that gives them the ability to sense or measure their environment and then to store data or send it programmatically through network connectivity, so that data from these devices can be sent, stored, or acted on programmatically. Applications of IoT include smart homes, wearable technology, parking meters, equipment sensors, and refrigerators, to name a few. IoT taps into data that allows us to make smarter decisions, quicker.

IoT brings with it a significantly higher demand for storing and processing data, and requires smarter systems and data center infrastructure tailored to handle the increase. At the current rate of IoT growth, now is the time to plan for a scalable data future.

With the influx of data expected to happen in the coming years, IT and tech decision makers need to keep their data operations top of mind, and data centers need to prepare themselves for increased scale, density, and security. When talks of IoT take place, it won’t be just one aspect of the infrastructure that will need to be augmented to support IoT. It will impact the whole technology stack, including the networks, facilities, cabinets, technology platforms, and system administration. Companies and data centers are already starting to see the effects of IoT and must ensure they are capable of handling future data requirements.

According to Dr. Deepak Kumar, CTO at Adaptiva, “In the coming decade, the IoT will cause the bandwidth gap to balloon out of control. Enterprises will see enormous amounts of traffic coming from a massive number of sources. In addition to greater bandwidth, enterprises must plan for bandwidth optimization and enforce stricter traffic management policies. IT departments will need to ensure they have mechanisms that prioritize internet and intranet access to business-critical applications and devices first.”

In order to prepare for the influx of data, data centers must enhance their current capabilities as it pertains to infrastructure, scalability, services, storage, and security. IoT producers will be looking for data center-as-a-service providers that understand and are making plans to support IoT.

SCALABILITY

Data centers will have to be flexible to meet the growing and changing needs of IoT devices and demands with limited to no impact on the customer. Not only will we see an increase in products, but we can also expect to see devices change, be updated, or even be replaced, much as Apple comes out with newer models each year.

The impact on data center scalability is one reason why an outsourced model is a smart decision. It is nearly impossible to adequately plan for what the next several years hold without risking under-building or over-building data centers — both of which carry huge costs and downsides.

THINGS IN THE DATA CENTER

IoT transformation isn’t just for new consumer devices. Data centers themselves are also embracing IoT to gain insights into their own infrastructure and operations. The following enhancements are helping to make data centers the most sophisticated and secure places for businesses to host their data.

  • Real-time asset management with RFID – Radio frequency identification (RFID) tags can be added to equipment or devices inside a data center. RFID tags use an electromagnetic field to uniquely identify devices. These tags allow data centers to operate more efficiently because they require less manpower: without RFID, employees would have to manually check each piece of equipment to maintain inventories; with RFID, those manual checks are automated.

  • Environmental sensors – Data centers are now being equipped with many sensors that monitor a wide variety of environmental factors. The data is captured and sent to a system that can then adjust the climate of the data center, which is important when weather or compute demands fluctuate.

  • Infrastructure sensors – Infrared scanning is used to see what the visible eye cannot see. While current technologies require a human to scan and assess the circuitry, it wouldn’t be a stretch to envision data centers having smart infrared IoT scanners that can monitor cables and electrical circuits for anomalies in real time and either suggest corrective action or instantly resolve issues.

  • Biometric scanners – Biometric scanners allow data centers to ensure that only people who have clearance are able to enter. These devices also make it easy to automatically track every person who enters and exits the data center.

  • Network enhancements – IoT is heavily dependent on having reliable networks in place to support the data produced by IoT devices. Many companies are looking for direct connect solutions between data centers and also channeling their cloud traffic through dedicated secure services.

DATA CENTER-AS-A-SERVICE

With the recent onslaught of data and the need for flexibility around scale and changing requirements, an increasing number of companies are looking to Data Center-as-a-Service (DCaaS) to meet their needs.

For those who are evaluating their options, here are a few points to consider:

  • DCaaS is a good solution for companies who aren’t exactly sure what option is right for their business, or what data center size will be best five years from now.

  • DCaaS gives companies the ability to focus on their core business competency, saving cash for building their business.

  • Running a data center requires an investment not only in the facility, but also in the people, the process, and the equipment. By working with a data center colocation provider, you can add scale as needed and only pay for what you need at that moment.

Given today’s evolving technological advancements and data demands, it makes sense for more companies to start embracing the notion of DCaaS and protect against rapidly changing requirements related to scale, security, and infrastructure.

STORAGE

Assuming the predicted 44-zettabyte increase is correct, and if we agree that current storage demand is about a tenth of that amount, it’s safe to say that major storage advancements will be needed to support IoT. An influx of users simply means an influx of data that needs to be stored. As millions of IoT devices collect and transmit data every day, all of their information will need to pass through a data center at some point. Data center owners must ask themselves if their current infrastructure will be able to handle all of the data these devices will generate each day. With the proliferation of IoT devices, data centers will have to dramatically increase their storage options and capacity to meet demands.

SECURITY

Years ago, businesses would turn to data centers and typically their only expectation was a cold facility with network and power. But now, as IoT evolves and a growing number of IoT devices will enter the network, the focus is quickly shifting to increased security. The reason? The more endpoints that exist within a network, the greater the likelihood of the network’s security being compromised, and each IoT device is an endpoint.

In addition, data center security has been heavily emphasized as legislation surrounding personal information and credit card information continues to develop, especially on the global stage. Businesses with a U.S.-based website may also have customers in Europe or South America, so their data centers should provide a level of compliance and security that safeguards their assets and data in every country.

Look for an increase in data centers obtaining the ISO 27001 certification, which ensures greater protection of data. This certification tests the overall effectiveness of a data center’s information security management system (ISMS). The ISMS is a framework of policies and procedures that include all legal, physical, and technical controls involved in an organization’s information risk management processes. It’s a systematic approach to managing private and sensitive information so it remains secure.

THE FUTURE IS NOW

It’s easy to see why IoT brings both excitement and trepidation to those who take the time to think about its ramifications. This growing trend will affect organizations at all levels as they try to figure out the best way to benefit and adapt. For data centers, it’s important that they are flexible in order to prepare for the future and ensure their infrastructure is ready for the oncoming blitz of devices and data.

This article appeared in the 2016 September/October issue of Mission Critical Magazine.


Why Cloud Architecture Matters

Choosing an enterprise cloud platform is a lot like choosing between living in an apartment building or a single-family house. Apartment living can offer conveniences and cost-savings on a month-by-month basis. Your rent pays the landlord to handle all ongoing maintenance and renovation projects — everything from fixing a leaky faucet to installing a new central A/C system. But there are restrictions that prevent you from making customizations. And a fire that breaks out in a single apartment may threaten the safety of the entire building. You have more control and autonomy with a house. You have very similar choices to consider when evaluating cloud computing services.

The first public cloud computing services that went live in the late 1990s were built on a legacy construct called a multi-tenant architecture. Their database systems were originally designed for making airline reservations, tracking customer service requests, and running financial systems. These database systems featured centralized compute, storage, and networking that served all customers. As the number of users grew, the multi-tenant architecture made it easy for the services to accommodate the rapid growth.

All customers are forced to share the same software and infrastructure. That presents three major drawbacks:

  1. Data co-mingling: Your data is in the same database as everyone else, so you rely on software for separation and isolation. This has major implications for government, healthcare, and financial regulations. Further, a security breach to the cloud provider could expose your data along with everyone else co-mingled on the same multi-tenant environment.
  2. Excessive maintenance leads to excessive downtime: Multi-tenant architectures rely on large and complex databases that require hardware and software maintenance on a regular basis, resulting in availability issues for customers. Departmental applications in use by a single group, such as the sales or marketing teams, can tolerate weekly downtime after normal business hours or on the weekend. But that’s becoming unacceptable for users who need enterprise applications to be operational as close to 24/7/365 as possible.
  3. One customer’s issue is everyone’s issue: Any action that affects the multi-tenant database affects all shared customers. When software or hardware issues are found on a multi-tenant database, it may cause an outage for all customers, and an upgrade of the multi-tenant database upgrades all customers. Your availability and upgrades are tied to all other customers that share your multi-tenancy. Entire organizations do not want to tolerate this shared approach on applications that are critical to their success. They need software and hardware issues isolated and resolved quickly, and upgrades that meet their own schedules.

With its inherent data co-mingling and its multiple availability issues, multi-tenancy is a legacy cloud computing architecture that cannot stand the test of time.

The multi-instance cloud architecture is not built on large centralized database software and infrastructure. Instead, it allocates a unique database to each customer. This prevents data co-mingling, simplifies maintenance, and makes delivering upgrades and resolving issues much easier because it can be done on a one-on-one basis. It also provides safeguards against hardware failures and other unexpected outages that a multi-tenant system cannot.
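
As a simplified illustration of the difference (not any vendor’s actual implementation), the sketch below contrasts a shared database that relies on a tenant ID filter for separation with a multi-instance router that hands each customer its own dedicated database. Connection strings and customer names are invented.

```python
# Multi-tenant: one shared database; separation depends on software filters.
SHARED_DB = "postgresql://shared-cluster.example.net/crm"


def multi_tenant_query(tenant_id: str) -> str:
    # Every query must remember the tenant filter; a missing or buggy
    # filter exposes other customers' rows in the co-mingled database.
    return (
        f"SELECT * FROM accounts WHERE tenant_id = '{tenant_id}'"
        f"  -- runs on {SHARED_DB}"
    )


# Multi-instance: each customer gets its own database (and app instance),
# so isolation, maintenance, and upgrades happen per customer.
CUSTOMER_INSTANCES = {
    "acme": "postgresql://acme-instance.example.net/crm",
    "globex": "postgresql://globex-instance.example.net/crm",
}


def multi_instance_connection(customer: str) -> str:
    return CUSTOMER_INSTANCES[customer]


if __name__ == "__main__":
    print(multi_tenant_query("acme"))
    print(multi_instance_connection("acme"))
```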

The provider is able to replicate application logic and database for each customer instance between two paired and geographically diverse data centers in each of our eight regions around the world. This can be done in near real-time with each side of the paired data centers fully operational and active. Automation technology can quickly move customer instances between these replicated data center pairs.

It’s important to emphasize that multi-instance is not the same as single-tenant, where the cloud provider actually deploys separate hardware and software stacks for each customer. There is some sharing of infrastructure pieces, such as network architecture, load balancers, and common network components. But these are segmented into distinct zones so that the failure of one or more devices does not affect more than a few customers. This enables the creation of redundancy at every layer. For example, at the internet borders, a vendor might have multiple border routers that connect to several tier-one providers on many different private circuits, direct connections, and different pieces of fiber.

This leads to another important difference between multi-tenant and multi-instance architectures: the approach to disaster recovery. Permanent data loss is a risk inherent to all multi-tenant architectures, and that means external disaster recovery sites are no longer viable options.

True, these are sites that a vendor can fail over to if the active side fails. But they are only tested a few times a year and only used if an extreme situation arises. If (when) that happens, they risk failing under load, and when they fail, data is lost forever.

That risk virtually disappears in a multi-instance environment. Again, there is not one master file system that services all customers. You can scale out pieces of hardware — stack them on top of each other like LEGO blocks. Each block services no more than a few customers, so one hardware crash cannot affect all the blocks. And because replication is automatic, the secondary side is immediately accessible.

When you partner with a cloud provider that bases its platform on a multi-instance architecture, you’re moving into your own house. Your data is isolated, a fully replicated environment provides extremely high availability, and upgrades happen on the schedule you set, not the provider’s. Cloud architecture matters because you’re in control, and better protected when disaster strikes.


Transitioning To An Agile IT Organization

If you have even a passing interest in software development, you’re likely familiar with the premise of agile methods and processes: keep the code simple, test often, and deliver functional components as soon as they’re ready. It’s more efficient to tackle projects using small changes, rapid iterations, and continuous validation, and to allow both solutions and requirements to evolve through collaboration between self-organizing, cross-functional teams. All in all, agile development carves a path to software creation with faster reaction times, fewer problems, and better resilience.

The agile model has been closely associated with startups that are able to eschew the traditional approach of “setting up walls” between groups and departments in favor of smaller, more focused teams. In a faster-paced and higher-risk environment, younger companies must reassess priorities more frequently than larger, more established ones; they must recalibrate in order to improve their odds of survival. It is for this reason that startups have also successfully managed to extend agile methods throughout the entire service lifecycle — e.g., DevOps — and streamline the process from development all the way through to operations.

Many enterprises have been able to carve out agile practices for the build portion of IT, or even adopt DevOps on a small scale. However, most larger companies have struggled to replicate agility through the entire lifecycle for continuous build, continuous deployment, and continuous delivery. Scaling agility across a bimodal IT organization presents some serious challenges, with significant implications for communication, culture, resources, and distributed teams — but without doing so, enterprises risk being outrun by smaller, nimbler companies.

If large enterprises were able to start from scratch, they would surely build their IT systems in an entirely different way — that’s how much the market has changed. Unfortunately, starting over isn’t an option when you have a business operating at a global, billion-dollar scale. There needs to be a solution that allows these big companies to adapt and transform into agile organizations.

So what’s the solution for these more mature businesses? Ideally, to create space within their infrastructure for software to be continuously built, tested, released, deployed, and delivered. The traditional structure of IT has been mired in ITIL dogma, siloed teams, poor communication, and ineffective collaboration. Enterprises can tackle these problems by constructing modern toolchains that shake things up and introduce the cultural changes needed to bring a DevOps mindset in house.

I like to think of the classic enterprise technology environments as forests. There are certainly upsides to preserving a forest in its entirety. Its bountiful resources — e.g., sophisticated tools and talented workers — offer seemingly endless possibilities for development. Just as the complex canopy of the forest helps shield and protect the life within, the infrastructure maintained by the operations team can help protect the company from instability.

But the very structure that protects the software is also its greatest hindrance. It prevents the company from making the rapid-fire changes necessary to keep up with market trends. The size and scale of the infrastructure, which were once strengths, become enormous obstacles during deployment and delivery. Running at high speed through a forest is a bad idea — you will almost certainly trip over roots, get whacked by branches, and find your progress slowed as you weave through a mix of legacy technology, complex processes, regulatory concerns, compliance overhead, and much more.

By making a clearing in the forest, enterprises can create a realm where it’s possible to run without the constraints of so many trees. This gives them the ability to mimic the key advantage of smaller companies by creating the freedom to quickly build, deploy, and deliver what they want — without the tethers of legacy infrastructure.

For example, I have worked with a multinational retailer that, in addition to operating 7,800 stores across 12 markets, manages 4,500 IT employees around the world — which translates to 7 million emails and 300 phone calls per day from distributed operation centers in nine different countries. The major issue was that notification processes were inconsistent on a global level, and frequently failed to get relevant information to the right people at the right time. This, of course, translated into slower response times to issues affecting their customers.

In order to modernize its IT force, the company reorganized into a service-oriented architecture (SOA), featuring separate service groups that owned the design, development, and running of each of their respective systems. This meant that many IT members were given new roles and responsibilities; though most had worked on developing systems, most hadn’t worked on supporting them. The company also integrated tools to enable automation and self-service for end users. Today, it has a more consistent and collaborative digital work environment, and the result is greater efficiency, happier customers, and more growth opportunities for the future.

Similarly, I worked with a retail food chain that faced a challenge in improving the communication and collaborative capabilities of its food risk management teams. Prior to IT modernization, in-store staff manually monitored freezer temperatures every four hours — a complex and time-consuming task that was highly prone to human error. If an incident arose, the escalation process couldn’t identify the correct team member to address the temperature issue, so a mass email would be blasted out. There was no way of knowing whether the correct team member had been made aware of the issue and had addressed it.

The company tackled this challenge by creating a more robust process for incident management involving SMS messages to identified staff, emails and phone calls to management, and automated announcements over the in-store system. In addition, they implemented an Internet of Things (IoT) program to completely automate and monitor refrigerator and frozen food temperature management. The result has been significantly increased efficiency, transparency, and accountability — not to mention a safer experience for their customers.

As you can see, these companies were able to identify target areas and problems, and create new spaces within their existing infrastructures to allow them to communicate better, and ultimately become faster, nimbler, and more responsive. Any enterprise looking to move toward agile software development and operations should look at technology-based projects and initiatives that will be most impactful in enhancing team focus and culture. Before you even start thinking about the problems you want to solve with agile and DevOps, you should identify and initiate the conversations that will provide the starting points for adoption. Without a detailed map of your infrastructure and the activities within it, you cannot clear a path to complete, end-to-end DevOps adoption.


Summertime And Living In The Cloud Is Easy

Welcome to Cloud Strategy’s 2016 Summer issue! We really outdid ourselves this time.

To begin, Allan Leinwald of ServiceNow is here with an in-depth look at cloud architecture for our cover story. But there is more! Kiran Bondalapati from ZeroStack writes about the commoditization of infrastructure; Sumeet Sabharwal of NaviSite writes on the opportunities available to independent software vendors in the cloud; Mark Nunnikhoven of Trend Micro talks about the trend of the everywhere data center and the danger of dismissing the hybrid cloud; Alan Grantham of Forsythe writes about the cloud conversations companies should be having; Peter Matthews of CA Technologies, Anthony Shimmin of AIMES Grid Services, and Balazs Somoskoi of Lufthansa Systems share their tips for selecting the right cloud services provider; Adam Stern, founder and CEO of Infinitely Virtual, writes about the importance of cloud storage speed; Shea Long of TierPoint tackles the hot topic of DRaaS; and Steve Hebert, CEO of Nimbix, writes on the challenges CIOs face in balancing public, private, and hybrid clouds.

In addition, we have a case study from Masergy on its successful implementation of a high-speed network to support Big Data analytics.

Another great issue, if we say so ourselves.
