Cloud Management from IndependenceIT

IndependenceIT’s Cloud Workspace® Suite version 5.1 is an integrated automation software platform that has been enhanced with several new features to facilitate the management and delivery of software-defined data centers (SDDCs), workspaces, applications, and data to users anywhere, anytime, and on any device.

As MSPs, CSPs, and ISVs support business efforts to digitally transform their operations, much of their success will follow the deployment of cloud-based workspaces. Adoption of cloud workspaces has accelerated in recent years and is expected to skyrocket, with recent research showing global sales set to reach US $18.37 billion by 2022.(1) With a focus on cloud enablement for service providers, IndependenceIT has been at the forefront of innovation in this space. Today’s release of Cloud Workspace Suite version 5.1 extends the functionality of the platform with several new capabilities.

Cloud Workspace Suite Software Enhancements

  • Support for Windows Server 2016 Session Servers: CWS 5.1 supports Windows Server 2016 for all supported platforms. Windows Server 2016 provides the “Windows 10” desktop experience for shared RDS session users and enables configuration options such as GPU assignment for graphics-intensive applications.
  • Full Stack Support for Microsoft Azure Resource Manager: Microsoft has recommended migrating from the traditional encryption key/delegated account user entitlement model to the Azure Resource Manager model. Microsoft Azure Resource Manager is a framework that enables users to work with the resources within a solution as a group. The required authentication attributes are collected once during software-defined data center (SDDC) deployment and then reused for other Microsoft Azure activities without the need for re-entry or re-authentication (a generic sketch of this authenticate-once pattern follows this list).
  • Support for Office 365 Single Authentication: Microsoft Office 365 utilizes an authentication model that requires end users to enter credentials every time they use the office productivity suite on a new computer or device. The enhanced CWS platform eliminates this requirement by using a User Profile Disk implementation to cache the credentials in the user’s profile disk, which follows the user across session servers and improves the start-up time for user sessions.
  • Hypervisor/Cloud Direct Template Management: Configuration and management of hypervisor templates for session and data servers are time-consuming tasks for administrators. Version 5.1 incorporates server template management directly into the CWS Web Application. Automated hypervisor management functions include the creation of a server instance based on an existing template or Windows VM image; direct connection/login to the created server for installation of applications from the CWS Web App; automatic template creation/Windows sysprep from the configured server instance; and validation of application paths and installs from within CWS to eliminate the need for accessing the hypervisor or cloud service dashboard directly.
  • Administrator Defined Automation: CWS also provides improved deployment/management automation for service providers with Administrator Defined Automation of task/script execution. With this enhancement, version 5.1 will significantly speed deployments, simplify management, and reduce overhead costs. Administrator Defined Automation allows applications to be installed or upgraded based on events, letting partners trigger automated application installations. IndependenceIT will also provide several task type templates to supplement application install capabilities with this release.
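
For readers unfamiliar with the authenticate-once pattern that Azure Resource Manager enables, here is a minimal sketch using the Azure SDK for Python and a service principal. The tenant, application, and subscription identifiers are placeholders, and the sketch illustrates the general pattern only; it is not CWS’s internal implementation.

    # Minimal sketch of the ARM "authenticate once, reuse" pattern (illustrative only).
    # The tenant/client/subscription values are placeholders, not real credentials.
    from azure.identity import ClientSecretCredential
    from azure.mgmt.resource import ResourceManagementClient
    from azure.mgmt.compute import ComputeManagementClient

    # Authentication attributes are collected once (e.g., at SDDC deployment time)...
    credential = ClientSecretCredential(
        tenant_id="<tenant-id>",
        client_id="<application-id>",
        client_secret="<client-secret>",
    )

    # ...and the same credential is reused for later Azure activities without re-entry.
    resources = ResourceManagementClient(credential, "<subscription-id>")
    compute = ComputeManagementClient(credential, "<subscription-id>")

    resources.resource_groups.create_or_update("demo-rg", {"location": "eastus"})
    for vm in compute.virtual_machines.list("demo-rg"):
        print(vm.name)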

1. Transparency Market Research, Workspace as a Service (WaaS) Market – Global Industry Analysis, Size, Share, Growth, Trends, and Forecast 2015 – 2022, http://www.transparencymarketresearch.com/workspace-as-a-service-waas-market.html

Multi-Cloud and Hybrid Cloud Solutions from Faction

Faction’s Internetwork eXchange (FIX) allows enterprises to easily and cost-effectively connect private cloud and colocation resources into public clouds, privately and securely. This extends Faction’s patent-pending “bring your network as-is” private cloud networking to hybrid cloud designs, allowing enterprises to add public cloud to their private clouds without complex networking changes.

Faction Internetwork eXchange allows organizations to quickly add private, secure connectivity to hundreds of other networks over their existing infrastructure. No additional hardware or software is needed to create many virtual circuits to different networks, all over their existing ports. This allows customers to easily pursue hybrid cloud and multi-cloud strategies without spending money up front on hardware, tools, and training. Among the hundreds of available connection endpoints, Faction can easily connect customers into popular public cloud destinations, such as Amazon AWS, Microsoft Azure, Google Cloud Platform, IBM Softlayer, and many others.

Dell EMC Elastic Cloud Storage (ECS) Support from Avere Systems

Avere Systems has announced support for Dell EMC Elastic Cloud Storage (ECS), the Gartner Magic Quadrant Leader in Distributed File Systems and Object Storage. Through this new support, joint customers can now take advantage of object storage scalability from Dell EMC ECS, while using Avere FXT Edge filers to seamlessly integrate existing file-based applications and maintain storage performance.

Dell EMC ECS delivers software-defined object storage to the enterprise, empowering organizations with the distributed storage architecture needed to store large amounts of data across a number of locations. Easily deployed in a matter of hours, the solution is not only highly scalable, but offers data resiliency to ensure data protection at massive capacity levels. Avere FXT Edge filers support Dell EMC ECS with Avere’s bi-directional translation of object APIs to NAS protocols. Using the supported solution, organizations can reap the cost and scale benefits of object storage without changes to applications.

Cloud Customization from CloudVelox

CloudVelox’s One Hybrid Cloud™ (OHC) software is designed for complex data center environments. Cloud network customization enables enterprises to map their existing data center network environments to a cloud network design, so that they can accelerate the migration and deployment of workloads without sacrificing workload compatibility or control.

Large data centers and cloud environments have complex network configurations and settings to satisfy regulatory and internal policies, and they have various levels of internal as well as external permissions and access. Additionally, for production deployments, enterprise workloads may be configured to operate in specific sub-networks and VLANs and to use specific IP address ranges as well as physical IP addresses. Ensuring these workloads run seamlessly in the cloud would otherwise require significant effort to map the existing network design into a virtual network environment within the cloud. OHC cloud network customization automates that mapping, so customers can accelerate deployments and maintain control to address regulatory, security, and compliance mandates.

OHC version 3.0 is cloud migration software that allows network teams and partners to map their cloud network design with extensive automation, enabling them to easily leverage cloud-native services and APIs to maximize cloud ROI, resource efficiency, and ease of use.
 

OHC 3.0 offers three forms of automated cloud network customization to accelerate migration of existing workloads to public cloud networks:

  • Recreate existing networks in the public cloud, including customization and extensibility of IP addresses;
  • Accurately map the network from the source to the destination, including network state, security settings, policies, and permissions; and
  • Fit workloads and applications into a previously defined cloud network.
     

One Hybrid Cloud uses an application blueprint-based approach to automate the provisioning and orchestration of compute, storage, security, and network processes on Amazon Web Services (AWS), which boosts productivity. The product is a single solution for cloud migration, cloud recovery, and cloud Dev/Test that features a simple drag-and-drop interface, allowing IT to leverage cloud services without specialized cloud skills.
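
OHC’s blueprint automation itself is not shown here, but the general idea of recreating an existing network design in a public cloud can be sketched with AWS’s boto3 SDK. The CIDR blocks, names, and region below are hypothetical stand-ins for an on-premises addressing plan.

    # Illustrative only: recreate a simple on-premises addressing plan as an AWS VPC.
    # All CIDR ranges, names, and the region are hypothetical sample values.
    import boto3

    ON_PREM_NETWORK = {
        "cidr": "10.20.0.0/16",
        "subnets": [
            {"name": "app-vlan-110", "cidr": "10.20.10.0/24"},
            {"name": "db-vlan-120", "cidr": "10.20.20.0/24"},
        ],
    }

    ec2 = boto3.client("ec2", region_name="us-east-1")

    vpc_id = ec2.create_vpc(CidrBlock=ON_PREM_NETWORK["cidr"])["Vpc"]["VpcId"]

    for subnet in ON_PREM_NETWORK["subnets"]:
        subnet_id = ec2.create_subnet(VpcId=vpc_id, CidrBlock=subnet["cidr"])["Subnet"]["SubnetId"]
        ec2.create_tags(Resources=[subnet_id], Tags=[{"Key": "Name", "Value": subnet["name"]}])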

Simplifying On-Premises Infrastructure with Self-Driving Clouds

Cloud computing has become a common paradigm for businesses of all types and sizes, but when most of them think of cloud, they think of public cloud providers like Amazon, Microsoft, or Google. While businesses can benefit greatly from cloud computing, many don’t want the cost, performance, and governance concerns of public cloud. That leaves them with the option to build an on-premises or private cloud. But building a private cloud has always been a complex, costly, and time-consuming process, and many companies can’t or don’t want to acquire the cloud-building expertise necessary. Now, cloud infrastructure vendors are beginning to use automation to offer self-driving clouds, and these greatly reduce the overhead of deploying and operating a private cloud. In this article, we’ll look at the requirements for self-driving clouds.

From installation to long-term planning, many cloud management tasks can be automated to create a self-driving cloud.


Automatic Installation and Configuration

Simply installing a cloud can be a complex process. One must assemble the necessary servers, storage, and networking resources, and then implement an operating system and cloud software. Wouldn’t it be nice if there were no need for integration and the idea of a “Day 0” went away? Self-driving cloud vendors are starting to ship cloud software pre-installed in the operating system image, so that once a server is deployed and powered on, the cloud comes up automatically without IT administrators having to know anything about the various services and their persistent stores. The image software should pool together servers, storage, and networking resources to create a highly resilient cloud. Ideally, the user should be able to install a cloud and have it up and running in less than 30 minutes.

Integration with Other Clouds and Internal Systems 

Clouds are not designed to work in isolation, so users should be able to quickly connect an on-premises cloud with existing virtualized infrastructure and other public clouds. Ideally, the cloud should allow migration of workloads to and from public clouds so users can “cloudburst” onto public cloud when they need to scale quickly. Another form of cooperation with existing infrastructure is the ability to add existing storage systems and make them part of the cloud through open (i.e., RESTful) APIs. Most users also want to integrate with AD/LDAP to have a single source of users and authentication.
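
As a concrete illustration of what “open (i.e., RESTful) APIs” can look like in practice, the sketch below registers an existing storage system with a hypothetical cloud management endpoint. The URL, payload fields, and token are invented for illustration and do not correspond to any particular product.

    # Hypothetical example: add an existing NFS storage system to a private cloud
    # through a RESTful management API. Endpoint, fields, and token are invented.
    import requests

    CLOUD_API = "https://cloud.example.internal/api/v1"
    TOKEN = "example-api-token"

    payload = {
        "name": "legacy-nas-01",
        "protocol": "nfs",
        "endpoint": "10.1.2.3:/export/projects",
        "capacity_gb": 20480,
    }

    resp = requests.post(
        f"{CLOUD_API}/storage-backends",
        json=payload,
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    print("Registered storage backend:", resp.json())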

Self-Service Application Deployment

The goal for any cloud is to enable various teams to access cloud resources themselves through a point-and-click interface. For example, developers could use this facility to access application development tools, support teams could use it to bring up replicas of customer environments to troubleshoot any support issues, sales could bring up quick PoCs for customer demos, and IT could bring up staging or production deployments of various applications. These steps need to be fully automated, so that one can repeat them without spending too much time. Any cloud solution should provide a self-service interface with pre-built application templates for quick deployment.
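
To make the idea of pre-built application templates concrete, here is a hedged sketch of what a template and a one-call deployment might look like. The template fields and the deploy_template helper are hypothetical, not any particular vendor’s API.

    # Hypothetical application template for a self-service portal; field names and
    # the deploy helper are illustrative, not a real product API.
    LAMP_TEMPLATE = {
        "name": "lamp-stack",
        "vms": [
            {"role": "web", "image": "ubuntu-22.04", "cpus": 2, "memory_gb": 4, "count": 2},
            {"role": "db", "image": "ubuntu-22.04", "cpus": 4, "memory_gb": 16, "count": 1},
        ],
        "network": {"expose_ports": [80, 443]},
    }

    def deploy_template(template, owner):
        """Expand the template into named VM requests (pretend deployment)."""
        vm_names = []
        for vm in template["vms"]:
            for i in range(vm["count"]):
                vm_names.append(f"{owner}-{template['name']}-{vm['role']}-{i}")
        return vm_names

    # A support engineer self-serving a replica environment for troubleshooting:
    print(deploy_template(LAMP_TEMPLATE, owner="support"))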

Real-Time Monitoring 

To reveal the state of applications and what actions other users have performed, the cloud should provide real-time events, statistics, and dashboards. IT should be able to get logs and audit the actions of all users. For example, if a service has been down since 10 p.m. last night, it is good to know whether a user or script mistakenly shut down a VM that provides that service.
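
The value of a queryable audit trail is easier to see with a small example. The sketch below scans a list of audit events to find who powered off the VM behind a failed service; the event structure and records are invented sample data.

    # Illustrative audit-trail query: who powered off the VM that backs a failed service?
    # The event records are invented sample data.
    from datetime import datetime

    audit_events = [
        {"time": datetime(2017, 3, 1, 21, 58), "user": "backup-script", "action": "snapshot", "target": "vm-web-01"},
        {"time": datetime(2017, 3, 1, 22, 1), "user": "jsmith", "action": "power_off", "target": "vm-db-01"},
        {"time": datetime(2017, 3, 1, 22, 30), "user": "monitor", "action": "alert", "target": "billing-service"},
    ]

    def who_powered_off(vm_name, events):
        return [e for e in events if e["target"] == vm_name and e["action"] == "power_off"]

    for event in who_powered_off("vm-db-01", audit_events):
        print(f"{event['time']}: {event['user']} powered off {event['target']}")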

Self-Healing

Any system as complex as a cloud needs its critical services and workloads monitored. Companies can spend a lot of staff time performing this function manually, but a self-driving cloud can monitor and heal itself. For example, if any hardware component or software service fails, the system should detect and fix the situation. It can then alert the admin about which component failed, so the admin can take corrective action to restore the capacity of the system.
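
A minimal sketch of that detect-and-heal loop follows. The is_healthy, restart, and notify_admin functions are placeholders for a real platform’s health probes and orchestration; a production system would also handle hardware failures and capacity restoration.

    # Minimal self-healing loop (sketch). The probe, restart, and notification
    # functions are placeholders for a real platform's internals.
    import time

    SERVICES = ["api-server", "scheduler", "image-store"]

    def is_healthy(service):
        """Placeholder health probe, e.g., an HTTP health check or process check."""
        return True

    def restart(service):
        print(f"restarting {service}")

    def notify_admin(message):
        print(f"ALERT: {message}")

    def heal_forever(poll_seconds=30):
        while True:
            for service in SERVICES:
                if not is_healthy(service):
                    restart(service)  # fix the immediate failure automatically
                    notify_admin(f"{service} failed and was restarted")  # then tell a human
            time.sleep(poll_seconds)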

Machine Learning for Long-Term Decision Making

The self-healing layer takes care of short-term decisions, but IT administrators need another layer of automation that can observe the cloud and applications over a longer period to help optimize the cloud, improve efficiency, and plan for the future. A self-driving cloud platform collects telemetry and operational data and applies machine learning to model this behavior over time; the resulting models help customers make longer-term decisions.

This machine learning layer should observe cloud usage to do predictive capacity modeling, recommending orders for new servers, for example. It should also determine what sort of servers to add in terms of their CPU, memory, and IO ratio. For instance, if the applications are more CPU-intensive, one should order servers with more cores and less storage. Another area is to help optimize the size of VMs based on utilization.
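
As a toy illustration of predictive capacity modeling, the sketch below fits a linear trend to historical CPU utilization and estimates when the cluster will cross a capacity threshold. The utilization figures are invented; a real platform would use richer models and more signals.

    # Toy capacity forecast: fit a linear trend to monthly average CPU utilization
    # and estimate when the cluster crosses 80%. The utilization figures are invented.
    import numpy as np

    months = np.arange(12)  # last 12 months
    cpu_util = np.array([41, 43, 44, 47, 49, 50, 53, 55, 58, 60, 63, 65], dtype=float)

    slope, intercept = np.polyfit(months, cpu_util, 1)  # percentage points per month
    months_to_threshold = (80.0 - cpu_util[-1]) / slope

    print(f"Utilization is growing ~{slope:.1f} points per month")
    print(f"Expect to hit 80% in about {months_to_threshold:.0f} months; order servers before then")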

A learning system can also help detect anomalies in the environment. For example, a VM might suddenly start sending large amounts of data to external public IPs because the machine has been compromised by a bot. Any such security risk can be caught by a smart anomaly detection system. The list of machine learning-based algorithms can get long, but the key is to have a platform where these can easily be added over time.
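
A very simple version of this kind of anomaly detection can be sketched as a z-score check on a VM’s outbound traffic. Real platforms use richer models; the byte counts below are invented.

    # Simple anomaly check: flag a VM whose outbound traffic is far outside its norm.
    # The hourly megabyte counts are invented sample data.
    import statistics

    hourly_egress_mb = [120, 115, 130, 125, 118, 122, 127, 119, 2400]  # last hour looks wrong

    baseline = hourly_egress_mb[:-1]
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)

    latest = hourly_egress_mb[-1]
    z_score = (latest - mean) / stdev

    if z_score > 3:
        print(f"Anomaly: {latest} MB sent this hour (z-score {z_score:.1f}); possible compromise")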

Hands-Free Upgrades

Upgrading a cloud is like changing the tires on a running car. With a live cloud running a variety of workloads, it is critical that the upgrade process be completely handled by an intelligent software layer, and not by humans reading vendor release notes to figure out the right upgrade path for their environment.

By meeting the above criteria, vendors can create self-driving cloud platforms that are easy to install and configure, easy to operate, and easy to manage. As enterprises come to trust the artificial intelligence that enables self-driving clouds, they can implement private clouds that deliver the self-service benefits of cloud computing without the complexity and cost it has traditionally required.

Database Performance Certainty

Most IT professionals want as little change as possible. After all, at the end of the day, the ultimate responsibility is to make sure applications run, and run well. Less change usually equals more stability. The mantra usually goes, “If it’s not broken, why mess with it?”

However, life for database administrators (DBAs) is rife with change these days, whether they want it or not, including major evolutions that impact every IT department. With so many changes and variables, the old way of figuring things out — trial and error — no longer works. What DBAs and all IT professionals need, and what the business expects from IT, is performance certainty.

Key Changes Impacting DBAs and Databases

Virtualization and cloud

For years, DBAs resisted moving to virtualized environments because of the uncertainty surrounding how a database server would perform on a virtual machine (VM). There was no performance certainty. But today, 80% of databases are running in virtual environments. And now that they’re in dynamic environments in the cloud, performance can change at any time: Got a noisy neighbor? An administrator can move the database to another VM, and so on.

Evolution of storage

Storage systems are on the move: flash, compression, hyper-converged systems, and intelligent storage that performs hot/cold tiering (software dynamically changes the underlying storage based on observed behavior) are all seeing growing adoption.

Push toward continuous development

Continuous development — which means application code is changing all the time, sometimes multiple times a day — is becoming ever more common, driven in part by the adoption of DevOps culture. These changes are on top of everything that can and will change in the database itself.

Direct correlation between performance and cost

As these and other key changes lead toward a software-defined, dynamic environment, there is one more new consideration: the direct correlation between performance and cost, which becomes more evident in pay-as-you-go cloud environments. Lower performance usually results in provisioning more hardware, or faster hardware, which results in a higher cost.

Achieving Performance Certainty

So, how does the DBA achieve performance certainty in today’s ever-changing technology landscape? Here are a few ideas:

Adopt performance as a discipline.

Uptime is no longer the key metric for how one measures quality of work; instead, uptime is assumed. Performance and enduser experience are the new goals. The questions should become: how fast can we make the system work? How often do the teams talk about performance? What tools do we have to understand and improve performance? It’s about being proactive.

Adopt a wait-time analysis mindset.

Focus must shift from simple resource metrics to time — the time spent on every process, query, and wait state, and the contribution to that time from storage (I/O and latency), networking, and the other components supporting the database and the application. The fundamental methodology for understanding database performance is wait-time analysis.
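
Wait-time analysis is normally done with the database’s own wait statistics or a monitoring tool, but the underlying arithmetic is simple: attribute elapsed time to wait categories and rank them. The sketch below does exactly that over invented sample waits.

    # Wait-time analysis in miniature: rank where a query's elapsed time actually went.
    # The wait samples are invented; real data comes from the database's wait statistics.
    from collections import defaultdict

    wait_samples_ms = [
        ("CPU", 120), ("disk I/O", 480), ("lock wait", 300),
        ("network", 40), ("disk I/O", 350), ("CPU", 90),
    ]

    totals = defaultdict(int)
    for category, ms in wait_samples_ms:
        totals[category] += ms

    elapsed = sum(totals.values())
    for category, ms in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{category:10s} {ms:5d} ms ({100 * ms / elapsed:.0f}% of elapsed time)")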

Establish benchmarks and baselines.

It’s important to define the key metrics to observe, which should ideally be application-, enduser-, and throughput-centric. Statistical baselines help one understand what is normal and how/when performance changes. Alerts based on baselines, which are based on relevant metrics, allow the DBA to focus on what matters. Tools that allow you to look back in time and compare performance become extremely useful.
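
A statistical baseline can be as simple as “normal for this hour of the day.” The sketch below, using invented throughput numbers, compares the current reading against a per-hour mean and standard deviation and raises an alert only on a genuine deviation.

    # Per-hour baseline check for a throughput metric (transactions per second).
    # The history and the current reading are invented sample data.
    import statistics

    history_9am_tps = [410, 395, 402, 420, 398, 405, 412]  # one week of 9 a.m. readings
    current_9am_tps = 310

    mean = statistics.mean(history_9am_tps)
    stdev = statistics.stdev(history_9am_tps)

    if abs(current_9am_tps - mean) > 3 * stdev:
        print(f"Alert: 9 a.m. throughput {current_9am_tps} tps vs. baseline {mean:.0f} +/- {stdev:.0f} tps")
    else:
        print("Within the normal range for this hour")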

Understand the performance contribution of each component.

Before moving to faster hardware or provisioning more resources, one must understand the performance contribution of each component and each step the database takes in responding to a query; that understanding shows where the greatest potential for improvement lies.

Become the performance guru.

Knowledge is power. With the shift in IT towards performance, one who better understands performance, what drives it, and how to improve it, quickly becomes more valuable to the organization.

Return and report.

DBAs should report on performance weekly or monthly, and take credit for performance improvements and cost savings resulting from reclaimed hardware or delayed investments. They should report the performance impact and improvement (or lack thereof) of each infrastructure component.

Plan performance changes.

A DBA will know they have achieved performance certainty, and become a performance guru, when they can accurately predict application performance before changes occur and can guide their organization toward better performance.

Conclusion

Today, regardless of the role within IT, it’s all about the applications. This is especially true for DBAs, who must be proactive when it comes to performance. Performance certainty — when you know how a system will perform and how to improve it — will very soon become a job requirement.

Simple Secrets of Successful Knowledge Management

Knowledge that doesn’t serve is knowledge wasted. And for knowledge gained from experience and research to be useful, IT enterprises need to organize, manage, and offer it in the best way possible. Fortunately, the best way isn’t a Herculean task when you employ simple tricks to build a sound knowledge base (KB). A sound knowledge base eliminates the need to rediscover or reformulate knowledge and improves the support process. With that in mind, consider these best practices to help build a successful knowledge base.

Collect Information to Build Your KB

The most important part of knowledge management is building knowledge itself. The first step is to identify prospective areas from which knowledge can be derived and extract information. Resolutions on common issues can be used as templates if they are added to the KB as knowledge items. Converting tacit knowledge to explicit knowledge is essential for a successful knowledge management system, but the conversion requires collaborative efforts with careful investigation and input from experienced technicians. Also, to achieve a comprehensive KB, encourage your IT technicians to move resolutions directly to the KB. A good IT help desk application will allow the creation of knowledge articles right from ticket resolution. This significantly reduces the percentage of repeat incidents and keeps the KB up to date.

Categorize to Identify, Retrieve and Use Knowledge

Organizing and categorizing existing data can be challenging, especially when handling large KBs with wide scopes. However, it is important to group knowledge items and place them under relevant topics so that information is not lost in a pool of data. There are different ways in which you can organize knowledge, depending on what suits your organization best. Grouping can be based on document type, such as guidelines or bug fixes, or on the subject matter, such as hardware issues or software updates. Creating logical hierarchies is a method that will ease user navigation. The hierarchy should begin with broad topics and move on to categories and subcategories.

Implement a Knowledge Approval Process

Creating a well-structured piece of information that is relevant to the user is crucial. The content should be peer reviewed by subject matter experts for accuracy and relevance, and improved where necessary; information should not be published as knowledge without a proper knowledge approval process.

Along those lines, you can configure an automated approval workflow that prevents a solution from being published without peer approval. Create a unique knowledge manager role with permissions to approve solutions. Configuring an automatic trigger for notifications to approvers on submission of a solution will make the approval process easier. Approval processes eliminate ambiguity, making knowledge items more accurate and minimizing any reopening of closed tickets. For instance, there may be multiple solutions to troubleshoot a printer issue (network issue, hardware issue, etc.). However, the approval committee should be able to decide on the appropriate solution.

Choose Your Audience for Each Solution

Not every piece of information in the KB is relevant to all users. By choosing the right audience for a knowledge item, you can eliminate clutter in the endusers’ self-service portal. For technicians, create specific roles and groups based on their field of expertise and share only relevant topics. For example, finance documents are always confidential and therefore should be accessible only to related users. Along the same lines, documents on registry settings or swapping hardware parts are only relevant to IT experts in the field and can be restricted from endusers. However, make sure your technicians have full access to the KB, especially when the services are integrated in the help desk application.

Prompt Endusers Effectively with Relevant Knowledge Items

No matter how elaborate a KB is, it cannot be effective if it is out of reach. Making the KB easily accessible to endusers in the self-service portal will help them arrive at solutions without assistance from a technician, lowering the number of incidents. This can be done in the following ways.

  • When the enduser logs in to the application, the recently viewed or used solutions are listed.
  • When an enduser tries to log a ticket, relevant knowledge articles are suggested based on keywords.
  • In the self-service portal, endusers have easy access to the KB articles that have been made visible to them.
  • Relevant KB articles are automatically e-mailed to the enduser in response notifications (as auto suggestions) when the ticket is logged.

Likewise, the sooner an IT technician can get to a resolution in the KB, the easier it is to reduce the mean time to resolve incidents and improve first call resolution rates. This can be achieved by adding keywords and tags to solutions to make items easily searchable.

Widen the KB’s Horizon

A well-built KB should not be limited to storing resolutions for incidents. Use the KB as a repository of important checklists that keep a particular service up and running. Commonly used information such as checklists on regular server housekeeping tasks or changes that require restarting the server will keep technicians from missing crucial steps in change implementation. The KB should also be used to save important workflows in IT services, training material for technicians, user guides, and even FAQs. This, in turn, helps reduce incident response time and will help technicians keep up with pre-defined SLAs.

Establish a Knowledge Management Team

When it comes to creating a knowledge management (KM) system as a key resource in your organization, a knowledge management team certainly has its advantages. One of the most significant advantages is the added ownership and accountability in the KM process. You can create a user group of technicians who are well trained in the proposed KM model for your organization. This team should be assigned to supervise the approval process. They should also be able to streamline KM workflows, identify possible areas of extension, and be responsible for collecting information from resources. The whole KM process is cyclical, and the KM team should oversee it. This will help avoid chaotic roles and prevent any missed information.

Evaluate Your KB’s Performance

Constantly monitoring the efficiency of your KM system with relevant metrics will help you evaluate its performance. The following are the metrics and methodologies generally used in KM to identify its strengths and weaknesses:

  • Customer surveys on the quality and accessibility of the KB content
  • Identifying zero click-throughs where KM content exists
  • Evaluation of knowledge gaps (where KM content does not exist)
  • Reports from your help desk application on ticket response and resolution times, as well as reopen rates

After you’ve built your knowledge base and have a good knowledge management system running, sit back and reap the benefits. Whether it’s just a few tweaks to an existing knowledge base or a brand new one, it shouldn’t be long before customers and employees say, “It’s on fleek!”

The Cloud Industry: Middle Child No Longer

Even if you didn’t grow up watching re-runs of the Brady Bunch, you know the saying: “Marcia, Marcia, Marcia!” It’s the exasperated cry of middle children everywhere who too often get overlooked — caught in the twin shadows of younger and older siblings who always seem to get the attention, just like poor Jan Brady did in every single episode.

The cloud industry has its own middle child whose needs have too long been underserved in favor of its larger and smaller siblings. It’s mid-market companies who have gotten far less attention from cloud providers compared to 1) the wide swath of small businesses that represent a lucrative market for retail-style cloud services; and 2) large enterprises who have much deeper pockets and whose large implementations are a magnet for cloud providers’ sales teams. Mid-market companies have been the Jan Brady of the cloud industry, but that middle child status is about to be as outdated as the 1970s fashion in those re-runs.

A “Cloud” Burst on the Horizon

In large part because cloud providers have not supported their unique needs, mid-market companies have lagged behind both large enterprises and small businesses in cloud adoption. But that is changing rapidly, and it will re-shape the landscape of the cloud market, which for years has been dominated by the upper and lower ends of the market. Large enterprises have naturally led the way in cloud adoption because of their far greater staffing and financial resources for experimenting with the cloud, conducting more pilot projects, launching more products that leverage the cloud, and more. To complement that internal momentum, large enterprises have also received tremendous support from cloud providers, who offer specialized services and dedicated teams to meet those needs.

On the other end of the market, small businesses have also been very aggressive adopters of cloud because retail-style packages of bundled, plug-and-play services meet their basic needs, achieve some clear savings and require minimal support from cloud providers. This lower end of the market has also gotten a lot of attention from cloud providers who frequently utilize a retail sales and service model to tap into this massive market of hundreds of thousands of companies with pretty basic cloud needs.

That focus on the needs of the biggest of the big and the smallest of the small meant that the mid-market has too often been left on its own to figure out the cloud for itself. And without adequate support from cloud providers, it’s no wonder that the mid-market has lagged behind on cloud adoption. It may have taken this size of company a bit longer to start and finish their cloud pilot projects, but everything I am seeing in analyst research and my own conversations with customers says that the mid-market is going to embrace the cloud in a big way in the next six to 12 months. Mid-market companies — all 200,000 of them in the U.S. economy — are ready to start transitioning critical applications and business operations to the cloud, and they will need help to do that in a way that factors in the lessons that early adopters have learned about the cloud.

One Size Doesn’t Fit All in the Mid-Market

So what are the key challenges that mid-market companies will face as they move ahead more aggressively with cloud adoption?

One is that mid-market companies often have far more nuanced drivers for cloud adoption than their bigger and smaller siblings. For large enterprises, it often boils down to nimbleness in delivering new products and services to market. For small businesses, it is largely about replacing pricier IT costs with far cheaper cloud-based services. Mid-market companies typically have a more complex set of drivers because they aren’t just trying to save a few bucks; they are trying to reduce complexity. With smaller IT teams and limited IT budgets, mid-sized companies desperately want to tame the unruly scope of their IT operations so that it is manageable for their IT team while also making sense financially. When a mid-market company embraces the cloud, they aren’t just saying, “move these things to the cloud.” They are often doing a fundamental re-thinking of what their IT infrastructure should be today and what it needs to be in the future. Simply put, cloud implementations give mid-market companies the opportunity to do a re-set on legacy IT systems that are woefully complex, outdated, expensive, and ill-suited for future growth.

That means a lot is riding on how mid-market companies handle their cloud implementations, and the honest truth is that they need help; CIOs and IT managers of mid-market companies are often the first to admit it. Small companies typically need only the basics pre-packaged in the bundles they sign up for, and the support they need tends to be more customer service oriented than strategic in nature. Large enterprises, for their part, are usually up to their eyeballs in strategy and support from their in-house teams and from the dedicated external teams of their IT consultants and cloud providers. Mid-market companies need external strategic support to supplement their in-house brain power, and the cloud providers who do the best job of meeting that need will position themselves as the go-to providers for these 200,000 companies looking to go big with the cloud.

Strategy, Not Simply Tactics

One of the biggest strategic challenges that mid-market companies face once they sit down with their provider to map out a plan is determining what type of cloud to use. The vast majority of cloud pilot projects in this segment of the market are done only with the public cloud, so when mid-market companies are given the green light for a larger cloud implementation, they are typically working with limited or zero information about how to use private cloud services and how to decide when to use both public and private in a hybrid cloud strategy. For the mid-market, a blended approach is often the most appropriate solution because it matches the security and performance of private cloud for select applications with the lower cost of public cloud for other applications. No two mid-market companies are the same, so this hybrid strategy must be designed in a thoughtful way that meets near-term needs while also flexing to adapt over time as the company’s size and IT needs change, including future adoption of technologies like SD-WAN.

Being the middle child isn’t fun, as Jan Brady articulated so well for all of us who grew up watching too many television reruns. But it is possible for middle children to break the mold and not live in the shadow of their “perfect” older sibling and their “too cute” little sister. Mid-market companies are poised to turn the dynamics of the cloud market upside down in a way that demands more attention for their unique needs. For all of us who work closely with mid-market companies (or who are simply middle children), this is something to celebrate. For once, Marcia might be the jealous one. We can dream, can’t we?

7 Considerations for Adopting a Hybrid-Cloud Approach to UC&C

Though the cloud market has exploded over the last few years, arming companies with cloud applications for every documented process imaginable, many organizations’ security policies mandate that at least some of their data stay on-site. To meet their business objectives while complying with corporate security policies, these companies are turning to the hybrid cloud. In fact, IDG forecasts that by the end of 2016, more than half (54%) of enterprises will employ a hybrid-cloud unified communications and collaboration (UC&C) model.

Adopting a hybrid-cloud approach is a great option for companies looking to extend investments in existing systems while leveraging the scalability of the cloud. According to a report from West’s Unified Communications Services, the hybrid-cloud strategy resonates well with today’s IT managers. For instance, 32% of IT leaders think a hybrid approach would be the best method if they decided to migrate their telecommunications to the cloud.

While the hybrid cloud is a practical stepping stone toward a cloud-first approach, IT departments must develop a well-thought-out plan to reap the benefits. Here are some key considerations for crafting a hybrid-cloud strategy:

Evaluate costs.

Cost comparison is a logical first step toward establishing an effective hybrid-cloud environment. While cloud UC systems have lower upfront costs, many have recurring charges billed monthly as a service. As such, it’s crucial to consider total cost of ownership before migrating UC and collaboration solutions to the cloud. In comparison, going with an on-premises UC solution requires a large capital expense up front with additional installation and network integration costs (along with ongoing maintenance and upgrade expenses).
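
The figures below are purely illustrative assumptions, but they show the kind of total cost of ownership arithmetic worth doing before choosing between cloud and on-premises UC.

    # Illustrative 5-year TCO comparison for a 200-seat UC deployment.
    # Every figure here is an assumption made for the sake of the arithmetic.
    users = 200
    years = 5

    # Cloud UC: recurring per-user subscription.
    cloud_per_user_month = 25.0
    cloud_tco = users * cloud_per_user_month * 12 * years

    # On-premises UC: up-front capital expense plus installation/integration and annual upkeep.
    onprem_capex = 120_000.0
    onprem_install_and_integration = 25_000.0
    onprem_annual_maintenance = 18_000.0
    onprem_tco = onprem_capex + onprem_install_and_integration + onprem_annual_maintenance * years

    print(f"Cloud UC, 5-year TCO:       ${cloud_tco:,.0f}")
    print(f"On-premises UC, 5-year TCO: ${onprem_tco:,.0f}")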

Assess third-party providers.

While shopping around for hybrid cloud communications providers, IT departments should spearhead conversations around scalability, security, and uptime. Tech leaders should ask questions such as: What type of disaster recovery plan is offered? What support is provided when the local network goes down? What is the process for adding and removing users from the plan? What security controls are in place to safeguard data?

Outside providers’ customer portfolios should also be considered. For instance, how many customers do they have, and what size company do they typically serve? Since the answers to these questions vary depending on provider, organizations need to ensure they identify one that has the bandwidth and experience needed to execute on their short and long-term goals.

Prepare for redundancy costs.

An attractive feature of a hybrid-cloud environment is the ability to back up internal applications and business data to the cloud. This benefit ensures a smooth disaster recovery process and business survival if an internal network or hardware failure were to occur. In the event of an outage, network redundancy acts as backup for moving operations onto redundant infrastructure. Though redundancy is recommended, it’s often accompanied by additional costs, so organizations should create a buffer in the budget.

Assess network architecture.

Determine if your network architecture is equipped for migration before moving any of your UC and collaboration applications to the cloud. Total number of users, remote workers, and plans for future growth should all be evaluated in advance of a migration. For example, when a company moves to a hosted VoIP phone system, it needs to evaluate its call volume — specifically, when calls peak — and then build in some extra coverage to plan for any other spikes that may take place. Organizations should work with their providers to ensure they don’t run out of capacity. If your needs do evolve, cloud providers can expand services like SIP trunks to meet your goals. Your network provider may also be able to provide QoS so that you can adjust bandwidth to prioritize mission-critical data over less important traffic.
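
To make the call-volume planning concrete: roughly 87 kbps per call per direction is a commonly cited figure for G.711 voice over Ethernet. The sketch below, with assumed peak and growth numbers, estimates how much bandwidth to provision.

    # Rough bandwidth sizing for hosted VoIP. Peak-call and growth numbers are assumptions;
    # ~87 kbps is the commonly cited per-call figure for G.711 over Ethernet.
    PER_CALL_KBPS = 87.0          # G.711 with RTP/IP/Ethernet overhead, one direction
    peak_concurrent_calls = 120   # observed busy-hour peak (assumed)
    growth_buffer = 1.25          # 25% headroom for unexpected spikes

    required_mbps = peak_concurrent_calls * PER_CALL_KBPS * growth_buffer / 1000
    print(f"Provision at least {required_mbps:.1f} Mbps per direction for voice traffic")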

Figure out which data to move to the cloud and which to keep on-premises.

The next, and perhaps most tedious, task is for the IT department to determine what data they want (or are required) to host on-premises and what data can live in the cloud. Cloud providers generally spend more time securing their platforms than most organizations can for their on-premises systems, so it’s time to put to rest the myth that premises-based systems are more secure.

Consider maintenance time and costs.

To prevent unexpected expenses, organizations should factor maintenance costs into their budgets when considering a hybrid strategy. For most cloud-based UC solutions, maintenance and upgrades are handled by the hosting company. Since these upgrades can be made remotely, on-site support is usually not needed. For organizations with a smaller IT staff, this can be especially beneficial. At the same time, maintenance and adjustments to on-premises platforms call for on-site support to implement upgrades. Organizations might want to consider hiring contractors to handle extension changes, firmware upgrades, and other upgrades to on-premises systems.

Craft a timeline for migration.

Some organizations make a mad rush to the cloud rather than approaching it with a strategic timeline. Hybrid environments often allow for a more phased approach, so organizations should meet with their provider to develop a plan that fits their goals and requirements while capitalizing on the company’s onsite resources.

Embracing a hybrid-cloud model can be a challenging yet necessary step towards modernizing your organization’s communications solutions. When all of the factors outlined above are considered, companies can ensure a smooth transition and, more importantly, a successful long-term relationship with their hybrid-cloud environments.

Encryption as a Service from Peak 10

Peak 10, Inc. has announced the launch of its Encryption as a Service offering, providing businesses with a location-agnostic solution that encrypts workloads anywhere — a Peak 10 data center, the Peak 10 cloud, an onsite data center, a third-party data center, or a hyperscaler cloud (such as AWS, Azure, etc.).

According to a Peak 10 encryption study, 70% of respondents expect to increase or maintain their encryption budgets this year. Encryption as a Service will allow Peak 10 to continue meeting customer demand for secure, scalable solutions as part of one of the industry’s most robust suites of offerings.

Cybercrime is a $400 billion global enterprise, and it is growing daily. Encryption works as a reliable last line of defense for protecting sensitive data, essentially scrambling information and rendering it inaccessible to those without the keys; it has become a critical tool in any organization’s security posture as large-scale hacks have crippled American businesses for weeks at a time. Peak 10’s Encryption as a Service allows businesses to protect critical customer and proprietary data while retaining complete control over encryption keys, which means they control who can decrypt data. Additionally, the solution offers privileged user controls and policies that dictate access to data, as well as full audit logs.

With Encryption as a Service, customers can more fully leverage Peak 10’s services, allowing for seamless implementation of its solutions, including cloud, infrastructure, disaster recovery, and object storage. These offerings allow customers to benefit from previously inaccessible business advantages, including faster time to market, increased agility, and a considerable reduction in costs. Additionally, Encryption as a Service is compatible with existing applications and appliances, giving customers full access to their encryption keys and data assets.
