High-Performance Virtualization from Red Hat

Red Hat, Inc. has announced the general availability of Red Hat Virtualization 4, the newest release of its Kernel-based Virtual Machine (KVM)-powered virtualization platform. Red Hat Virtualization 4 challenges the economics and complexity of proprietary virtualization solutions by providing a fully open, high-performing, more secure, and centrally managed platform for both Linux- and Windows-based workloads. It combines a powerful updated hypervisor, an advanced system dashboard, and centralized networking for users’ evolving workloads. Built on Red Hat Enterprise Linux, Red Hat Virtualization 4 is designed to integrate easily with existing IT investments while providing a foundation for emerging technology deployments, including containerized and cloud-native applications.

While virtualization remains a key element of datacenter infrastructure, customer needs around the technology are rapidly evolving. Enterprises just embarking on a virtualization deployment may want a complete, agile platform that embraces efficiency and open standards of interoperability, while enterprises who have already deployed virtualization technologies may become increasingly concerned about their investment due to costs, performance limitations, or incompatibility. Red Hat Virtualization 4 is designed to address these emerging scenarios with a platform built on open standards, providing a powerful, flexible solution for new deployments and helping existing virtualization users migrate to an open, extensible solution.

Red Hat Virtualization 4 includes both a high-performing hypervisor (Red Hat Virtualization Host) and a web-based virtualization resource manager (Red Hat Virtualization Manager) for management of an enterprise’s virtualization infrastructure. Specifically, Red Hat Virtualization 4 introduces new and enhanced capabilities around:

  • Performance and extensibility
  • Management and automation
  • Support for OpenStack and Linux containers
  • Security and reliability
  • Centralized networking through an external, third-party API

Performance and Extensibility

Red Hat Virtualization 4 introduces a powerful new hypervisor with a smaller footprint, co-engineered with Red Hat Enterprise Linux 7.2. The new hypervisor helps streamline the installation of system packages and driver updates, simplify the deployment of modern technologies, and provide better hardware support and configuration management integration. Additionally, Red Hat Virtualization can now be installed via Anaconda, the common installer for both Red Hat Enterprise Linux and the Red Hat Virtualization hypervisor.

The new platform also includes support for advanced network functionality, helping to simplify the process of adding and supporting third-party network providers via a new open API. This feature allows for the centralization and simplification of network management by enabling Red Hat Virtualization Manager to communicate with external systems to define networking characteristics that can be applied to a virtual machine’s network interfaces.
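For orientation, here is a minimal sketch of how an administrator might list the external network providers that Red Hat Virtualization Manager knows about through its REST interface. The host name, credentials, certificate path, and resource path shown are assumptions for illustration only; consult the Red Hat Virtualization 4 REST API documentation for the exact endpoints and response schema.

```python
# Illustrative only: listing external network providers via the Manager's REST
# API with the generic `requests` library. The URL, credentials, CA path, and
# resource name below are assumptions, not verified endpoints.
import requests

ENGINE_API = "https://rhv-manager.example.com/ovirt-engine/api"  # hypothetical Manager URL
AUTH = ("admin@internal", "password")                            # hypothetical credentials

response = requests.get(
    f"{ENGINE_API}/openstacknetworkproviders",   # assumed resource path
    auth=AUTH,
    headers={"Accept": "application/json"},
    verify="/etc/pki/ovirt-engine/ca.pem",       # hypothetical CA bundle location
)
response.raise_for_status()

for provider in response.json().get("openstack_network_provider", []):
    print(provider.get("name"), provider.get("url"))
```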

Management and Automation

To improve overall virtualization management, Red Hat Virtualization 4 offers an advanced system dashboard that provides a comprehensive view of virtualized resources and infrastructure. This enables administrators to better diagnose and remediate problems before they impact operations. Additional automation functionality includes:

  • A storage image uploader, which provides a browser-based interface to upload existing KVM Virtual Machine files directly or via a remote URL, placing the image in the storage domain without requiring third party tools.
  • Advanced live migration policies that let users fine-tune migration characteristics at the host, cluster, or individual VM level, enabling faster operations and better overall performance.

OpenStack and Linux Containers

While virtualization as a technology is mature, Red Hat Virtualization 4 provides key support features for Linux container-based workloads as well as OpenStack private and hybrid cloud deployments. For containers, Red Hat Virtualization 4 supports Red Hat Enterprise Linux Atomic Host as a configurable guest system and allows guest agents to be run as, and report on, containers on the Atomic Host VM.

Red Hat Virtualization 4 also provides native support for Red Hat OpenStack Platform Neutron. This enables organizations to streamline shared services and minimize their operational footprint by deploying services more seamlessly across traditional and cloud-enabled workloads.

A More Secure Virtualization Environment

These newly-introduced features in Red Hat Virtualization 4 complement the security assets brought to Red Hat Virtualization through its base in Red Hat Enterprise Linux. Red Hat Virtualization 4 includes and supports sVirt, which applies Mandatory Access Control (MAC) for greater VM and hypervisor security. This helps to improve overall security and harden the physical and virtual environment against vulnerabilities that could be used as an attack vector against the host or other VMs.

Red Hat Virtualization is also integrated with Red Hat Satellite, Red Hat’s systems management solution. Red Hat Virtualization standardizes infrastructure and virtual machine guest provisioning through existing Red Hat Satellite 6 implementations. It also provides visibility into the host and virtual machine errata details to ensure patch compliance across a physical and virtual environment.

Managed SD-WAN Solution from Masergy

Masergy Communications Inc. has announced the addition of Managed Software Defined WAN (SD-WAN) to its global hybrid networking portfolio. SD-WAN is a relatively new approach to designing and deploying wide area networks to meet business and application performance requirements.

Mainstream enterprise adoption of public cloud services and SaaS applications is driving demand for agile hybrid networks that can intelligently combine broadband and high-performance private networks. Market research firm International Data Corp. (IDC) underscores this trend, reporting that “SD-WAN revenues are expected to exceed $6 billion in 2020, with a compounded growth rate of more than 90% between 2015 and 2020.”

Masergy’s Managed SD-WAN is the latest addition to the Masergy Managed Network f(n)™ family of fully managed network functions. The solution supports premises-based, cloud, and virtualized deployments, and can utilize any combination of broadband and high-performance private WAN connections. General availability is scheduled for Q4 2016.

Masergy’s Managed SD-WAN enables network designs that improve application performance and security while optimizing price-performance. Performance and data resiliency features include:

  • Dynamic Path Control and Adaptive Forward Error Correction
  • Automatic IP-VPN tunnels with AES-256 encryption
  • Dynamic Policy-based Application Routing (see the sketch after this list)
  • Centralized Policy and Configuration Management
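Dynamic path control and policy-based application routing generally come down to scoring each available link against an application’s requirements and steering traffic onto the best compliant path. The sketch below illustrates that general idea only; the link names, metrics, and thresholds are invented and do not represent Masergy’s implementation.

```python
# Generic illustration of policy-based path selection across WAN links.
# All link names, measurements, and policy thresholds are invented examples.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Link:
    name: str
    latency_ms: float
    loss_pct: float

@dataclass
class AppPolicy:
    name: str
    max_latency_ms: float
    max_loss_pct: float

def pick_path(links: list, policy: AppPolicy) -> Optional[Link]:
    """Return the compliant link with the lowest latency, or None if none qualify."""
    compliant = [
        link for link in links
        if link.latency_ms <= policy.max_latency_ms and link.loss_pct <= policy.max_loss_pct
    ]
    return min(compliant, key=lambda link: link.latency_ms, default=None)

links = [
    Link("broadband-1", latency_ms=38.0, loss_pct=0.4),
    Link("private-wan", latency_ms=22.0, loss_pct=0.0),
]
voice_policy = AppPolicy("voip", max_latency_ms=30.0, max_loss_pct=0.5)
best = pick_path(links, voice_policy)
print(best.name if best else "no compliant path: apply forward error correction")
```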

Hosted Private Cloud Software from dinCloud

dinCloud has announced new features available in dinManage, the company’s cloud orchestration platform, including cloud analytics and data monitoring. Now, customers can have detailed and continuous insight into how their hosted private cloud (consisting of hosted workspaces and cloud infrastructure) is operating.

Putting control in the user’s hands, the easy-to-use dinManage console gives dinCloud customers full transparency into their cloud infrastructure. They can manage their network and security parameters for dinCloud’s enterprise suite of offerings including hosted workspaces, cloud infrastructure, and cloud security services.

Cloud Analytics and Monitoring

Not all cloud service providers offer cloud analytics and monitoring, and those that do often have very limited functionality. dinCloud’s cloud analytics and monitoring features give a CTO or IT decision maker the ability to create reports showing the resource utilization (e.g., CPU, memory) of a specific virtual machine. Analytics data can be viewed in real time or historically. Users can also set up monitoring alerts, which provide notifications when specified utilizations reach certain thresholds. Now, IT can make decisions based on actual utilization data and choose whether to shift unused capacity to another department to reduce costs.

dinCloud collects data at 1-minute intervals, gathering metrics including:

  • Local drive utilization (depending upon the number of disks added to the machine) – a feature unique to dinCloud
  • Total disk utilization
  • Memory utilization
  • CPU utilization
  • Read/Write throughputs
  • Network IN/Network OUT speeds

These metrics are provided to customers in simple-to-understand graphics and tables, and kept on file for up to two weeks.
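As a rough illustration of how threshold-based alerting over metrics like these typically works, here is a minimal sketch; the metric names, sample values, thresholds, and alert output are assumptions for the example and do not reflect dinManage’s actual configuration interface.

```python
# Minimal sketch of threshold alerting over per-minute metric samples.
# Metric names, sample values, and thresholds are invented for illustration.
samples = [
    {"vm": "hr-desktop-01", "cpu_pct": 91.0, "memory_pct": 72.5, "disk_pct": 64.0},
    {"vm": "finance-app-02", "cpu_pct": 35.0, "memory_pct": 88.0, "disk_pct": 97.5},
]

thresholds = {"cpu_pct": 90.0, "memory_pct": 85.0, "disk_pct": 95.0}

def check_alerts(samples, thresholds):
    """Yield (vm, metric, value) for every metric at or above its threshold."""
    for sample in samples:
        for metric, limit in thresholds.items():
            if sample.get(metric, 0.0) >= limit:
                yield sample["vm"], metric, sample[metric]

for vm, metric, value in check_alerts(samples, thresholds):
    print(f"ALERT: {vm} {metric} at {value:.1f}% (threshold {thresholds[metric]:.1f}%)")
```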

dinCloud’s cloud analytics and monitoring helps customers figure out their data utilization in a variety of ways, with benefits including:

  • Seeing exactly at what time intervals machine resources are most heavily utilized, and which machines are underutilized.
  • Increasing virtual resources when a machine needs more CPU, memory, or disk.
  • Troubleshooting application-level issues by looking at historical utilization graphs.
  • Resolving issues quickly by reviewing real-time or historical data without logging into a machine.
  • Setting up alerts to proactively fix a problem before it starts. For example, receiving an alert before a machine runs out of disk space.

www.dinCloud.com

Storage from FalconStor Software

FalconStor® Software Inc. has announced FreeStor® for the hybrid cloud. Building on its heterogeneous storage platform, FreeStor enables enterprises and cloud service providers (CSPs) to take advantage of the performance and reliability of block-based enterprise storage within a hybrid model at public cloud prices.

New benefits of FreeStor’s intelligent approach to flexible data management include:

  • Simple, fair pricing – organizations now only pay for licensing of their primary instance of data, not the total amount of storage consumed.
  • Cloud enablement – enable users to add public cloud storage in order to create a hybrid solution that can be managed through a single pane of glass.
  • Secure multi-tenancy – integration with Active Directory or LDAP for authorization, access and audit compliance providing trustworthy security at all levels of an organization’s installation.
  • Enhanced analytics – enabling core-to-edge decision-making abilities while providing information for proactive management of SLAs.
  • Unified client management – overcome business disruption with easy, templated agent deployment, simplified configuration and updates, and intelligent analytics from core-to-edge.
  • Performance optimization – improved support for NVMe unlocks new levels of I/O performance and lower latency, while the addition of Linux 7 compliance brings enhanced, patented application acceleration, workload portability both on-premises and in the cloud, and faster zero-downtime configurations.

5 Considerations For A Successful Tech Startup Launch

Tech startups today are all over the map — but an interesting distinction is the difference between startups that heavily rely on technology and actual technology startups.

Here are several kinds of technology startups that are prevalent today:

Security. The prominence of security has skyrocketed, and it’s no secret as to why. Look at how much the security landscape at large has advanced, from the exponentially increasing occurrences of cybercrime to the complexity of executing a breach — a solid security posture is now a “must have” for every business regardless of industry. It comes as no surprise then that there are security tech startup companies cropping up everywhere.

Easy-to-use cloud migration tools. Countless companies are building easy-to-use tools for migrating workloads from one cloud or internal virtualized environment to another. These startups are markedly popular right now, probably because of the attractiveness of such a product for essentially any business that needs to migrate — this kind of service makes an otherwise complex, critical IT project seamless, efficient, and the responsibility of a third party. Can’t argue with that kind of convenience.

Single-pane-of-glass management (regardless of software functionality). Everyone wants single-pane-of-glass management; it’s a popular luxury in tech. It doesn’t matter where an environment lives or how many places it lives — an organization can have an on-premises data center presence, an AWS presence, or a presence on the other side of the globe. Regardless of geographic dispersal, IT management teams want to use one software interface to manage all resources and assets. Generically referred to as multi-cloud management, this software allows an end user to have a single login portal that provides visibility into all environments, regardless of location. Companies providing such software are prevalent and growing steadily.

Software-defined solutions. Software-defined networking (SDN), or software-defined solutions in general, are becoming ubiquitous, but they’re not just the new IT buzzword. Visit any tech tradeshow and you’ll see nearly infinite tech startups touting applications with diverse interaction capabilities, whether with infrastructure or other software.

If You’re a Tech Startup, You Need to Plan Ahead — for IT and Business Initiatives

Regardless of your tech startup’s core business, your biggest ally is planning. Think of it like taking a vacation: if you start to plan the trip after you’ve already left your house, there will inevitably be challenges you’ll face along the way. Did you turn off the stove? Did you put gas in the car? Where are you going? How long is it going to take to get there? Where are you staying? Did you reserve a hotel room? You’ll experience a similar scenario if you jump feet-first into launching a startup without putting enough time and effort into planning. This is the basis of many tech startups’ most common pitfalls.

One of the biggest mistakes I see tech startups making is trying to start too large. The cloud/virtualization hype that now drives infrastructure planning decisions may not point to the right place for a startup to begin from an IT perspective. As we all know, startup companies live and breathe in a pretty extreme ebb and flow for the first few years of existence; you’ve got to start, and then you’ve got to sell, period. As a result, there will inevitably be some upfront, out-of-pocket costs. But none of the startups I’ve ever been involved in, whether my own or in consulting for other companies, has had a revenue stream out of the gate.

So, instead of committing to monthly recurring costs or cutting a six-figure check to purchase hardware, consider whether launching your infrastructure makes more sense as a physical or virtual deployment, and then talk to a provider to see what the different options cost.

Take it from me: a colocation solution will generate a predictable monthly cost that is considerably easier to manage than a cloud cost. You can go into your environment with a physical device, a server, or multiple servers and have room to grow. Yes, the pitch made at the outset of all things cloud was based on the premise that hardware buyers overbuy and underutilize RAM, CPU, and disk. It’s not that this concept has no basis; it’s just that for a startup, right-sizing will probably make more sense down the road, after you’ve started generating revenue. In the beginning stages, I’d recommend seriously considering spending $10,000 or $15,000 on infrastructure, parking it in a data center, and dealing with static costs.
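To make that trade-off concrete, here is a rough break-even sketch. Every figure below is a hypothetical placeholder rather than a quote from any provider; a real comparison should plug in your own hardware, colocation, and cloud pricing.

```python
# Rough break-even comparison: one-time hardware purchase plus colocation fees
# versus an equivalent recurring cloud bill. All figures are placeholders.
hardware_capex = 12_500.0   # one-time server purchase (assumed)
colo_monthly = 800.0        # rack space, power, bandwidth (assumed)
cloud_monthly = 2_000.0     # equivalent cloud footprint (assumed)

def breakeven_months(capex, fixed_monthly, cloud_monthly):
    """Months until cumulative cloud spend exceeds capex plus colocation spend."""
    if cloud_monthly <= fixed_monthly:
        return None  # under these assumptions, cloud never costs more
    return capex / (cloud_monthly - fixed_monthly)

months = breakeven_months(hardware_capex, colo_monthly, cloud_monthly)
print(f"Colocation pays for itself after roughly {months:.1f} months")
# With these placeholder numbers, roughly 10.4 months.
```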

When your business grows and your revenue is relatively predictable, that is the point at which reassessing and potentially considering a cloud strategy, rather than buying and managing hardware, will make sense. In this case, say three to five years from your company’s beginning, you’ll have the wherewithal to leverage a cloud service provider to manage your infrastructure and consume it as you need vs. making an initial purchase and then growing into it.

Basically, don’t look at your ideal state of operations for year five and build your infrastructure based on that. Just get past the first year.

Vital Considerations for a Successful Startup Launch: Development Endeavors You Can’t Get Wrong 

You’ve got a challenging road ahead of you, but knowing the questions you need to ask yourself and your team at the starting line will take you a long way, from both a technical and non-technical perspective.

Technical Considerations

I’m a huge proponent of taking the most minimalistic approach possible where IT is concerned because it’s expensive and it’s not necessarily core to what you’re doing as a business — but it is needed to support what you’re doing. So, consider your core business and define where you really want your efforts to be focused.

For instance, if you’re a SaaS provider you need outstanding programmers and a place to store and test code, but you probably don’t want to manage and maintain infrastructure. You’re in software, so focus on software development. Also consider what initiatives your IT environment needs to support. Using the SaaS company example again, consider that given your key practice areas you probably need an environment that aligns with your initiatives. It should allow your developers easy access and the ability to create, test, and launch code.

  • Start as minimally as possible and as cheaply as possible.
  • Define what is really needed that’s core.
  • Ask yourself: do you really want to manage IT infrastructure?
  • Identify business areas that should be most heavily invested in.
  • Build a healthy IT environment. THEN explore infrastructure.
  • Find the best IaaS provider for your solution.

Nontechnical Considerations

1. Know the competitive landscape like the back of your hand.

  • Who else is doing what we’re doing, and how are they doing it?
  • What are they charging for it?
  • What kind of revenues are they generating?

There’s always a rare exception, but there are very few tech companies out there that have a totally unique service no one else is providing. Know your competitors and set realistic revenue expectations based on what you find.

2. Build an amazing team that shores up your weaknesses.

Your first consideration after you know where your product fits into the tech world should be building an amazing team. If you’re the visionary, you probably have a natural tendency to lean toward one side of IT vs. another, whether it be development or infrastructure. Know your strengths and put together a team of experts who can support the areas where you’re weakest:

  • Financial
  • Software
  • Infrastructure

3. Understand the financials — your cost to sales ratio in particular.

Understanding the financials is key. I’ve seen many startup companies come up with a great idea that costs, for example, $1 to develop but will sell for $1.10 because of the competitive landscape, which is not a good cost to sales ratio. Know the costs of building your application and product and how much you can sell it for. As a frame of reference, consider:

  • Competitive landscape
  • Cost of building the application and product
  • Viable price for selling the app

4. Be unmistakably different and have an excellent plan for getting the word out about your application.

We live in a tech world today where if one person does it, 15 people do it. Differentiate what you can do with your product versus what your competitors can do — and once again, to do that you have to understand your competitors. What are the incentives for customers to go buy from a competitor vs. your business? Figure out what makes your product different, and if you can’t, find a way to make it different.

It’s a noisy IT world. How will you announce to the world (1) that your product is awesome and (2) why it’s better than the others? Develop a marketing plan for putting your stake in the ground, and for keeping it there long term.

  • Announce the awesomeness of your product
  • Make it clear why your product is better

5. Find a proven, credible small business consultant who can vet your plan.

Don’t go it alone. Make sure you have people you can count on who are equally invested in the success of your business, will tell you bluntly if your idea isn’t viable, and have the skills to look for potholes in the process you’ve put together.

Fail to Plan, Plan to Fail

It sounds cliché, but it’s true. The road is long and winding, but startups are the future of what we’re doing in this industry, so give yourself the best chance of success by planning the right way. If you fail to plan, you plan to fail. Every startup is gung-ho to get out there and start selling, but it’s in your best interest to take the necessary time to put a blueprint together — know what you’re building and how.

Peak 10 was once a startup, so we understand the DNA of startups. We started as a colocation company and have grown products and services because we’ve done what I just laid out. If you’re considering embarking on a tech startup journey, feel free to reach out to Peak 10 to sit down and talk about your plan, and understand what other companies have done with success and failure.

How Cloud-Native Digital Asset Management Drives Transformation

Cloud-based applications proliferate across today’s enterprise. From Google Drive, Box, and Dropbox to Salesforce and Adobe Creative Cloud, on-the-go workforces rely on the cloud at every step of their day. The problem? The traditional digital asset management (DAM) platforms they use to drive business workflows and manage business content lack the cloud integration, scalability, and availability necessary to truly realize the benefits and efficiencies of cloud computing.

The Value of Digital Asset Management

Why is DAM so important? Comprising the management tasks, policies, and controls for intelligent decision making, advanced DAM enables comprehensive business workflows that can transform business processes. Through the ingestion, annotation, cataloguing, storage, search, retrieval, and distribution of digital assets, DAM powers the processes that run the business.

Yet, without a cloud-native approach, the benefits of DAM may be limited. A true digital workplace needs to enable its workforce to use content efficiently and collaboratively, wherever it resides. That means powerfully connecting local systems with cloud-based storage, content delivery networks (CDNs), cloud file services like Google Drive, and more. The success of a DAM solution is directly related to its integration capabilities and the reach it has across all content repositories, cloud and on premises.

While most DAM vendors refer to their software as cloud-based, their products are really cloud-hosted: a version of their legacy on-premises application, retrofitted to run on virtual machines. What cloud-hosted solutions can’t support is the integration, infrastructure, agility, availability, and security essential to truly benefit from the cloud.

The Advantage of a Cloud-Native Approach

In contrast, cloud-native platforms have been built for the cloud. They’re rapidly deployable to cloud infrastructures such as Amazon Web Services (AWS) or Microsoft Azure and they more intuitively integrate with the cloud-based applications users want to use, like Google Drive or Dropbox.

For example, a true cloud-native DAM platform merges local and cloud-based content within a single addressable framework with ease. It can use content residing in enterprise file sharing services (EFSS) as if it were local, making those files a seamless part of the content workflow. The most advanced DAM platforms will actually manage these files in place while adding full-text search, versioning, security, and integration into enterprise workflows as if they were stored in the native repository. Developers can thus build specific application logic and business workflows using files retained in the cloud, while users can access, share, and collaborate on them right alongside other related content.

Taking the cloud-native DAM approach a step further is sophisticated search functionality. Advanced DAM platforms that use a cloud-native approach feature powerful embedded search that works across content repositories, including those residing in the cloud. As a result, sophisticated workflows can be built using faceted search, fuzzy search, synonym search, geo-distance filtering, and more for truly transformative business operations.
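Search engines that expose this kind of functionality (Elasticsearch is one common example) typically let a single query combine fuzzy matching, facet aggregations, and geo-distance filters. The sketch below shows an illustrative query body of that shape; the index name, field names, and values are assumptions, and the article does not say which engine any particular DAM product embeds.

```python
# Illustrative Elasticsearch-style query body combining a fuzzy match, a facet
# (terms aggregation), and a geo-distance filter. Field and index names are
# invented; no specific DAM product's schema is implied.
query = {
    "query": {
        "bool": {
            "must": [
                {"match": {"title": {"query": "summmer campaign", "fuzziness": "AUTO"}}}
            ],
            "filter": [
                {"geo_distance": {
                    "distance": "50km",
                    "shoot_location": {"lat": 40.71, "lon": -74.01},
                }}
            ],
        }
    },
    "aggs": {
        "by_asset_type": {"terms": {"field": "asset_type"}}  # facet counts per asset type
    },
    "size": 20,
}

# With the official Python client, such a body could be submitted as, e.g.:
#   from elasticsearch import Elasticsearch
#   Elasticsearch("http://localhost:9200").search(index="dam-assets", body=query)
```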

Finally, a cloud-native approach to DAM empowers the user through comprehensive desktop sync. Potential version conflicts between local desktop files and the content repository — including content residing in the cloud — are automatically avoided. This improves workforce productivity, avoiding user frustration and saving countless hours of file conflict resolution. Collaborative projects instead stay up-to-date and efficient to ensure business workflows run seamlessly.

The cloud is powering digital transformation everywhere. But only if and when DAM technology is used to its fullest — truly using the cloud to its advantage — will transformation become embedded throughout the enterprise workflows of tomorrow.

Why Self-Service Analytics Is Replacing Traditional BI

Modern organizations typically use several IT tools to monitor their applications, networks, and other IT components in real time. Unfortunately, this leads to independent data islands, which creates a one-dimensional view of IT. In order to make strategic decisions, organizations need an IT operational analytics tool to analyze data from multiple sources, spot trends, and make better decisions.

While there are several analytics tools in the market, most tend to be either complicated or expensive to use. Self-service analytical tools, on the other hand, offer a rare combination of simplicity and affordability, making them popular among users. With the emergence of these comprehensive tools, every IT user can access data from various silos, get unified insight, collaborate with other teams, and gain the visibility necessary to make faster and smarter decisions.

In a recent survey conducted by ManageEngine, over 160 IT professionals, including CIOs, managers, and technicians from around the globe, highlighted their priorities and challenges when it comes to analyzing data.

Here are some of the findings, which suggest that self-service analytics is here to stay.

Analytics is not just for data experts anymore. Owing to their complexity, traditional business intelligence (BI) tools have always been handled by data experts, meaning decision-making was limited to a privileged few. But not anymore. Today, data is an integral part of any business, and users need to access it on a daily basis so they can make decisions on their own.

Empowered users ensure better IT governance. Gone are the days when a user had to wait for the IT department to furnish a report or a chart to get the information they need. IT and business users alike no longer want to depend on other sources to fulfill their reporting requirements and prefer to do it on their own. Self-service BI tools provide more flexibility in this regard and allow users to quickly create personalized reports, get real-time insight on the data they need, and take necessary action.

Customization replaces standardization. Different teams have different reporting needs, and self-service reports can be personalized based on individual requirements. By doing so, they also provide more insight into why certain strategies work and others don’t.

On-demand reporting is critical. Ad hoc reports are considered more popular because they provide answers to particular questions and analyze specific data. This means they need to be created instantly, without any delay. With self-service reporting, enterprises can finally enable users to easily access and share any necessary information.

Visual analytics are more popular. A visually driven, intuitive user interface adds more context to data and lets you instantly view, interpret, and analyze information. Users can now create reports and dashboards using visualization tools such as charts, widgets, KPI metrics, pivot tables, and more. Self-service tools allow them to visually slice and dice data, drill down into details, and change appearances with different chart types and predefined templates.

Organizations need to be proactive and agile to meet ever-changing business requirements. Investing in a self-service analytics tool is a step in the right direction. It will empower users, increase productivity, and positively impact business.

Taking Stock Of Today’s Trends To Set Tomorrow’s Cloud Strategies

Looking at the state of the cloud computing market, we see the coming year as one in which organizations will be thinking more strategically about the cloud. There will still be growth as organizations continue to invest in the advantages of cloud computing, but it’s necessary for everyone to evaluate the lessons learned and make sure that those investments are, in fact, strategic.

This is already happening to a degree and can be seen in two cloud trends reported in the Wall Street Journal. The first, a study by the tech research firm IDC, reports that cloud infrastructure spending is not only on an upward trend but is also outpacing spending on traditional IT. The second, based on a CompTIA survey, found that the adoption of enterprise cloud applications was trending down.

This contradiction in trends can be explained as a symptom of where we are in the maturation process of the cloud computing market.

No one questions that cloud adoption comes with a host of clear benefits in terms of cost, accessibility, and flexibility. These have been and will continue to be major drivers for cloud computing. Lingering questions related to security are, in large part, fear, uncertainty, and doubt generated by segments of the industry that find themselves well behind the curve. Of course, it’s normal to have concerns about the security of data in the cloud, but security is an issue that spans all aspects of technology, not just the cloud. We all grapple with enterprise security.

As for the IDC and CompTIA findings, how do you reconcile the increase in adoption of cloud infrastructure and the migration of workloads to the cloud with a slowdown in the adoption of cloud applications?

The answer is that, as organizations examine their experience with the cloud, they recognize that while it is great for some things, the cloud may not be the technical panacea they’d hoped for. It may be that existing investments in back-office systems are simply not ready for cloud integration and that that day is farther down the road than first thought. New IT projects may well be cloud-centric, but for legacy IT that is already in place and operating satisfactorily, the ROI may not make sense.

That is an opportunity for vendors whose portfolios span cloud and traditional offerings: they can leverage goodwill to maintain recurring revenue from the maintenance of existing systems while capturing new revenue from the sale of hybrid and cloud products that make sense for existing and new customers.

In my experience, the IDC and CompTIA trends make perfect sense, as we’ve seen our customers engaged in the migration to leading cloud infrastructure service providers like Microsoft Azure, Amazon Web Services (AWS), Google Cloud, and others whose Infrastructure-as-a-Service (IaaS) offerings represent the pinnacle of cloud ROI. They are taking advantage of the cloud’s cost savings by shifting the responsibility for managing equipment and the capital costs of hardware to the IaaS provider, while maintaining management-level control of their operations and, in particular, their mission-critical systems.

In other cases — and especially for companies operating in or expanding into today’s global markets — the cloud can offer advantages associated with the flexibility of being able to establish a local footprint in countries where local control is necessary because of regulation. In such cases, the right cloud strategy can give the organization the ability to focus on compliance in an increasingly regulated business environment without being distracted with the hassles of standing up a new server farm. Consider the changing environment in the EU where the future of the recently adopted EU-US Privacy Shield agreement is already in question, and where the UK’s looming exit from the European Union may have further implications on cross-border data management. For organizations active in Pacific markets, that can also mean responding to changes to the APEC Cross Border Privacy Rules.

It makes sense for any organization to respond to changing circumstances and adjust plans accordingly. You may be three years into a five-year cloud migration plan and, if you haven’t been correcting course along the way, you may find yourself a long way from your destination. Just as cloud consumers must take stock of where they are today in order to adapt strategy, cloud vendors must also recognize how their product development and sales strategies need to change to meet the needs of their customers and of the market as a whole.

Business Access, The Cloud, And Security

Access governance continues to be a surging market across many different industries around the globe, and organizations are investing resources in technology that can efficiently improve processes and secure their networks. While the cloud has been established as a standard for organizations, access governance for managing cloud solutions has not yet become a standard part of the cloud toolset. Perhaps the question remains: how does access governance apply to the cloud?

Access governance helps organizations of all sizes in every industry by ensuring that each employee has the correct access to the systems they need to perform their jobs, while keeping the company’s network secure. Access management, specifically, allows organizational leaders to easily manage accounts and access, and is put in place to verify that access is correct for security reasons.

This works by setting up a model of exactly the access rights each role in the organization requires. Access rights are created for specific roles in each relevant department. For example, an IT department manager needs certain access rights to systems, applications, and resources beyond what other employees will need. This allows the person who is creating an account to do so easily, without accidentally making access mistakes: giving the employee either too many rights or too few.
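A minimal sketch of the kind of role model described above follows; the role names and entitlements are invented for illustration, and real access governance products store and enforce this model centrally rather than in application code.

```python
# Minimal role-based access model: each role maps to exactly the entitlements
# that role needs. Role and system names are invented for illustration.
ROLE_MODEL = {
    "it_department_manager": {"erp", "ticketing_admin", "network_monitoring", "email"},
    "senior_recruiter": {"hr_suite", "applicant_tracking", "shared_drive_hr", "email"},
}

def provision(role: str) -> set:
    """Return the entitlements a new hire in `role` should receive -- nothing more, nothing less."""
    try:
        return set(ROLE_MODEL[role])
    except KeyError:
        raise ValueError(f"No access model defined for role {role!r}")

print(sorted(provision("senior_recruiter")))
```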

Separation of Duties

Access governance lets organizations grant correct access rights according to a model that leadership has established, provided there are no errors or omissions in the model. Large organizations typically have many types of positions, and the responsibilities of the employees in those positions may overlap, so controls are needed over whether the same person may both initiate a request and then also approve it.

Reconciliation is another way to ensure access rights remain accurate. It compares how access rights are set up in the model to how they actually are, and creates a report on any differences found. In this way, anything that is not accurate can be easily corrected.
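In code terms, reconciliation amounts to diffing the modeled entitlements against the rights that actually exist in each system and reporting the differences. A minimal, self-contained sketch, with invented account and system names:

```python
# Reconciliation sketch: compare modeled access rights with actual rights and
# report excess and missing entitlements. Users and systems are invented.
expected = {
    "jsmith": {"hr_suite", "applicant_tracking", "email"},
    "akhan": {"erp", "ticketing_admin", "email"},
}
actual = {
    "jsmith": {"hr_suite", "applicant_tracking", "email", "finance_reports"},
    "akhan": {"erp", "email"},
}

def reconcile(expected, actual):
    """Return {user: (excess_rights, missing_rights)} for every user that differs."""
    report = {}
    for user in expected.keys() | actual.keys():
        want = expected.get(user, set())
        have = actual.get(user, set())
        excess, missing = have - want, want - have
        if excess or missing:
            report[user] = (excess, missing)
    return report

for user, (excess, missing) in reconcile(expected, actual).items():
    print(f"{user}: revoke {sorted(excess)}, grant {sorted(missing)}")
```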

Attestation is still another form of checking access and helps verify all information. A report is forwarded to the managers of a department for them to verify that all users and their rights are accounted for and that everything in the log is correct. The manager of whichever department is being verified reviews the report and either marks access rights for deletion, changes them immediately, or creates a helpdesk ticket to change them. After examining all of the rights, the manager gives final approval for the proposed set of changes to ensure that everything is correct.

Why is Access Governance Important in the Cloud?

As the number of employees working remotely increases, so does the number of users of cloud applications. Access governance is then a way of ensuring security for these types of applications and for employees who are not working in the physical office.

When an employee is first hired by an organization, it is extremely common for the employee to receive too many rights, or to acquire them while working on projects and never have them revoked even after the projects have ended. Access rights, unfortunately, are frequently overlooked and are not considered important enough to revoke, especially in regard to cloud applications. Access governance ensures that access is correct across the entire organization, from in-house applications and cloud applications to physical resources such as cell phones.

Organizational access can be easily monitored through the use of access governance. Here’s why this is important: the typical process goes a little something like this — a new employee is hired in the human resources department as a senior recruiter and needs accounts and resources created so he or she can begin work. The employee then automatically receives a Coupa cloud account, PeopleSoft access, access to the department’s shared drive, and an email address, for example. He or she is ready for work.

For those that participate in such practices, the process looks a little like this: rules are established so that once a quarter (or whatever interval), the business manager receives a report of all of the employees in his or her department and the access rights of those individuals. When new employees are added to the roles, the list is updated. Then, two quarters later, the manager sees that the senior recruiter has access to an application he or she had been using for a project that is now completed, or to a system he or she never needed at all. Thus, because of advanced access management protocols, the business manager, or other departmental leader, can easily tag the access to be revoked and ensure that it is done right away. No multi-level manual processes; with the click of a button, the employee’s access to a specific system, or to all systems, can be revoked. That’s the added value of a security measure.

Business leaders have many types of applications to manage, many working situations for employees (traveling, working offsite, or working onsite in the office), and varying resources, all of which affect access governance and the technology that supports it. Likewise, leaders that invest in access governance solutions improve security while allowing employees to remain productive, saving their organizations time and money.

So You’ve Transitioned To The Cloud – Now What?

I’m willing to bet that when Chinese philosopher Lao-Tzu coined his famous phrase around 500 B.C., “The journey of a thousand miles begins with a single step,” he wasn’t thinking about the time it takes to migrate legacy data center operations to the public cloud. But it couldn’t be more applicable.

For many IT departments, shifting operations to the public cloud can be a long, daunting, and frustrating process. However, it doesn’t have to be. Understanding where the public cloud migration journey begins and where it will ultimately end allows IT professionals to ensure that the first step — and all subsequent steps — are taken in the right direction. And well before the cloud journey actually begins, it’s critical that all stakeholders involved understand the value of moving some or all their IT operations to the public cloud. No one wants to walk a thousand miles in the wrong direction.

While the enormous potential of the public cloud has been well documented, realizing that potential in terms of both quantitative ROI and measurable qualitative benefits requires that a plan be developed and implemented to achieve specific desired results. The reality for most companies embarking on this path is that they can’t do it alone. They require a partner that has experience; they need a “Cloud Sherpa” — a partner who can ensure that their journey into uncharted IT territory will be safe and successful. By moving some or all of their applications to a third-party expert’s management and care, IT departments can better focus on their specific objectives, which can deliver significant bottom-line results for the organization.

The main benefits of transitioning to the cloud are agility, increased scalability, reduced total cost of ownership (TCO), and improved security. To help you reach those results, below are five main steps for companies implementing the public cloud, along with thoughts on how a third-party provider’s management and care could aid the process.



Begin with the end in mind.

It’s key to keep the long game in mind when planning the move to the public cloud. It starts with identifying challenges to be solved and opportunities to be pursued. Make sure all stakeholders are kept in the loop and involve them in the process. CEOs are typically more open to new applications that increase sales and improve customer satisfaction. CFOs, however, often put more emphasis on cost containment and profit-building, and CIOs usually want service-level improvements. By keeping these individuals in the loop, you increase your chances of success because you will have executive leadership buy-in.

Take stock.

Once you have pinpointed the company’s IT goals, it’s important to conduct a high-level inventory or a refresh of the current list of all the apps being used across your enterprise. The appropriate teams and departments conducting this inventory may uncover utilities, databases, and websites you may have missed. Include information about the purpose of the application, who uses it, and the sensitivity and importance of the data to the business. In order to chart your path to the cloud, you need to know the current state of all apps being utilized.

Map demand.

Mapping demand is crucial in strategizing your move to the public cloud. Ask managers to project growth for existing apps over the next three years, and include new apps the company will take online. By identifying and anticipating future traffic levels and spikes, the team can plan accordingly and be ready for increased growth.
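One simple way to turn those growth estimates into capacity numbers is straight compound growth per application. The sketch below is illustrative only; the application names, current peaks, and growth rates are placeholders to be replaced with your managers’ estimates.

```python
# Project three years of demand per application from manager-supplied annual
# growth estimates. All application names and figures are placeholders.
apps = {
    # app: (current peak concurrent users, estimated annual growth rate)
    "customer_portal": (1_200, 0.40),
    "internal_wiki": (300, 0.10),
}

def project(current: int, annual_growth: float, years: int = 3) -> list:
    """Compound the current peak forward for `years` years."""
    return [round(current * (1 + annual_growth) ** y) for y in range(1, years + 1)]

for app, (current, growth) in apps.items():
    print(app, project(current, growth))
# customer_portal -> [1680, 2352, 3293]; internal_wiki -> [330, 363, 399]
```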

Determine the best cloud candidates.

Review your inventory of applications and data and find the best candidates for cloud migration and implementation. You can pick and choose which apps to move to the cloud and which can stay running on-premises. Apps that experience spiky demand or involve parallel processing (e.g., batch jobs) are naturally a better fit for the public cloud. This is also true for apps requiring disaster recovery (DR) or broad geographic placement.

Decide how far you’ll go.

The beauty of the cloud is that you don’t have to go 100% in all at once. Usually, legacy apps should be left where they are, as they typically can’t benefit from cloud scalability. On the other hand, if you have a pending capital expenditure (CAPEX) investment to refresh infrastructure providing legacy apps, it may make sense to move them to the cloud. Scenarios such as this explain the growing popularity of the hybrid cloud amongst enterprises. Low-risk operations such as project management, file sharing, and any other non-revenue generating applications are all low-hanging fruit that can be moved into the public cloud. With cloud, you can start small and grow at the pace that suits your business.

Any IT trek into new territory is bound to encounter unforeseen issues and challenges. With all the factors to consider when migrating to the cloud, it’s beneficial to have seasoned experts that have successfully managed the transition before. The process is much more streamlined with a guide walking you through it, step by step. Once you’ve narrowed down the list of possible managed public cloud partners, the journey begins.
