Build an application migration plan step by step

To migrate apps to the cloud, start with nonessential workloads and move on to more mission-critical ones from there. But remember that some apps might not be suited to the cloud at all.


Servers from 3W Infra

3W Infra has launched a new dedicated server plan based on Dell PowerEdge R230 hardware. The entry-level servers are aimed at SMBs and managed cloud hosting companies and are meant to provide cost-effective compute power for low-traffic websites, virtualization, development, cloud storage, and relatively small web and email applications.

3W Infra’s new dedicated server package based on Dell PowerEdge R230 server hardware will replace the company’s existing entry-level dedicated server offering based on an earlier hardware version, the R220.

Key features of the new 3W Infra dedicated server packages for SMBs and managed cloud hosting companies include: 

  • 1x Intel E3-1240 v5 4-Core CPU
  • Up to 64 GB DDR4 RAM
  • Dell Remote Access Controller (iDRAC8 Enterprise)
  • Software RAID
  • Up to 4x SATA HDDs or SSDs
  • Up to 20Gbps dedicated uplink to 3W Infra’s high bandwidth (160Gbps), fully redundant global network
  • Operating Systems: Windows Server 2016, VMware ESXi, CentOS, Ubuntu, Debian, or BSD.
  • Free-of-charge dedicated server setup

Transit-Only Global Network

3W Infra’s new dedicated server plan comes with a dedicated uplink to the company’s high-bandwidth (160 Gbps) global network. This network is designed to cater to the needs of the most demanding 3W Infra customers, including streaming media companies and online gaming providers.

The proprietary global network deliberately follows a transit-only strategy, designed to suit the demanding requirements of business-critical applications such as streaming and gaming. Compared with the peering strategy often used to build a global network, the transit-only approach is intended to give 3W Infra’s customers an enterprise-grade 100% uptime guarantee.

Just before summer, 3W Infra migrated its entire IaaS hosting infrastructure to a new data center hall in Switch Datacenters’ Amsterdam AMS1 facility, built to Tier 4 and OCP (Open Compute Project) specifications. The newly launched entry-level dedicated server package will be housed in this Amsterdam data center. The attached global network is meant to provide efficient, low-latency global reach.

For the new servers, this data center infrastructure means the entry-level dedicated server plans come with enterprise-grade high-availability and scalability features, such as 2N power infrastructure, modular power usage, a redundant 10 Gbps (up to 20 Gbps) full-duplex fiber uplink, and ample data center space for future growth.

ISO 27001, PCI-DSS Certification 

The upgraded entry-level server plans complement 3W Infra’s complete IaaS hosting portfolio, which also includes mid-range and high-end dedicated servers for dynamic and demanding use cases such as high-performance cloud computing infrastructure and big data.

Founded three years ago, 3W Infra is now working toward ISO 27001 certification for information security management and is also taking steps to achieve PCI-DSS compliance, ultimately to ensure the safety of payment card data stored within its IaaS hosting infrastructure. The 3W Infra management team expects to complete the ISO 27001 and PCI-DSS certifications before year-end 2017.

www.3winfra.com


Data Center Platform from Cloudistics®

Cloudistics has announced the release of Cloudistics FLARE, a simplified data center platform optimized for SMB, ROBO, and retail environments. At the same time, it announced the launch of Cloudistics v3.3, which now features deduplication and broader support for larger and more diverse applications.

Cloudistics FLARE

Cloudistics FLARE is ideally suited to the needs of smaller facilities. By combining all the physical resources of a small data center with Cloudistics enterprise software, FLARE provides a single, easy-to-manage solution that requires no special skills and can be deployed in a matter of minutes. The FLARE package includes:

  • Pre-packaged servers, storage, networking, virtualization, and management software.
  • Plug-and-play, easy to deploy, and simple single-pane-of-glass management from anywhere.
  • Built-in services such as an application marketplace, data protection, disaster recovery, firewalls etc.
  • Hosted SaaS (Software-as-a-Service) management interface.
  • Limitless scaling: grow as your application demands
  • Small footprint and energy efficiency
  • Simple, all-inclusive pricing

FLARE is an all-inclusive data-center-in-a-box that incorporates the key infrastructure components (network, storage, compute, and virtualization) and can be managed from anywhere through the web-based SaaS Ignite user interface.

Cloudistics v3.3

The Cloudistics v3.3 release is an update to the software component of the company’s complete on-premises cloud platform. The new release delivers several features that enhance scalability and productivity and increase customers’ return on investment by supporting larger and more diverse application workloads.

The new release includes these notable updates:

Compression and Deduplication with Single-Instance – Cloudistics Storage Blocks will natively include compression and deduplication with single-instance storage for data reduction. This will help customers reduce their storage footprint by a factor of 3x to 5x, increase data density, and lower facilities’ power and infrastructure costs. Customers deploying workloads will achieve this storage reduction without any loss in performance, reliability, or scalability. A fully populated storage block with thirty-two 4TB flash drives will deliver over 500TB of effective storage.
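As a rough check on those figures, the arithmetic is straightforward. The short Python sketch below uses only the 3x-5x reduction range and the thirty-two 4TB drive configuration quoted above; actual savings depend entirely on how compressible and duplicate-heavy the stored data is.

# Rough effective-capacity estimate for a fully populated storage block.
# Uses the 3x-5x data-reduction range quoted in the text; real-world
# results vary with the workload.
DRIVES = 32
DRIVE_TB = 4
raw_tb = DRIVES * DRIVE_TB          # 128 TB of raw flash

for reduction in (3, 4, 5):
    print(f"{reduction}x reduction -> ~{raw_tb * reduction} TB effective capacity")

At the 4x midpoint, 128TB of raw flash works out to roughly 512TB effective, which is where the "over 500TB" figure comes from.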

Support for the soon-to-be-released 14th-generation Dell EMC PowerEdge FC640 half-width modular servers as compute nodes – This is in addition to the existing support for the Dell EMC PowerEdge FC430 and FC630. The new servers incorporate Intel® Xeon® Scalable processors and deliver high performance with best-in-class density and exceptional scalability. A large memory capacity of up to 2TB per node makes them a strong building block for large and dense workload scenarios.

Expanded Application Support for a wide variety of application workloads based on Microsoft Windows, Linux, BSD, and OpenSolaris – This enables customers to migrate or deploy an increasing variety of workloads on the platform, including virtualized legacy applications, purpose-built virtual appliances, and more.

www.cloudistics.com 
 


IT Service Management from ManageEngine

ManageEngine has updated the cloud version of its flagship ITSM product, ServiceDesk Plus. With the ability to launch and manage multiple service desk instances on the go, organizations can now leverage proven IT service management (ITSM) best practices to streamline business functions for non-IT departments, including HR, facilities, and finance. Available immediately, the ServiceDesk Plus cloud version comes loaded with built-in templates unique to various business processes, giving users the flexibility to perform codeless customizations for quick and easy deployment of business services.

Within any organization, employees consume services provided by various departments on a daily basis. While each department offers unique services, the processes and workflows associated with those services follow a pattern similar to that of IT service management. However, organizations often implement ITSM workflows only within their IT department, seldom leveraging these ITSM best practices to manage service delivery across other departments.

Becoming a Rapid-Start Enterprise Service Desk

To date, ServiceDesk Plus has focused on providing ITSM best practices to the IT end of business. By discovering the common thread between the different service management activities within an enterprise, ServiceDesk Plus is now able to carry its industry-leading capabilities beyond IT. As an enterprise service desk, ServiceDesk Plus helps organizations instantly deploy ITSM solutions for their supporting business units by providing:

  • Rapid deployment: Create, deploy, and roll out a service desk instance in less than 60 seconds.
  • Single enterprise directory: Maintain users, service desks, authentications, and associations in one place.
  • Unique service desk instances: Create separate service desk instances for each business function and facilitate organized service delivery using code-free customizations.
  • Service automation: Implement ITSM workflows to efficiently manage all aspects of the business service life cycle.
  • Built-in catalog and templates: Accelerate service management adoption across departments by using prebuilt templates and service catalogs unique to each business unit.
  • Centralized request portal: Showcase all the services that end users require using a single portal based on each individual’s access permissions.

www.manageengine.com


Facing Up To The IT Shadow

Certain psychological schools of thought posit the existence of “the shadow,” a scary figure which lurks in the darkness of the psyche affecting everything we do.

The decentralized, virtualized environment that now characterizes business IT architecture has also given rise to a shadow. And just like its psychological counterpart, so-called shadow IT operates out of sight of business management and can sometimes appear dangerously out of control. It can be painful to look at, but if IT professionals fail to deal with shadow IT, it has the potential to do severe damage in terms of data loss and non-compliance fines.

Facing Up to the Shadow

Shadow IT can be thought of as the sum of all the network assets not directly authorized and controlled under your current business IT policies. It includes, but is not limited to, devices such as unauthorized smartphones and tablets, cloud services like Dropbox and Google Docs, and third-party applications. For a responsible IT professional, ignoring shadow IT is not a viable long-term strategy.

First, ignoring shadow IT allows it to continue and grow in secret, increasing its ability to undermine security and utilize network resources.

Second, the difference between your authorized IT and shadow IT may not be appreciated by those higher up the corporate food chain. To the leadership team, if something breaks and it is due to IT, the buck stops with the IT department. Ignorance may turn out to be no defense should your company lose data or be financially impacted by untamed shadow IT.

Third, by actively getting a grip on shadow IT, quantifying it, and bringing the issue to the board, you are more likely to secure the respect of the leadership team and even procure additional resources to help you do your job.

Finally, anything that harms the business as a whole will harm you as a department and as individual employees. There is no valid case to be made for ignoring shadow IT.

How to Detect Shadow IT

Once you have decided to face the nightmare of shadow IT, the first step is to incorporate it into your existing network monitoring system.

You will most likely already have network management software in place that can monitor the assets used by users who are logged in to their company accounts. By analyzing each user’s assets, you can determine whether non-authorized devices or services are being accessed.

Nevertheless, you should still set alerts for the appearance of new and unknown devices on the network and carefully compare scans to pinpoint when and how they are making a connection. Regularly checking process logs from firewalls and proxies for evidence of shadow IT is also advisable.
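As a simple starting point, the scan comparison described above can be scripted. The sketch below is illustrative only: it assumes the authorized asset inventory and the latest network scan can each be exported as a CSV file with a mac_address column (the file names and column heading are hypothetical), and it flags anything seen on the network that is not in the inventory.

# Minimal sketch: flag devices seen on the network that are not in the
# authorized asset inventory. File and column names are placeholders;
# adapt them to whatever your scanner and asset database actually export.
import csv

def load_macs(path, column="mac_address"):
    """Return the set of MAC addresses found in one CSV export."""
    with open(path, newline="") as f:
        return {row[column].strip().lower() for row in csv.DictReader(f)}

authorized = load_macs("asset_inventory.csv")   # what IT has approved
seen = load_macs("network_scan.csv")            # what the latest scan found

for mac in sorted(seen - authorized):
    print(f"Unknown device on the network: {mac}")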

As with all IT monitoring and troubleshooting processes, the frequency and granularity of scans will need to be weighed against the resource cost. But if extensive shadow IT is suspected, creating a dedicated shadow IT project is well worth considering, particularly when factoring in the potential privacy, security, and compliance issues involved.

Using Specific Shadow IT Detection Software

There are now countless apps, virtual services, and cloud providers, and it can be almost impossible to identify and trace their signatures from a firewall log.

As part of your shadow IT clean-up drive, it is worth considering the shadow IT-specific software that is increasingly available to IT professionals.

Some software can monitor the network for thousands of different applications and cloud services not yet categorized by firewalls and proxies, simplifying and speeding up the shadow IT detection process. Access counts, traffic patterns, and usage trends add further information, building up a fuller picture of the extent of shadow IT exposure.

Some services can assist hard-pressed IT professionals even further by analyzing and categorizing cloud services in terms of risk, helping them prioritize the services and platforms that pose the greatest security risk. As you would expect, data can be modified and customized to suit individual company risk profiles, and reports can be filtered and converted into various formats (CSV, Excel, PDF, etc.) to help present the data in a meaningful way.
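Even without dedicated software, a rough version of that picture can be assembled from existing logs. The sketch below is a hedged illustration rather than a vendor feature: it assumes proxy log entries already parsed into (user, domain, bytes) records and a hand-maintained risk rating per domain, and it writes a per-service summary with access counts, user counts, and traffic volume out to CSV for reporting.

# Illustrative only: summarize proxy traffic per cloud service, attach a
# risk rating, and export to CSV. The sample records and risk table are
# assumptions; real logs and risk feeds will look different.
import csv
from collections import defaultdict

records = [                      # (user, domain, bytes) parsed from a proxy log
    ("alice", "dropbox.com", 120_000_000),
    ("bob",   "wetransfer.com", 45_000_000),
    ("alice", "dropbox.com", 80_000_000),
]
risk = {"dropbox.com": "medium", "wetransfer.com": "high"}  # hand-maintained

usage = defaultdict(lambda: {"accesses": 0, "bytes": 0, "users": set()})
for user, domain, size in records:
    usage[domain]["accesses"] += 1
    usage[domain]["bytes"] += size
    usage[domain]["users"].add(user)

with open("shadow_it_report.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["service", "risk", "accesses", "users", "bytes"])
    for domain, stats in sorted(usage.items()):
        writer.writerow([domain, risk.get(domain, "unknown"),
                         stats["accesses"], len(stats["users"]), stats["bytes"]])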

Some services include policy enforcement capability, restricting access rights and, when integrated with firewalls and proxies, helping to identify new and insecure configurations.

Embracing the Shadow

Recognizing and facing up to the shadow — and shadow IT — is the easiest part. The road to healing comes from embracing it, no matter how much it worries and disturbs you.

This is not something that the IT department can or should be wholly responsible for. Shadow IT is a signal that the business is either not providing a critical service or tool, or that the tools it does provide are not fast or smart enough. After all, if the head of finance is logging on to a personal smartphone to access company accounts out of hours, is it because they don’t have the option of a company-owned device? If the business is asking the marketing team to deliver multi-gigabyte files to third parties with nothing but an email account to work with, is it any wonder they are using Dropbox or WeTransfer?

Rather than blaming employees for using shadow IT and banning it (which is unlikely to work anyway), a more productive stance is to ask them what they need to do their jobs and to look for in-house solutions. Some companies declare a shadow IT amnesty, whereby employees are invited to safely disclose any non-authorized IT they are using, with a view to finding sanctioned alternatives rather than punishing them.

The business can then follow this “no questions asked” policy with a deep security audit in which existing policies are refined and redrafted and automated policy controls are installed, with any future changes requiring approval from the leadership team. Policy actions might include blocking access to the highest-risk services altogether and restricting access to others (e.g., setting permissions to “read-only” either across the board or depending on user role).

Shadow IT is a fact of the modern workplace, arising from the increasing availability of enterprise-grade technology in the public sphere. Although facing and sizing up the shadow is a necessary first step, only by truly embracing its existence can a business draw the necessary lessons and use these to neutralize the very real danger it poses.


Keys To Agile Software Development

Organizations want to move to a continuous integration/continuous deployment (CI/CD) model in software development and the cloud can help, but there are key obstacles standing in the way. For example, in many software companies, the IT operations (IT Ops) team has a lengthy procurement and provisioning cycle, and developers may have to wait anywhere from several days to a few weeks for IT Ops to handle each request for new tools or workspaces.

In addition, there is a lack of automation in IT Ops procurement and provisioning cycles. For example, developers may be using an outdated IT ticketing system to request infrastructure and software — each time a developer issues a ticket request, an IT Ops person must follow a certain workflow to fulfill that request, and must complete each step before sending the ticket on to the next IT Ops person in the chain.

Silos are another obstacle. In large enterprises, IT Ops teams have silos — individuals or sub-teams with different responsibilities at different levels. Each individual or sub-team fulfills certain tasks for a Dev/Test request separately from the others. These silos turn IT Ops procurement into a series of handoffs, where each team member must complete their assigned tasks before handing the job off to another team member. This results in delays, where DevOps people must wait for requests to move through the IT Ops pipeline as each person in each silo performs their separate tasks. The silos create a lack of collaboration between separate IT Ops teams, and between IT Ops and DevOps, when ideally they should all be working together to accelerate the procurement process.

A self-service private cloud will remove these obstacles, helping to clear the IT Ops pipeline while providing developers, testers, and QA people with the IT infrastructure and tools they need for ongoing CI/CD development. According to Gene Kim, “When self-service provisioning can be done quickly and consistently through complete virtualization, you eliminate the obstacles to give developers, testers, and QA the environments they need for continuous integration and continuous deployment.”

The following are some attributes of a self-service automated cloud.

On-Demand Self-Service Provisioning

A self-service private cloud should offer a set of automated provisioning tools so developers and testers can create their own Dev/Test environments. It should include both a self-service user interface and an API-driven, infrastructure-as-code interface that lets developers create VMs and databases, access storage, set up network connections, and more through RESTful API calls.
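To make the infrastructure-as-code idea concrete, here is a minimal sketch of what such a provisioning call might look like from a developer’s side. The endpoint URL, request fields, and authentication scheme are hypothetical placeholders, not any particular platform’s API; a real private cloud will define its own.

# Hypothetical example of self-service provisioning over a REST API.
# The endpoint, payload fields, and token handling are invented for
# illustration; substitute your platform's actual API.
import json
import urllib.request

API = "https://private-cloud.example.internal/api/v1"
TOKEN = "REPLACE_WITH_API_TOKEN"

def create_vm(name, vcpus, memory_gb, disk_gb):
    payload = json.dumps({
        "name": name,
        "vcpus": vcpus,
        "memory_gb": memory_gb,
        "disk_gb": disk_gb,
    }).encode()
    req = urllib.request.Request(
        f"{API}/vms",
        data=payload,
        headers={"Authorization": f"Bearer {TOKEN}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)       # e.g., the new VM's ID and state

if __name__ == "__main__":
    print(create_vm("dev-test-01", vcpus=2, memory_gb=8, disk_gb=40))

A call like this, wrapped in a script or pipeline step, is what lets Dev/Test environments be created and torn down on demand without a ticket to IT Ops.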

In essence, a self-service private cloud automates the provisioning process by letting developers and testers manage their own initial deployments and configurations. This removes many of the confusing and error-prone manual steps that IT Ops people must go through to deliver the infrastructure and software stacks that developers need. It also removes the silos within the IT Ops organization, as it puts provisioning of individual elements (VMs, databases, storage, etc.) in the hands of developers.

An Intelligent and Well-Organized User Interface

A self-service private cloud should have an intelligent user interface (UI) that gives developers access to a set of common infrastructure tools (e.g., VMs, Oracle or SQL Server databases, storage, network connections) and applications. This eliminates the old ticketing-based processes in which users must submit multiple tickets to IT Ops to build their physical and software stacks, allowing developers to set up their own DevOps environments on the private cloud with just a few clicks.

Ideally, a self-service UI should allow IT Ops administrators to assign resources in an organized manner. They should be able to designate business units (BUs) on the UI (e.g., “AppDevTeam1,” “WebDevTeam1”) based on units within the company; assign team members to BUs; and designate current projects (e.g., “AppDevProject1”) on the UI according to BU. They should also be able to assign quotas to each BU (e.g., AppDevTeam1 gets 100 VMs, 25 vCPU cores, 100 GB memory, and 400 GB storage), and to each individual project (e.g., AppDevProject1 gets 10 VMs, 5 vCPU cores, 8 GB memory, and 40 GB storage).
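To illustrate how BUs, projects, and quotas relate, here is a small Python data model using the example figures from the text. The structure and field names are assumptions for illustration, not any vendor’s schema; it simply refuses to create a project whose quota would push the BU past its own allocation.

# Illustrative quota model: each business unit (BU) owns a quota, and each
# project draws from its BU's allocation. Names and numbers follow the
# examples in the text; the structure itself is an assumption.
from dataclasses import dataclass, field

@dataclass
class Quota:
    vms: int
    vcpus: int
    memory_gb: int
    storage_gb: int

    def fits_within(self, other: "Quota") -> bool:
        return (self.vms <= other.vms and self.vcpus <= other.vcpus
                and self.memory_gb <= other.memory_gb
                and self.storage_gb <= other.storage_gb)

@dataclass
class BusinessUnit:
    name: str
    quota: Quota
    projects: dict = field(default_factory=dict)   # project name -> Quota

    def add_project(self, name: str, quota: Quota) -> None:
        used = Quota(
            vms=sum(q.vms for q in self.projects.values()) + quota.vms,
            vcpus=sum(q.vcpus for q in self.projects.values()) + quota.vcpus,
            memory_gb=sum(q.memory_gb for q in self.projects.values()) + quota.memory_gb,
            storage_gb=sum(q.storage_gb for q in self.projects.values()) + quota.storage_gb,
        )
        if not used.fits_within(self.quota):
            raise ValueError(f"Project {name} would exceed {self.name}'s quota")
        self.projects[name] = quota

bu = BusinessUnit("AppDevTeam1", Quota(vms=100, vcpus=25, memory_gb=100, storage_gb=400))
bu.add_project("AppDevProject1", Quota(vms=10, vcpus=5, memory_gb=8, storage_gb=40))
print(bu.projects)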

Application Store

A self-service private cloud should have an online “application store” that provides single-click access to common development tools and services, such as:

  • CI/CD tools such as Jenkins, Git, Maven, and JUnit
  • Workload management tools such as Ansible, Puppet, and Chef
  • Middleware services such as RabbitMQ and Redis
  • Storage back-ends such as MySQL, Postgres, Cassandra, and MongoDB

The private cloud should also allow IT Ops to place commonly used applications on the self-service UI to give developers and testers even easier access to them. For example, IT Ops can make a duplicate of the production environment available on the UI for testers. They can then update the cloned production environment every few months to make sure testers are using the most current and accurate duplicate.

Seamless, Two-Way Cloud Migration

A self-service hybrid cloud should have seamless two-way migration so that developers and testers who have public cloud access can easily move applications and workloads back and forth between public and private clouds.

Administrative Dashboards

A self-service private cloud solution should have a set of administrative dashboards that allow the IT Ops team to control access to resources on the UI. These dashboards should give admins complete visibility into which teams and team members are utilizing which resources, and allow the IT Ops team to perform admin tasks such as:

  • Creating user accounts and passwords on the UI
  • Creating BUs, and projects for each BU, on the UI
  • Assigning developers and testers as “members” of a certain BU or project
  • Assigning self-service infrastructure tools (compute, storage, network) on the UI, and setting quotas for those tools according to BU or project
  • Importing new applications to the application store or the UI

A Software-as-a-Service (SaaS) Platform

A SaaS platform with portal-based access is the best way to deliver a self-service private cloud solution. A SaaS platform provides the flexibility to upgrade and customize the various features of the UI, application store, and administrative dashboards. This flexibility gives the SaaS private cloud solution the potential to become the de facto standardized platform for CI/CD and deployment in agile software development.

Conclusion

Both Dev/Test and IT Ops teams are under pressure to support the demands of agile development, but they have different goals. Developers and testers want to achieve faster time to market in delivering applications to their customers. Meanwhile, IT Ops teams want to achieve a faster time to value in delivering IT resources and applications to Dev/Test teams to support their goal of faster time to market.

As we’ve seen, the obstacles that hinder these goals — lengthy procurement and provisioning cycles, lack of automation, internal silos that create bottlenecks — are formidable, but can be overcome with the right technologies. A self-service, automated private cloud empowers developers and testers, giving them the tools they need to create their own DevOps environments. It also frees IT Ops teams from manual provisioning tasks, allowing them to provide developers and testers with IT resources in a more direct and timely manner. In short, a self-service private cloud helps both developers and IT Ops teams achieve their separate goals of faster time to market and time to value by clearing IT Ops obstacles to support the ongoing CI/CD cycle.


The Cloud Goes Underground

Extreme weather, seismic events, and even rodents have compromised the physical security of cloud servers and other data center infrastructure. Selecting an underground colocation facility that exceeds industry standards provides a solution to these and other threats.

With cyberattacks such as Petya and WannaCry making big headlines recently, it’s understandable that fortifying cybersecurity is top of mind for many CIOs. Last May, WannaCry infected 200,000 computers in 150 countries, including the U.S., UK, Russia, and China, as it attacked hospitals, banks, and telecommunications companies.

A mere six weeks later, Petya struck, first hitting targets in Ukraine, including its central bank, main airport, and even the Chernobyl nuclear power plant, before quickly spreading and infecting organizations across Europe, North America, and Australia. Its victims included a UK-based global advertising agency, a Danish container shipping company, and an American pharmaceutical giant.

Virtual Security Is Only Half the Equation

According to the 2017 BDO Technology Outlook Survey, 74% of technology chief financial officers say cloud computing will have the most measurable impact on their business this year, while IDC predicts that at least 50% of IT spending will be cloud-based in 2018. Although cyberattacks remain a significant threat in this environment, it’s important to remember that virtual security is only half of the equation. With the cloud growing ever more critical to businesses, ensuring the physical security of cloud servers is also essential.

Physical security at the colocation or data center facility is critical to effectively safeguarding not only cloud computing, of course, but also mission-critical business applications, data storage, networking, and computing related to Big Data analytics and emerging technologies such as artificial intelligence and IoT-enabled devices. To be fully secure, companies must ensure that their colocation provider can deliver a high level of physical resilience on-site. As evidenced by the devastation wreaked by Hurricanes Harvey and Irma, these physical threats include extreme weather events, but also seismic disturbances, breaches by unauthorized intruders, and, given the current geopolitical climate, terrorism.

Explosives and Squirrels

In recent years, many customers have deprioritized physical security on their data center to-do lists. However, physical threats remain real and have the potential to become much more sophisticated. As the late Uptime Institute founder Kenneth Brill wrote, “The oldest cyber frontier is actual physical attack or the threat of attack to disable data centers. Previously in the realm of science fiction, asymmetrical physical attacks on data centers by explosives, biological agents, electromagnetic pulse, electric utility, or other means are now credible.”

While electromagnetic pulses do sound like the stuff of science fiction, some physical security breaches perpetrated against data centers have been more suggestive of a Quentin Tarantino crime drama, and others of a Pixar animated movie starring woodland creatures. This is not to minimize the economic impact of these attacks or the damage they have caused to business reputations.

Consider the Chicago-based data center that experienced a physical security breach not once but twice in the span of two years. In the first breach, a lone IT staffer working the graveyard shift was held hostage and his biometric card reader taken from him, allowing the masked assailants to enter the facility freely. They made off with computer equipment valued at upwards of $100,000. In the second, resourceful miscreants managed to break through a wall using a chainsaw and stole servers.

Yahoo once saw half its Santa Clara data center taken down by squirrels that managed to chew their way through power lines and fiber-optic cables. Google “Yahoo and frying squirrels” if you think this episode is referenced merely for entertainment purposes. It is not.

Among the most infamous physical breaches to have taken place was a 2011 attack on Vodafone’s data center in Basingstoke, England. A gang broke in and stole servers and networking equipment, causing systems to go down and the telecom company’s business reputation to suffer greatly.

The Rock-Solid Safety of Colocating Underground

For some companies, the cloud and IT infrastructure altogether have moved even farther from the skies to underground data centers. Data center operators have been retrofitting underground bunkers into functional data centers for many years. But as security and energy demands as well as concerns about terrorism have lately intensified, there’s an increasing trend towards building subterranean colocation facilities to host mission-critical infrastructure and data.

Today, you’ll find underground data center facilities in Lithuania, the Netherlands, Switzerland, Ukraine, the United Kingdom, and Sweden, as well as the U.S. Some of these facilities were previously the site of mining operations while others were originally Cold War era bunkers designed to protect citizens in the event of a nuclear attack.

Surrounded by rock, underground data centers are highly physically secure, and since subterranean temperatures are naturally regulated, environmental conditions are more efficient to maintain. But not all underground data centers are created equal. Key design factors to consider during the site selection process include utilities infrastructure, the availability and capacity of fiber-optic systems, exposure to natural and man-made risks, and how well the physical perimeter of the facility can be secured.

The issue of location is especially critical, and any data center selection needs to consider whether the facility is in a flood zone or if the region has an unstable seismic profile. The most fortified underground data centers also implement multi-layered security access methods, including visual inspections from multiple 24×7 guard stations, keycard access, video monitoring, and biometric scanning. Best practices incorporate mantraps and restrictive access policies for each customer’s space, providing security within each zone of the facility.

To ensure business continuity, all critical infrastructure of a subterranean data center should ideally be located underground. This extends to dual utility feeds backed up by two MW generators and N+1 critical infrastructure components, including UPS, chillers, and battery backup. Such a design is further enhanced when the facility is SOC 2 Type 2 certified, giving customers confidence in the provider’s 100% uptime guarantee, if indeed one is offered at all.

And because connectivity means everything, subterranean facilities should also have access to high-speed, carrier-class internet and data services through a fiber network that runs in and out of the data center via multiple fiber paths and entrances.

From presidential bunkers and NORAD facilities hosting military analysts, to scientists studying astrophysics in subterranean laboratories, and to Warner Brothers film archives stored safely away from the elements, some of the world’s most essential personnel, valued assets and activities are located underground, protected from natural and most man-made disasters. So, why should your cloud servers and critical data be any different?

But the crux of the matter is this: while cyberattacks are on the increase and the cloud can be vulnerable, the importance of physical data center security cannot be overstated. The underground data center is a prime example of using the earth itself to offer protection from natural and unnatural disasters.
