Blog

Why Reliable Cloud Performance Matters

People are right to be wary of any vendor claiming to be the “top performing!” or “fastest!” cloud provider. Most folks know that ANYTHING can look spectacular – or unspectacular for that matter – if you stack the deck just right. But at the same time, cloud shoppers have a deep hunger for legitimate information on realistic performance expectations. Cloud performance has a direct impact on what you spend on compute resources, how you decide the right host for your workload, and how you choose to scale when the need arises. In this blog post, we’ll summarize some recent findings and put them in context.

With the launch of our new Hyperscale instances, we approached an independent analytics company, CloudHarmony, and asked them to conduct an extended performance test that compared CenturyLink Cloud Hyperscale servers to the very best equivalent servers offered by AWS and Rackspace. CloudHarmony is a well-respected shop that collects data from dozens of benchmarks and shares the results publicly for anyone to dissect.  After running a variety of benchmarks over an extended window (rather than a single point-in-time snapshot), they shared their findings with the world.

The results were positive – as we’ll talk through below – but how do reliable performance metrics help you in your cloud journey?

More Bang for the Buck

In an ideal world, you want reliable performance at a fair market price and no hidden charges. In the CloudHarmony results, we saw that our Hyperscale SSD storage provided excellent disk read performance and strong disk write performance through a variety of tests. In the results below – run against AWS c3 servers and Rackspace Performance servers – you can see that Hyperscale has a fantastic IO profile for large block sizes.

Disk Read Performance

Why does this matter? Consider a database running on Microsoft SQL Server, which often works with 64 KB blocks. By running this workload on Hyperscale, you get persistent storage, high performance, and no charges for IO requests or provisioned IOPS. The result is predictable costs and fewer resources needed to achieve optimal performance.
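If you want a rough feel for how block size affects read throughput on your own instances, here is a minimal Python sketch that reads a scratch file in 64 KB blocks and reports throughput. The file name, file size, and block size are illustrative assumptions, and the OS page cache will inflate the result; purpose-built benchmark suites (like the ones CloudHarmony ran) remain the right way to compare providers.

```python
import os
import time

# A rough, self-contained illustration of reading a file in 64 KB blocks.
# The OS page cache will flatter these numbers, so treat the output as a
# relative indicator only, not a rigorous benchmark.

PATH = "scratch-read-test.bin"   # hypothetical scratch file name
BLOCK_SIZE = 64 * 1024           # 64 KB, similar to SQL Server extent-sized IO
FILE_SIZE = 256 * 1024 * 1024    # 256 MB sample file

# Create a sample file to read back.
with open(PATH, "wb") as f:
    f.write(os.urandom(FILE_SIZE))

start = time.perf_counter()
bytes_read = 0
with open(PATH, "rb", buffering=0) as f:
    while True:
        chunk = f.read(BLOCK_SIZE)
        if not chunk:
            break
        bytes_read += len(chunk)
elapsed = time.perf_counter() - start

print(f"Read {bytes_read / 2**20:.0f} MB in 64 KB blocks "
      f"at {bytes_read / 2**20 / elapsed:.1f} MB/s")

os.remove(PATH)
```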

Simplified Decision Making

Choice is great, but it can also be a paradox. When you’re faced with dozens of server types to choose from, you find yourself selecting a “best fit” that may compromise in one area (“too much RAM!”) in order to get another (“need 8 CPUs”). In CenturyLink Cloud, we have two classes of servers (Standard and Hyperscale), and both have been shown to deliver reliable performance. Pick whatever amount of CPU or memory makes sense – which is, of course, how traditional servers have always been purchased.

Choose Your Own VM Size

If built-in data redundancy doesn’t matter, but reliable, high performance does, choose Hyperscale. Need strong, consistent performance but want daily storage snapshots and a SAN backbone? Use Standard servers. Straightforward choices mean that you spend less time navigating a gauntlet of server types and more time deploying killer applications.

Predictable Performance & Scaling

Valid performance testing results can help you understand how best to scale an application. Should I add more capacity to this VM, or does it make sense to add more VMs to the environment? That’s a hard question to answer without understanding how the platform reacts to capacity changes. The CloudHarmony results not only showed that the CenturyLink Cloud Hyperscale CPU performed better than the others in the “Performance Summary Metric” that compared cloud servers to a bare metal reference system, but also showed that performance improved as CPU cores were added. That’s obviously not shocking, but it’s good to see that the performance change was relatively linear.

CPU Performance

How does this information help you maximize your cloud portfolio? If you know that you can add resources to a running VM *before* scaling out to new hardware, that can simplify your infrastructure and lower your costs. Scaling out is a fantastic cloud pattern, but it doesn’t always have to be the first response. You can trust that Hyperscale scales out *and* up well, and you can plan your scaling events accordingly.
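As a simple illustration of that "scale up first, then scale out" reasoning, here is a small Python sketch. The 75% utilization threshold, the 16-CPU ceiling, and the returned action strings are assumptions for the example, not CenturyLink Cloud platform limits or API behavior.

```python
# Illustrative scale-up-before-scale-out policy. All thresholds and
# limits below are hypothetical values chosen for this example.

MAX_CPUS_PER_VM = 16  # assumed per-VM ceiling for illustration


def plan_scaling(current_cpus: int, avg_cpu_util: float) -> str:
    """Suggest a scaling action based on size and sustained utilization."""
    if avg_cpu_util < 0.75:
        return "no action needed"
    if current_cpus < MAX_CPUS_PER_VM:
        # Near-linear CPU scaling makes vertical growth a safe first step.
        target = min(current_cpus * 2, MAX_CPUS_PER_VM)
        return f"scale up: resize this VM to {target} CPUs"
    return "scale out: add another VM behind the load balancer"


if __name__ == "__main__":
    print(plan_scaling(4, 0.85))    # -> scale up: resize this VM to 8 CPUs
    print(plan_scaling(16, 0.90))   # -> scale out: add another VM behind the load balancer
```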

Summary

Performance metrics are only a snapshot in time. The individual results may change from month to month or year to year, but a reliable performance profile means that you can minimize costs, make decisions faster, and make predictable choices.

Want to read this CloudHarmony report in full? Simply get it here and see all the details about this thorough analysis. Price out a Hyperscale server for yourself, and sign up to take the platform for a spin!

5 Reasons You Will Love this CenturyLink Cloud News (Including a Price Update!)

When CenturyLink acquired Tier 3 in November, our newly integrated cloud team promised customers big things – faster innovation and access to more capabilities, to name a few.  The team has delivered in the first 140 days: new services and an expanded footprint.  Our Cloud SVP Andrew Higginbotham shares the results here by the numbers.

Another benefit?  Scale.  That leads to lower costs, which CenturyLink can pass along to customers.  Today, we’re pleased to announce a major price reduction for CenturyLink Cloud services. 

For our clients, and businesses considering cloud, here’s what you need to know about these changes:

  1. The new pricing changes are effective immediately for CenturyLink Cloud CPU, RAM, and block storage.  These drops are dramatic – a typical CenturyLink Cloud VM will cost at least 60% less with the new pricing.  Customers without contracts will see the new pricing effective immediately; customers with contracts will hear from their account team to adjust terms accordingly. 
  2. New support bundles.  Until today we have bundled premium support as part of our price offering – a key benefit to organizations formulating their cloud strategy.  But the market has matured.  As such, we are evolving our singular support offering to a strong portfolio of a la carte services.  Businesses can now choose from three support tiers, select from a list of our most popular NOC service items, or work with CenturyLink’s highly capable Professional Services team.  This approach is a big win for businesses in terms of choice and flexibility.  It’s also worth noting that a comparable level of support, combined with the new pricing, is still a major cost reduction for customers.
  3. Introducing Technical Cloud Service Engineers.  A big part of our value proposition at CenturyLink Cloud is our consultative approach pre- and post-sales.  This helps customers target workloads to migrate and configure their account for chargebacks and IT as a service.  Customers can now opt in to take this to another level by purchasing various levels of Technical Cloud Service Engineering functions.  This offering (either shared or designated) will help enterprises achieve the full benefits of our cloud services.
  4. Service Tasks.  In decoupling our support, our operations teams analyzed seven years' worth of support request patterns and developed a list of the most commonly requested work items.  The end result is our new menu of service tasks – 15 work items that are priced hourly.  Customers can turn to our platform experts to get these common items completed, and do so with cost and SLA clarity.  This is another benefit of our new support services model.
  5. Major change in bandwidth pricing – from 95/5 (95th percentile) billing to per-GB outbound billing, effective in June.  CenturyLink owns and operates a world-class global network, and we are now offering significant savings on network, based on usage patterns.  Pricing will be $0.05 per GB out, billed monthly, which represents the best bandwidth prices you’ll find in the public cloud (see the quick estimate after this list). Customers will see this change reflected in the Control Portal after our mid-June release.
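To make the new per-GB model concrete, here is a small back-of-the-envelope sketch in Python; the $0.05/GB rate comes from this announcement, while the 2 TB traffic figure is purely hypothetical.

```python
# Back-of-the-envelope estimate under the new per-GB outbound model.
# The rate comes from this announcement; the traffic figure is an example.

RATE_PER_GB_OUT = 0.05   # USD per GB of outbound transfer
monthly_gb_out = 2_000   # example: roughly 2 TB out per month

print(f"Estimated egress cost: ${RATE_PER_GB_OUT * monthly_gb_out:,.2f}/month")
# -> Estimated egress cost: $100.00/month
```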

In November, I promised that “we would build amazing things” as part of CenturyLink.  Regular readers of this blog know this to be true, as our product roadmap and monthly releases have grown in scope and become more aggressive without missing a beat.

These changes demonstrate our scale and ability to put customer choice and flexibility front and center.  And you can be sure that there’s much more in store for CenturyLink Cloud and our customers as we charge ahead.

Protect Critical Workloads with CenturyLink Cloud and SafeHaven Disaster Recovery as a Service

IT teams spend a good deal of time thinking about disaster recovery – and rightly so.  Unexpected downtime is hugely expensive, from a loss of productivity, to an immediate loss of revenue, to long-term damage to the corporate brand.

For disaster recovery of virtual machines already in the public cloud, CenturyLink Cloud’s premium storage offering provides a simple, elegant option for enterprises.  Just click a button, and you have an RTO of 8 hours and an RPO of 24 hours in the event of a declared disaster.

What about protecting on-premise VMs?  This is a perfect workload for public clouds like CenturyLink Cloud.  Traditional DR systems are simply too hard to get up and running.  Too often, these products:

  1. Take too long to deploy.  Many take months to plan, and then even more time to deploy.
  2. Add cost and complexity.  Enterprises often need to bring in consultants or additional resources to manage the effort.
  3. Increase CapEx. New hardware usually needs to be purchased and managed accordingly.
  4. Require burdensome testing procedures.  Once a DR solution is deployed, users are often forced to navigate long, labor intensive lead times to conduct DR tests throughout the year.

To help customers protect on-premise VMs, we’re excited to launch SafeHaven for CenturyLink Cloud.  This solution offers much of the same simplicity as CenturyLink’s cloud-to-cloud DR – including self-service and a point-and-click interface – with powerful customization options.  In addition, customers receive a “white-glove” on-boarding experience to ensure their configuration aligns with their DR strategy.  Here’s a high-level overview of how SafeHaven for CenturyLink Cloud works:

  1. Build the CenturyLink Cloud servers using our self-service interface.
  2. Install the SafeHaven replication software in your cloud and production environments.
  3. Configure SafeHaven, including settings for RPO requirements and server bring-up sequencing.
  4. Test your environment against your failover and failback operations runbook.

Upon disaster declaration, initiate the failover sequence in the SafeHaven Console and let the pre-configured API calls boot your Cloud VMs and start your OS so you can resume application processing.

Once your production environment is restored, initiate a failback command that automatically pauses your CenturyLink Cloud VMs and starts data transfer back to your production site. SafeHaven provides full recovery orchestration and planning at the server group and data center levels. You can be sure that multi-tiered applications involving multiple servers and data volumes will come up in the exact sequence you configured. In the case of controlled shutdown events, SafeHaven can bring up a replica of your data center in the cloud without any data loss.
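As a rough illustration of what "exact sequence" means for a multi-tiered application, here is a Python sketch of a boot-order plan. The tier names, server IDs, and ordering are invented for this example; in SafeHaven the sequencing is configured through the console, not written as user code.

```python
# Illustrative model of a tiered recovery plan. Everything below is a
# placeholder to show the idea of ordered server-group bring-up.

recovery_plan = [
    ("database-tier", ["sql-01", "sql-02"]),
    ("application-tier", ["app-01", "app-02", "app-03"]),
    ("web-tier", ["web-01", "web-02"]),
]


def execute_failover(plan):
    """Bring up server groups strictly in the configured order."""
    for group, servers in plan:
        print(f"Booting {group}: {', '.join(servers)}")
        # A real orchestrator would wait for health checks on this tier
        # before moving on to the next one.


execute_failover(recovery_plan)
```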

The service shines where traditional DR systems fall short, specifically when it comes to:

  1. Cost.  SafeHaven is orders of magnitude less expensive than traditional alternatives for enterprises looking to recover private data centers into a multi-tenant cloud.
  2. On Demand Cloud Infrastructure. Configure and provision your VM and storage in minutes, and avoid long and expensive lead times to provision infrastructure. 
  3. Performance, specifically ultra-low recovery times. Frequently an entire data center can be restarted in the CenturyLink Cloud in a matter of minutes.
  4. Non-disruptive Testing. Users can run tests to validate application-layer recovery without affecting production systems.
  5. Group Consistency & Recovery Plans. Users can develop and test automated run book recovery plans.
  6. Automated Failback. When disaster conditions are resolved, users can failback to their original production data centers in minutes and without data loss.
  7. Continuous Data Protection. Users can retain up to 2,048 checkpoints to protect against data corruption or loss.
  8. Versatility.  SafeHaven can be configured to protect both physical and virtual IT systems.
  9. Ease of use. The product UI is intuitive and simple to use.

Let’s dive into a technical overview of the product to see exactly how it works.

DRaaS

Core SafeHaven for CenturyLink Cloud replication software services include:

  1. Inter-site migration, failover, and failback
  2. Continuous data protection with up to 64 checkpoints for rollback in cases of server infection or data corruption
  3. Non-disruptive testing of recovery and migration plans

These features can be executed at the level of individual servers, groups of servers, or entire sites. How? A virtual appliance called a SafeHaven Replication Node (SRN) is uploaded into the site that is to be protected.  SafeHaven then leverages local mirroring software already embedded within standard operating systems to replicate new writes from protected servers to the SRN.  The SRN buffers these changes and transmits them asynchronously to a protection VM in the CenturyLink Cloud, which then writes the changes to disk.  When disasters occur, SafeHaven boots the recovery VMs using the replica data images it has been maintaining in the cloud. Meanwhile, another SafeHaven virtual appliance, called a Central Management Server, transmits control traffic between the SRNs and the SafeHaven Management Console.
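To make the write path above easier to picture, here is a toy Python model of the flow: a protected server mirrors new writes into a local buffer (standing in for the SRN), which ships them asynchronously to a cloud-side replica while recording periodic checkpoints. The names, checkpoint cadence, and data structures are illustrative only, not SafeHaven internals.

```python
# Toy model of asynchronous replication with periodic checkpoints.
# Illustrative only; not SafeHaven's actual implementation.

import queue
import threading
import time

srn_buffer = queue.Queue()   # stands in for the SRN's local buffer
cloud_replica = []           # stands in for the cloud-side replica disk
checkpoints = []             # periodic rollback points


def protected_server_writes():
    """Local writes are mirrored to the SRN buffer as they happen."""
    for block in range(6):
        srn_buffer.put(f"block-{block}")
        time.sleep(0.01)
    srn_buffer.put(None)     # signal the end of this workload


def srn_async_replication():
    """The SRN drains its buffer and transmits changes to the cloud."""
    while True:
        item = srn_buffer.get()
        if item is None:
            break
        cloud_replica.append(item)
        if len(cloud_replica) % 2 == 0:   # take a checkpoint periodically
            checkpoints.append(list(cloud_replica))


writer = threading.Thread(target=protected_server_writes)
replicator = threading.Thread(target=srn_async_replication)
writer.start()
replicator.start()
writer.join()
replicator.join()

print("Blocks on the cloud replica:", cloud_replica)
print("Checkpoints retained:", len(checkpoints))
```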

Disaster recovery is a complex topic area, and there are many protection strategies to achieve the desired results. We created this service offering in response to customer feedback as yet another option for our customers to protect their production IT environments. We also recognize this solution is not a panacea for DR; rather, it’s a solution-oriented tool that may help you service certain production workloads in a cost-effective, low-investment model. And for those workloads that require a different protection technique, we’ve got the depth of resources to help you realize your goals.

Heartbleed Vulnerability Update

A dangerous bug was identified in a popular SSL/TLS library that powers many of the web servers on the internet. This bug – called Heartbleed – allows attackers to retrieve data stored in a server’s memory and access sensitive information.

CenturyLink Cloud wants you to be aware of one impacted area which was identified through our comprehensive assessment: OpenVPN software. The Linux distribution used for OpenVPN does not yet have an updated, patched package available to remediate this vulnerability. We are actively pursuing other solutions and will have an update on this issue shortly. [Please see update and action items below]

As this issue is related to OpenVPN client software, we believe it is important to detail what type of communication between users/machines may be affected by this vulnerability.

  • The Control Portal system is NOT affected, so there is no need to change your password for the web site.
  • Site-to-site VPN tunnels from customer premises equipment to CenturyLink Cloud datacenters are NOT affected.
  • Site-to-site VPN tunnels between customer servers in one CenturyLink Cloud data center and customer servers in a remote CenturyLink Cloud data center are NOT affected.
  • Software-created VPN tunnels between a customer client computer running OpenVPN and customer servers in a CenturyLink Cloud datacenter (typically used for management and access) ARE affected. [Please see update below]
  • Customer-deployed solutions relying on OpenSSL technology running in CenturyLink Cloud datacenters are LIKELY affected.  As the customer is responsible for the configuration and deployment of these systems, it is the customer’s responsibility to remediate any affected systems. If you are running a web server in the CenturyLink Cloud and use OpenSSL to secure your website, test your website and remediate immediately (a quick version check is sketched below).
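If you run your own Linux servers in the cloud, a reasonable first pass is to check which OpenSSL build is installed. The Python sketch below shells out to the standard `openssl version` command and flags the 1.0.1 through 1.0.1f range, which shipped the Heartbleed bug; keep in mind that many distributions backport the fix without changing the version string, so confirm against your vendor's advisory rather than treating this as a definitive verdict.

```python
# Quick, conservative check of the locally installed OpenSSL build.
# OpenSSL 1.0.1 through 1.0.1f shipped the Heartbleed bug; 1.0.1g fixed it.
# Some distributions backport the fix without bumping the version string.

import re
import subprocess

output = subprocess.check_output(["openssl", "version"]).decode().strip()
match = re.search(r"OpenSSL (\d+\.\d+\.\d+)([a-z]?)", output)

if match:
    base, letter = match.groups()
    vulnerable = base == "1.0.1" and letter in ("", "a", "b", "c", "d", "e", "f")
    if vulnerable:
        print(f"{output}: version is in the known-vulnerable range; "
              "upgrade to 1.0.1g or a vendor-patched build")
    else:
        print(f"{output}: not in the known-vulnerable version range")
else:
    print(f"Could not parse OpenSSL version from: {output}")
```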

Follow this post for continued updates. Please contact the NOC with any questions.

[2014-04-09 10:00AM PST – Initial post]

[2014-04-09 02:00PM PST – Solution identified, validated, and being rolled out to vulnerable OpenVPN servers]

[2014-04-09 07:15PM PST – All Control-deployed OpenVPN servers have been updated. All new OpenVPN servers leverage an updated template. Customers in UC1 and VA1 should also regenerate their VPN certificates as a precaution.]