When CenturyLink acquired Tier 3 in November, our newly integrated cloud team promised customers big things – faster innovation and access to more capabilities, to name a few. The team has delivered in the first 140 days: new services and an expanded footprint. Our Cloud SVP Andrew Higginbotham shares the results here by the numbers.
Another benefit? Scale. That leads to lower costs, which CenturyLink can pass along to customers. Today, we’re pleased to announce a major price reduction for CenturyLink Cloud services.
For our clients, and businesses considering cloud, here’s what you need to know about these changes:
- The new pricing changes are effective immediately for CenturyLink Cloud CPU, RAM, and block storage. These drops are dramatic – a typical CenturyLink Cloud VM will cost at least 60% less with the new pricing. Customers without contracts will see the new pricing effective immediately; customers with contracts will hear from their account team to adjust terms accordingly.
- New support bundles. Until today, we have bundled premium support into our pricing – a key benefit to organizations formulating their cloud strategy. But the market has matured. As such, we are evolving our single support offering into a strong portfolio of a la carte services. Businesses can now choose from three support tiers, select from a list of our most popular NOC service items, or work with CenturyLink’s highly capable Professional Services team. This approach is a big win for businesses in terms of choice and flexibility. It’s also worth noting that a comparable level of support, combined with the new pricing, still represents a major cost reduction for customers.
- Introducing Technical Cloud Service Engineers. A big part of our value proposition at CenturyLink Cloud is our consultative approach pre- and post-sales. This helps customers target workloads to migrate and configure their accounts for chargebacks and IT as a service. Customers can now opt in to take this to another level by purchasing various levels of Technical Cloud Service Engineering functions. This offering (either shared or designated) will help enterprises achieve the full benefits of our cloud services.
- Service Tasks. In decoupling our support, our operations teams analyzed seven years’ worth of support request patterns and developed a list of the most commonly requested work items. The end result is our new menu of service tasks – 15 work items that are priced hourly. Customers can turn to our platform experts to get these common items completed, and do so with cost and SLA clarity. This is another benefit of our new support services model.
- Major change in bandwidth pricing – from 95/5 to GB out, effective in June. CenturyLink owns and operates a world-class global network, and we are now offering significant savings on network services based on usage patterns. Pricing will be $0.05 per GB out per month, which represents the best bandwidth pricing you’ll find in the public cloud. Customers will see this change reflected in the Control Portal after our mid-June release.
In November, I promised that “we would build amazing things” as part of CenturyLink. Regular readers of this blog know this to be true, as our product roadmap and monthly releases have grown in scope and become more aggressive without missing a beat.
These changes demonstrate our scale and ability to put customer choice and flexibility front and center. And you can be sure that there’s much more in store for CenturyLink Cloud and our customers as we charge ahead.
Related Resources: Hyperscale Server, Cloud Servers, Object Storage, Cloud Orchestration
IT teams spend a good deal of time thinking about disaster recovery – and rightly so. Unexpected downtime is hugely expensive, from a loss of productivity, to an immediate loss of revenue, to long-term damage to the corporate brand.
For disaster recovery of virtual machines already in the public cloud, CenturyLink Cloud’s premium storage option offers a simple, elegant solution for enterprises. Just click a button, and you have an RTO of 8 hours and an RPO of 24 hours in the event of a declared disaster.
What about protecting on-premises VMs? This is a perfect workload for public clouds like CenturyLink Cloud. Traditional DR systems are simply too hard to get up and running. Too often, these products:
- Take too long to deploy. Many take months to plan, and then even more time to deploy.
- Add cost and complexity. Enterprises often need to bring in consultants or additional resources to manage the effort.
- Increase CapEx. New hardware usually needs to be purchased and managed accordingly.
- Require burdensome testing procedures. Once a DR solution is deployed, users are often forced to navigate long, labor intensive lead times to conduct DR tests throughout the year.
To help customers protect on-premises VMs, we’re excited to launch SafeHaven for CenturyLink Cloud. This solution offers much of the same simplicity as CenturyLink’s cloud-to-cloud DR – including self-service and a point-and-click interface – with powerful customization options. In addition, customers receive a “white-glove” on-boarding experience to ensure their configuration aligns with their DR strategy. Here’s a high-level overview of how SafeHaven for CenturyLink Cloud works:
- Build the CenturyLink Cloud servers using our self-service interface.
- Install the SafeHaven replication software in your cloud and production environments.
- Configure SafeHaven, including settings for RPO requirements and server bring-up sequencing.
- Test your environment against your failover and failback operations runbook.
- Upon disaster declaration, initiate the failover sequence in the SafeHaven Console and let the pre-configured API calls boot your cloud VMs and start your OS so you can resume application processing.
- Once your production environment is restored, initiate a failback command that automatically pauses your CenturyLink Cloud VMs and starts data transfer back to your production site.
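The ordered bring-up described above can be sketched in a few lines of Python. This is a minimal illustration only: the function and tier names are invented for the example, not the actual SafeHaven API, and in practice the boot step is a pre-configured cloud API call issued from the SafeHaven Console.

```python
def execute_failover(boot_order, boot_vm):
    """Boot recovery VMs tier by tier, in the configured sequence.

    boot_order: list of tiers, each a list of server names; tiers boot
    in order so that, e.g., databases come up before app servers.
    boot_vm: callable that powers on one replica VM (in the real
    product, a pre-configured API call).
    """
    started = []
    for tier in boot_order:
        for server in tier:
            boot_vm(server)        # power on this replica
            started.append(server)
    return started

# Illustrative plan: database tier first, then app tier, then web tier.
plan = [["db-01"], ["app-01", "app-02"], ["web-01", "web-02"]]
order = execute_failover(plan, boot_vm=lambda name: None)
```

The point is the sequencing guarantee: no server in a later tier starts until every server in the earlier tiers has been brought up.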
SafeHaven provides full recovery orchestration and planning at the server group and data center levels. Users can be sure that multi-tiered applications involving multiple servers and data volumes will come up in the exact sequence configured. In the case of controlled shutdown events, SafeHaven can bring up a replica of your data center in the cloud without any data loss.
The service shines through where traditional DR systems fall short, specifically when it comes to:
- Cost. SafeHaven is orders of magnitude less expensive than traditional alternatives for enterprises looking to recover private data centers into a multi-tenant cloud.
- On Demand Cloud Infrastructure. Configure and provision your VM and storage in minutes, and avoid long and expensive lead times to provision infrastructure.
- Performance, specifically ultra-low recovery times. Frequently an entire data center can be restarted in the CenturyLink Cloud in a matter of minutes.
- Non-disruptive Testing. Users can run tests to validate application-layer recovery without affecting production systems.
- Group Consistency & Recovery Plans. Users can develop and test automated run book recovery plans.
- Automated Failback. When disaster conditions are resolved, users can failback to their original production data centers in minutes and without data loss.
- Continuous Data Protection. Users can retain up to 2,048 checkpoints to protect against data corruption or loss.
- Versatility. SafeHaven can be configured to protect both physical and virtual IT systems.
- Ease of use. The product UI is intuitive and simple to use.
Let’s dive into a technical overview of the product to see exactly how it works.
Core SafeHaven for CenturyLink Cloud replication software services include:
- Inter-site migration, failover, and failback
- Continuous data protection with up to 64 checkpoints for rollback in cases of server infection or data corruption
- Non-disruptive testing of recovery and migration plans
These features can be executed at the level of individual servers, groups of servers, or entire sites. How? A virtual appliance called a SafeHaven Replication Node (SRN) is uploaded into the site that is to be protected. SafeHaven then leverages local mirroring software already embedded within standard operating systems to replicate new writes from protected servers to the SRN. The SRN buffers these changes and transmits them asynchronously to a protection VM in the CenturyLink Cloud, which then writes the changes to disk. When disasters occur, SafeHaven boots the recovery VMs using the replica data images it has been maintaining in the cloud. Meanwhile, another SafeHaven virtual appliance, called a Central Management Server, transmits control traffic between the SRNs and the SafeHaven Management Console.
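The buffer-then-ship replication path described above can be modeled as a toy sketch. Everything here is illustrative: the class and method names are invented, and a plain dict stands in for the cloud-side protection VM’s disk; the real SRN does this at the block level with checkpointing.

```python
from collections import deque

class ReplicationNode:
    """Toy model of an SRN: buffers local writes and ships them
    asynchronously to a remote replica (here, a dict standing in
    for the protection VM's disk in the cloud)."""

    def __init__(self, remote_disk):
        self.buffer = deque()          # pending writes, in arrival order
        self.remote_disk = remote_disk

    def intercept_write(self, block, data):
        # The OS's local mirroring software forwards each new write here;
        # the protected server doesn't wait on the WAN.
        self.buffer.append((block, data))

    def flush(self):
        # Asynchronous transmission: drain the buffer to the replica.
        while self.buffer:
            block, data = self.buffer.popleft()
            self.remote_disk[block] = data

replica = {}
srn = ReplicationNode(replica)
srn.intercept_write(0, b"hello")
srn.intercept_write(1, b"world")
srn.flush()
```

The key property is the decoupling: the protected server only pays the cost of a local write, while the WAN transfer happens asynchronously in the background.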
Disaster recovery is a complex topic area, and there are many protection strategies to achieve the desired results. We created this service offering in response to customer feedback as yet another option for our customers to protect their production IT environments. We also recognize this solution is not a panacea for DR; rather, it’s a solution-oriented tool that may help you service certain production workloads in a cost-effective, low-investment model. And for those workloads that require a different protection technique, we’ve got the depth of resources to help you realize your goals.
Does running your application in the cloud mean that it’s suddenly able to survive any problem that arises? Alas, no. Even while some foundational services of distributed systems are built for high availability, a high-performing cloud application needs to be explicitly architected for fault tolerance. In this multi-part blog series, we will walk through the various application layers and examine how to build a resilient system in the CenturyLink Cloud. Over the course of the next few posts, we will define what’s needed to build a complete, highly available system. The reference architecture below illustrates the components needed for a fictitious eCommerce web application.
In this first post, we look at a core aspect of every software system: storage. What type of storage is typically offered by cloud vendors?
- Temporary VM storage. Some cloud providers offer gobs of storage with each VM instance, but with the caveat that the storage isn’t durable and does not survive server shutdown or server failure. While this type of cheap and easily accessible block storage is useful in some situations, it’s less familiar to enterprise IT staff who are used to storage that’s durable by default.
- Persistent VM storage. This sort of block storage is attached to a VM as durable volumes. It survives reboots and resets, and can even be detached from one server and reattached to another. While multiple servers cannot access the same volume, it is ideal for database servers and other server types that need reliable, durable, high-performing storage.
- Object storage. What happens if you want to share data between consumers? Object storage offers HTTP access to a highly available repository that can hold virtually any file type. This is a great option for storing business documents, software, server backups, media files, and more. It is also a useful alternative for secure file transfer.
At CenturyLink Cloud, we offer customers two options: persistent block storage and object storage.
Provisioning Persistent Block Storage
Each virtual server launched in the CenturyLink Cloud is backed by one or more persistent storage volumes. Product details:
- Block storage volumes can be of any size up to 1 TB apiece. Why does this matter? Instead of over-provisioning durable storage – which can happen with cloud providers that offer fixed “instance sizes” – CenturyLink Cloud volumes can be any size you want. Only pay for what you need, and resize the drive as necessary.
- The volumes are attached via iSCSI or NFS and offer at least 2500 IOPS. Why does this matter? Run IO-intensive workloads with confidence and get reliable performance thanks to an architecture that minimizes latency and network hops.
- Block storage is backed by SANs using RAID 10, which provides the best combination of write performance and data protection. Why does this matter? We’ve architected highly available storage for you. Data is striped across drives and mirrored within RAID sets. This means that you won’t lose your data even if multiple underlying disks fail.
- We take daily snapshots of each storage volume automatically. Standard storage volumes have 5 days of rolling backups, and Premium storage volumes have 14 days of rolling backups with the 5 most recent ones replicated to a remote data center. Why does this matter? This gives you a built-in disaster recovery solution! While it may not be the only DR strategy you employ, it provides a baseline RPO/RTO to build around.
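The rolling snapshot windows above work out as follows. This is a sketch of the retention math only; the dates are examples, the helper is invented for illustration, and the snapshot mechanics themselves are handled by the platform.

```python
from datetime import date, timedelta

def retained_snapshots(today, days):
    """Rolling daily snapshots: the most recent `days` snapshots are
    kept (5 for Standard storage, 14 for Premium, per the policy
    above), newest first."""
    return [today - timedelta(days=i) for i in range(days)]

today = date(2014, 6, 15)  # example date
standard = retained_snapshots(today, 5)   # Standard: last 5 days
premium = retained_snapshots(today, 14)   # Premium: last 14 days
# For Premium, the 5 most recent snapshots are also replicated
# to a remote data center.
replicated_offsite = premium[:5]
```

So on any given day, a Standard volume can be rolled back up to five days, a Premium volume up to fourteen, and the newest five Premium snapshots also exist off-site.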
The way that CenturyLink Cloud has architected its block storage means that you do not need to specifically architect for highly available storage unless you are doing multi-site replication.
For our reference solution, provisioning persistent block storage is easy. Our web servers – based on Windows Server 2012 Data Center Edition – have 42 GB of durable storage built-in, and I’ve added another 100 GB volume to store the web root directory and server logs.
There are multiple ways to lay out the disks for a database server; in this case, we’re splitting the databases and transaction logs onto separate persistent volumes.
Here, we have running servers backed by reliable, high-performing storage. What happens if you find out later that you need more storage? The CenturyLink Cloud platform makes it easy to instantly add capacity to existing volumes, or add entirely new volumes that are immediately accessible within the virtual machine.
Provisioning Object Storage
Object Storage is a relatively recent addition to the CenturyLink Cloud and gives customers the chance to store diverse digital assets in a highly available, secure shared repository. Some details:
- Object Storage has multiple levels of redundancy built in. Within a given data center, your data is replicated across multiple machines, and all data is instantly replicated to a sister cluster located within the same country. Why does this matter? Customers can trust that data added to Object Storage will be readily available even when faced with unlikely node or data center failures.
- Store objects up to 5 GB in size. Why does this matter? Object Storage is a great fit for large files that need to be shared. For example, use Object Storage for media files used by a public website. Or upload massive marketing proofs to share with a graphics design partner.
- The Object Storage API is Amazon S3-compliant. Why does this matter? CenturyLink Cloud customers can use any of the popular tools built for interacting with Amazon S3 storage.
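Because the API is S3-compliant, pointing an existing S3 SDK or tool at it is mostly a matter of swapping the endpoint. A minimal sketch follows; the endpoint host, bucket, and key below are placeholders for illustration, not real CenturyLink Cloud values.

```python
# Placeholder endpoint -- substitute your account's Object Storage URL.
ENDPOINT = "https://objectstorage.example.centurylink.test"

def object_url(bucket, key):
    """Path-style URL for an object, the addressing scheme that
    S3-compatible services accept."""
    return f"{ENDPOINT}/{bucket}/{key}"

url = object_url("website-media", "images/logo.png")

# With an S3 SDK such as boto3, the same idea looks like:
#   s3 = boto3.client("s3", endpoint_url=ENDPOINT,
#                     aws_access_key_id="...",
#                     aws_secret_access_key="...")
#   s3.upload_file("logo.png", "website-media", "images/logo.png")
```

The design win is that no CenturyLink-specific client library is needed; any tool that lets you override the S3 endpoint should work.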
Customers of CenturyLink Cloud Object Storage do not need to explicitly architect for high availability since the service itself takes care of it.
In our reference solution, Object Storage is the place where website content like product images are stored. We added a new Object Storage “bucket” for all the website content.
Once the bucket was created and permissions applied, we used the popular S3 Browser tool to add CSS files and images to the bucket.
A highly available system in the cloud is often a combination of vendor-provided and customer-architected components. In this first post of the blog series, we saw how the CenturyLink Cloud platform natively provides the core high availability needed by cloud systems. Block storage is inherently fault tolerant and the customer doesn’t have to explicitly ask for persistent storage. Object storage provides easy shared access to binary objects and is geo-redundant by default.
Storage provides the foundation for a software system, and at this point we have the necessary pieces to configure the next major component: the database!
We generate massive amounts of data every day. Research firm IDC estimates that 90% of the world’s data was created in the last two years, and the volume of data worldwide doubles every two years. Enterprises are a key contributor to this data explosion as we produce and share digital media, create global systems that collect and generate data, and retain an increasing number of backup and archive data sets. This rapid storage growth puts pressure on IT budgets and staff who have to constantly find and allocate more usable space. CenturyLink Cloud wants to help make that easier and has just launched a new Object Storage service to provide you with a secure, scalable destination for business data.
What is Object Storage from CenturyLink Cloud? It’s a geo-redundant, elastic storage system for public and private digital data. Based on the innovative Riak CS Enterprise platform, Object Storage infrastructure is being deployed across three global regions: Canada, United States, and Europe. Each region consists of a pair of CenturyLink Cloud data centers that run Riak CS Enterprise on powerful, bare-metal servers. The Object Storage nodes are deployed in a “ring” configuration where data is evenly distributed across the nodes, thus assuring that your data is available even if multiple nodes go offline. When objects are loaded into one data center, they are instantly replicated to the in-country peer data center. This means that an entire data center can go offline, and you STILL will have uninterrupted access to all of your latest enterprise data.
Before diving into this new service, let’s define a few terms:
- Object. An “object” is any digital asset up to 5 GB in size. This could be a video that you display on your public website, a PDF file that you are sharing with a business partner, or a database backup file. If the asset is larger than 5 GB, you can store it via a multi-part upload!
- Bucket. Objects are stored in buckets. A bucket is a logical container that can hold an unlimited number of objects, but not other buckets.
- Region. CenturyLink Cloud has architected Object Storage with unique clusters in three different geographies. Each geographic region has a pair of data centers that hold all of the data uploaded into that region.
- User. An Object Storage user is different from a CenturyLink Cloud platform user and is created separately. While you may create an Object Storage user to represent an individual person, you may also choose to create users that correspond to an application. For example, you may define a user leveraged by your public website that retrieves images and videos from Object Storage.
- Owner. Each bucket has an owner. This is the user that automatically has full control over the bucket and its objects.
- ACLs. Access Control Lists govern who can manage buckets and see objects. By default, Object Storage does not allow any public access to buckets or objects. If you choose, you can provide public, unauthenticated users with the ability to read individual objects. Or, you can choose specific users that have permission to add objects to buckets or view an object.
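Since single objects top out at 5 GB, larger assets are split into parts for upload. Here is a sketch of how a client might plan those parts; the helper is invented for illustration, and a real S3-compatible multi-part upload also involves initiate, upload-part, and complete API calls.

```python
def plan_multipart(total_bytes, part_size=5 * 1024**3):
    """Split an upload into (offset, size) parts no larger than 5 GB
    each -- the per-object limit noted above -- so anything bigger
    goes up as a multi-part upload."""
    parts = []
    offset = 0
    while offset < total_bytes:
        size = min(part_size, total_bytes - offset)
        parts.append((offset, size))
        offset += size
    return parts

# A 12 GB database backup becomes three parts: 5 GB + 5 GB + 2 GB.
parts = plan_multipart(12 * 1024**3)
```

Each part can then be uploaded (and retried) independently, which is also why multi-part uploads tend to be more robust for large files over unreliable links.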
Managing Object Storage
Interacting with Object Storage is easy. We’ve added a management interface in our Control Portal for Object Storage administrators. From here, you can view a list of users, add new users, and reset user credentials.
The Control Portal also has a bucket administration component where you can view, create, secure, and delete buckets.
Each bucket can have its own security profile. For a bucket such as “website media”, you may let “All Users” have read access to its objects. For buckets set up to exchange large files with business partners, you would likely add read and write permissions for a user representing the chosen partner.
It’s unlikely that you’ll only use a single interface to interact with your data objects. Thanks to the inherent S3 compatibility offered by Riak CS Enterprise, you don’t have to! There is an entire ecosystem of tools for working with object storage that support an Amazon S3-like interface. Want to use a client tool to upload and delete objects? Then check out a utility like the freemium S3 Browser where you can plug in your Object Storage user credentials (and CenturyLink Cloud Object Storage URL) and manage buckets AND objects.
Looking to mount Object Storage as a drive on your database server so that you can easily create and restore backups? Look to a product like ExpanDrive which makes it easy to add Object Storage as a storage volume.
CenturyLink Cloud is among the first cloud providers to offer native, geo-redundant object storage and we’re excited to see how our customers use this to escape the burden of endless provisioning of on-premises storage! Our Canada region is live today, with the United States and Europe following closely. Existing customers can get started right away, and new customers can take Object Storage for a spin by signing up today.
Companies embrace the cloud because it offers agility, speed to market, self-service, rapid innovation, and yes, cost savings. There are plenty of cases where organizations can save money by using cloud resources, but it’s easy to focus on vendor compute and storage pricing, and forget about all the other financial components of a cloud application. See Joe Weinman’s Cloudonomics for an excellent analysis of how to assess the economic impact of using the cloud. An application can very easily cost MORE in the cloud – but that might still be just fine, since it helps the business shed some CapEx and remove servers from corporate data centers. In this post, we’ll talk about the full scope of pricing cloud applications and give you a useful perspective for assessing the overall cost.
Businesses deploy applications, not servers. A typical application is composed of multiple servers that perform different roles. For instance, let’s consider an existing, commercial website that receives a healthy amount of traffic. It uses a load balancer to route traffic to one of multiple web servers, leverages a series of application servers for caching and business services, and uses a relational database for persistent storage.
To maximize revenue and customer satisfaction, the application is replicated in another geography for availability reasons and traffic can be quickly steered to the alternate site in the case of a disaster or prolonged outage.
“Hidden costs” often bite cloud users. This is especially true for those who buy from a cloud that offers “cheap virtual cores!” but also require you to buy countless other services to assemble an enterprise-class infrastructure landscape. Let’s look at each area where it’s possible – and likely – that you will incur a charge from your cloud provider.
- Application migration. If you are doing greenfield development in the cloud, then this won’t apply. But if you have existing applications that are moving to the cloud, there are a few migration-related costs. First, there can be a labor cost with doing virtual machine imports. Some cloud providers let you import for free, others charge you. In most cases, there is also a bandwidth charge for the transfer of virtual machine images. Finally, there’s likely a cost for storing the virtual machine image during the import process.
- Server CPU processor. This – along with RAM – is the number most frequently bandied about when talking about the costs of running a cloud application. Some providers let you provision the exact number of virtual CPU cores desired; others provide fixed “instance sizes” that come with a pre-defined allocation of CPUs and memory.
- Server memory. Cloud providers are ratcheting up the amount of RAM they offer to address memory-hungry applications, caching products, and in-memory databases.
- Server storage. There are many different types of storage (e.g. block storage, object storage, vSAN storage) and costs vary with each. Don’t forget to include the cost of storing data backups, virtual machine templates, and persistent disks that survive even after servers have been deleted.
- Bandwidth. It’s easy to forget about bandwidth, but it’s a charge that can bite you if you’re not expecting it! You may need to factor in public bandwidth, intra-data center bandwidth, inter-data center bandwidth, CDN bandwidth, and load balancer bandwidth. Not all of these may apply, and some may not be charged by your cloud provider, but it’s important to check ahead of time. Most cloud providers use the “GB transfer” model, charging for all data transferred – and penalizing customers for bursting above their commitments. CenturyLink Cloud utilizes the 95th percentile billing method, preventing surges in traffic from grossly affecting costs.
- Public IP addresses. Nearly every cloud provider offers a way to expose servers to the public Internet, and some charge for the use of public IP addresses. This is usually a nominal monthly charge, but one to consider for scenarios where there are dozens of Internet-facing servers.
- Load balancing. There is often a charge to not only use a load balancer, but also for the traffic that passes through it.
- VPN and Direct Connect. Cloud users are looking for ways to connect cloud environments to on-premises infrastructure, and vendors now offer a rich set of connectivity options. However, those options come at a cost. Depending on the choice, you could be subjected to fees for setup, operations, and bandwidth associated with these connections.
- Firewalls. This is usually baked into each cloud provider’s native offering, but you will want to check and make sure that sophisticated firewall rules don’t come with an additional charge.
- Server monitoring. Just because your cloud servers aren’t in your data center doesn’t mean you don’t need to monitor them! Depending on your monitoring needs, there can be a range of charges associated with standard and advanced monitors for each cloud server.
- Intrusion detection. Given that cloud servers are often accessible through the public Internet, it’s important to use a defense-in-depth approach that includes screening incoming traffic for potential attacks. CenturyLink Cloud is unusual in offering this at no cost; you can still get this sort of protection from other vendors, but rarely for free.
- Labor for integrating with on-premises assets. You don’t want to create silos in the cloud, and you will likely spend a non-trivial amount of time integrating with your critical applications, data, identity provider, and network. If this effort requires assistance from the cloud provider themselves, there could be a charge for that time and effort.
- Distributed, disaster recovery environments. Applications fail, and clouds fail. If you require very high availability, you may need to duplicate your application in other geographically-dispersed cloud data centers. You could choose to keep that environment “warm” by synchronizing a data repository while keeping web/application servers offline. Or, you may choose to build a truly distributed system that leverages active infrastructure across geographies. Either way, it’s possible that you’ll incur noticeable charges for establishing replica environments.
- Development / QA environments. Applications may run differently in the cloud than in your local data center. Hence, you could choose to provision pre-production environments in the cloud for building and running your applications.
- System administrator labor costs. One of the wonderful things about the cloud is the widespread automation that makes it possible to provision and maintain massive server clusters without adding to your pool of system administrators. However, there are still activities that require administration. This may involve server patching and software updates, deploying new applications, and scaling the environments. Some of those activities can be automated as well, but you should factor in human costs to your cloud budget.
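The 95th percentile billing mentioned under bandwidth above can be illustrated with a short calculation. This is a sketch: real billing typically samples throughput every five minutes over a month, and exact index conventions vary slightly between providers.

```python
def percentile95(samples_mbps):
    """95/5 billing: sort the per-interval throughput samples, discard
    the top 5% (the bursts), and bill at the highest remaining sample.
    Index conventions differ slightly across providers."""
    ordered = sorted(samples_mbps)
    idx = max(int(len(ordered) * 0.95) - 1, 0)
    return ordered[idx]

# 100 five-minute samples: steady 10 Mbps with five 100 Mbps bursts.
samples = [10] * 95 + [100] * 5
billable = percentile95(samples)
```

Because the five burst samples fall in the discarded top 5%, the billable rate stays at the steady-state 10 Mbps; under a pure per-GB-transferred model, those same bursts would raise the bill.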
Places to save money
Given the various charges you may incur by moving to the cloud, how can you optimize your spend and take full advantage of what the cloud has to offer? Here are five tips:
- Don’t over-provision. Gone are the days when you have to request a massive server from an internal IT department because you MAY need the extra resources in the future and don’t want to deal with the hassle of upgrading the server later. CenturyLink Cloud makes it simple to change the number of virtual CPUs, amount of RAM, or amount of storage in seconds. Only spend money on what you need right now, and only pay more when you have to scale up. In addition, don’t settle for cloud providers who force you into fixed “instance sizes” that don’t deliver the mix of vCPU/RAM/storage that your application needs. CenturyLink Cloud encourages you to provision whatever combination of vCPU/RAM/storage you want! In fact, we usually tell customers to under-provision to start with and ratchet up resources as needed.
- Turn off idle servers. If you decide to create development or QA environments in the cloud, it’s likely that those environments will be fairly quiet over weekends. By shutting those down – and doing it automatically – you can potentially save hundreds or thousands of dollars per year.
- Automate mundane server management tasks. Running maintenance scripts or installing software on a cluster of servers is time consuming and tedious. CenturyLink Cloud provides an innovative Group capability that makes it possible to issue power commands, install software, and run scripts against large batches of servers.
- Add resource limits to prevent runaway provisioning. Elasticity is a foundational aspect of cloud computing, but it’s not a bad idea to establish resource caps. With CenturyLink Cloud for example, customers can define the maximum amount of vCPUs, memory, and storage that any one Group can consume.
- Carefully consider uptime requirements and disaster recovery needs. Even though the cloud makes it easier, it’s still not cheap or simple to build a globally distributed, highly available application. Evaluate whether you need cross-data center availability or a defined disaster recovery plan. The simplest solution for CenturyLink Cloud customers is to provision Premium block storage, which provides daily snapshots and replication to an in-country data center. In the event of a disaster, CenturyLink Cloud brings up your server in an alternate data center and gets you back in business. If you want to avoid nearly any downtime, you can architect a solution that operates across multiple data centers. To save money, you could choose to keep the alternate location offline but synchronized so that it could be quickly activated if needed.
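The "turn off idle servers" tip above boils down to a simple schedule check. This is a minimal sketch under stated assumptions: the environment names and the weekends-only policy are illustrative, and a real scheduler would call a function like this periodically and issue the platform’s power commands (e.g., via the Group power operations mentioned earlier) when it returns False.

```python
from datetime import datetime

def should_be_running(env, now):
    """Keep production up always; power dev/QA environments down on
    weekends (Monday is weekday 0, so Saturday=5 and Sunday=6).
    A scheduler would check this hourly and power servers off/on
    as the answer changes."""
    if env == "production":
        return True
    return now.weekday() < 5  # dev/QA run Monday through Friday only

# June 14, 2014 was a Saturday; June 16 was a Monday.
weekend = datetime(2014, 6, 14, 9, 0)
monday = datetime(2014, 6, 16, 9, 0)
```

Shutting a handful of dev/QA servers down for roughly 60 weekend hours a week trims about a third of their monthly running hours, which is where the "hundreds or thousands of dollars per year" savings comes from.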
When considering all the services you need to deploy and operate enterprise-level business applications, the “cheap virtual cores!” pitch is less compelling. It’s about finding a cloud provider that offers an all-up, integrated offering that gives you the set of services you need to deploy and maintain a robust, connected infrastructure. Give CenturyLink Cloud a try and see if our innovative platform is exactly what you’re looking for!