Hybrid cloud is becoming a standard operating model for many organizations. But how can you realize the expected agility when there are so many challenges ahead of you? In this series of articles, we’ve dissected each challenge and proposed some corresponding solutions. Whether you’re facing security and network concerns, or integration and system management issues, it’s critical to have a proactive plan in place. This final article rounds out the discussion by looking at ways to address the issues around portability, compatibility, and your existing toolset.
Solutions to Hybrid Cloud Challenges
In many cases, a hybrid cloud is the combination of complementary – but not identical – computing environments. This means that processes, techniques, and tools that work in one place may not work in another.
Compatibility. Gluing together two distinct environments does not come without challenges. Now, it’s possible that you have the same technology stack in both the public and private cloud environments, but the users, technology, and processes may be dissimilar!
- Move above the hypervisor. Even if your public cloud provider supports the import and export of virtual machines in a standard format, no legitimate public cloud exposes hypervisor configurations to the user. If you want to have a consistent experience in your hybrid cloud, avoid any hypervisor-level settings that won’t work in BOTH environments. Tune applications and services, and start to wean yourself off of specific hypervisors.
- Consider bimodal IT needs. If you subscribe to the idea of bimodal IT, then embrace these differences and don’t try to force a harmonization where none exists. Some traditional IT processes may not work in a public cloud. If the more agile groups at your organization are most open to using the public cloud and setting up a hybrid cloud, then cater more to their needs.
- Be open to streamlining and compromise. The self-service, pay-as-you-go, elastic model of public cloud is often in direct conflict with the way enterprise IT departments manage infrastructure. Your organization may have to loosen the reins a bit and give up some centralized control in order to establish a successful hybrid cloud. Look over existing processes and tools, identify which won’t work in a hybrid environment, and incubate ways to introduce new efficiencies.
Portability. One perceived value of a hybrid cloud is the ability to move workloads between environments as the need arises. However, that’s easier said than done.
- Review prerequisites for VM migration. A virtual machine in your own data center may not work as-is in the public cloud. Public cloud providers may have a variety of constraints around operating system choice, virtual machine storage size, open ports, and number of NICs.
- Embrace standards between environments. Even if virtual machines are portable, the environmental configurations typically aren’t. Network configurations, security settings, monitoring policies, and more are often tied to a specific cloud. Look to multi-cloud management tools that expose compatibility layers, or create scripting that re-creates an application in a standard way.
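One pattern for that scripting is a thin, portable interface with a per-cloud adapter behind it. Here's a minimal sketch in shell, assuming a private cloud managed with firewalld; `provider-cli` is a hypothetical name standing in for whatever CLI your public cloud offers. The commands are printed rather than executed so you can review them first:

```shell
#!/bin/sh
# Illustrative compatibility-layer sketch: one portable call, per-cloud
# adapters behind it. Commands are printed, not executed, so they can be
# reviewed; "provider-cli" is a hypothetical public cloud CLI.
open_port() {
  port=$1
  case "$CLOUD" in
    private) echo "firewall-cmd --permanent --add-port=${port}/tcp" ;;
    public)  echo "provider-cli security-group allow ${port}" ;;
    *)       echo "unknown cloud: $CLOUD" >&2; return 1 ;;
  esac
}

CLOUD=private
open_port 443
CLOUD=public
open_port 443
```

The application deployment script calls `open_port` and never needs to know which cloud it is running in – the same idea scales up to monitoring policies, DNS records, and the rest of the environmental configuration.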
Tooling and Skills. Even if you have plans for all of the items above, it will be hard to achieve success without robust tooling and talented people to design and operate your hybrid cloud.
- Invest in training. Your team needs new skills to properly work in a hybrid cloud. What skills are most helpful? Your architects and developers should be well-versed in distributed web application design and know what it means to build scalable, resilient, asynchronous applications. Operations staff should get familiar with configuration management tools and the best practices for repeatedly building secure cloud environments.
- Get hands-on experience. Even if you’re using a private cloud hosted by someone else, don’t outsource the setup! Participate in the hybrid cloud buildout and find some initial projects to vet the environment and learn some do’s and don’ts.
- Modernize your toolset. The tools that you used to develop and manage applications 5-10 years ago aren’t the ones that will work best in the (hybrid) cloud today, let alone 5-10 years from now. Explore NoSQL databases that excel in distributed environments, use lightweight messaging systems to pass data around the hybrid cloud, try out configuration management platforms, and spend time with continuous deployment tools that standardize releases.
Taking the Next Steps
Hybrid cloud can be a high risk, high reward proposition. If you do it wrong, you end up with a partially useful but frustratingly mediocre environment that doesn’t stop the growth of shadow IT in the organization. However, if you build a thoughtfully integrated hybrid cloud, developers will embrace it, and your organization can realize new efficiencies and value from IT services. How can CenturyLink help? We offer an expansive public cloud, a powerful private cloud, and a team of engineers who can help you design and manage your solutions.
If you’ve been reading cloud-related news lately or you follow any developers or system admins on Twitter, then you’ve undoubtedly seen the words “container”, “Docker”, and “CoreOS” written a few thousand times over the past year or so. Chatter has particularly picked up in the last few months with Docker 1.0 being released in June and CoreOS announcing their first stable release within the past few weeks. CoreOS also received an $8 million investment just a couple of months ago, and Docker just got another $40 million in funding a few days ago. And just yesterday, CenturyLink joined the container party and announced the release of the open-source Docker management platform, Panamax. Developed by the recognized thought-leaders at CenturyLink Labs, Panamax was described by RedMonk principal analyst James Governor as “Docker management for humans. It dramatically simplifies multi-container app deployment.”
This is bleeding edge technology we’re talking about here, so if you haven’t heard about any of it yet, there’s no time like the present. Docker is one of the fastest-growing open-source projects ever, with more than 550 contributors and 7 million downloads in just over a year since its release. The power of Docker lies in its ability to build and deploy applications in containers, which are extremely efficient and more portable than traditional virtual machines. This is because containers share the host operating system’s kernel rather than virtualizing an entire machine. Of course, there are plenty of places to read up and find out more information on what all the fuss is about, and none are better than our very own CenturyLink Labs blog, where the Labs team has been pumping out exceptional content about all things Docker and CoreOS for months.
But if you’re like me, you’ll never be satisfied just reading about anything – you want to try it already! If so, I’ve got good news for you. Whether you’re looking to just get your feet wet and experiment with containers or you’re feeling ready to jump right into the deep-end and start deploying applications with them, CenturyLink Cloud has got you covered. There are at least three ways you can get Docker up and running on CenturyLink Cloud right now: install Docker on a CentOS server, provision a CoreOS server running Docker, or take advantage of Panamax and make it even easier to use Docker. Whichever route you choose, all you need is a CenturyLink Cloud account to get started.
Option #1 – Installing Docker on CentOS
You might not be too familiar with CoreOS, so if you want to get started using Docker on a more familiar Linux distribution, you can easily use our Docker blueprint to install it on any CentOS server running on CenturyLink Cloud. You’ll even get the option to deploy a Hello World container so you can see a simple example of how Docker containers work and get started building your own.
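If you'd rather see what the blueprint is doing on your behalf, the manual equivalent is only a few commands. Here's a sketch for a 2014-era CentOS 6 server (package names vary by release – on CentOS 6, Docker ships as `docker-io` in the EPEL repository). The steps are printed rather than executed so nothing runs with root privileges without your review:

```shell
#!/bin/sh
# Roughly what the Docker blueprint automates on a CentOS server
# (illustrative; printed instead of executed so you can review first).
steps="sudo yum install -y epel-release
sudo yum install -y docker-io
sudo service docker start
sudo docker run hello-world"
printf '%s\n' "$steps"
```

Either route lands you in the same place: `docker run hello-world` pulls and runs the same example container that the blueprint offers to deploy for you.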
Option #2 – Installing CoreOS
Interested in CoreOS? This lightweight Linux distribution is optimized for massive server deployments and it comes with Docker preinstalled because it’s designed specifically to run applications as containers. You can follow our step-by-step instructions or watch our how-to video for using blueprints to build a CoreOS server cluster on CenturyLink Cloud and start deploying your applications on Docker in minutes.
Option #3 – Installing Panamax on CoreOS
Maybe you like the idea of Docker and CoreOS, but you’re not a Linux expert and you’re a little afraid of getting too into the weeds on the command line. If so, CenturyLink Labs has developed just the answer for you: Panamax. Panamax is a single management platform for creating, sharing, and deploying Docker-containerized applications. By following similar steps to our CoreOS deployment above and selecting the “with Panamax” version of the blueprint, you can have a CoreOS server up and running with Panamax installed in no time, and there’s no easier way to get started with Docker.
Not only can you use Panamax to deploy images from Docker’s repository, you can also deploy complex multi-container Dockerized apps from Panamax’s Open-Source Application Template Library. Think of these templates as collections of Docker images that work together to form the complete architecture of an application, with separate containers for the database vs. web tiers, for example.
If you’re looking to deploy one of the available template options like WordPress or Drupal, you’ll have it working with a single click in seconds flat. However, you can also choose to define your own custom templates to use and even add custom repositories to search as the Panamax community grows. There’s no easier or faster way to start using Docker containers than with Panamax, and it’s built to leverage the power and scale of CoreOS.
Have a server already? Install Docker! Curious about CoreOS? Provision it! Feeling overwhelmed? Try Panamax. With CenturyLink Cloud, you’ve got lots of ways to get started using Docker right now, so no more excuses! Sign up for a CenturyLink Cloud account, add containers to your repertoire of application deployment options, and start enjoying their power, performance, and portability today.
Related Resources: Cloud Server, Private Cloud, Object Storage, Cloud Orchestration
Are you getting the full benefit of the cloud if you don’t take advantage of its elasticity? To be sure, there are many ways that cloud environments—running dynamic OR static workloads—can positively impact your business agility. But cloud computing fundamentally changes the relationship between infrastructure and workloads that run upon it; you can constantly right-size by adding and removing capacity on demand instead of being stuck with over-sized or under-powered environments. To do this effectively, you need flexible options for automatically and manually adjusting your infrastructure resources. In this post, I outline five different application scenarios, and which CenturyLink Cloud scaling capability delivers the optimal elasticity solution.
1. Modern web application with variable usage? Horizontal Autoscale!
Are most of your internal or external facing web applications in constant, heavy use? If so, I’d be surprised! The applications that your employees rely on may be busy during predictable periods, or experience load only when particular business conditions occur. Public web applications may spike in usage when marketing campaigns are in flight, or when an avalanche of traffic follows a social mention.
Instead of standing up gobs of (costly) infrastructure that only adds value during random usage spikes, consider services like CenturyLink Cloud Horizontal Autoscale. Our Horizontal Autoscale service is a great fit for web applications that cleanly scale by adding or removing virtual servers from a defined pool. Simply park powered-off servers in a CenturyLink Cloud Server Group and define an Autoscale policy that outlines criteria for scaling out and in. When that policy is applied to a Server Group and tied to our Load Balancing service, the platform quickly powers servers on and off in response to changes in utilization.
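The scale-out/in criteria in such a policy boil down to simple threshold checks against pool bounds. Here's a rough sketch of that decision in plain shell – the thresholds (75%/25%) and pool sizes are made-up examples, not the platform's actual policy format, which you define in the control portal:

```shell
#!/bin/sh
# Illustrative sketch of an Autoscale policy's decision logic.
decide_scale() {
  cpu=$1      # current average CPU utilization (%)
  running=$2  # powered-on servers in the Server Group
  min=$3      # floor of running servers
  max=$4      # ceiling of running servers
  if [ "$cpu" -gt 75 ] && [ "$running" -lt "$max" ]; then
    echo "scale-out: power on a parked server"
  elif [ "$cpu" -lt 25 ] && [ "$running" -gt "$min" ]; then
    echo "scale-in: power off a server"
  else
    echo "hold"
  fi
}

decide_scale 80 3 2 8   # traffic spike
decide_scale 10 3 2 8   # quiet period
```

When the policy is tied to the Load Balancing service, the platform performs the power on/off and traffic routing that this sketch only prints.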
What does it cost to "park" a server? Customers only pay for storage and operating system licensing when a server is powered off. For example, if your mobile web application can satisfy its regular load with three servers (Ubuntu 12.04, 2 CPUs, 6 GB of RAM, 20 GB of storage each), it costs just $15 per month to keep five servers powered off in reserve to handle occasional spikes. That uptime peace of mind will cost you less than lunch for two in a moderately priced Chinese restaurant.
2. Relational database under load? Vertical Autoscale!
Let’s be honest, not EVERY application is designed to scale horizontally by adding more servers to share the load. Rather, many applications benefit from adding horsepower to the existing servers. For instance, relational databases can work in multi-server configurations, but each server typically has a lot of resources allocated. In that case, adding more CPU/memory/storage to a given server is a perfectly viable way to handle new demand.
The CenturyLink Cloud is one of the few providers that offer an automated vertical scaling function. Our Vertical Autoscale service adds CPU capacity to running servers without requiring a reboot, increasing capacity based on utilization criteria that you specify. When the usage spike is over, the Vertical Autoscale service will remove CPU capacity and reboot the server during the window that you select. If you need to add storage or RAM to a server on the fly, you can also update servers manually, typically without a reboot. This is a powerful way to take advantage of cloud elasticity without rebuilding your existing applications for horizontal scale.
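The vertical decision differs from the horizontal one in a key way: growing happens immediately and in place, while shrinking waits for your maintenance window. A sketch of that asymmetry (illustrative thresholds and CPU counts, not the service's actual configuration):

```shell
#!/bin/sh
# Illustrative sketch of the Vertical Autoscale idea: grow a server in
# place under load, shrink it back only in a maintenance window.
vertical_decision() {
  util=$1        # current CPU utilization (%)
  cpus=$2        # CPUs currently allocated
  max_cpus=$3    # ceiling for this server
  in_window=$4   # "yes" during the chosen maintenance window
  if [ "$util" -gt 80 ] && [ "$cpus" -lt "$max_cpus" ]; then
    echo "add CPU (no reboot required)"
  elif [ "$util" -lt 20 ] && [ "$in_window" = "yes" ]; then
    echo "remove CPU and reboot during the window"
  else
    echo "hold"
  fi
}

vertical_decision 90 4 8 no    # under load: grow now
vertical_decision 10 8 8 yes   # quiet, in window: shrink
```

That asymmetry is what makes the approach safe for databases: capacity arrives when the load does, and the disruptive reboot only happens when you say so.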
3. Worker nodes that are falling behind? Horizontal Autoscale!
In loosely coupled, distributed systems, you’ll often find services that work asynchronously in the background. These services may take product orders from your website and update the transaction system, perform financial calculations, render complex animation sequences, and much more. For example, consider a website where people can register for a new, paid service. That system has to perform a fraud check, authenticate a payment method, and create a container for the new user. A "new user signup" message is dropped to a queue, and a set of servers are all tasked with reading data from the queue and processing the request. If the number of signups spikes, these worker nodes can get overwhelmed and the new customers are stuck waiting for their signup confirmation.
In a case like this, it makes a lot of sense to scale the worker nodes horizontally. CenturyLink Cloud Horizontal Autoscale can respond to CPU or memory spikes by powering on (and off!) servers that can instantly help relieve the backlog of queued up requests. Cloud users don’t have to choose a load balancer to associate with an Autoscale policy, so in that case, the Server Group just expands and contracts the number of running servers without worrying about routing traffic to them. A strategy like this can reduce the risk of a poor user experience and encourage customers to trust your application, even during busy periods.
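The sizing math behind this scenario is simple: divide the queue backlog by each worker's throughput and clamp the result to the pool's bounds. A rough sketch with made-up numbers (Horizontal Autoscale triggers on CPU and memory utilization rather than queue depth directly, but the effect is the same):

```shell
#!/bin/sh
# Illustrative: how many workers a queue backlog calls for.
desired_workers() {
  depth=$1        # messages waiting in the queue
  per_worker=$2   # messages one worker can clear per interval
  min=$3          # smallest allowed pool
  max=$4          # largest allowed pool
  n=$(( (depth + per_worker - 1) / per_worker ))  # ceiling division
  if [ "$n" -lt "$min" ]; then n=$min; fi
  if [ "$n" -gt "$max" ]; then n=$max; fi
  echo "$n"
}

desired_workers 950 100 2 12   # signup spike: 10 workers
desired_workers 40 100 2 12    # quiet period: floor of 2
```

Because there's no load balancer in the loop, the Server Group just grows and shrinks the worker pool; the queue itself takes care of distributing the work.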
4. Web application with predictable bursts in usage? Schedule-based Scaling!
We’re probably all familiar with this back-office scenario: at the end of the month, the financial accounting system is overwhelmed by closing activities and invoice generation. To combat these predictable spikes, many companies either (a) deploy systems like this on pricey hardware that always has enough headroom to deal with the spike, or (b) resign themselves to delivering a subpar, slow application during these bursty windows.
There’s a better way! The CenturyLink Cloud is built with automation and management in mind. Apply a "scheduled task" to a server so that it powers on at a specific point each day/week/month to increase application capacity. Create a second scheduled task that powers that server back down when the predictable spike is over. This sort of elasticity is exactly what the cloud is good at, and helps you deliver an optimized application that delights users, keeps costs down, and helps you arrive at business conclusions faster.
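The schedule logic itself is trivial, which is exactly what makes this approach attractive. A sketch of the month-end window check, assuming an illustrative three-day close window (the platform's scheduled tasks handle the actual power on/off):

```shell
#!/bin/sh
# Illustrative: is today inside the month-end close window?
in_close_window() {
  day=$1    # current day of the month
  last=$2   # last day of this month
  if [ "$day" -gt $(( last - 3 )) ]; then
    echo "on"    # extra capacity powered on
  else
    echo "off"   # extra capacity powered off
  fi
}

in_close_window 29 31   # during the close
in_close_window 10 31   # mid-month
```

Two scheduled tasks encode the two branches: one powers the extra server on as the window opens, the other powers it back down when the window closes.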
5. Cache cluster that needs controlled resizing? Manually scale up/out!
You may love automation as much as we do, but sometimes a scale event requires careful planning and manual resizing because of complexity with the target application. You may not want an automated service resizing your NoSQL database, cache cluster, or mission critical line of business system whenever it detects a heavy load.
In cases like this, you can choose from the full catalog of elasticity options that the cloud provides. Experiencing I/O contention and want to add more servers to spread the intense demand? Clone a running server or quickly build a new one from scratch. Need to add storage to a server that’s rapidly running out of room? Add more space to an existing volume, or add a new volume to the running server. Looking to add CPU or memory to a server and then update the application to recognize the new capacity? Immediately add resources and run a script against all the resized servers.
CenturyLink Cloud Scaling Tools Deliver Elasticity
Elasticity is a hallmark of the public cloud. It helps you maintain a dynamic resource pool that expands and contracts to meet business demand. The CenturyLink Cloud offers a leading set of services to help you automatically and manually adjust capacity for one server, or a fleet of servers.
As you migrate applications to the cloud—or design entirely new cloud-native ones—do it with scalability and elasticity in mind!
Related Resources: Hyperscale Server, Cloud Servers, Object Storage, Cloud Orchestration
People are right to be wary of any vendor claiming to be the “top performing!” or “fastest!” cloud provider. Most folks know that ANYTHING can look spectacular – or unspectacular for that matter – if you stack the deck just right. But at the same time, cloud shoppers have a deep hunger for legitimate information on realistic performance expectations. Cloud performance has a direct impact on what you spend on compute resources, how you decide the right host for your workload, and how you choose to scale when the need arises. In this blog post, we’ll summarize some recent findings and put them in context.
With the launch of our new Hyperscale instances, we approached an independent analytics company, CloudHarmony, and asked them to conduct an extended performance test that compared CenturyLink Cloud Hyperscale servers to the very best equivalent servers offered by AWS and Rackspace. CloudHarmony is a well-respected shop that collects data from dozens of benchmarks and shares the results publicly for anyone to dissect. After running a variety of benchmarks over a long period of time (to ensure that the test gave an accurate look over an extended window), they shared their findings with the world. See the report for the full details.
The results were positive – as we’ll talk through below – but how do reliable performance metrics help you in your cloud journey?
More Bang for the Buck
In an ideal world, you want reliable performance at a fair market price and no hidden charges. In the CloudHarmony results, we saw that our Hyperscale SSD storage provided excellent disk read performance and strong disk write performance through a variety of tests. In the results below – run against AWS c3 servers and Rackspace Performance servers – you can see that Hyperscale has a fantastic IO profile for large block sizes.
Why does this matter? Consider a database running on Microsoft SQL Server, which often works with 64 KB blocks. By running this workload on Hyperscale, you get persistent storage, high performance, and no charges for IO requests or provisioned IOPS. This results in predictable costs and fewer resources needed to achieve optimal performance.
Simplified Decision Making
Choice is great, but it can also be a paradox. When you’re faced with dozens of server types to choose from, you find yourself selecting a “best fit” that may compromise in one area (“too much RAM!”) in order to get another (“need 8 CPUs”). In CenturyLink Cloud, we have two classes of servers (Standard and Hyperscale), and both have been shown to deliver reliable performance. Pick whatever amount of CPU or memory makes sense – which is, of course, how traditional servers have always been purchased.
If built-in data redundancy doesn’t matter, but reliable, high performance does, choose Hyperscale. Need strong, consistent performance but want daily storage snapshots and a SAN backbone? Use Standard servers. Straightforward choices mean that you spend less time navigating a gauntlet of server types and more time deploying killer applications.
Predictable Performance & Scaling
Valid performance testing results can help you understand how best to scale an application. Should I add more capacity to this VM, or does it make sense to add more VMs to the environment? That’s a hard question to answer without understanding how the platform reacts to capacity changes. The CloudHarmony results not only showed that the CenturyLink Cloud Hyperscale CPU performed better than the others in the “Performance Summary Metric” that compared cloud servers to a bare metal reference system, but also showed that performance improved as CPU cores were added. That’s obviously not shocking, but it’s good to see that the performance change was relatively linear.
How does this information help you maximize your cloud portfolio? If you know that you can add resources to a running VM *before* scaling out to new hardware, that can simplify your infrastructure and lower your costs. Scaling out is a fantastic cloud pattern, but it doesn’t always have to be the first response. You can trust that Hyperscale scales out *and* up well, and you can plan your scaling events accordingly.
Performance metrics are only a snapshot in time. The individual results may change from month to month or year to year, but a reliable performance profile means that you can minimize costs, make decisions faster, and make predictable choices.
Want to read this CloudHarmony report in full? Simply get it here and see all the details about this thorough analysis. Price out a Hyperscale server for yourself, and sign up to take the platform for a spin!
Recent history has shown that after a cloud provider is acquired, the pace of innovation slows and there’s a loss of focus (and staff). If you don’t believe me, check out the release notes (if you can find them!) of some recently acquired cloud companies. It’s not pretty. I’m here to say that we’re different.
140 days ago, the acquisition of Tier 3 by CenturyLink was described as a "transformational deal for the industry." Instead of bogging down Engineering post-acquisition with unnecessary process and haphazard integrations with legacy or redundant products, we’ve actually accelerated the pace of development on our go-forward platform, CenturyLink Cloud. In the past four months, we’ve maintained our software release cadence, grown our team, expanded our data center footprint, actively integrated with our parent company, and solidified a game-changing vision that has retained and attracted a phenomenal set of customers.
We update our cloud platform every month with new, meaningful capabilities. Only a very small subset of cloud providers can make that claim. In the past 140 days, we’ve shipped over 1,200 features, enhancements, and fixes. This includes a new high performance server class, faster virtual machine provisioning, new reseller services, a major user interface redesign, a compelling monitoring/alerting service, a new RESTful API, and a pair of new data centers.
Our ambitious data center expansion is on track. In the past few weeks, we’ve lit up a pair of new data centers in the US. This gives customers access to world-class CenturyLink network, security, and management services in those locations. With 11 total data centers, the CenturyLink Cloud has a greater geographic breadth than all but two public cloud providers. That’s pretty awesome for our customers who want a highly distributed environment for running their portfolio of applications.
Our Engineering team has also grown as additional experienced developers have come on board and contributed in a major way. The Operations team continues to scale out as well while becoming even more efficient at managing infrastructure at scale. Just as important, we’ve integrated with the broader CenturyLink teams and have a single, comprehensive vision for delivering multiple infrastructure options on a unified platform to a global customer base. Why should organizations compromise when trying to fit their needs into the cloud? With CenturyLink, customers can consume co-location, dedicated hardware, managed services, public infrastructure-as-a-service, and platform-as-a-service all with a single provider. And we’re working to integrate these options into a groundbreaking customer experience.
We aren’t close to being done disrupting this space. The next 140 days will be just as exciting. Try out our compelling platform, or join the team building the future of cloud and infrastructure.