Cloud. It’s a disruptive force. Here in Seattle some may think of it only in terms of weather, but at CenturyLink we know it’s something much greater, and its impact on our organization is being felt company-wide. It’s been less than a year since CenturyLink acquired Bellevue-based Tier 3 and branded it CenturyLink Cloud. At the time, we announced our plans to open the Cloud Development Center in the Seattle area, saying “Tier 3’s products, roadmap and vision are now the foundation of CenturyLink’s cloud strategy and anchor the new Seattle-based CenturyLink Cloud Development Center.”
Why Seattle? It turns out Seattle is the center of the cloud universe. Forbes ranked it the best city for tech jobs. We all know that Amazon has an enormous campus in South Lake Union and Microsoft is headquartered in Redmond, but did you know that Google bases cloud development teams in Seattle and Kirkland? And, since the entire country believes it rains here 24/7, can you think of a more natural home for the Cloud Development Center?
Officially opening on October 14th, the CenturyLink Cloud Development Center encompasses almost 30,000 square feet in Bellevue, WA. Beyond its partnership with CenturyLink Field (go Seahawks!), CenturyLink has truly invested in our presence in Seattle. Today our more than 1,500 local employees serve our consumer and business customers with services ranging from 1 Gig broadband to three local data centers offering a wide range of colocation-to-cloud options. At the Cloud Development Center, the engineering staff, which has doubled in size since November, will grow to more than 250 professionals. Anchored by a Cloud Executive Briefing Center, the Cloud Development Center will host CenturyLink clients from around the globe for discussions surrounding the evolving role of cloud in hybrid IT strategies. The Cloud Development Center will also be a hub for community events such as startup and developer meetups; one example is the upcoming Cloud Foundry Meetup we’ll host on October 16th.
Form Meets Function: Designing for Today’s Workforce
The Cloud Development Center features a flowing, open design that reflects our employees’ personalities.
One cultural artifact: during the job interview process, all potential employees are asked to name a favorite movie and explain why. Yes, this is all about understanding how the candidate thinks. The movie question has become so ingrained in the new center that meeting rooms are named after movie places: Death Star, Jack Rabbit Slims, Thunderdome and the Bat Cave. And the break room wall (pictured) features a few favorite lines, all suggested by employees. Can you name the movies they came from? (See the end of the article for the answers.) There is also a little local flavor: the tables pictured were fabricated by the same manufacturer who made the reclaimed distressed-wood tables at CenturyLink Field.
Watch for more over the next month about the CenturyLink Cloud Development Center as we get ready to celebrate its Grand Opening on October 14th.
Movie Wall Quiz Answers:
We have a hulk – The Avengers
Do or do not, there is no try – Star Wars V – The Empire Strikes Back
Fear is the mind killer – Dune
Come get some – Army of Darkness
A new vulnerability was recently identified in the “bash” shell, which is a default component of most Linux operating systems deployed globally today. This vulnerability, dubbed “Shellshock,” is being compared to the Heartbleed bug discovered earlier this year because of the widespread use of the affected Linux operating systems.
Shellshock has been assigned the highest possible risk rating of “10” under the Common Vulnerability Scoring System (CVSS). Why? The vulnerability can be exploited across the network, it requires no authentication, and it is simple to exploit.
Unmanaged Customers - Patch Your Systems in the CenturyLink Cloud Immediately
If you have instances running a Linux operating system in CenturyLink Cloud data centers, you are likely affected. Our unmanaged customers are responsible for day-to-day configuration and deployment of these systems, so it is the customer’s responsibility to remediate any affected systems.
We recommend you apply the updates for this vulnerability as quickly as possible. This is especially important for servers running Apache, as published exploits targeting Apache websites are already circulating.
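Before and after patching, it can help to verify a host directly. The widely published one-line test for the original Shellshock flaw (CVE-2014-6271) defines a crafted environment variable and checks whether bash executes the code appended to it:

```shell
# One-line Shellshock check (CVE-2014-6271).
# The crafted value of x looks like an exported function definition;
# unpatched bash versions also execute the command appended after it.
env x='() { :;}; echo vulnerable' bash -c 'echo this is a test'
```

A vulnerable bash prints “vulnerable” before “this is a test”; a patched bash rejects the trailing code (typically with a warning about the function definition) and prints only the final line. Note that later, related CVEs required their own fixes, so applying your distribution’s latest bash packages is still the safest course.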
Managed Customers – Request Patching via Ticket with Managed Services Help Desk
Customers running managed environments (including Apache) on CenturyLink Cloud will have their systems patched upon request. To initiate a request, open a ticket with the CenturyLink Cloud Managed Services team. CenturyLink hosting engineers and operations are currently working with multiple software vendors to enable the necessary critical patches for quick resolution.
Actions Taken by the CenturyLink Cloud Team
CenturyLink Cloud has assessed our infrastructure and we will be updating all OpenVPN servers with the patches that fix this bug. You will receive additional communication from us when those updates are scheduled. Any additional updates will be posted to this blog article so please check back regularly.
Information on Patches for Each Linux Distribution:
[2014-09-25 9:30AM PT] Original Post
[2014-10-06 11:29AM PT] Update: All externally facing systems, including customer OpenVPN servers managed by CenturyLink Cloud, have been updated.
A few weeks ago, we announced CenturyLink Private Cloud – a new approach to the private cloud segment that offers breakthrough simplicity for large enterprises. CenturyLink Private Cloud is designed for those looking to deploy a transformational private cloud, instead of eking out incremental gains.
Entering a new market segment is a significant undertaking for any product organization. Every solution requires a series of trade-offs (just ask any product manager), and the development of CenturyLink Private Cloud is no exception. So what were the trade-offs that we made, and how do they compare to other private cloud alternatives? For a little insight into how we evaluated the private cloud market landscape and our decision-making process, read on.
Hybrid cloud is becoming a standard operating model for many organizations. But how can you realize the expected agility when there are so many challenges ahead of you? In this series of articles, we’ve dissected each challenge and proposed some corresponding solutions. Whether you’re facing security and network concerns, or integration and system management issues, it’s critical to have a proactive plan in place. This final article rounds out the discussion by looking at ways to address the issues around portability, compatibility, and your existing toolset.
Solutions to Hybrid Cloud Challenges
In many cases, a hybrid cloud is the combination of complementary, but not identical, computing environments. This means that processes, techniques, and tools that work in one place may not work in another.
Compatibility. Gluing together two distinct environments does not come without challenges. It’s possible that you have the same technology stack in both the public and private cloud environments, but the users, technology, and processes may still be dissimilar!
- Move above the hypervisor. Even if your public cloud provider supports the import and export of virtual machines in a standard format, no legitimate public cloud exposes hypervisor configurations to the user. If you want to have a consistent experience in your hybrid cloud, avoid any hypervisor-level settings that won’t work in BOTH environments. Tune applications and services, and start to wean yourself off of specific hypervisors.
- Consider bimodal IT needs. If you subscribe to the idea of bimodal IT, then embrace these differences and don’t try to force a harmonization where none exists. Some traditional IT processes may not work in a public cloud. If the more agile groups at your organization are most open to using the public cloud and setting up a hybrid cloud, then cater more to their needs.
- Be open to streamlining and compromise. The self-service, pay-as-you-go, elastic model of public cloud is often in direct conflict with the way enterprise IT departments manage infrastructure. Your organization may have to loosen the reins a bit and give up some centralized control in order to establish a successful hybrid cloud. Review existing processes and tools, identify which will not work in a hybrid environment, and incubate ways to introduce new efficiencies.
Portability. One perceived value of a hybrid cloud is the ability to move workloads between environments as the need arises. However, that’s easier said than done.
- Review prerequisites for VM migration. A virtual machine in your own data center may not work as-is in the public cloud. Public cloud providers may impose a variety of constraints on operating system choice, virtual machine storage size, open ports, and the number of NICs.
- Embrace standards between environments. Even if virtual machines are portable, the environmental configurations typically aren’t. Network configurations, security settings, monitoring policies, and more are often tied to a specific cloud. Look to multi-cloud management tools that expose compatibility layers, or create scripting that re-creates an application in a standard way.
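One way to put that last point into practice is a version-controlled provisioning script that re-creates an application environment the same way on any VM, rather than depending on a portable image or cloud-specific settings. The sketch below uses assumed names and paths (none come from this article):

```shell
#!/bin/sh
# Hypothetical provisioning sketch: rebuild the application layout and
# render environment-specific configuration from one script, so the same
# steps run unchanged on the private or public side of a hybrid cloud.
set -e                                # stop on the first failed step

APP_DIR="${APP_DIR:-$HOME/webapp}"    # install location (illustrative)
ENVIRONMENT="${ENVIRONMENT:-public}"  # which cloud this instance is in

# re-create the directory layout on every run
mkdir -p "$APP_DIR/releases" "$APP_DIR/shared/config" "$APP_DIR/shared/log"

# render one configuration template with the environment-specific values
cat > "$APP_DIR/shared/config/app.conf" <<EOF
environment=$ENVIRONMENT
log_dir=$APP_DIR/shared/log
EOF

echo "provisioned $APP_DIR for the $ENVIRONMENT environment"
```

A real script would also install packages and register services; the point is that the recipe, not the VM image, becomes the portable artifact.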
Tooling and Skills. Even if you have plans for all of the items above, it will be hard to achieve success without robust tooling and talented people to design and operate your hybrid cloud.
- Invest in training. Your team needs new skills to properly work in a hybrid cloud. What skills are most helpful? Your architects and developers should be well-versed in distributed web application design and know what it means to build scalable, resilient, asynchronous applications. Operations staff should get familiar with configuration management tools and the best practices for repeatedly building secure cloud environments.
- Get hands-on experience. Even if you’re using a private cloud hosted by someone else, don’t outsource the setup! Participate in the hybrid cloud buildout and find some initial projects to vet the environment and learn some do’s and don’ts.
- Modernize your toolset. The tools that you used to develop and manage applications 5-10 years ago aren’t the ones that will work best in the (hybrid) cloud today, let alone 5-10 years from now. Explore NoSQL databases that excel in distributed environments, use lightweight messaging systems to pass data around the hybrid cloud, try out configuration management platforms, and spend time with continuous deployment tools that standardize releases.
Taking the Next Steps
Hybrid cloud can be a high risk, high reward proposition. If you do it wrong, you end up with a partially useful but frustratingly mediocre environment that doesn’t stop the growth of shadow IT in the organization. However, if you build a thoughtfully integrated hybrid cloud, developers will embrace it, and your organization can realize new efficiencies and value from IT services. How can CenturyLink help? We offer an expansive public cloud, a powerful private cloud, and a team of engineers who can help you design and manage your solutions.
As hybrid cloud adoption grows, proper architecture and design of these solutions becomes critical. In the first part of this article series, we discussed the challenges any organization faces when linking public and private cloud environments. The second article outlined strategies for mitigating the network and security challenges of hybrid cloud. In this third of four articles, we will assess success strategies for application integration and system management in hybrid clouds.
Solutions to Hybrid Cloud Challenges
Data and Application Integration. Nearly every useful system is made up of data and business logic from multiple applications. Siloed, monolithic systems are fading in popularity as more dynamic systems take their place. But as you look to work with data and applications in a hybrid cloud, you need to keep a few things in mind.
- Recognize the presence of data gravity. The concept of data gravity—a principle identified by Dave McCrory that claims that applications and services are drawn closer to large collections of data—comes into play in a hybrid cloud. Do you find yourself shuttling data back and forth over long distances? Would it make sense to move some of your large data repositories to whichever cloud most of the consuming applications are running in? Bulk data movement between on-premises and public cloud systems can get slow, so look for ways to optimize placement based on known integration points.
- Map secure integration paths. Some services in your hybrid cloud may be software-as-a-service (SaaS) products that don’t offer private network tunnels for communication. When creating a hybrid application integration strategy, consider tools—such as the Informatica Cloud or SnapLogic—that make it possible to securely transfer data from public SaaS platforms to systems behind your corporate firewall.
- Know your technical constraints. The applications in your data center are probably only limited by the hardware they run on. However, most multi-tenant cloud systems apply resource governors to make sure that no single consumer can swamp the platform with requests. Make sure that you understand the constraints of each public cloud in your hybrid architecture and refactor any integration processes that would obviously violate these constraints.
- Design for failure. When systems span environments in a hybrid scenario, the risk of localized failures goes up. Microservices and distributed components make for a more flexible architecture. The flipside, however, is that your system requires greater resilience. Work with your architects and developers to ensure that hybrid cloud applications can fail fast or apply circuit breakers to bypass failed components.
System Management—Work Smarter, Not Harder. This seems to be one of those areas that doesn’t factor heavily into a company’s first assessment of cloud costs. Ongoing maintenance is a part of nearly every server environment, unless you’re among the few who successfully run immutable servers. How can you mitigate this challenge?
- Invest in configuration management. Configuration management tools like Chef, Ansible, Puppet, and Salt are now mainstream and you can find plenty of expert material on how to use each platform. Why do those tools matter? It’s one thing to have inconsistencies in a small server environment where manual intervention is annoying, but not catastrophic. It’s another thing entirely to tolerate “configuration drift” at scale! If you set up configuration management across your hybrid environment, it becomes possible to manage a constantly growing fleet of servers without corresponding increases in administrator headcount.
- Look for ways to perform management in bulk. Even if you do not have a full configuration management platform in place, aggressively pursue options that let you manage your assets in bulk instead of one at a time. Use scripting to programmatically interact with many servers at once, or leverage group-based management capabilities found in platforms like CenturyLink Cloud.
- Consider agent-based monitoring solutions that feed a centralized repository. In the public cloud, you will likely not have the same level of control that you have in a private environment. Don’t assume that you can tap into the underlying virtualization layer of the public cloud, but rather, use server-based agents that can provide granular machine-level statistics. If you want to apply a standardized alerting process across your hybrid cloud, collect all the monitoring data into a centralized repository where it can be analyzed and acted on.
- Make it easy to find cloud resources. Classic configuration management databases won’t survive in a hybrid environment. Clouds are defined by their elasticity, and servers will be created and torn down at will. Trying to manually keep a tracking system in place is a fool’s errand. Instead, figure out how to organize and find your dynamic compute resources in a way that helps your team. In the CenturyLink Cloud, you can use Server Groups to create collections of related servers, and leverage our Global Search to quickly find assets across any data center.
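The configuration management advice above boils down to one pattern: describe the desired state, check the actual state, and change only what differs. A toy sketch of that idempotent pattern in plain shell (the file and setting are purely illustrative) looks like this:

```shell
# Desired-state sketch: the kind of check-then-change step that Chef,
# Puppet, Ansible, or Salt resources perform for you at scale.
CONF="$(mktemp)"    # stand-in for a real config file such as sshd_config

ensure_line() {
    line=$1
    # idempotent: append the line only when it is not already present,
    # so repeated runs never duplicate it (no configuration drift)
    grep -qxF "$line" "$CONF" || echo "$line" >> "$CONF"
}

ensure_line "PermitRootLogin no"
ensure_line "PermitRootLogin no"    # a second run is a harmless no-op

grep -cx "PermitRootLogin no" "$CONF"    # prints 1: exactly one copy
```

Running the same logic through a real configuration management platform adds the inventory, scheduling, and reporting needed to apply it across an entire hybrid fleet.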
System management can be an unexpected – but critical – new cost of hybrid cloud computing. Your focus should be on streamlining processes and management at scale, not preserving all aspects of the current state. Data and application integration strategies for hybrid cloud help you place workloads where they make the most sense, and not sacrifice the benefits of each environment. In our final article of the series, we will take a look at how to succeed in the face of compatibility, portability, and tooling challenges.