Organizations migrating applications to public IaaS providers must continue to deliver an outstanding end-user experience while maintaining security, visibility, and control. F5 application and security services can achieve these goals while providing agility, consistent application and security policies, and operational cost-efficiency.
Enterprises and organizations from all industries and sectors are migrating or deploying new applications to IaaS public cloud providers to achieve greater agility, faster time to market, and flexible utility payment models. Whether these applications are revenue generating or business critical, they must deliver the same great user experience, along with the associated availability, performance, and security services.
However, there are challenges that need to be addressed, including determining which workloads are suitable for the cloud due to the inherent design of cloud data centers, the application delivery and security capabilities of each cloud provider, and the overall lack of visibility and control.
These challenges may lead to slow and expensive customized cloud implementations, prevent cloud vendor choice and mobility, and increase the risk of security vulnerabilities. This paper reviews key considerations and known inhibitors to the successful migration of applications to the public cloud, and explains why deploying F5® BIG-IP® application and security services—available with flexible licensing models in the leading cloud IaaS providers—is a critical element of that success.
Challenges in Migrating Applications to the Public Cloud
While we all recognize the benefits of the public cloud, the fact is that there are significant differences between how an application runs in a public IaaS provider data center designed for multiple tenants and how it runs in your private enterprise data center. The public cloud provider will have designed its data centers and networks with massive scalability in mind, using virtualization, commoditization, and
standardization to drive down costs.
The level of network control is different, access to L2 functionality (e.g., multicast, 802.1q VLAN tagging, etc.) will be limited, and
you may only get one public-facing IP for each application. Adding compute capacity is done by scaling out many small instances versus scaling up via high-performance dedicated hardware, which directly impacts maintaining state on any particular element or node.
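The consequence for state can be made concrete with a toy example: if session state is externalized to a shared store, any scaled-out instance can serve any request. This is a minimal sketch only; the in-memory dict stands in for an external store such as Redis, and all function and variable names are illustrative assumptions, not part of any F5 or cloud-provider API.

```python
import uuid

# In-memory dict standing in for a shared external session store
# (e.g., Redis). Illustrative assumption, not a real product API.
session_store = {}

def handle_login(user):
    """Any instance can create a session; state lives in the shared store."""
    token = str(uuid.uuid4())
    session_store[token] = {"user": user}
    return token

def handle_request(token):
    """Any scaled-out instance can serve the request by token lookup."""
    session = session_store.get(token)
    return session["user"] if session else None

token = handle_login("alice")
print(handle_request(token))  # → alice
```

Because no instance keeps the session locally, instances can be added or removed freely, which is exactly the scale-out pattern cloud data centers favor.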
Application workloads suitable to move to the cloud
Though the ideal goal is to “lift and shift” to the public cloud, some applications may not be able to be moved without design changes or being completely re-architected. Deciding which applications to migrate to the public cloud and what changes are needed is key. Questions and criteria for deciding which apps to move should be established before migration begins.
Every application requires app and security services regardless of location. Each cloud provider’s tool sets and services for availability, performance, and security will differ in capabilities and management, and may incur additional costs that need to be factored in.
Learning and configuring these new services for your requirements will require time, testing, and training. This can create prohibitive switching costs when using a multiple-cloud-provider strategy, resulting in cloud vendor lock-in.
Most important, depending on your specific application requirements, these services may not be adequate or provide the same capabilities as those you use today. This can limit business flexibility in choosing cloud providers and increases the risk and complexity of cloud migration.
There are three principal challenges:
Making sure that applications are secure in the public cloud is the top concern for most organizations. Protecting against the sophisticated, blended L3–7 security threats, where multiple types of volumetric DDoS attacks are combined with app layer attacks (OWASP Top Ten, cross-site scripting, SQL injection, etc.) is critical.
Another consideration is the inconsistency of access and application security policies when using the cloud provider’s basic security tools. This can increase attack surfaces and expose the vulnerabilities related to provisioning and deprovisioning access for users, especially the bad actors. Organizations need the ability to replicate and enforce consistent and proven security policies and access across the private data center and cloud.
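As a simplified illustration of the kind of app-layer request inspection such security services perform, the sketch below flags two crude attack signatures. This is not how BIG-IP or any production WAF works; the patterns and the function are hypothetical and far weaker than real signature sets, which are shown here only to make the L7 inspection idea concrete.

```python
import re

# Crude, illustrative signatures only; production WAFs use far richer
# detection. These patterns are assumptions for the sketch.
ATTACK_PATTERNS = [
    re.compile(r"(?i)\bunion\b.+\bselect\b"),  # naive SQL injection check
    re.compile(r"(?i)<script\b"),              # naive cross-site scripting check
]

def is_suspicious(value: str) -> bool:
    """Return True if any crude attack signature matches the input."""
    return any(p.search(value) for p in ATTACK_PATTERNS)

print(is_suspicious("id=1 UNION SELECT password FROM users"))  # → True
print(is_suspicious("q=<script>alert(1)</script>"))            # → True
print(is_suspicious("q=cloud migration"))                      # → False
```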
Advanced traffic management beyond basic load balancing is typically deployed for business-critical and major enterprise applications. While cloud providers may offer basic load-balancing services, you should consider what protocol support beyond HTTP/HTTPS and TCP will be needed. Are basic health checks and load-balancing algorithms such as hash-based and round robin sufficient? Application data manipulation is often needed, which requires full L7 proxy functions, such as URL inspection and rewrite.
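To make the L7 manipulation idea concrete, here is a minimal URL-rewrite sketch in Python; the rule table and function are hypothetical examples for illustration, not F5 iRule syntax or any cloud provider's configuration language.

```python
from urllib.parse import urlsplit, urlunsplit

# Hypothetical prefix-rewrite table; illustrative only.
REWRITE_RULES = {"/old-app": "/app/v2"}

def rewrite_url(url: str) -> str:
    """Rewrite the path prefix of a URL, preserving host, query, fragment."""
    parts = urlsplit(url)
    path = parts.path
    for prefix, target in REWRITE_RULES.items():
        if path.startswith(prefix):
            path = target + path[len(prefix):]
            break
    return urlunsplit((parts.scheme, parts.netloc, path, parts.query, parts.fragment))

print(rewrite_url("https://example.com/old-app/login?x=1"))
# → https://example.com/app/v2/login?x=1
```

A full L7 proxy applies this kind of transformation to live traffic in both directions, which is precisely what basic cloud load balancers may lack.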
How does the cloud provider address uptime and resiliency of its infrastructure? Typical infrastructure availability targets are 99.95% for the larger providers, which may be lower than what you require. More important, understanding the risks and finding ways to mitigate the effect of an outage are critical. Some providers have redundant data centers and locations in multiple geographic regions to maintain
availability in the face of major failure modes, such as natural disasters. Leveraging the redundancy requires careful planning that factors in specific implementation details, latency, and failover/recovery times.
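The arithmetic behind an availability target is worth making explicit: even a 99.95% SLA permits measurable downtime each year, as this quick calculation shows.

```python
# Back-of-envelope downtime permitted by an availability target,
# using a non-leap year (365 * 24 = 8,760 hours).
def allowed_downtime_hours_per_year(availability_pct: float) -> float:
    return (1 - availability_pct / 100) * 365 * 24

print(round(allowed_downtime_hours_per_year(99.95), 2))  # → 4.38
print(round(allowed_downtime_hours_per_year(99.99), 2))  # → 0.88
```

Roughly four and a half hours of permitted downtime per year at 99.95% may be acceptable for some workloads and not for others, which is why redundancy planning beyond the provider SLA matters.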
End-user experience and productivity will continue to be vital and are dependent on how well the application performs once in the cloud. The data center may be farther away from your users, which means increased latency between the end user and application, impacting performance.
Some of the methods that are typically used, such as caching, compression, and TCP optimizations, may not be available. Ensuring that users get directed to the closest location is another requirement. One of the key reasons for going to a public cloud is to gain flexible, on-demand allocation of resources to address spikes in demand—planned and unplanned—based on predefined thresholds. Applications where a short-lived burst in capacity or highly variable demand can be expected may be better candidates for temporary migration to a public cloud provider.
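To illustrate why compression is worth having when latency to the data center grows, the sketch below compresses a repetitive sample payload with Python's standard zlib; the payload and resulting ratio are illustrative only, not a benchmark.

```python
import zlib

# Repetitive sample payload (illustrative); text-like app responses
# often compress well, shrinking bytes on the wire and thus transfer time.
payload = b'{"id": 1, "name": "widget"},' * 200

compressed = zlib.compress(payload, level=6)
ratio = len(compressed) / len(payload)
print(len(payload), len(compressed), round(ratio, 3))
```

Fewer bytes on the wire means fewer round trips and less time on a high-latency path, which is the effect a delivery service provides when the cloud provider does not.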
Application and security services and policies that don’t follow applications from the data center to the cloud will require customized implementations for each cloud provider. Organizations need insight into application performance, security, and application access by users to determine when workloads should be moved from one location to another. This requires visibility into user interactions with
applications and the user experience across all deployment infrastructures. Policy sprawl, variability, and complexity per app and per provider, combined with lack of coherent visibility, can lead to increased OPEX, reduced service velocity, and a degraded customer experience.
Contact Musato Technologies for more information.