Serverless computing is a style of programming for cloud platforms that is changing the way applications are built, deployed, and, ultimately, consumed. So where do servers enter the picture?
Serverless computing is not, despite its name, the elimination of servers from distributed applications. Serverless architecture refers to a kind of illusion, originally made for the sake of developers whose software will be hosted in the public cloud, but which extends to the way people eventually use that software. Its main objective is to make it easier for a software developer to compose code, intended to run on a cloud platform, that performs a clearly-defined job.
If all the jobs on the cloud were, in a sense, “aware” of one another and could leverage each other’s help when they needed it, then the whole business of whose servers are hosting them could become trivial, perhaps irrelevant. And not having to know those details might make these jobs easier for developers to program. Conceivably, much of the work involved in attaining the desired result might already have been done.
“There are no servers to manage or provision at all,” said Chris Munns, senior developer advocate for serverless at AWS, during a session at the re:Invent 2017 conference. “This includes nothing that would be bare metal, nothing that’s virtual, nothing that’s a container — anything that involves you managing a host, patching a host, or dealing with anything on an operating system level, is not something you should have to do in the serverless world.”
AWS’ serverless, functional service model is called Lambda. Its name comes from the lambda calculus, a long-standing mathematical notation in which an abstract symbol stands for a function.
Serverless computing has been pitched to developers as a way to produce code more the way it was done in the 1970s, and even the ’60s, when everything was stitched together on a single system. But that’s not a selling point that enterprises care much about. For the CIO, the message is that serverless changes the economic model of cloud computing, with the hope of introducing efficiency and cost savings.
Improved utilization — The typical cloud business model, which AWS championed early on, involves leasing either machines — virtual machines (VMs) or bare-metal servers — or containers (such as Docker or OCI containers) that are reasonably self-contained entities. Virtually speaking, since they all have network addresses, they may as well be servers. The customer pays for the length of time these servers exist, in addition to the resources they consume.
With the Lambda model, what the customer leases is instead a function — a unit of code that performs a job and yields a result, usually on behalf of some other code (which may be a typical VM or container, or conceivably a web application).
The customer leases that code only for the length of time in which it’s “alive” — just for the small slices of time in which it’s operating. AWS charges based on the size of the memory space reserved for the function, for the amount of time that space is active, which it calls “gigabyte-seconds.”
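As a back-of-the-envelope illustration of that billing model, the charge for a single invocation can be sketched in a few lines. The per-gigabyte-second rate below is a placeholder for illustration, not AWS’s actual published price:

```python
# Sketch of the gigabyte-second billing model described above.
# The rate is a placeholder; consult current provider pricing.
RATE_PER_GB_SECOND = 0.0000166667  # hypothetical USD per GB-second

def invocation_cost(memory_mb: float, duration_ms: float) -> float:
    """Cost of one function invocation under a GB-second model."""
    gigabytes = memory_mb / 1024.0
    seconds = duration_ms / 1000.0
    return gigabytes * seconds * RATE_PER_GB_SECOND

# A 512 MB function running for 200 ms consumes 0.1 GB-seconds.
print(invocation_cost(512, 200))
```

The key point is that the meter runs only while the function is executing; a function that sits idle for a month but runs for two seconds bills for two seconds.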
Separation of powers — One objective of this model is to increase the developer’s productivity by taking care of the housekeeping, bootstrapping, and environmental matters (the dependencies) in the background.
This way, at least theoretically, the developer is freer to concentrate on the specific function he’s trying to provide. This also compels him to think about that function much more objectively, thus producing code in the object-oriented style that the underlying cloud platform will find easier to compartmentalize, subdivide into more discrete functions, and scale up and down.
Improved security — By constraining the developer to use only code constructs that work within the serverless context, it’s arguably more likely the developer will produce code that conforms with best practices, and with security and governance protocols.
Time to production — The serverless development model aims to radically reduce the number of steps involved in conceiving, testing, and deploying code, with the aim of moving functionality from the idea stage to the production stage in days rather than months.
Uncertain service levels — The service level agreements (SLAs) that normally characterize public cloud services have yet to be ironed out for FaaS and serverless. Although other Amazon compute services have clear and explicit SLAs, AWS has actually gone so far as to characterize the lack of an SLA for Lambda functions as a feature, or a “freedom.” In practice, the performance patterns for FaaS functions are so indeterminate that it’s difficult for the company, or its competitors, to decide what’s safe to promise.
Untested code can be costly — Since customers typically pay per function invocation (for AWS, the standard arbitrary maximum is 100), it’s conceivable that someone else’s code, linked to yours by way of an API, may spawn a process where the entire maximum number of invocations is consumed in a single cycle, instead of just one.
Monolithic tendency — Lambda and other functions are often brought up in conversation as an example of creating small services, or even microservices, without too much effort expended in learning or knowing what those are.
(Think of code that’s subdivided into very discrete, separated units, each of which has only one job, and you get the basic idea.) In practice, since each organization tends to deploy all its FaaS functions on one platform, they all naturally share the same context.
But this makes it difficult for them to scale up or down as microservices were intended to do. Some developers have taken the unexpected step of melding their FaaS code into a single function, in order to optimize how it runs.
Yet that monolithic choice of design actually works against the whole point of the serverless computing principle: If you were going to go with a single context anyway, you could have built all your code as a single Docker container and deployed it on Amazon’s Elastic Container Service for Kubernetes, or any of the growing multitude of cloud-based containers-as-a-service (CaaS) platforms.
Clash with DevOps — By actively relieving the software developer from responsibility for understanding the requirements of the systems hosting his code, one of the threads necessary to achieve the goals of DevOps — mutual understanding by developers and operators of each other’s needs — may be severed.
More than any other commercial or open source organization, AWS has taken the lead in defining serverlessness with respect to consumers and the serverless business model. But its entry into the field immediately triggered the other major cloud service providers to enter the FaaS market (whether or not they adopt the serverless motif in its entirety): Azure Functions is Microsoft’s approach to the event-driven model. Google Cloud Functions is that provider’s serverless platform. And IBM Cloud Functions is IBM’s implementation of the open source OpenWhisk serverless framework.
Another phrase used by Amazon and others in marketing serverless services is functions-as-a-service (FaaS). From a developer’s perspective, it’s a lousy phrase, since functions in the source code have always been, and always will be, services. But the “service” that’s the subject of the capital “S” in “FaaS” is the business service, as in cloud “service” provider. The service there is a unit of consumption. You’re not paying for the server but for the thing it hosts, and that’s where AWS has stashed the server.
Amazon uses the terms “serverless” and “FaaS” interchangeably, and for purposes of the customers who do business in the realm of AWS, that’s fair. But in the broader world of software development, they are not synonymous.
Serverless frameworks can, and increasingly do, span the boundaries of FaaS service providers. The idea here is, if you truly don’t care who or what provides the service, then you shouldn’t be bound by the rules and restrictions of AWS’ cloud, should you?
“The idea is, it’s serverless. But you can’t define something by saying what it’s not,” explained David Schmitz, a developer for Germany-based IT consulting firm Senacor Technologies, speaking at a recent open source conference in Zurich.
Citing AWS’ definition of serverless computing from its customer web site, Schmitz said, “They say you can do things without thinking about servers. There are servers, but you don’t think about them.
And you are not required to manually provision them, to scale them, to manage them, to patch them up. And you can focus on whatever you are really doing. That means, the selling point is, you can focus on what matters. You can ignore everything else.
“You will see that this is a big lie, obviously,” he continued.
In his recent O’Reilly book Designing Distributed Systems, Microsoft Distinguished Engineer and Kubernetes co-creator Brendan Burns warns readers not to confuse serverless with FaaS. While it is true that FaaS implementations do obscure the host server’s identity and configuration from the customer, it is not only possible but, in certain circumstances, desirable for an organization to run a FaaS service on servers that it not only manages explicitly but optimizes especially for FaaS. FaaS may appear serverless from one angle, but that does not make the two synonymous.
A truly serverless programming model and a serverless distribution model, some advocates are saying, would not be bound to, of all things, a single server — or, any single service provider.
Serverless is supposed to be an open-ended cloud workshop. Optimistically, it should incite developers to build, for instance, services that respond to commands, such as “Call up my grocery store and have them hold two K.C. strip steaks for me.” The process of building such a service would leverage already written code that handles some of the steps involved.
The developer-oriented serverless ideal paints a picture of a world where a software developer specifies the elements necessary to represent a task, and the network responds by providing some of those elements. Suddenly the data center is transformed into something more like a kitchen.
Whereas a chef may have a wealth of resources open to her, most everyday folks cook with vegetables that come from their refrigerators, not their gardens. That doesn’t make gardens somehow bad or wrong, but it does mean a whole lot more people can cook. In practice, “serverlessness” (a term I invented) is more of a variable. Some methodologies are more serverless than others.
You may have already surmised that a distributed application hosted in the cloud is hosted by servers. But servers in this context are places in a network. So a distributed application may rely on software resources that exist in places other than the host from which it was accessed. Imagine a system where “place” is irrelevant — where every function and every resource that the source code uses appears to be “here.” Imagine, instead of a vastly dispersed internet, one big location where everything is equally accessible.
At the recent CloudNativeCon Europe event in Copenhagen, Google Cloud Platform developer advocate Kelsey Hightower presented a common model of a FaaS task: One that would translate a text file from English to Danish, perhaps by way of a machine learning API.
For the task to fit the model, the user would never need to see the English-language file. Once the text file became available to the server’s object store, triggers attached to that store would fire an internal function, which would in turn set the translation process in motion.
An event procedure does not have to be explicitly called, which means it doesn’t have to be addressed — a process which often involves identifying its location, which includes its server. If it’s set up to respond to an event, it can be left unguarded like a mousetrap or a DVR.
In distributed applications, services are typically identified by their location — specifically, by a URI that begins with http:// or https://. Naturally, the part of the URI that follows the HTTP protocol identifier is the primary domain, which is essentially the server’s address. Since an event-driven program is triggered passively, that address never has to be passed, so the server never needs to be looked up. And in that sense, the code becomes “serverless.”
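The event procedure described above can be sketched in a few lines. This is a minimal illustration, assuming the handler shape AWS Lambda uses for object-store triggers; the translate_text helper is a hypothetical stand-in for a real machine-translation API:

```python
# Sketch of an event-driven handler: it is never addressed by URL;
# the platform invokes it when an object lands in the store.

def translate_text(text: str) -> str:
    # Hypothetical stand-in for a real translation API call.
    return "[da] " + text

def handler(event, context=None):
    """Triggered by an object-created event; no caller ever looks up
    this function's address, so no server is ever named."""
    results = []
    for record in event.get("Records", []):
        key = record["s3"]["object"]["key"]
        # In a real deployment the object body would be fetched here;
        # translating the key alone is enough to show the flow.
        results.append(translate_text(key))
    return results

# A minimal event of the shape an object store emits:
sample_event = {"Records": [{"s3": {"object": {"key": "hello.txt"}}}]}
print(handler(sample_event))
```

Notice that nothing in the handler names a host: the event arrives, the function runs, and the result is handed back to whatever the platform wired it to.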
“This is beautiful — this is like the dream come true!” said Google’s Hightower. He presented his audience with three choices: “You can destroy all your code; you could do no code, but that’s a little extreme, or you could do this serverless thing. This is how it’s sold. Anyone see the problem with this?”
After a few hints, Hightower revealed what he characterizes as a flaw in the model: Its dependence upon a single FaaS framework, operating within a single context, within the constraints of a single cloud provider. The reason you don’t see so many servers in such a context is that you’re inside, from its perspective, the only one there is.
Hightower is an advocate for an emerging framework called CloudEvents, being developed under the auspices of the Cloud Native Computing Foundation (CNCF, also responsible for Kubernetes).
Its goal is to come up with a common method for registering an event — an occurrence that hosts should watch for, even if it emerges from elsewhere, on some other system or platform. This way, an activity or method on one cloud platform can trigger a process on another. For instance, a document stored in Amazon’s S3 storage could trigger a translation process into Danish on Google Cloud.
“The goal here is to define a few things,” he told the audience. “Number one, the producer owns the type of the event. We’re not going to try to standardize every event that can be emitted from every system. That is a fool’s errand.
What we want to do, though, is maybe standardize the envelope in which we capture that event — a content type, [and] what’s in the body. And then we need to have some decision, and one of those decisions so far is, maybe we can use HTTP to transport this between different systems.”
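What such an envelope might look like can be sketched in Python. The field names here follow the general shape of the CloudEvents draft (a spec version, a producer-owned type, a source URI, an ID, a content type, and an opaque body), but the event type and source values are purely illustrative:

```python
import json
import uuid

def make_cloud_event(event_type: str, source: str, data: dict) -> str:
    """Wrap a producer-owned payload in a CloudEvents-style envelope.
    Only the envelope is standardized; 'type' and 'data' belong to
    the producer, exactly as Hightower describes."""
    envelope = {
        "specversion": "1.0",              # version of the envelope format
        "type": event_type,                # producer-defined event name
        "source": source,                  # URI identifying the producer
        "id": str(uuid.uuid4()),           # unique per event
        "datacontenttype": "application/json",
        "data": data,                      # opaque body, not standardized
    }
    return json.dumps(envelope)

# Hypothetical example: an object-stored event crossing clouds over HTTP.
evt = make_cloud_event("com.example.object.stored",
                       "/storage/bucket-a", {"key": "doc.txt"})
print(evt)
```

A consumer on another platform needs to understand only the envelope fields to route the event; what is inside "data" remains a contract between producer and consumer.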
A bit of background for what Hightower’s talking about here: The earliest attempts at distributed systems — among them, DCOM and CORBA — imposed some type of centralized regimen where the context of jobs being processed was resolved at a high level by some mutually agreed-upon authority. Something was in charge. This would be the opposite of the serverless ideal; this would ensure that there’s always a principal host at the top of the food chain.
This concept does not work at large scale, because that host would need some kind of all-encompassing directory of contexts, like Windows’ System Registry, to specify what each type of data meant, and to whom it would belong. That type of authority is just fine if you happen to be the maker of a platform that wants to be the only cloud in town.
The more successful distributed systems that have come to light since the CORBA and DCOM days have recognized that there are ways to resolve the mutual context problem at or near runtime, through a kind of negotiation protocol.
But that might not be the type of framework that developers in the field, like Senacor’s Schmitz, would like to see. From his perspective and experience, one of the main benefits of serverless computing as he practices it is the promise of the lack of a framework or protocol for these types of inter-cloud communications. In fact, the very presence of such a framework would imply that there were entities that need to communicate at all — in effect, servers.
“We all love frameworks, runtimes, and tools. And there are many,” Schmitz told his audience. “There are things like Serverless [Framework], which abstracts away Lambda. There are things like Chalice, which does something similar. There’s serverless Express, where you can wrap an existing application.
“Ye-u-u-gh,” he uttered, in a single syllable, like a brown bear uncovering an empty dumpster. “We don’t need that. Really, you do not need a framework to work with AWS. They have an SDK. Apply sane practices, and you will be fine.”
Schmitz conceded that staying within the AWS Lambda paradigm does result in the production of code that is somewhat monolithic and inflexible, difficult if not impossible to scale, and a bear to secure properly. In exchange for these concessions, he said, Lambda gives the developer instantaneous deployment, code that is simple enough to produce, and a learning curve that is not very steep at all.
Schmitz and Hightower are on opposite sides of the evolutionary path of serverless computing in the data center. Throughout the history of this industry, simplification and distribution have stared each other down across this moving barricade.
It has been the goal of the DevOps movement to break impasses like this one and to incite coordination between software developers and network operators to work together toward a mutual solution. One of serverless advocates’ stated goals has been to devise the means to automate such processes as conformance, handshaking, security, and scalability without all that cumbersome human interaction.
The end result should be that the manual processes of provisioning resources elsewhere in the cloud — processes that are susceptible to human error — are replaced with routines that take place in the background, so discreetly that the developer can ignore that the server is even there. And since the end user shouldn’t have to care either, it may as well be truly serverless.
Serverless architectures, they insist, should free the developer from having to be concerned with the details of the systems that host her software — to make the Ops part of DevOps irrelevant to the Dev part. So doesn’t serverless computing work against DevOps?
“There is no doubt that, as you move to higher levels of abstraction of platforms, there are operational burdens that go away,” responded Nigel Kersten, chief technical strategist for infrastructure automation provider Puppet.
“You adopt virtualization, [and] a lot of your people don’t need to care as much about their metal. You adopt infrastructure-as-a-service in the cloud, [and] you’re not needing to worry about the hypervisors any more. You adopt a PaaS, and there are other things that essentially go away. All become ‘smaller teams’ problems.
“You adopt serverless, and for developers to be successful in developing and architecting applications that work on these platforms,” Kersten continued, “they also have to learn more of the operational burden.
And it may be different to your traditional sysadmin who is racking and stacking hardware, and having to understand disk speed and things like that, but the idea that developers get to operate in a pure bubble and not actually think about the operational burden at all is completely deluded.
It just isn’t how I’m seeing any of the successful serverless deployments work. The successful ones are developers who have some operational expertise, have some idea of what it’s like to actually manage things in production because they’re still having to do things.”
The development patterns Kersten sees in the serverless field, he told ZDNet, are only now emerging, as evolutionary paths bunch themselves up against the edges of this proverbial bubble.
The new logic is required to resolve the adaptability burdens facing FaaS-optimized code, once it becomes encumbered by the stress of customer demand at large scale. Configuration management systems on the back end can only go so far. The simple act of updating a function requires the very type of A/B comparisons against older versions that a serverless context, with its lack of contextual boundaries, would seek to abolish.
There’s also the issue of the deployment pipeline. In organizations that practice continuous integration and continuous delivery (CI/CD), the pipeline is the system of testing and quality control each code component receives before it’s released to production for consumer use. The very notion of staging implies compartmentalization — again, against the serverless ideal of homogeneity.
“I still think there need to be test environments; there still need to be staging environments,” argued JP Morgenthal, CTO for application services at DXC Technology. “And I’m still of the firm belief that somebody should be responsible for validating something moving into production.
“I know there are some schools of thought that say, it’s okay for the developer to push directly into production. Netflix does that,” Morgenthal told ZDNet. “Somebody not getting their movies, sure, that’s a bad thing because you want customers to be happy.
But it’s a lot different when you let somebody issue a new function inside of a banking application without appropriate validation at multiple levels — security, ethics, governance — before that code gets released.
That is still DevOps because that still has to go from the developer developing, deploying, in a test environment, to somebody testing it and ensuring that those things hold, before it can go the rest of the way in the pipeline into production deployment.”
Giving developers the appearance of operating in a “pure bubble” — a cushioned, comfy, safe haven where all is provided for them — and giving these same people a way to integrate themselves and their roles with everyone else in IT, seem to be two gifts for competing holidays.
Sure, we may yet devise new automated methods to achieve compliance and security that developers can comfortably ignore. But even then, the pure bubble of serverlessness could end up serving as a kind of temporary refuge, a virtual closed-door office for some developers to conjure their code without interference from the networked world outside.
That may work for some. Yet in such circumstances, it’ll be difficult for employers, and the folks whose jobs are to evaluate developers’ work, to perceive the serverless architectural model as anything other than a coping mechanism.

An article by ZDNet