Serverless Architecture Conference Blog

Serverless? Is That Even Possible?

Column: EnterpriseTales

Mar 21, 2023

We often hear the term "serverless" in the media, only to be consoled with the fact that servers still exist; we pay for them, and someone else operates them. Yet even though the browser gives us the ability to be truly "serverless," very few people use it that way. So let's take a look at what is actually possible with modern browser APIs these days.

Where it all began


Application development is an exciting field that has seen many trends in recent years. While we used to develop and compile native applications, web applications have gained traction in the enterprise sector over the years, above all because of constant accessibility, automatic updates, and multi-user capability. However, the requirements placed on web applications have changed.


In the past, it was sufficient for a web application to be “web-capable,” which meant that the server could output HTML and interpret form data.


However, in recent years, more and more requirements have been added, particularly by users. Even in the enterprise environment, applications that are difficult to use are frequently rejected. Users are now accustomed to the system assisting them in every situation so they can achieve their goals as quickly as possible.



The way to the single-page application


One example of this support is the use of many small, simple animations known as micro-interactions, which provide feedback about the action being performed and thus significantly improve the user experience (UX). The context of the page should not be interrupted by a reload, which is why purely server-rendered applications are typically unsuitable for these use cases, and pure CSS solutions are often insufficient.


When applications include complex interactions, such as drag and drop, a significant amount of state must be kept on the client, in the browser, where it is managed by JavaScript. As a result, almost all new developments today are so-called single-page applications (SPAs): JavaScript applications in which the rendering logic is executed entirely in the browser, reducing round trips to the server, and thus latency, to a bare minimum.

Although SPAs give the client a great deal of control over the application, a server (or several servers) is still used to receive, validate, process, and store the data. This server must be maintained, monitored, and paid for, even though the required computing power is already available on the user's machine, where it often sits idle. In recent years, the browser has gained more and more APIs that make this power usable. It thus provides us with a platform that can run entirely without a server: serverless in the literal sense. Viewing the browser solely as a rendering engine for improved UX is now too narrow.

The browser as a serverless platform


As a result, Google in particular has expanded SPAs into so-called progressive web apps (PWAs): applications that run in the browser but can still use native device functions and remain available offline. What looks exciting at first glance brings both opportunities and costs.


Using service workers, it is possible, for example, to cache assets and HTTP responses locally and thus avoid network calls entirely. If we had a server application with exactly one API for exactly one client, as was often the case, changing the interface was straightforward: after a release, all users simply received the new version. With service workers, however, an old version of the application may keep running for days, because a new version, even if already downloaded, does not become active until all tabs have been closed (or a reload is triggered by an appropriate update mechanism). This makes it especially risky to ship rapid API changes in the backend without rendering old cached applications unusable.
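The caching and update behaviour described above might look roughly like the following service worker sketch. The cache name, asset list, and helper function are illustrative assumptions, not a prescribed implementation:

```javascript
// Sketch of a service worker's caching lifecycle (file names illustrative).
// A newly installed worker stays "waiting" until every tab using the old
// one has closed, which is exactly the stale-version risk described above.

const CACHE_NAME = 'app-shell-v2'; // bump on every release
const ASSETS = ['/index.html', '/app.js', '/styles.css'];

// Pure helper: an update is ready when a new worker reached "installed"
// while an old worker still controls the page. Kept separate so it can
// be reasoned about (and tested) outside the browser.
function updateReady(state, hasController) {
  return state === 'installed' && hasController;
}

if (typeof self !== 'undefined' && 'caches' in self) {
  self.addEventListener('install', (event) => {
    // Pre-cache the application shell for offline use.
    event.waitUntil(caches.open(CACHE_NAME).then((c) => c.addAll(ASSETS)));
  });

  self.addEventListener('activate', (event) => {
    // Drop caches from older releases so stale assets disappear.
    event.waitUntil(
      caches.keys().then((keys) =>
        Promise.all(keys.filter((k) => k !== CACHE_NAME).map((k) => caches.delete(k)))
      )
    );
  });

  self.addEventListener('fetch', (event) => {
    // Cache-first strategy: serve from cache, fall back to the network.
    event.respondWith(
      caches.match(event.request).then((hit) => hit || fetch(event.request))
    );
  });
}
```

Bumping `CACHE_NAME` per release is what lets the `activate` step discard outdated assets once the new worker finally takes over.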


So even if we have an application that can be fully cached offline, it is usually still necessary to store data persistently. Traditionally, we do this by sending an HTTP request containing the data to our server. If the server is not reachable, we have a problem. Because very few applications can function without persistent data, the browser offers the option of storing large amounts of data in a database: IndexedDB. IndexedDB exists only on the client and provides a full database system. This means we now have quite complex storage management on the frontend and have to understand logic that used to belong to the backend. Front-end developers must understand and apply concepts such as data migrations, indices, and transactions. A deeper understanding of JavaScript and its single-threaded execution model is required; basic HTML and CSS knowledge is no longer sufficient. It is also necessary to know how to use the APIs available in the browser.
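Migrations, indices, and transactions in IndexedDB might look like this sketch. The database name, store name, and two-step migration are illustrative assumptions:

```javascript
// Sketch of opening a client-side database with versioned migrations.
// onupgradeneeded runs exactly once per version bump, much like schema
// migrations on a backend.

const DB_VERSION = 2;

// Pure helper: which migration steps must run for a given upgrade.
function pendingMigrations(oldVersion, newVersion) {
  const steps = [];
  for (let v = oldVersion + 1; v <= newVersion; v++) steps.push(v);
  return steps;
}

if (typeof indexedDB !== 'undefined') {
  const request = indexedDB.open('documents-db', DB_VERSION);

  request.onupgradeneeded = (event) => {
    const db = request.result;
    for (const step of pendingMigrations(event.oldVersion, DB_VERSION)) {
      if (step === 1) {
        // v1: initial schema with an index for lookups by title.
        const store = db.createObjectStore('documents', { keyPath: 'id' });
        store.createIndex('byTitle', 'title');
      }
      if (step === 2) {
        // v2: add an index to the existing store via the upgrade transaction.
        request.transaction.objectStore('documents')
          .createIndex('byUpdatedAt', 'updatedAt');
      }
    }
  };

  request.onsuccess = () => {
    // All reads and writes go through explicit transactions.
    const tx = request.result.transaction('documents', 'readwrite');
    tx.objectStore('documents').put({ id: 1, title: 'draft', updatedAt: Date.now() });
  };
}
```

A user upgrading straight from version 0 runs both steps in order; a user already on version 1 runs only the second, which is the backend-style migration discipline the text refers to.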


The browser's restriction to JavaScript was especially limiting because many ecosystems in various industries are based on C or C++. For a long time, such code simply could not run in web applications, because only JavaScript was available. The WebAssembly standard has fundamentally changed this: it makes ecosystems beyond npm usable on the web. Many programming languages already support a WebAssembly compilation target (unfortunately, often still experimentally). As a result, the possibilities for the web and the browser have expanded significantly.
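To make this concrete, here is a minimal sketch of loading a WebAssembly module from JavaScript. The byte array below is a hand-assembled module exporting a single function `add(a, b)`; in practice the bytes would come from a compiler (Rust, C/C++, etc.) and be fetched with `WebAssembly.instantiateStreaming`:

```javascript
// A complete, hand-assembled WebAssembly module exporting add(a, b).
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // \0asm, version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // one function of that type
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export it as "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, no locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0, local.get 1, i32.add, end
]);

// Synchronous instantiation is fine for tiny modules; larger ones should
// use WebAssembly.instantiate or instantiateStreaming (both async).
const module = new WebAssembly.Module(wasmBytes);
const instance = new WebAssembly.Instance(module);
const { add } = instance.exports;

console.log(add(2, 3)); // 5
```

The exported function is callable like any JavaScript function, which is what lets non-JavaScript ecosystems plug into a web application.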



Opportunities of a client-side approach


However, as the browser’s functionality grows, the question naturally arises: what is the benefit of moving so much logic from the server to the client in the first place? The answer is straightforward: decentralised applications scale better and work independently. If all of an application’s tasks are performed on the client, the application automatically scales with the number of clients (and with the computing power of the end devices, which keeps increasing anyway).


Take, for example, a computationally intensive conversion process that requires a server with a lot of RAM and CPU. If many of these processes start concurrently, the server quickly becomes overloaded and must scale up; that is, more computing power must be purchased. When the load peak subsides, the server must scale down as quickly as possible to avoid unnecessary costs. One solution is Amazon’s Function-as-a-Service model: a file is uploaded to a storage system such as S3, which automatically generates an event and triggers a predefined processing function. When a large number of events, such as file uploads, arrive at the same time, Amazon handles the scaling of computing power completely. The whole thing is then billed based on runtime, CPU, and RAM.
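Such a function might look like the following sketch of an AWS Lambda handler reacting to S3 upload events. The record fields follow the documented S3 event shape; the logging stand-in for the actual conversion is an assumption:

```javascript
// Sketch of the Function-as-a-Service model: a handler invoked by the
// platform for each batch of S3 upload events. Scaling under many
// parallel uploads is handled entirely by the provider; billing is by
// duration and configured memory/CPU.

// Extract the uploaded objects from an S3 event payload. S3 URL-encodes
// object keys (with '+' for spaces), so we decode them here.
function objectsFromS3Event(event) {
  return (event.Records || []).map((r) => ({
    bucket: r.s3.bucket.name,
    key: decodeURIComponent(r.s3.object.key.replace(/\+/g, ' ')),
  }));
}

async function handler(event) {
  const objects = objectsFromS3Event(event);
  for (const { bucket, key } of objects) {
    // The actual conversion work would run here.
    console.log(`converting s3://${bucket}/${key}`);
  }
  return { processed: objects.length };
}

// Export for the Lambda runtime when running under CommonJS.
if (typeof module !== 'undefined') module.exports = { handler };
```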


A client-side model can be useful in this situation. It simply uses the computing power that is already available on the client for the conversion, free of charge from the operator’s point of view. The same logic that previously ran in a serverless function can now run directly on the client, for example via WebAssembly or JavaScript. The server then merely validates the results, which is computationally far cheaper.
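As a toy illustration of this split (the CSV format and the validation rule are assumptions for the example), the client could perform a CSV-to-JSON conversion while the server only checks the structure of the result:

```javascript
// Runs on the client: the expensive part, converting CSV text into records.
function csvToRecords(csv) {
  const [header, ...rows] = csv.trim().split('\n').map((line) => line.split(','));
  return rows.map((row) =>
    Object.fromEntries(header.map((name, i) => [name, row[i]]))
  );
}

// Runs on the server: cheap structural validation instead of the
// conversion itself.
function isValidResult(records, requiredFields) {
  return Array.isArray(records) &&
    records.every((r) => requiredFields.every((f) => f in r));
}

const records = csvToRecords('id,name\n1,Ada\n2,Alan');
console.log(isValidResult(records, ['id', 'name'])); // true
```

The server never touches the raw input; it only confirms that the much smaller result has the expected shape.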


As is so often the case, such models introduce new issues. The question of how much code may be disclosed naturally arises: the delivered code, whether compiled or even shipped with source maps, is in some form freely accessible to the public. Business-critical logic, such as secret formulas, is therefore unsuitable for a browser-driven model. What is exciting is how the APIs interact. In theory, once the conversion has happened on the client, it is not even necessary to send the result to a server: the browser, as we have seen, already has its own database. The user could convert a file and simply continue working on it via IndexedDB, without the server or data centre even noticing.



But what does all this demonstrate? The modern web, as characterised by PWAs, creates numerous opportunities, such as cost reduction. However, it also makes an already complex world of decisions even more complex. We must ask ourselves: what can and cannot be done without a server? So many new possibilities are emerging, particularly around WebAssembly, that it is difficult to predict which technologies will prevail. Many developers are unaware of what the browser can do today beyond the classic client-server model. Which technology and which model to use, and to what extent, must be decided case by case, but one thing is certain: the browser is truly serverless and thus largely free (operationally) for the operator.

