Where it all began
Application development is an exciting field that has seen many trends in recent years. While native applications used to be developed and compiled, web applications have gained traction in the enterprise sector over the years, driven above all by constant accessibility, automatic updates, and multi-user capability. However, the times have changed for web applications.
In the past, it was sufficient for a web application to be “web-capable,” which meant that the server could output HTML and interpret form data.
In recent years, however, more and more requirements have been added, particularly from users. Even in the enterprise environment, applications that are difficult to use are frequently rejected. Users are now accustomed to the system assisting them in any situation so that they can reach their goals as quickly as possible.
The road to the single-page application
One example of this support is the use of many small, simple animations known as micro-interactions, which give feedback about the action being performed and thus significantly improve the user experience (UX). Naturally, the context of the page should not be interrupted or disrupted by a reload, which is why purely server-rendered applications are typically unsuitable for these use cases, and pure CSS solutions are often insufficient.
The browser as a serverless platform
As a result, Google in particular pushed SPAs further into so-called progressive web apps (PWAs). PWAs are applications that run in the browser but can still use native device functions and even remain available offline. What sounds exciting at first brings both opportunities and costs.
With service workers, for example, it is possible to cache assets and HTTP responses locally and thus avoid network round trips entirely. If we had a server application with exactly one API for exactly one client, as was often the case, changing the interface used to be a minor hassle: after a release, all users simply downloaded the new version. With service workers, however, an outdated version of an application may well keep running for days after a release, because a new version, even once downloaded, does not become active until all tabs have been closed (or a reload is triggered by an appropriate update mechanism). This makes it especially risky to push rapid API changes to the backend without rendering old cached applications unusable.
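This caching and update behaviour can be sketched in a few lines. The cache name, the asset list, and the helper function below are assumptions for illustration, not a production-ready service worker:

```javascript
// A minimal sketch of service-worker caching with a versioned cache.
// CACHE_NAME and ASSETS are assumed conventions for this example.
const CACHE_NAME = 'app-cache-v2'; // bumped on each release
const ASSETS = ['/', '/index.html', '/app.js'];

// Pure helper: given all existing cache names, return the stale ones.
function outdatedCaches(cacheNames) {
  return cacheNames.filter((name) => name !== CACHE_NAME);
}

// Register handlers only when actually running inside a worker scope.
if (typeof self !== 'undefined' && typeof caches !== 'undefined') {
  self.addEventListener('install', (event) => {
    // Pre-cache the application shell. Calling self.skipWaiting() here
    // would activate the new version immediately instead of waiting
    // until every tab running the old version has been closed.
    event.waitUntil(
      caches.open(CACHE_NAME).then((cache) => cache.addAll(ASSETS))
    );
  });

  self.addEventListener('activate', (event) => {
    // Delete caches left behind by previous releases.
    event.waitUntil(
      caches.keys().then((names) =>
        Promise.all(outdatedCaches(names).map((name) => caches.delete(name)))
      )
    );
  });
}
```

Without an explicit `skipWaiting()` (or a reload prompt shown to the user), this is exactly the situation described above: the old cached version keeps serving until every tab is closed.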
Opportunities of a client-side approach
However, as the browser’s functionality grows, the question naturally arises as to why so much logic should be moved from the server to the client in the first place. The answer is straightforward: decentralised applications scale better and function independently. If all of an application’s tasks are performed on the client, the application automatically scales with the number of clients (and with the computing power of the end devices, which keeps increasing anyway).
Take, for example, a computationally intensive conversion process that requires a server with plenty of RAM and CPU power. If many of these processes are started concurrently, the server quickly becomes overloaded and must scale up, meaning that more computing power must be purchased. Once the load peak subsides, the server must scale down as soon as possible to avoid unnecessary costs. One answer to this was Amazon’s Function-as-a-Service model (AWS Lambda). There, a file is uploaded to a storage system such as S3, which automatically generates an event and triggers a predefined processing procedure. When a large number of events, such as file uploads, arrive at the same time, Amazon handles the scaling of computing power entirely. The whole thing is then billed based on execution time and the allocated CPU and memory.
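As a rough sketch, a Lambda-style handler for such an S3 upload event might look as follows. The event shape follows the S3 notification format, but the "conversion" is reduced to reporting the uploaded object, and the bucket and key names are purely illustrative:

```javascript
// Sketch of a Function-as-a-Service handler triggered by S3 uploads.
// A real function would download each object and run the expensive
// conversion; the platform scales concurrent invocations by itself.
async function handler(event) {
  // An S3 notification event carries one record per uploaded object.
  return event.Records.map((record) => {
    const { bucket, object } = record.s3;
    // Stand-in for the actual conversion work.
    return `processed s3://${bucket.name}/${object.key}`;
  });
}
```

Each concurrent upload simply results in another invocation of this function, which is what makes the billing-by-runtime model described above possible.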
As is so often the case, such models also introduce new issues. The question of how much code may be disclosed naturally arises: any code delivered to the client, whether compiled or even shipped with source maps, is in some form freely available to the public. Business-critical logic, such as secret formulas, is thus unsuitable for a browser-driven model. What is exciting is the interplay of the browser APIs. If the conversion has already happened on the client, it is, in theory, not even necessary to send the result to a server at all; the browser, after all, ships with its own database. The user could therefore convert a file and continue working on it via IndexedDB, entirely locally, without the server or data centre ever noticing.
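A minimal sketch of this purely local flow might look as follows. The database name ('conversions'), the store name ('results'), and the trivial uppercase "conversion" are assumptions for illustration:

```javascript
// Stand-in for the computationally intensive conversion described above.
function convert(text) {
  return text.toUpperCase();
}

// Persist a conversion result locally in IndexedDB; the result never
// leaves the device. No-op outside the browser.
function storeResult(key, value) {
  if (typeof indexedDB === 'undefined') return Promise.resolve(null);
  return new Promise((resolve, reject) => {
    const open = indexedDB.open('conversions', 1);
    // Create the object store on first use (or version bump).
    open.onupgradeneeded = () => open.result.createObjectStore('results');
    open.onerror = () => reject(open.error);
    open.onsuccess = () => {
      const tx = open.result.transaction('results', 'readwrite');
      tx.objectStore('results').put(value, key);
      tx.oncomplete = () => resolve(key);
      tx.onerror = () => reject(tx.error);
    };
  });
}
```

Usage in the browser would simply be `storeResult('doc-1', convert(fileText))`, and no HTTP request is ever made.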
But what does this actually show? The modern web, as characterised by PWAs, creates numerous opportunities, such as cost reduction. However, it also makes an already complex world of decisions even more complex and difficult. We must ask ourselves: what can and cannot be done without the server? So many new possibilities are emerging, particularly with WebAssembly, that it is difficult to keep track of which technologies will prevail and which will not. Many developers are unaware of what the browser can do today beyond the classic client-server model. Which technology and which model to use, and to what extent, must always be determined in detail, but one thing is certain: the browser is truly serverless and thus largely free (in operational terms) for the operator.