Serverless applications utilize multiple cloud services and consist of many moving parts, so they are hard to manage without employing an infrastructure as code (IaC) approach.
During the workshop we will go through three IaC tools: Azure ARM Templates, Terraform, and Pulumi. We will learn the pros and cons of different approaches and see how coding and infrastructure can be blended together in the serverless world. The workshop content uses Azure cloud but the concepts are mostly applicable to AWS and GCP.
This workshop is designed to teach you AWS architectural best practices for building production-ready and future-proof serverless applications – optimized for high-speed app delivery and the lowest maintenance overhead.
APIs have become an essential ingredient of Digital Transformation and other initiatives to modernize IT and organizations. However, while APIs are an essential ingredient, they are not everything you need, and thus only focusing on APIs may lead to disappointment when expectations are running too high.
In short, APIs are necessary for improving organizational fitness, but they are not sufficient.
The hard work of changing an organization also has to happen in areas other than the technical infrastructure. After looking at some common patterns of "acute API disillusionment", we will look at how a structured approach to transformation initiatives can help ensure that all necessary parts of the transformation process are tackled simultaneously, or that, in the case of a more serial approach, expectations are adjusted accordingly.
APIs are an essential ingredient in modern IT architectures, but understanding their possibilities and limitations is important to get the most out of your API investments.
Approaching problems with a serverless first mindset means rethinking, re-architecting, and rethinking again. Let's build a model for thinking serverless and practice applying it. Serverless isn’t just FaaS, it isn’t just about the cloud. We'll put our concept of serverless architectures to the test with a mindset to match.
In serverless architectures, controlling the way requests and responses are routed through the network can be a powerful way to enhance the security and performance of your application. HTTP has been gradually adding lots of new and exotic headers, and more are on the way. Learn about current best practices with Vary, Link, Content-Security-Policy, Referrer-Policy, Client-Hints, Clear-Site-Data and Alt-Svc, upcoming features such as Feature-Policy, and proposals like Variants, Early-Hints and Origin-Policy. HTTP gives you incredibly powerful control over many aspects of how a browser will process the page, and is often a more effective or more secure option than trying to achieve the same effect with tags or script in the page.
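As a minimal sketch of the kind of response-header tuning the talk covers, the helper below assembles several of the headers named above. The header names are real HTTP headers; the policy values are illustrative examples only, not recommendations for any particular site.

```python
def security_headers() -> dict:
    """Illustrative set of HTTP response headers from the talk.

    Values below are example policies (assumptions), chosen only to
    show each header's shape.
    """
    return {
        # Restrict where scripts and other resources may load from.
        "Content-Security-Policy": "default-src 'self'; script-src 'self'",
        # Send only the origin when navigating cross-origin.
        "Referrer-Policy": "strict-origin-when-cross-origin",
        # Tell caches which request headers affect this response.
        "Vary": "Accept-Encoding, Accept",
        # Preload a critical asset (hypothetical path).
        "Link": "</static/app.css>; rel=preload; as=style",
        # Advertise an alternative service, e.g. HTTP/3 on port 443.
        "Alt-Svc": 'h3=":443"; ma=86400',
    }
```

In a serverless setup these headers would typically be attached at the edge (CDN or API gateway) rather than in every function.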
Cold starts are definitely a hot topic when it comes to serverless, on any FaaS platform. Suggested remediations, such as tricks to keep functions warm, are all around. But why do cold starts happen at all? Is it a broken design in the infrastructure, or just a very hard thing to implement?
Let me walk you through a few concepts that make serverless platforms react the way they do, from containers and sandboxing in Linux up to packaging and orchestration. Join me in this serverless internals talk to dig deep into the infrastructure and understand more, with live examples and interesting experiments along the way.
Adopting serverless sounds great to a lot of people, until you tell them that they might need to use a NoSQL database. That usually becomes one of the biggest pain points when adopting serverless.
In this talk, we will cover how to choose a database for serverless, why DynamoDB is a great choice for most serverless applications hosted on AWS, and what options are out there when DynamoDB is not a good fit. We will also cover good practices and patterns for working with DynamoDB in serverless environments.
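One pattern commonly paired with DynamoDB in serverless apps is single-table design, where composite keys let one query fetch related items. A minimal sketch, with entity names and key shapes that are illustrative assumptions rather than anything prescribed by the talk:

```python
# Single-table key design sketch: a customer and its orders share a
# partition key, so one Query returns the whole aggregate.

def customer_key(customer_id: str) -> dict:
    """Item key for a customer profile record."""
    return {"PK": f"CUSTOMER#{customer_id}", "SK": "PROFILE"}

def order_key(customer_id: str, order_id: str) -> dict:
    """Orders live in the customer's partition, sorted by order id."""
    return {"PK": f"CUSTOMER#{customer_id}", "SK": f"ORDER#{order_id}"}

# With boto3, the same keys would drive a query such as:
#   table.query(KeyConditionExpression=Key("PK").eq("CUSTOMER#42"))
```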
It has become essential for businesses to protect their applications, services and customer data from attackers. If you want to stay competitive, knowing how to efficiently and easily apply security and auth while being aware of the most common pitfalls is key in today’s serverless world. Traditional machine-to-machine auth approaches that rely on a stateful environment fall short in a modern serverless, and thus stateless, world. After a short recap of some auth fundamentals, you’ll learn how to efficiently apply authentication to Azure Functions without compromising security – using an external Identity Provider like Auth0, OAuth 2, JWT, the secrets management system Azure Key Vault, Azure Managed Identities and Cloudflare Workers.
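To illustrate the JWT mechanics underlying this flow, here is a stdlib-only HS256 sign/verify sketch. It is deliberately minimal: a production setup like the one in the talk would use RS256 tokens issued by an IdP such as Auth0, validated with a vetted JWT library, with secrets held in Azure Key Vault rather than in code.

```python
import base64, hashlib, hmac, json

def _b64url(data: bytes) -> bytes:
    # JWT uses URL-safe base64 without padding.
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def sign(claims: dict, secret: bytes) -> str:
    """Produce a compact HS256 JWT: header.payload.signature."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps(claims).encode())
    signing_input = header + b"." + payload
    sig = _b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return (signing_input + b"." + sig).decode()

def verify(token: str, secret: bytes) -> dict:
    """Check the HMAC signature and return the claims."""
    signing_input, _, sig = token.rpartition(".")
    expected = _b64url(
        hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    ).decode()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    payload = signing_input.split(".")[1]
    # Restore base64 padding before decoding.
    return json.loads(base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4)))
```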
APIs originally evolved from messaging systems emerging from the proliferation of distributed systems. Bypassing the normal payload of application interactions meant a much faster performance when distance was a factor. Distributed systems are now the norm rather than the exception and performance standards are higher than ever. Ensuring fast, reliable API systems in today's competitive landscape requires the staple of the distributed system: caching. Come and see how caching API responses can increase your API performance and reduce infrastructural requirements.
Serverless – is it yet another buzzword? Is it real? Is it for big corporations? Or is it for everyone? Where can we find answers to such questions? Well, the best way to answer such concerns is to simply talk about your serverless experience and take the audience through the journey you have been through! And that is exactly what this talk is about.
The Shopper Engagement Technology team at LEGO has been busy migrating the legacy monolith eCommerce platform onto a cloud based solution on AWS. This employs serverless and managed services at its core within an agile development process. In this talk I will share the experience of the team, going through some of the architectural patterns and serverless best practices employed during this journey.
One of the critical components of any startup company or pilot project is the ability to “fail fast” – to rapidly develop and test ideas, making adjustments as necessary. Learn how one startup team leverages serverless architecture to get app ideas off the ground in hours instead of weeks, greatly reducing the cost of failure and experimentation. You'll learn how to launch an app quickly, add features orthogonally, integrate with 3rd party apps in minutes and control operating costs.
Pay-per-use – we all like it, right? It’s one of the core principles of serverless, but it can also be a challenge. How can we plan our monthly budget if we don’t know what our bill is going to look like? What are we really spending our money on? And, of course: what needs to be monitored? Pay-per-use is one of the major drivers of serverless adoption. Small startups love it because their monthly bill is almost zero. Large organizations are attracted to improving their IT spending when their old servers have very low utilization and are mostly idle. While it sounds promising, it also produces a massive challenge – paying per use means you don’t know how much you are going to pay, because most of us don’t know exactly how much we are going to use. In addition, new and unique challenges arise – a bug in the code can suddenly lead to a very high cloud bill, and an external API with a very slow response can leave us paying for the entire wait.
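A back-of-the-envelope model makes the last point concrete. The rate constants below are illustrative assumptions in the style of typical FaaS pricing (per GB-second of compute plus per request), not current prices for any provider.

```python
# Assumed, illustrative rates:
GB_SECOND_RATE = 0.0000166667     # per GB-second of compute
REQUEST_RATE = 0.20 / 1_000_000   # per invocation

def monthly_bill(requests: int, avg_ms: float, memory_mb: int) -> float:
    """Estimate a month's FaaS bill from usage figures."""
    gb_seconds = requests * (avg_ms / 1000.0) * (memory_mb / 1024.0)
    return gb_seconds * GB_SECOND_RATE + requests * REQUEST_RATE

# A slow external API that stretches average duration from 100 ms to
# 2 s inflates the compute portion of the bill roughly 20x:
fast = monthly_bill(10_000_000, avg_ms=100, memory_mb=512)
slow = monthly_bill(10_000_000, avg_ms=2000, memory_mb=512)
```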
Knative is a Kubernetes-based platform that provides a set of building blocks to build, deploy and manage modern serverless workloads. Knative consists of three major areas: Build, Serving and Eventing. This session gives you an introduction to the Knative Eventing component and walks you through an end-to-end demo showing the lifecycle of event-driven workloads on Knative. You'll see how 3rd-party events from systems like Apache Kafka are integrated, how your service can be hooked up to this firehose to process incoming events, and how built-in Knative features provide request-driven compute so that services autoscale, including down to zero, depending on actual throughput. If you are interested in an event-driven serverless developer experience on Kubernetes, this session is for you!
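Knative Eventing delivers events to services as CloudEvents over HTTP; in the binary content mode, event attributes travel in `ce-*` headers and the data in the body. A minimal sketch of parsing one such request (the header names follow the CloudEvents HTTP binding; the web framework around this function is left open, and a real server would normalize header case):

```python
import json

def parse_cloudevent(headers: dict, body: bytes) -> dict:
    """Extract CloudEvent attributes and data from an HTTP request."""
    event = {
        key[3:].lower(): value          # e.g. ce-type -> type
        for key, value in headers.items()
        if key.lower().startswith("ce-")
    }
    if headers.get("Content-Type", "").startswith("application/json"):
        event["data"] = json.loads(body)
    return event
```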
Contrary to the current narrative, serverless is not at its core a technology—it's an approach to development that delivers true business value. Serverless is a mindset. It's the direction modern software teams choose to go in rather than a destination they stop at. In this talk, Stackery’s Ecosystems Director, Farrah Campbell, will define what the “serverless state of mind” means to her, her road to get there, and how she uses this mindset as a compass to guide decision-making for both personal and professional goals. From unlocking a better development workflow to furthering career growth and satisfaction, we can all benefit from adopting a serverless state of mind.
Serverless applications are the epitome of highly distributed microservices applications. Execution happens everywhere – both inside and outside the serverless compute environment. For example, your functions could be triggered by an external service, then execute some code within AWS Lambda, then send a request over to a database, which then requires AWS Lambda to perform an update in a second data store. You might be able to predict and design for certain troublesome issues but there are many, many more that you probably will not be able to easily plan for. How do you build a resilient system under these highly distributed circumstances? The answer is chaos engineering. Join us as we walk through:
The unique challenges of building a highly resilient serverless app
Why you need to design for problems you cannot predict and cannot easily test for
How you can use chaos engineering to build a resilient serverless application
How observability platforms like Thundra can help you understand system behavior and prepare for unpredictable failure scenarios
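A fault-injection sketch shows the walk-through's core idea in miniature: wrap a dependency call so it sometimes fails, then verify the caller's fallback actually works. Real chaos experiments use dedicated tooling against controlled environments; this is only an illustration of the principle.

```python
import random

def chaos(failure_rate: float, rng=random.random):
    """Wrap a callable so it raises an injected fault with the given rate."""
    def decorator(fn):
        def wrapper(*args, **kwargs):
            if rng() < failure_rate:
                raise TimeoutError("injected fault")  # simulated outage
            return fn(*args, **kwargs)
        return wrapper
    return decorator

def with_fallback(fn, fallback):
    """The resilience pattern under test: degrade instead of erroring."""
    def wrapper(*args, **kwargs):
        try:
            return fn(*args, **kwargs)
        except TimeoutError:
            return fallback
    return wrapper
```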
APIs are the food of the digital world. They pave the way to new business models through digital services. But APIs are just a technical enabler of an evolutionary end-to-end architecture. They are part of a bigger journey toward modernization strategies and can unlock further economic and agility benefits through a fully implemented API economy. This session focuses on how APIs can foster this initiative, embedded in the bigger picture of integration development using cloud-native architectures.
As companies adopt serverless architectures and move away from monolithic and microservice-based deployments, they realise that the challenge lies not only in rewriting an old application, but also in shifting to a new way of thinking. We see many serverless architecture patterns today, such as function chaining, function chaining with rollback (for transactions), async HTTP, fan-out and more. We also have a number of tools on the market that ease serverless application development, of which Apache OpenWhisk (via action chaining or function composites) and AWS Step Functions are some of the more popular. In this talk, we will present an alternative way of building serverless applications based on the orchestration of typed functions, using the probabilistic inference programming paradigm. Inference-based programming brings together the best of current modelling approaches: the expressiveness and simplicity of decision trees, the debuggability of state machines, the scalability and flexibility of flow-based programming, and logic expressions superior to forward-chaining approaches. The talk will include a live demo of how to use probabilistic inference programming for a complex IoT application.
Managed services have become a main part of modern cloud applications. Utilizing existing services via APIs provides developers with great flexibility and velocity. However, significant reliance on these services, which you have no control over, puts your application’s performance and costs at risk. In recent years, managed SaaS services, usually consumed via APIs, have become extremely popular. They also provide developers and system architects with ways to build more robust applications. With the growth of serverless applications, APIs have become even more useful, due to the limited resources and running time of serverless functions, which sometimes force you to pick an existing service. However, these services have their own behaviour, running time and latency, which in many cases greatly affects the overall performance of your own application. Moreover, in pay-per-use environments the implications can be huge: slowness in a managed service can leave you with an inflated cloud bill.
In this session you will learn about the benefits of serverless for startups and why Laserhub sees a lot of potential in adopting the Serverless Framework for early use cases. By limiting the scope of serverless, we balanced risk and value add, and entered a new world of development and DevOps within two weeks.
This talk is aimed at software engineers with an entrepreneurial mindset who are willing to integrate new technologies, bring new challenges to their team, and constantly balance cost, value and risk. Attendees will learn about a technical solution including the Serverless Framework, infrastructure as code (functions, AWS resources, domains, authentication), API Gateway and message-based decoupling.
As an exploratory tester I get a good look at the ways we build the client and the backend in a product shifting heavily into the world of serverless. We’ve gone through the premature adoption of technologies, scaled to large numbers and figured out how to own things across multiple locations and teams in an internal open source model. My perspective is that of filters: how to apply a set of perspectives that continuously improves things by breaking illusions that change as the technology set around us changes. In this talk, we look at learning to apply those filters to understand what we are building and know where we are in a more holistic way. There’s a tester perspective that we all can learn. Join me in picking up a filter that could be useful for you. My way of looking at testing is TOAD: Testing, Observability and DevOps. They’re all about feedback, which we need smart ways of gathering.
Exploratory testing is a skilled multidisciplinary style of testing. Many have learned to apply it on user interfaces that naturally speak to testers as their external imagination. Yet with today's systems, it is important we move that skill of smart thinking with external imagination to interfaces hidden from users: public and private APIs. How can you use exploratory testing on something that does not have a GUI? Let’s shape up our skills of exploring both the functional and parafunctional aspects of a system through its APIs in their operating environments, without forgetting the developer experience of having to maintain and troubleshoot these systems. Let’s learn to be intentional with our APIs, instead of accidental, by delivering relevant, timely feedback. Intertwining test automation and exploration, we include considerations of what is best for today and for the future. For great testing that brings value now as well as when we are not around, we need to be great at testing – uncovering relevant information – and at programming – building maintainable test systems. At the core of all of this is learning. What we lack in one set of skills, we can compensate for through collaboration.
Serverless is fun and easy. But what about monitoring? Is there a monitoring monster lurking around the corner? I'll admit, there are tons of options to tackle the monster. But what features does AWS offer? And more importantly, how can you best use them to make sure you aren’t called out of bed at night? In this talk, I’ll speak from the heart and share my own experience from using serverless in production for more than a year.
Should you structure your logs? Of course, but why?
How do you set up distributed tracing with X-Ray?
Provide visibility using CloudWatch metrics and dashboards
Get insight into your log data using CloudWatch Logs Insights
Using the tools and best practices, you’ll tackle the monster no doubt. I’ll do you one better. There won't be a monster, you'll have a new best friend!
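The structured-logging point above can be sketched in a few lines: emit one JSON object per line, and tools like CloudWatch Logs Insights can then filter and aggregate on the fields (e.g. `fields @timestamp, duration_ms | filter level = "ERROR"`). The field names here are illustrative.

```python
import json, time

def log(level: str, message: str, **fields) -> str:
    """Emit one JSON log line; extra keyword args become queryable fields."""
    record = {"timestamp": time.time(), "level": level,
              "message": message, **fields}
    line = json.dumps(record)
    print(line)   # on Lambda, stdout is shipped to CloudWatch Logs
    return line
```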
Build a fully managed serverless CI/CD solution using AWS services, enabling multiple development teams within an organization to collaborate securely and efficiently on serverless application deployments. AWS services such as Amazon Simple Storage Service (Amazon S3), AWS CodeCommit, AWS CodeBuild, and AWS CodeDeploy provide artifact storage, automated testing, builds, and deployment for serverless applications.
One of the advantages of building on serverless is the drastic reduction in development time and time to market. This session will show you how to build a powerful recommendation engine using image recognition technology and run it on serverless in 72 hours. There will be a demo session. This talk is rated level 200-300 with a target audience of engineers, architects, and developers and assumes you have some knowledge of Amazon Web Services.