Serverless functions give us the power to fail faster and more often. Since there are no servers to manage, you can create isolated, production-like environments much more quickly. This makes it easier to develop and run pipelines that are both faster and more stable, so you get the feedback you need much sooner.
However, given the naturally small size of functions, in no time we will have thousands of them running. Managing them will soon slow us down, and we can get lost in translation with our business. We will lose the fast feedback that was promised.
Join us in this workshop where we leverage the Bounded Context pattern from Domain-Driven Design. We will design the boundaries using Event Storming, leverage Test-Driven Development to code our AWS Lambdas, and use GitLab with SAM and CloudFormation for continuous testing and continuous delivery of our application. So if you don’t want to make a mess of all your Lambdas, and want your models to stay connected to the way the business thinks about them, this is the perfect workshop for you!
A "zero to hero" workshop about building serverless systems consisting of functions and containers, interconnecting them using messages and events, monitoring your system, provisioning, and deploying.
Come learn about Knative, an open source collaboration to define the future of serverless on Kubernetes. In this session, I will introduce the Knative project and outline how it unifies function, application, and container developer workflows under a single API to make you more productive and your system easier to manage. I will also use demonstrations to highlight the key benefits developers and operators can expect by adopting platforms based on Knative.
FaaS functions on Kubernetes are increasingly popular. We often talk about the developer-productivity advantages, such as the time it takes to create a useful application from scratch without learning a lot about Kubernetes. In this talk, we will focus on the operational aspects of serverless applications on Kubernetes. What does it take to use serverless functions in production, safely and at scale? This talk covers six specific approaches, patterns, and best practices that you can use with any FaaS/serverless framework. These practices are geared toward improving quality, reducing risk, optimizing costs, and generally moving you closer to production-readiness with serverless systems. We'll discuss: declarative configuration, live-reload for fast feedback, record-replay for testing and debugging, canary deployments to reduce the risk and impact of application changes, monitoring with metrics and tracing, and cost optimization. We'll show how you can make different cost-performance tradeoffs, discuss what the default choices imply, and explain how to tune them. We'll also share a live demo showing how you can easily follow these practices with the open source Fission.io FaaS framework, so you can use them on any infrastructure that runs Kubernetes (whether it’s your datacenter or the public cloud).
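The canary-deployment practice mentioned above can be sketched in a few lines: route a small, configurable fraction of traffic to the new function version and the rest to the stable one. Real FaaS platforms (e.g. Fission or Lambda aliases) implement this for you; the code below is only an illustration of the mechanism, and all function names are hypothetical.

```python
# Toy canary router: send ~canary_weight of requests to the new version.
import random

def make_canary_router(stable, canary, canary_weight=0.05, rng=random.random):
    """Return a handler that routes a fraction of requests to the canary."""
    def route(request):
        handler = canary if rng() < canary_weight else stable
        return handler(request)
    return route

# Hypothetical "versions" of a function, tagged so we can see where a request went.
stable_v1 = lambda req: ("v1", req)
canary_v2 = lambda req: ("v2", req)

# A deterministic rng for demonstration: first draw falls under the 5% weight.
route = make_canary_router(stable_v1, canary_v2, canary_weight=0.05,
                           rng=iter([0.01, 0.50]).__next__)
print(route("a"))  # → ('v2', 'a'): canary
print(route("b"))  # → ('v1', 'b'): stable
```

In a real rollout, `canary_weight` would be raised gradually while monitoring the canary's error rate, and dropped back to zero on regression.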
Serverless is cool and has many advantages. So far, so good. But for many people, the sword of Damocles called "vendor lock-in" still hangs over the serverless world. This need not be the case. I'll show you how you can run serverless apps happily and effectively in a multi-cloud environment. We will discuss vendor-specific APIs and whether and how they can be bypassed. In addition, I will show you how you can use an event gateway to distribute individual events across a public multi-cloud setup and even into your own infrastructure, transparently to the consumers of the API. Vendor lock-in was yesterday. #LessSlides #MoreLive
How do you build a serverless application with the least amount of code? In this talk, I will show you how to architect serverless applications with GraphQL, using AppSync. I will introduce you to the AppSync service and all its components. AppSync is a managed GraphQL service from AWS. It offers a lot of out-of-the-box functionality that is really helpful when building applications, such as authorization and subscriptions, and it connects directly to services like DynamoDB so you don't need to code that interface yourself.
Traditional serverless technologies are cloud-vendor-specific, with limited portability. Come and learn about emerging open source technologies like Kubernetes, Knative, and GitLab Serverless, making it possible to write functions once and run them using compute from your cloud provider.
When we talk about prices, we often talk only about Lambda costs. But we rarely use only Lambda in our applications. We usually have other building blocks like API Gateway, data sources like SNS, SQS, or Kinesis, and a logging service (CloudWatch). We also store our data in S3 or in serverless databases like DynamoDB or, more recently, Aurora Serverless. All these services have their own pricing models, which we have to pay attention to. Moreover, we have to consider the cost of application data transfer. In this talk, we will draw the complete picture of the costs of serverless applications, look at the Total Cost of Ownership, and make some recommendations about when it's worth using serverless and when a traditional approach (EC2, containers) is the better choice.
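The point that Lambda is rarely the dominant cost can be made concrete with a back-of-the-envelope estimate. The sketch below combines per-request and GB-second Lambda billing with API Gateway and DynamoDB request pricing; the prices are illustrative assumptions only, so always check the current AWS pricing pages.

```python
# Rough monthly cost sketch for a small serverless stack.
# All prices are assumptions for illustration, not current AWS list prices.

MILLION = 1_000_000

def lambda_cost(requests, avg_duration_ms, memory_mb,
                price_per_million_req=0.20, price_gb_second=0.0000166667):
    """Lambda bills per request plus GB-seconds of compute."""
    gb_seconds = requests * (avg_duration_ms / 1000) * (memory_mb / 1024)
    return requests / MILLION * price_per_million_req + gb_seconds * price_gb_second

def api_gateway_cost(requests, price_per_million=3.50):
    return requests / MILLION * price_per_million

def dynamodb_cost(reads, writes, price_per_million_reads=0.25,
                  price_per_million_writes=1.25):
    return (reads / MILLION * price_per_million_reads
            + writes / MILLION * price_per_million_writes)

requests = 10 * MILLION  # hypothetical monthly traffic
total = (lambda_cost(requests, avg_duration_ms=120, memory_mb=256)
         + api_gateway_cost(requests)
         + dynamodb_cost(reads=20 * MILLION, writes=5 * MILLION))
print(f"Estimated monthly cost: ${total:.2f}")
```

With these assumed numbers, the Lambda share comes to roughly $7, while API Gateway and DynamoDB together cost several times that, which is exactly the "complete picture" the talk argues for.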
In cloud-native environments in general, and serverless in particular, the cloud provider is responsible for securing the underlying infrastructure, from the data centers all the way up to the container and runtime environment. This relieves the application owner of much of the security burden; however, it also poses many unique challenges when it comes to securing the application layer. In this presentation, we will discuss the most critical challenges related to securing serverless applications, from development to deployment. We will also walk through a live demo of a realistic serverless application that contains several common vulnerabilities, see how they can be exploited by attackers, and learn how to secure them.
“Serverless” is fundamentally changing how software gets developed, shipped, and operated. For many organizations, these changes are going to become a major challenge. Entire disciplines and teams might become obsolete or change substantially. What will change with serverless? What are typical signs of resistance to the change? How can we prepare our organizations and people to unlearn old patterns and behaviors that no longer work in a serverless world? How can organizational *un*learning become institutionalized in companies? Let’s have a look from a knowledge management perspective.
Microservices architectures use an assembly of fine-grained services to deliver functionality. This reduces dependencies between teams, resulting in a faster path from code to production. Serverless is an execution model where server-side logic runs in stateless, event-triggered compute containers that are fully managed by the cloud vendor. It is associated with less management overhead (as there are no servers to maintain) and is cheaper to operate, since you only pay for what you use. While there are similarities and differences between the two architectural styles, both require an application to be composed of a collection of loosely coupled components that implement business capabilities. Thus, it is possible to implement a microservices architecture as a serverless application. This talk elaborates on this topic, covering the pros and cons, details of various deployment patterns, and best practices. It shows how to implement distributed sagas, how code can be structured in both monorepo and multirepo setups, and how to leverage Thrift/Protocol Buffers to manage contracts between functions. It also covers how to avoid getting stuck in an infinite loop, why you need a rollback mechanism (which is needed for sagas anyway), and why you should embrace eventual consistency when designing fault-tolerant business processes.
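The distributed-saga idea mentioned above can be sketched minimally: each step pairs an action with a compensating action, and on failure the compensations of the completed steps run in reverse order. All step names below are hypothetical placeholders; a real saga would call remote services (or other functions) and persist its progress.

```python
# Minimal in-process sketch of a saga with compensating actions.

class SagaAbort(Exception):
    """Raised by a step to signal that the saga must be rolled back."""

def run_saga(steps):
    """Run (action, compensation) pairs in order; on failure, run the
    compensations of the already-completed steps in reverse order."""
    completed = []
    try:
        for action, compensation in steps:
            action()
            completed.append(compensation)
    except SagaAbort:
        for compensation in reversed(completed):
            compensation()
        return False
    return True

log = []

def fail_shipping():
    raise SagaAbort("no courier available")  # simulated step failure

steps = [
    (lambda: log.append("reserve_inventory"), lambda: log.append("release_inventory")),
    (lambda: log.append("charge_card"),       lambda: log.append("refund_card")),
    (fail_shipping,                           lambda: log.append("cancel_shipping")),
]

ok = run_saga(steps)
print(ok)   # False: the saga rolled back
print(log)  # actions first, then compensations in reverse order
```

Note that compensations must themselves be safe to retry (idempotent), since in a real distributed setting the coordinator may crash and re-run them.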
As developers, we know the importance of testing. The move to cloud-native applications, a CI/CD world, and serverless specifically should drive changes in our testing methodologies and mindset. In this session, we’ll discuss how to adapt our testing processes in order to release good, working software quickly and often. We will discuss: Which types of tests are most important? What is the right environment for testing activities? What tools can help us? Let's deliver good software, fast!
Bindings and runtime extensions are the engine of Azure Functions that fuels Azure-based serverless architectures. In this short talk, Christian Weyer will show you in a live coding session how you can build and use your own custom bindings and extensions. Based on a real project's requirements, he will walk you through the internals of Azure Functions and create a fully functional custom binding that implements reusable patterns for custom infrastructure needs.
In this talk, John McCabe will introduce OpenFaaS: Serverless Functions made simple for Docker and Kubernetes. OpenFaaS values simplicity in the developer workflow, operation, and community engagement. We demonstrate how to build functions using the community-provided templates with a live demo before exploring some real-world examples of how and why people are leveraging OpenFaaS in their serverless architectures. We’ll also see OpenFaaS Cloud in action, which brings a streamlined "Git-centric" (aka GitOps) and multi-user workflow to your functions. Finally, we’ll touch on recent work in the community around Istio integration and the use of backend providers such as AWS Fargate. OpenFaaS has won Best Cloud Computing Software 2018 from InfoWorld and has a thriving community with over 155 contributors, 3.5k commits and over 14k stars. https://www.openfaas.com/
One of the critical capabilities of any startup company or pilot project is the ability to “fail fast”: to rapidly develop and test ideas, making adjustments as necessary. Learn how one startup team leverages serverless architecture to get app ideas off the ground in hours instead of weeks, greatly reducing the cost of failure and experimentation. You'll learn how to launch an app quickly, add features orthogonally, integrate with 3rd-party apps in minutes, and control operating costs.
Why does it have to be Serverless versus Microservices? Couldn't it rather be Microservices with Serverless? Based on some of the well-accepted principles of Microservices, we can use Serverless architectures and technologies to build highly focused Microservices, which we might call Nanoservices.
Function-as-a-Service serverless offerings are advertised as the way to build event-driven applications in days or hours, and then scale them up to millions of users – all for several dollars a month. But how does that work in today's cloud? What do we know about the internals of FaaS implementations? Is scalability a solved problem? Join Mikhail on a journey into the depths of how serverless scalability works, what's common and what's different across cloud providers, and why you should care.
Ever felt lost in your microservices architecture, unable to tell which requests go where? This talk will give you a practical guide on how to clarify where requests go and how to visualise them. It will help you make the case for OpenTracing, giving you a short tour of the current tracing options. Moving on, it will offer practical implementation advice, covering common problems such as high load and event-based systems, and dive into the future of tracing with the increasing adoption of FaaS. This is a practical talk aimed at people near the code.
Serverless gives us the power to focus on writing code without worrying about the provisioning and ongoing maintenance of the underlying compute resources. Cloud providers (like AWS) also give us a huge number of managed services that we can stitch together to create incredibly powerful and massively scalable serverless microservices. This talk focuses on common design patterns that can be used to implement serverless microservices in AWS.
Serverless promises on-demand, optimal performance for a fixed cost. Yet we see that current serverless platforms do not always live up to this promise in practice; serverless applications can suffer from platform overhead, unreliable performance, “cold starts”, and more. In this talk, we review optimizations used in popular FaaS platforms, and recent research findings that aim to optimize the trade-off between cost and performance. We will review function reuse, resource pooling, function locality, and predictive scheduling. To illustrate, we will use the open-source, Kubernetes-based Fission FaaS platform to demonstrate how you can achieve specific goals around latency, throughput, resource utilization, and cost. Finally, we take a look at the horizon: what are the current performance challenges and opportunities to make FaaS even faster?
These days, the cloud is often associated with DevOps and microservices-related technologies. Serverless started a few years back, but it is only recently that it has really caught people's interest, perhaps because sufficiently mature tooling became available. Many teams have identified serverless as a much better fit than microservices for improving the efficiency of their existing solutions: it is quite incremental, flexible, and does not necessarily require structural modifications.
In our team, we decided to put these ideas to the test by integrating serverless into our business application platform. This gave us the opportunity to weigh the pros and cons, and to get a better idea of how complex it is to support.
In this talk, we share our very positive journey and demonstrate how serverless can be a great starting point for teams aiming to ramp up on the cloud in general, one step at a time.
Serverless functions (like AWS Lambda, Google Cloud Functions, and Azure Functions) can scale almost infinitely to handle massive workload spikes. While this is great for compute, it can be a MAJOR PROBLEM for downstream resources like RDBMSs, third-party APIs, legacy systems, and even most managed services hosted by your cloud provider. Whether you're maxing out database connections, exceeding API quotas, or simply flooding a system with too many requests at once, serverless functions can effectively DDoS your own components and potentially take down your application. In this talk, we’ll discuss strategies and architectural patterns for building highly resilient serverless applications that mitigate and alleviate pressure on non-serverless downstream systems during peak load.
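One common pattern for the problem described above is "queue plus concurrency-capped workers": instead of letting every invocation hit the database directly, requests are buffered in a queue and drained at a rate the downstream system can tolerate. In AWS this role is typically played by SQS in front of a Lambda with reserved concurrency; the sketch below simulates the idea in-process with a semaphore, and the concurrency limit is an assumed value.

```python
# Simulated spike of 100 "invocations" drained through a concurrency gate.
import queue
import threading
import time

MAX_DOWNSTREAM_CONCURRENCY = 2   # what the fragile downstream can tolerate (assumed)
in_flight = 0
peak_in_flight = 0
lock = threading.Lock()

def call_downstream(item):
    """Stand-in for a call to an RDBMS, legacy system, or rate-limited API."""
    global in_flight, peak_in_flight
    with lock:
        in_flight += 1
        peak_in_flight = max(peak_in_flight, in_flight)
    time.sleep(0.001)            # pretend the downstream call takes time
    with lock:
        in_flight -= 1

def worker(buffer, gate):
    while True:
        item = buffer.get()
        if item is None:         # sentinel: shut the worker down
            break
        with gate:               # cap concurrent downstream calls
            call_downstream(item)
        buffer.task_done()

buffer = queue.Queue()
gate = threading.Semaphore(MAX_DOWNSTREAM_CONCURRENCY)
workers = [threading.Thread(target=worker, args=(buffer, gate)) for _ in range(10)]
for w in workers:
    w.start()
for i in range(100):             # the traffic spike
    buffer.put(i)
buffer.join()
for _ in workers:
    buffer.put(None)
for w in workers:
    w.join()
print("peak concurrent downstream calls:", peak_in_flight)
```

Even with 10 workers racing, the semaphore keeps the downstream system at or below its limit; the spike is absorbed as queue latency rather than as connection exhaustion.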