Serverless applications utilize multiple cloud services and consist of many moving parts, so they are hard to manage without employing an infrastructure as code (IaC) approach.
This workshop teaches architectural best practices for building production-ready, future-proof serverless applications on AWS – optimized for rapid application delivery and minimal maintenance overhead.
Approaching problems with a serverless first mindset means rethinking, re-architecting, and rethinking again. Let's build a model for thinking serverless and practice applying it. Serverless isn’t just FaaS, it isn’t just about the cloud. We'll put our concept of serverless architectures to the test with a mindset to match.
In serverless architectures, controlling the way requests and responses are routed through the network can be a powerful way to enhance the security and performance of your application. HTTP has been gradually adding lots of new and exotic headers, and more are on the way. Learn about current best practices with Vary, Link, Content-Security-Policy, Referrer-Policy, Client-Hints, Clear-Site-Data and Alt-Svc, upcoming features such as Feature-Policy, and proposals like Variants, Early-Hints and Origin-Policy. HTTP gives you incredibly powerful control over many aspects of how a browser will process the page, and is often a more effective or more secure option than trying to achieve the same effect with tags or script in the page.
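The header-based approach above can be sketched in a few lines. This is a minimal illustration, not a recommended policy: the header names are real HTTP response headers, but the values shown are assumed defaults you would tune per application.

```javascript
// Sketch: attach a baseline set of security-related headers to a serverless
// response object. The policy values here are illustrative assumptions.
function withSecurityHeaders(headers = {}) {
  return {
    ...headers,
    // Restrict where the page may load resources from.
    'Content-Security-Policy': "default-src 'self'",
    // Limit how much referrer information leaks to other origins.
    'Referrer-Policy': 'strict-origin-when-cross-origin',
    // Tell caches which request headers affect the response representation.
    'Vary': 'Accept-Encoding',
  };
}

const response = {
  statusCode: 200,
  headers: withSecurityHeaders({ 'Content-Type': 'text/html' }),
};
console.log(response.headers['Content-Security-Policy']);
```

Setting these once at the routing layer means every page gets the policy, instead of hoping each page's markup does the same job with tags or script.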
Let me walk you through a few concepts that make serverless platforms react the way they do, from containers and sandboxing in Linux up to packaging and orchestration. Join me in this serverless internals talk to dig deep into the infrastructure and understand it better, followed by live examples and interesting experiments.
Adopting serverless sounds great to a lot of people, until you tell them that they might need to use a NoSQL database. That usually becomes one of the biggest pain points when adopting serverless.
It has become essential for businesses to protect their applications, services and customer data from attackers. If you want to stay competitive, knowing how to efficiently and easily apply security and auth while being aware of the most common pitfalls is key in today’s serverless world. Traditional machine-to-machine auth approaches, where you can rely on a stateful environment, fall short in a modern serverless and thus stateless world. After a short recap of some auth fundamentals, you’ll learn how to efficiently apply authentication to Azure Functions without compromising security – using an external Identity Provider like Auth0, OAuth 2, JWT, the secrets management system Azure Key Vault, Azure Managed Identities and Cloudflare Workers.
3:32 am: PagerDuty wakes you up, DynamoDB is throttling. You open CloudWatch: 542 issues in the last hour.
In this session, we will share new methodologies for troubleshooting complex serverless environments, based on real-world experience.
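A common first response to the throttling scenario above is retrying with exponential backoff and jitter. The sketch below is illustrative only: the error name, retry count and delays are assumptions (the AWS SDKs implement a version of this for you).

```javascript
// Retry a throttled async call with exponential backoff and full jitter.
// Hypothetical helper: error detection and limits are illustrative.
async function withBackoff(fn, { retries = 5, baseMs = 50 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (err.name !== 'ThrottlingError' || attempt >= retries) throw err;
      // Full jitter: random delay in [0, base * 2^attempt).
      const delay = Math.random() * baseMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

Backoff smooths out a burst against a throttled table, but it is a mitigation, not a diagnosis – finding *why* 542 issues appeared in an hour is where real troubleshooting methodology comes in.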
APIs originally evolved from the messaging systems that emerged with the proliferation of distributed systems. Bypassing the normal payload of application interactions meant much faster performance when distance was a factor.
Distributed systems are now the norm rather than the exception and performance standards are higher than ever. Ensuring fast, reliable API systems in today's competitive landscape requires the staple of the distributed system: caching. Come and see how caching API responses can increase your API performance and reduce infrastructural requirements.
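The caching idea can be sketched as a tiny in-memory TTL cache for API responses. This is a minimal illustration with assumed names and a made-up 60-second default; a real deployment would typically use a shared cache (CDN, API gateway cache, or Redis) rather than per-instance memory.

```javascript
// Minimal TTL cache sketch. The clock is injectable so expiry is testable.
function makeCache(ttlMs = 60000, now = Date.now) {
  const entries = new Map();
  return {
    get(key) {
      const hit = entries.get(key);
      // Expired or missing entries behave identically: a cache miss.
      if (!hit || now() > hit.expires) { entries.delete(key); return undefined; }
      return hit.value;
    },
    set(key, value) {
      entries.set(key, { value, expires: now() + ttlMs });
    },
  };
}
```

Every cache hit is a backend request you did not pay for and a network round trip your caller did not wait on – which is precisely how caching both raises API performance and reduces infrastructural requirements.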
One of the critical components of any startup company or pilot project is the ability to “fail fast” – to rapidly develop and test ideas, making adjustments as necessary. Learn how one startup team leverages serverless architecture to get app ideas off the ground in hours instead of weeks, greatly reducing the cost of failure and experimentation. You'll learn how to launch an app quickly, add features orthogonally, integrate with 3rd party apps in minutes and control operating costs.
Pay-per-use – we all like it, right? It’s one of the core principles of serverless, but it can also be a challenge. How can we plan our monthly budget if we don’t know what our bill is going to look like? What are we really spending our money on? And, of course: what needs to be monitored? Pay-per-use is one of the major drivers of serverless adoption. Small startups love it because their monthly bill is almost zero. Large organizations are attracted to improving their IT spending when the old servers have very low utilization and are mostly idle. While it sounds promising, it also produces a massive challenge – paying per use means that you don’t know how much you are going to pay, because most of us don’t know exactly how much we are going to use. In addition, new and unique challenges arise – a bug in the code can suddenly lead to a very high cloud bill, and an external API with a very slow response can leave us paying for the entire wait.
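A back-of-the-envelope cost model shows why a slow dependency inflates the bill. The rates below are assumptions for illustration (roughly in line with published AWS Lambda pricing at the time of writing, but check your provider); the function name is hypothetical.

```javascript
// Estimate a monthly pay-per-use bill from invocations, duration and memory.
function estimateMonthlyCost({ invocations, avgDurationMs, memoryMb }) {
  const PRICE_PER_REQUEST = 0.20 / 1e6;      // $ per invocation (assumed rate)
  const PRICE_PER_GB_SECOND = 0.0000166667;  // $ per GB-second (assumed rate)
  const gbSeconds = invocations * (avgDurationMs / 1000) * (memoryMb / 1024);
  return invocations * PRICE_PER_REQUEST + gbSeconds * PRICE_PER_GB_SECOND;
}

// Same traffic, but an external API stretches each call from 200 ms to 2 s:
const fast = estimateMonthlyCost({ invocations: 1e7, avgDurationMs: 200, memoryMb: 512 });
const slow = estimateMonthlyCost({ invocations: 1e7, avgDurationMs: 2000, memoryMb: 512 });
console.log(fast.toFixed(2), slow.toFixed(2));
```

Because the compute term scales linearly with duration, a 10x slower upstream roughly multiplies the compute portion of the bill by 10 – the "paying for the entire wait" problem in numbers.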
Knative is a Kubernetes-based platform that comes with a set of building blocks to build, deploy and manage modern serverless workloads. Knative consists of three major areas: Build, Serving and Eventing. The session gives you an introduction to the Knative Eventing component and walks you through an end-to-end demo, showing the lifecycle of event-driven workloads on Knative. You'll see integration of 3rd-party events from systems like Apache Kafka, and how your application can be hooked up to this firehose so your service can process incoming events, leveraging built-in Knative features to provide request-driven compute so that services can autoscale, including down to zero, depending on the actual throughput. If you are interested in learning about an event-driven serverless developer experience on Kubernetes, this session is for you!
Contrary to the current narrative, serverless is not at its core a technology—it's an approach to development that delivers true business value. Serverless is a mindset. It's the direction modern software teams choose to go in rather than a destination they stop at. In this talk, Stackery’s Ecosystems Director, Farrah Campbell, will define what the “serverless state of mind” means to her, her road to get there, and how she uses this mindset as a compass to guide decision-making for both personal and professional goals. From unlocking a better development workflow to furthering career growth and satisfaction, we can all benefit from adopting a serverless state of mind.
Serverless applications are the epitome of highly distributed microservices applications. Execution happens everywhere – both inside and outside the serverless compute environment. For example, your functions could be triggered by an external service, then execute some code within AWS Lambda, then send a request over to a database, which then requires AWS Lambda to perform an update in a second data store. You might be able to predict and design for certain troublesome issues but there are many, many more that you probably will not be able to easily plan for. How do you build a resilient system under these highly distributed circumstances? The answer is chaos engineering. Join us as we walk through:
APIs are the food of the digital world. They pave the way to new business models through digital services. But APIs are just a technical enabler for an evolutionary end-to-end architecture. They are part of a bigger journey towards modernization strategies and can unlock further economic and agility benefits through a fully implemented API Economy. This session focuses on how APIs can foster this initiative, embedded in the bigger picture of integration development using cloud-native architectures.
As companies are adopting serverless architectures and moving away from monolithic and microservice-based deployments, they realise that the challenge lies not only in the rewrite of an old application, but also in the shift towards a new way of thinking. We see many serverless architecture patterns today, such as function chaining, function chaining with rollback (for transactions), async HTTP, fan-out and more. We also have a number of tools on the market that ease application development using serverless, of which Apache OpenWhisk (via action chaining or using function composites) and AWS Step Functions are some of the more popular. In this talk, we will present a new alternative way of building serverless applications based on the orchestration of typed functions, using the probabilistic inference programming paradigm. Inference-based programming brings together the best of the current modelling approaches: the expressiveness and simplicity of decision trees, the strong debugging capabilities of state machines, the scalability and flexibility of flow-based programming, and more expressive logic than forward-chaining approaches. The talk will include a live demo of how to use probabilistic inference programming for a complex IoT application.
Managed services have become a core part of modern cloud applications. Utilizing existing services via APIs gives developers great flexibility and velocity. However, significant reliance on these services, over which you have no control, puts your application’s performance and costs at risk. In recent years, managed SaaS services, usually consumed via APIs, have become extremely popular. They also provide developers and system architects with ways to build more robust applications. With the growth of serverless applications, APIs have become even more useful, due to the limited resources and running time of serverless functions, which sometimes force you to pick an existing service. However, these services have their own behaviour, running time and latency, which in many cases greatly affect the overall performance of your own applications. Moreover, in the pay-per-use environments so many of us favour these days, the implications can be huge – slowness in a managed service can leave you with an inflated cloud bill.
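One common defence is to cap how long you wait on a managed service, so a slow upstream cannot make you pay for the entire hang. The sketch below is illustrative: the helper name and fallback strategy are assumptions, and real code would also need to decide what to do with the abandoned call.

```javascript
// Race a slow dependency against a timeout; return a fallback if it loses.
function withTimeout(promise, ms, fallback) {
  let timer;
  const timeout = new Promise((resolve) => {
    timer = setTimeout(() => resolve(fallback), ms);
  });
  // Whichever settles first wins; always clear the timer afterwards.
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}
```

In a pay-per-duration environment this bounds the cost of a single slow API call to the timeout you chose, instead of the upstream's worst-case latency.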
In this session you will learn about the benefits of serverless for startups and why Laserhub sees a lot of potential in adopting the Serverless Framework for early use cases. By limiting the scope of serverless, we balanced risk and value add, and entered a new world of development and DevOps within two weeks.
Come see how easy it can be to use Google Firebase to take your app idea from concept to production. In this session, you will build your own messaging application, start to finish, with support for images, markdown and connecting with friends. While building this web app, you will learn about many Firebase features, including Firestore, Cloud Functions, Cloud Storage, Hosting, Authentication, security rules, and the client and admin SDKs. The code will be in JavaScript and Node.js from a git repository, so please be prepared to start coding. You will also need a Google account to sign in to Firebase.
As an exploratory tester I get a good look at the ways we build the client and the backend in a product shifting heavily into the world of serverless. We’ve gone through the premature adoption of technologies, scaled to large numbers and figured out how to own things across multiple locations and teams in an internal open source model. My perspective is that of filters: how to apply a set of perspectives that continuously improves things by breaking illusions that change as the technology set around us changes. In this talk, we look at learning to apply those filters to learn about what we are building and to know where we are in a more holistic way. There’s a tester perspective that we all can learn. Join me in picking up a filter that could be useful for you. My way of looking at testing is one of TOAD = Testing, Observability and DevOps. They are all about feedback that we need smart ways of getting.
Exploratory Testing is a skilled multidisciplinary style of testing. Many have learned to apply it on user interfaces that naturally speak to testers as their external imagination. Yet with systems of today, it is important we move that skill of smart thinking with external imagination to interfaces hidden from users: public and private APIs. How can you use exploratory testing on something that does not have a GUI? Let’s shape up our skills of exploring both the functional and parafunctional aspects of a system through its APIs in their operating environments, without forgetting the developer experience of having to maintain and troubleshoot these systems. Let’s learn to be intentional with our APIs, instead of being accidental – through delivering relevant, timely feedback. Intertwining test automation and exploration, we include considerations of the best for today and for the future. For great testing bringing value now as well as when we are not around, we need to be great at testing – uncovering relevant information – and programming – building maintainable test systems. At the core of all of this is learning. What we lack in a set of skills, we can compensate through collaboration.
Serverless is fun and easy. But what about monitoring? Is there a monitoring monster lurking around the corner? I'll admit, there are tons of options to tackle the monster. But what features does AWS offer? And more importantly, how can you best use them to make sure you aren’t called out of bed at night? In this talk, I’ll speak from the heart and share my own experience from using serverless in production for more than a year.
Using the tools and best practices, you’ll tackle the monster no doubt. I’ll do you one better. There won't be a monster, you'll have a new best friend!
Build a fully managed serverless CI/CD solution using AWS services, enabling multiple development teams within an organization to collaborate securely and efficiently on serverless application deployments. AWS services such as Amazon Simple Storage Service (Amazon S3), AWS CodeCommit, AWS CodeBuild, and AWS CodeDeploy provide artifact storage, automated testing, builds, and deployment for serverless applications.
One of the advantages of building on serverless is the drastic reduction in development time and time to market. This session will show you how to build a powerful recommendation engine using image recognition technology and run it on serverless in 72 hours. There will be a demo session. This talk is rated level 200-300 with a target audience of engineers, architects, and developers and assumes you have some knowledge of Amazon Web Services.