Serverless Architecture Conference Blog

Integration of GraphQL into modern serverless architectures

Dec 11, 2018

With three years on the market, GraphQL is a mature and established alternative to REST and should be taken into consideration when creating or evolving an API. Applications such as Facebook, Instagram, and XING already use this REST alternative successfully. That is reason enough to give an insight into how GraphQL can be integrated into modern serverless architectures with little effort. Based on the interplay of GraphQL and AWS Lambda, a highly scalable implementation is presented that can be adapted to different architectures and frameworks.

How often do front-end developers get annoyed that a REST call did not deliver all the required data? And how often are back-end developers asked by colleagues to add yet another property to a response because it was missing? Fortunately, thanks to GraphQL these problems are a thing of the past. While REST defines fixed structures for the return value of a call, GraphQL returns only the data the front-end actually asks for. So-called over- and under-fetching is avoided, because the call to the interface names not only the method to be executed but also the desired return structure.

 

Important GraphQL schema terms

GraphQL defines a set of terms that are used in its schema definition. Some of them are discussed in this article; for the others, please refer to the GraphQL documentation:

  • Query – read access to data.
  • Mutation – write access to data. The structure of a mutation within the schema corresponds to that of a query, but begins with the keyword “mutation”.
  • Fragment – object trees can be clearly structured and reused across queries; duplicate code is thus avoided.
  • Type and InputType – objects and their properties are firmly defined in the schema. This information is known to both client and server, so validation can take place as soon as the server starts and whenever a request is executed.
  • Scalar – objects such as date values (DateTime) can be added to GraphQL’s native types such as String, Int, and Boolean and then used directly as data types.
  • Argument/Variable – values can be written directly into the request as arguments or passed separately as variables.
  • Mandatory field – marked in the schema with a trailing “!”.
  • Directive – the returned structure can be filtered conditionally with the built-in @include(if: …) and @skip(if: …) directives (a short example follows this list).
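
To make the last few items more concrete, the following query sketch combines a variable, a reusable fragment, and the @include directive. The query name, fragment name, and the $withOrders variable are illustrative; the types refer to the schema introduced in Listing 2:

query customerOverview($id: Int!, $withOrders: Boolean!) {
  getCustomer(id: $id) {
    ...customerFields
    orders @include(if: $withOrders) {
      date
    }
  }
}

fragment customerFields on Customer {
  id
  name
}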

Modern developments in microservices and serverless architectures make it possible to create highly scalable systems. Combining this advantage with GraphQL's network-load-optimized APIs results in highly optimized, data-driven systems. This article gives a first insight into GraphQL, with a special focus on its interaction with AWS Lambda as a representative of serverless architectures.

A first look

What exactly does a call to a GraphQL server look like? The client creates a JSON request with the elements query and variables. The content of the query element is a string containing the name-giving graph query language. Ordinary JSON objects of any complexity are passed as variables. The request is sent to the server via a classic POST request, typically to an endpoint such as /graphql. Listing 1 shows a server request including parameters.

{
  "query": "
    query testQuery($id: Int!) {
      getCustomer(id: $id) {
        id
        name
        orders {
          date
        }
      }
    }
  ",
  "variables": {
    "id": 0
  }
}

The example could now be expanded at will, for example with the customer's birthday, order IDs, or last login. Anything is possible as long as the properties are defined as return values within GraphQL. This is done in the schema (Listing 2), which contains all operations and object structures GraphQL is supposed to work with (see the box “Important GraphQL schema terms”).

type Query {
  getCustomer(id: Int!) : Customer
}
 
type Customer {
  id: Int!
  name: String!
  age: Int
  birthdate: String
 
  orders: [Order]
}
 
type Order {
  amount: Int!
  date: String
}

After a query has been processed on the server, the response is returned. The response is also in JSON format and can be read and processed by existing client implementations (Listing 3).

{
  "data": {
    "getCustomer": {
      "id": 0,
      "name": "Micha",
      "orders": [
        {"date": "2017-12-21"}, {"date": "2018-02-17"}, {"date": "2018-02-21"}
      ]
    }
  }
}


Implementation as Java back-end

After getting to know the basic usage of a GraphQL server, we now move on to the concrete implementation. In the following, an AWS serverless (Lambda) function with a connection to a NoSQL database will be created. The choice of programming language comes first, since AWS Lambda supports languages such as Node.js, Python, and Java. For the following example, Java 8 is used in conjunction with AWS's own NoSQL database DynamoDB. It is then sufficient to create a Maven project with the following AWS and GraphQL dependencies (Listing 4).

<dependencies>
  <!-- GraphQL dependencies -->
  <dependency>
    <groupId>com.graphql-java</groupId>
    <artifactId>graphql-java</artifactId>
    <version>7.0</version>
  </dependency>
  <dependency>
    <groupId>com.graphql-java</groupId>
    <artifactId>graphql-java-tools</artifactId>
    <version>4.3.0</version>
  </dependency>
 
  <!-- AWS dependencies -->
  <dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-lambda-java-core</artifactId>
    <version>1.2.0</version>
  </dependency>
  <dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk-dynamodb</artifactId>
    <version>1.11.280</version>
  </dependency>
  <dependency>
    <groupId>com.google.code.gson</groupId>
    <artifactId>gson</artifactId>
    <version>2.8.2</version>
  </dependency>
</dependencies>

The entry point for calls is a method with two parameters – an input parameter and a context parameter:

public String handleRequest(InputType input, Context context)

The input parameter is already tailored to GraphQL: AWS Lambda automatically deserializes the received JSON request into an object of the given type. The following two properties are sufficient for GraphQL:

class InputType {
  String query;
  Map<String, Object> variables;
  ...
}

Start-up of the serverless function

When the Lambda function starts, the GraphQL schema is first analyzed and the corresponding Java handlers are wired up. For this to happen, the schema must be supplied with the corresponding information when it is created. Three aspects are important here:

The first aspect is the parsing and validation of the schema; syntax errors are detected during start-up:

SchemaParserBuilder parser = SchemaParser.newParser().file("schema.graphqls");

The second one is setting up the Java resolvers; these classes contain the subsequent business logic:

parser.resolvers(new QueryResolver());
GraphQLSchema schema = parser.build().makeExecutableSchema();

The third aspect is the transfer of the data to the GraphQL service – the transferred parameters are parsed by GraphQL and the corresponding business logic is called:

ExecutionInput exec = ExecutionInput.newExecutionInput()
  .query(input.getQuery())
  .variables(input.getVariables())
  .build();

The result can then be converted into the specification-compliant response map and serialized as JSON, for example with Gson, which is already among the dependencies:

return new Gson().toJson(
  GraphQL.newGraphQL(schema).build()
    .execute(exec)
    .toSpecification());
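
Putting these pieces together, a complete handler might look like the following sketch. The class name GraphQLHandler and the static initialization are illustrative choices rather than part of the original listings; building the schema once per container lets warm Lambda invocations reuse it.

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.coxautodev.graphql.tools.SchemaParser;
import com.google.gson.Gson;

import graphql.ExecutionInput;
import graphql.ExecutionResult;
import graphql.GraphQL;
import graphql.schema.GraphQLSchema;

// Sketch: InputType and QueryResolver are the classes shown in this article
public class GraphQLHandler implements RequestHandler<InputType, String> {

  // Parse the schema and wire the resolvers once per Lambda container;
  // additional resolvers (e.g. a MutationResolver) can be registered here as well
  private static final GraphQLSchema SCHEMA = SchemaParser.newParser()
      .file("schema.graphqls")
      .resolvers(new QueryResolver())
      .build()
      .makeExecutableSchema();

  private static final GraphQL GRAPHQL = GraphQL.newGraphQL(SCHEMA).build();

  @Override
  public String handleRequest(InputType input, Context context) {
    // Hand the query string and the variables map over to GraphQL
    ExecutionInput exec = ExecutionInput.newExecutionInput()
        .query(input.getQuery())
        .variables(input.getVariables())
        .build();

    ExecutionResult result = GRAPHQL.execute(exec);

    // Serialize the specification-compliant result map as the JSON response
    return new Gson().toJson(result.toSpecification());
  }
}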

Workflow of the requests

After the query has been passed to GraphQL, the method to be called is parsed and determined. The transferred parameters are also validated automatically and converted into the corresponding Java objects. At this point the GraphQL service has done its duty, and all the desired Java functionality can be executed in the familiar way. Listing 5 connects to a DynamoDB table and reads a customer object. One peculiarity here: if the Lambda function and the DynamoDB table are operated in the same AWS account, it is sufficient to specify the AWS region and the table name as connection parameters.

public class QueryResolver implements GraphQLQueryResolver {
 
  public Customer getCustomer(int id) {
    return getDB().load(Customer.class, id);
  }
 
  private DynamoDBMapper getDB() {
    AmazonDynamoDBClientBuilder builder = AmazonDynamoDBClientBuilder.standard();
    builder.withRegion(Regions.EU_CENTRAL_1);
 
    return new DynamoDBMapper(builder.build());
  }
 
}

When working with DynamoDB objects, you can of course proceed in the usual POJO manner (Listing 6). For this, AWS offers JPA-style annotations that map database return values to Java objects.

@DynamoDBTable(tableName = "customer")
public class Customer {
  @DynamoDBHashKey(attributeName = "id")
  public Integer getId() { return id; }
 
  @DynamoDBAttribute(attributeName="name")
  public String getName() { return name; }
 
  @DynamoDBAttribute(attributeName="orders")
  public List<Order> getOrders() { return orders; }
 
  [...]
}
 
 
@DynamoDBDocument
public class Order {
  @DynamoDBHashKey(attributeName = "id")
  public Integer getId() { return id; }
 
  @DynamoDBAttribute(attributeName = "date")
  public String getDate() { return date; }
 
  [...]
}

As soon as the processing of the request is completed, the result is passed back to the executing GraphQL service. This is where the added value of GraphQL comes into its own. Remember that the request asked only for the customer's ID, name, and orders, whereas the customer object contains additional properties. When this object is passed back, the GraphQL service strips the unrequested properties and structures so that only the requested elements are delivered to the client.

Finally, a brief look at write requests (mutations). They follow the same scheme as queries and are identified in the request only by the keyword “mutation”. Extending the function requires just three additions in GraphQL.

The first update is the expansion of the schema:

type Mutation {
  addOrder(newOrder: OrderInput!) : Order
}
 
input OrderInput {
  customerId: Int!
  amount: Int!
}

The second one is the registration of the handler:

parser.resolvers(new MutationResolver());

And the third one, the implementation of business logic:

public Order addOrder(OrderInput newOrder) {
  Customer c = getDB().load(Customer.class, newOrder.getCustomerId());
  Order o = new Order();
 
  o.setAmount(newOrder.getAmount());
  o.setDate(DateTime.now().toDateTimeISO().toString());
 
  c.getOrders().add(o);
  getDB().save(c);
 
  return o;
}

Table 1 shows the example request and the return data. The table clearly shows that the structure of a mutation is very similar to that of a query.
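
As an illustration, such a mutation request uses the same JSON envelope as Listing 1. The following sketch is based on the schema extension above; the concrete values are made up:

{
  "query": "
    mutation addOrderMutation($newOrder: OrderInput!) {
      addOrder(newOrder: $newOrder) {
        amount
        date
      }
    }
  ",
  "variables": {
    "newOrder": {
      "customerId": 0,
      "amount": 2
    }
  }
}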

Deployment and set up of AWS Lambda

One small detail is still missing in order to run the Lambda function within AWS: the Maven project must be packaged as a so-called fat JAR. Maven offers the Shade plug-in for this purpose, which bundles all required dependencies into a single artifact that can be deployed to AWS. Running mvn clean package creates the artifact, which can then be uploaded to AWS; the Lambda function is now operational. To work with a client such as Angular, it only needs to be exposed to the Internet via an API Gateway and assigned the appropriate roles and permissions. Here, I would like to point to the very detailed AWS documentation on creating an API Gateway with proxy integration for Lambda.
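
A minimal Shade configuration in the pom.xml could look like the following sketch; the plug-in version is an assumption and should be adjusted to the current release:

<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-shade-plugin</artifactId>
      <version>3.1.0</version>
      <executions>
        <execution>
          <!-- bind the shade goal to the package phase so mvn clean package produces the fat JAR -->
          <phase>package</phase>
          <goals>
            <goal>shade</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>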

Summary and conclusion

The example shown is a good starting point for using GraphQL in Java, and the implementation within a serverless application guarantees highly scalable use. In GraphQL, multiple methods can be bundled into a single request. In addition, there are existing libraries that allow seamless integration of a GraphQL interface into Spring Boot, as well as various implementations for Angular, Node.js, and Python, among others. This ensures seamless use in both server and client applications.

Due to its design, GraphQL can be easily integrated into existing architectures. Since it is only a thin layer between the requested data and the business logic, it is easy to connect well-known databases such as MySQL and Oracle via JPA. Thanks to this flexibility, it integrates very well alongside existing REST APIs to provide highly optimized requests to backend services.

Finally, I would recommend two more links to the interested reader: the first is the demonstration code accompanying this article on GitHub; the second is the GraphQL homepage, where many more ideas for using GraphQL are presented.

 
