Microservices with AWS Lambda

I’ve been building microservices for several years. I’ve mostly used .NET, .NET Core, and Ruby on Rails to build them, and I’ve generally deployed them either into AWS EC2 or Azure Service Fabric. I’ve found most enterprises aren’t ready for managing microservices in containers, either in the cloud or in their own data centres. Keeping things as simple as possible has often meant ignoring the existence of ‘cool’ tech in preference to less complicated approaches. When I first evaluated AWS Lambda as a microservice platform it was pretty immature, but there have been significant improvements over the years which make it worth another look.

Not just a function

Let’s take a look at a very simple handler taken from the AWS documentation. I’ve made a few changes to it while experimenting:

src/index.ts

import parseName from "./parseName";
import parseCity from "./parseCity";
import parseDay from "./parseDay";
import parseTime from "./parseTime";

console.log('Loading hello world function');

// The single entry point Lambda will invoke.
export const handler = async (event: any) => {
    console.log(`request: ${JSON.stringify(event)}`);

    const name = parseName(event);
    const city = parseCity(event);
    const day = parseDay(event);
    const time = parseTime(event);

    let greeting = `Good ${time}, ${name} of ${city}.`;
    if (day) greeting += ` Happy ${day}!`;

    const responseBody = {
        message: greeting,
        input: event
    };

    const response = {
        statusCode: 200,
        headers: {
            "x-custom-header": "my custom header value"
        },
        body: JSON.stringify(responseBody)
    };
    console.log(`response: ${JSON.stringify(response)}`);
    return response;
};

Remember, I am not by any means holding this handler up as a glowing example of perfection; I merely want to point out that whatever we do in this file, we’re always implementing a single function – one method, not an API. We export only the one handler, so how can a Lambda serve as a microservice?

Although we don’t want our microservices to be ‘chatty’, each will inevitably need to implement more than one method. Even allowing for a basic healthcheck and a Swagger endpoint, that’s two functions before we’ve implemented any domain logic. And even if we’re simply listening to a queue, it’s highly likely there will be more than one type of event in that queue.

We also desperately need to avoid calling our Lambda via the AWS SDK. There’s nothing wrong with the SDK itself, but making service calls this way leaks the underlying implementation of our microservice – every consumer would need to know how to speak Lambda. That’s not a good thing; let’s stick to well-known standards, such as HTTP.

HTTP calls to AWS Lambda can be proxied in a few different ways:

  1. AWS API Gateway is probably the most common method. The HTTP API functionality is very reasonably priced.
  2. AWS also provide the Application Load Balancer, which differs somewhat from API Gateway in pricing and functionality, so make sure you know which will work for you before committing. An ALB can proxy traffic to a Lambda in much the same way as API Gateway.
  3. You could also spin up something from a 3rd party. I highly recommend Kong – it’s based on Nginx and is very intuitive. Kong can be extended with plugins, and there is one for proxying traffic to AWS Lambda.

I’ll mostly be talking about API Gateway, but this article applies whatever proxy you’re using.
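To make that concrete, here’s a minimal sketch of fronting a single Lambda with an HTTP API using the CDK. It assumes a recent version of aws-cdk-lib in which the apigatewayv2 modules are stable; all of the names and paths are illustrative:

import * as cdk from "aws-cdk-lib";
import * as lambda from "aws-cdk-lib/aws-lambda";
import * as apigwv2 from "aws-cdk-lib/aws-apigatewayv2";
import { HttpLambdaIntegration } from "aws-cdk-lib/aws-apigatewayv2-integrations";

const app = new cdk.App();
const stack = new cdk.Stack(app, "GreetingStack");

// The service itself: one function containing all of its behaviours.
const fn = new lambda.Function(stack, "GreetingService", {
    runtime: lambda.Runtime.NODEJS_18_X,
    handler: "index.handler",
    code: lambda.Code.fromAsset("./dist"), // illustrative build output path
});

// Every route is proxied to the single function, which does its own routing.
new apigwv2.HttpApi(stack, "GreetingApi", {
    defaultIntegration: new HttpLambdaIntegration("GreetingIntegration", fn),
});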

So how does the proxy allow us to handle multiple endpoints? Well, the structure of the event parameter in the above handler looks something like this:

{
    version: '2.0',
    routeKey: '$default',
    rawPath: '/my/path',
    rawQueryString: 'parameter1=value1&parameter1=value2&parameter2=value',
    cookies: [ 'cookie1', 'cookie2' ],
    headers: {
        'Header1': 'value1',
        'Header2': 'value1,value2'
    },
    queryStringParameters: { parameter1: 'value1,value2', parameter2: 'value' },
    requestContext: {
        accountId: '123456789012',
        apiId: 'api-id',
        authorizer: {
            jwt: {
                claims: { 'claim1': 'value1', 'claim2': 'value2' },
                scopes: [ 'scope1', 'scope2' ]
            }
        },
        domainName: 'id.execute-api.us-east-1.amazonaws.com',
        domainPrefix: 'id',
        http: {
            method: 'POST',
            path: '/my/path',
            protocol: 'HTTP/1.1',
            sourceIp: 'IP',
            userAgent: 'agent'
        },
        requestId: 'id',
        routeKey: '$default',
        stage: '$default',
        time: '12/Mar/2020:19:03:58 +0000',
        timeEpoch: 1583348638390
    },
    body: 'Hello from Lambda',
    pathParameters: { 'parameter1': 'value1' },
    isBase64Encoded: false,
    stageVariables: { 'stageVariable1': 'value1', 'stageVariable2': 'value2' }
}

By checking the values of event.requestContext.http (specifically method and path), we have enough information to route requests to suitable handlers. This gives us something much closer to a traditional microservice.
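A minimal version of that routing, assuming the @types/aws-lambda typings (the route table and handlers here are purely illustrative), might look like this:

import { APIGatewayProxyEventV2, APIGatewayProxyResultV2 } from "aws-lambda";

type Route = (event: APIGatewayProxyEventV2) => Promise<APIGatewayProxyResultV2>;

// Map "METHOD path" keys to a handler per behaviour of the service.
const routes: Record<string, Route> = {
    "GET /health": async () => ({ statusCode: 200, body: "OK" }),
    "POST /greetings": async (event) => ({
        statusCode: 201,
        body: JSON.stringify({ received: event.body })
    })
};

export const handler = async (
    event: APIGatewayProxyEventV2
): Promise<APIGatewayProxyResultV2> => {
    const { method, path } = event.requestContext.http;
    const route = routes[`${method} ${path}`];
    return route ? route(event) : { statusCode: 404, body: "Not found" };
};

An exact-match lookup like this won’t handle path parameters, which is where a proper routing library starts to earn its keep.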

There is a pretty decent project which can be used for this here (npm package can be found here). What’s more, this package allows us to use the same approach for handling events from various sources, such as SQS.
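The same entry point can also dispatch on the shape of the incoming event. As a rough sketch (the type guard is my own illustration, not taken from that package):

import { SQSEvent, APIGatewayProxyEventV2 } from "aws-lambda";

// Illustrative type guard: SQS events arrive as a batch of Records.
const isSqsEvent = (event: unknown): event is SQSEvent =>
    Array.isArray((event as SQSEvent).Records) &&
    (event as SQSEvent).Records[0]?.eventSource === "aws:sqs";

export const handler = async (event: SQSEvent | APIGatewayProxyEventV2) => {
    if (isSqsEvent(event)) {
        // Handle each queued message in the batch.
        for (const record of event.Records) {
            console.log("queue message:", record.body);
        }
        return;
    }
    // Otherwise fall through to the HTTP routing shown earlier.
    return { statusCode: 404, body: "No route matched" };
};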

Running a classic microservice in Lambda

There are some interesting aspects to running something larger than the average function in a single Lambda. For starters, each behaviour of our service will have different processing characteristics – different memory usage, different completion times, different resource requirements. Lambda will happily scale regardless, although the scaling profile won’t be tied to any one behaviour. I’ve heard people argue that this is a bad thing, but had you used a more classic platform for the microservice, it would scale with exactly the same characteristics.

The ability of a service to scale horizontally shouldn’t be confused with the ability to trace resource usage for different processes within that service. Whether you build a monolith, classic SOA, a microservice, or anything else, you still want to be able to trace hot spots, and the same approaches work for each. While it’s true that a larger service can take longer to warm up, I would say this is an optimisation problem you only need to solve if you actually have it (like all optimisation problems).

Size limits

The full set of limits involved with AWS Lambda can be read in the Lambda quotas page of the AWS documentation.

At the time of writing, the first point of interest is the invocation payload size limit. A synchronous request or response object can be as large as 6 MB, but an asynchronous payload is limited to a mere 256 KB. I find the difference surprising; although I’m sure there must be a reason behind it, I would have thought the effort would have gone into pushing people toward an asynchronous model.

Where you will probably find the most difficulty is with the deployment package size. The hard limit of 250 MB (unzipped) can’t be negated using layers: if your Lambda references a 50 MB layer, you can only deploy 200 MB with your Lambda.

AWS enforce these restrictions because they didn’t create Lambda with the classic microservice architecture in mind. Lambda functions are meant to be very small. Unfortunately, if you embrace Lambda as your default platform, then going very small also means you lose the ‘one decoupled deployable’ aspect of microservices – this might (or might not) cause you some pain.

Flexibility

I don’t think I’ve ever recommended that one approach is right all the time, and this is no different. The beauty of building your microservice as a Lambda is that, if you need to, you can break out a handler into its own Lambda (although you will probably want to keep it in the same repo). Reasons why you might want to split a handler out in this way include:

  • Very different load characteristics.
  • Making a hard divide between parts of a service which require elevated privileges.
  • A code path which requires significantly more resources to run.
  • A code path which requires 3rd party dependencies which push your Lambda over the uncompressed size limit.

Equally, you should consider that if the two Lambdas share resources, such as database tables or network shares, they are now tightly coupled. The fact that your code is now in a different Lambda doesn’t mean it is no longer part of your microservice, so when you update your microservice you will have to update your break-away Lambda as well. Updating two Lambdas, a database, and possibly an S3 bucket makes it much harder to guarantee they are all updated simultaneously. One approach would be to make only backwards-compatible changes to the DB and S3, deploy completely new versions of both Lambdas every time, and only then switch the proxy to the new versions (sketched below). Another could be to break the dependencies apart along with the service. Different strategies will be right for different scenarios, but this isn’t cookie-cutter stuff, which means increased complexity and risk.

This does give you a way to deploy many Lambdas together, but it comes at the cost of extra complexity. Whenever you try to imitate simultaneity, you inevitably hit situations which defy your efforts. Versioning in a distributed, event-driven architecture is mandatory, but it often requires multiple steps which are specific to that particular change and aren’t fully understood ahead of time.
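A sketch of that side-by-side strategy, again assuming the CDK (the context value, function names, and paths are all illustrative):

import * as cdk from "aws-cdk-lib";
import * as lambda from "aws-cdk-lib/aws-lambda";

const app = new cdk.App();
// Passed in at deploy time, e.g. `cdk deploy -c serviceVersion=v2`.
const version = app.node.tryGetContext("serviceVersion") ?? "v1";
const stack = new cdk.Stack(app, `OrderServiceStack-${version}`);

// Both Lambdas that make up the service deploy together under the new
// version; the proxy is only repointed once the whole set is live.
for (const name of ["orders", "order-reports"]) {
    new lambda.Function(stack, `${name}-${version}`, {
        functionName: `${name}-${version}`,
        runtime: lambda.Runtime.NODEJS_18_X,
        handler: "index.handler",
        code: lambda.Code.fromAsset(`./dist/${name}`),
    });
}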

Events and message processing

Microservice architectures tend to go hand in hand with eventing, and AWS Lambda can definitely be used in this way. There are some things which should be considered.

  1. You aren’t going to do FIFO very well with Lambda. Lambda is designed to scale out – multiple messages will trigger multiple concurrent instances, which won’t know whether a previous message has finished processing or not.
  2. The AWS infrastructure can kill your Lambda without giving it a chance to ‘undo’ partial processing. A Lambda has an execution timeout – you can set the timeout, but when something goes wrong and the timeout hits, the logs from AWS may not contain the information you need to link them back to something meaningful.
  3. If you are using a Lambda to call an on-prem API, you need to be aware of the load limitations of that API. Failing messages could be retried multiple times, which means that if you have a failing API which is not responding, your functions may double, triple, or quadruple the number of requests being sent – not good for your failing API. Consider implementing a circuit breaker (not straightforward when you’re dealing with such volatile infra – see the sketch after this list).
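For illustration, here is a minimal circuit-breaker sketch. The big caveat is in the comments: each Lambda instance has its own memory, so in practice the breaker state would need to live in a shared store such as DynamoDB or ElastiCache, which is precisely why the pattern isn’t straightforward here. The thresholds and names are illustrative.

// In-memory state: in Lambda, each concurrent instance has its own copy,
// so a real implementation would keep this in a shared store instead.
const FAILURE_THRESHOLD = 5;
const COOL_DOWN_MS = 30_000;

let failures = 0;
let openedAt = 0;

export async function callWithBreaker<T>(fn: () => Promise<T>): Promise<T> {
    // While the circuit is open, fail fast instead of hammering the API.
    if (failures >= FAILURE_THRESHOLD && Date.now() - openedAt < COOL_DOWN_MS) {
        throw new Error("Circuit open: skipping call to failing API");
    }
    try {
        const result = await fn();
        failures = 0; // a success closes the circuit
        return result;
    } catch (err) {
        failures += 1;
        if (failures === FAILURE_THRESHOLD) openedAt = Date.now();
        throw err;
    }
}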

What’s in it for me?

Much of this article has talked about the difficulties of leveraging AWS Lambda as a microservice platform, some of which are nuanced and may not always cause you problems. It’s clear that Lambda isn’t a cure-all for microservice architecture, so what’s good about it?

There’s a simple answer: automation and scalability.

Deploying into Lambda is a piece of cake. You can do it in a couple of lines of code with the SDK, you can do something similar with the CDK, and loads of automation platforms have step templates for Lambda. Want to version your service? Deploy a second copy of the function and suffix its name with V2. Need multiple environments? Use different VPCs, or append an environment name to the function name.
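As a sketch of the SDK route (AWS SDK for JavaScript v3; the function name and zip path are illustrative):

import { readFileSync } from "node:fs";
import { LambdaClient, UpdateFunctionCodeCommand } from "@aws-sdk/client-lambda";

const client = new LambdaClient({ region: "eu-west-1" });

// Push a freshly built zip over the existing function's code.
await client.send(new UpdateFunctionCodeCommand({
    FunctionName: "greeting-service",
    ZipFile: readFileSync("./dist/service.zip"),
}));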

I don’t think there’s another framework available today that makes it quite so easy to release code.

And scaling is pretty much the default. It’ll run as hot as you need it to.

Are these good enough reasons to deal with the downsides? They might be for some situations, and not for others. As people say when they want to wash their hands of responsibility: you do you. I hope you found some of this useful enough to help you decide.
