Integration Testing (With Serverless AWS Microservices and SQL Server)

Sean McDowell
5 min read · Mar 24, 2021


Context

On a recent epic, my team spun up a new microservice consisting of an API Gateway in AWS backed by Node Lambdas with a SQL Server data store.
We started out with a simple view: high unit test coverage would satisfy the automated testing needs of the new stack. Early on this enabled us to release code rapidly with high confidence.

The Problem

Eventually, as the complexity of the solution increased, we hit our first bug: a particular path in the code that wasn't reached often through the front end of the application but which had a high level of unit test coverage. The problem lay in an area outside our unit tests' reach: an assumption about how an ORM core to the application was interacting with the DB.

An interesting property of unit tests is that:

Unit tests only validate that the code works as you assume it should; they can't validate your assumptions about the world outside your particular unit of code.

Solution

With this problem in mind I dusted off the much-touted testing pyramid, which you may have browsed past in a bulky software engineering book or a testing module at university.

At each level up the pyramid, the tests make fewer assumptions about the world around them, cover more technical components and deliver more value, but usually at a higher cost of effort and maintenance. I decided the solution to the immediate problem at hand was to implement the next level of the testing pyramid: integration tests, which reach through the layers of the application from the initial routing of the HTTP request right through to the DB and back.

The problem of testing the integrated tiers of our application in a state as close to production as possible, and in an automated fashion we could gate our build pipeline with, could be split into a few sub-problems:

1. How do we stand up our database, replicate its schema and reference data, and add test data?

2. How do we stand up an instance of our API?

3. How do we call the API in a test?

The Database

To stand up an instance of the DB we used Microsoft's SQL Server Docker image (mcr.microsoft.com/mssql/server). This allowed us to spin up a fresh SQL Server DB in a consistent manner, locally on our development machines and on the build server, in a few lines of bash script. The container was dropped and recreated between test suites to keep test runs segregated.

# Create and run a SQL Server container with Docker, naming it so we can reference it below
docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=yourStrong(!)Password' -p 1433:1433 --name sql-test -d mcr.microsoft.com/mssql/server:2017-latest
# Create the DB on the new SQL Server instance
docker exec sql-test /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P 'yourStrong(!)Password' -Q 'CREATE DATABASE a_test_database'

This solution hooked in nicely with the data migration tool Umzug, which we had previously set up on the project to manage our schema migrations. We triggered Umzug on creation of the DB to add the schema and reference data.
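
As a rough sketch of that trigger (assuming Umzug v3 and a glob-based migration layout; our actual storage and connection wiring differed), migrating the fresh DB takes only a few lines:

// Minimal sketch: apply every pending migration to the freshly created test DB.
// memoryStorage is reasonable here because the container is recreated each run,
// so there is no migration history worth persisting.
const { Umzug, memoryStorage } = require('umzug');

async function migrateTestDb(db) {
  const umzug = new Umzug({
    migrations: { glob: 'migrations/*.js' }, // illustrative path to the migration files
    context: db,                             // e.g. the test DB connection
    storage: memoryStorage(),
    logger: console,
  });
  // Brings the empty DB up to the production schema, including reference data
  await umzug.up();
}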

To add the test data required for each test, we wrote a simple module with two functions which found and executed optional corresponding setup and teardown SQL files based on the name of the test. The functions could then easily be called in Jest's beforeAll and afterAll hooks.
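
A hypothetical sketch of that module (the file naming convention, the mssql driver and the connection string variable are illustrative, not our exact implementation):

// Look for optional <testName>.setup.sql / <testName>.teardown.sql files
// and execute them against the test DB if they exist
const fs = require('fs');
const path = require('path');
const sql = require('mssql');

async function runSqlFileIfPresent(testName, phase) {
  const file = path.join(__dirname, 'sql', `${testName}.${phase}.sql`);
  if (!fs.existsSync(file)) return; // both scripts are optional
  const pool = await sql.connect(process.env.TEST_DB_CONNECTION_STRING);
  await pool.request().query(fs.readFileSync(file, 'utf8'));
  await pool.close();
}

module.exports = {
  setUp: (testName) => runSqlFileIfPresent(testName, 'setup'),
  tearDown: (testName) => runSqlFileIfPresent(testName, 'teardown'),
};

In a test file the hooks then read naturally:

// Seed and clean the DB around a test suite
beforeAll(() => setUp('get-user'));
afterAll(() => tearDown('get-user'));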

The API

The API consisted of an AWS API Gateway backed by Node Lambdas. We test this during local development using the AWS SAM CLI, which allows you to stand the API up on a local machine.

We were able to install this tool on the build server using pip, which was already available as a prerequisite option in our provisioned pipeline. The only hitch with this approach was that SAM's start-api command starts a process which runs in the foreground, preventing our build agent from continuing to the next step. To get around this we ran the start-api command in the background using nohup and added some scripting to kill the process when the tests were finished.

# Run the API locally without holding up the terminal, remembering its PID
nohup sam local start-api &
SAM_PID=$!
# Once the tests have finished: kill "$SAM_PID"

The Tests

The unit tests of the project were written in Jest, which made sense as a starting point for the integration tests to avoid technology overload in the project (or JS fatigue, as I've heard it called for npm projects). A quick search turned up SuperTest, a JS library (with TS support) with a fluent API allowing for simple HTTP requests and assertions.

// Run a test against the API with SuperTest
const request = require('supertest');

it('returns the user as JSON', () =>
  request('http://localhost:8080')
    .get('/user')
    .expect('Content-Type', /json/)
    .expect('Content-Length', '15')
    .expect(200));

The Result

We end up with three separate processes talking to each other: a DB which replicates the production schema, a fully functioning API which hooks into the DB, and a suite of Jest tests which exercise the service through the same RESTful API we provide to clients. This enables us to test the API from end to end in conditions close to production, gate our build with the result, and ultimately release with higher confidence and less manual regression testing.

Problems

Because the test code is separated out from the actual running API code, this approach prevents us from easily mocking out backing dependencies in code like we could in our unit tests. The microservice discussed here has integration points which have not yet been mocked out, so we don't yet have coverage for the endpoints that call through to the integration layer. This might be achieved in the future by setting up a mock instance of the integrations, or by some sort of test mode in the app that redirects to an integration stub in code (although this would violate dev/prod parity).
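
If we did go the test-mode route, it might look something like the sketch below (the module names and environment flag are hypothetical):

// Hypothetical sketch of a "test mode" switch for a backing integration:
// the app resolves a stubbed client instead of the real one when the flag is set
const realClient = require('./integrationClient');
const stubClient = require('./integrationClientStub');

function resolveIntegrationClient() {
  // This branch is exactly what breaks dev/prod parity: production never runs the stub
  return process.env.INTEGRATION_TEST_MODE === 'true' ? stubClient : realClient;
}

module.exports = { resolveIntegrationClient };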

Possibilities

When getting the tests off the ground we kept them segregated, using setup and teardown scripts to provide a clean slate of data for each test. These tests could potentially be chained together to move further up into the E2E space of the testing pyramid, going through full workflows as our clients would in a real use case.
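
Such a workflow test might look something like this sketch (the endpoints and payload are hypothetical):

// Hypothetical sketch: chain calls through a full workflow rather than hitting one endpoint
const request = require('supertest');

it('creates a user and then retrieves it', async () => {
  const api = request('http://localhost:8080');
  const created = await api
    .post('/user')
    .send({ name: 'Test User' })
    .expect(201);
  // Feed the output of the first call into the next, as a real client would
  await api.get(`/user/${created.body.id}`).expect(200);
});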
