Secure Private APIs with Client Certificates and Nginx

Microservice architectures are a common way to break up the complexity of a monolithic system.
In the best case, they are scalable, flexible, and modular.

The CurrySearch system is separated into three services: Management, Search, and Statistics.

A schematic representation of the CurrySearch-System:

[Figure: clients 0 to n reach the Management, Search, and Statistics services on port 443 (public); inter-service traffic runs on port 501 (internal).]

The Problem of Securing Inter-Microservice Communication

One problem with microservices that is rarely discussed is the security of inter-service communication.

In many cases microservices need to talk to each other.
CurrySearch, for example, depends on several important internal API calls between its services.

But how do the services ensure that they are talking to the correct counterpart? And how does the receiving end know who is talking to it?

This problem demands a scalable and flexible solution, since instances of any service may be added or removed at any time.

This blog post describes the evolution of the CurrySearch network setup and how it solves this problem.


SSL Encryption with Certificates and Nginx as a Reverse Proxy

The first step is to encrypt all communication (both with clients and between services) using HTTPS and valid certificates from Let’s Encrypt.

The CurrySearch system internally uses Rocket as its web server. Rocket does support HTTPS, but that support is new and has not yet proven itself over time.

Check out how to get a Let’s Encrypt certificate here; for CurrySearch we use the acme-tiny client.

This is why we chose to put Nginx in front of Rocket to handle all encrypted communication (we also get HTTP/2 support for free):

Nginx Configuration:

server {
    # terminate TLS for the public API on port 443
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    # server certificate and key issued by Let's Encrypt
    ssl_certificate /opt/server_certs/hostname.pem;
    ssl_certificate_key /opt/server_certs/keys/hostname.key;
    ssl_session_timeout 5m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA;
    ssl_session_cache shared:SSL:150m;
    ssl_dhparam /opt/server_certs/dhparam.pem;
    ssl_prefer_server_ciphers on;

    server_name $hostname;

    # forward the decrypted traffic to Rocket on localhost
    location / {
        proxy_pass http://localhost:8000;
    }
}

Rocket Setup:

use rocket::config::{Config, Environment};

let logging = true;
// bind Rocket to localhost so it is not reachable from outside;
// Nginx on port 443 is the only public entry point
let rocket_conf = Config::build(Environment::Staging).address("localhost").port(8000).unwrap();
rocket::custom(rocket_conf, logging)
    .mount("/", routes![...])
    .launch();

Now the system is only reachable over HTTPS on port 443, and all communication must pass through Nginx.
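
For a public client nothing changes: a call to the public API is an ordinary HTTPS request. As a minimal sketch, assuming the reqwest crate with its blocking feature (the hostname is a placeholder):

// an ordinary HTTPS request against the public API; reqwest and the
// hostname are assumptions for illustration only
fn main() -> Result<(), reqwest::Error> {
    let body = reqwest::blocking::get("https://hostname/")?.text()?;
    println!("{}", body);
    Ok(())
}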


“Physically” Separate Public-Facing and Internal APIs

The next step is to physically separate the two APIs. We want all internal communication to happen on port 501; all public communication stays on port 443.

This way we can implement authentication at the infrastructure level: microservice authentication is no longer part of the application itself.

Nginx Config:

# public communication runs through port 443
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    [...ssl directives...]

    location / {
        proxy_pass http://localhost:8000;
    }
}

# internal communication runs through port 501
server {
    listen 501 ssl http2;
    listen [::]:501 ssl http2;

    [...ssl directives...]

    location / {
        proxy_pass http://localhost:8001;
    }
}

Rocket Setup:

use std::thread;

use rocket::config::{Config, Environment};

let logging = true;
// both Rockets bind to localhost only; Nginx remains the sole entry point
let internal_rocket = Config::build(Environment::Staging).address("localhost").port(8001).unwrap();
let public_rocket = Config::build(Environment::Staging).address("localhost").port(8000).unwrap();

// the call to .launch() blocks, so the internal Rocket is started in a
// separate thread
thread::spawn(move || {
    rocket::custom(internal_rocket, logging)
        .mount("/", routes![...])
        .launch();
});

rocket::custom(public_rocket, logging)
    .mount("/", routes![...])
    .launch();

This is not yet a secure setup: anyone could use port 501 to talk to the internal API.


Securing the Internal API with Client Certificates

Now to the critical part: securing all internal communication with client certificates. The basic idea is to allow communication on port 501 only when a valid client certificate is presented.

This entails creating and distributing client certificates and running a valid certificate authority. We used this guide to set up our own certificate authority.
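
Purely as an illustration of what this CA setup involves, here is a minimal sketch in Rust using the rcgen crate; the crate choice (with its pre-0.11 API) and all file names are assumptions, not the tooling from the guide:

// a minimal sketch with the rcgen crate (an assumption): create an
// internal CA and sign one client certificate with it
use rcgen::{BasicConstraints, Certificate, CertificateParams, IsCa};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // the internal certificate authority: a self-signed CA certificate
    let mut ca_params = CertificateParams::new(vec!["internal-ca".to_string()]);
    ca_params.is_ca = IsCa::Ca(BasicConstraints::Unconstrained);
    let ca = Certificate::from_params(ca_params)?;
    // this PEM is what ssl_client_certificate will point at below
    std::fs::write("internal_ca.pem", ca.serialize_pem()?)?;

    // one client certificate per service instance, signed by the CA
    let client_params = CertificateParams::new(vec!["hostname.client".to_string()]);
    let client = Certificate::from_params(client_params)?;
    std::fs::write("hostname.client.pem", client.serialize_pem_with_signer(&ca)?)?;
    std::fs::write("hostname.client.key", client.serialize_private_key_pem())?;
    Ok(())
}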

Once all client certificates are set up, you only need to add two lines to the Nginx config:

# internal communication runs through port 501
server {
    listen 501 ssl http2;
    listen [::]:501 ssl http2;

    [...ssl directives...]

    # add a ca-cert to authenticate against
    ssl_client_certificate /opt/ca_certs/internal_ca.pem;
    # and turn on client verification
    ssl_verify_client on;

    location / {
        proxy_pass http://localhost:8001;
    }
}

Now, anyone who tries to call our internal APIs will receive a 400 Bad Request error.

Alas, internal calls are now a bit more complex. Here is what such a call looks like with curl:

cert="/opt/client_certs/hostname.client.pem"
key="/opt/client_certs/keys/hostname.client.key"
curl https://hostname:501 --cert "$cert" --key "$key"

The application needs the correct access rights to read both the client certificate and its key.
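
The same call from inside a Rust service could look like the following; this is a minimal sketch assuming the reqwest crate (with its blocking and rustls-tls features), not necessarily the HTTP client CurrySearch uses:

// present the client certificate when calling the internal API on
// port 501; reqwest is an assumption for illustration
use std::fs;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Identity::from_pem expects the certificate and its private key
    // concatenated in a single PEM buffer
    let mut pem = fs::read("/opt/client_certs/hostname.client.pem")?;
    pem.extend(fs::read("/opt/client_certs/keys/hostname.client.key")?);
    let identity = reqwest::Identity::from_pem(&pem)?;

    let client = reqwest::blocking::Client::builder()
        .identity(identity)
        .build()?;

    // Nginx only forwards this request if the certificate verifies
    // against the internal CA
    let status = client.get("https://hostname:501/").send()?.status();
    println!("{}", status);
    Ok(())
}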

Here is the resulting system architecture as shown before:

[Figure: the same schematic as above. Clients reach the Management, Search, and Statistics services on port 443 (public); inter-service traffic runs on port 501 (internal).]

Next Up

In a follow-up blog post we will describe how not only HTTP but also plain TCP communication can be secured in the same way.

We use this internally to distribute preprocessed data as git repositories with git-daemon.