Docker Cloud for automated deployments

Shoaib Burq
Geografia Company Blog
4 min read · Apr 4, 2017


In this post I will share how we deploy applications to the cloud at Geografia using Docker Cloud with Amazon AWS or DigitalOcean.

tl;dr We wanted to make application deployment and scaling as easy as running docker-compose up. So we built a deployment pipeline on Docker Cloud. This lets us push changes and have the running web application update automatically, and lets us scale the application by starting new nodes in a Docker cluster.

At Geografia we aim to build applications in a sustainable, cost-effective and repeatable way. For this we have embraced Docker and settled on a deployment workflow for our Rails projects.

At a high level it consists of:

  1. GitHub-triggered Docker image builds & tests
  2. Docker Cloud (microservices) stack definition
  3. Linking the cloud provider and Docker Cloud
  4. Docker Cloud CLI for running & managing the microservices

Docker Based Automated Development & Deployment Workflow @Geografia

Git-based automated image builds

After dockerizing our development and production applications and setting up docker-compose (see my posts on that here, here & here), we configure automated Docker image builds on Docker Cloud, triggered by GitHub pull requests. It’s fairly easy and the documentation is straightforward. Here we can also create an automated test configuration: new images are only pushed to the registry if the tests pass. This is also where you point to the location of your Dockerfile relative to your git repository’s root. In our case we keep these files under a containers directory:

$ tree containers
containers
├── common
├── development
│   ├── Dockerfile
│   ├── docker-compose.yml
│   └── entrypoint
├── production
│   ├── Dockerfile
│   ├── docker-compose.yml
│   ├── entrypoint
│   └── nginx.conf
├── scripts
│   └── wait-for-it.sh
└── test
    ├── docker-compose.test.yml
    └── entrypoint

5 directories, 10 files

So for a production build we would pick the containers/production/Dockerfile. See screenshot below:

selecting the correct `Dockerfile` for automated builds in production
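The automated tests key off the docker-compose.test.yml file: Docker Cloud builds and runs the service named sut, and a zero exit code counts as a passing build, so the image is only pushed when that service succeeds. Here is a minimal sketch of such a file (the bundle exec rake test command and the build paths are assumptions; adjust them to your own suite):

```shell
# Sketch of a docker-compose.test.yml: Docker Cloud runs the `sut` service
# and treats a zero exit status as a passing build.
mkdir -p containers/test
cat > containers/test/docker-compose.test.yml <<'EOF'
sut:
  build: ../..
  dockerfile: containers/production/Dockerfile
  command: bundle exec rake test   # assumption: replace with your test command
EOF
```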

Docker Cloud stack

Once this is working nicely, it’s time to set up our Docker Cloud services stack. Stack configuration is pretty much the same as docker-compose configuration, with a couple of caveats: a small set of docker-compose keys is unsupported in stack files. Again, the docs are great!
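As an illustration, here is a sketch of a stack file for a three-service Rails stack like the one shown later in this post. The service and image names mirror our service ps output below; the links and ports are assumptions, and note that stack services reference already-pushed images rather than a build key:

```shell
# Sketch of a Docker Cloud stack file for a db / app / nginx stack.
cat > docker-cloud.yml <<'EOF'
db:
  image: mdillon/postgis:9.6
app:
  image: geografia/app-name:latest
  links:
    - db
nginx:
  image: nginx:1.11.9
  links:
    - app
  ports:
    - "80:80"
EOF
```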

Adding deployment nodes

Once the stack file is defined, it’s time to run the stack in the cloud! First we need to connect Docker Cloud to our cloud provider. For AWS this is easy: you create a role and a policy (see this walkthrough). For DigitalOcean it’s even easier: OAuth-based authorization, heck yeah! With this done, we can create nodes on our cloud platform of choice.
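Node clusters can be created from the CLI as well as the web UI. A sketch of the call is below; the cluster name, region and instance type are placeholders, docker_cloud is a stand-in stub so the example runs without an account (drop it to run for real), and you should check docker-cloud nodecluster create --help for the exact flags and argument order on your CLI version:

```shell
# Stand-in stub for the real `docker-cloud` binary, so this sketch runs
# without a Docker Cloud account; it just echoes the command it would run.
docker_cloud() { echo "would run: docker-cloud $*"; }

# Create a 2-node cluster (name, provider, region and size are placeholders):
docker_cloud nodecluster create -t 2 my-cluster aws us-east-1 t2.micro
```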

docker-cloud CLI

We are now ready to deploy our stack to a node cluster. We can do this via the docker-cloud CLI; to get it set up, see this document. You will need to set the following environment variables:

export DOCKERCLOUD_APIKEY=xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
export DOCKERCLOUD_USER=sabman
export DOCKERCLOUD_NAMESPACE=geografia
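With those variables set, deploying a stack file is roughly a single command. A sketch, assuming a stack file named docker-cloud.yml in the current directory; docker_cloud is a stand-in stub so the example runs without an account, and the flag names should be double-checked against docker-cloud stack up --help on your CLI version:

```shell
# Stand-in stub for the real `docker-cloud` binary; it echoes the command
# it would run so the sketch is runnable without an account.
docker_cloud() { echo "would run: docker-cloud $*"; }

# Create and start the stack from a stack file (names are placeholders):
docker_cloud stack up --name melbourne-population-production -f docker-cloud.yml
```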

We can now deploy a stack, monitor it and scale it. Below you can see our stack for the City of Melbourne’s Population website, which provides forecasts of the city’s residential population and dwellings to 2036.

$ docker-cloud stack ls
NAME                             UUID      STATUS     DEPLOYED
melbourne-population-production  3a579397  ▶ Running  55 days ago

We can inspect this stack:

$ docker-cloud stack inspect melbourne-population-production
{
  "current_num_containers": 3,
  "name": "melbourne-population-production",
  "synchronized": true,
  "destroyed_datetime": null,
  "uuid": "xxxxxx-xxxx-xxxx-a9d9-7613b69c66a9",
  "state": "Running",
  "services": [
    "/api/app/v1/geografia/service/xxxx-xxxx-d14398d00e81/",
    "/api/app/v1/geografia/service/xxxx-xxxx-8899c4622914/",
    "/api/app/v1/geografia/service/xxxx-xxxx-468387e5dac7/"
  ],
  "deployed_datetime": "Wed, 8 Feb 2017 09:32:12 +0000",
  "nickname": "melbourne-population-production",
  "dockercloud_action_uri": "",
  "resource_uri": "/api/app/v1/geografia/stack/xxxx/"
}
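Scaling is a one-liner too: docker-cloud service scale takes a service name (or UUID) and a target container count. A sketch, where app is a placeholder service name and docker_cloud is a stand-in stub so the example runs without an account:

```shell
# Stand-in stub for the real `docker-cloud` binary; it echoes the command
# it would run so the sketch is runnable without an account.
docker_cloud() { echo "would run: docker-cloud $*"; }

# Scale the app service up to 3 containers:
docker_cloud service scale app 3
```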

Related to this, Docker Cloud just released Swarm mode support (currently in beta). We’ll be giving it a spin for running our data analytics tasks; I’ll write more about that in a future post.

When things go wrong: tools for debugging deployments

Sometimes a deployment doesn’t work exactly as you expected. The docker-cloud CLI has some useful commands for figuring out what’s going wrong.

Let’s say I do a build and deploy, everything seems to go well, but when I visit the nginx service endpoint I see this:

oops what happened? let’s debug!

What do you have available to figure out the problem? The first thing to do is work out which service is the problem. We can list all our services with:

$ docker-cloud service ps
NAME   UUID  STATUS     #CONTAINERS  IMAGE                      DEPLOYED      PUBLIC DNS                          STACK
db     xxx   ▶ Running  1            mdillon/postgis:9.6        17 hours ago  db.app-name.xx.svc.dockerapp.io     app-name
app    xxx   ▶ Running  1            geografia/app-name:latest  17 hours ago  app.app-name.xx.svc.dockerapp.io    app-name
nginx  xxx   ▶ Running  1            nginx:1.11.9               17 hours ago  nginx.app-name.xx.svc.dockerapp.io  app-name

Next we can look at the logs for these:

$ docker-cloud service logs -t 100 xxx
app-1 | 2017-... The Gemfile's dependencies are satisfied
app-1 | 2017-... app-name_production already exists
app-1 | 2017-... pg_restore: [archiver (db)] connection to database "_production" failed: FATAL: database "_production" does not exist

We can see there’s a problem when running pg_restore. This gave us a clue as to where to look: in this case it turned out the rake task was using an environment variable that was not set.
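A cheap way to fail fast on this class of bug is to assert required environment variables at the top of the container entrypoint. A sketch, where the variable name POSTGRES_DB is illustrative (ours differed):

```shell
# Fail fast when a required variable is missing, instead of letting
# pg_restore blow up later on a database literally named "_production"
# (an empty variable interpolated into "${VAR}_production").
require_env() {
  eval "v=\${$1:-}"                      # POSIX-safe indirect lookup by name
  [ -n "$v" ] || { echo "ERROR: $1 must be set" >&2; return 1; }
}

POSTGRES_DB=app_name                     # illustrative value
require_env POSTGRES_DB && echo "restoring ${POSTGRES_DB}_production"
# prints "restoring app_name_production"
```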

If you have questions feel free to get in touch with me at shoaib@geografia.com.au.
