Continuous Delivery Archetypes

The purpose of this article is to highlight some of the most common and dependable deployment strategies used with Stackahoy.

We’ll cover:

  1. Basic Filesystem Deployment Pipeline
  2. Filesystem w/ CI Deployment Pipeline
  3. “Atomic” Strategy
  4. Configuration File Strategies
  5. Docker Deployment

Basic Filesystem Deployment Pipeline

While this is the most basic type of deployment, it’s also the fastest to deploy and the easiest to set up. This is a common setup for shared environments — generally LAMP stack applications on one or more servers. Think WordPress, Drupal, etc.

Basic Filesystem Deployment Pipeline Diagram


  • The quickest delivery possible (usually less than a second).
  • Cache and configuration files are listed in the .gitignore file so they are ignored and not removed on each deployment.
  • Third-party dependencies are never stored in the repository.
  • Configuration files are .gitignore’d and securely stored in Stackahoy as dynamic configuration files.
  • Capable of concurrent deployments behind a load balancer.
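
As a sketch, a .gitignore following the rules above might look like this (the paths are illustrative and will vary by framework):

```
# Cache files: regenerated by the app, never deployed from the repo
/cache/
/tmp/

# Third-party dependencies: installed on the server, not committed
/vendor/
/node_modules/

# Environment-specific configuration: delivered by Stackahoy instead
.env
config/local.php
```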

Filesystem w/ CI Deployment Pipeline

This is almost identical to the first pipeline flow, except with a proper continuous integration pipeline. This is the ideal approach, since the engineering team can rest easy knowing broken code isn’t being deployed.

Filesystem w/ CI Deployment Pipeline Diagram


  • CI software such as Travis CI, CircleCI, or GitLab CI is required.
  • Once the CI pipeline completes successfully, it can use the Stackahoy CLI tool to trigger the deployment. This keeps your CI DRY and decoupled from any server authentication or handshakes. If it’s a Docker runner, you can just use the official Stackahoy Docker image instead of installing the CLI tool explicitly.
  • Capable of concurrent deployments behind a load balancer.

“Atomic” Strategy

While Stackahoy’s file delivery mechanism is almost instant thanks to its diff algorithm, the window during a deployment is still a moment of instability and uncertainty. That window grows when post-deployment commands build frontend assets.

A viable solution is to implement “atomic” deployments. The term atomic here means that updates take effect only when the deployment is 100% complete.

This deployment can be set up in three steps:

Step One | Directory Structure

Provision your destination server with the following structure. Make sure the user you’re deploying as has read/write permissions on these directories.

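
A runnable sketch of the provisioning (the application root path is hypothetical, and the chown line is illustrative):

```shell
# Hypothetical application root; on a real server this might be /var/www/myapp.
APP_ROOT="$(pwd)/myapp"

# releases/ archives a timestamped copy of every deployment,
# .tmp/ is the staging directory Stackahoy deploys into, and
# live-app will become a symlink to the current release on the first deploy.
mkdir -p "$APP_ROOT/releases" "$APP_ROOT/.tmp"

# Make sure the deploy user owns the tree, e.g.:
#   chown -R deploy:deploy "$APP_ROOT"
```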

Step Two | Web Root

Set up your web server (nginx, Apache, etc.) to make ./live-app the web root. This will be the “hot” directory.
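
For example, an nginx server block with a hypothetical application root might look like the following (Apache’s DocumentRoot works the same way):

```nginx
server {
    listen 80;
    server_name example.com;

    # live-app is the symlink that gets repointed atomically on each deployment.
    root /var/www/myapp/live-app;
    index index.html index.php;
}
```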

Step Three | Configure Deployment Delivery

Within Stackahoy, set the Remote Working Directory to ./.tmp. This causes your deployment to happen discreetly without affecting what’s in the live directory.

Atomic Delivery Diagram

Now, here is where the magic happens. Place the following post-deployment commands in Stackahoy:

f=$(date +%Y-%m-%d-%H-%M-%S)
dest=$(realpath "$(pwd)/..")
# Archive the newly deployed files in the releases directory.
cp -R "$dest/.tmp/" "$dest/releases/$f"
# Update the symlink to point to the latest release (atomic).
ln -sfn "$dest/releases/$f" "$dest/live-app"
# Clean up.
rm -rf "$dest/.tmp/"*

Note: This assumes any cache directories or resources which need to persist are kept outside of the working directory.

If your application requires these types of resources, a little extra work is required to update the paths to those directories. This method also needs a little more disk space on your destination server, since you’ll be storing previous releases. The upside is that at any point the codebase can be reverted to a previous release. Essentially, you get cheap version control.
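
A rollback amounts to repointing the symlink. The sketch below simulates the layout in a throwaway directory; the release names and the five-release retention count are illustrative:

```shell
# Throwaway layout standing in for the real application root.
dest="$(pwd)/app"
mkdir -p "$dest/releases/2024-01-01-00-00-00" "$dest/releases/2024-01-02-00-00-00"
ln -sfn "$dest/releases/2024-01-02-00-00-00" "$dest/live-app"

# Roll back: atomically repoint live-app at an earlier release.
ln -sfn "$dest/releases/2024-01-01-00-00-00" "$dest/live-app"

# Optional retention: keep only the five most recent releases.
ls -1dt "$dest"/releases/* | tail -n +6 | xargs -r rm -rf
```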

Configuration File Strategies

There is no shortage of libraries in any language to handle the configurations for an application. When considering a production environment, the following should be true:

  • Sensitive credentials/keys should not be stored in any repository.
  • The environment should be replicable (e.g. DRY, immutable).
  • These sensitive items should be maintained in one place.

We recommend one of the following strategies for handling this:

  1. Immutable configuration files: By using Stackahoy’s encrypted dynamic configuration files feature, files are destroyed and recreated on every deployment. This also works great for Docker deployments, since you can mount the dynamic configuration file as a volume in the running container.
  2. Environment variables: This method depends on your hosting provider and/or other SaaS products. Some potential solutions are the AWS key store and setting environment variables in GCE.
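
As a small illustration of the first strategy, a configuration file delivered outside the repository can be loaded by a shell entrypoint like this (the file name and variables are hypothetical):

```shell
# Stand-in for a Stackahoy-delivered configuration file; never committed to git.
cat > app.env <<'EOF'
DB_HOST=db.internal
DB_PASS=example-secret
EOF

# Export every variable defined while sourcing the file.
set -a
. ./app.env
set +a

echo "connecting to $DB_HOST"   # prints: connecting to db.internal
```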

Docker Deployment

This is by far the most popular and powerful deployment pipeline. First take a look at the diagram, and we’ll break it down.

Docker Deployment Pipeline Diagram

This deployment requires a CI runner and a Docker registry. In a nutshell, the runner will test the code, build the image, then upload the image to the registry.

Stackahoy makes it easy to decouple and trigger the deployment by using the Stackahoy CLI tool. Here is a snippet from the deployment stage of a GitLab CI pipeline:

deploy:
  image: stackahoy/stackahoy-cli
  stage: deploy
  script:
    - stackahoy deploy -b production -r $REPO_ID -t $STACKAHOY_TOKEN
  only:
    - production

By using the official stackahoy/stackahoy-cli image to run the command, we don’t have to install the tool manually. Alternatively, you could just npm i -g stackahoy-cli.

If you’re using docker-compose, a simple way to handle the actual deployment is to run the following post-deployment commands:

# Pull the latest image that was created in the registry.
docker-compose pull
# (Re)create any changed containers.
docker-compose up -d

Alternatively, you could use Docker Swarm, Kubernetes, or a bash script for this stage.


This is a living document and will be updated as the technology evolves. Stay tuned for new serverless (Lambda & GCE Functions) features coming soon to Stackahoy as well!