Deploying (infinitely scalable) one-hour projects

I often write silly little one-hour projects, and I want to put them online for others to enjoy. Importantly, I don’t want these projects to cost me much. (I write way too many one-off projects for that!) So provisioning little virtual machines for each project is a non-starter. And while the right answer is probably to own one virtual machine and have all my projects share tenancy on it, I’ve had a few one-hour projects that actually gained some traction and needed some scalability built in.

These projects usually take one of two forms: static pages, which I can easily drop into my existing static website that lives in AWS S3, or a simple REST API with an accompanying frontend web app, in which case things are a little trickier.

In the latter case, I usually do the following — written here as much for your benefit as for mine, for when I inevitably forget it:

  • Deploy the project API to AWS Lambda using Zappa
  • Deploy the web app to S3, or, if I’m feeling particularly lazy, serve it from the Flask app using render_template and a static HTML page; a minimal sketch of this follows the list (note: at scale, that latter option costs slightly more, since you’re paying for a Lambda execution every time someone hits your landing page instead of just paying for S3 egress)
  • Set up continuous deployment
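
For concreteness, here is roughly what one of these projects looks like in code. The route names and template below are made up for illustration; the only piece Zappa really cares about is the Flask (WSGI) app object it wraps.

# app.py: a hypothetical one-hour project, sketched for illustration
from flask import Flask, jsonify, render_template

app = Flask(__name__)

@app.route("/api/hello")
def hello():
    # the "simple REST API" part, served from Lambda once deployed with Zappa
    return jsonify({"message": "hello!"})

@app.route("/")
def index():
    # the lazy option: render the static landing page (templates/index.html)
    # from Flask itself, paying for a Lambda execution per page view
    return render_template("index.html")

if __name__ == "__main__":
    app.run(debug=True)  # local testing only; Zappa handles serving in production

From there, the first deploy is a one-time, local zappa deploy production; the rest of this post is about keeping that deployment updated automatically.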

Continuous deployment with GitHub Actions

Continuous deployment is most easily achieved, in my opinion, by hacking together a Zappa deploy pipeline in your GitHub Actions workflow. The following workflow file, for example, will run Zappa from inside the GitHub Actions runner (even though it’s a bit clunky):

name: Deploy Production Website

on:
  # Trigger the workflow on push,
  # but only for the master branch
  push:
    branches:
      - master

jobs:
  build:
    runs-on: ubuntu-latest

    strategy:
      matrix:
        # any Python version supported by Zappa works here;
        # it determines the Lambda runtime the app is deployed to
        python-version: ["3.8"]

    steps:
    - uses: actions/checkout@v1

    - name: Set up Python ${{ matrix.python-version }}
      uses: actions/setup-python@v2
      with:
        python-version: ${{ matrix.python-version }}
    
    - name: Install dependencies
      run: |
        python -m pip install --upgrade pip
        pip install virtualenv
        # Zappa must run from inside an activated virtual environment
        virtualenv venv_zappa
        source venv_zappa/bin/activate
        if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
        pip install zappa
    - name: Deploy Zappa
      run: |
        echo '${{ secrets.ZAPPA_CONFIG }}' > zappa_settings.json
        source venv_zappa/bin/activate
        # assumes the production stage already exists from an initial, local zappa deploy
        zappa update production
      env:
        AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
        AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        AWS_REGION: us-east-1

Secret management

The only remaining setup required is to add the following secrets to your repository settings (https://github.com/:user/:repo/settings/secrets):

  • ZAPPA_CONFIG: This is a one-line version of your zappa_settings.json file (a sketch follows this list). Remove the profile name, if you included one, since credentials are provided to Zappa through the workflow’s environment variables instead.
  • AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY: I create a standalone IAM user with minimal permissions (the minimum IAM privileges required by Zappa), so that even if these credentials were somehow compromised, they wouldn’t grant an attacker broad access to my AWS account
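
For reference, the zappa_settings.json that ends up in ZAPPA_CONFIG can be tiny. A sketch, with a made-up project name and bucket, which you would minify onto a single line before pasting it into the secret:

{
    "production": {
        "app_function": "app.app",
        "aws_region": "us-east-1",
        "project_name": "my-one-hour-project",
        "runtime": "python3.8",
        "s3_bucket": "my-zappa-deployments"
    }
}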

Now, pushes to master will automatically deploy your website; for “real” applications, I’d highly recommend adding a PR reviewer requirement and perhaps even tests (!!) so that you don’t accidentally merge broken code and bork your production deploy. Although at least in my case, breaking one of my toys usually has little bearing on anyone besides myself :)
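
If you do want that safety net, even a single smoke test run (with pytest, say) before the deploy step goes a long way. A sketch, written against the hypothetical app from earlier:

# test_app.py: a minimal smoke test for the hypothetical app above
from app import app

def test_hello_endpoint():
    # Flask's built-in test client exercises the route without a running server
    client = app.test_client()
    response = client.get("/api/hello")
    assert response.status_code == 200
    assert response.get_json() == {"message": "hello!"}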

Written on January 7, 2022
Comments? Let's chat on Twitter!