Reducing Cloud Costs: Deploying SSL-Enabled Web Apps to Single Instance EBS Environments

Austin Hunt
18 min read · Sep 5, 2024


This article is for anyone aiming to minimize AWS EBS costs and looking for instructions on setting up a single instance EBS web server environment that can be served over HTTPS with valid SSL certs. Instructions are out there, because I found them, but they’re not all in one place. Well, they weren’t. I aim to change that here.

Don’t get me wrong, AWS is a fantastic tool and streamlines so much of the heavy lifting when it comes to deploying cloud infrastructure. However, if you’ve spent any time with development, or, well, anything, you know that convenience always comes with a cost — that could be money, could be time, could even be your physical well-being. I’d like to give a shoutout to my sponsor, gas station pizza.

If brevity is the soul of wit, then frugality is the soul of grit. Either because you’re gonna be gritting your teeth out of stress from this DIY process or because it takes grit to build rather than buy your solutions. Oh! Or because grits are cheap. You decide.

Deploying an Elastic Beanstalk environment can realistically be done within a few minutes or less, straight from the GUI. Here’s the drill:

  1. Open EBS.
  2. Click “Create environment”.
  3. Choose your tier (web server or worker).
  4. Provide an app name.
  5. Provide an environment name and description. Don’t be afraid to be verbose.

Verbosity in tech is like cardio for lifters. We all know it’s valuable, but a lot of us ignore it.

Feel free to quote me on that.

6. Choose your platform (e.g., Python or Node.JS).

7. Upload your own source code or choose a sample app.

8. Here’s the kicker for this article: choose Single instance or High Availability as your configuration preset.

Let’s stop here and make sure we understand what we’re talking about when we say Single Instance versus High Availability. If you already know, you can skip ahead to the smarty pants section.

Spot the Difference: Single Instance vs. High Availability

In EBS, the terms single instance and high availability (load balanced) refer to different deployment configurations, each with its own use cases and characteristics:

Single Instance

In a single instance environment, a single EC2 instance runs your app, riding solo like Jason Derulo. This is the simplest and most cost-effective setup. It’s also the focal point of this article. Common use cases for single instance include development, testing, or staging environments where uptime is not critical, and low-traffic applications that don’t require redundancy. Or maybe your app is actually high-traffic and high-impact with super strict uptime requirements but you’re super frugal and want to try reducing your costs as an experiment. I admire your drive. I also fear it.

The cost, of course, is much lower with single instance, but the risk is higher. There’s no auto-scaling or load balancing, and if the one instance goes down… you got it, so does your app.

Pay less, stress more. Courtesy of the universe’s love for tradeoffs. Side effects may include gray hair, higher blood pressure, and a thicker wallet (yeah, I know, cash is dying, just like this turn of phrase).

High Availability (Load Balanced)

With a high availability environment, multiple EC2 instances share the load of running your app while a load balancer, specifically an elastic load balancer (ELB), sits out front like a beefy bouncer — but instead of looking tough and blocking the entrance, it distributes incoming traffic across the EC2 instances.

“Losers go through that door. People who use dark theme go through the other one.” ;)

You know, routing. Remember kids, don’t use fake IPs.

Elastic Beanstalk basically handles this whole architecture setup automatically, and it’s great for production environments where uptime and reliability are critical, and for apps with fluctuating traffic that need to scale in and out automatically (heyoo, elasticity!).

Of course, more architecture (multiple EC2 instances plus the ELB) means more money, but it’s also much more robust, more scalable, more available and reliable since it’s more fault-tolerant.

To tie back into the big picture, setting up SSL with high availability (HA) environments is much easier than with single instance environments.

but why?

With HA, you can simply use the GUI to attach listeners to the ELB in front of your EC2 instances. What are listeners? Listeners are AWS components that you can really pour your feelings out to on lonely nights. They also process incoming traffic using a defined protocol and port, and they route that traffic to the right place behind the load balancer. If you specify a secure protocol for a listener, e.g., HTTPS, you just need to configure that listener with an SSL cert. Such a cert can be easily generated using the AWS Certificate Manager (ACM), fully in the GUI. Once generated, it shows up as an option in the SSL certificate dropdown menu shown in the screenshot below.

Adding a port 443 listener that uses the HTTPS protocol for routing traffic. EZPZ.

We’re not going to get into the SSL Policy selection since it’s not too relevant here, but if you’re curious, in a nutshell, this policy governs the types of encryption algorithms (cipher suites) and protocols (such as TLS 1.2, 1.3) that the load balancer uses to secure connections.

The key note here: With HA, you only need to manage the SSL certificate at the load balancer level, right from the GUI, and the load balancer will handle the encryption and decryption of traffic before it reaches your EC2 instances. The setup is straightforward, and AWS automates many aspects, including the dreaded routine of SSL certificate renewal.
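To make that GUI step concrete, here is a hedged sketch of what the console is doing under the hood when you attach a 443 listener with an ACM cert: it is roughly equivalent to one `elbv2.create_listener` call. All three ARNs below are made-up placeholders (substitute your own), and the boto3 call itself is commented out since it requires AWS credentials.

```python
# Sketch only: building the create_listener call that attaches an HTTPS:443
# listener to an Application Load Balancer, mirroring the GUI steps above.
# All three ARNs are placeholders -- substitute your own real ARNs.

def https_listener_params(lb_arn, target_group_arn, cert_arn):
    """Kwargs for boto3's elbv2.create_listener: HTTPS on 443 with an ACM cert."""
    return {
        "LoadBalancerArn": lb_arn,
        "Protocol": "HTTPS",
        "Port": 443,
        "Certificates": [{"CertificateArn": cert_arn}],
        "DefaultActions": [{"Type": "forward", "TargetGroupArn": target_group_arn}],
    }

params = https_listener_params(
    "arn:aws:elasticloadbalancing:us-east-2:123456789012:loadbalancer/app/my-lb/abc123",
    "arn:aws:elasticloadbalancing:us-east-2:123456789012:targetgroup/my-tg/def456",
    "arn:aws:acm:us-east-2:123456789012:certificate/not-a-real-cert-id",
)

# With boto3 installed and credentials configured, the real call would be:
# import boto3
# boto3.client("elbv2").create_listener(**params)
print(params["Protocol"], params["Port"])
```

The point of the sketch is just that, in HA mode, SSL is one API call at the load balancer level — nothing ever has to change on the instances themselves.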

Now let’s get to the meat and potatoes. The holy how-to. The make-it-work dark arts. The scrumdiddlyumptious copyable code. You’ve held your horses long enough.

Setting Up SSL on a Single Instance Web Server Environment

We know that a single instance environment has no load balancer; without a load balancer, we have no listeners, and without our listeners, the SSL certificate needs to be installed and configured directly on the EC2 instance.

This requires manual configuration within the web server (e.g., Nginx, Apache) on the instance. IMPORTANT NOTE: You also need to ensure the certificate is kept up to date, and if you ever need to scale to multiple instances, you’ll have to manually replicate the SSL setup across each instance.

For me, knowing the high level architecture and the flow of incoming traffic is valuable in understanding, troubleshooting, and fixing web systems. So, let’s define our goal architecture and flow.

  1. Client C makes request R1 to http://yourdomain.com.
  2. yourdomain.com resolves to an IP of a single instance EBS environment via a Route 53 (R53) DNS record. This EBS environment is really just an EC2 instance E.
  3. Request R1 hits instance E on its external port 80 (which serves http:// requests).
  4. A web server (generally Nginx or Apache2) runs on instance E and is configured with a port 80 listener, which simply redirects the non-secure request R1 to the more secure https://yourdomain.com.
  5. A new secure request R2 once again reaches instance E on its external port 443 (which serves https:// requests).
  6. The same web server is configured with a secure port 443 listener that actually serves requests rather than redirecting. The 443 listener serves the traffic securely using 1) an SSL public certificate, 2) an SSL private key, 3) other SSL configuration for protocols and cipher suites, and most importantly 4) an internal address (AKA an “upstream”) to which we want to send request R2. In short, in the same way we would have added listeners to the load balancer in the AWS GUI for HA environments, we have listeners configured in our web server config file.
  7. Request R2 flows through the 443 listener to the upstream (your actual code), which handles that request however it needs to and sends a secure response back through the pipe to the client.

BIG CONCEPT: One of the most valuable and reusable concepts to be familiar with in the realm of web development is that of the reverse proxy. In this situation, the web server on the EC2 instance plays the role of a reverse proxy in that it relays requests from the external EC2 instance port 443 to the internal port on which your primary process (web app) runs. In this article, we’re focusing on two specific platforms: Node.JS and Python (used for Django apps), so the internal ports are respectively 8080 for Node.JS and 8000 for Python. To be honest, this was a difficult piece of information to find, but very important. You have to know where that internal process listens if you want any requests reaching it. Of course, there’s always the option of explicitly setting the port on which your Node.JS or Python app runs and then using that port for upstream definition, but these are the defaults.
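If you’d rather not rely on memorizing those defaults, you can resolve the upstream port explicitly in your app. Here’s a minimal sketch (my own helper, not an AWS API) that prefers a `PORT` environment variable, which recent Elastic Beanstalk platforms provide, and falls back to the platform default otherwise:

```python
import os

def resolve_upstream_port(environ=None, default=8000):
    """Port the app process should bind: prefer the PORT environment
    variable if present, else fall back to the platform default --
    8000 for Python/Django, 8080 for Node.JS."""
    environ = os.environ if environ is None else environ
    return int(environ.get("PORT", default))

print(resolve_upstream_port({}))                # Django-style fallback
print(resolve_upstream_port({"PORT": "8080"}))  # explicit override, Node.JS-style
```

Whatever port this resolves to is the port your nginx upstream definition (coming up in Step 5) must point at.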

Step 0. I Make Assumptions About Your Project.

Yeah, very programmer of me to zero-index my instructions. If I’m being candid, I added this step last because I forgot to include it and I don’t want to renumber everything. Laziness disguised as efficiency is a beautiful thing. I know what they say about assumptions, but I’m going to make some here.

  1. You have already created an EBS Single Instance web server environment with EITHER the Node.JS platform OR the Python platform. If you have not done this, you can simply deploy one using the sample code that AWS provides. That code will be replaced later. We just need the architecture in place for the remaining setup.
  2. You already have an AWS Route53 DNS entry, e.g., yourdomain.com, pointing at that specific EBS environment.
  3. On the security group assigned to your EBS environment (if unfamiliar, security groups are basically firewall rules), you have inbound rules that allow both port 80 HTTP and port 443 HTTPS traffic. See Step 3: Configure security group inbound rules using the AWS Management Console for more information.

Step 1. Create your SSL Certificate and Private Key with certbot.

First things first, you need an SSL cert and private key. You can refer to John Rix’s article “Automating DNS-challenge based LetsEncrypt certificates with AWS Route 53” for detailed instructions on doing this.

If you don’t want to hop to another article, that’s okay too. In short, we’re using a command line client called certbot to verify domain ownership and generate the SSL cert and key files.

  1. Install the AWS CLI
  2. Install certbot as well as the certbot-dns-route53 plugin (you can use pip install certbot-dns-route53 for the plugin)
  3. Create a non-root AWS user called certbot-r53-dns with the AWS Identity and Access Management (IAM) service. This username indicates the purpose of the user.
  4. Create and attach the following policy to that non-root user you just created so that it has the ability to handle domain verification. Give the policy the same name, certbot-r53-dns. For the description, you can use: Allow the certbot-r53-dns user to manage R53 domain verification for SSL certificate generation.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "route53:GetChange",
        "route53:ListHostedZones",
        "route53:ChangeResourceRecordSets"
      ],
      "Resource": "*"
    }
  ]
}

5. You should have a user with an attached policy that looks like this.

6. Go to the Security Credentials tab for the certbot-r53-dns user and create a new access key. You’ll use this to configure your local AWS CLI to communicate with AWS as this user. Choose CLI as the use case. Download the CSV with your access key and secret access key when prompted.

7. You now need to run the certbot command and also provide it with the specific AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to use (corresponding to the certbot-r53-dns user). You could either run aws configure and provide them that way or you can set them as environment variables for your current terminal session. We’re going to use environment variables for this example. Copy the following code but replace the relevant portions (<…>) with your own values.

sudo \
AWS_ACCESS_KEY_ID=<copy from your downloaded csv> \
AWS_SECRET_ACCESS_KEY=<copy from your downloaded csv> \
certbot certonly --dns-route53 -d <yourdomain.com>

You should see output similar to the following:

Successfully received certificate.
Certificate is saved at: /etc/letsencrypt/live/studyrocket.ai-0002/fullchain.pem
Key is saved at: /etc/letsencrypt/live/studyrocket.ai-0002/privkey.pem
This certificate expires on 2024-12-03.
These files will be updated when the certificate renews.

NEXT STEPS:
- The certificate will need to be renewed before it expires. Certbot can automatically renew the certificate in the background, but you may need to take steps to enable that functionality. See https://certbot.org/renewal-setup for instructions.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
If you like Certbot, please consider supporting our work by:
* Donating to ISRG / Let's Encrypt: https://letsencrypt.org/donate
* Donating to EFF: https://eff.org/donate-le
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Now that you have those files on your local machine, let’s do something cool. We’re going to put them in an S3 bucket that your EC2 instance can read from so that they can be “mounted”, ultimately, to your web server whenever it deploys.

Funny thing here is that those files created by certbot are owned by root so you need to use sudo to copy them and then change the ownership of those files to whatever your normal username is.

austinhunt@Austins-MBP-2 Desktop % sudo cp /etc/letsencrypt/live/studyrocket.ai-0002/{fullchain,privkey}.pem .
Password:
austinhunt@Austins-MBP-2 Desktop % sudo chown austinhunt {fullchain,privkey}.pem
austinhunt@Austins-MBP-2 Desktop % ls -lah *.pem
-rw-r--r-- 1 austinhunt staff 2.8K Sep 5 11:18 fullchain.pem
-rw------- 1 austinhunt staff 241B Sep 5 11:18 privkey.pem

The files are prepared, m’lord.

Step 2. Upload your SSL cert and key to an S3 bucket.

Now, we’re going to create an S3 bucket with the following settings:

  • Block all public access (definitely don’t want public access on a bucket that is storing SSL files).
  • ACLs disabled, bucket-owner preferred.
  • Server-side encryption with Amazon S3 managed keys (SSE-S3)
  • Bucket key enabled

You could either create one bucket specifically for SSL files (probably best to do this) or you could store your SSL files in a subfolder of a shared bucket, assuming the bucket has the above settings.

In our case, we have one bucket called studyrocket that we’re using for multiple purposes, with the above settings configured, and we have an ssl folder in that bucket where we uploaded our SSL certificate and private key.

Once you have your bucket created, upload your SSL files to the desired location in that bucket, nested folder or otherwise.
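If you’d rather script the upload than click through the console, here’s a sketch. The helper mapping files to keys is my own, the bucket name "studyrocket" is just this article’s example, and the actual boto3 calls are commented out since they need credentials:

```python
# Sketch: mapping local SSL files to their destination keys under the
# ssl/ prefix, then (commented out) uploading with SSE-S3 encryption.

def ssl_upload_plan(files=("fullchain.pem", "privkey.pem"), prefix="ssl"):
    """Return {local filename: S3 object key} for each SSL file."""
    return {name: f"{prefix}/{name}" for name in files}

plan = ssl_upload_plan()
# With boto3 and credentials configured, the upload would look like:
# import boto3
# s3 = boto3.client("s3")
# for local_name, key in plan.items():
#     s3.upload_file(local_name, "studyrocket", key,
#                    ExtraArgs={"ServerSideEncryption": "AES256"})  # SSE-S3
for local_name, key in plan.items():
    print(local_name, "->", key)
```

The `ssl/` prefix here matches the S3 URLs we’ll reference later in the .ebextensions config, so keep them consistent if you change it.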

Step 3. Create a custom IAM role for your EC2 instance with Read Access to Your SSL S3 Bucket.

When you run an EBS environment, the EC2 instance(s) within that environment run with something called an instance profile, which is just an assignment of an IAM role to that instance such that the instance has the permissions defined by that role.

For example, for studyrocket.ai, our production frontend EC2 instance runs with the following IAM role named aws-elasticbeanstalk-ec2-role-custom.

EBS EC2 instance running with IAM role aws-elasticbeanstalk-ec2-role-custom

This role, in addition to including the base permission policies needed for regular Elastic Beanstalk functionality (AWSElasticBeanstalkMulticontainerDocker, AWSElasticBeanstalkService, AWSElasticBeanstalkWebTier, AWSElasticBeanstalkWorkerTier), also includes an additional custom permission policy allowing read operations on the studyrocket S3 bucket (and objects within it) which contains our SSL certs. The policy, seen below, is aptly named studyrocket-s3-read.

Note, we actually added another policy to our custom EC2 role granting read and write access to that same bucket because our app uses it for other app-related features in addition to mounting SSL files from it.

studyrocket-s3-read permission policy allowing read access to the studyrocket.ai S3 bucket and objects within

If you already created a single instance EBS environment, you’ll notice it already has a default EC2 role associated with it. If that’s the case for you, you can simply add one more custom policy to that role to give your EC2 instance access to your S3 bucket. The policy JSON would look like:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowReadS3Bucket<YourBucketName>",
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::<YourBucketName>/*"
    }
  ]
}

You’d replace <YourBucketName> with the name of the S3 bucket in which you stored your SSL files.
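If you want to generate that policy JSON rather than hand-edit it, a small helper works. This is purely a convenience sketch of my own; the policy shape is exactly the one shown above:

```python
import json

def s3_read_policy(bucket_name):
    """Render the Step 3 read-only S3 policy for a given bucket.
    Note: IAM Sid values may only contain alphanumerics, so if your
    bucket name has dots or dashes, strip them from (or drop) the Sid."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": f"AllowReadS3Bucket{bucket_name}",
                "Effect": "Allow",
                "Action": "s3:GetObject",
                "Resource": f"arn:aws:s3:::{bucket_name}/*",
            }
        ],
    }

# Print ready-to-paste JSON for the IAM console's policy editor
print(json.dumps(s3_read_policy("studyrocket"), indent=2))
```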

If you don’t already have an EBS environment created with an EC2 role, the easiest thing to do is to create a single instance environment and let that role auto-create, then edit that role and add the policy described above.

Once you’ve done that, your EC2 instance has access to your SSL files. Now we just need to actually set it up to use that access.

Step 4. Use .ebextensions to customize deployment configuration of your environment.

If you haven’t used .ebextensions, this is a folder you can add in the root of your project that you are deploying to Elastic Beanstalk. It’s used for environment-specific configuration during the deployment process. To be concise, you can add YAML-formatted *.config files to this folder to customize things like environment variables, IAM roles, software installations, startup commands, and even the automatic download of files sourced from S3. Booyah.

You can pretty much use the full configuration file below. Be sure the file name ends with .config and that it lives inside the .ebextensions folder.

# https-single-instance.config

# Configuration that allows a single instance EBS environment to
# house its own SSL certificates that are actually stored in S3 (must be saved there ahead of
# time). This config also provisions a security group to allow inbound 443 (HTTPS) traffic,
# and an S3 authentication method that specifies a pre-created IAM role name that has access to
# read the specific S3 bucket where the SSL cert and key are stored.

# Documentation: https://gist.github.com/the-vampiire/489299336200659a8f96cb6f2d593b64?permalink_comment_id=3242847
# AWS resources to be provisioned for the EB environment
Resources:
  # define a SG inbound rule that allows HTTPS traffic to the EB instance
  sslSecurityGroupIngress:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      # added to the EB environment SG
      GroupId: { "Fn::GetAtt": ["AWSEBSecurityGroup", "GroupId"] }
      IpProtocol: tcp
      # to (instance) and from (internet) TCP port 443
      ToPort: 443
      FromPort: 443
      CidrIp: 0.0.0.0/0

  AWSEBAutoScalingGroup:
    Metadata:
      AWS::CloudFormation::Authentication:
        S3Auth:
          type: "s3"
          # CHANGEME: Use your bucket name storing your SSL files
          buckets: ["BUCKET"]
          roleName:
            "Fn::GetOptionSetting":
              Namespace: "aws:autoscaling:launchconfiguration"
              OptionName: "IamInstanceProfile"
              # CHANGEME: Use the name of the role assigned to your EC2 instance
              DefaultValue: "aws-elasticbeanstalk-ec2-role-custom"

packages:
  yum:
    mod_ssl: []

files:
  # CHANGEME: change YOURDOMAIN.COM to your own app's domain (really just a folder name, could be anything)
  /etc/letsencrypt/live/YOURDOMAIN.COM/fullchain.pem:
    mode: "000400"
    owner: root
    group: root
    # CHANGEME: replace BUCKET and REGION (and the name of fullchain.pem if you renamed it)
    source: 'https://BUCKET.s3.REGION.amazonaws.com/ssl/fullchain.pem'
    authentication: S3Auth

  # CHANGEME: change YOURDOMAIN.COM to your own app's domain (really just a folder name, could be anything)
  /etc/letsencrypt/live/YOURDOMAIN.COM/privkey.pem:
    mode: "000400"
    owner: root
    group: root
    # CHANGEME: replace BUCKET and REGION (and the name of privkey.pem if you renamed it)
    source: 'https://BUCKET.s3.REGION.amazonaws.com/ssl/privkey.pem'
    authentication: S3Auth

# This is important. On startup, we want the container to a) start the nginx web server, and b) reload it.
container_commands:
  01_start_nginx:
    command: "systemctl start nginx"
  02_reload_nginx:
    command: "systemctl reload nginx"

When you deploy your code with .ebextensions/https-single-instance.config containing the configuration above, you are effectively mounting those S3 SSL files into your EC2 instance (really it’s downloading them to the specified /etc/ paths). So, that means once the nginx web server is started and reloaded, those SSL files are present in the local file system.

This means the next step is configuring nginx to use those SSL files for the secure port 443 listener. On to step 5.

Step 5. Extend your Environment Platform with .platform/nginx

If you’re not familiar, the .platform directory (which you should also create in the root of your project, similar to the .ebextensions folder) is used to customize the Elastic Beanstalk platform itself rather than just the environment or application settings.

You can read more about extending your AWS EBS platform in the official AWS docs, but ultimately all we’re going to do is use a .platform/nginx folder in our own source code to overwrite the default configuration of the nginx web server. We know nginx runs by default on the instance as a reverse proxy on the Python and Node.JS platforms. Why do we need to overwrite the config? Ours is cooler, duh. That and the default nginx web server doesn’t have a secure port 443 listener, just a non-secure port 80 listener, with no SSL. We have to add one.

Here are the files you need to add including their paths and their content.

.platform/nginx/nginx.conf — overwrites the main nginx configuration file. This defines, among other things, a port 80 listener that just redirects requests from http to https.

# Elastic Beanstalk Nginx Configuration File

user nginx;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
worker_processes auto;
worker_rlimit_nofile 200000;

events {
    worker_connections 1024;
}

http {
    server_tokens off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    # THIS WILL TELL NGINX TO INCLUDE OUR NEXT conf.d/https.conf FILE
    include conf.d/*.conf;

    map $http_upgrade $connection_upgrade {
        default "upgrade";
    }

    server {
        listen 80 default_server;

        # redirect to https and maintain the requested URL
        return 301 https://$host$request_uri;

        access_log /var/log/nginx/access.log main;

        client_header_timeout 60;
        client_body_timeout 60;
        keepalive_timeout 60;
        gzip off;
        gzip_comp_level 4;
        gzip_types text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript;
    }
}

.platform/nginx/conf.d/https.conf — defines a secure port 443 listener that references the S3-downloaded SSL cert and key files and functions as a reverse proxy by sending traffic to the “upstream” which is your app (localhost:8080 for Node.JS or localhost:8000 for Django). Note that the communication between the client and the 443 listener is encrypted (https) but the communication between the 443 listener and the upstream is not (http).

# Nginx Configuration
# CHANGEME - use your own app name. or just use "app", really doesn't matter.
# just has to match the upstream reference in the proxy_pass directive below.
upstream studyrocket {
    # IF THE APP IS DJANGO, USE PORT 8000
    # IF THE APP PLATFORM IS NODE.JS USE PORT 8080
    server localhost:8000; # DJANGO EXAMPLE
}

# HTTPS server
server {
    listen 443 ssl;

    # CHANGEME - use your own application domain name (we use *.studyrocket.ai to include www and api subdomains)
    server_name studyrocket.ai *.studyrocket.ai;

    # CHANGEME - these SSL cert and key paths should match the respective paths
    # that you defined in the .ebextensions/https-single-instance.config file.
    # These point to the local files on the EC2 file system after they are downloaded
    # during the environment deployment.
    ssl_certificate /etc/letsencrypt/live/studyrocket.ai/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/studyrocket.ai/privkey.pem;

    ssl_session_timeout 5m;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;

    access_log /var/log/nginx/access-443.log main;
    error_log /var/log/nginx/error-443.log error;

    location / {
        # CHANGEME - value of proxy_pass should be http://
        # followed by the name of your upstream (above)
        proxy_pass http://studyrocket;
        proxy_set_header Connection "";
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}

Step 6. Deploy that bad boy.

Assuming your local development environment for your app is in a running, tested, deployable state, it’s time to send your idea to the cloud.

My friend Ayman and I are huge fans of automating deployments (and other things, like PR description generation with LLMs) with GitHub Actions, so I’m going to provide a very simple GitHub Action workflow definition below that you can copy into your own project to deploy your app.

If you haven’t used GitHub Actions before for any of your GitHub projects, I highly recommend trying it out. They let you define automation workflows with YAML files that can be triggered by all sorts of things in GitHub: pull requests, merges, updates, comments, new labels, and a ton more.

For this situation, we want to achieve the following goal: when we add a ‘deploy’ label to a pull request, we want to deploy that pull request code to our target EBS environment. In order to do this, the EBS environment and its parent application need to be created already (even if it’s just running the sample code) so that we can target them by name.

Here is the workflow.

.github/workflows/deploy-prod-ebs-on-label.yml — workflow to deploy pull request code to a target EBS environment / application on addition of “deploy” label to a pull request.

name: Deploy to EBS Environment when Label is Added

on:
  pull_request:
    types: [labeled]

jobs:
  deploy:
    # only run when the added label is 'deploy'
    if: ${{ github.event.label.name == 'deploy' }}

    runs-on: ubuntu-latest
    steps:
      - name: Checkout source code
        uses: actions/checkout@v2

      - name: Generate deployment package
        run: zip -r deploy.zip . -x '*.git*'

      - name: Deploy to EB
        uses: einaregilsson/beanstalk-deploy@v21
        with:
          aws_access_key: ${{ secrets.DEPLOYER_AWS_ACCESS_KEY_ID }}
          aws_secret_key: ${{ secrets.DEPLOYER_AWS_SECRET_ACCESS_KEY }}
          application_name: ${{ secrets.EBS_APPLICATION_NAME }}
          environment_name: ${{ secrets.EBS_PRODUCTION_ENVIRONMENT_NAME }}
          # include the commit SHA so each deployment gets a unique version label
          version_label: ${{ secrets.EBS_PRODUCTION_ENVIRONMENT_NAME }}-${{ github.sha }}
          region: ${{ secrets.AWS_REGION }}
          deployment_package: deploy.zip

Note that we don’t have anything hardcoded in our workflow file, and we are pulling all of the relevant values from the repository’s secrets. It is critical that you use secrets here and that you do not expose your AWS information, particularly your AWS credentials (DEPLOYER_AWS_ACCESS_KEY_ID, DEPLOYER_AWS_SECRET_ACCESS_KEY). I’m including DEPLOYER_ in the secret name so that you know specifically what the key pair is used for.

You may be wondering, which AWS credentials should I use? Great question. The best thing to do here is to create another AWS IAM user called <your app name>-deployer, e.g., for us, studyrocket-deployer. Once you do that, you should add this prebuilt AWS-managed policy to that user: AdministratorAccess-AWSElasticBeanstalk. Then, as usual, open the Security Credentials tab and generate an access key pair. That’s what you’ll store in your repository secrets.
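For the scripting-inclined, the same user setup can be sketched with boto3. The naming helper is my own convention from the text; the IAM calls are commented out because they need credentials that can manage IAM:

```python
# Sketch: creating the deployer IAM user and attaching the AWS-managed
# Elastic Beanstalk admin policy mentioned above.

EB_ADMIN_POLICY_ARN = "arn:aws:iam::aws:policy/AdministratorAccess-AWSElasticBeanstalk"

def deployer_user_name(app_name):
    """Follow the <your app name>-deployer convention from the text."""
    return f"{app_name}-deployer"

user = deployer_user_name("studyrocket")
# With boto3 and IAM-privileged credentials, the real calls would be:
# import boto3
# iam = boto3.client("iam")
# iam.create_user(UserName=user)
# iam.attach_user_policy(UserName=user, PolicyArn=EB_ADMIN_POLICY_ARN)
# access_key = iam.create_access_key(UserName=user)["AccessKey"]
# then store AccessKeyId / SecretAccessKey in your repository secrets
print(user)
```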

I’ll write another article soon about the auto-deployment of SSL-enabled preview environments with dynamic version-based subdomains (e.g., pr32.yourapp.com for previewing pull request 32), which would involve also adding the AmazonRoute53FullAccess policy to the deployer user. No need here though.

Closing Remarks (via Opening PRs)

Once you’ve added the GitHub Action workflow file to the .github/workflows folder in the root of your repo, open a new pull request, make sure you have populated values for ALL of those referenced secrets, and add a new label to the PR called “deploy” (you can change this if you want). At this point, it’s largely a matter of monitoring the “Checks” tab and watching the logs of your workflow unfold. You may encounter errors, but that is the nature of new automations.

I encourage you to share what you encounter in the comments of this article! Unless you encounter something scary — please don’t share that, I code in the dark.
