
Proxying Multiple SSL Endpoints with Apache

I have an AWS ec2 instance running multiple web apps for clients. I don't want to run a separate web server for each one, so I need to figure out a proxy solution where I'm just running a single web server that will forward requests to the individual apps. The diagram below illustrates what I'd like to get to.

It's worth noting that this doesn't perform any kind of load balancing or provide redundancy. It's just a simple routing of requests from a single node to another node running the requested service.

[Diagram: the proxy accepts traffic on ports 80 and 443 and forwards it to each app on its own port]

In the diagram, assume that app2 is an Astro.js based application running under the myfancywebapp.com domain name. In this design, the proxy will accept traffic on ports 80 and 443, but then forward it to the app running on port 8002 (port numbers are in red). I'm configuring this all on a single host, but the apps could be running on different servers if the load and/or memory requirements necessitated it.

In order for the proxy to accept requests for https://myfancywebapp.com on port 443, the proxy must be configured as if it were the endpoint for the request. The requester from the Internet and the proxy web server will do the TLS handshake, with the proxy presenting its certificate, and agree to send traffic back and forth over an encrypted connection.

That means the proxy must have the signed SSL certificates for all of its endpoints, noted in green. Personally I use letsencrypt for certificates, but you can also purchase certs from different vendors that might offer additional services or guarantees.

An Example

To see what this looks like in practice, I have a staging/dev site for a client called "alrie.pattonwebconcepts.com".

I need to configure my proxy so that the client can see what their site will look like and to debug any issues in my deployment pipeline.


          <VirtualHost *:80>
            ServerAdmin erik@erikpatton.com
            ServerName alrie.pattonwebconcepts.com
            DocumentRoot "/var/www/alrie"
            RewriteEngine on
            RewriteCond %{SERVER_NAME} =alrie.pattonwebconcepts.com
            RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,NE,R=permanent]
          </VirtualHost>
          

The first snippet shows a section of the httpd.conf file that accepts requests for alrie.pattonwebconcepts.com on port 80. There are a couple of things to note in this config.

The first is that DocumentRoot is defined, but it's not actually used. That's because our apps have their own HTTP listeners running on their own port. They're not static HTML, PHP or CGI scripts hanging around on a file system.

What that means for the proxy is that instead of trying to read content off the disk from the location of DocumentRoot, we're telling it to send the request to someone else who will provide the actual HTML.

The only reason to specify DocumentRoot is really to make the configuration more self-documenting. We need to install our app somewhere, so by placing it in DocumentRoot we can look at the Apache config and know where the code is, even if Apache isn't using it.

Docker enters the room

However, having our app installed directly on the web server makes updates messy. The app must be stopped, old code removed, new code copied in, and the app restarted. Back in the old days, I used a combination of ssh, rsync, and bash to do this. And God forbid something were broken, I'd have to undo the update and put the old one back.

A much cleaner way to do this now is to put your app in a container. Code updates happen by simply stopping the old container and starting the new one. The only requirement for the container is that you have to expose the port the app is using. And to make that process even easier, I have a GitHub action that takes the code, runs error checks, builds a new container, stops the old one, and starts the new one.
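
Done by hand, that swap is roughly the following sketch (the container name matches mine, but the registry and image tag are hypothetical; my GitHub action does the equivalent automatically):

          # pull the freshly built image (registry and tag are hypothetical)
          docker pull registry.example.com/alrie:latest
          # stop and remove the running container
          docker stop alrie && docker rm alrie
          # start the new one, publishing the app's port on the host
          docker run -d --name alrie -p 8085:8085 registry.example.com/alrie:latest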


          # start from the current Node.js LTS image
          FROM node:lts AS runtime
          WORKDIR /app
          # copy in the source and build the Astro app
          COPY . .
          RUN npm install
          RUN npm run build
          # listen on every interface, on port 8085
          ENV HOST=0.0.0.0
          ENV PORT=8085
          EXPOSE 8085
          # run the server produced by "npm run build"
          CMD ORIGIN=https://localhost node ./dist/server/entry.mjs
          

Here is the Dockerfile for my Astro.js app, which is a Node.js app under the hood. After pulling Node and building the app, we tell it to listen on port 8085 and expose that port from the container.

Regardless of how your app is deployed, the only thing Apache really needs is the port number it's running on.


          # docker ps -a
          CONTAINER ID   ...   PORTS                                      NAMES
          ...
          84b146c9639d   ...   0.0.0.0:8085->8085/tcp, :::8085->8085/tcp  alrie
          ...
          

This snippet shows that I have a container named "alrie" and that the host's port 8085 is mapped to port 8085 inside the container, so TCP traffic flows back and forth on the same port externally and internally.

Making things secure

We're configured to accept requests on port 80, but we really need port 443. To get there, we're going to configure Apache to take a web request that came in on port 80 and rewrite it to use https: instead.

RewriteEngine tells Apache to enable URL rewriting. RewriteCond says "only apply the next rule if the request is for a server called alrie.pattonwebconcepts.com". RewriteRule then does the actual rewriting: the caret matches the incoming request, which gets replaced with https://, the server name, and the remainder of the original URL.

The important bit here is that once the URL is rewritten, the R=permanent flag sends the client a redirect, and the request comes back to Apache over HTTPS. That means we have to configure Apache to handle SSL connections.
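
A quick way to confirm the redirect is working (assuming the hostname already resolves to the proxy) is to ask for the plain HTTP URL and look for a 301 with a Location header pointing at https:

          # show only the response headers; we expect "301 Moved Permanently"
          # and a Location: https://alrie.pattonwebconcepts.com/... header
          curl -I http://alrie.pattonwebconcepts.com/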

Configure Apache to use SSL on your app

The first step is to configure Apache to support SSL connections *in general*, which we'll assume is already done.
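
If it isn't, on a Debian/Ubuntu-style layout (an assumption; paths and commands differ on other distros) enabling the modules this setup relies on looks roughly like this:

          # enable SSL, URL rewriting, and the proxy modules used by ProxyPass
          sudo a2enmod ssl rewrite proxy proxy_http
          # sanity-check the config, then reload Apache to pick up the modules
          sudo apachectl configtest && sudo systemctl reload apache2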

The next step is to tell Apache where to find your app's SSL certificates and keys with SSLCertificateFile and SSLCertificateKeyFile. I use letsencrypt/certbot to make this easier, but you can also do it manually by submitting a cert request to a certificate authority.

Either way, we'll assume you have valid certificates.
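
With certbot and its Apache plugin installed (an assumption; there are other ways to run it), getting a certificate for the staging hostname is a one-liner, and the files land in the paths used below:

          # request a certificate and let certbot wire it into the Apache config;
          # certs end up under /etc/letsencrypt/live/<hostname>/
          sudo certbot --apache -d alrie.pattonwebconcepts.com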


          <VirtualHost *:443>
            ServerAdmin erik@erikpatton.com
            ServerName alrie.pattonwebconcepts.com
            DocumentRoot "/var/www/html/alrie"
            SSLCertificateFile /etc/letsencrypt/live/alrie.pattonwebconcepts.com/fullchain.pem
            SSLCertificateKeyFile /etc/letsencrypt/live/alrie.pattonwebconcepts.com/privkey.pem
            Include /etc/letsencrypt/options-ssl-apache.conf
            ProxyPass / http://localhost:8085/
            ProxyPassReverse  / http://localhost:8085/
          </VirtualHost>
          

The last step

Now that we can terminate a valid SSL connection for our app's hostname, the last step is to tell Apache to send the traffic to the app. The ProxyPass directive says take all traffic under "/" and send it to ourselves, but at port 8085. ProxyPassReverse adjusts headers in the responses coming back (redirects, for example) so they point at the proxy rather than at localhost:8085.

If I wanted to send my traffic to another machine, I would simply change "localhost" to the name of the server where I wanted to send it.
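
Either way, it's worth confirming the backend answers before blaming the proxy. A quick sanity check from the proxy host, assuming the container from earlier is listening on port 8085:

          # hit the app directly, bypassing Apache, and show just the headers
          curl -sI http://localhost:8085/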

Wrapping up

Configuring a proxy isn't conceptually difficult, but it can be fiddly. This is where the Apache log files are indispensable. If things aren't working, first check the proxy to see if it's getting the request. If so, check that the rewrite is working (you'll have to raise the log level for mod_rewrite above the default), then check that the server receiving the request is actually getting it.

You can verify all of this with the browser of course, but I like the curl command since I can tell it to show me things like the HTTP headers, the SSL certificates being used, etc. It's a powerful tool, so if you're not familiar with it, spending some time playing with it can be helpful. And if you get hung up on SSL certificates, the openssl s_client command can be helpful for looking at certificate details.
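
As a sketch (using the hostname from this post; substitute your own):

          # follow the port-80 redirect through to HTTPS, showing headers and
          # the TLS handshake along the way
          curl -vIL http://alrie.pattonwebconcepts.com/

          # inspect the certificate the proxy presents for this hostname
          openssl s_client -connect alrie.pattonwebconcepts.com:443 \
            -servername alrie.pattonwebconcepts.com </dev/null | head -n 20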

A final thought

Because everything we're doing by hand is just config files and code, there's no reason why we couldn't go the next step and write a script to automate this entire procedure. Even better would be to put the config in GitHub/Lab and configure an action to update the server, with the certs stored as secrets. Heck, you could even trigger another action that would populate a Prometheus/Grafana config so you could start monitoring it in real time. And if you want to go crazy, gin up some Helm charts to run your container on a Kubernetes cluster.

Maybe tomorrow...
