Super Pi Part 6: HTTPS and NGINX

Posted on April 17, 2021

I'm feeling good about everything being behind an NGINX proxy, but I'm not a fan of the red shield in my browser notifying me that the connections are insecure because I'm not using HTTPS. Luckily, this issue isn't terribly difficult to address and I can use tools I already have. Like any proxy worth its salt, NGINX is HTTPS-capable, but there are some implementation details to decide on that aren't that obvious.

The first question I need to answer is: How do I get my SSL certificate? I can create a certificate-signing request and submit it to an established certificate authority online, but I'm only using this certificate on my internal network. Typically, when going through a certificate authority, I would also claim the domains I'm getting the certificate for. I haven't looked into it, but I imagine that costs money. So registering with a certificate authority is out.

I could also create a self-signed certificate. This would allow my phone or laptop to create an HTTPS connection to my Pi, but the browser would still complain that the connection isn't trusted. I'd have to make an exception for my Pi and (I think) it would stop complaining. That doesn't sound like a great solution either, but I could work with it.

The last option is to create a certificate authority and install it on all of my clients, and use that certificate authority to create certificates that NGINX will use. Since the certificate authority used to sign the certificates is registered with my browser and PC, the browser will be happy to find NGINX is serving certificates that it recognizes. Also, if I create a new certificate, I don't have to change anything on any clients as long as I sign that new certificate with the authority already installed on my clients. This is more work ahead of time, but it feels like less maintenance in the long run so this is the solution I went with. With that being said, it's time to add yet another component to Super Pi.

Creating Certificates

As decided above, I'm going to make myself a certificate authority and sign my own certificates. Becoming a certificate authority isn't that difficult, and everything can be done with OpenSSL. I connected to my Pi over SSH and installed it:

sudo apt install openssl

This process generates a lot of files, so I'm going to try to preemptively organize:

mkdir openssl
mkdir openssl/auth
mkdir openssl/cert

Certificate Authority

Everything related to the certificate authority is going in auth; everything else will go in cert. First, I need a key to create the certificate authority's root certificate:

cd openssl/auth
openssl genrsa -des3 -out myCA.key 2048

I was prompted for a password and a key was generated. With that key, I can create my root certificate:

openssl req -x509 -new -nodes -key myCA.key -sha256 -days 1825 -out myCA.pem

There are a lot of prompts to fill out; here's a sample for my root certificate:

Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:Arizona
Locality Name (eg, city) []:Tucson
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Internet Widgets Pty Ltd
Organizational Unit Name (eg, section) []:Home
Common Name (e.g. server FQDN or YOUR name) []:Internet Widgets Pty Ltd
Email Address []:support@internetwidgetparty.com

Just fill it out with your information. I made up my Organization Name and Organizational Unit; they can be anything. Common Names come up a lot, and they mean different things for a certificate authority than for a certificate your browser would install. For a certificate authority, the Common Name should be identical to the Organization Name.
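As an aside, OpenSSL will also accept all of these fields on the command line via -subj, which skips the prompts entirely. Here's a sketch of that alternate way to run the same command, using my made-up sample values:

openssl req -x509 -new -nodes -key myCA.key -sha256 -days 1825 -out myCA.pem \
  -subj "/C=US/ST=Arizona/L=Tucson/O=Internet Widgets Pty Ltd/OU=Home/CN=Internet Widgets Pty Ltd"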

Now that I have my certificate authority, it's time to install it on my clients. Some browsers don't like .pem format so the first thing I'll do is convert it:

openssl x509 -outform der -in myCA.pem -out myCA.crt
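To sanity-check the conversion, I can ask OpenSSL to print the subject and validity dates back out of the new file (the -inform der flag matches the output format above):

openssl x509 -in myCA.crt -inform der -noout -subject -dates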

Now, I'll switch over to my laptop and copy the .crt file to /usr/local/share/ca-certificates/. Linux has a command to update its certificate store by looking in this directory. So, using scp from good ol' Lappy:

sudo scp pi@192.168.0.23:/home/pi/openssl/auth/myCA.crt /usr/local/share/ca-certificates/

Once it's copied, I'll update my certificate store:

sudo update-ca-certificates
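I can tell this command works because update-ca-certificates drops a symlink into /etc/ssl/certs/ named after the .crt file, which a quick grep confirms:

ls -l /etc/ssl/certs/ | grep -i myca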

With the store updated, I can install the certificate in Firefox. I clicked the hamburger menu, clicked Preferences, and under the Privacy & Security tab found the Certificates heading near the bottom. I clicked the View Certificates... button, which opened a modal dialog; in the bottom middle I clicked the Import... button and selected the certificate at /usr/local/share/ca-certificates/myCA.crt. That's pretty much it; now I'm a registered certificate authority with my laptop.

With phones, in my case Android, the easiest thing to do is expose the .crt file on the network, download it, and add it to the certificate store of the device. I'm going to let NGINX expose the file by moving a copy to a certs directory that www-data owns:

sudo mkdir /var/www/html/certs
sudo cp ~/openssl/auth/myCA.crt /var/www/html/certs/
sudo chown -R www-data:www-data /var/www/html/certs

With the file copied and permissions granted, I added another NGINX config file at /etc/nginx/sites-available/cert:

server {
  listen 80 default_server;
  listen [::]:80 default_server;

  root /var/www/html/certs;

  location / {
    try_files $uri $uri/ =404;
  }
}

I set this config as the default server because, when it's active, it's the only HTTP endpoint I'll want to hit. I don't necessarily want to broadcast my certificate to everyone on my network, so I will only make this available when needed. Since I need it right now, I'll activate it:

sudo ln -s /etc/nginx/sites-available/cert /etc/nginx/sites-enabled/cert
sudo nginx -t
sudo systemctl restart nginx
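Once NGINX restarts, a quick curl from the laptop confirms the file is actually being served before I involve the phone:

curl -I http://192.168.0.23/myCA.crt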

With NGINX serving the file, I can download it to my phone by navigating to http://192.168.0.23/myCA.crt. The file downloads, and on my phone I can go to Settings -> Biometrics and Security -> Other security settings -> Install from phone storage -> CA certificate and locate the downloaded file to install it. For most browsers, this is enough: any Chromium-based browser will be happy, and I can navigate to my HTTPS services. Firefox, however, manages its own certificate store, which is not modifiable on mobile unless you're using the beta version. I installed the beta version, navigated to about:config as a URL, and toggled the setting security.enterprise_roots.enabled to true. With that done, I was able to reach my HTTPS services on mobile. It looks like I'll be using the beta version of Firefox for the foreseeable future; it might be time to see what Brave is all about. Before I go any further, I'm going to bring down the cert endpoint on my Pi:

sudo rm /etc/nginx/sites-enabled/cert

Getting a Certificate

Now that I have a certificate authority, I can start creating certificates. Again, there are a few different ways to approach this process, and this time my Pi setup narrows the options down significantly. Typically, I would generate one certificate for each domain. The fact that I have a root certificate authority on my clients makes this appealing: as I mentioned, I can pass my laptop any certificate signed with the installed authority and my laptop's browser will be happy. There's a catch, though. All of my domains are hosted at the same IP address, and the TLS handshake happens before my laptop sends the actual HTTP request, so NGINX won't know which domain the request is going to. Since it doesn't know the domain, it doesn't know which certificate to give back to my browser; it will just pick the default. This means that one domain will work, but the others will break because the certificate won't match the resource the browser is looking for. Straight away, then, it looks like giving each domain its own certificate is a bad idea.

The next option is to use Server Name Indication (SNI). In this process, my laptop sends the domain it wants to reach as part of the initial handshake with NGINX, which can then find the certificate with the requested domain name and send it along. This is the most robust solution, because if I add another domain later, I can generate a new certificate alongside the existing ones. The main drawback is that, since the secure connection hasn't been established yet, the domain name my laptop is asking for isn't encrypted, which can be exploited. Since I'm only working with my local network, I'm not too worried about that.

The last option, which is the one I went with, is creating one certificate with alternate names associated with it. With this implementation, if I add another domain to my Pi, I'll have to create a new certificate for NGINX to pass out. This means that at any time I will only have one certificate, which I kind of like. The downside is that there is a limit to how many alternate names can be on a certificate, but with three domains I think I will be alright.

Now that I've finally decided how to tackle certificate generation, the next step is to create a private key for the certificate itself. I'm moving from authority territory to certificate territory, so I'll change directories as well:

cd ../cert
openssl genrsa -out my.domains.local.key 2048

This key will be used to create a certificate signing request. This is the request I would send to an established certificate authority, which would send back a certificate to use in NGINX. If I were going that route, I would add a config file listing alternate names for the different domains my Pi is hosting. Since I'm the certificate authority, I'm not putting that information on the request; I will be using it later, though. For now, I'll create the signing request:

openssl req -new -key my.domains.local.key -out my.domains.local.csr

I got another slew of questions to answer, and filled it out similarly to how I filled out the certificate:

Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:Arizona
Locality Name (eg, city) []:Tucson
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Internet Widgets Party
Organizational Unit Name (eg, section) []:Home
Common Name (e.g. server FQDN or YOUR name) []:my.domains.local
Email Address []:support@internetwidgetparty.com

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:         
An optional company name []:

I changed the Common Name to the one I intend to use for the certificates I end up installing. This is where I'm not following good practice: typically, the Common Name would be one of my domains, but I'm instead using a general name I will never actually serve. Truth be told, I couldn't pick one. For me, it's more important to be able to look at this file in the future and know how I generated it and how it's being used, and the name helps a lot on that front. If I were following best practices, though, I would set the Common Name to a domain I'm actually using. Once the request is generated, I can create a certificate, but I'll need the configuration file with the domain information I mentioned right before making the request. OpenSSL refers to this kind of file as an extfile, so I created one named my.domains.local.ext and put it in the cert directory:

authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage=digitalSignature,nonRepudiation,keyEncipherment,dataEncipherment
subjectAltName=@altnames

[altnames]
DNS.1 = my.journal.local
DNS.2 = my.pihole.local
DNS.3 = my.node-red.local
IP.1 = 192.168.0.23

Admittedly, there's a little more information in here than just the alternate names for my Pi. The authorityKeyIdentifier tells the client how to identify the authority that signed the certificate; unfortunately, I can't provide much more information than that. Setting CA:FALSE in basicConstraints tells the client that this certificate is not itself a certificate authority, so the client will look for the issuing authority in its own trust store. This is good, because that authority happens to be exactly what I installed on all my devices. In the keyUsage field, I provided most of the uses I could find. All of these seem like things I may want to use in the future, although I think digitalSignature alone would probably be enough. In the altnames section, I put in all of the domains I registered with my Pi in Super Pi Part 4. Interestingly, I can also add an IP address as an altname, so I added that as well. Finally, I can create a certificate:

openssl x509 -req -in my.domains.local.csr -CA /home/pi/openssl/auth/myCA.pem -CAkey /home/pi/openssl/auth/myCA.key -CAcreateserial -out my.domains.local.crt -days 1825 -sha256 -extfile my.domains.local.ext
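A quick sanity check (an extra step I'm adding here) confirms the alternate names actually made it into the certificate and that it chains back to my authority:

openssl x509 -in my.domains.local.crt -noout -text | grep -A 1 'Subject Alternative Name'
openssl verify -CAfile /home/pi/openssl/auth/myCA.pem my.domains.local.crt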

Linux has a designated place for my .crt and .key files, and I want to follow best practices, so I'm going to move them before moving on to NGINX:

sudo mv ~/openssl/cert/my.domains.local.crt /etc/ssl/certs/
sudo mv ~/openssl/cert/my.domains.local.key /etc/ssl/private/
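One extra step worth taking: mv preserves the pi user's ownership, and a private key shouldn't be readable by anyone but root. Locking it down is straightforward, and NGINX's master process runs as root, so it can still read the key:

sudo chown root:root /etc/ssl/private/my.domains.local.key
sudo chmod 600 /etc/ssl/private/my.domains.local.key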

Since there's not a lot relying on the certificates specifically, I don't really care too much about backing them up. If worse comes to worst, I can always generate new certificates with my CA without much trouble. With that, we can move on to NGINX.

NGINX

Although this section is longer, reconfiguring NGINX is actually much easier than all the work it took to get the certificates set up properly. NGINX has documentation on setting up HTTPS servers, which got me most of the way to where I wanted to be. Since Raspberry Pis aren't very powerful and HTTPS can take a lot of resources, I wanted to go easy on my server. I've seen recommendations saying you want at least as many dedicated HTTPS worker processes as you have cores. I don't have the luxury of cores here, but I can leverage some NGINX settings to lighten the load on the poor Pi. These settings are going in a config file separate from everything else, since I want them available to every server block and only want to declare them once. I made a file called /etc/nginx/sites-available/https and put the following config inside:

ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;

The first setting creates an SSL session cache shared between all of NGINX's worker processes; the 10m here is 10 megabytes of cache (roughly 40,000 sessions), not 10 minutes. This is especially convenient since I have three domains serving up the same certificate. The second setting increases the session timeout from the default of 5 minutes to 10, meaning a client can go longer before it has to perform another full handshake with my Raspberry Pi to establish a secure connection. As always, I ran the usual shell commands one at a time to make sure NGINX was happy:

sudo ln -s /etc/nginx/sites-available/https /etc/nginx/sites-enabled/https
sudo nginx -t
sudo systemctl restart nginx

With that bit of configuration done, I can start moving my services to HTTPS.

Upgrading Plain HTML Files

The certificate stuff is all done, so I can start building out some NGINX configuration and testing connections. The easiest website to upgrade will be the journal thing I set up in my NGINX post, so I'm going to start with that to make sure I remember what I'm doing. For comparison's sake, here's how I had it configured in Super Pi Part 4:

server {
    listen 80;
    listen [::]:80;
    root /var/www/html/journal;
    index index.html;
    server_name my.journal.local;

    location / {
        try_files $uri $uri/ =404;
    }
}

To make an HTTPS variant, I added the following configuration to the end of /etc/nginx/sites-available/journal, most of which was taken from the documentation linked above:

server {
  listen 443 ssl;
  keepalive_timeout 70;

  ssl_certificate /etc/ssl/certs/my.domains.local.crt;
  ssl_certificate_key /etc/ssl/private/my.domains.local.key;
  ssl_protocols TLSv1.2;
  ssl_ciphers HIGH:!aNULL:!MD5;

  root /var/www/html/journal/;
  index index.html;
  server_name my.journal.local;

  location / {
    try_files $uri $uri/ =404;
  }
}

I changed the listen directive to 443 ssl, since 443 is the default HTTPS port and HTTPS requires ssl. I added a keepalive_timeout so connections stay open and can be reused, instead of clients performing a new handshake for every request. Next, I added the ssl_certificate and ssl_certificate_key directives and pointed them at the files I generated with OpenSSL. I set ssl_protocols to only accept TLSv1.2; this is pretty restrictive, and I might need to expand it later. The last thing I did for HTTPS was set ssl_ciphers to values from the documentation. After that, I just took everything from the HTTP version (sans listen directives) and pasted it into the HTTPS server. I saved the config, tested and restarted NGINX, navigated to https://my.journal.local and, lo and behold, it worked straight away. Happily, this strategy worked for everything.
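To see the certificate exchange itself rather than taking the browser's word for it, openssl can act as a client; the -servername flag sends the SNI value so NGINX knows which host I mean (192.168.0.23 being my Pi):

openssl s_client -connect 192.168.0.23:443 -servername my.journal.local </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer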

Upgrading Pi-Hole

I used exactly the same process to update Pi-Hole: replace the listen directive value; add the keepalive_timeout, certificate files, protocols, and ciphers; and everything else stays the same as before:

server {
  listen 443 ssl;
  keepalive_timeout 70;

  ssl_certificate /etc/ssl/certs/my.domains.local.crt;
  ssl_certificate_key /etc/ssl/private/my.domains.local.key;
  ssl_protocols TLSv1.2;
  ssl_ciphers HIGH:!aNULL:!MD5;

  root /var/www/html/pihole;
  server_name my.pihole.local;
  autoindex off;

  index pihole/index.php index.php index.html index.htm;

  location / {
    expires max;
    try_files $uri $uri/ =404;
  }

  location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root/$fastcgi_script_name;
    fastcgi_pass unix:/run/php/php7.3-fpm.sock;
    fastcgi_param FQDN true;
    auth_basic "Restricted"; # For Basic Auth
    auth_basic_user_file /etc/nginx/.htpasswd; # For Basic Auth
  }

  location /*.js {
    index pihole/index.js;
    auth_basic "Restricted"; # For Basic Auth
    auth_basic_user_file /etc/nginx/.htpasswd; # For Basic Auth
  }

  location /admin {
    root /var/www/html/pihole;
    index index.php index.html index.htm;
    auth_basic "Restricted"; # For Basic Auth
    auth_basic_user_file /etc/nginx/.htpasswd; # For Basic Auth
  }

  location ~ /\.ht {
    deny all;
  }
}

This makes sense when I think about it, because I'm essentially doing the same thing: serving the same pages over a different transport. There are a lot of location blocks, but because the transport is scoped at the server level, all of the location blocks can remain exactly the same.
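Since the CA went into the laptop's system store earlier, curl trusts the certificate without any extra flags, which makes it an easy way to confirm both TLS and the basic auth survived the move. The credentials here are placeholders for whatever's in .htpasswd; this should come back with a 200 (or a 401 if they're wrong):

curl -I https://my.pihole.local/admin/ -u someuser:somepassword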

Upgrading Node-RED

Because Node-RED deals with websockets and I'm not too familiar with them, I was nervous that this one would be more difficult. I was so convinced of this that at first I over-complicated it; the strategy is exactly the same! Here's what I ended up with:

map $http_upgrade $connection_upgrade {
  default upgrade;
  '' close;
}

server {
  listen 443 ssl;
  keepalive_timeout 70;

  ssl_certificate /etc/ssl/certs/my.domains.local.crt;
  ssl_certificate_key /etc/ssl/private/my.domains.local.key;
  ssl_protocols TLSv1.2;
  ssl_ciphers HIGH:!aNULL:!MD5;

  server_name my.node-red.local;

  location / {
    proxy_pass http://websocket;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $remote_addr;
  }
}

upstream websocket {
  server localhost:1880;
}

The map and upstream directives are copied verbatim, and the rest of the work is copying in the SSL parameters as before. My over-complication was that I was sure the proxy_pass needed to be https, because I thought NGINX wanted to know how I wanted to connect to the socket. What I didn't realize is that NGINX is already wrapping the client-facing socket in SSL; what proxy_pass wants is the protocol of the connection behind the scenes. For that value to be https, Node-RED would have to be running its own SSL layer for NGINX to connect through. Since Node-RED isn't configured that way, http is the only viable option.
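To check the whole path end to end, curl can fetch the editor page over HTTPS, and with the right headers it can even request the websocket upgrade (the editor's websocket lives at /comms, if I remember right; the Sec-WebSocket-Key is the sample value from the websocket RFC). The first command should print 200, and the second should come back with 101 Switching Protocols:

curl -s -o /dev/null -w '%{http_code}\n' https://my.node-red.local/
curl -s -i -N -H 'Connection: Upgrade' -H 'Upgrade: websocket' -H 'Sec-WebSocket-Version: 13' -H 'Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==' https://my.node-red.local/comms | head -n 1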

Redirecting HTTP to HTTPS Globally

Since all of my services are now hosted on HTTPS, I want to redirect all HTTP traffic to use my updated services. NGINX provides a sort of catch-all that allows me to set a global redirect and saves me the time of creating a redirect for every domain. I'll go into my /etc/nginx/sites-available/https file and add this configuration:

server {
  listen 80;
  listen [::]:80;
  return 301 https://$host$request_uri;
}

This block listens on port 80 and returns a 301 response (a permanent redirect) to the HTTPS version of the host (or domain) with the original URL tacked on. For good measure, I'll also make a backup of all of my NGINX configuration for when my SD card inevitably dies.
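Something like this does the trick for the backup; the destination is wherever you stash such things, and mine just goes in my home directory with a date on it:

sudo tar czf ~/nginx-config-$(date +%F).tar.gz /etc/nginx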

Config Cleanup

I'm not quite done yet: if I navigate to the HTTP versions of my services, they still appear and are not redirected. Since the redirect server block isn't looking for any specific domains, NGINX will only use it when it can't match the request anywhere else. That means I need to remove my HTTP server blocks (and nothing else). I'd like to keep my HTTP config for posterity, so I'm going to put everything I don't need in /etc/nginx/sites-available/archive; I also copied the map and upstream blocks so I'd have all the context in case I need these HTTP blocks later. After all of that was done, I opened my services in my browser and they were still pulling up HTTP endpoints. This seemed fishy, because NGINX no longer had any configuration saying these endpoints existed. Opening a new private browser window to bypass the cache showed that the redirects were working as expected.
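curl sidesteps the browser cache entirely, which makes it a nicer way to prove the redirect is live; the Location header should point at the HTTPS version of the same URL:

curl -sI http://my.journal.local/ | grep -E 'HTTP|Location'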

Now I'm done. This was a bit of a long one, but a lot of it's code blocks, and those don't really count. If a post is just one big code block, is it really a post at all?