NGINX Reverse Proxy Setup Guide

Reverse proxies accept connections from clients on behalf of a server. They are the opposite of forward proxies, which accept connections on behalf of a client destined for a server. They’re incredibly useful in two main cases: tightly controlled (and managed) ingress into a network, and supporting older products that don’t natively support the latest and greatest encryption.

Let’s be honest, SSL/TLS involves a lot of details that all have to align just so. If you have 200 servers, funneling ingress through just a few reverse proxies acting as “gatekeepers” limits your external attack surface considerably. With the use of a technology called SNI (an extension to SSL/TLS that allows a server to determine, based solely on the requested DNS name, where a request is destined) and a wildcard certificate, establishing this baseline becomes a breeze. (Added bonus: if you point the @ record for your DNS name at your reverse proxy, and point its default backing to an error page, you won’t have to wait for DNS to propagate when you publish services externally.)
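
As a rough illustration of that default backing: the proxy answers for every name covered by the wildcard, serves an error response for anything it does not recognise, and picks the correct virtual host purely from the SNI name in the TLS handshake. The snippet below is only a minimal sketch (the certificate paths are placeholders, and the certificate directives can instead be inherited from the http block); the named server blocks shown later in this guide sit alongside it.

text

server {
    listen 443 ssl default_server;              # answers any name with no dedicated server block
    server_name _;
    ssl_certificate     /ssl/example.org.crt;   # the wildcard bundle (placeholder path)
    ssl_certificate_key /ssl/example.org.key;
    return 404;                                 # or serve a friendlier error page here
}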

When you set up an SSL/TLS terminating reverse proxy, you should be aware of any legal and moral obligations you might have. You are, in essence, performing an authorized “Man-in-the-Middle” attack. Therein lies the rub with transport encryption – it is only encrypted in transit; the recipient can do whatever they’d like with that information. As the recipient, we are obligated to protect that information and honor that trust. It is highly recommended that you configure your proxy to re-encrypt the traffic destined for the backend servers in your internal farm.
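
If you do re-encrypt, nginx can also verify the certificate presented by the backend. The lines below are a minimal sketch of the relevant directives, placed inside the location block that proxies to that server; the internal hostname, port, and CA bundle path are hypothetical placeholders.

text

proxy_pass https://backend.internal.example:8443;        # re-encrypted hop to the internal server
proxy_ssl_verify              on;                        # verify the backend certificate
proxy_ssl_trusted_certificate /ssl/internal-ca.crt;      # CA that signed the backend certificate
proxy_ssl_server_name         on;                        # send SNI to the backend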

Linux administrators are likely to use nginx, apache, or HAProxy, depending on their use case. For today, let’s look at nginx as a terminating reverse proxy.

NGINX: (SSL/TLS Terminating Reverse Proxy)

NGINX (pronounced engine-x) has been gaining momentum over the past few years with a very loyal following. It’s not surprising – it’s easy to configure (and features easy-to-understand directives for configuring SSL/TLS securely), and its latest builds even support dynamic modules – a feature it had been lacking for a long time. In most cases, a terminating proxy is vastly superior to a passthrough proxy.

To follow this guide, you’ll need a wildcard certificate (you can purchase one right here at ssltrust.au starting at just $67.10 per year).

Only add this first line if you’re sure your domain (including subdomains) will never serve any pages over insecure HTTP.

text

add_header Strict-Transport-Security "max-age=63072000;includeSubdomains;preload";

nginx uses a certificate bundle instead of separately specifying a cert, its intermediates, and its root. You can create this bundle by pasting the Base64-encoded certificates into a single file, one right after the other, starting with the wildcard certificate itself and ending with the root.
You can also create the bundle with the following command:

shell

# file names are placeholders; quoting keeps the shell from glob-expanding the wildcard name
cat "*.example.org.crt" intermediate.crt root.crt > example.org.crt

text

ssl_certificate /ssl/example.org.crt; # the chained bundle created above
ssl_certificate_key /ssl/example.org.key;

You should disable TLSv1 (by removing it from the list below) if you can afford to give up support for legacy clients like Java 6 and Windows XP.

text

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

I recommend researching what best suits your environment, but this is a good place to start.

text

ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH EDH+aRSA !RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS";
ssl_prefer_server_ciphers on;
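
Once the proxy is up, you can check which protocol and cipher a client actually negotiates. One quick way (the hostname here is a placeholder) is openssl s_client; the session summary it prints includes the negotiated protocol and cipher.

shell

# -servername sends SNI so the proxy selects the right certificate
openssl s_client -connect www.example.org:443 -servername www.example.org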

You will need to generate your own Diffie-Hellman parameters.
You can do this in your current directory with the command

shell

openssl dhparam -out ./dhparams.pem 2048

It might take up to 10 minutes depending on your hardware.

text

ssl_dhparam /ssl/dhparams.pem;

The 10 MB cache configured below is enough shared memory for roughly 40,000 concurrent sessions.

text

ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
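
All of the ssl_* directives above can be placed in the http block of /etc/nginx/nginx.conf so that every server block inherits them, or repeated per server block. The following is a minimal sketch of the http-level layout, reusing the paths from above; on most distributions the stock nginx.conf already contains the conf.d include.

text

http {
    # add_header Strict-Transport-Security "max-age=63072000;includeSubdomains;preload"; # optional, see the note above
    ssl_certificate           /ssl/example.org.crt;
    ssl_certificate_key       /ssl/example.org.key;
    ssl_protocols             TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers               "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH EDH+aRSA !RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS";
    ssl_prefer_server_ciphers on;
    ssl_dhparam               /ssl/dhparams.pem;
    ssl_session_cache         shared:SSL:10m;
    ssl_session_timeout       10m;

    include /etc/nginx/conf.d/*.conf;   # the per-site files created in the next steps
}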

Now, navigate to your conf.d folder inside /etc/nginx. Create a file called “HTTPRedirect.conf”. The contents of that file should be the following:

text

server {
    listen 80;
    server_name www.example.org; # change to your domain name
    return 301 https://$host$request_uri; # $host preserves whichever subdomain was requested
}

This redirects any request arriving over insecure HTTP on port 80 to the same subdomain and resource over HTTPS on port 443.

Next, for each subdomain you want to configure, create a file named $somename.conf. Each file will handle a single subdomain, and the contents will look like so:

text

server {
    listen       443 ssl;
    server_name  site1.example.org;

    location / {
        # This can be HTTP or HTTPS on any port.
        # If anything other than localhost, HTTPS is recommended.
        proxy_pass http://localhost:8000;

        # You can optionally set these directives to keep from losing your original
        # source IP in any logs on the backend server. Without them, the IP of the
        # proxy will appear. You may need to pass through other headers as well
        # depending on your situation.
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
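
Two other headers that backends commonly expect are X-Forwarded-For and X-Forwarded-Proto; the lines below are a sketch of what you could add inside the same location block if your application needs them.

text

proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;  # appends the client IP to any existing X-Forwarded-For value
proxy_set_header X-Forwarded-Proto $scheme;                     # lets the backend know the original request was HTTPS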

Restart nginx with the command:

shell

service nginx restart

(or, on newer systems, systemctl restart nginx). If there are any errors, they can typically be found in /var/log/nginx/error.log.
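
It is also worth validating the configuration before (re)starting; nginx -t parses every included file and reports the first syntax error it finds without touching the running service.

shell

# test the configuration files for syntax errors
nginx -t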


Written by
Paul Baka

