Mastering Nginx Guide From A Newbie Perspective

CCarter

Alrighty - (related to my SAAS Journal) - we're building the new platform on Nginx. Now obviously building on a new platform you have not mastered, or are a complete newbie to, is never a good idea. When problems occur you are out of your depth and element. When big problems occur, pray to god Stackoverflow is there to help you or Google hasn't been compromised (yes, you might even have to hit page 2, 3, or even 4 to figure out your problem). You might want to pick up this book too in your programming journey:



But when it comes down to the main server of your operation, the foundation of everything needs to be solid, and Apache2's speed pales in comparison to Nginx's, so it's time to embrace the future.

"Apache is like Microsoft Word. It has a million options but you only need six. Nginx does those six things, and it does five of them 50 times faster than Apache." - Chris Lea

I know most of you are going to be like



You might as well move on, cause this is me getting nerdy...

Apache and Nginx memory usage comparison (lower is better):



Requests Per Second (higher is better obviously):



Some sources:
Web Server Performance Comparison

Caveat: Remember, Apache supports a larger toolbox of things it can do immediately and is probably the most compatible across all web software out there today. Furthermore, most websites really don't get so many concurrent hits as to gain large performance/memory benefits from Lighttpd or Nginx – but you can check them out to see if they work best for your needs.

--

Nginx's popularity continues to rise, largely I believe due to its simplicity:



More and more of the top websites are using Nginx:



Nginx vs. Apache: Our View of a Decade-Old Question

--

So what I did was see how hard it would be to install a WordPress blog under the /blog/ URL. It wasn't too hard actually; in fact the hardest part was getting SSL to run, which took about 10 mins.

First thing you'll need to know is where you'll be storing the HTML files for your website (the exact location where you plan on putting your WordPress install). Once you know that, open the command line, visit the /etc/nginx/sites-available/ folder, and edit the example.com.conf file (might be example.com). You might also have to delete the "default" file cause that thing will cause problems and a permanent "Welcome to Nginx" screen.

Example of my example.com.conf file:
Code:
server {
	ssl on;
	listen 80;
	listen 443 ssl;
	server_name  example.com;

        ssl_certificate /etc/nginx/ssl/nginx.crt;
        ssl_certificate_key /etc/nginx/ssl/nginx.key;

	return       301 $scheme://www.example.com$request_uri;
}

server {
	server_name www.example.com;

	listen 80;
	listen 443 ssl;

        ssl_certificate /etc/nginx/ssl/nginx.crt;
        ssl_certificate_key /etc/nginx/ssl/nginx.key;

	ssl on;

        access_log   /var/log/nginx/example.com.access.log;
        error_log    /var/log/nginx/example.com.error.log;

        root /var/www/example.com/htdocs;
        index index.php index.html index.htm index.shtml;

        location / {
                index index.php index.html index.htm index.shtml;
        }

        location /blog/ {
                try_files $uri $uri/ /blog/index.php?$args;
        }

        location ~ \.php$ {
                try_files $uri =404;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
        }

	fastcgi_param   HTTPS               on;
	fastcgi_param   HTTP_SCHEME         https;
}

^^ Now this forces SSL on, and that's a problem if you are using a self-signed certificate like I am, cause it will create warnings. To turn off SSL, comment out the "ssl on;" lines ("#ssl on;") as well as the last two fastcgi_param lines at the end there.

Now obviously, being a newbie at Nginx, I welcome any and all criticism as I continue figuring out Nginx. But with the above code, SSL is enabled, WordPress runs on /blog/ with custom permalinks enabled, and that's that.

Remember there is no .htaccess, mates. Well, there sort of is, but going that route is a bad idea cause it just muddies up the speed.

Another thing to note: this forces non-www to 301 to www. That's why there are two sections. The first section 301s anything on the correct scheme ($scheme = http or https) to the www version of that same scheme and appends the URL you were going to ($request_uri).

Obviously when we get some sexy SSL going we'll replace this self signed certificate with our own, but for now that's the current setup as we continue building the platform. If I find corrections or anything else I'll keep coming back and adding them to this thread. If anyone has any other Nginx tricks feel free to drop them in.

Also, a ton of trial and error made this nginx + WordPress setup possible; this book also came in handy:



I can't wait to have Nginx meet Redis...

 
This is a completely basic question. I know that Apache and Nginx and the like are your "server software" but what exactly do they do on the basic level? Are they languages? Are they packages of functions?

I know that for Wordpress, PHP talks to MySQL databases. What are the "requests" in the image above? Is that like HTTP requests or database requests?

And does Wordpress require Apache or can you put Nginx on there?

I start to get confused at the quantum level of all of this. Like all of these coding languages that all do the same thing, which all mainly are compiling C which is somehow reading binary or something. It gets too deep for me and I've never been able to talk to someone who can explain it. It's almost like magic.
 
I start to get confused at the quantum level of all of this. Like all of these coding languages that all do the same thing, which all mainly are compiling C which is somehow reading binary or something. It gets too deep for me and I've never been able to talk to someone who can explain it. It's almost like magic.

I'll take a shot at this and some of your other questions.

Let's talk about programming langs. Every lang, including C, ends up compiled to machine code using binary notation (0's and 1's). Machine code, aka Machine Lang, is the only language that computer hardware understands. With this in mind, we can say that Machine Lang is the "lowest level" lang because it's right there next to the hardware.

Obviously, no one programs in pure machine lang (binary). Knowing the exact order of 0's and 1's to type in to make enough things happen to even print "hello world" to the screen would be a total mindfuck. If only there were a better way to tell the machine what to do. Enter Assembly Lang. This is the first step away from Machine Code and as close to the metal as you can get as a programmer.

When you write an Assembly program, you use something called an Assembler to translate the Assembly to Machine Code. Still, if you've ever tried Assembly, you'd know it's just ridiculous to learn and only works for the particular computer architecture. I went through the pain of learning Assembly for the Commodore 64 back in the 80's when I was 12 so I could crack games and create my own cheats (infinite lives, ammo). Motivation is a funny thing.

Next stop is C. C is a step away from assembly and also where programs start to get much easier to read and write. C gives programmers a good deal of control over the hardware and is very fast. When you write a program in C, you use something called a compiler that compiles the code down to equivalent machine code. The compiler is basically a translator of sorts, so when you type in
printf("Hello World"); the compiler tries to turn all that into the 0's and 1's the hardware understands.

From C, the next step up would be C++. C++ is also a compiled language and uses a compiler, just like C does, to turn the programs you write into Machine Code. Both C and C++ make it a hell of a lot easier to write programs, but there is still a TON of effort that goes into making a program that does something useful, such as even a simple calculator program.

Let's get into PHP, Ruby, Python, JavaScript and other scripting langs and how they work. All of these languages work the same way. When you write code in one of these languages and execute it, something called an "Interpreter" translates it into machine code right when it's needed. There is a big tradeoff to this. Every time your code runs, it needs to pass through the interpreter, which in turn compiles the code to machine code, which is then executed by the hardware. We can call languages like these "interpreted languages" rather than "compiled" like C and C++.

As you can imagine, interpreted languages are much slower than compiled langs, because with compiled langs the compiler has already taken care of turning the code into machine code ahead of time. This is no different than using a Spanish interpreter if you happen to be in Mexico and can't speak Spanish. You have to talk to the Interpreter before they can translate your English to Spanish. This is much slower than knowing Spanish and talking to the person directly.

Why do all these languages exist? In my humble opinion, it's because there are so many brilliant developers that are all trying to get the most done with as little effort as possible using syntax that is easy to read and write. These languages are all like mini workshops full of tools that are designed for specific situations.

For instance, I can write a web scraper in C++ from scratch and it would probably take me several thousand lines of code to make it work. I can also do it in Ruby, PHP, Python, etc. in far fewer lines of code. I also don't have to worry about managing memory or the "low level" details because the interpreter handles a lot of that for me. With the modern web, the tide is now turning toward languages like Go, Erlang and Elixir because they are specifically designed to address concurrency issues and the computing needs of today, with multi-core processors being the standard.

What are Apache and Nginx? They are simply programs written in C/C++ that allow web browsers and other devices to interact with files and applications on a server. Let's use a Wordpress install as an example. Here's how things look from the perspective of the web server software:
  • User types in http://wpsite.example.com
  • After DNS lookup, IP is located and the webserver is contacted
  • Apache/Nginx is listening on port 80 and/or 443 (HTTP and HTTPS), sees that user is requesting http://wpsite.example.com.
  • Apache/Nginx then does what it needs to do to "serve" the data to the end user. This involves things like passing the PHP code through the PHP interpreter and gathering things like images/videos/other media, JavaScript files, etc. In the case of WordPress, this means that quite a bit of code has to be both interpreted and executed. If you have plugins, the extra code adds to the amount of time this takes.
  • The end result is sent back to the user's browser
What seems hard for a lot of people to grasp is the fact that a "server" can both be a piece of software and a piece of hardware. So, what does a piece of software need to do to be considered a "server"? Not much more than listen on a network port or socket and respond to requests.
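To make "listen on a network port and respond to requests" concrete, here's about the smallest thing nginx will accept as a working "server" - a sketch, not production config (the port and message are made up):

```nginx
server {
    listen 8080;                            # listen on a network port...
    return 200 "hello, I am a server\n";    # ...and respond to every request
}
```

Drop that in a vhost file, reload nginx, and requests to port 8080 get an answer - that's really all it takes for software to qualify as a "server".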

That image above that was showing "requests per second"? It may help to think of this in a different context. Let's say we're in the restaurant business. We're going to take on Taco Bell, call our restaurant "Turbo Taco" and claim that our food is not only better, but our service is more than twice as fast as Taco Bell's. We'll need a metric for this. Let's call it "TPM", or Tacos Per Minute. This would be a measurement of how many tacos employees can get to hungry customers in a minute.

Because Turbo Taco is a fun company, they decide to have a contest between their top 2 locations to determine which Turbo Taco location is deserving of the Golden Taco award. There are 2 teams, let's call them Nginx and Apache.

Team Apache starts off and they are only getting 20 tacos out to their customers per minute, which means their TPM rate is 20. Upon review of their workflow, it is determined that, for each taco they make, they have to call their boss on the phone and say "I just made a taco", wait for the boss to record it on a log, then the boss has to get approval from upper management to present the taco to the customer. Lots of layers to go through just to make a damn taco and this is with EACH and EVERY one. Still, a TPM rate of 20 isn't too shabby considering the amount of bureaucracy.

The Nginx team was hand picked specifically by the most elite of Turbo Tacoistas. Tacos are prepared and served with military precision, as each team member is a specialist with specific orders. No approval is needed to prepare and "serve" the taco to the customer. Team Nginx can make and "serve" 60 tacos in a minute, giving them a TPM rate of 60 and easily bringing home the Golden Taco award and other assorted random prizes. Because of the extreme training of the Nginx team and also because they eliminated a lot of the bureaucratic layers the Apache team has to go through, they are able to put the Turbo in Turbo Tacos.

Now think of requests per second as the amount of "things" per second that web server software does for end users. A "thing" could mean rendering a static web page, json file in the case of an API, etc.
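As a tiny illustration of the "json file in the case of an API" kind of request, nginx can even answer one of those itself with no PHP or database involved - a sketch, the path is made up:

```nginx
location /api/health {
    default_type application/json;    # set the Content-Type header
    return 200 '{"status":"ok"}';     # nginx responds directly, no backend
}
```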

I hope this helps clarify some of the concepts for you as well as a birds-eye sort of analogy of Apache vs Nginx. Please let me know if you have any questions.
 
The easiest way to think about it is this. On the software side, a server is a framework for determining how to serve web pages or web services across the internet or intranet.

In its simplest form, a server framework in certain languages might only consist of a very few lines of code that simply define a few characteristics of how a web page or web service should be served or made accessible to the user. In other forms, it might consist of multiple files, multiple plugins and modules, and thousands of lines of code. Server frameworks can be written in many different languages, and can offer many different capabilities. Ultimately, your needs determine the demand, and Apache or NGINX will cover the vast majority of people's needs.

The awesome thing is there are a lot of other creative and flexible options out there these days. For example, should you want simplicity and portability, it's possible to create a Go ("Golang") server in just a few lines of code, run a single build command, and your Go server will compile into a single executable binary that you can take with you on a USB stick anywhere! Correspondingly, you could do the same or similar with Hugo, which is built on Go, and have a server + static site generator that is highly portable and efficient to get up and running anywhere.
 
Nginx requires getting pretty familiar with your site's conf file. Utilize this within a site block for browser caching:

Code:
location ~*  \.(jpg|jpeg|png|gif|ico|js)$ {
  expires 365d;
  }
location ~*  \.(pdf)$ {
  expires 30d;
}
location ~*  \.(css)$ {
  expires 1h30m;
}
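You can also collapse those patterns into one location and send a Cache-Control header alongside Expires, which is what modern browsers check first. A sketch - adjust the extensions and lifetimes to your own setup:

```nginx
location ~* \.(jpg|jpeg|png|gif|ico|js|css)$ {
    expires 30d;                          # sets Expires and Cache-Control: max-age
    add_header Cache-Control "public";    # explicitly allow shared caches to keep it
}
```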

More: Make Browsers Cache Static Files On nginx
 
For Vary: Accept-Encoding
Code:
	gzip on;
	gzip_disable "msie6";

	gzip_comp_level 6;
	gzip_min_length 1100;
	gzip_buffers 16 8k;
	gzip_proxied any;
	gzip_http_version 1.1;
	gzip_types text/plain text/css text/js text/xml text/javascript application/javascript application/x-javascript application/json application/xml application/xml+rss;
	gzip_vary on;

More: How to configure Nginx Gzip compression
 
Blocking Bad Bots

Inside your nginx.conf file within your "http" block NOT your "server" block (this is a lite version, a more comprehensive version is available below):
Code:
http {
map $http_user_agent $limit_bots {
  default 0;
  ~*(BlackWidow|ChinaClaw|Custo|DISCo|Download|Demon|eCatch|EirGrabber|EmailSiphon|EmailWolf|SuperHTTP|Surfbot|WebWhacker) 1;
  ~*(Twengabot|htmlparser|libwww|Python|perl|urllib|scan|Curl|email|PycURL|Pyth|PyQ|WebCollector|WebCopy|webcraw) 1;
}
}

Inside your example.com configuration file (usually located within your sites-available folder within nginx).
Note: You cannot have duplicate 'location /' subdirectives, so if you have an existing one, place the respective "if" statement within your current existing 'location /' subdirective.

Code:
server {
location / {
#blocks blank user_agents
if ($http_user_agent = "") { return  301 $scheme://www.google.com/; }

  if ($limit_bots = 1) {
  return  301 $scheme://www.google.com$request_uri;
  }
}
}

^^ Now this will redirect those bad bots whenever they come around to your nginx setup. I've created a more comprehensive version within the block unwanted bots thread.
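Side note: if you'd rather not gift those bots a redirect, nginx has a non-standard status code, 444, that closes the connection without sending any response at all. Same map as above, just a different action inside your 'location /':

```nginx
if ($limit_bots = 1) {
    return 444;    # nginx-specific: drop the connection, send nothing back
}
```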
 
Allow only select IPs (1.2.3.4 in example) to a "/restricted" directory:

Code:
location ^~ /restricted {
  allow 1.2.3.4;
  deny all;

  index index.html index.php index.htm index.shtml;

  location ~ \.php$ {
  try_files $uri =404;
  fastcgi_split_path_info ^(.+\.php)(/.+)$;
  fastcgi_pass unix:/var/run/php5-fpm.sock;
  fastcgi_index index.php;
  fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
  include fastcgi_params;
  }
}
 
Nginx as a reverse-proxy via subdomains

It is very common to have web-based services that run on ports other than 80 or 443 (http/https) so they can run alongside general webservers like Nginx/Apache. Say you have something that runs on port 11223. In order to get to the service, you'd have to type in something like this: http://example.com:11223

This isn't what you want. You should always strive to have the software firewall on your VPS (iptables) configured to drop requests to non-standard ports. Even if you are already doing this and are allowing your IP access to all ports, you still don't want to have to hang a port on the end of the URL when you can easily configure Nginx (or Apache) to be a reverse proxy.

A very simple example

You want to set Nginx up so all you have to do is type in http://some-service.example.com and have Nginx take care of forwarding the request to port 11223. Here's what you would do:

  • Set up an A record in DNS for some-service.example.com
  • Set up a very simple config file for a vhost in nginx like so then restart nginx:

Code:
server {
   listen      80;
   server_name some-service.example.com;
   
   location / {
        proxy_pass  http://127.0.0.1:11223;
    }
}


The above is the simplest possible thing that will work. What is happening here is Nginx will see requests coming in for http://some-service.example.com and will act as a middle-man, proxying the request to the service running on the VPS at localhost port 11223. This is both convenient and more secure, because we can configure the service to only listen on localhost rather than 0.0.0.0 (everything), without opening up 11223 to the public.

If you want to read more about using Nginx as a reverse-proxy, check out the official documentation here: https://www.nginx.com/resources/admin-guide/reverse-proxy/
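One addition worth making even to the simplest proxy config: without it, the proxied service sees every request as coming from 127.0.0.1 with the wrong hostname. Passing a few standard headers forward fixes that - a sketch built on the same example port:

```nginx
location / {
    proxy_pass http://127.0.0.1:11223;
    proxy_set_header Host              $host;                       # original hostname
    proxy_set_header X-Real-IP         $remote_addr;                # visitor's real IP
    proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;  # full proxy chain
    proxy_set_header X-Forwarded-Proto $scheme;                     # http or https
}
```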
 
A testament to NGINX versus Apache2.4.xx in terms of speed.

Before NGINX (running Apache2.4.xxx):



After With NGINX (1.10.0):



--

You may think that's not a lot, but that's 1.9MB coming at users in under half a second. It makes all the difference in the world when a user is halfway around the world - and this is BEFORE a CDN has been enabled :wink:
 
Getting CGI to work on NGINX:

Terminal command to install fastcgi:
Code:
apt-get install fcgiwrap

Then insert this inside your server block:
Code:
  location /cgi-bin/ {
  # Disable gzip (it makes scripts feel slower since they have to complete
  # before getting gzipped)
  gzip off;

  # Set the root (inside this location this means that we are
  # giving access to the files under /var/www/example.com/htdocs/cgi-bin)
  root  /var/www/example.com/htdocs;

  # Fastcgi socket
  fastcgi_pass  unix:/var/run/fcgiwrap.socket;

  # Fastcgi parameters, include the standard ones
  include /etc/nginx/fastcgi_params;

  # Adjust non standard parameters (SCRIPT_FILENAME)
  fastcgi_param SCRIPT_FILENAME  $document_root$fastcgi_script_name;
  }

Then restart nginx. I did a test with a hello-world script and these are the results I got:



10ms to start off the foundation = insane. Unfortunately turning off gzip means the "Performance Grade" drops to 83 instead of my usual 100/100, but if I turned gzip on, the overall script would "feel" sluggish, so it's gotta stay off for now unless I figure something else out. I'm not bothering to research it any further for the moment.
 
Enabling HTTP/2 on NGINX

Follow up on the HTTP2 discussion in HTTP vs HTTPS: SEO Benefits? - here is how you enable HTTP2 in nginx:

First just make sure nginx version is 1.9.5 or ABOVE:

Code:
commandline: nginx -V

You'll get an output of the nginx version AND a bunch of modules that are loaded with nginx. Look for "--with-http_v2_module".

If it's present you are set (upgrade your NGINX server to 1.9.5 or above if it's not). Go into your sites-available/example.com file and find and edit the line that looks like this:

Code:
listen 443 ssl;

to

Code:
listen 443 ssl http2;

Afterwards reload/start your nginx server

Code:
service nginx reload

You now have HTTP2 enabled.
 
Setting up LetsEncrypt SSL on NGINX

Alright so I have an offshoot project that I thought would be interesting to see if we can get LetsEncrypt SSL setup, and I was pleasantly surprised how easy it was.

Step 1. - Update your software with latest patches
Code:
sudo apt-get update && sudo apt-get dist-upgrade -y

Step 2. - Install LetsEncrypt
Code:
sudo apt-get -y install letsencrypt

Step 3. - STOP NGINX server
Code:
service nginx stop

Step 4. - Run LetsEncrypt (replace example.com with your domain)
Code:
letsencrypt certonly --standalone -d example.com -d www.example.com

Step 5. - Generate Strong Diffie-Hellman Group
Code:
sudo openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048

(This may take a few minutes)


Step 6. - Edit NGINX /etc/nginx/sites-available/example.com to reflect something like the following (this setup redirects non-www to www with SSL):

Code:
server {
    listen      80;
    listen      443 ssl http2;
    server_name example.com;

        ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_dhparam /etc/ssl/certs/dhparam.pem;
        ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
        ssl_session_cache shared:SSL:50m;
        ssl_stapling on;
        ssl_stapling_verify on;
        add_header Strict-Transport-Security max-age=15768000;

        ssl_session_timeout 1h;

    #off on purpose
        #ssl on;

    rewrite     ^   https://www.example.com$request_uri? permanent;
}

server {
        server_name www.example.com;

        #listen 80;
        listen 443 ssl http2;

        ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_dhparam /etc/ssl/certs/dhparam.pem;
        ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
        ssl_session_cache shared:SSL:50m;
        ssl_stapling on;
        ssl_stapling_verify on;
        add_header Strict-Transport-Security max-age=15768000;

        ssl_session_timeout 1h;

    ssl on;
}

Step 7. - Start NGINX server
Code:
service nginx start

Step 8. - Test Everything went smoothly
Code:
https://www.ssllabs.com/ssltest/analyze.html?d=example.com



Step 9. - Dry Run of LetsEncrypt Renewal
Code:
letsencrypt renew --dry-run --agree-tos

Step 10. - Put in a cron job every Sunday to check for new certificates and reload nginx (reason is LetsEncrypt certificates are only good for 90 days)
Code:
sudo crontab -e

Then add these cron jobs that run every Sunday at 2:30 AM and 2:35 AM:

Code:
30 2 * * 0 letsencrypt renew >> /var/log/le-renew.log
35 2 * * 0 /etc/init.d/nginx reload
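If your distro ships the newer certbot client (the renamed letsencrypt tool) and it supports the --post-hook flag, you can collapse those two entries into one - the hook only fires when a certificate was actually renewed, so nginx isn't reloaded for nothing. The client name and flag here are assumptions about your install; check certbot's help output first:

```shell
30 2 * * 0 certbot renew --post-hook "service nginx reload" >> /var/log/le-renew.log 2>&1
```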

Step 11. - fin(ished)

--

Additionally, a test of your SSL hardening can be found here: https://securityheaders.io/

You'll probably get an F or E rating; you can go to https://letsecure.me/secure-web-deployment-with-lets-encrypt-and-nginx/ to learn more about how to improve that further.

--

Source #1: https://www.digitalocean.com/commun...cure-nginx-with-let-s-encrypt-on-ubuntu-14-04
Source #2: https://certbot.eff.org/#ubuntuxenial-nginx
Source #3: https://letsecure.me/secure-web-deployment-with-lets-encrypt-and-nginx/
 
Increased SSL hardening

Hardening nginx communication security is really easy. Using https://securityheaders.io/ you can test your setup's security. Here is my before:



Here is my after:



(The A+ certification was left off on purpose.)

I added the following code to the SSL sections of my example.com file:

Code:
	add_header X-Content-Type-Options "nosniff" always;
	add_header X-Frame-Options "SAMEORIGIN" always;
	add_header X-Xss-Protection "1";
	add_header Content-Security-Policy "default-src 'self'";

So my new final code looks like:

Code:
server {
    listen      80;
    listen      443 ssl http2;
    server_name example.com;

        ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_dhparam /etc/ssl/certs/dhparam.pem;
        ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
        ssl_session_cache shared:SSL:50m;
        ssl_stapling on;
        ssl_stapling_verify on;
        add_header Strict-Transport-Security max-age=15768000;

	add_header X-Content-Type-Options "nosniff" always;
	add_header X-Frame-Options "SAMEORIGIN" always;
	add_header X-Xss-Protection "1";
	add_header Content-Security-Policy "default-src 'self'";

        ssl_session_timeout 1h;

    #off on purpose
        #ssl on;

    rewrite     ^   https://www.example.com$request_uri? permanent;
}

server {
        server_name www.example.com;

        #listen 80;
        listen 443 ssl http2;

        ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_dhparam /etc/ssl/certs/dhparam.pem;
        ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
        ssl_session_cache shared:SSL:50m;
        ssl_stapling on;
        ssl_stapling_verify on;
        add_header Strict-Transport-Security max-age=15768000;

	add_header X-Content-Type-Options "nosniff" always;
	add_header X-Frame-Options "SAMEORIGIN" always;
	add_header X-Xss-Protection "1";
	add_header Content-Security-Policy "default-src 'self'";

        ssl_session_timeout 1h;

    ssl on;
}

This web page goes into a lot more detail about securing your NGINX setup, and I highly recommend reading it: https://letsecure.me/secure-web-deployment-with-lets-encrypt-and-nginx/

Note, regarding this line:
Code:
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

You CAN in theory allow only TLSv1.2, but doing so will keep older browsers from connecting to your server - older browsers being IE10 and older, Android 4.3 and older, and Java 6 & 7. Consider your audience when contemplating this. I left the full line in the main code cause I know my audiences have a tendency to have older browsers around. Note TLSv1.0 will be end of life on 30 June 2018.
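If you do decide your audience can live without those older browsers, the stricter version of that line (assuming your nginx build's OpenSSL supports TLS 1.2) is just:

```nginx
ssl_protocols TLSv1.2;
```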
 
Code to block bad users from your website (put inside your domain.com.conf file within the "/sites-available/" folder):

Code:
server {
    location / {
            #block bad users
            deny 73.104.161.237;
            deny 23.91.70.24;
        }
}
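Once that list grows past a handful of IPs, it's cleaner to keep them in their own file and include it - the path below is hypothetical, put it wherever you keep your nginx includes:

```nginx
# contents of /etc/nginx/conf.d/blocklist.conf (hypothetical path):
#   deny 73.104.161.237;
#   deny 23.91.70.24;

server {
    location / {
        include /etc/nginx/conf.d/blocklist.conf;
    }
}
```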
 