Nginx

Assume you have a website with a higher load, a higher demand for availability, or both. You can do the following:

  • Duplicate your web server (and its content, of course) as many times as you need
  • Put a load balancer in front of the web servers, ideally in combination with a firewall ruleset
  • Terminate TLS once on the load balancer, or on each web server directly, whichever you prefer
  • You can also double the load balancer with two boxes to get redundancy on this level (see the sketch below)
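
On OpenBSD, doubling the load balancer is usually done with carp(4): both boxes share a virtual IP, and the backup takes over when the master goes away. A minimal sketch, assuming the public address from the diagram below becomes the shared virtual IP (each box then gets its own address on vio0), a /24 netmask, and a shared password of your choice:

# /etc/hostname.carp0 on the master load balancer
# (same file on the backup, but with a higher advskew, e.g. 100)
inet 136.244.113.129 255.255.255.0 NONE vhid 1 carpdev vio0 pass mysecretpw advskew 0

Bring it up with "sh /etc/netstart carp0" on both boxes.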

Network Diagram

                   +----------------+
                   |       www      |
                   +--------+-------+
                            |
                   +--------+--------+
                   | 136.244.113.129 | vio0
                   |  Loadbalancer   |
                   |   10.24.96.3    | vio1
                   +--------+--------+
                            |
        +---------------------------------------+
        |                   |                   |
        |                   |                   |
+-------+-------+  +--------+-------+  +--------+-------+
|     www1      |  |      www2      |  |      www3      |
|  10.24.96.4   |  |   10.24.96.5   |  |   10.24.96.6   |
+---------------+  +----------------+  +----------------+

Config Web1 - 3

You can run any kind of web server you want: Apache, Nginx, httpd, … Here is the config for Nginx on OpenBSD.
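
On OpenBSD, Nginx comes from packages and is managed with rcctl; roughly like this:

pkg_add nginx
rcctl enable nginx
rcctl start nginx

# after changing the config
nginx -t && rcctl reload nginx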

nginx.conf

# lb.8192.ch

#
# HTTP Server lb.8192.ch
#
server {
    listen        80;
    server_name   lb.8192.ch www.lb.8192.ch;

    access_log    /var/log/nginx-nossl/lb.8192.ch.log main;
    error_log     /var/log/nginx-nossl/lb.8192.ch-error.log;

    #root          /var/www/virtual/lb.8192.ch;

    index         index.html index.htm;

    location /.well-known/acme-challenge/ {
        rewrite ^/.well-known/acme-challenge/(.*) /$1 break;
        root /acme;
    }

    location / {
        return 301    https://$host$request_uri;
    }
}

#
# HTTPS Server lb.8192.ch
#
server {
    listen        443 ssl;
    server_name   lb.8192.ch www.lb.8192.ch;

    access_log    /var/log/nginx/lb.8192.ch.log main;
    error_log     /var/log/nginx/lb.8192.ch-error.log;

    root          /var/www/virtual/lb.8192.ch;

    index         index.html index.htm;

    ssl_certificate_key         /etc/ssl/private/lb.8192.ch.key;
    ssl_certificate             /etc/ssl/lb.8192.ch.fullchain.pem;


}
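
Note that www1-3 only ever see connections from the load balancer, so their access logs show 10.24.96.3 instead of the real client. If your nginx build includes the realip module and the load balancer adds an X-Forwarded-For header (see the note after the load balancer config below), something like this inside the server block restores the client address:

    set_real_ip_from  10.24.96.3;
    real_ip_header    X-Forwarded-For;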

/etc/acme-client.conf

authority letsencrypt {
  api url "https://acme-v02.api.letsencrypt.org/directory"
  account key "/etc/acme/letsencrypt-privkey.pem"
}

authority letsencrypt-staging {
  api url "https://acme-staging-v02.api.letsencrypt.org/directory"
  account key "/etc/acme/letsencrypt-staging-privkey.pem"
}

#
# My Stuff
#

domain lb.8192.ch {
  alternative names { www.lb.8192.ch }
  domain key "/etc/ssl/private/lb.8192.ch.key"
  domain full chain certificate "/etc/ssl/lb.8192.ch.fullchain.pem"
  sign with letsencrypt
}
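
With this in place, the certificate is fetched and renewed with acme-client(1); a sketch, schedule of your choice:

# initial request, verbose
acme-client -v lb.8192.ch

# root's crontab: check daily, reload nginx only when a new cert was written
0 3 * * * acme-client lb.8192.ch && rcctl reload nginx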

Config Loadbalancer

/etc/pf.conf

root@lb ~# cat /etc/pf.conf

### DEFAULT SETTINGS ###

set block-policy drop
set limit states 50000
set state-defaults pflow
set optimization normal
set ruleset-optimization none
set skip on { lo0 enc0 }

# Normalize Traffic
match inet  scrub (no-df max-mss 1380)
match inet6 scrub (max-mss 1360)

# Block all
block log

# allow outgoing traffic
pass out      quick on vio1
pass out  log quick

# allow incoming traffic
pass      log quick inet proto tcp from any to (self) port { 22 80 443 }
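
After editing, the ruleset can be syntax-checked and loaded with pfctl, for example:

pfctl -nf /etc/pf.conf    # parse only, don't load
pfctl -f /etc/pf.conf     # load the ruleset
pfctl -sr                 # show the currently loaded rules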

nginx.conf

root@lb ~# cat /etc/nginx/nginx.conf

worker_processes  1;

worker_rlimit_nofile 1024;
events {
    worker_connections  800;
}


http {
    upstream backends {
      server 10.24.96.4:443 max_fails=1 fail_timeout=1s;
      server 10.24.96.5:443 max_fails=1 fail_timeout=1s;
      server 10.24.96.6:443 max_fails=1 fail_timeout=1s;
    }

    default_type  application/octet-stream;
    index         index.html index.htm;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    keepalive_timeout  65;

    server_tokens off;

    #
    # HTTPS Server lb.8192.ch
    #
    server {
      listen        443 ssl;
      server_name   lb.8192.ch www.lb.8192.ch;

      access_log    /var/log/nginx/lb.8192.ch.log main;
      error_log     /var/log/nginx/lb.8192.ch-error.log;

      root          /var/www/virtual/lb.8192.ch;

      index         index.html index.htm;

      ssl_certificate_key   /etc/ssl/private/lb.8192.ch.key;
      ssl_certificate       /etc/ssl/lb.8192.ch.fullchain.pem;
      ssl_protocols         TLSv1.1 TLSv1.2;
      ssl_ciphers           HIGH:!aNULL:!MD5;

      location / {
        proxy_pass  https://backends;
      }

    }
}
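
The log_format above already logs $http_x_forwarded_for, but the location block as shown does not set that header, so the backends only see the load balancer's address. A possible extension (not part of the setup above) would be:

      location / {
        proxy_pass        https://backends;
        proxy_set_header  Host             $host;
        proxy_set_header  X-Real-IP        $remote_addr;
        proxy_set_header  X-Forwarded-For  $proxy_add_x_forwarded_for;
      }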

Throughput Tests

Running hey on macOS gives me the following results. It seems I'm able to serve about 3,800 (almost empty) pages over TLS in 60 seconds. Not that bad …
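
hey is a small HTTP load generator written in Go; if you don't have it yet, it can be installed via Homebrew or the Go toolchain:

brew install hey
# or
go install github.com/rakyll/hey@latest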

mbp:~ user$ time hey -z 60s -c 100 -disable-keepalive https://lb.8192.ch

Summary:
  Total:	60.7983 secs
  Slowest:	3.8704 secs
  Fastest:	0.8506 secs
  Average:	1.5946 secs
  Requests/sec:	62.5346

  Total data:	49426 bytes
  Size/request:	13 bytes

Response time histogram:
  0.851 [1]	|
  1.153 [39]	|■
  1.455 [1466]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  1.757 [1747]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  2.058 [112]	|■■■
  2.360 [107]	|■■
  2.662 [192]	|■■■■
  2.964 [123]	|■■■
  3.266 [8]	|
  3.568 [4]	|
  3.870 [3]	|


Latency distribution:
  10% in 1.3743 secs
  25% in 1.4289 secs
  50% in 1.4742 secs
  75% in 1.5410 secs
  90% in 2.1760 secs
  95% in 2.6154 secs
  99% in 2.9195 secs

Details (average, fastest, slowest):
  DNS+dialup:	0.8579 secs, 0.8506 secs, 3.8704 secs
  DNS-lookup:	0.0007 secs, 0.0000 secs, 0.0025 secs
  req write:	0.0001 secs, 0.0000 secs, 0.0010 secs
  resp wait:	0.7366 secs, 0.0298 secs, 1.5347 secs
  resp read:	0.0001 secs, 0.0000 secs, 0.0003 secs

Status code distribution:
  [200]	3802 responses


real	1m0.822s
user	0m6.916s
sys	0m2.824s
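
As a quick sanity check, the numbers add up: 3802 responses in 60.8 s is about 62.5 requests per second, and with 100 concurrent connections at an average latency of roughly 1.59 s you would also expect about 100 / 1.59 ≈ 63 requests per second.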

Any comments?
