Getting started with self hosting - episode 4
Set up an application on your server
Achieve segregation between apps
Why would you want to separate all the apps from each other?
It’s mainly to solve two problems:
- The security problem. If one app has a security issue and an attacker breaks into your server through it, they gain access to everything.
- The version problem. An example would be two apps that need two different versions of Node.js. Yes, you can fix this specific issue for Node.js by introducing more tooling you will forget about next week, but you will probably hit the same issue again for PHP, Python, Ruby, …. At the end of the day, you will end up spending more time hitting your head against the wall trying to fix those problems while you probably have better things to do.
To make all your apps independent from each other, there’s a wide range of tech available to you, but it falls into 3 groups of solutions:
- 1 machine per application. Basically you’re buying a new machine every time you need to add something new. Don’t do that unless there is no other choice.
- VM-based solutions: VMware, Proxmox, ….
- Container-based solutions: Docker
Choosing between them is a trade-off:
| | 1 machine per application | VM-based solution | Container-based solution |
| --- | --- | --- | --- |
| Overhead in resources | very large | large | tiny |
| Security | best | best | ok |
| Ease of maintenance | hard | hard | easy |
| Ease of migration | hard | hard | easy |
If you’re looking for a cost-effective solution, containers are great, considering VM-based solutions will always consume more resources. On the other hand, VMs are more secure by achieving real process isolation (your apps run on a different kernel, whereas container-based solutions use the host kernel).
In this guide, we will focus on the container-based solution using Docker, as my motivation was to be as cost effective as possible and I considered that the added value of VMs didn’t outweigh their cost. To make things simpler, this guide will only cover an implementation on a single server, as it makes maintenance easier and is also much cheaper.
Install apps
How it works
Nothing better than a good old-school diagram to gain a better understanding of what we’re building here:
How it works: when a user attempts to visit app1.domain.com, the request first hits the reverse proxy. The reverse proxy’s role is to forward the request to the container running the service associated with app1.domain.com and to send the container’s response back to the user’s browser.
A few things here:
- Reverse proxy: we will use a software called nginx to achieve this. It will be installed on the host, listening on ports 80 and 443 (corresponding to HTTP and HTTPS). When the reverse proxy receives a request for a domain it knows, it forwards it to the proper container, in the same way an air traffic controller manages incoming flights. Our services will only be accessible through the reverse proxy, adding security and improving loading speed along the way.
- Containers: they are the building blocks running our applications. They expose their service through the loopback IP (aka localhost) on a certain port so that nobody can access them directly from the internet.
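To make the loopback idea concrete, here is a quick experiment you can run once Docker is installed (the install commands are just below); it uses a throwaway nginx container as a stand-in for a real app and an arbitrary port number:
# bind the container's port 80 to port 10000 on the loopback interface only
docker run -d --rm --name loopback-demo -p 127.0.0.1:10000:80 nginx
# reachable from the server itself...
curl -I http://127.0.0.1:10000
# ...but not from the outside world, since it isn't bound on a public interface
docker stop loopback-demo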
Install: Issue the following commands:
sudo apt-get update
# note: on Debian/Ubuntu the Docker engine package is called docker.io, not docker
sudo apt-get install nginx docker.io docker-compose
sudo usermod -aG docker `whoami`
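To check everything is in place, verify the versions and run a test container (the docker group change only applies to new sessions, so log out and back in before the last command):
nginx -v
docker --version
docker-compose --version
docker run --rm hello-world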
Skeleton of an app
I wanted to share here what most of the apps you might be interested in installing are made of:
- Code of the app: it is the only mandatory piece; it is what makes the app unique. Essentially you might need to compile the source, install dependencies for the code to run, and install some other software to run the so-called code. It all depends on how the application was developed in the first place. For example, if the app is made with:
  - Golang: the application will be run as a fat binary, which means you can just run it and expect it to work. With Golang, the binary contains all the libraries it needs to run on your environment without having to tweak anything.
  - NodeJS: the application will be run either by typing npm run something_here or node entrypoint.js, once you’ve installed all the dependencies and made sure your version of Node is compatible with the one recommended by the application maintainer.
  - Java: the application will be run either by calling the java command on the jar representing your application or via some sort of Tomcat server.
  - PHP: the application will be run by either Apache or Nginx with a special module able to interpret the code and execute it.
  - ….
- Database: we’ll define a database as a third-party system used by the core application to store/retrieve data. I know these days non-technical people like to employ the word database when they store Word documents on Dropbox, but that’s not really what we mean here, as we emphasise the ability of the application to store and retrieve structured data without manual intervention. A database isn’t mandatory, but most applications require one. Examples of databases you are very likely to encounter: Postgres, MySQL, SQLite, MongoDB.
- Other: less frequently, you might see some other systems you have to install to make your application work:
  - some sort of messaging queue: RabbitMQ, Kafka, …
  - some sort of search component: Solr, Elasticsearch
You will see applications made of any combination of these, but the most frequent ones are (1 = code, 2 = database, 3 = other components):
- 1 only: simplest to install
- 1 and 2: medium difficulty to install
- 1, 2 and 3: hardest to install. Usually, grab a large coffee and prepare yourself for a few hours of fun.
Install an application
Now that we have all the tooling in place, we’ll focus on the process to follow in order to properly install an app on the server. As an example, we will start with 2 apps:
- Nuage: a web-based client to manage files on your server. As shown in the section above, Nuage only needs 1 to work.
- Lychee: a place to store your photos. As shown in the section above, Lychee needs 1 and 2 to work properly.
The installation process is always the same:
- Set up your domain name
- Configure the reverse proxy to serve the application on the internet
- Generate the SSL certificate to enable HTTPS on the application
- Deploy the application container(s)
Set up the domain
Let’s say:
- nuage will be accessed from the domain files.domain.com
- lychee will be accessed from the domain photos.domain.com
To set up your domains, go to the website of your domain name provider and create the following entries:
CNAME: files.domain.com -> domain.com
CNAME: photos.domain.com -> domain.com
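If you want to check that the records have propagated before going further (it can take a while), dig from the dnsutils package does the job:
dig +short files.domain.com
dig +short photos.domain.com
# both should eventually resolve to the same address as domain.com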
Configuring our reverse proxy
Nginx has two directories you are interested in:
- /etc/nginx/sites-available/: contains a list of files, each file corresponding to the configuration of an application you have installed
- /etc/nginx/sites-enabled/: contains a list of symbolic links, each link pointing to the configuration of an application you want to see running
The simplest configuration looks like:
server {
    listen 80;
    server_name files.domain.com;

    location / {
        proxy_set_header Host $host:$server_port;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 90;
        proxy_pass http://127.0.0.1:10000;
    }
}
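As a sanity check, once a minimal configuration like this one is enabled (we’ll enable configs in “Wrapping everything together”) and while nothing is listening on port 10000 yet, you can verify that requests actually reach the proxy from the server itself:
curl -sI -H "Host: files.domain.com" http://127.0.0.1/ | head -n 1
# expect "HTTP/1.1 502 Bad Gateway": nginx answered, the backend just isn't there yet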
This configuration is simple and does the job, but there are many improvements we can make:
- redirect HTTP traffic to HTTPS
- compress your resources to make your application faster to load
- instruct the browser to cache some resources
- mitigate common attacks by using an application firewall
Redirect HTTP to HTTPS
To force the usage of HTTPS, your configuration file needs to look like this:
server {
    listen 80;
    server_name files.domain.com;
    return 301 https://$server_name$request_uri;
}
server {
    listen 443 ssl;
    server_name files.domain.com;
    ssl_certificate /etc/letsencrypt/live/domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/domain.com/privkey.pem;
    ...
    location / {
        ...
        add_header Strict-Transport-Security "max-age=31536000; includeSubdomains; ";
    }
}
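Once this configuration is live, a quick way to verify the redirect works is to look at the response on port 80:
curl -sI http://files.domain.com/ | head -n 3
# expect "301 Moved Permanently" with a Location header pointing to https://files.domain.com/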
To make it work, we will need to use an SSL certificate. We’ve also enabled a browser security feature called HSTS. HSTS is a very cheap/simple security measure protecting your users against man-in-the-middle attacks (e.g. “New Tricks For Defeating SSL In Practice”). We simply tell the browser to always reach the site over HTTPS, even if a page or link is trying to load something over plain HTTP.
Compressing resources
Because loading speed matters to browser vendors, it’s almost certain your browser supports decompression of content on the fly. The goal here is to lower the amount of data sent across the wire by compressing the response, thus improving loading time. Use and abuse this unless you have a good reason not to.
server {
    ...
    location / {
        ...
        gzip on;
        gzip_comp_level 6;
        gzip_vary on;
        gzip_min_length 1000;
        gzip_proxied any;
        gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
        gzip_buffers 16 8k;
    }
}
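Once the site is live over HTTPS, you can confirm compression kicks in, assuming the page served at / is bigger than the 1000 byte threshold:
curl -s -H "Accept-Encoding: gzip" -D - -o /dev/null https://files.domain.com/ | grep -i content-encoding
# expect: content-encoding: gzip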
Browser Caching
Considering a browser can cache resources, you can make your application load much faster:
map $sent_http_content_type $expires {
    default off;
    ~text/html epoch;
    ~text/css 7d;
    ~application/javascript 7d;
    ~image/ 30d;
    ~font/ 30d;
}
server {
    expires $expires;
    ...
}
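And to check the caching rules once the site is live, look at the headers returned for a static asset (the css path below is only an example; use any file your app actually serves):
curl -s -D - -o /dev/null https://files.domain.com/assets/main.css | grep -iE '^(expires|cache-control)'
# expect an Expires date about 7 days away and Cache-Control: max-age=604800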
WAF
If you are interested in this kind of thing, Nginx has a paid WAF offering that is pretty cool if you want to protect your installation against common attacks.
Wrapping everything together
Fire up a terminal on your server and, as root (sudo -i, since /etc/nginx requires root privileges), type:
# create configuration for nuage
cat > /etc/nginx/sites-available/files.conf <<'EOF'
map $sent_http_content_type $expires {
    default off;
    ~text/html epoch;
    ~text/css 7d;
    ~application/javascript 7d;
    ~image/ 30d;
    ~font/ 30d;
}
server {
    listen 80;
    server_name files.domain.com;
    return 301 https://$server_name$request_uri;
}
server {
    listen 443 ssl;
    server_name files.domain.com;
    client_max_body_size 1024M;
    expires $expires;
    #ssl_certificate /etc/letsencrypt/live/domain.com/fullchain.pem;
    #ssl_certificate_key /etc/letsencrypt/live/domain.com/privkey.pem;
    location / {
        add_header Strict-Transport-Security "max-age=31536000; includeSubdomains; ";
        proxy_set_header Host $host:$server_port;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://127.0.0.1:10006;
        proxy_read_timeout 90;
        gzip on;
        gzip_comp_level 6;
        gzip_vary on;
        gzip_min_length 1000;
        gzip_proxied any;
        gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
        gzip_buffers 16 8k;
    }
}
EOF
# Create configuration for lychee
cat > /etc/nginx/sites-available/photos.conf <<'EOF'
map $sent_http_content_type $expires {
    default off;
    ~text/html epoch;
    ~text/css 1d;
    ~application/javascript 1d;
    ~image/ 30d;
    ~font/ 30d;
}
server {
    listen 80;
    server_name photos.domain.com;
    return 301 https://$server_name$request_uri;
}
server {
    listen 443 ssl;
    server_name photos.domain.com;
    client_max_body_size 1024M;
    expires $expires;
    #ssl_certificate /etc/letsencrypt/live/domain.com/fullchain.pem;
    #ssl_certificate_key /etc/letsencrypt/live/domain.com/privkey.pem;
    location / {
        proxy_set_header Host $host:$server_port;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://127.0.0.1:10018;
        proxy_read_timeout 90;
        add_header Strict-Transport-Security "max-age=31536000; includeSubdomains; ";
        gzip on;
        gzip_comp_level 6;
        gzip_vary on;
        gzip_min_length 1000;
        gzip_proxied any;
        gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
        gzip_buffers 16 8k;
    }
}
EOF
Don’t forget to edit those files and replace the server_name with your own domain. Now that we’ve created our proxy configuration, we can deploy it live:
sudo ln -s /etc/nginx/sites-available/files.conf /etc/nginx/sites-enabled/files.conf
sudo ln -s /etc/nginx/sites-available/photos.conf /etc/nginx/sites-enabled/photos.conf
sudo nginx -t && sudo service nginx restart
Enable HTTPS on your sites
Plain HTTP usage is decreasing over time as more and more websites use HTTPS. Self-signed certificates are good for development, but if you do it wrong, chances are you will be the victim of a man-in-the-middle attack. When I say do it wrong, I mean making either of these mistakes:
- not letting your OS know about your self-signed certificate
- not verifying your actual SSL certificate before clicking on “I understand the risk and I want to pursue the navigation anyway”.
If you don’t want to spend money on SSL certificates, you can use Let’s Encrypt, a free provider of SSL certificates. As Let’s Encrypt is free, I can’t advise using it for something you make money from, as it doesn’t come with proper support. Examples of things I found to be a pain with Let’s Encrypt:
- A few months ago, their client was crashing with a weird error. After a painful debugging session, I realised one of their APIs was down and nobody could create SSL certificates for a few hours, with nothing to do but wait until they put it back online.
- After a security report, they shut down the nginx plugin I was a heavy user of instead of fixing the underlying issue (which is still not fixed); hours of “fun” to find an alternative solution and deploy it on all my domains ….
At the end of the day, Let’s Encrypt is free, so we shouldn’t expect too much from them; a solid alternative from Google, who want to force HTTPS everywhere, would have been appreciated, but well …
Since I first wrote this post, things have changed: Let’s Encrypt just released a way to generate wildcard certificates. Basically, if you have many apps, you can now use just 1 certificate instead of one per app, which is very, very, very appreciated.
Installation: To generate SSL certificates, we will need to install a tool called certbot:
curl -X GET https://dl.eff.org/certbot-auto > certbot
chmod a+x certbot
sudo mv certbot /usr/bin/
To make sure certbot was installed correctly, type the command:
certbot --version
You should be greeted with something like certbot 0.14.2. If you get bash: certbot: command not found, then something went wrong; dig up the install documentation.
Create a certificate: Once certbot is installed, generating an SSL certificate is simple (the wildcard is quoted so your shell doesn’t try to expand it):
sudo certbot certonly -d domain.com -d '*.domain.com' \
     --keep --renew-by-default --manual --preferred-challenges dns \
     --register --server https://acme-v02.api.letsencrypt.org/directory
Basically, you need to go to the interface of your domain name provider and set the DNS TXT records as asked by certbot. Once done, you should see a message similar to this:
Congratulations! Your certificate and chain have been saved at:
/etc/letsencrypt/live/domain.com/fullchain.pem
Your key file has been saved at:
/etc/letsencrypt/live/domain.com/privkey.pem
Your cert will expire on 2018-06-12. To obtain a new or tweaked
version of this certificate in the future, simply run certbot
again. To non-interactively renew *all* of your certificates, run
"certbot renew"
We’re interested in 2 things:
- the newly generated certificate: /etc/letsencrypt/live/domain.com/fullchain.pem
- the newly generated private key: /etc/letsencrypt/live/domain.com/privkey.pem
If you’ve followed our instructions, you can now edit the nginx configuration files and remove the comments:
server {
    listen 443 ssl;
    server_name app1.domain.com;
    # the 2 following lines were previously commented
    ssl_certificate /etc/letsencrypt/live/domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/domain.com/privkey.pem;
    ...
}
Once done, apply the config by restarting the reverse proxy:
sudo nginx -t && sudo service nginx restart
Renewing a certificate: Using Let’s Encrypt, you will have to renew your certificates every 3 months by executing the following command:
sudo certbot renew
In practice, it’s much easier to do it automatically by having a cronjob run as the root user on a regular basis. To do this:
sudo su # type your password
crontab -e # should open a file you can edit
At the end of the crontab, add the following:
# renew ssl certificates at 03:00 on the 1st of each month, then reload nginx to pick up the new files
0 3 1 * * certbot renew --quiet && service nginx reload
Note that certificates obtained with the manual DNS challenge generally won’t renew unattended unless you provide an authentication hook, so you may still need to re-run the certbot command from time to time.
Build the application container
It can be more or less easy to set up your container. The best scenario is when there is already a nice image and compose file maintained directly by the project.
In our case, the install instructions for:
- nuage are given on the github repo
- lychee are given here
We will use a tool called docker-compose to manage our application, which will make our life much easier. At its core, our application will be configured with a docker-compose.yml file which describes what our application is made of.
Once created, we can start and stop our application by using the following commands:
# start your application
docker-compose up -d
# stop your application
docker-compose stop
Installing Nuage
We must first create our docker-compose.yml file:
##################
# Setup for Nuage:
mkdir -p /app/nuage/ && cd /app/nuage
cat > docker-compose.yml <<EOF
version: '2'
services:
  nuage:
    container_name: nuage_app
    image: machines/nuage
    restart: always
    environment:
    - SECRET_KEY=wGpnvffXRDpeHZA807fNaL62KDUfK4FC
    - NODE_ENV=production
    ports:
    - "127.0.0.1:10006:80"
EOF
What we’re saying here is that the container is already available on Docker Hub, so we don’t need to create it manually (that’s the image: machines/nuage instruction). Once you start your application with the docker-compose up -d command, you should be able to open up a browser, visit files.domain.com and, tadah:
Installing Lychee
Lychee is going to be a bit more tricky to install as they don’t have an official Docker image available and we need to connect to a database to make everything work.
We’ll start by creating our docker-compose.yml file:
##################
# Setup for Lychee:
mkdir -p /app/lychee/data/code && mkdir -p /app/lychee/img && cd /app/lychee
cat > docker-compose.yml <<EOF
version: '2'
services:
  db:
    container_name: lychee_db
    image: mysql
    restart: always
    environment:
      MYSQL_DATABASE: lychee
      MYSQL_PASSWORD: password
      MYSQL_ROOT_PASSWORD: password
      MYSQL_USER: lychee
    volumes:
    - ./data/db:/var/lib/mysql
  lychee:
    container_name: lychee_app
    build: ./img/
    restart: always
    ports:
    # bind to the loopback interface only, so the app is reachable just through the reverse proxy
    - "127.0.0.1:10018:80"
    volumes:
    - ./data/config:/var/www/Lychee/data
    - ./data/uploads:/var/www/Lychee/uploads
    depends_on:
    - db
EOF
We have 2 blocks here:
- 1 block representing the container running the database:
  - image: mysql: that’s the name of the official MySQL image
  - The environment variables allow us to set the default behaviour we wish to have for the database. In our case, it will create a user and a database name our application code will use to connect to it. How did I know about this? Well, by reading the doc.
  - The volumes property is here to say the container and the host will share some files. I would advise adding as a volume everything that constitutes the state of your system. In our case, using MySQL, the important data lives under /var/lib/mysql, which will also exist outside the container in ./data/db. We haven’t done it here, but you could also add the logs under /var/log.
- 1 block representing the container running the Lychee code:
  - build: ./img/: means we will run a custom image whose template is defined under ./img/
Tip: Don’t blindly put such a dumb password in the environment key of the docker-compose.yml file
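One common way to avoid hardcoding it is to let docker-compose read the value from a .env file sitting next to docker-compose.yml and reference it with the ${...} syntax. The variable name below is just an example:
# store the real password outside of the compose file, readable by root only
cat > /app/lychee/.env <<'EOF'
LYCHEE_DB_PASSWORD=pick-something-long-and-random
EOF
chmod 600 /app/lychee/.env
# then, in docker-compose.yml, replace the hardcoded values with:
#   MYSQL_PASSWORD: ${LYCHEE_DB_PASSWORD}
#   MYSQL_ROOT_PASSWORD: ${LYCHEE_DB_PASSWORD}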
Creating our custom image: The goal here is to create a Docker image we can use to launch our application. As a starting point, we create a Dockerfile skeleton:
cd /app/lychee/img
cat > Dockerfile <<'EOF'
FROM ubuntu:latest
MAINTAINER mickael@kerjean.me
RUN apt-get -y update && \
    mkdir data && \
    apt-get install -y git && \
    #####################
    # INSTALL APPLICATION DEPS
    echo "replace this by the installation of dependencies required to install your application" && \
    #####################
    # INSTALL APPLICATION
    echo "replace this by the installation of your application" && \
    #####################
    # CONFIGURATION
    echo "configure things that needs to be configure" && \
    #####################
    # INSTALL OTHER RELATED STUFF
    echo "stuff you want to see in your container" && \
    #####################
    # CLEANUP
    apt-get -y clean && \
    rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
EXPOSE 80
VOLUME ["/data"]
WORKDIR "/data"
EOF
We will fill in this skeleton as we progress in building the environment needed to run our application. To experiment, start a throwaway container:
docker run -ti ubuntu bash
Once you’re inside the container, put your terminal side by side with the installation page of the application you are trying to install, then fool around until you find the list of required packages and configuration you need to make your application run smoothly. After trial and error, I ended up with:
cd /app/lychee/img
cat > Dockerfile <<'EOF'
FROM ubuntu:latest
MAINTAINER mickael@kerjean.me
RUN apt-get -y update && \
    mkdir data && \
    apt-get install -y git && \
    #####################
    # INSTALL APPLICATION DEPS
    echo "replace this by the installation of dependencies required to install your application" && \
    #####################
    # INSTALL APPLICATION
    echo "replace this by the installation of your application" && \
    #####################
    # CONFIGURATION
    echo "configure things that needs to be configure" && \
    #####################
    # INSTALL OTHER RELATED STUFF
    echo "stuff you want to see in your container" && \
    #####################
    # CLEANUP
    apt-get -y clean && \
    rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
EXPOSE 80
VOLUME ["/data"]
WORKDIR "/data"
EOF
and its starting script:
cd /app/lychee/img
cat > entrypoint.sh <<'EOF'
#!/bin/bash
set -e
chown -R www-data:www-data /var/www/
source /etc/apache2/envvars
/usr/sbin/apache2 -D FOREGROUND
exec "$@"
EOF
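The skeleton above doesn’t reference this script yet; a typical way to wire it in (purely illustrative, not the author’s exact Dockerfile) is to add something like this near the end of the Dockerfile:
# copy the starting script into the image and use it as the entrypoint
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]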
Creating your Dockerfile is a time-consuming process; there’s no magic here, just a lot of trial and error until you end up with the perfect image. Once you’ve finished your template, you can build your app:
docker-compose build
To manage your application lifecycle:
# start an app
docker-compose up -d
# stop an app
docker-compose stop
# cleanup dirty state
docker-compose rm
# check logs
docker-compose logs
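If the application doesn’t come up, the same tooling plus a quick look at the reverse proxy usually pinpoints the problem (the service names are the ones from the compose file above):
docker-compose ps      # is every container actually running?
docker-compose logs db # e.g. inspect the database container of lychee
sudo nginx -t          # and check the proxy config is still valid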
Once you’ve started your application, you can navigate to photos.domain.com. You should be asked for the database config (hostname: db, username: lychee, password: password), which will be stored on your host as well (that’s because of the volume in our docker-compose file). This is mine:
Conclusion
You might think installing something this way is actually a lot of work, and you would be right. The good news is that creating images doesn’t have to be complicated once somebody has already done it right. Lucky for you, I’ve gone through that pain for a lot of apps and I’ve developed tooling to make things easier for me to manage. I’ll publish my tooling once it’s ready for general use.
In the next episode (coming soon) we will talk about maintenance, backups, upgrades and everything in between.