We’ll cover the implementation, querying and reporting aspects.


I have logs but until recently I wasn’t really looking at them, simply because:

  • they’re all spread across different machines. Yes, there are syslog servers, but I never really liked them
  • I found it too time consuming to generate insight from them

This post will focus on what has changed, as we’ll build a log infrastructure that actually fits those needs:

  • centralized
  • can be used by any application
  • easy to use
  • easy to get insight from
  • can create reports


To fit various needs, I wanted the system to be versatile and not tied to any particular application: a sort of general log system that can be used by anything. In this respect, I went with NoSQL, as I didn’t want to tie the log schema to anything specific, and the choice went to CouchDB, a NoSQL database that has a few advantages for storing logs:

  • it speaks HTTP, so you can use curl to manage everything and you can be sure your programming language will be supported
  • it has built-in features to generate reports in any format you want, or even create dashboards from it
  • it’s easy to implement and very low maintenance
  • you can start very small and scale it if you need more
  • it can be queried using map/reduce
  • your data can easily be replicated somewhere else
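Since everything goes over plain HTTP, the whole lifecycle can be driven from curl. As a quick illustration (a sketch assuming a vanilla CouchDB listening locally on the default port 5984, before any lock-down; the database name is just a placeholder):

```shell
# Create a database, write a document into it, and read everything back.
curl -X PUT http://localhost:5984/log_demo
curl -X POST http://localhost:5984/log_demo \
     -H 'Content-Type: application/json' \
     -d '{"level": "info", "msg": "hello"}'
curl http://localhost:5984/log_demo/_all_docs
```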

In our application, we’ll use nginx as a reverse proxy that will speak directly with CouchDB. Sure, you can also use Apache for this, but you’re on your own if you choose that path.




Let’s install this:

cat > docker-compose.yml <<EOF
version: '2'
services:
  couchdb:
    container_name: logger
    image: couchdb
    ports:
      - "1007:5984"
    volumes:
      - ./data:/usr/local/var/lib/couchdb
EOF
docker-compose up -d

CouchDB should now be up and running; let’s try it:

curl -X GET http://localhost:1007
# {"couchdb":"Welcome","uuid":"cf97d724186d3ef3fe9c1916f14d6794","version":"1.6.1","vendor":{"version":"1.6.1","name":"The Apache Software Foundation"}}

You can now open your browser and go to http://localhost:1007/_utils, where you should see a web interface people call Futon.

What you want to do now is:

  • create users: we’ll create an admin user and another user that will be used by our application to store logs
  • create our log database
  • lock down CouchDB, as by default it is very permissive

Create a database

Nothing simpler: click the new database button and name it as you want. I usually create a database per application and call them ‘log_$myappname’.

Create our users

On the bottom right, click the fix this button and create an admin user. When you’re done, go to the _users database, create a new document and make it look like this:

{
    "_id": "org.couchdb.user:app_write",
    "name": "app_write",
    "type": "user",
    "roles": [],
    "password": "my_super_password"
}

and hit save. This user will be used to write our logs.
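If you prefer the command line over Futon, the same user document can be created through CouchDB’s HTTP API (a sketch for CouchDB 1.x reachable on localhost:1007; admin:admin_password is a placeholder for the admin credentials you just created):

```shell
# Create the app_write user by PUTting its document into the _users
# database; CouchDB hashes the "password" field on save.
curl -X PUT http://admin:admin_password@localhost:1007/_users/org.couchdb.user:app_write \
     -H 'Content-Type: application/json' \
     -d '{"_id": "org.couchdb.user:app_write",
          "name": "app_write",
          "type": "user",
          "roles": [],
          "password": "my_super_password"}'
```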

Configure couchdb

On the right-hand side of the CouchDB UI, go to the configuration tab and:

  • set require_valid_user to true. That way, public users won’t be able to access anything
  • set delayed_commits to false
  • set reduce_limit to false, so that we’ll be able to create reduce functions that return more than just a number
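The same three settings can also be changed over CouchDB’s _config HTTP API instead of the Futon UI (a sketch for CouchDB 1.x; admin credentials are placeholders, and note the values must be JSON strings, hence the double quoting):

```shell
# Each setting lives under its section: couch_httpd_auth, couchdb,
# and query_server_config respectively.
curl -X PUT http://admin:admin_password@localhost:1007/_config/couch_httpd_auth/require_valid_user -d '"true"'
curl -X PUT http://admin:admin_password@localhost:1007/_config/couchdb/delayed_commits -d '"false"'
curl -X PUT http://admin:admin_password@localhost:1007/_config/query_server_config/reduce_limit -d '"false"'
```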

By default, CouchDB can’t give write access without read access: it’s all or nothing. The trick we’ll use to allow this behavior anyway is to configure nginx so that anonymous requests are given the proper Authorization header when doing a POST request.


In our log infra, only nginx is exposed on the internet and we’ll use it as a reverse proxy to speak with CouchDB. Our objective here is twofold:

  • configure nginx as a reverse proxy
  • anonymous requests should only be able to write to CouchDB. The trick here is to forward the HTTP Authorization header to CouchDB

This is the configuration I’m using for nginx (/etc/nginx/sites-enabled/log.conf):

server {
    listen         80;
    server_name    log.example.com;
    return         302 https://$server_name$request_uri;
}
server {
    listen         443 ssl;
    server_name    log.example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass              http://127.0.0.1:1007;
        proxy_set_header        Host $host:$server_port;
        proxy_set_header        X-Real-IP $remote_addr;
        proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header        X-Forwarded-Proto $scheme;

        proxy_read_timeout  15;
        gzip on;
        gzip_comp_level 9;
        gzip_vary on;
        gzip_min_length  1000;
        gzip_proxied any;
        gzip_types text/plain text/css application/json application/javascript;
        gzip_buffers 16 8k;

        if ($request_method = POST) {
            set $auth "Basic xxxxxxxxxxxxxxxxxxxx";
        }
        if ( $http_authorization != '' ) {
            set $auth $http_authorization;
        }

        proxy_set_header Authorization $auth;
    }
}

To know what the Authorization header should be for your user (aka the set $auth “Basic xxxxxxxxx…” line), you can go to your terminal and type:

curl -vvv --user app_write:test -X GET https://log.example.com
* Note: Unnecessary use of -X or --request, GET is already inferred.
> GET / HTTP/1.1
> Host: log.example.com
> Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ=
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 401 Unauthorized
< WWW-Authenticate: Basic realm="server"
< Server: CouchDB/1.6.1 (Erlang OTP/17)
< Date: Wed, 14 Jun 2017 04:16:24 GMT
< Content-Type: text/plain; charset=utf-8
< Content-Length: 67
< Cache-Control: must-revalidate
<

The interesting bit is what’s after Authorization:

Basic dXNlcm5hbWU6cGFzc3dvcmQ=
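Alternatively, you can build that token yourself: HTTP Basic auth is just the base64 encoding of user:password. A quick sketch (my_super_password stands in for whatever password you gave the app_write user):

```shell
# The Basic token is base64("username:password"); use printf so no
# trailing newline sneaks into the encoded value.
printf 'app_write:my_super_password' | base64
# → YXBwX3dyaXRlOm15X3N1cGVyX3Bhc3N3b3Jk
```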

Now that everything is done, we need to restart nginx:

# ensure there is no error in our configuration
nginx -t
# restart nginx
service nginx restart

Our backend is now complete!

Log stuff

Log some logs

In this section, we’ll log data coming from the fail2ban utility. Fail2ban is a utility that scans your logs and bans IPs that show malicious signs. If you don’t have it installed, you probably should install it. On my server, the logs look like this:

2017-06-14 17:16:12,329 fail2ban.actions        [1112]: NOTICE  [sshd] Ban

The first thing to do is to create a database for our logs. I’ll call that one log_fail2ban.

The idea is to parse every line of our log and send them to our newly created log infrastructure. So we’ll have to do the following:

  1. grab the lines we’re interested in
  2. parse the line in the json format
  3. send it to our log infra
  4. remove the log files when it’s all done

1: I’m only interested in some specific lines:

cat /var/log/fail2ban.log | grep NOTICE | grep Ban

2: we could use awk or sed, but because I didn’t want to spend hours finding the proper regex for each different type of log, I created a utility that will make our life easier. To install it:

git clone https://github.com/mickael-kerjean/jsonformat
mv jsonformat/jsonformat.py /usr/local/bin/jsonformat && rm -rf jsonformat

What it does is take text coming from stdin and parse it into JSON according to a schema you give as a parameter:

echo "2017-06-14 17:16:12,329 fail2ban.actions        [1112]: NOTICE  [sshd] Ban" | jsonformat --schema '$date $hour _ _ _ \[$process\] _ $ip' --fields 'machine=server'
# {"date": "2017-06-14", "process": "sshd", "machine": "server", "hour": "17:16:12,329", "ip": ""}

3: curl can read its payload from stdin, so we’ll just need to craft the proper URL to send it to our log backend:

echo '{"foo": "foo"}' | curl -X POST https://log.example.com/log_fail2ban -d @- -H "Content-Type: application/json"

But we’ll also have to use xargs to process each line separately:

echo -e '{"foo": "foo"}\n{"hello": "world"}' | xargs -d '\n' -I {} curl -X POST https://log.example.com/log_fail2ban -d '{}' -H "Content-Type: application/json"
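To see what xargs is doing here, you can swap curl for echo: each input line becomes one separate invocation, which is exactly how each JSON document ends up in its own POST request (a quick local sketch; -d is a GNU xargs option):

```shell
# One command is run per input line; {} is replaced by the line itself.
printf '%s\n' '{"foo": "foo"}' '{"hello": "world"}' \
    | xargs -d '\n' -I {} echo "would POST: {}"
# would POST: {"foo": "foo"}
# would POST: {"hello": "world"}
```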

4: when the process completes without error, we can empty the log file on our machine:

echo "" > /var/log/fail2ban.log

To sum up: if we put all the pieces together, we end up with the following command:

sudo cat /var/log/fail2ban.log | grep NOTICE | grep Ban | jsonformat --schema '$date $hour _ _ _ \[$process\] _ $ip' --fields 'machine=server' | xargs -d '\n' -I {} curl -X POST https://log.example.com/log_fail2ban -d '{}' -H "Content-Type: application/json" && sudo sh -c 'echo "" > /var/log/fail2ban.log'

Just add this to a cron job and you’re all set.
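For example, a crontab entry along these lines would ship the logs once a day (the schedule and the script path are placeholders; putting the full pipeline in a small script keeps the crontab readable):

```shell
# /etc/crontab style: ship fail2ban logs every day at 2am.
# /usr/local/bin/ship_fail2ban_logs.sh is a hypothetical wrapper
# holding the pipeline above.
0 2 * * * root /usr/local/bin/ship_fail2ban_logs.sh
```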

From our app

Using your programming language of choice, we now need to do the equivalent of the following bash commands:

# should succeed
curl -X POST https://log.example.com/log_appname/ -d '{ "foo" : "foo" }' -H 'Content-Type: application/json'

# this should yield an error as we can't read anything if we're not logged in
curl -X GET https://log.example.com/log_appname/_all_docs

# this should succeed if you set it to an existing user
curl --user username:password -X GET https://log.example.com/log_appname/_all_docs


Once you have some logs inside CouchDB, you can query them using map/reduce. The way it’s done in CouchDB is to create a special document called a design document, in which we create:

  • views for our data, each of which consists of a map and a reduce function
  • list functions, which allow you to manipulate your view data and display it in any format you want

We’ll go with a concrete example of reporting using our fail2ban logs as shown earlier, and we’ll extract:

  • the number of blocked attacks over a period of time
  • who actually tried to perform the attacks, and how many times over a period

Our documents in couchdb have this form:

{
   "_id": "939ad8536ccf8fd81d1518beaf028244",
   "_rev": "1-1aeafda7be98fef4f6ae505714390898",
   "date": "2017-06-11",
   "process": "sshd",
   "machine": "server",
   "hour": "06:26:02,022",
   "ip": ""
}

To create our reports, we’ll create this document as a starting point:

{
   "_id": "_design/report",
   "language": "javascript",
   "views": {
       "attacks": {
           "map": "function(doc) { var date = new Date(doc.date); emit([doc.machine, date.getFullYear(), date.getMonth(), date.getDate()], 1) }",
           "reduce": "_count"
       },
       "attackers": {
           "map": "function(doc) { emit(doc.date, [doc.ip, doc.process]) }"
       }
   }
}

Now that it’s done, you can go back to Futon, click on the view dropdown and select report -> attacks. From there, you should see your data. Note:

  • the reduce checkbox, which you can tick to actually execute your reduce function and see the number of attacks
  • the grouping dropdown, which you can use to narrow down the result by machine or date. For example, with a grouping of 1, I see:
["server"]  557

With a grouping of 4, I get details by day:

["server", 2017, 5, 11] 122
["server", 2017, 5, 12] 194
["server", 2017, 5, 13] 122
["server", 2017, 5, 14] 115
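These grouped results are also available straight over HTTP, so a report can be pulled with curl rather than through Futon (a sketch; host, database and credentials as set up earlier):

```shell
# group_level=4 aggregates per [machine, year, month, day],
# just like the grouping dropdown in Futon.
curl --user username:password \
    'https://log.example.com/log_fail2ban/_design/report/_view/attacks?group_level=4'
```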

To deliver on our promise, we’ll aggregate our data by creating a reduce function. To do that, click on the view code button; you should now be able to create your reduce function and run it on the fly, while saving the map/reduce functions back into your design document.

  • number of blocked attacks over a period of time

# map
function(doc) {
  var date = new Date(doc.date);
  emit([doc.machine, date.getFullYear(), date.getMonth(), date.getDate()], [doc.ip]);
}
# reduce:
function(keys, values, rereduce) {
  if (rereduce == false) {
    return values.length;
  } else {
    var sum = 0;
    values.forEach(function(value) {
      sum += value;
    });
    return sum;
  }
}
It will give the same result, but if we execute it without the reduce function, it will give the IP addresses of the attackers.

  • who performed the attacks and how many times

# map:
function(doc) {
  var date = new Date(doc.date);
  emit([doc.machine, date.getFullYear(), date.getMonth(), date.getDate()], [doc.ip]);
}
# reduce:
function(keys, values, rereduce) {
  if (rereduce === false) {
    var ret = {};
    values.forEach(function(value) {
      if (!ret[value]) { ret[value] = 0; }
      ret[value] += 1;
    });
    return ret;
  } else {
    var ret = {};
    values.forEach(function(value) {
      for (var key in value) {
        if (!ret[key]) { ret[key] = 0; }
        ret[key] += value[key];
      }
    });
    return ret;
  }
}

You can go much further by creating a real-time dashboard of your logs using the CouchDB changes feed and show functions. If you’re interested in this, the easiest way to get started is to dig into couchapps.