2013-04-23

Loggly and Puppet

Update 2013-10-22: This post refers to Loggly generation 1, and may (most likely) not work with Loggly's new second generation product offering.

As a follow-up to my previous post on pulling data from Loggly using jQuery, this post will show how to use Puppet to automatically register and configure instances to send data to Loggly.

At ServiceTrade, we use Amazon Web Services for almost all of our infrastructure. All our production servers are EC2 instances. The configuration of all the instances is kept in Puppet manifests. Instances go down and come up all the time, and Puppet helps us make sure they are all configured exactly alike out of the box.

A server cannot send data to Loggly unless you have previously told Loggly to accept data from it. Unfortunately, with server instances being created and removed automatically, it would be impossible to keep up with hand-registering each instance with Loggly. Fortunately, we can use Loggly's API and some Puppet manifests to register our instances for us when they come up.

We use rsyslog on our instances for collecting system log data (syslog, kernel, mail logs, etc.). rsyslog can tail log files and forward them to other log files, or even to other servers via TCP or UDP. Loggly has great documentation on setting up rsyslog to forward log files to Loggly.

First, we need to have Puppet manage rsyslog. This ensures that rsyslog will be installed, and gives us control over rsyslog's master configuration file and a directory of instance-specific configuration files. Below is the rsyslog module file. All files are relative to the Puppet root directory.

modules/rsyslog/manifests/init.pp
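A minimal sketch of what this class might contain (the package, service, and path names assume a typical Debian-style rsyslog install):

    class rsyslog {
        package { 'rsyslog':
            ensure => installed,
        }

        # Master configuration file, served from modules/rsyslog/files/rsyslog.conf
        file { '/etc/rsyslog.conf':
            ensure  => file,
            owner   => 'root',
            group   => 'root',
            mode    => '0644',
            source  => 'puppet:///modules/rsyslog/rsyslog.conf',
            require => Package['rsyslog'],
            notify  => Service['rsyslog'],
        }

        # Directory that will hold the instance-specific (Loggly) snippets
        file { '/etc/rsyslog.d':
            ensure  => directory,
            require => Package['rsyslog'],
        }

        service { 'rsyslog':
            ensure  => running,
            enable  => true,
            require => Package['rsyslog'],
        }
    }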

As it says, the main configuration file will be in modules/rsyslog/files/rsyslog.conf. The config file is the standard one installed by our package manager with a few minor alterations, seen here:

modules/rsyslog/files/rsyslog.conf
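Abbreviated to the parts that matter here, a sketch of those alterations (loading the imfile module so rsyslog can tail arbitrary files, plus the include line at the end):

    # Input modules
    $ModLoad imuxsock   # local system log messages
    $ModLoad imklog     # kernel log messages
    $ModLoad imfile     # lets rsyslog tail arbitrary log files (used by the Loggly snippets)

    # ... the distribution's standard rules for syslog, kern.log, mail.log, etc. ...

    # Pull in everything under /etc/rsyslog.d, where Puppet will drop the
    # Loggly-specific configuration files
    $IncludeConfig /etc/rsyslog.d/*.conf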

That last line is important, because all our Loggly-specific configurations will go in /etc/rsyslog.d.

Now that rsyslog is set up, we need to tell each instance where to send its log files. Additionally, we need to register each instance with each log file it will be sending to Loggly. Each log file is sent to a different endpoint, which Loggly refers to as an input. Each input has an ID, and a specific port on the Loggly logging server that maps to that ID. We have already set up a specific Loggly user for API purposes, and we'll use that user to do the registration.

First, we set up a new module that will hold our Loggly API configuration.

modules/loggly/manifests/init.pp
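Something along these lines, where the credentials, input names, IDs, and ports are placeholders for the values from your own Loggly account:

    class loggly {
        # Loggly account and API credentials (placeholder values)
        $loggly_host  = 'example.loggly.com'
        $api_user     = 'api-user'
        $api_password = 'secret'

        # Each Loggly input, keyed by name, with the ID and syslog port that
        # Loggly assigned when the input was created
        $inputs = {
            'apache' => { 'id' => '1111', 'port' => '10514' },
            'app'    => { 'id' => '2222', 'port' => '10515' },
        }

        # Directory for storing the API responses from device registration
        file { '/var/lib/loggly':
            ensure => directory,
        }
    }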

The hash of inputs allows us to easily reference each input we've mapped in Loggly, without having to remember specific ID and port numbers elsewhere in our manifests.

We'll also set up a Puppet template for tailing our log files and forwarding them to Loggly:

modules/loggly/templates/rsyslog.loggly.conf.erb
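A sketch of the template, using rsyslog's imfile module; @logfile and @input_port are supplied by the loggly::device definition below, and @name is the name the resource was declared with:

    # Tail <%= @logfile %> and tag each line with the input name
    $InputFileName <%= @logfile %>
    $InputFileTag <%= @name %>:
    $InputFileStateFile state-<%= @name %>
    $InputFileSeverity info
    $InputFileFacility local7
    $InputRunFileMonitor

    # Forward everything carrying this tag to the Loggly port for the input (TCP),
    # then discard it so it is not also written to the local log files
    if $programname == '<%= @name %>' then @@logs.loggly.com:<%= @input_port %>
    & ~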

The template will take the values defined in the loggly class's $inputs hash and produce one config file per input, each of which will ultimately end up in /etc/rsyslog.d.

One last Loggly manifest file is needed. This one generates the config file for an input, then registers the server against the input using Loggly's API.

modules/loggly/manifests/device.pp
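A sketch of the definition; the curl command, and in particular the API endpoint and parameters, is illustrative and should be checked against Loggly's gen-1 API documentation:

    define loggly::device($logfile) {
        include loggly

        # Look up the input's ID and port by the name this resource was given
        $input      = $loggly::inputs[$name]
        $input_id   = $input['id']
        $input_port = $input['port']

        # One rsyslog snippet per input, rendered from the template above
        file { "/etc/rsyslog.d/loggly-${name}.conf":
            ensure  => file,
            content => template('loggly/rsyslog.loggly.conf.erb'),
            require => File['/etc/rsyslog.d'],
            notify  => Service['rsyslog'],
        }

        # Register this server as a device on the input via Loggly's API.
        # The stored response doubles as a record of what Loggly said and as
        # the 'creates' marker that stops Puppet from registering twice.
        exec { "loggly-register-${name}":
            command => "/usr/bin/curl -s -u ${loggly::api_user}:${loggly::api_password} -d 'input_id=${input_id}' http://${loggly::loggly_host}/api/devices/ -o /var/lib/loggly/register-${name}.json",
            creates => "/var/lib/loggly/register-${name}.json",
            require => File['/var/lib/loggly'],
        }
    }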

This manifest uses the $name passed to the definition to gather data from the $inputs hash to build a config file. It also execs a cURL call to Loggly's API to register the device for the input. The response to this call is stored for two reasons: first, if anything goes wrong, we have a record of Loggly's response to our request; and second, if the response file already exists, Puppet will know it does not need to make another call to the API.

All that remains is to use our loggly::device definition in a node definition:

manifests/site.pp
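For example, a node that ships an Apache access log and an application log (the hostname and paths are placeholders matching the $inputs hash above):

    node 'web01.example.com' {
        include rsyslog
        include loggly

        loggly::device { 'apache':
            logfile => '/var/log/apache2/access.log',
        }

        loggly::device { 'app':
            logfile => '/var/log/myapp/app.log',
        }
    }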

Since our input IDs and ports are bound to specific input names in our $inputs hash, we only need to know the names of the inputs we want to configure this instance to send to, and loggly::device does the rest.

Hopefully, at some point we (or someone else) will get around to releasing a proper Puppet module for this. Until then, I hope this post helps you get set up with the Loggly centralized logging service.

2013-04-12

Loggly from Javascript

Update 2013-10-22: This post refers to Loggly generation 1, and may (most likely) not work with Loggly's new second generation product offering.

For my most recent Dev Days project, I implemented centralized logging for our application, ServiceTrade. I don't want to worry about running our own indexing server, or storing the logs long term, so I investigated several SaaS logging solutions and eventually settled on Loggly. I was impressed with the ease of setting up our account, defining our logging inputs and even integrating with our Puppet configuration management infrastructure. For long term storage, they push raw log files to an S3 bucket of your choosing. Their customer support seemed very eager to help with the one issue I had. All-in-all, I've been pleased with the product.

One thing about Loggly that could use a little work is saved searches. First off, when Loggly gives you a graph of events from a saved search (a very cool feature), the graph sometimes loses information when zooming in and clicking on a section to see specific logs. Visiting the page for a saved search on a specific set of inputs and clicking on the graph to pull up the log lines for that search will give log lines across all inputs, not just the ones the saved search is limited to.

Secondly, you are limited to only 5 saved searches at the moment. The saved search feature is in beta, so hopefully they will allow saving of more (ideally unlimited) searches in the future. (Update: apparently you can have up to 2000 saved searches; the wording on the saved searches list page is just out of date.)

We are using their excellent API to pull down data and do our own visualizations of multiple saved searches. I'm using jQuery on a simple HTML page to query the API. There are a few caveats. The following information will hopefully prevent someone else from spending the half-hour I did trying to figure this out.

First of all, the API uses HTTP basic authentication. jQuery's get call does not handle HTTP authentication, so I had to use the more verbose ajax method.

Also, since the request is cross-domain, I had to use JSONP, which Loggly supports.

Finally, Loggly's API returns a Bad Request response if you send any parameters that it does not recognize. Unfortunately, unless you tell it otherwise, jQuery.ajax() will append a timestamp query parameter to the request to prevent response caching. In order to get everything to work, I had to turn off that cache-busting behavior so the extra parameter is never sent.

Here is what the final call looks like:
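Roughly like this — the account name, credentials, and query here are placeholders:

    // Placeholder account, credentials, and search query
    $.ajax({
        url: 'http://example.loggly.com/api/search',
        data: { q: 'error' },   // the search to run
        dataType: 'jsonp',      // cross-domain request, so use JSONP
        cache: true,            // don't let jQuery append its "_" cache-busting parameter
        username: 'api-user',   // basic auth options of $.ajax (not available on $.get)
        password: 'secret',
        success: function (data) {
            // data contains the matching log events; render them as needed
            console.log(data);
        }
    });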

You might also want to check out my upcoming blog post on using Puppet to automatically register servers with Loggly.