So after a few Twitter experiments that never saw the light of day, I finished Naga last week, and it totally works!

Some things I learned:

  • running SQLite is nonsense and no one should ever do it. I had a lot of trouble getting a SQLite console to access the stored DB file, meaning I was operating mostly blind for stuff going into the DB unless I logged it. That was lame!

  • Logging in JSON is convenient

  • a bunch of Twitter API things that are supposed to be rate-limited aren’t, e.g. blocking. Some, like faving, totally are.

  • Mocha is fun.

Have I seriously worked here like 100 days?

That feels insane. I feel like I know like 11 new facts and that’s it. Okay, let’s try to be more realistic. In the last 100 days I learned:

1) some practical Linux stuff
2) built my second Node app that does a thing
3) figured out how cron notation works
4) made a Hubot do a thing… I guess that means I wrote some CoffeeScript, so, okay, that was a first
5) made a Gulp thing to auto-check for errors or something. I think I deleted it recently, but presumably I can do that again
6) bash scripting
7) real-world SQL
8) any PHP at all, even though the constant use of $this-> in all code still feels baffling
9) ran a git merge like a real boy, where it sticks all that >>>>> master junk into your files
10) uhhhh, fuck, I guess I set up this site? It wasn’t hard but still….
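Since cron notation was one of those things I had to figure out, a note for future me: a crontab line is five time fields followed by the command (the script path here is invented, just an illustration):

```
# ┌ minute (0-59) ┌ hour (0-23) ┌ day of month ┌ month ┌ day of week (0-6, Sun=0)
# │               │             │              │       │
# run a backup script at 3:30am every Monday:
30 3 * * 1 /home/tobias/bin/backup.sh
```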

Okay, I guess I’ll keep at it. I still feel like I’m struggling a lot.

How to work for weeks and do nothing.

Basically I’ve been here five (5!) weeks and haven’t committed a single fix, new feature, or even a PR review (okay, I reviewed ONE PR that was a couple lines long).

Some reasons why this has happened.

1) I didn’t always know exactly what was expected of me. On one project I worked like mad to get Hubot to grab metrics from Amazon CloudWatch, without realizing there was a CloudWatch integration already in place, and that, basically, there was no need to write an integration for this at all.

2) I over-promised a few times. I tried building a number of ‘text-scanning’ tools for the site that I thought would solve a problem, and then delivered versions of them without actually testing them on the exact challenge. This led to over a week of delays. Frustrating. And it’s possible that the webcrawler module I’m using can’t actually be made to scan a localhost/ site, for some reason.

3) I often worked on things where I had no ‘mentor’ really. I took kind of terse responses when asking questions about 1) as ‘don’t bother that guy all the time,’ which definitely wasn’t his intent. Other stuff I worked on used code and frameworks that no one else here uses, so when I was stuck there was no one to turn to.

4) distraction as a cure for frustration. Instead of tackling often serious roadblocks head on, I have a tendency to end up nibbling around the edges. Entire DB contains bad structure information? Why not write a script to update individual tables and limp along? Or better yet, go polish that Hubot that’s already working just fine. The best solution to this one is probably to keep up with things like this dev blog or a .plan file, because it forces me to clearly state what is actually blocking me rather than just falling into the trap of staying busy.


I am so wicked-proud of my little crawler/text flattener:

Named after the Great Goddess of Teotihuacan, who often appears with spiders (I guess it’s inaccurate to call her a goddess of spiders), my little app grabs all the text from everywhere on a site. It’s a node app and has a few limitations:

Some file types seem to hang the process, though I’m not certain why. I ended up just setting the crawler to ignore a bunch of image file types. That’s basically okay since I only wanted text, but it points to two sub-problems:
1. I have no idea why that broke it
2. the tool took a few hours to get to the image file it hangs on, meaning rigorous testing is… difficult.
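The blocklist idea is roughly this (shouldCrawl and the extension list are my own illustration here, not Naga’s actual code):

```javascript
// Sketch of skipping image URLs by extension (names invented for illustration).
const SKIP_EXTENSIONS = ['jpg', 'jpeg', 'png', 'gif', 'bmp', 'ico'];

function shouldCrawl(url) {
  // Grab whatever follows the last dot in the path, ignoring any query string
  const path = url.split('?')[0];
  const match = path.match(/\.([a-z0-9]+)$/i);
  if (!match) return true; // no extension, probably a page
  return !SKIP_EXTENSIONS.includes(match[1].toLowerCase());
}
```

So `shouldCrawl('http://example.com/logo.png')` comes back false and the crawler never touches the file that hangs it.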

So, collectd didn’t work. At all.

I’m writing this several days after giving it up so some of the details are hazy, but essentially, I couldn’t make collectd work at all.

In retrospect I bet that I was modifying the configuration .yml and making an invalid file. YAML is whitespace-dependent, and a single indentation mistake can invalidate the whole file. Worse, most projects with YAML configs don’t actually throw errors to the console when fed invalid YAML; they just break silently (does that make YAML the worst markup language ever? That’s for you to decide).
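For example, these two fragments look nearly identical, and both parse, but the second silently changes the meaning (keys invented just to show the indentation trap):

```yaml
# valid: Interval is a child of Plugin
Plugin:
  Interval: 10

# also parses (!), but Interval is now a top-level key
# and Plugin is empty, so the plugin never sees it
Plugin:
Interval: 10
```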

But anyway, it would log a few lines and then nothing, and I could never make it store metrics locally. For a while it would make .rrd files (which I never figured out how to open and look at), but it never made populated .csv files.

What should I have done differently?

  1. Spinning my wheels for three days was colossally stupid. I felt like I was making some progress, but I would repeatedly find myself stuck for hours, and after trying a few things just sort of stay stumped and try to look for something else to do. This really has two sub-mistakes: a) at the end of a day where I hadn’t made much progress, I should have told someone. b) at the end of an hour where something wasn’t working, I should have bugged someone. I didn’t bug anyone because of fears of seeming like I wasn’t cut out to do this job, but that’s something of a self-fulfilling prophecy.

  2. going into several unfamiliar environments at once. Daemon config, Linux nerdery, AND a VirtualBox environment controlled by Vagrant: all situations I didn’t know well. In my follow-up project I built something as a test case that’s installed directly on my OS X machine, an environment I’m much more comfortable with.

  3. glossing over problems. There were logging and output errors early that I tried to ignore or ‘come back to’ but really it just wasn’t working at all!

Using Collectd to generate server health alerts, pt. 1

We want to know if the server’s down, and while there are a number of costly SaaS options that could tell us this, we know exactly how the server should break when it does break, and exactly how our stack is built, so there’s no reason we can’t roll our own system health monitor from some OSS components. The wish list:

  • very low overhead on monitored servers
  • can get data from supervisord and beanstalkd (I don’t even know what these are really)
  • can send alerts with smart(ish) thresholds
  • send information to Hubot in a useful format, so that updates can appear in Slack
  • (later) produce graphs of historical performance
  • uses pubnub as a middleman

you lost me (Toby) already

  • like I said I don’t really know what supervisord and beanstalkd are… beanstalk’s an AWS service… this is probably the… status of the… look I’m sure I can figure it out
  • Why do we need to use pubnub? I’m pretty sure we can have hubot collect POSTs directly and then do junk with them… I’m pretty sure PubNub’s big advantage is that it acts as a middleman for websockets interactions. And this guide I found seems to confirm that, but that doesn’t seem necessary? Maybe this is partly my slavish devotion to New Relic’s model of collecting and sending data.
Do me a favor and don’t try to sue me; here’s me ranting about non-compete agreements:
Yes I signed a non-compete with New Relic, but 1) I was never a developer with New Relic and had very little knowledge of any secret sauce, much less how to implement it. 2) I’m rolling up a system health monitor here, not actual APM. New Relic does have a server health monitoring product, but last I checked it was totally free for life, and my solution will also be a free product unrelated to my employer’s output, soooooo be cool, k?

Whatever. My first version is just going to find something like disk fullness, send that somewhere, and get hubot to report on it regularly.

Roll-your-own vs. service vs. OSS

I kind of quickly dismissed using a service, but really I have a couple of reasons. First off, I started at this job 10 days ago. I don’t want to show up and start saying ‘go buy this service instead of me coding the thing you wanted.’ Also, the requirements given to me include pretty precise metrics that they want to mention. Along with the obvious like CPU and disk, there’s a combination of stats like supervisord, and I doubt that any service is going to measure all of those the way the boss wants ‘out of the box.’ I imagine I’ll either end up delivering something that isn’t what was requested without the ability to change it (“uhhh, yeah, it only measures disk full in 30 minute increments, sorry”) or spend a lot of time configuring something, time that could have been spent on my own project.

There’s also a general career-wise argument for implementing something with OSS: anywhere I go after this will probably have similar server health questions, and it seems more useful to be familiar with a tool I know I’ll always have access to. One of the key pieces of health monitoring is some kind of aggregation & storage, and with closed software that usually means a hosted service where, if the company goes away, the data does too.

Anyhoo, that leaves me with the option of writing my own monitor or using an existing thing. While I feel fairly confident (like 45% confident maybe?) that I can make a node app do everything I want it to, the danger would be overhead: my untested code trying to grab data 90 jillion times a second, store it briefly in memory, and then transmit it, risks adding significantly to server load if I make a mistake. This is always a danger with monitoring (not to get too Heisenberg about it), but if I use something that I know can usually grab and send metrics with no significant added load, I’m happier.

Better still, collectd has exactly one function (the creation of RRD files) that’s known to bog down the server, which is actually comforting: the maintainers and community have already identified at least one major source of overhead. That’s a lot better than going on a journey of discovery with my own code.

Collectd and on to infinity.

So now I just gotta test using collectd to grab and send metrics. This looks straightforward enough until I try to do basically anything. Collectd was creating RRD files that presumably contained data but which I was unable to open. It’s supposed to be able to log and/or send statistics via HTTP, but I can’t figure out how to actually tell it to send data, or what data to send… I turned on logging but got only minimal info there when I had errors, and no logging of the metric values.

Tune in for part 2 where I quit my job… er, I mean, figure this out.

Migrating to Sass when your current styling is all over the place

I got tasked with migrating work’s pages from .css to Sass. This should have a few benefits:

Why are we doing this again?

  1. Sass is supposed to compile everything down to a single stylesheet per page
  2. Styling code is easier to write
  3. Using imported variables means you can have master styles referenced by variable name instead of having to look up “what color are H1s in the toolbar supposed to be again?”

I find both the Sass formats easier to write than regular .css, so 2) and 3) made sense to me. For 1) though, there seems to be a bit of debate. While going from 12 HTTP calls for separate CSS files down to one should be more efficient, going from inline styles in the page to grabbing a single huge .css file might actually block page loading. See this Google video that makes some reference to the problem.
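To illustrate 3), variables mean the master colors live in one place (the names and hex values here are invented for illustration, not our real styles):

```scss
// _variables.scss: one master list of colors
$toolbar-heading-color: #4a4a4a;
$brand-accent: #a23f22;

// toolbar.scss: no more looking up hex codes
@import 'variables';

.toolbar h1 {
  color: $toolbar-heading-color;
}
```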

Okay, so, mine is not to reason why, and once I dig into the codebase it’s obvious there’s room for improvement, so let’s do it.

Planning the migration

The meat of what I did was here: trying to decide which Sass compiler to use and mapping out which styles needed to be moved into Sass files. It shouldn’t have been that hard to go from .css to Sass (geez, I just thought of this: is there, maybe, like, a Sass decompiler?!? Imma go google it. Oh jeez, there are like a million ways to do that, huh.), but our job was complicated by a bunch of styles appearing right inline in HTML or as part of some PHP code.


Before finding all those styles, I had to pick a compiler system. For a long while I was considering using Grunt since, along with compiling Sass, it can do all kinds of linting/testing/auto-transformations of code. That would be cool; in fact, someone had listed as a benefit of Sass that “people won’t be able to copy/paste our code,” but that isn’t true of just using Sass; you’d also have to use a minifier like the ones Grunt can run. Grunt is basically a ‘do something every time you want to build the site’ tool, so there’s an infinity of things that could be made easier by adopting it. However, I don’t know how it works, and neither do any of the front-end devs. That means I’d burn a good deal of time getting it set up, and then face more time training people on it, or watch it wither because no one knows how to modify its tasks.

okay so no Grunt, then what?

Basically vanilla Sass. I spent a day or so designing a few different integrations before the ops guys steered me toward using a Docker image rather than installing a PHP wrapper for LibSass (the office is almost entirely a PHP environment). Once I knew the preferred path, it was easy to find a guide. There’s a great blog article by Larry Price about creating a Docker image that watches your filesystem for changes and auto-compiles your Sass. I followed his instructions, and that worked fine.

Stuff I didn’t really understand

  • I don’t actually know how Larry got this Docker image to constantly watch a spot in the filesystem. I know the Docker instance is always running, but I don’t know a standard way to watch a directory (Grunt does it with a whole bunch of fanciness, but it seems like Price did it with just a line of code? IDK).
  • How does Docker Hub work? I tried to, like, fork Larry Price’s Docker Hub image but couldn’t really figure it out. When I set up my own Docker Hub account and added a new image, it didn’t even have a field to add my own Dockerfile. Is there more to Larry Price’s Dockerfile than I’m seeing? Or do I need to use git to upload 4 lines of code? IDK…
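My best guess at the watching question: Sass itself has a --watch flag that recompiles on change, so the Dockerfile could plausibly be as short as this (this is my guess at the shape, not Larry Price’s actual file):

```dockerfile
# Guess at the shape of the image, not the actual Dockerfile
FROM ruby:2.2
RUN gem install sass
# /src gets mounted from the host with `docker run -v`,
# and sass --watch recompiles whenever a file in it changes
CMD ["sass", "--watch", "/src/sass:/src/css"]
```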

Okay but that ‘just works’! I don’t know how to point the Docker instance at more than one directory, but that wasn’t a requirement of the job anyway, so yay!

Time to find some stylez

There are style tags all through the codebase. How the heck am I supposed to find them? This is where a little Linux nerd-ery should come in handy.

I mentioned I was new to this job, right? This is actually my first full-time dev job. There’s a lot of stuff I don’t know how to do, but there’s exactly one thing that I am actually hot shit at: writing Regular Expressions.

Writing a regex to find all these styles should be easy… I’m not trying to fully parse HTML with a regex, which we know leads to bad things; I just want to get a ‘hit’ every time something that looks a lot like a style instruction is used. Presumably there are some clean .php and .html (and, it turned out, .twig) files with no styles, so a search that lists where all styles are used should be pretty useful, right?

I started with grep and moved on to ack (BTW I used this code snippet marking at the first mention of these commands, but won’t hereafter), which wasn’t actually installed on OS X by default but is easy enough to grab. ack takes regexes that look normal to me and spits out results for lines matching that regex in all files. At first I was trying to match all possible styles: bold, dashed, emphasis, point size, etc. That proved tough, so I gave myself an easier task: I matched all files that contained a hex-looking color value.

ack --match \#[a-f0-9]{6} /Users/tobias/xibalba/Environment/public

It’s helpful with any Linux command string or regex to make sure you can define what every ‘piece’ is doing

ack --match Use the ack command and match the following pattern. Without other flags, the pattern is matched within a single line; it won’t cross newlines or end-of-line symbols.

\#[a-f0-9]{6} this is the regex that finds things that look like #a23f22 and #ffffff it says in order:

\# find a ‘#’, escaped with a ‘\’. I think # can mean something in regex-land (it starts a comment in Perl’s ‘extended’ whitespace mode) but I’m not certain it matters here, and I know that escaping a character unnecessarily isn’t a problem.

[a-f0-9] finds a single character that is one of the lowercase letters ‘a’ through ‘f’ or any digit. Uppercase letters, letters after ‘f’, or any other character don’t qualify.

{6} that character class I told you to find? Make sure it matches exactly six times in a row. With nothing following it in the regex pattern, that effectively means a minimum of six times. So #ffee3m doesn’t qualify (only five hex characters after the #), but #e3f5a2f2f1 does (the first six do the job).

To be more precise about that last one: ack differentiates between the parts of a line that match the pattern and the matching line as a whole. In the screenshot of the results you can see the color highlighting of the actual matching part.

/Users/tobias/xibalba/Environment/public this is the folder where ack will start its search, by default searching recursively into any subfolder. If you’ve written your regex too generally and point this at some root folder, it can end up taking a very long time.

Okay that command worked pretty good! Let’s see the results:

The ack command worked!

Now I can re-run the command and add something like > searchResults.txt at the end to send the results to a nice text file.

Something I wanted to do but couldn’t do

ack has this nice visual highlighting of the actual match, but I couldn’t figure out how to save those colors. AFAIK if you send the output to a file, even something like HTML that could hold that color info, the colors just get lost. The ack man page makes a few mentions of color, but no way to export them. Oh well.

done, ship it.

Well, not quite. The front-end devs were not as satisfied with these results as I was. When you think about it, knowing that a color was used in a file leaves out how it was used. What I’d given them was a list of ‘this file uses the color grey.’ Sure, they could follow the list and look at each file to figure out what was really happening, but it would be way more useful to know ‘h2 is getting made grey here.’

So now we had a new challenge: find the selector that starts a style statement (like .h2 or #sidebar), and grab a snippet starting there and ending with the color.

I started writing a regex to find this and… I failed.

First off, ack won’t work. ack is inherently a single-line finder, and most real .css files will have the selector on a separate line from the color. That led me to try a few different multi-line searches before settling on awk. I wrestled with awk for 45 minutes before a friend mentioned that awk might be the ur-example of that classic saying: “A developer has a problem. To solve it, she decides to use awk. Now she has two problems.”

In the end I had to kind of fudge it: I just said ‘grab a few lines around where this color is used.’

ack -C 4 --match \#[a-f0-9]{6} /Users/tobias/xibalba/Environment/public

That -C 4 grabs four lines before and four lines after the match and adds them to the results. The lines without a match are marked with a - instead of a : after their line number. Still, even a spot-check turned up examples where the color is in the results but not the selector.

Most results did have a selector above the color, but not all.

done, ship it.

My previous attempts at search results had already come back twice from the front-end dev as ‘this isn’t terribly helpful,’ so when I delivered these and didn’t get any feedback, I wouldn’t be surprised if she’d just given up on getting something dead useful from me.

The search results were always a kind of ‘nice to have’ part of the migration. Most of the styles were in a /css folder to begin with, and the others might end up staying in the codebase for a long time after Sass is adopted.

Ways to improve process in the future:

  • re-visit a Grunt build process with a fancy linter
  • create a functional test that warns on any styles showing up in non-Sass files before compilation
  • re-design the regex to find every possible style instead of just colors
  • find a way to export ack results with colors
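For that selector-finding problem, a small Node script might beat wrestling with awk next time: walk the file line by line, remember the last selector seen, and report it alongside each color hit. This is just my sketch (findColorContexts is invented, and untested against the real codebase):

```javascript
// Sketch: pair each hex color with the most recent CSS selector above it.
// findColorContexts is my own invention, not something we shipped.
function findColorContexts(cssText) {
  const results = [];
  let lastSelector = null;
  for (const line of cssText.split('\n')) {
    // a "selector" here is loosely anything before an opening brace
    const selectorMatch = line.match(/^\s*([^\s{][^{]*)\{/);
    if (selectorMatch) lastSelector = selectorMatch[1].trim();
    const colorMatch = line.match(/#[a-f0-9]{6}/i);
    if (colorMatch) results.push({ selector: lastSelector, color: colorMatch[0] });
  }
  return results;
}
```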