Since it’s a weekend and I feel kinda lazy, I’m putting a sign on the door that says “went linking”.
- impress.js: a cool library for making amazing presentations (not for the fainthearted)
- PhantomJS: imagine your code telling a web browser what to do, that’s what it does
- Back to Work: a fun and interesting podcast, I’m slowly getting into it
- WeatherSpark: amazing weather visualization site (the only one I use)
As part of a statistics-gathering script in Node.js, the final step is to dump the information into a relational database that can be queried in a standard fashion.
In my case I’m using MySQL, where I store the different hits my pixel has recorded. The script first receives the request through an nginx proxy, stores the data in Redis, and finally a third script reads the information and dumps it to MySQL.
I’ve found that reading data from one source and passing it to another can be a little tricky in Node.js, since the single feature that makes it awesome gets in your way: asynchronous calls. Keeping a clear trace of which record you’re fetching, which one you’re recording, which one failed, and so on can become very cumbersome and error-prone.
This is where a very nice module called Step comes to the rescue. Straight from the README:
“A simple control-flow library for node.JS that makes parallel execution, serial execution, and error handling painless.”
This way I can write good old-fashioned, sequential-looking code.
- Fetch data from Redis, establish a temporary working set to avoid collision with write operations.
- Read each record obtained, build and execute a MySQL query.
- Clear the temporary working set.
- Schedule the next run of the loop.
The Step documentation is pretty straightforward, so reading it will give you a clear picture of what it does and how to use it. Each function defined within the Step() call is executed sequentially and receives the result of the previous asynchronous call.
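The four steps above can be sketched in Step's serial style. To keep this self-contained, the snippet below uses a minimal stand-in for the Step module (the real one is `npm install step`) and in-memory mocks in place of the Redis and MySQL clients; the key names, table name, and mock methods are illustrative assumptions, not my actual script.

```javascript
// Minimal stand-in for Step's serial mode: each function runs after the
// previous one finishes, receiving (err, result); `this` is the callback
// that advances the flow to the next function.
function Step() {
  var fns = Array.prototype.slice.call(arguments), i = 0;
  function next() {
    var fn = fns[i++];
    if (!fn) return;
    try { fn.apply(next, arguments); } catch (err) { next(err); }
  }
  next();
}

// In-memory mocks standing in for the real Redis and MySQL clients.
var redisMock = {
  records: ['hit:1', 'hit:2'],
  rename: function (src, dst, cb) { cb(null, 'OK'); },
  lrange: function (key, a, b, cb) { cb(null, this.records); },
  del: function (key, cb) { cb(null, 1); }
};
var inserted = [];
var mysqlMock = {
  query: function (sql, cb) { inserted.push(sql); cb(null); }
};

Step(
  function fetchWorkingSet() {
    // 1. Move pending hits into a temporary working set, then read it,
    //    so new writes don't collide with the dump.
    var self = this;
    redisMock.rename('hits', 'hits:working', function (err) {
      if (err) throw err;
      redisMock.lrange('hits:working', 0, -1, self);
    });
  },
  function writeToMysql(err, records) {
    if (err) throw err;
    // 2. Build and execute one INSERT per record (naive, for illustration;
    //    a real script should escape values or use placeholders).
    records.forEach(function (r) {
      mysqlMock.query("INSERT INTO hits (data) VALUES ('" + r + "')", function () {});
    });
    this(null, records.length);
  },
  function cleanup(err, count) {
    if (err) throw err;
    // 3. Clear the temporary working set.
    redisMock.del('hits:working', this);
  },
  function scheduleNext(err) {
    if (err) throw err;
    // 4. In the real script, schedule the next run here,
    //    e.g. setTimeout(loop, 5000).
  }
);
```

The point is readability: the callbacks read top to bottom in the order they run, and any thrown error falls through to the next function's `err` argument instead of getting lost.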
I haven’t made use of parallel execution yet. If anyone has some real life examples, I’d be happy to check them out.
Both are individually awesome, and when mixed they are invincible (?)
In this post I’m sharing two videos:
… or how companies choose to communicate with their users when the worst happens.
Two cases this week.
Go Daddy: http://www.godaddy.com/newscenter/release-view.aspx?news_item_id=410&isc=smtwlp&iphoneview=1
I can hear all the excuses and reasons for GoDaddy’s way of detailing the problem, but I can’t help loving GitHub’s level of honesty, and feeling underestimated by GoDaddy.
For a while now I’ve been working on a Node.js-based tracker. This is basically a pixel that you attach to any website, and it captures information about the visitors. With this development comes the (not small, not irrelevant) task of monitoring the app and the server, to be able to label tests as “successful” or “in need of more testing”.
So, I have built a small piece of code that goes within the tracker server, and it repeatedly captures system information and stores it in a Redis database. Another script returns a set of graphs that feed from the Redis information and present the relevant numbers in a visual fashion.
While developing for high-traffic projects I’ve encountered a common problem not too far into the planning process: how do you do application farming and failover, in a simple way?
First of all, I’ll make clear what I mean by farming and failover. Farming, in my mind, means a single entry point through which users access the server, a domain name, that hosts behind itself a “bunch of servers”, “instances” or “resources” that respond in a transparent manner to the request. Failover is the action taken when any of those “resources” fails to respond, by which load is redistributed automagically (auto and magically).
Both of these features are offered by a lot of service providers and software applications. Also, you always have the option to build one yourself. Since I’ve found paid solutions too expensive, and software apps too complicated, I was on the verge of writing my own load+fail balancer… until I started digging into nginx.
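As a taste of what nginx gives you out of the box, here is a minimal upstream sketch covering both ideas: one entry point, several backends, and automatic failover when a backend stops responding. The hostnames, ports, and thresholds are placeholders, not a recommendation.

```nginx
# One entry point, several backends, automatic failover.
upstream app_farm {
    server 10.0.0.1:8000 max_fails=3 fail_timeout=30s;
    server 10.0.0.2:8000 max_fails=3 fail_timeout=30s;
    server 10.0.0.3:8000 backup;   # only used when the others are down
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://app_farm;
        proxy_next_upstream error timeout;  # retry the next server on failure
    }
}
```

After `max_fails` failed attempts within `fail_timeout`, nginx marks a backend as unavailable for that window and redistributes its load across the rest, which is exactly the “automagic” part.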