Marc Roberts

Originally published at theparticlelab.com

Building WordStream

Last week I spent a couple of hours playing with some new technology and built wordstream, a real-time word cloud generated from the Twitter sample stream. Here's how.

The Twitter streaming APIs are a very efficient way of having the tweets you're interested in pushed to you. For example, you can use the filter endpoint to receive tweets matching your filter (author, hashtag, keywords etc.), but for this I was more interested in the sample endpoint, which sends out about 1% of all public tweets. This endpoint does, however, have some limitations:

  • A set of credentials (app/user combination) can only have a single connection open (any further connection attempts will terminate the previous ones). So in order to use it I would either need each visitor to authenticate with the app to create their own streaming connection, or build some sort of server-side proxy.
  • The API response is actually quite large, and combined with the hundreds of tweets per second received it results in a large amount of data being retrieved (during testing on a Friday morning I was getting a fairly consistent 2 Mbps from the API).

Here's a quick example of the streaming API data (capturing the stream for about 5 seconds resulted in 1.3 MB of data; I've shown just the first few tweets here, a sample of the sample you could say):

{ 
    created_at: 'Mon Jan 26 16:21:26 +0000 2015',
    id: 559747954651971600,
    id_str: '559747954651971584',
    text: 'Мосгорсуд оставил под арестом до 16 апреля Александра Кольченко, фигуранта дела ...',
    source: '<a href="http://ifttt.com" rel="nofollow">IFTTT</a>',
    truncated: false,
    in_reply_to_status_id: null,
    in_reply_to_status_id_str: null,
    in_reply_to_user_id: null,
    in_reply_to_user_id_str: null,
    in_reply_to_screen_name: null,
    user:
     { id: 2687442584,
       id_str: '2687442584',
       name: 'Галина Никандровa',
       screen_name: 'Byce6A',
       location: '',
       url: null,
       description: null,
       protected: false,
       verified: false,
       followers_count: 210,
       friends_count: 121,
       listed_count: 1,
       favourites_count: 0,
       statuses_count: 73725,
       created_at: 'Mon Jul 28 12:45:30 +0000 2014',
       utc_offset: null,
       time_zone: null,
       geo_enabled: false,
       lang: 'ru',
       contributors_enabled: false,
       is_translator: false,
       profile_background_color: 'C0DEED',
       profile_background_image_url: 'http://abs.twimg.com/images/themes/theme1/bg.png',
       profile_background_image_url_https: 'https://abs.twimg.com/images/themes/theme1/bg.png',
       profile_background_tile: false,
       profile_link_color: '0084B4',
       profile_sidebar_border_color: 'C0DEED',
       profile_sidebar_fill_color: 'DDEEF6',
       profile_text_color: '333333',
       profile_use_background_image: true,
       profile_image_url: 'http://abs.twimg.com/sticky/default_profile_images/default_profile_1_normal.png',
       profile_image_url_https: 'https://abs.twimg.com/sticky/default_profile_images/default_profile_1_normal.png',
       default_profile: true,
       default_profile_image: true,
       following: null,
       follow_request_sent: null,
       notifications: null },
    geo: null,
    coordinates: null,
    place: null,
    contributors: null,
    retweet_count: 0,
    favorite_count: 0,
    entities:
     { hashtags: [],
       trends: [],
       urls: [],
       user_mentions: [],
       symbols: [] },
    favorited: false,
    retweeted: false,
    possibly_sensitive: false,
    filter_level: 'low',
    lang: 'ru',
    timestamp_ms: '1422289286660'
},
{
    created_at: 'Mon Jan 26 16:21:26 +0000 2015',
    id: 559747954639384600,
    id_str: '559747954639384577',
    text: 'Beautiful life is so much better than Carry you tbh',
    source: '<a href="http://twitter.com" rel="nofollow">Twitter Web Client</a>',
    truncated: false,
    in_reply_to_status_id: null,
    in_reply_to_status_id_str: null,
    in_reply_to_user_id: null,
    in_reply_to_user_id_str: null,
    in_reply_to_screen_name: null,
    user:
     { id: 2974152997,
       id_str: '2974152997',
       name: 'Sandra Young',
       screen_name: 'edwardalazobuy1',
       location: 'West Virginia',
       url: 'http://optimizedirectory.com/',
       description: '1D / Glee / T-Swizzle / Narnia / Criminal Minds / KSS 8 / Lucky #18/ #23 / #24 / Directioner / MATTHEW GRAY GUBLER FOR DA WIN! / Louis\' pants',
       protected: false,
       verified: false,
       followers_count: 0,
       friends_count: 1,
       listed_count: 0,
       favourites_count: 0,
       statuses_count: 37,
       created_at: 'Sun Jan 11 06:10:53 +0000 2015',
       utc_offset: null,
       time_zone: null,
       geo_enabled: false,
       lang: 'en',
       contributors_enabled: false,
       is_translator: false,
       profile_background_color: 'C0DEED',
       profile_background_image_url: 'http://abs.twimg.com/images/themes/theme1/bg.png',
       profile_background_image_url_https: 'https://abs.twimg.com/images/themes/theme1/bg.png',
       profile_background_tile: false,
       profile_link_color: '0084B4',
       profile_sidebar_border_color: 'C0DEED',
       profile_sidebar_fill_color: 'DDEEF6',
       profile_text_color: '333333',
       profile_use_background_image: true,
       profile_image_url: 'http://pbs.twimg.com/profile_images/559450280236830720/fGI9TXLt_normal.png',
       profile_image_url_https: 'https://pbs.twimg.com/profile_images/559450280236830720/fGI9TXLt_normal.png',
       profile_banner_url: 'https://pbs.twimg.com/profile_banners/2974152997/1422261339',
       default_profile: true,
       default_profile_image: false,
       following: null,
       follow_request_sent: null,
       notifications: null },
    geo: null,
    coordinates: null,
    place: null,
    contributors: null,
    retweet_count: 0,
    favorite_count: 0,
    entities:
     { hashtags: [],
       trends: [],
       urls: [],
       user_mentions: [],
       symbols: [] },
    favorited: false,
    retweeted: false,
    possibly_sensitive: false,
    filter_level: 'low',
    lang: 'en',
    timestamp_ms: '1422289286657'
},
{ 
    created_at: 'Mon Jan 26 16:21:26 +0000 2015',
    id: 559747954672943100,
    id_str: '559747954672943104',
    text: 'Saints win 2-0! Enppi are 0-0 so double chance looking good on this one too.',
    source: '<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>',
    truncated: false,
    in_reply_to_status_id: null,
    in_reply_to_status_id_str: null,
    in_reply_to_user_id: null,
    in_reply_to_user_id_str: null,
    in_reply_to_screen_name: null,
    user:
     { id: 2960224947,
       id_str: '2960224947',
       name: 'The Secret Tipster',
       screen_name: 'Secret_Tipster_',
       location: '',
       url: null,
       description: 'FREE betting tips and £10-£1,000 challenges! \n\n5pts - Strong tip (high stakes)\n3pts - Good tip (medium stakes)\n1pt - Fair tip (low stakes)',
       protected: false,
       verified: false,
       followers_count: 343,
       friends_count: 1588,
       listed_count: 2,
       favourites_count: 104,
       statuses_count: 290,
       created_at: 'Sun Jan 04 14:09:31 +0000 2015',
       utc_offset: 0,
       time_zone: 'London',
       geo_enabled: false,
       lang: 'en-gb',
       contributors_enabled: false,
       is_translator: false,
       profile_background_color: '000000',
       profile_background_image_url: 'http://abs.twimg.com/images/themes/theme1/bg.png',
       profile_background_image_url_https: 'https://abs.twimg.com/images/themes/theme1/bg.png',
       profile_background_tile: false,
       profile_link_color: '89C9FA',
       profile_sidebar_border_color: '000000',
       profile_sidebar_fill_color: '000000',
       profile_text_color: '000000',
       profile_use_background_image: false,
       profile_image_url: 'http://pbs.twimg.com/profile_images/551742687452229634/Q2rfimMq_normal.png',
       profile_image_url_https: 'https://pbs.twimg.com/profile_images/551742687452229634/Q2rfimMq_normal.png',
       default_profile: false,
       default_profile_image: false,
       following: null,
       follow_request_sent: null,
       notifications: null },
    geo: null,
    coordinates: null,
    place: null,
    contributors: null,
    retweet_count: 0,
    favorite_count: 0,
    entities:
     { hashtags: [],
       trends: [],
       urls: [],
       user_mentions: [],
       symbols: [] },
    favorited: false,
    retweeted: false,
    possibly_sensitive: false,
    filter_level: 'low',
    lang: 'en',
    timestamp_ms: '1422289286665' 
}

Here are a few things to note:

  • There is a lot of metadata about each tweet included which I don't need.
  • There are quite a few native retweets, which include the retweeted text prefixed with RT in the new tweet. Should they be excluded, or should the retweet count towards the word count?
  • There are many different languages; in order to have something meaningful for myself (I only speak English fluently, plus a couple of other European languages poorly at best) I decided to only process English tweets.

All of this meant it made sense to build a simple back-end service/proxy that creates a single streaming connection, processes the data and feeds a far more condensed stream out to the browser(s). I chose to build it with Node.js.

First we need to get the data out of the streaming API. I found an npm module called node-tweet-stream that worked with the filter endpoint, and with a little butchery I was able to hook it up to the sample API instead.

var twitter = require('./twitter-stream'), // local copy of node-tweet-stream, hooked up to the sample endpoint
    stream;

stream = new twitter({
  consumer_key: 'xxx',
  consumer_secret: 'xxx',
  token: 'xxx',
  token_secret: 'xxx'
});

stream.on('tweet', function(tweet) {
  console.log(tweet);
});

stream.connect();

I often use Heroku for hosting small things like this, and Heroku encourages you to store as much of the application configuration as possible in the environment rather than in your application's code repository. To manage this in my Ruby projects I use dotenv, which lets me keep such configuration in a .env file locally (excluded from source control). I was very pleased to find the same functionality exists for node. A quick install of the dotenv npm module, a simple require, and it was working here.
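
For reference, the .env file is just KEY=value pairs, one per line. Here's a sketch (the variable names match those read from process.env in the code below; the values are placeholders):

TWITTER_CONSUMER_KEY=xxx
TWITTER_CONSUMER_SECRET=xxx
TWITTER_TOKEN=xxx
TWITTER_TOKEN_SECRET=xxx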

Logging things out to the console is great for debugging but of no real use beyond that. To get the data out to a browser I started building a simple express app, as I'd had some experience with that before, but then something reminded me of web sockets and socket.io, so I thought I'd try playing with them. Again, all that was required was another install/require and a couple of extra lines, and now we have tweets being proxied through to the browser(s). The code now looked like this:

var app = require('express')(),
    dotenv = require('dotenv'),
    server = require('http').Server(app),
    io = require('socket.io')(server),
    twitter = require('./twitter-stream'),
    stream;

dotenv.load();

stream = new twitter({
  consumer_key: process.env.TWITTER_CONSUMER_KEY,
  consumer_secret: process.env.TWITTER_CONSUMER_SECRET,
  token: process.env.TWITTER_TOKEN,
  token_secret: process.env.TWITTER_TOKEN_SECRET
});

server.listen(process.env.PORT || 5000);

stream.on('tweet', function(tweet) {
  io.emit('tweet', tweet);
});

stream.connect();

The main reason for proxying the data was to reduce the amount sent out to the browsers, so now it was time to take those massive responses and reduce them to some word lists. Again I found a couple of great npm modules to help with this: keyword-extractor for extracting the important words (or more accurately, excluding the non-important words), and franc for determining the language of the tweet (keyword-extractor only works with English, much like my brain).
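
To illustrate what keyword-extractor does (my own example here, not output captured from the project), it drops the stop words and lowercases the rest:

var xt = require('keyword-extractor');

// one of the sample tweets from earlier
var words = xt.extract('Beautiful life is so much better than Carry you tbh', {
  language: "english",
  remove_digits: true,
  return_changed_case: true
});

// stop words like "is", "so" and "you" are removed and the case lowered,
// leaving something like: ['beautiful', 'life', 'carry', 'tbh']
console.log(words);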

While writing this I noticed that the Twitter response actually contains a lang field, negating the need to use franc. I hadn't noticed this at the time. Oh well!
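
For reference, a minimal sketch of what that change would look like, trusting Twitter's own language detection instead of calling franc in the tweet handler:

stream.on('tweet', function(tweet) {
  // tweet.lang is Twitter's detected language for this tweet
  if (tweet.lang !== 'en') return;

  // ... extract and emit words as in the final code below
});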

Plugging these in, along with some exclusions of my own (links, retweets, replies), gives us the final code (find it on GitHub) that was deployed to Heroku:

var app = require('express')(),
    dotenv = require('dotenv'),
    server = require('http').Server(app),
    io = require('socket.io')(server),
    xt = require('keyword-extractor'),
    franc = require('franc'),
    twitter = require('./twitter-stream'),
    stream;

dotenv.load();

stream = new twitter({
  consumer_key: process.env.TWITTER_CONSUMER_KEY,
  consumer_secret: process.env.TWITTER_CONSUMER_SECRET,
  token: process.env.TWITTER_TOKEN,
  token_secret: process.env.TWITTER_TOKEN_SECRET
});

io.set('origins', '*:*');

server.listen(process.env.PORT || 5000);

function exceptions(word){
  if (word.match(/https?:/)) return false; // links
  if (word.match(/^@/)) return false; // replies
  if (word.match(/&|\/|"/)) return false; // random punctuation

  return true;
}

stream.on('tweet', function(tweet) {

  // ignore retweets
  if (tweet.retweeted_status || tweet.text.match(/^RT/)) return;

  // only english for now
  if (franc(tweet.text) != 'eng') return;

  // parse that tweet, extract words
  var words = xt.extract(tweet.text, {
    language: "english",
    remove_digits: true,
    return_changed_case: true
  }).filter(exceptions);

  if (words.length > 0) io.emit('tweet', words);
});

stream.connect();

So with less than 50 lines of code we have live tweets being parsed for words and those word lists being sent out to the browser. Now let’s get the browser to render them.

This is going to be almost entirely JavaScript powered, so I'm going to concentrate on that. If you're interested in the HTML and CSS, take a look at the source and ask me any questions you might have.

Firstly we’ll use socket.io to connect to the web socket and start grabbing the words as they come in.

I'm using the Underscore.js library here to get access to some simple helper functions.

var socket = io.connect('wss://twitter-word-stream.herokuapp.com/');

socket.on('tweet', function (data) {
  _.each(data, function(word) {
    console.log(word);
  });
});

And there we go, the words are being spat out to the browser's console, but of course this is of no practical use. Let's count the occurrences and display them visually. We'll do this by throwing the words and their counts into an object and then displaying the most popular ones periodically.

var socket = io.connect('wss://twitter-word-stream.herokuapp.com/'),
    word_counts = {},
    text_nodes = {},
    frame = 0;

function render() {
  var max = 0,
      displayed_words = [];

  // increment frame counter
  frame++;

  _.each(word_counts, function(count) {
    if (count > max) max = count;
  });

  // filter them to just the most popular ones
  displayed_words = _.sortBy(_.keys(word_counts), function(word) {
    return max - word_counts[word];
  }).slice(0,30);

  _.each(displayed_words, function(word) {
    var size = word_counts[word] / max,
        text, node;

    // create the text node if need be
    if (!text_nodes[word]) {
      text = document.createTextNode(word);
      node = document.createElement('span');

      // position kind of in the middle somewhere
      var top = 80*Math.random();
      var left = 70*Math.random();

      // give it a random pastelly colour
      node.setAttribute('style', "top: " + top + "%; left: " + left + '%; color: hsla('+360*Math.random()+',50%,50%,0.75)');

      node.appendChild(text);
      document.body.appendChild(node);
      text_nodes[word] = {
        updated: frame,
        node: node
      };
    } else {
      text_nodes[word].updated = frame;
    }

    // clear expired words
    _.each(text_nodes, function(obj, word) {
      if (obj.updated < frame) {
        obj.node.remove();
        delete text_nodes[word];
      }
    });

    // size it relative to its occurrence
    text_nodes[word].node.style.transform = 'scale(' + (0.2 + size*0.8) + ')';
    text_nodes[word].node.style.webkitTransform = 'scale(' + (0.2 + size*0.8) + ')';

  });

}

setInterval(render, 500);

socket.on('tweet', function (data) {
  _.each(data, function(word) {
    word_counts[word] = (word_counts[word] || 0) + 1;
  });
});

There are a few things to explain here:

  • A scale transform is being used instead of font-size to change the size of the words, as this results in a GPU-accelerated transform which we can then enhance with transitions at very little cost to performance (see the CSS sketch after this list).
  • The created DOM nodes are cached in the text_nodes object so we don't have to recreate them each time or try to find them.
  • A frame number is used to note when each element was last updated, making it easy to remove any words that are no longer popular.
  • The colour of each word is randomised using hsla(), as this only requires a single number to be generated (the hue) instead of the multiple numbers required by rgba().
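
For completeness, here's roughly the CSS the word spans rely on. This is an assumption on my part rather than the project's actual stylesheet: the spans need absolute positioning for the random top/left percentages to apply, and a transition on transform is what lets the scale() changes animate smoothly:

/* assumed styles, not copied from the project source */
span {
  position: absolute;              /* so the random top/left percentages work */
  transition: transform 0.5s ease; /* animate the scale() resizing */
}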

This works great, but it counts occurrences since you first loaded the page. I wanted it to only consider the most recent words (let's say only the last 5 minutes), so I needed to store the word lists in such a way that I could easily and quickly remove the older ones. I could have stored the time of each occurrence of each word, but that would get complicated. I decided instead to store the word occurrences in several different objects (I called them buckets), rotating which one gets incremented every few seconds. The render method then only uses the buckets covering the last 5 minutes' worth of occurrences: 30 buckets of 10 seconds each.

var socket = io.connect('wss://twitter-word-stream.herokuapp.com/'),

    text_nodes = {},
    frame = 0,

    current_bucket = {},
    buckets = [current_bucket],

    bucket_count = 30, // how many buckets to remember (30 x 10s = the last 5 minutes)
    bucket_width = 10; // how many seconds' worth of words each bucket holds

function render() {
  var max = 0,
      words = {},
      displayed_words = [];

  // increment frame counter
  frame++;

  // get counts of words across all buckets
  _.each(buckets, function(bucket){
    _.each(bucket, function(count, word) {
      words[word] = (words[word] || 0) + count;
      if (count > max) max = count;
    });
  });

  // filter them to just the most popular ones

  displayed_words = _.sortBy(_.keys(words), function(word) {
    return max - words[word];
  }).slice(0,30);

  _.each(displayed_words, function(word) {
    var size = words[word] / max,
        text, node;

    // create the text node if need be
    if (!text_nodes[word]) {
      text = document.createTextNode(word);
      node = document.createElement('span');

      // position kind of in the middle somewhere
      var top = 80*Math.random();
      var left = 70*Math.random();

      // give it a random pastelly colour
      node.setAttribute('style', "top: " + top + "%; left: " + left + '%; color: hsla('+360*Math.random()+',50%,50%,0.75)');

      node.appendChild(text);
      document.body.appendChild(node);
      text_nodes[word] = {
        updated: frame,
        node: node
      };
    } else {
      text_nodes[word].updated = frame;
    }

    // clear expired words
    _.each(text_nodes, function(obj, word) {
      if (obj.updated < frame) {
        obj.node.remove();
        delete text_nodes[word];
      }
    });

    // size it relative to its occurrence
    text_nodes[word].node.style.transform = 'scale(' + (0.2 + size*0.8) + ')';
    text_nodes[word].node.style.webkitTransform = 'scale(' + (0.2 + size*0.8) + ')';

  });

}

function rotate_buckets() {

  current_bucket = {};
  buckets.push(current_bucket);

  while (buckets.length >= bucket_count) buckets.shift();

}

setInterval(rotate_buckets, bucket_width*1000);
setInterval(render, 500);

socket.on('tweet', function (data) {
  _.each(data, function(word) {
    current_bucket[word] = (current_bucket[word] || 0) + 1;
  });
});

And there we have the (more or less) finished code, and here it is running on Heroku.

There’s still a few things I’d like to improve when I can:

  • The positioning of the words is random, which often results in excessive overlapping. The translucency helps with that, but it can sometimes be quite bad.
  • It would be nice to have it be a little more customisable, maybe with the source being a hashtag, a user or your timeline instead of the sample stream.

It was fun to spend a couple of hours playing around with some new things; everyone needs to be able to do that occasionally.

What new technologies are you most excited about playing with?


UPDATE: The source for all this can be found on GitHub

GitHub: marcroberts/wordstream - A twitter streaming client to extract words

Top comments (3)

Andy Piper

Very nice, fun visualisation! Are you handling extended Tweets (140+ characters) as well? In theory these should have a truncated: true value and then a full_text field further down the Tweet object.

Marc Roberts

I haven't touched this code for a couple of years now, so no, I'm not considering the full_text of longer tweets. Thanks for pointing this out. I'll make the small change you suggested next time I'm in there.

The code is actually all up on GitHub, I should update the article to include that

Andy Piper

That's nice, yes I found it :-) I might fork and have a go at adding our new version of the sample stream API (coming soon!) when I have a chance.