Today I’ve been reminded of the importance of measuring first, rather than making assumptions.
I thought I’d write a post walking through it - that way at least something useful might come out of it, if it helps someone else.
Background
The last couple of days I’ve been trying to improve test speed. It was fine to start with, but the more tests I added, the longer the runs took. When it got to the point that in some sections each test was taking 600-1300ms, I got fed up enough to do something about it.
Starting point
I suspected it was probably a database thing, but thought I might as well try some basic Node profiling first to see if there was a smoking gun that would help.
The Node.js profiling guide says that --prof will enable the profiler. I wanted to get the profile results from running Mocha, rather than “just” Node. Passing the flag as --v8-prof means Mocha will pass it through to Node:
$ NODE_ENV=test mocha --v8-prof --require test/fixtures.js
$ node --prof-process isolate-0x102d57000-8614-v8.log > processed.txt
Unfortunately nothing obvious showed up there, but the C++ entry points did firm up the database theory.
[C++ entry points]:
ticks cpp total name
3497 72.1% 58.8% T __ZN2v88internal21Builtin_HandleApiCallEiPmPNS0_7IsolateE
1290 26.6% 21.7% T
72% in HandleApiCall - that’s V8’s entry point for calls from JavaScript into native code, and the only API calls we’re making are to the database, so that looks like a good place to start.
Wild goose chase
Unfortunately, this is where I messed up. I forgot about trying to identify what the problem actually was and just started trying to fix Stuff.
- I wasted time testing a change from knex-cleaner to individually deleting the objects using knex("table").del() (there’s a sketch of what that looked like after this list).
- A lot of the tests re-seed everything, and I wasted time trying to speed up table truncation in the seeds.
- I tried changing from PostgreSQL to SQLite for tests.
- I even started wondering about rewriting the seed files.
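For reference, the individual-delete approach from the first bullet looked roughly like this - a sketch with illustrative table names, not the project’s real schema:

// Sketch of the knex-cleaner alternative I tested: delete rows table by table.
// Table names are illustrative; children go before parents so foreign-key
// constraints aren't violated.
async function deleteAllRows(knex) {
  const tables = ["redirects", "campaigns", "reviews", "sites", "users"];
  for (const table of tables) {
    await knex(table).del();
  }
}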
Stopped chasing the goose
Eventually I remembered to measure to see where the problem actually was. Assuming it isn’t the tests themselves, there are two stages here: clean and seed. Let’s try to identify which one is the problem.
exports.dbCleanAndSeed = async function () {
  // knexCleaner and database (the knex instance) are set up elsewhere in this file.
  let cleanFinishedMs, seedFinishedMs, startMs;
  const options = { ignoreTables: ["knex_migrations", "knex_migrations_lock"] };
  startMs = Date.now();
  // Stage 1: wipe every table except the knex migration bookkeeping.
  await knexCleaner.clean(database, options);
  cleanFinishedMs = Date.now();
  // Stage 2: re-run all of the seed files.
  await database.seed.run();
  seedFinishedMs = Date.now();
  console.log("Clean took: %i; seed took %i", cleanFinishedMs - startMs, seedFinishedMs - cleanFinishedMs);
};
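For context, this helper is called from the fixtures file passed to --require, once before every test; a minimal sketch of that hookup (the file layout and export name are assumptions, not the project’s actual code):

// test/fixtures.js (sketch): reset and reseed the database before each test.
const { dbCleanAndSeed } = require("./helpers/db"); // hypothetical location of the helper above

beforeEach(async function () {
  this.timeout(10000); // seeding was slow enough at this point to need the headroom
  await dbCleanAndSeed();
});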
Which did the trick, and told me where I should be looking:
site tests
Clean took: 28; seed took 675
✓ can get the sites page (732ms)
Clean took: 28; seed took 743
✓ get the 'add a new site page' (776ms)
Clean took: 29; seed took 592
✓ add a new site (630ms)
Clean took: 26; seed took 594
✓ add a site and see it on the sites page (628ms)
Clean took: 29; seed took 748
✓ can't add a new site with no creds (779ms)
Clean took: 27; seed took 652
✓ gets 404 for a site that doesn't exist (684ms)
Clean took: 30; seed took 732
✓ can't add a new site with no domain (769ms)
Clean took: 26; seed took 609
✓ can't add a new site with no active value (640ms)
O-kay. So, cleaning - definitely not the problem!
I have five seed files though; I wonder if any one file in particular is the problem? We do have the source code for knex, so let’s find the seed-loading bit and see if it’s easy to instrument.
It looks like _waterfallBatch() in knex/lib/seed/Seeder.js is the function responsible for actually loading the seeds, so let’s do some timestamping there.
async _waterfallBatch(seeds) {
  const { knex } = this;
  const log = [];
  for (const seedPath of seeds) {
    const importFile = require('../util/import-file'); // late import
    const seed = await importFile(seedPath);
    try {
      // Added for instrumentation: time each seed file individually.
      const startMs = Date.now();
      await seed.seed(knex);
      const endMs = Date.now();
      console.log(`${seedPath} took ${endMs - startMs} ms`);
[...]
Run it…
seeds/01_user_seed.js took 147 ms
seeds/02_site_seed.js took 6 ms
seeds/03_review_seed.js took 3 ms
seeds/04_campaign_seed.js took 5 ms
seeds/05_redirect_seed.js took 461 ms
… Oh. Yeah, that’s pretty conclusive.
Looking at the redirect seed file, 05_redirect_seed.js, I see the likely culprit:
const geoIpData = await iplocate(remoteIp);
A geo-IP lookup for every single redirect seeded. That would do it.
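To make the cost concrete, the expensive pattern is roughly this shape (an illustrative sketch - field names and data are made up, and iplocate stands in for the project’s promise-returning geo-IP client):

// Sketch of the expensive pattern: a live geo-IP round-trip for each redirect inserted.
exports.seed = async function (knex) {
  const redirects = [
    { path: "/old-page", remoteIp: "203.0.113.10" },
    { path: "/promo", remoteIp: "198.51.100.7" },
  ];
  for (const redirect of redirects) {
    const geoIpData = await iplocate(redirect.remoteIp); // network call on every insert
    await knex("redirects").insert({ path: redirect.path, country: geoIpData.country });
  }
};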
And in 01_user_seed.js:
return knex('users').insert([
  {
    email: 'johnwatson@bakerstreet.com',
    passwordHash: Bcrypt.hashSync("Sherlock",
      parseInt(process.env.BCRYPT_SALT_ROUNDS))
  }
]);
And a bcrypt hash for every user inserted. That would explain that one.
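bcrypt is deliberately slow, and the work roughly doubles with every extra salt round; a quick standalone way to see that (assuming the bcrypt npm package):

// Quick check of how bcrypt hashing time scales with the number of salt rounds.
const Bcrypt = require("bcrypt");

for (const rounds of [1, 4, 8, 10, 12]) {
  const start = Date.now();
  Bcrypt.hashSync("Sherlock", rounds);
  console.log(`rounds=${rounds}: ${Date.now() - start} ms`);
}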
Conclusion
For the users seed I reduced the Bcrypt salt rounds to 1 - it’s only local testing so it doesn’t need to resist attacks.
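Since the seed already reads the cost factor from BCRYPT_SALT_ROUNDS, the change is really just setting that to 1 in the test environment; a slightly more defensive version of the seed line might look like this (a sketch - the fallback of 12 is an assumption):

// Sketch: cheap bcrypt cost when BCRYPT_SALT_ROUNDS=1 is set for tests, a real cost otherwise.
const saltRounds = parseInt(process.env.BCRYPT_SALT_ROUNDS, 10) || 12;

return knex('users').insert([{
  email: 'johnwatson@bakerstreet.com',
  passwordHash: Bcrypt.hashSync("Sherlock", saltRounds)
}]);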
For the redirects, I implemented a lookup table for the IPs used in the seed file. The results can be seen immediately:
seeds/01_user_seed.js took 9 ms
seeds/02_site_seed.js took 5 ms
seeds/03_review_seed.js took 5 ms
seeds/04_campaign_seed.js took 5 ms
seeds/05_redirect_seed.js took 8 ms
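For reference, the lookup table for the redirects is roughly this shape (a sketch - the IPs and fields are illustrative, not the real seed data):

// Sketch: a static map of the IPs used by the seed data, replacing the live geo-IP lookups.
const GEO_IP_FIXTURES = {
  "203.0.113.10": { country: "GB", city: "London" },
  "198.51.100.7": { country: "US", city: "Seattle" },
};

// Drop-in stand-in for the remote lookup; the stub fallback means a new IP in the
// seed data can't quietly reintroduce network calls.
async function lookupGeoIp(remoteIp) {
  return GEO_IP_FIXTURES[remoteIp] || { country: "ZZ", city: "Unknown" };
}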
And that’s the end. Hope that helps someone else!