Starting from the official ESLint documentation:
ESLint is a configurable JavaScript linter. It helps you find and fix problems in your JavaScript code. Problems can be anything from potential runtime bugs, to not following best practices, to styling issues.
This blog post focuses on the internal mechanisms of ESLint, deconstructing the tool to build a complete mental model of it.
Let's start from the definition itself; in the first sentence we can find three key points:
- Configurable
- JavaScript: ESLint works with JavaScript/TypeScript. However, the concepts here are language-agnostic.
- Linter
Linter
Basically, a linter is a static code analysis tool. It examines the code you have written and analyzes it to spot issues, from functional to structural ones.
Some of the advantages of using a linter could be:
- Reduced errors and improved quality of the code.
- A lot of developers’ time saved.
- Code readability improved. Unified style across codebase (better code review experience).
- Automatic fixing of some mistakes.
- and many more...
The mechanism of ESLint is divided into two phases: a first parsing phase followed by an action phase.
1. Parsing
The main job of the parsing phase is to understand the source code and transform it into a format that allows its manipulation, analysis, or transformation. In compiler theory this format is called an Intermediate Representation (IR).
One of the most common forms of Intermediate Representation is the Abstract Syntax Tree (AST), the format on which ESLint relies.
The AST is not built directly from the source code; an intermediate step, called tokenization, is needed. During that phase, called lexical analysis, the source code is transformed into a list of tokens by a component called a tokenizer (or scanner). The parser then interprets those tokens and turns them into an AST during the syntax analysis.
Usually the lexical analysis and the syntax analysis are done by two different components (tokenizer and parser), but some parsers do both (in the end, it's just naming).
The scenario just described is actually the first phase of a compiler's work (the frontend phase), which generates the IR on which optimizations and transformations will take place (during the backend phase).
If you have some free time, take a look at this Stanford course about compilers: Compilers course - Stanford
Let's zoom in on the lexical analysis, trying to address it from a more practical point of view, analyzing this piece of code.
const name = "George";
Let's write a deliberately simple tokenizer:
enum TokenType {
  Keyword = "Keyword",
  Identifier = "Identifier",
  String = "String",
}

interface Token {
  value: string;
  type: TokenType;
}

const KEYWORDS = ["const", "="]; // and many others...
const STRING_REGEX = /["].*["]|['].*[']/;

const getTokenType = (token: string): TokenType => {
  if (KEYWORDS.includes(token)) {
    return TokenType.Keyword;
  }
  // Checking for a string.
  if (STRING_REGEX.test(token)) {
    return TokenType.String;
  }
  return TokenType.Identifier;
};

const tokenize = (input: string) => {
  const tokens: Token[] = [];
  let readingBuffer = "";
  for (let i = 0; i < input.length; i++) {
    const character = input[i];
    if (character === " " || character === ";") {
      /**
       * Whitespace or terminal symbol found.
       * We can evaluate the token (skipping empty buffers,
       * e.g. when two separators are adjacent).
       */
      if (readingBuffer !== "") {
        const type = getTokenType(readingBuffer);
        tokens.push({ type, value: readingBuffer });
        readingBuffer = "";
      }
      continue;
    }
    readingBuffer += character;
  }
  return tokens;
};
Let's try this tokenizer on our piece of code:
const INPUT = 'const name = "George";';
console.log(tokenize(INPUT));
This is the output:
[
  {
    "type": "Keyword",
    "value": "const"
  },
  {
    "type": "Identifier",
    "value": "name"
  },
  {
    "type": "Keyword",
    "value": "="
  },
  {
    "type": "String",
    "value": "\"George\""
  }
]
So the result is a list of categorized words, so-called tokens. Each token can carry several pieces of information; the bare minimum is two:
- The word itself (i.e., the value field), called the lexeme;
- The token type.
A tokenizer could throw an error if an invalid token is found (e.g., an Identifier starting with a number).
Ok, now it's the parser's turn.
But before diving into its real work, it's necessary to define the language in a formal way (actually, if you're going to write a new programming language, its formal definition should be the first thing you design).
To do that, formal grammars are used. Check the Wikipedia page on formal grammars and the Chomsky hierarchy.
Now, those concepts are an entire world to explore, and they are out of the scope of this article. I suggest reading Chapters 4, 5, and 6 of the book Crafting Interpreters by Robert Nystrom for a broader (but still practical) understanding of these subjects.
Another great practical resource is The SuperTiny Compiler. To explore these topics from a theoretical point of view, you can find plenty of resources in books or online courses.
Focusing again on ESLint, the parser used by the linter is called Espree. This is an in-house parser built by the ESLint folks to fully support ECMAScript 6 and JSX on top of the already existing Esprima.
The Espree module provides APIs for both tokenization and parsing that you can easily try out.
const espree = require("espree");
const code = `const name = "George"`;
console.log(espree.tokenize(code, { ecmaVersion: 6 }));
console.log(espree.parse(code, { ecmaVersion: 6 }));
These are the output tokens:
[
  {
    "type": "Keyword",
    "value": "const",
    "start": 0,
    "end": 5
  },
  {
    "type": "Identifier",
    "value": "name",
    "start": 6,
    "end": 10
  },
  {
    "type": "Punctuator",
    "value": "=",
    "start": 11,
    "end": 12
  },
  {
    "type": "String",
    "value": "\"George\"",
    "start": 13,
    "end": 21
  }
]
and this is the AST built from the tokens. As you can see, the tokens now have meaning assigned to them. That's possible thanks to the grammar definition of the JavaScript language.
{
  "type": "Program",
  "start": 0,
  "end": 21,
  "body": [
    {
      "type": "VariableDeclaration",
      "start": 0,
      "end": 21,
      "declarations": [
        {
          "type": "VariableDeclarator",
          "start": 6,
          "end": 21,
          "id": {
            "type": "Identifier",
            "start": 6,
            "end": 10,
            "name": "name"
          },
          "init": {
            "type": "Literal",
            "start": 13,
            "end": 21,
            "value": "George",
            "raw": "\"George\""
          }
        }
      ],
      "kind": "const"
    }
  ],
  "sourceType": "script"
}
The structure of the AST is defined by the ESTree specification. The idea is that every node of the tree has its own type, and each type of node is characterized by a particular structure:
- Some information is common to all node types. Look at the start and end properties: they indicate the location of the code represented by that subtree. This kind of information is especially useful when printing warnings or errors, to help developers find the issues in their codebase.
- Then, each node type has its own properties that describe it. Let's look at a couple of examples:
  - Program is the root node, and it has the body and sourceType properties. The body contains the whole subtree describing the code, while the sourceType indicates whether the source has been parsed as an ES6 module or just a script.
  - VariableDeclaration is the node that identifies one or more inline declarations. As you can see, one of its properties is called declarations. Note the plural: the parser knows that in JavaScript it's possible to declare multiple variables on the same line! Also, the kind property stores the keyword used to declare those variables (e.g., const, let, var). That could be useful, for example, to warn about the usage of var, or to raise errors at coding time if the developer reassigns a constant.
  - VariableDeclarator is a single declaration found inside the VariableDeclaration node (note that the names are different). Here we have both the id and init properties populated:
    - id is of type Identifier and indicates the identifier used for this variable declaration. This could be useful, for example, to report errors when invalid symbols are used in an identifier (e.g., if it starts with a number).
    - init, of type Literal, represents the initialization of the variable. As you might expect, for a VariableDeclarator node this property is not mandatory: in JavaScript it's possible to declare a variable without immediately initializing it.
All these considerations are formalized in the ESTree specification.
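To make the plural declarations property concrete, here is a hand-written sketch of the ESTree shape produced by a statement like const a = 1, b = 2; (this is not actual parser output; the start/end positions are omitted for brevity):

```javascript
// Hand-written sketch of the ESTree shape for: const a = 1, b = 2;
// (start/end positions omitted for brevity)
const declarationNode = {
  type: "VariableDeclaration",
  kind: "const",
  declarations: [
    {
      type: "VariableDeclarator",
      id: { type: "Identifier", name: "a" },
      init: { type: "Literal", value: 1, raw: "1" },
    },
    {
      type: "VariableDeclarator",
      id: { type: "Identifier", name: "b" },
      init: { type: "Literal", value: 2, raw: "2" },
    },
  ],
};

// One statement, two declarators:
console.log(declarationNode.declarations.length); // 2
```

A single VariableDeclaration node holds both declarators, while the kind property is stored once at the declaration level.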
Perfect, so now the parsing phase is complete, and we should have a pretty solid (though simplified) mental model of it. We have our AST, and it's the action phase's turn to work on it!
As we said, the AST is a tree, which means that to operate on it we need to traverse it. What does that mean? Simply travelling around its nodes and doing whatever we want. This is the moment where endless possibilities open up before our eyes. How do we want to traverse it? What do we want to do? How do we want to do it? Again, we find ourselves in front of Computer Science theory; complete courses are built around these concepts. In our case, though, we are bound to the architecture of ESLint.
The linter uses estraverse to travel around the tree, following two main concepts:
- Visitor Pattern: this is a behavioral design pattern. The idea here is to leave to the consumer (us) the decision of what to do for each node (type). The algorithm just traverses the tree, exposing the current node to us. Each node can have a different structure (we just saw an example above), and it will be our duty to differentiate actions based on node type.
- Depth First Search: while talking about the Visitor Pattern, we used the word "algorithm". Ouch! We just talked about traversing, but how do we do it? There is an algorithm in charge of that (and a lot of choices are available). In the case of estraverse, that algorithm is Depth First Search. Take a look at this visualization to grasp more about the algorithm and read this simple article to understand the process of DFS. One thing that really interests us is the backtracking nature of the algorithm; in a moment we will understand why.
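To make these two concepts concrete, here is a minimal sketch of a depth-first traversal with enter/leave callbacks. This is not estraverse's actual implementation, just a toy version of the underlying mechanism:

```javascript
// A toy depth-first traversal with visitor callbacks.
// It recurses into any property value that looks like a node (has a `type`),
// calling `enter` on the way down and `leave` on the way back up.
const isNode = (value) =>
  value !== null && typeof value === "object" && typeof value.type === "string";

const traverse = (node, visitor) => {
  visitor.enter?.(node);
  for (const value of Object.values(node)) {
    const children = Array.isArray(value) ? value : [value];
    for (const child of children) {
      if (isNode(child)) traverse(child, visitor);
    }
  }
  visitor.leave?.(node);
};

// Try it on a tiny hand-written AST for: const a = 2;
const ast = {
  type: "Program",
  body: [
    {
      type: "VariableDeclaration",
      kind: "const",
      declarations: [
        {
          type: "VariableDeclarator",
          id: { type: "Identifier", name: "a" },
          init: { type: "Literal", value: 2 },
        },
      ],
    },
  ],
};

const log = [];
traverse(ast, {
  enter: (node) => log.push(`enter ${node.type}`),
  leave: (node) => log.push(`leave ${node.type}`),
});
console.log(log.join("\n"));
```

Note how the leave callbacks fire in reverse nesting order: that is the backtracking behavior we will rely on below.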
So, we said that thanks to the Visitor Pattern we will be able to execute functions for each node of the AST while having all the information about that node. Also, thanks to the DFS algorithm, we get backtracking traversal too. That means we will be able to execute functions not only when entering a node, but also when leaving it. We can try to visualize this: let's use estraverse to traverse the AST generated by espree from this code snippet:
const a = 2;
const b = 10;
const sum = (a, b) => {
return a + b;
};
console.log(a + b);
Let's generate the AST:
import { parse } from "espree";

// `code` is a string holding the snippet above.
/**
 * We need the ecmaVersion option to support the 'const' keyword.
 */
const ast = parse(code, { ecmaVersion: 6 });
A rough visualization of the generated tree is as follows (you can use one of the various tools online to visualize it better):
Program
-- VariableDeclaration
---- VariableDeclarator
------ "a"
------ "2"
-- VariableDeclaration
---- VariableDeclarator
------ "b"
------ "10"
-- VariableDeclaration
---- VariableDeclarator
------ "sum"
------ ArrowFunctionExpression
-- ExpressionStatement
---- CallExpression
------ MemberExpression
-------- "console"
-------- "log"
------ arguments
-------- "+"
---------- "a"
---------- "b"
Ok, now let's traverse it with estraverse and print, for each node, when it is entered and when it is left.
import { traverse } from "estraverse";

const ast = parse(code, { ecmaVersion: 6 });

traverse(ast, {
  enter: (node, parent) => {
    console.log("entering ", node.type);
  },
  leave: (node, parent) => {
    console.log("leaving ", node.type);
  },
});
This is the (slightly prettified) output:
entering Program
-- entering VariableDeclaration
---- entering VariableDeclarator
------ entering Identifier
------ leaving Identifier
------ entering Literal
------ leaving Literal
---- leaving VariableDeclarator
-- leaving VariableDeclaration
-- entering VariableDeclaration
---- entering VariableDeclarator
------ entering Identifier
------ leaving Identifier
------ entering Literal
------ leaving Literal
---- leaving VariableDeclarator
-- leaving VariableDeclaration
-- entering VariableDeclaration
---- entering VariableDeclarator
------ entering Identifier
------ leaving Identifier
------ entering ArrowFunctionExpression
-------- entering Identifier
-------- leaving Identifier
-------- entering Identifier
-------- leaving Identifier
-------- entering BlockStatement
---------- entering ReturnStatement
------------ entering BinaryExpression
-------------- entering Identifier
-------------- leaving Identifier
-------------- entering Identifier
-------------- leaving Identifier
------------ leaving BinaryExpression
---------- leaving ReturnStatement
-------- leaving BlockStatement
------ leaving ArrowFunctionExpression
---- leaving VariableDeclarator
-- leaving VariableDeclaration
-- entering ExpressionStatement
---- entering CallExpression
------ entering MemberExpression
-------- entering Identifier
-------- leaving Identifier
-------- entering Identifier
-------- leaving Identifier
------ leaving MemberExpression
------ entering BinaryExpression
-------- entering Identifier
-------- leaving Identifier
-------- entering Identifier
-------- leaving Identifier
------ leaving BinaryExpression
---- leaving CallExpression
-- leaving ExpressionStatement
leaving Program
Ooookay, from this output the concept of backtracking should now be clear. Now we have the theory behind the action phase of the linter.
Let's recap what we have seen by now. Recalling the starting three points:
- Linter: a linter has two phases: parsing and action.
  - The parsing phase creates a useful representation we can work on: the Abstract Syntax Tree;
  - The action phase traverses the tree and performs actions on each of its nodes.
- JavaScript: ESLint works on JavaScript.
- Configurable: we haven't talked about this yet. Let's address it now.
Configurable
ESLint is labeled as configurable because its action phase is based upon two completely customizable concepts:
- Rules
I feel like the definition from the ESLint documentation is pretty straightforward.
Rules are the core building block of ESLint. A rule validates if your code meets a certain expectation, and what to do if it does not meet that expectation. Rules can also contain additional configuration options specific to that rule.
To define it in a way that links better with what we have just discussed: a rule is a function that subscribes to a node type exposed by the Visitor Pattern of the traversing function.
- Plugins
An ESLint plugin is just a set of rules (and other things that ESLint offers) that we can share.
Let's say, for example, that in your company there are a couple of rules that your manager wants you to follow. You can write those rules as ESLint ones, create a plugin, and call it something like eslint-plugin-mycompany. Now this plugin can be downloaded and plugged in by new members of the team.
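As a sketch of what such a plugin boils down to (the no-foo rule below is a made-up placeholder, not a real ESLint rule), a plugin is essentially an object mapping rule names to rule definitions:

```javascript
// Minimal sketch of an ESLint plugin module (e.g. eslint-plugin-mycompany).
// The rule name "no-foo" is a made-up placeholder for illustration.
const noFooRule = {
  meta: {
    docs: { description: "Disallow identifiers named 'foo'." },
  },
  create(context) {
    return {
      Identifier(node) {
        if (node.name === "foo") {
          context.report({ node, message: "Avoid the name 'foo'." });
        }
      },
    };
  },
};

// A plugin is essentially a map of rule names to rule definitions;
// in a real package this object would be the module's export.
const plugin = {
  rules: {
    "no-foo": noFooRule,
  },
};

// Simulating by hand what ESLint does when the rule runs:
const reports = [];
const visitors = plugin.rules["no-foo"].create({
  report: (problem) => reports.push(problem),
});
visitors.Identifier({ type: "Identifier", name: "foo" });
visitors.Identifier({ type: "Identifier", name: "bar" });
console.log(reports.length); // 1
```

We will see the anatomy of a single rule (meta and create) in detail in the practical exercise below.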
Practical Exercise
I like theory, but only when practice follows it. Let's create a custom rule! Now, the main part of this article was understanding what's going on under the hood of ESLint. I really care that this practical section is present in the article, but at the same time I will not go too deep into all the different options for writing custom rules. Please refer to the official documentation for a more in-depth tutorial.
From ESLint doc:
Create a custom rule if the ESLint built-in rules and community-published custom rules do not meet your needs. You might create a custom rule to enforce a best practice for your company or project, prevent a particular bug from recurring, or ensure compliance with a style guide.
Before creating a custom rule that isn’t specific to your company or project, it’s worth searching the web to see if someone has published a plugin with a custom rule that solves your use case. It’s quite possible the rule may already exist.
The two main points here are:
- you should write custom rules "to enforce a best practice for your company or project, prevent a particular bug from recurring, or ensure compliance with a style guide".
- before writing custom rules "it’s worth searching the web to see if someone has published a plugin with a custom rule that solves your use case. It’s quite possible the rule may already exist".
For example, I remember an entire week in the past when we found multiple bugs caused by the same issue: falling back from a possibly undefined number with the || operator. That's not the best thing to do, because || treats 0 as a "missing" value. You should go with the nullish coalescing operator (??) instead. The first thing that popped into my mind was: we could write an ESLint rule for that! I will leave it to you as an exercise, because this article focuses on ESLint applied to plain JavaScript, while we would need TypeScript to write that rule (we have to check that the left operand's type is a number).
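The difference between the two operators is easy to demonstrate: || falls back on any falsy value (including 0), while ?? falls back only on null or undefined:

```javascript
// Falling back with || treats 0 as "missing", which is what caused the bugs:
const count = 0;
console.log(count || 10); // 10  (0 is falsy, so the fallback wins)
console.log(count ?? 10); // 0   (0 is neither null nor undefined)

// ?? only kicks in for null and undefined:
console.log(undefined ?? 10); // 10
console.log(null ?? 10);      // 10
```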
Anyway, let's start.
First things first: an ESLint rule is just a JavaScript file with a precise structure:
module.exports = {
  meta: {
    // metadata of the rule
  },
  create(context) {
    return {
      // object that defines the real action! 😎
    };
  },
};
So, the rule needs to export an object with two properties:
- meta: contains all the metadata about the rule, such as its description or type. E.g., is it an error? Is it a warning? Is it a rule that can auto-fix the code? (Yes, ESLint can be launched in an auto-fixing mode that allows it to automatically edit code.) Please check the documentation for an exhaustive list.
- create: that's where the magic happens! This function must return an object with as many properties as you want. Each property is a function that will be called while visiting nodes during traversal (we can call those functions visitors). As we know, in JavaScript a property has a key and a value. In this case, the value is a function, the one that will be called during the visit: the action! The key can be one of three options:
  - A plain node type or a selector: the function will be called on that node type (or selector match) while going down the tree;
  - A plain node type or a selector followed by the :exit keyword: the function will be called on that node type (or selector match) while going back up the tree;
  - An event name from the ones provided by ESLint while performing code path analysis. That's a tricky but powerful tool provided by the linter. The idea is to give the developer the opportunity to follow and track all the different paths that code execution can take (remember that our code has if statements, loops, exceptions, early returns, and so on). It would be worth diving deeper into this, but we will not need it in our example; for that reason I suggest readers deepen their knowledge of this tool in the official documentation.
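To make the first two options concrete, here is a sketch of a visitors object using both a plain node type and its :exit variant. The traversal calls are simulated by hand, since wiring up a full linter run is out of scope here:

```javascript
// A sketch of the object returned by create(): one visitor per key.
// "VariableDeclaration" fires on the way down the tree,
// "VariableDeclaration:exit" fires on the way back up.
const events = [];

const visitors = {
  VariableDeclaration(node) {
    events.push(`enter ${node.kind}`);
  },
  "VariableDeclaration:exit"(node) {
    events.push(`exit ${node.kind}`);
  },
};

// Simulating by hand what the linter does while traversing
// a `const` declaration node:
const fakeNode = { type: "VariableDeclaration", kind: "const" };
visitors.VariableDeclaration(fakeNode);
visitors["VariableDeclaration:exit"](fakeNode);

console.log(events); // [ 'enter const', 'exit const' ]
```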
As you can see from the snippet above, the create function has a parameter: the Context object. This object contains information related to the rule's context, from the rule's metadata to the source code that the rule is analyzing. More importantly, this object provides the report method used to report ESLint errors or warnings.
Ooookay, we can start with the hands-on.
I want to suggest two tools that I found useful when trying to understand what is going on during a rule's action phase. The first one is AST Explorer, which we can use to check the type of the node we are interested in. The other one is StackBlitz: you can code online (starting from an ESLint template) to try out custom rules on the fly. There is an issue, though: to use a custom rule, it's mandatory to publish a custom plugin that wraps it. To bypass this constraint, it's possible to install an npm package called eslint-plugin-local-rules in the coding environment. You can check its NPM page; it's pretty straightforward.
The rule that I want to write will be called not-allows-underscore: the idea is to forbid the use of underscores when declaring variables or functions. It's a really simple rule, but it should be enough to see in action the concepts we discussed earlier. The first thing I would do is go to AST Explorer, write down some code that declares variables and functions (both standard and arrow ones), and take a look at which type of node encodes the identifier. Doing that, I found out that the node type of interest is Identifier, what a surprise! 🤣 In particular, the node holds the string used as the identifier in its name property.
Ok, so we will just need to register a visitor on that node type and check its name property. Here is the written rule:
"use strict";
const checkForUnderscores = (identifier) => {
return identifier.includes("_");
};
module.exports = {
meta: {
docs: {
description:
"This rule wants to report every usage of the underscore character when declaring variables or function.",
recommended: false,
},
},
create: function (context) {
return {
Identifier: (node) => {
const containsUnderscores = checkForUnderscores(node.name);
if (containsUnderscores) {
context.report({
message: "An underscore has been used in an identifier.",
node,
});
}
},
};
},
};
Running the rule on this piece of invalid code:
const this_is_a_function = () => {
  console.log("dummy one!");
};

const my_variable = "dummy one!";

function this_is_another_function() {
  console.log("dummy one!");
}
will give, among others, these errors:
1:7 error An underscore has been used in an identifier
5:7 error An underscore has been used in an identifier
7:10 error An underscore has been used in an identifier
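Assuming the eslint-plugin-local-rules setup mentioned earlier, enabling the rule in a classic .eslintrc.json might look roughly like this (the exact wiring depends on that package's documentation):

```json
{
  "plugins": ["local-rules"],
  "rules": {
    "local-rules/not-allows-underscore": "error"
  }
}
```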
That's it! I know it's a really simple rule, but I feel it's enough to build a good mental model of ESLint. In the end, that was the goal of this post.
📚 References
- ESLint official documentation
- Esprima official documentation
- Leveling Up One’s Parsing Game With ASTs
- How linting and ESLint improve code quality
- Javascript AST Visualizer
- ESTree: Javascript AST specifications
- Understanding Compilers - For Humans (v2)
- Linting and ESLint: Write Better Code
- Visitor Pattern