This post was updated on 27 April 2023 to include information about logging with Winston and AppSignal.
Good logging practices are crucial for monitoring and troubleshooting your Node.js servers. They help you track errors in the application, discover performance optimization opportunities, and carry out different kinds of analysis on the system (such as in the case of outages or security issues) to make critical product decisions.
Even though logging is an essential aspect of building robust web applications, it's often ignored or glossed over in discussions about development best practices. Knowing what and how to log is tricky because it's often difficult to understand what information you’ll need during troubleshooting.
Since we understand how essential logging is, we need to practice good logging habits. We should accumulate logs that make it easier to diagnose the root cause of problems and solve potential issues before they impact end-users.
This article will outline some best practices to follow when writing logs in a Node.js application.
1. Use a Node.js Logging Library
Node.js developers tend to rely on the runtime's console methods (such as `console.log()`) to log events, since they are built into the runtime and offer a familiar API similar to the JavaScript console mechanism provided by web browsers.
While `console.log()` has its uses, it is not an adequate solution for implementing logging in a production application. It lacks support for features and configuration options that are considered essential to a good logging setup. For example, console methods do not support log levels like `warn`, `error`, or `debug`, despite the provision of methods like `console.warn()`, `console.error()`, and `console.debug()`, respectively. Those are simply functions that print to the standard output or standard error without indicating log severity.
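To see the gap concretely, here is a minimal sketch of the kind of wrapper you would have to build yourself just to get severity into the output (the `log` function below is a hypothetical shim, not part of any library):

```javascript
// A hypothetical shim illustrating what console methods lack: with plain
// console.warn()/console.error(), the severity exists only in the method
// name and never appears in the emitted entry.
function log(level, message) {
  const entry = { level, message, timestamp: new Date().toISOString() };
  console.log(JSON.stringify(entry)); // still just stdout; no filtering, no transports
  return entry;
}

log('warn', 'Disk space is running low');
```

Even this toy version only scratches the surface of what a dedicated library provides.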
A good logging library provides a robust feature set that makes it easier to centralize, format, and distribute the logs to fit your needs. For example, a typical logging framework will provide various options for where to output log data (such as the terminal, filesystem, or a database), while also supporting the ability to send the logs over HTTP if you want to transport your log entries to a log management service.
There are three major concerns for choosing a suitable logging library: recording, formatting, and storing messages. You need to make sure that your library of choice addresses all three concerns in a satisfactory manner. Another critical consideration for selecting a logging library is performance. Since the logger will be used a lot throughout the codebase, it can harm your application's runtime performance. Therefore, you should also investigate the performance characteristics of a library, and see how it compares to alternatives.
In the Node.js ecosystem, there are several popular options to consider. Most of them offer similar features, but they also have their differences — so it's vital that you try them out yourself and see which one caters to your use case the best:
- Winston — The most popular logging library, with support for multiple transports. This allows you to easily configure your preferred storage location for your logs.
- Pino — Pino's biggest attraction is its speed. It claims to be up to five times faster than alternatives, in many cases.
- Bunyan — Another feature-rich logging framework that outputs in JSON by default and provides a CLI tool for viewing your logs.
- Roarr — Roarr is a different kind of logger that works in Node.js and the browser.
For the rest of the article, we'll use `winston` to demonstrate the different concepts. This is not necessarily an endorsement of `winston`; it's only being used here because it's the most popular logging framework for Node.js. You can follow along by installing `winston` in your project through `npm`:

```shell
npm install winston
```
2. Use the Correct Log Levels
If you've spent a reasonable amount of time in the software industry, especially supporting production applications, you may have observed the use of log levels, at least in passing. They provide a way to differentiate between the types of events in a system and add context to how important each event is. If you correctly utilize log levels in your application, it will be easy to distinguish between critical events that need to be immediately addressed versus purely informative events.
Although logging systems give different names to severity levels, the concepts remain largely the same. Here are the most common log levels that you are likely to encounter, regardless of the logging framework you choose (in decreasing order of severity):
- FATAL: Used to represent a catastrophic situation — your application cannot recover. Logging at this level usually signifies the end of the program.
- ERROR: Represents an error condition that halts a specific operation, but not the overall system. For example, you can log at this level when a third-party API returns errors.
- WARN: Indicates runtime conditions that are undesirable or unusual, but not necessarily errors. An example could be using a backup data source when the primary source is unavailable.
- INFO: Info messages are purely informative. Events that are user-driven or application-specific may be logged at this level. A common use of this level is to log interesting runtime events, such as the startup or shutdown of a service.
- DEBUG: Used to represent diagnostic information that may be needed for troubleshooting.
- TRACE: Captures every possible detail about an application's behavior during development.
The `winston` library uses the following log levels by default, with `error` being the most severe and `silly` being the least:

```javascript
{
  error: 0,
  warn: 1,
  info: 2,
  http: 3,
  verbose: 4,
  debug: 5,
  silly: 6
}
```
If the defaults do not suit your needs, you can change them while initializing a custom logger. For example, you can instead use the log levels discussed above.
```javascript
const { createLogger, format, transports } = require('winston');

const logLevels = {
  fatal: 0,
  error: 1,
  warn: 2,
  info: 3,
  debug: 4,
  trace: 5,
};

const logger = createLogger({
  levels: logLevels,
  transports: [new transports.Console()],
});
```
When you want to log a message, you can reference the desired level directly on the custom logger, as shown below:
```javascript
logger.info('System launch'); // {"message":"System launch","level":"info"}
logger.fatal('A critical failure!'); // {"message":"A critical failure!","level":"fatal"}
```
Winston also allows you to define a `level` property on the logger and on each transport, which specifies the maximum level of messages that should be logged. For example, you can run your programs at the `info` level by default, then switch to the `debug` or `trace` level when you need to troubleshoot an issue or deploy to a testing environment. You should control this setting through an environment variable.
```javascript
const logger = winston.createLogger({
  level: 'warn',
  levels: logLevels,
  transports: [
    new transports.Console({ level: 'info' }),
  ],
});
```
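The threshold rule itself is simple: Winston emits a message only if its level's numeric value is less than or equal to the configured level's value. Here is a standalone sketch of that rule using the custom levels from above, driven by a hypothetical `LOG_LEVEL` environment variable (the variable name is an assumption, not a Winston convention):

```javascript
const logLevels = { fatal: 0, error: 1, warn: 2, info: 3, debug: 4, trace: 5 };

// Mirrors Winston's rule: emit only when the message's severity number
// is less than or equal to the configured threshold's number.
function shouldLog(messageLevel, configuredLevel) {
  return logLevels[messageLevel] <= logLevels[configuredLevel];
}

// LOG_LEVEL is an assumed variable name; default to 'info' in production.
const threshold = process.env.LOG_LEVEL || 'info';

console.log(shouldLog('error', threshold)); // true when the threshold is 'info'
console.log(shouldLog('debug', threshold)); // false when the threshold is 'info'
```

Flipping `LOG_LEVEL` to `debug` or `trace` at deploy time is all it takes to turn on verbose output without touching the code.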
3. Use Structured Logging
When defining how your log messages look, the priority should be to make your log entries easy to read for both humans and machines.
One of the primary goals of logging is to enable post-mortem debugging, and this will involve reading log entries to reconstruct the steps that led to an event in the system. Having human-readable log entries will make this thankless task much easier for developers and sysadmins. It's also important to use a structured format that is easy to parse by machines. This allows for some automated processing on the logs (such as for alerting or auditing purposes).
JSON is a universal favorite for structured log entries because it is ubiquitous and easily readable by humans. It is also highly machine-readable and easily converted to other formats, even when working with other programming languages.
When logging in JSON, it's necessary to use a standard schema so that the semantics of each field are clearly defined. This also makes it easy to find what you're looking for when analyzing log entries.
Winston outputs a JSON string by default with two fields: `message` and `level`. The former contains the text that is being logged, while the latter signifies the log level. Customizing the output is straightforward through `winston.format`, which uses logform to implement its different formats. For example, if you wanted to add a timestamp to each log entry, you can do so by combining the `timestamp` and `json` formats, as shown below:
```javascript
const { createLogger, format, transports } = require('winston');

const logger = createLogger({
  format: format.combine(format.timestamp(), format.json()),
  transports: [new transports.Console({})],
});
```
This will produce log entries in the following format:
```
{"message":"Connected to DB!","level":"info","timestamp":"2021-07-28T22:35:27.758Z"}
{"message":"Payment received","level":"info","timestamp":"2021-07-28T22:45:27.758Z"}
```
Ensure you read the logform docs to learn about the predefined formats and how to create your own custom formats.
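As an illustration of what a custom format boils down to, here is a plain function with the same shape as a logform `printf` template. In Winston you would wrap it with `format.printf()` and combine it with `format.timestamp()`; the function itself is just an illustrative sketch:

```javascript
// The same signature a logform printf template receives: an `info` object
// carrying level, message, and any fields added by earlier formats in the chain.
function formatEntry(info) {
  return `${info.timestamp} [${info.level.toUpperCase()}] ${info.message}`;
}

// In Winston: format.combine(format.timestamp(), format.printf(formatEntry))
console.log(formatEntry({
  level: 'info',
  message: 'Connected to DB!',
  timestamp: '2021-07-28T22:35:27.758Z',
}));
// -> 2021-07-28T22:35:27.758Z [INFO] Connected to DB!
```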
4. Write Descriptive Messages
Log entries should adequately describe the events that they represent. Each message should be unique to the situation and should clearly explain the event that occurred at that point. In the event of an emergency, your log entries may be the only source of information to help you understand what happened, so it's important to get this aspect of logging right!
Here's an example of an inadequate log entry being used to communicate a request failure:
```
Request failed, will retry.
```
The message above doesn't provide any insights into:
- The specific request that failed
- The reason for its failure
- The length of time before the request is retried
We can probably find out the answers to some of these questions by looking at other places such as other log entries or even the code itself. However, it is better to make the log entry more valuable on its own, through a more descriptive message:
```
"POST" request to "https://example.com/api" failed. Response code: "429", response message: "too many requests". Retrying after "60" seconds.
```
This second message is much better because it provides adequate information about the failed request, including the status code and response message, and also notes that the request will be retried in 60 seconds. If all your messages are just as descriptive, you'll have a more pleasant time when attempting to make sense of your logs. Other examples of good log messages include the following:
```
Status of task id "1234" changed from "IN_PROGRESS" to "COMPLETED".
SomeMethod() processed "100" records in "35ms".
User registration failed: field "email" is not a valid email address; field "password" is below the minimum 8 characters.
```
When composing log messages, include all the relevant details pertaining to the event without being unnecessarily verbose. This will prevent other log readers (which could include your future self) from being overwhelmed with too much information to sift through. Your log messages should also be able to stand on their own; don't rely on a previous message's content to provide the context for a later entry.
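One way to keep messages consistently descriptive is to build them from the event's data rather than writing them ad hoc. A minimal sketch, where the function and field names are illustrative rather than from any library:

```javascript
// Builds a retry message that stands on its own: the request, the failure
// details, and the retry delay are all embedded in the message text.
function describeRetry({ method, url, status, statusText, retryAfterSecs }) {
  return `"${method}" request to "${url}" failed. Response code: "${status}", ` +
    `response message: "${statusText}". Retrying after "${retryAfterSecs}" seconds.`;
}

console.log(describeRetry({
  method: 'POST',
  url: 'https://example.com/api',
  status: 429,
  statusText: 'too many requests',
  retryAfterSecs: 60,
}));
```

Centralizing the template this way also keeps equivalent events worded identically, which makes the logs easier to search and filter.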
5. Add the Right Amount of Context to Your Logs
Besides writing a descriptive log message, you also need to include the right amount of context in the log entry. Context makes it possible to quickly reconstruct the actions leading up to an event. Add basic information to the log, such as the timestamp of the event and the method where it occurred (or a stack trace, in the case of errors). You should also add data points relevant to the flow of the operation that triggered the event. These data points may be generated at different parts of the operation flow and aggregated at the point of logging.
In the context of a billing service, log entries generated by the system could include several data points, including:
- Session identifiers
- Usernames and IDs
- Product or transaction identifiers
- The current page that the user is on
You can use each of the above data points to track a user's flow through an entire checkout process. If an important event occurs, the available data will be automatically appended to the log output, and it will be possible to identify:
- The circumstances that led to the event (such as the user who experienced the event)
- The page where it occurred
- The transaction and product IDs that triggered the event
These data points also let you filter log entries based on a common identifier such as the user id or product id.
Winston provides the ability to add global metadata (such as the component or service where an event occurred) to every generated log entry. In a complex application, this information in your logs is helpful for troubleshooting issues because it immediately directs you to the point of failure.
You can configure this when creating the logger for the component or service:
```javascript
const logger = createLogger({
  format: format.combine(format.timestamp(), format.json()),
  defaultMeta: {
    service: 'billing-service',
  },
  transports: [new transports.Console({})],
});
```
A `service` field will be included in all logs created by the `logger` object:

```
{"message":"Order \"1234\" was processed successfully","level":"info","service":"billing-service","timestamp":"2021-07-29T10:56:14.651Z"}
```
To add metadata to individual entries, you need to create a context or metadata object that you can pass around throughout the flow of an operation so that the data is accessible at logging points. You can also utilize the concept of child loggers to add metadata at logging points:
```javascript
const ctx = {
  userId: '090121',
  productId: 'creme-de-la-creme',
};

logger.child({ context: ctx }).info('Order "1234" was processed successfully');
// {"context":{"userId":"090121","productId":"creme-de-la-creme"},"message":"Order \"1234\" was processed successfully","level":"info","service":"billing-service","timestamp":"2021-07-29T12:20:13.249Z"}
```
6. Avoid Logging Sensitive Information
Regardless of whether you're in an industry with strict rules around compliance (such as healthcare or finance) or not, it's important to avoid including sensitive information in your logs.
Sensitive information includes social security numbers, addresses, passwords, credit card details, access tokens, and similar data types. Since log messages are often stored in plain text, such data will be exposed if the logs fall into the wrong hands. You also have to ensure you are not falling afoul of regulations (such as the GDPR) that apply to the countries where your product is operational by recording certain pieces of information.
You can avoid accidentally leaking sensitive data in your logs by minimizing which parts of the system work with that data. For instance, credit card details should only be seen by the billing component of your system, and sensitive data should be kept out of URLs — redacted where possible.
Although this isn't a foolproof solution, you can also use a blocklist to prevent specific fields from ever making it into the logs.
7. Log for Auditing and Profiling Reasons
We primarily use logs to diagnose issues and find the root cause of bugs. However, logs can also prove invaluable when auditing or profiling a system, or perhaps to generate interesting statistics about system behavior.
For example, you can log details of what users are doing on the system (like user sign-ins, the resources they created or accessed, etc.). In the absence of specialized tools, you can also use logging to profile your code by reporting how long an operation took or how many times a function was executed. The insights gleaned from this data can help you improve the performance of your application's hotspots.
Winston provides a simple profiling mechanism that you can take advantage of:
```javascript
logger.profile('test', { level: 'info' });

// Returns an object corresponding to a specific timing
const profiler = logger.startTimer();

setTimeout(() => {
  // End the timer and log the duration
  profiler.done({ message: 'Logging message' });
}, 1000);
```
A `durationMs` field will be included in the output:

```
{"message":"Logging message","level":"info","durationMs":1001,"timestamp":"2021-07-29T14:17:55.097Z"}
```
8. Automatically Log Uncaught Exceptions and Unhandled Promise Rejections
When you encounter an uncaught exception or unhandled promise rejection, it is always considered good practice to crash the program. Use a process manager like PM2 to automatically restart the process and restore the program to a clean state.
To understand why such an event has occurred, it's also necessary to log the details of the exception or promise rejection before exiting. Winston provides handlers for both situations, which may be configured on a `logger` instance:
```javascript
const logger = createLogger({
  transports: [
    new transports.File({ filename: 'file.log' }),
  ],
  exceptionHandlers: [
    new transports.File({ filename: 'exceptions.log' }),
  ],
  rejectionHandlers: [
    new transports.File({ filename: 'rejections.log' }),
  ],
});
```
In the example above, uncaught exceptions will be logged to an `exceptions.log` file, while unhandled rejections will be logged to a `rejections.log` file. Entries will automatically include the full stack trace, as well as information about process arguments and memory usage associated with the exception, giving you all the details you need to find the root of the problem.
Learn how to configure Winston's unhandled exception handler.
9. Centralize and Monitor Your Logs with Winston and AppSignal
Managing a Node.js application often involves handling vast amounts of logs generated by the system. As your application grows, these logs can become increasingly difficult to track and analyze, making it crucial to centralize them in one unified location. By consolidating logs from various sources, you can streamline the monitoring and troubleshooting process, providing better visibility into your application's performance and ensuring a more reliable user experience.
Centralizing logs not only simplifies management, but also offers several key benefits:
- Easier log analysis: Combining logs from multiple instances and sources allows for more efficient and comprehensive analysis, making it easier to identify patterns, trends, and potential issues.
- Faster troubleshooting: When issues arise, centralized logs make it easier to pinpoint the root cause, reducing the time spent on debugging and improving the speed at which you can resolve problems.
- Improved performance monitoring: With all logs in one place, you can easily monitor the overall health of your application, enabling you to identify performance bottlenecks and optimize your code accordingly.
- Alerting and notifications: You can set up alerts based on specific conditions, ensuring that you're immediately notified of critical issues.
- Regulatory compliance: For organizations subject to regulatory requirements, centralizing logs can help meet compliance standards by providing a clear audit trail and ensuring data integrity.
Winston can be configured to integrate with a log management platform like AppSignal to provide all the above benefits and more. All you need to do is set up the AppSignal transport for Winston as shown below:
```javascript
const { createLogger, transports } = require("winston");
const { WinstonTransport } = require("@appsignal/nodejs");

const logger = createLogger({
  transports: [new WinstonTransport({ group: "nodejs_app" })],
});

logger.info("Hello from Winston Logger!");
```
You'll need to install the AppSignal SDK for Node.js, create a log source via the AppSignal dashboard, and initialize the AppSignal client instance with the provided API key. Once you're all set up, your application logs will be streamed directly to AppSignal.
Learn more about AppSignal's logging integrations for Node.js.
Node.js Logging Tips: Next Steps
In this article, we've covered some invaluable practical tips to help you write more useful log entries. Next, you should add logging to your Node.js server and configure it while keeping the above best practices in mind.
If you have any additional tips regarding logging in Node.js that were not discussed in this post, feel free to share them with me on Twitter.
Thanks for reading, and happy coding!
P.S. If you liked this post, subscribe to our JavaScript Sorcery list for a monthly deep dive into more magical JavaScript tips and tricks.
P.P.S. If you need an APM for your Node.js app, go and check out the AppSignal APM for Node.js.
Ayo is a Software Developer by trade. He enjoys writing about diverse technologies in web development, mainly in Go and JavaScript/TypeScript. You can learn more about him through his blog.