This blog post is jointly written by Julien Dubois (Microsoft) and Charlie Klein (Logz.io) to help Spring Boot users better understand the benefits of using a log ingestion and analysis tool like Logz.io.
How logging works in a classical Spring Boot application
Spring Boot applications usually use Logback (official documentation) or Log4j 2 (official documentation) to manage application logs.
The default logging configuration is usually customized to each project or company’s usage, and it is common (and good) practice to follow the Twelve-Factor App chapter on logging and output everything to the console.
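Whichever backend is in use, application code normally logs through the SLF4J API, so the logging implementation can be swapped without code changes. Here is a minimal sketch of that pattern (the OrderService class and its method are made-up names for illustration):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Service;

@Service
public class OrderService {

    // SLF4J logger; with Spring Boot's default setup, Logback writes this to the console
    private static final Logger log = LoggerFactory.getLogger(OrderService.class);

    public void placeOrder(String orderId) {
        log.info("Placing order {}", orderId);
        try {
            // ... business logic ...
        } catch (RuntimeException e) {
            // Passing the exception as the last argument logs the stack trace
            log.error("Order {} failed", orderId, e);
        }
    }
}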
Advanced users, for example those using JHipster, typically run a customized version of Logback, which is usually configured in three different ways:
- Using the src/main/resources/logback-spring.xml file. For example, here is JHipster’s default configuration: https://github.com/jhipster/jhipster-sample-app/blob/master/src/main/resources/logback-spring.xml.
- Using Spring Boot’s configuration file, under the logging.* property keys. This is documented at https://docs.spring.io/spring-boot/docs/current/reference/html/appendix-application-properties.html and you can also find an example in JHipster’s default Spring Boot configuration file for development: https://github.com/jhipster/jhipster-sample-app/blob/master/src/main/resources/config/application-dev.yml.
- At runtime, as Logback can be programmatically reconfigured. For instance, this is what JHipster users do with their custom LoggingConfiguration.java file or when they use their log management screen (see the sketch after this list).
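For the third, programmatic option, here is a minimal sketch (not JHipster’s actual LoggingConfiguration.java; the logger name com.example is a placeholder): when Logback is the SLF4J backend, you can cast the SLF4J logger factory to Logback’s LoggerContext and change log levels at runtime.

import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.LoggerContext;
import org.slf4j.LoggerFactory;

public class LogLevelTuner {

    public static void setLevel(String loggerName, Level level) {
        // With Logback on the classpath, the SLF4J ILoggerFactory is Logback's LoggerContext
        LoggerContext context = (LoggerContext) LoggerFactory.getILoggerFactory();
        context.getLogger(loggerName).setLevel(level);
    }
}

// Usage, for example from a log management endpoint:
// LogLevelTuner.setLevel("com.example", Level.DEBUG);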
Configuring logs is great, but once they are sent to the console, the really interesting work begins: you need a system to aggregate and analyze them! In this article, we’ll use Logz.io to understand what needs to be configured and, most importantly, what benefits we can expect from such a solution.
Shipping logs to Logz.io
The first step for shipping your logs to Logz.io is adding the Logz.io dependency to your Maven pom.xml file:
<dependency>
    <groupId>io.logz.logback</groupId>
    <artifactId>logzio-logback-appender</artifactId>
    <version>1.0.22</version>
</dependency>
Then, you need to configure your src/main/resources/logback-spring.xml file to use this library:
<configuration>
    <!-- Use shutdownHook so that we can close gracefully and finish the log drain -->
    <shutdownHook class="ch.qos.logback.core.hook.DelayingShutdownHook"/>
    <appender name="LogzioLogbackAppender" class="io.logz.logback.LogzioLogbackAppender">
        <!-- Replace the placeholders with your account token and your region's listener host -->
        <token><your-logz-io-token></token>
        <logzioUrl>https://<your-logz-io-listener-host>:8071</logzioUrl>
        <!-- The type tells Logz.io how to parse the incoming logs -->
        <logzioType>java-application</logzioType>
        <!-- Only ship logs at INFO level and above -->
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>INFO</level>
        </filter>
    </appender>
    <root level="debug">
        <appender-ref ref="LogzioLogbackAppender"/>
    </root>
</configuration>
A good way to ensure your logs made it to the platform is to open up the “Live Tail” tab in the Logz.io platform. After pressing play, you should see a stream of logs from your application showing up. If you don’t, visit the Logz.io log shipping troubleshooting page.
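If you want an unmistakable line to look for in Live Tail, one simple approach (a hypothetical StartupLogConfig class, not part of the Logz.io setup itself) is to emit a recognizable message once the application starts:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.CommandLineRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class StartupLogConfig {

    private static final Logger log = LoggerFactory.getLogger(StartupLogConfig.class);

    @Bean
    CommandLineRunner logOnStartup() {
        // Runs once the application context has started; this INFO line passes the ThresholdFilter
        return args -> log.info("Logz.io appender smoke test: application started");
    }
}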
If you want more information, you should follow this more in-depth tutorial (which was co-written by Julien Dubois) at https://docs.microsoft.com/en-us/azure/java/java-get-started-with-logzio.
Analyzing logs with Logz.io
After shipping your logs to Logz.io, the fun part begins.
Logz.io delivers the ELK Stack and Grafana – the most popular open source monitoring solutions in the world – as a fully-managed service, so engineers can use the open source tools they know without the hassle of building or maintaining scalable logging or metrics pipelines themselves. Users can query and visualize their logs and metrics to monitor for production issues, security incidents, user activity, and many more use cases. In this section, we’ll focus on Logz.io’s logging capabilities.
Logz.io users interact with logs on a high-powered version of Kibana (the “K” in the “ELK Stack”), which offers engineers broad flexibility to slice and dice their log data. They benefit from Kibana’s complete functionality, plus additional analytical features that make Kibana faster, more integrated, and easier to use. The following tutorial will take place in the “Kibana” tab on the upper toolbar (Box A).
Upon opening Kibana, you can view all logs generated by your environment. According to our queries below (Box B), there were 1.7 million logs generated by “service-10” within the last 2 days.
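As a side note, the Kibana search bar accepts Lucene-style queries, so you can scope the view yourself. A hedged example (field names depend on how your logs are parsed; the type field below corresponds to the logzioType we configured earlier):

type:java-application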
Obviously, sorting through this list of logs to identify production issues or monitor user activity is impossible. Clicking on the “Patterns” tab (Box C) will cluster similar groups of logs together so you can quickly understand what kind of log data your service is generating without needing to scroll through the entire list of logs. For each pattern, you can see the time, count, ratio (percentage of logs that fall under the pattern), and the pattern itself.
Scrolling through the log patterns, you can quickly query for the most important log data by clicking the “filter out” or “filter in” buttons to the right of every pattern (Box D). In this case, as we scroll down we find a pattern indicating an “SSL Library error” – something that is clearly worth investigating.
After opening up the SSL error pattern, we can see it was flagged as a “Cognitive Insight”. Cognitive Insights leverages AI to cross-reference incoming logs against online discussion forums such as StackOverflow and GitHub to identify critical logged events in your environment.
This feature helps us find the needle in the haystack. Upon opening the Insight, we see contextual information around the event and links to the forums that discuss the SSL Error.
Teams can open these links to see how other engineers addressed the issue.
The last bit of information you may want to know about this SSL error is its origin. In other words, what is the actual code producing this issue? To get this information, you can go to the “Insights” tab in the upper right corner.
This graph helps us correlate issues with specific deployments. The colorful lines represent production issues and the grey dotted lines represent recent deployments. As you can see, the SSL error – represented by the blue dot – did not appear until after the “Updating puppet certificate” deployment at 3:20pm on Dec 4th. We can therefore conclude that this deployment led to the SSL error, so it can quickly be addressed.
With Cognitive Insights and the ability to plot issues on a deployment timeline, you can close the loop with engineers by showing them the root cause and origin of the problem.
See it for yourself!
In addition to the capabilities explored above, you can monitor metrics on Grafana, build your own Kibana visualizations, set alerts, and do a whole lot more.