
SLF4J and Logback Configuration


Maven Dependencies:
For SLF4J, add the following dependency to your pom. SLF4J is a facade, an API that must be backed by a concrete logging framework; here Logback will be used as that framework.
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>${slf4j.version}</version>
    <scope>compile</scope>
</dependency>
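
With the API on the classpath, application code depends only on SLF4J types; the binding to Logback happens at runtime. A minimal usage sketch (class name hypothetical):

 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;

 public class MyService { // hypothetical example class
     // Obtained through the SLF4J facade; Logback serves the calls at runtime.
     private static final Logger logger = LoggerFactory.getLogger(MyService.class);

     public void start() {
         logger.info("MyService started");
     }
 }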

Add the following dependency for Logback.
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>${logback.version}</version>
    <scope>runtime</scope>
</dependency>

Finally, if there are libraries or legacy code that use log4j as their logging system, add the following dependency. This library forwards log4j logging requests to SLF4J.
<dependency>
   <groupId>org.slf4j</groupId>
   <artifactId>log4j-over-slf4j</artifactId>
   <version>${slf4j.version}</version>
</dependency>
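
After adding the bridge, legacy classes keep compiling against the log4j API while their output ends up in Logback. A minimal sketch (class name hypothetical); note that the original log4j jar should be excluded from the classpath so that the bridge's org.apache.log4j classes are the ones loaded:

 import org.apache.log4j.Logger; // provided by log4j-over-slf4j, not by log4j itself

 public class LegacyReportJob { // hypothetical legacy class
     private static final Logger log = Logger.getLogger(LegacyReportJob.class);

     public void run() {
         // Routed through SLF4J to Logback; log4j configuration files are ignored.
         log.info("Legacy job finished");
     }
 }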

Logging Configuration File:
Put a logback.xml file on the classpath.
<root level="DEBUG">
   <appender-ref ref="FILE"/>
   <appender-ref ref="STDOUT"/>
</root>

Add the root logger definition and assign a logging level as appropriate. Then define the referenced appenders, which are the logging destinations.
    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>application.log</file>

        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <!-- daily rollover -->
            <fileNamePattern>application.%d{yyyy-MM-dd}.log</fileNamePattern>

            <!-- keep 30 days' worth of history -->
            <maxHistory>30</maxHistory>
        </rollingPolicy>

        <encoder>
            <pattern>%date %level [%thread] %logger{10} [%file:%line] %msg%n</pattern>
        </encoder>
    </appender>
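
With this policy the active file is always application.log; at each daily rollover its contents are moved to a dated file matching the fileNamePattern, for example (dates illustrative):

 application.2024-05-13.log
 application.2024-05-14.log
 application.log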

The encoder pattern defines what each log statement will look like on output. Now add a console appender for easier debugging if you are using an IDE with a console view.
   <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <!-- encoders are assigned the type
             ch.qos.logback.classic.encoder.PatternLayoutEncoder by default -->
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>
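
For instance, a debug statement issued from a hypothetical class com.opensource.tr.myapplication.MyService would appear on the console roughly as:

 10:23:45.123 [main] DEBUG c.o.t.myapplication.MyService - Obtained JDBC connection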

The configuration needed to collect logs in the application.log file and on the console is now ready. Let's add one more logger and appender for easier analysis of the running system.
    <logger name="com.opensource.tr.myapplication"
          additivity="false">
        <appender-ref ref="FILE-REDUCED"/>
    </logger>

<appender name="FILE-REDUCED" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>application-reduced.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <!-- daily rollover -->
            <fileNamePattern>application-reduced.%d{yyyy-MM-dd}.log</fileNamePattern>

            <!-- keep 30 days' worth of history -->
            <maxHistory>30</maxHistory>
        </rollingPolicy>

        <encoder>
            <pattern>%date %level [%thread] %logger{10} [%file:%line] %msg%n</pattern>
        </encoder>
    </appender>

Additivity: Note the additivity attribute. Setting it to false means the produced logs are not passed on to loggers higher in the hierarchy. In other words, logs produced from the “com.opensource.tr.myapplication” package and below won't appear in the application.log file; they will only be written to the application-reduced.log file.
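
As an illustration (class name hypothetical), a statement issued from a class inside that package lands only in the reduced log:

 package com.opensource.tr.myapplication;

 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;

 public class OrderService { // hypothetical class inside the non-additive package
     private static final Logger logger = LoggerFactory.getLogger(OrderService.class);

     public void placeOrder() {
         // Written to FILE-REDUCED only; additivity="false" stops propagation
         // to the root logger's FILE and STDOUT appenders.
         logger.debug("Order placed");
     }
 }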

Effective Level: Note that no logging level is specified on this logger. It is inherited from the nearest level set higher up in the hierarchy; in this case that is the root logger, whose level is DEBUG. As a result, the logger's effective level is DEBUG.

If it is desirable to change the logging configuration without restarting the application, add the following definition to the configuration root so that Logback regularly scans the configuration file for changes.
<configuration scan="true" scanPeriod="30 seconds">

This line means the configuration file will be scanned every 30 seconds, and if a change is detected the logging system is reconfigured. Remember that when no unit is given the period defaults to milliseconds, so a bare scanPeriod="30000" would also mean 30 seconds.


To expose the logging configuration over JMX, adding the line below to the configuration file is enough. The logger context then becomes visible in tools such as JConsole, where logger levels can be inspected and changed at runtime:

<jmxConfigurator />



Logging Tips:
- Logging levels, from lowest to highest, are TRACE, DEBUG, INFO, WARN and ERROR (see the sketch after this list):
  o ERROR is good for reporting application exceptions that substantially affect application flow.
  o WARN is good for application exceptions that are not critical to application flow.
  o INFO is good for reporting important points in the application's execution, like successfully obtaining a JDBC connection or reading a configuration file.
  o DEBUG is good for recording information that helps later debugging in case of system errors.
  o TRACE is not a commonly used logging level.
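
A sketch of how these levels are typically used in code (class name and messages are hypothetical):

 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;

 public class ConnectionManager { // hypothetical example class
     private static final Logger logger = LoggerFactory.getLogger(ConnectionManager.class);

     public void connect(String url) {
         logger.debug("Trying JDBC connection to {}", url);       // details for later debugging
         try {
             // ... obtain the connection ...
             logger.info("Obtained JDBC connection to {}", url);  // important execution point
         } catch (RuntimeException e) {
             logger.error("Could not obtain JDBC connection", e); // flow-breaking failure
         }
     }
 }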
- Instead of so-called 'guarded logging', use the SLF4J parameterized logging approach.
Instead of:

 if (logger.isDebugEnabled()) {
     logger.debug("Entry number: " + i + " is " + String.valueOf(entry[i]));
 }

Use:

 logger.debug("Entry number: {} is {}.", i, entry[i]);

This way the cost of constructing the log message for statements whose level is disabled is avoided, and the code doesn't look messy.
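
A related detail: when the last argument of a logging call is a Throwable, SLF4J prints it with its full stack trace instead of treating it as a pattern parameter:

 try {
     process(entry); // hypothetical method
 } catch (Exception e) {
     // {} is filled from i; e is rendered with its stack trace.
     logger.error("Processing entry {} failed", i, e);
 }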


Complete logback.xml file:


<configuration scan="true" scanPeriod="30 seconds">
    <contextName>MyApplication</contextName>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <!-- encoders are assigned the type
             ch.qos.logback.classic.encoder.PatternLayoutEncoder by default -->
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>
    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>application.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <!-- daily rollover -->
            <fileNamePattern>application.%d{yyyy-MM-dd}.log</fileNamePattern>
            <!-- keep 30 days' worth of history -->
            <maxHistory>30</maxHistory>
        </rollingPolicy>
        <encoder>
            <pattern>%date %level [%thread] %logger{10} [%file:%line] %msg%n</pattern>
        </encoder>
    </appender>
    <appender name="FILE-REDUCED" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>application-reduced.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <!-- daily rollover -->
            <fileNamePattern>application-reduced.%d{yyyy-MM-dd}.log</fileNamePattern>
            <!-- keep 30 days' worth of history -->
            <maxHistory>30</maxHistory>
        </rollingPolicy>
        <encoder>
            <pattern>%date %level [%thread] %logger{10} [%file:%line] %msg%n</pattern>
        </encoder>
    </appender>
    <root level="DEBUG">
        <appender-ref ref="FILE"/>
        <appender-ref ref="STDOUT"/>
    </root>
    <logger name="com.opensource.tr.myapplication" level="DEBUG" additivity="false">
        <appender-ref ref="FILE-REDUCED"/>
    </logger>
    
</configuration>


Introduction Use of volatile variables is common among Java developers as a way of implicit synchronization. JIT compilers may reorder program execution to increase performance. Java memory model[1] constraints reordering of volatile variables. Thus volatile variable access should has a cost which is different than a non-volatile variable access. This article will not discuss technical details on use of volatile variables. Performance impact of volatile variables is explored by using a test application. Objective Exploring volatile variable costs and comparing with alternative approaches. Audience This article is written for developers who seek to have a view about cost of volatile variables. Test Configuration Test application runs read and write actions on java variables. A non volatile primitive integer, a volatile primitive integer and an AtomicInteger is tested. Non-volatile primitive integer access is controlled with ReentrantLock and ReentrantReadWriteLock  to compa