Java Logging Best Practices

Logging is how your application tells you what happened when you weren’t watching. Good logs help you debug problems, track user behavior, monitor performance, and audit security events. Bad logs waste disk space and make debugging harder.

This guide covers modern Java logging with SLF4J and Logback, the combination used by most professional Java applications.

The Logging Stack

Java logging typically involves two components:

  • Logging facade (SLF4J): The API your code calls. Keeps your code independent of the logging implementation.
  • Logging implementation (Logback): The actual logging engine that writes to files, console, or other destinations.

This separation lets you switch implementations without changing your code. Spring Boot uses this stack by default.

Setup

Maven Dependencies

<!-- SLF4J API -->
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>2.0.9</version>
</dependency>

<!-- Logback implementation -->
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.4.14</version>
</dependency>

Spring Boot includes these automatically. Don’t add them if you’re using Spring Boot starters.

Basic Logging

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class UserService {
    
    // Create logger for this class
    private static final Logger logger = LoggerFactory.getLogger(UserService.class);
    
    public User createUser(String name, String email) {
        logger.info("Creating user with name: {}", name);
        
        User user = new User(name, email);
        user.setId(generateId());
        
        logger.debug("Generated user ID: {}", user.getId());
        
        return user;
    }
}

Key points:

  • Logger is static final – one per class, created once
  • Use {} placeholders instead of string concatenation
  • Pass the class to getLogger() for proper naming

Log Levels

SLF4J defines five log levels, from most to least severe:

  • ERROR: Something failed and needs attention (database connection failed, payment processing error)
  • WARN: Something unexpected but recoverable (retry attempt, deprecated API usage, slow query)
  • INFO: Normal operational events (user login, order placed, service started)
  • DEBUG: Detailed information for debugging (method entry/exit, variable values, query parameters)
  • TRACE: Very detailed, typically too noisy for normal use (every method call, detailed loop iterations)

Here is how the levels map to a typical service:

public class OrderService {
    private static final Logger logger = LoggerFactory.getLogger(OrderService.class);
    
    public Order processOrder(OrderRequest request) {
        logger.trace("Entering processOrder with request: {}", request);
        
        logger.debug("Validating order request");
        if (!isValid(request)) {
            logger.warn("Invalid order request received: {}", request.getId());
            throw new InvalidOrderException("Validation failed");
        }
        
        logger.info("Processing order {} for customer {}", 
            request.getId(), request.getCustomerId());
        
        try {
            Order order = createOrder(request);
            logger.info("Order {} created successfully", order.getId());
            return order;
        } catch (PaymentException e) {
            logger.error("Payment failed for order {}: {}", 
                request.getId(), e.getMessage(), e);
            throw e;
        }
    }
}

Logback Configuration

Create src/main/resources/logback.xml:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    
    <!-- Console appender -->
    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>
    
    <!-- File appender -->
    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>logs/application.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>logs/application.%d{yyyy-MM-dd}.log</fileNamePattern>
            <maxHistory>30</maxHistory>
        </rollingPolicy>
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>
    
    <!-- Set log levels by package -->
    <logger name="com.example.myapp" level="DEBUG"/>
    <logger name="org.springframework" level="INFO"/>
    <logger name="org.hibernate" level="WARN"/>
    
    <!-- Root logger -->
    <root level="INFO">
        <appender-ref ref="CONSOLE"/>
        <appender-ref ref="FILE"/>
    </root>
    
</configuration>

Pattern Explained

%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n

%d{...}      - Timestamp with format
[%thread]    - Thread name
%-5level     - Log level, left-justified in a 5-character field
%logger{36}  - Logger name, abbreviated to fit roughly 36 characters (package names are shortened)
%msg         - The log message
%n           - Newline

Output example:

2026-01-02 14:30:45.123 [main] INFO  c.e.myapp.UserService - Creating user with name: Alice

Spring Boot Configuration

In application.properties:

# Log levels
logging.level.root=INFO
logging.level.com.example.myapp=DEBUG
logging.level.org.springframework=INFO
logging.level.org.hibernate.SQL=DEBUG

# File output
logging.file.name=logs/application.log
logging.logback.rollingpolicy.max-file-size=10MB
logging.logback.rollingpolicy.max-history=30

# Pattern
logging.pattern.console=%d{HH:mm:ss.SSS} %-5level %logger{36} - %msg%n

Best Practices

Use Parameterized Logging

// Bad - string concatenation always executes
logger.debug("Processing user: " + user.getName() + " with id: " + user.getId());

// Good - parameters only evaluated if level is enabled
logger.debug("Processing user: {} with id: {}", user.getName(), user.getId());

The parameterized form avoids string concatenation when debug logging is disabled, which is typical in production.

Check Level for Expensive Operations

// If building the message is expensive, check first
if (logger.isDebugEnabled()) {
    logger.debug("Full object state: {}", expensiveToString(object));
}
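
Since this guide uses SLF4J 2.x, the fluent API is an alternative to the explicit level check: arguments passed as lambdas are only evaluated if the level is enabled. A sketch, reusing the hypothetical expensiveToString helper from above:

// The Supplier runs only when DEBUG is enabled
logger.atDebug()
      .addArgument(() -> expensiveToString(object))
      .log("Full object state: {}");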

Log Exceptions Properly

// Bad - loses stack trace
try {
    processData();
} catch (Exception e) {
    logger.error("Processing failed: " + e.getMessage());
}

// Good - includes full stack trace
try {
    processData();
} catch (Exception e) {
    logger.error("Processing failed", e);
}

// Good - with context
try {
    processData();
} catch (Exception e) {
    logger.error("Processing failed for user {}: {}", userId, e.getMessage(), e);
}

Pass the exception as the last argument to include the stack trace.

Include Context

// Bad - no context
logger.info("Order created");
logger.error("Payment failed");

// Good - includes identifiers
logger.info("Order {} created for customer {}", orderId, customerId);
logger.error("Payment failed for order {} with amount {}: {}", 
    orderId, amount, errorMessage);

When you’re searching logs at 3 AM, context is everything.

Use MDC for Request Context

MDC (Mapped Diagnostic Context) adds context to all log messages in a thread:

import java.io.IOException;

import jakarta.servlet.Filter;
import jakarta.servlet.FilterChain;
import jakarta.servlet.ServletException;
import jakarta.servlet.ServletRequest;
import jakarta.servlet.ServletResponse;

import org.slf4j.MDC;

public class RequestFilter implements Filter {
    
    @Override
    public void doFilter(ServletRequest request, ServletResponse response, 
                         FilterChain chain) throws IOException, ServletException {
        try {
            MDC.put("requestId", generateRequestId());
            MDC.put("userId", getCurrentUserId());
            
            chain.doFilter(request, response);
        } finally {
            // Clear the MDC so pooled threads don't carry this request's context into later requests
            MDC.clear();
        }
    }
}

Update logback.xml pattern to include MDC values:

<pattern>%d{HH:mm:ss.SSS} [%thread] [%X{requestId}] [%X{userId}] %-5level %logger{36} - %msg%n</pattern>

Now every log message includes requestId and userId automatically.
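
With hypothetical values, a request-scoped log line then looks like this:

14:32:10.457 [http-nio-8080-exec-3] [req-7f3a2c41] [user-42] INFO  c.e.myapp.OrderService - Order 12345 created for customer 67890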

Choose the Right Level

public class PaymentService {
    private static final Logger logger = LoggerFactory.getLogger(PaymentService.class);
    
    public PaymentResult processPayment(Payment payment) {
        // INFO - business events worth knowing about
        logger.info("Processing payment {} for amount {}", 
            payment.getId(), payment.getAmount());
        
        // DEBUG - useful for debugging but too noisy for production
        logger.debug("Payment details: {}", payment);
        
        try {
            PaymentResult result = gateway.process(payment);
            
            if (result.isSuccess()) {
                logger.info("Payment {} succeeded", payment.getId());
            } else {
                // WARN - something unexpected but handled
                logger.warn("Payment {} declined: {}", 
                    payment.getId(), result.getDeclineReason());
            }
            
            return result;
            
        } catch (GatewayException e) {
            // ERROR - something failed
            logger.error("Payment {} failed due to gateway error", payment.getId(), e);
            throw new PaymentException("Gateway error", e);
        }
    }
}

What to Log

Do Log

  • Application startup and shutdown
  • Configuration values at startup (sanitized)
  • Significant business events (orders, payments, user actions)
  • Authentication events (login, logout, failures)
  • Errors and exceptions with context
  • Performance metrics (slow queries, timeouts)
  • External service calls and responses (see the sketch below)
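
As a minimal sketch of the last two points, here is an illustrative client that logs an external call with its duration and flags slow responses. InventoryClient, Stock, and callRemoteService are hypothetical names, not part of any library:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class InventoryClient {

    private static final Logger logger = LoggerFactory.getLogger(InventoryClient.class);
    private static final long SLOW_CALL_THRESHOLD_MS = 500;

    public Stock fetchStock(String sku) {
        long start = System.currentTimeMillis();
        Stock stock = callRemoteService(sku);  // hypothetical remote call
        long elapsedMs = System.currentTimeMillis() - start;

        // External service call and response time
        logger.info("Fetched stock for sku {} in {} ms", sku, elapsedMs);

        // Performance signal: flag unusually slow calls
        if (elapsedMs > SLOW_CALL_THRESHOLD_MS) {
            logger.warn("Slow inventory-service call for sku {}: {} ms", sku, elapsedMs);
        }

        return stock;
    }
}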

Don’t Log

  • Passwords, API keys, tokens
  • Credit card numbers, SSNs, personal data
  • Full request/response bodies in production
  • Every loop iteration or trivial operation
  • Sensitive business data without masking

// Bad - logs sensitive data
logger.info("User login: username={}, password={}", username, password);
logger.debug("Credit card: {}", creditCardNumber);

// Good - sanitize sensitive data
logger.info("User login: username={}", username);
logger.debug("Credit card: ****{}", lastFourDigits(creditCardNumber));
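
The lastFourDigits helper above is not a library method; here is a minimal sketch, assuming the card number arrives as a plain digit string:

// Hypothetical masking helper: keep only the last four digits
private static String lastFourDigits(String cardNumber) {
    if (cardNumber == null || cardNumber.length() < 4) {
        return "****";
    }
    return cardNumber.substring(cardNumber.length() - 4);
}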

Production Logging Patterns

Structured Logging (JSON)

JSON logs are easier to parse and search in log aggregation systems:

<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>7.4</version>
</dependency>

Then configure an appender that uses the LogstashEncoder in logback.xml:

<appender name="JSON_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>logs/application.json</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
        <fileNamePattern>logs/application.%d{yyyy-MM-dd}.json</fileNamePattern>
        <maxHistory>30</maxHistory>
    </rollingPolicy>
    <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
</appender>

Output:

{"@timestamp":"2026-01-02T14:30:45.123Z","level":"INFO","logger_name":"com.example.OrderService","message":"Order 12345 created","orderId":"12345","customerId":"67890"}
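
Note that the orderId and customerId fields shown above do not come from a plain parameterized message; they must be supplied separately, for example through MDC or through structured arguments. A sketch using StructuredArguments from logstash-logback-encoder (OrderEvents is a hypothetical class):

import static net.logstash.logback.argument.StructuredArguments.kv;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class OrderEvents {

    private static final Logger logger = LoggerFactory.getLogger(OrderEvents.class);

    public void orderCreated(String orderId, String customerId) {
        // kv() renders "orderId=..." in the text message and also adds
        // orderId and customerId as top-level fields in the JSON output
        logger.info("Order created: {} {}", kv("orderId", orderId), kv("customerId", customerId));
    }
}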

Log Rotation

Configure rotation to prevent disk space issues:

<appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>logs/application.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
        <fileNamePattern>logs/application.%d{yyyy-MM-dd}.%i.log</fileNamePattern>
        <maxFileSize>100MB</maxFileSize>
        <maxHistory>30</maxHistory>
        <totalSizeCap>3GB</totalSizeCap>
    </rollingPolicy>
    <encoder>
        <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
</appender>

Async Logging

Async appenders hand log events to an in-memory queue and write them on a background thread, so slow disk or network I/O does not block your application threads. Setting discardingThreshold to 0 keeps every event even when the queue is nearly full; the trade-off is that application threads can block if the queue fills completely:

<appender name="ASYNC_FILE" class="ch.qos.logback.classic.AsyncAppender">
    <appender-ref ref="FILE"/>
    <queueSize>512</queueSize>
    <discardingThreshold>0</discardingThreshold>
</appender>

<root level="INFO">
    <appender-ref ref="ASYNC_FILE"/>
</root>

Common Mistakes

Logging Too Much

// Don't log in tight loops
for (int i = 0; i < 1000000; i++) {
    logger.debug("Processing item {}", i);  // Bad
    process(items[i]);
}

// Log summary instead
logger.info("Processing {} items", items.length);
int processed = processAll(items);
logger.info("Processed {} items successfully", processed);

Ignoring Performance Impact

// This is expensive even when debug is disabled
logger.debug("Request: " + request.toString() + ", Response: " + response.toString());

// This only evaluates if debug is enabled
logger.debug("Request: {}, Response: {}", request, response);

Not Logging Enough Context

// Useless in production
logger.error("Error occurred");

// Useful
logger.error("Failed to process order {} for customer {}: {}", 
    orderId, customerId, e.getMessage(), e);

Summary

Good logging is an investment that pays off during debugging and production support. Use SLF4J with Logback for flexibility. Choose log levels thoughtfully. Include context in every message. Protect sensitive data. Configure rotation and retention for production.

When something goes wrong at 3 AM, your logs are the first place you’ll look. Make sure they tell you what happened.


Prerequisites: Java Exception Handling | Introduction to Maven

Related: How to Debug Java Code | Introduction to Spring Boot
