Spring Reactive Logging using Zalando Logbook and Elastic Stack

Hello everyone! This post will show how to centralize Spring WebFlux request/response logging using Zalando Logbook and the Elastic Stack.

· Prerequisites
· Overview
∘ What is Elastic Stack?
∘ Why use Elastic Stack?
∘ What is Zalando Logbook?
· Coding
∘ Project Setup
∘ Setup of Logbook
∘ Testing
∘ Logback integration
∘ Setting up the Elastic Stack
∘ Logstash pipeline config
∘ Testing
· Conclusion
· References


Prerequisites

This is the list of all the prerequisites:

  • Spring Boot 3+ with Spring WebFlux
  • Maven 3+
  • Java 17+
  • Your favorite IDE (IntelliJ IDEA, Eclipse, NetBeans, VS Code)
  • Docker and Docker Compose installed
  • Postman or curl for calling the endpoints

Overview

What is Elastic Stack?

The Elastic Stack is a group of Open Source products from Elastic designed to help users take data from any type of source and in any format, and search, analyze, and visualize that data in real time. The product group was formerly known as the ELK Stack for the core products in the group — Elasticsearch, Logstash, and Kibana — but has been rebranded as the Elastic Stack.

  • Elasticsearch is at the core of the stack. It’s a distributed, RESTful search and analytics engine, scalable data store, and vector database capable of addressing many use cases. It centrally stores your data for lightning-fast search, fine‑tuned relevancy, and powerful analytics that scale with ease.
  • Logstash is an open-source data ingestion tool that allows you to collect data from various sources, transform it, and distribute the data. Logstash can handle different types of data, such as logs, metrics, and other event data.
  • Kibana is a web-based data visualization and exploration tool on top of Elasticsearch. It offers powerful and easy-to-use features such as histograms, line graphs, pie charts, heat maps, and built-in geospatial support. Kibana is also the home for the Elastic Enterprise Search, Elastic Observability, and Elastic Security solutions.

Why use Elastic Stack?

  • It’s open-source and free to use.
  • Scalability: It deploys at scale and works across all types of infrastructures, including SaaS, containers or bare metal, private cloud, and public cloud.
  • It offers centralized logging capabilities to aggregate server logs from complex cloud environments into a single searchable index.
  • Elastic Stack helps monitor and analyze security events in real-time.

What is Zalando Logbook?

Logbook is an extensible Java library to enable complete request and response logging for different client- and server-side technologies. It satisfies a special need by a) allowing web application developers to log any HTTP traffic that an application receives or sends and b) in a way that makes it easy to persist and analyze it later. This can be useful for traditional log analysis, meeting audit requirements, or investigating individual historic traffic issues.

Logbook is ready to use out of the box for most common setups. Even for uncommon applications and technologies, it should be simple to implement the necessary interfaces to connect a library/framework/etc. to it.

— https://github.com/zalando/logbook

Coding

Project Setup

We’ll create a simple Spring Boot project from start.spring.io, with the following dependencies: Spring Reactive Web, Lombok, Spring Boot Actuator, and Validation.
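
For reference, the dependencies section of the generated pom.xml then contains roughly the following (a sketch using the standard Spring Boot starters; the exact versions are managed by the Spring Boot parent):

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-webflux</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-actuator</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-validation</artifactId>
    </dependency>
    <dependency>
        <groupId>org.projectlombok</groupId>
        <artifactId>lombok</artifactId>
        <optional>true</optional>
    </dependency>
</dependencies>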

Setup of Logbook

The first thing to do is add the Zalando Logbook dependency to our pom.xml:

<!-- https://mvnrepository.com/artifact/org.zalando/logbook-spring-boot-webflux-autoconfigure -->
<dependency>
    <groupId>org.zalando</groupId>
    <artifactId>logbook-spring-boot-webflux-autoconfigure</artifactId>
    <version>3.9.0</version>
</dependency>

Next, we define a LogbookConfiguration class. Logbook ships auto-configuration for Spring WebFlux projects, covering both the server side (a WebFilter that logs incoming requests and outgoing responses) and the client side (an ExchangeFilterFunction for WebClient).

@Configuration
@Import(LogbookWebFluxAutoConfiguration.class) // import the Logbook WebFlux auto-configuration
public class LogbookConfiguration {

    @Bean // register the Logbook WebFilter to trace incoming requests and outgoing responses
    public WebFilter logbookFilter(Logbook logbook) {
        return new LogbookWebFilter(logbook);
    }

    @Bean
    public ExchangeFilterFunction logbookClientExchangeFunction(final Logbook logbook) {
        return new LogbookExchangeFilterFunction(logbook);
    }

    @Bean // enable Logbook logging for every WebClient request/response
    public WebClient webClient(final ExchangeFilterFunction logbookClientExchangeFunction) {
        return WebClient.builder()
                .filter(logbookClientExchangeFunction)
                .build();
    }
}

Finally, we set the Logbook logger to the TRACE level in application.yml so that requests and responses are actually written to the log:

logging:
  level:
    org.zalando.logbook: TRACE
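
If you prefer the application.properties format, the equivalent single line is:

logging.level.org.zalando.logbook=TRACE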

Testing

Let’s try our configuration with the following controller code:

@Slf4j
@RestController
@RequiredArgsConstructor
@RequestMapping("/v1/trace")
public class TraceController {

    public static final String JSON_PLACEHOLDER_BASE_URL = "https://jsonplaceholder.typicode.com";

    private final WebClient webClient;

    @GetMapping(value = "/log", produces = MediaType.APPLICATION_JSON_VALUE)
    public Mono<ApiResponse> getLogging() {
        // Calling the external API through the Logbook-instrumented WebClient
        var postResponse = createPost();

        return postResponse.flatMap(pt -> Mono.just(new ApiResponse(pt, "response")));
    }

    private Mono<Post> createPost() {
        var post = new Post("foo", "bar", 1);

        return webClient.post()
                .uri(MessageFormat.format("{0}/posts", JSON_PLACEHOLDER_BASE_URL))
                .header(HttpHeaders.CONTENT_TYPE, MediaType.APPLICATION_JSON_UTF8_VALUE)
                .body(Mono.just(post), Post.class)
                .retrieve()
                .bodyToMono(Post.class);
    }
}
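
The Post and ApiResponse types referenced above are plain data carriers that the post does not show. Based on the request and response bodies in the logs further down, a minimal sketch using Java records could look like this (field names are inferred, not taken from the original source):

// Hypothetical sketch: shapes inferred from the JSON bodies shown in the logs.
public record Post(String title, String body, Integer userId) {
}

public record ApiResponse(Post data, String message) {
}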

Calling GET /v1/trace/log (here with Postman) returns the expected response, and in the console we can see the Logbook request/response entries like the following:

TRACE 12710 --- [logging-spring-webflux] [or-http-epoll-3] org.zalando.logbook.Logbook              : {"origin":"remote","type":"request","correlation":"f21dadc406c11156","protocol":"HTTP/1.1","remote":"/[0:0:0:0:0:0:0:1]:46364","method":"GET","uri":"http://localhost:8080/v1/trace/log","host":"localhost","path":"/v1/trace/log","scheme":"http","port":"8080","headers":{"Accept":["*/*"],"Accept-Encoding":["gzip, deflate, br"],"Connection":["keep-alive"],"Host":["localhost:8080"],"Postman-Token":["32f49d7d-7645-4899-8753-9a975d45b423"],"User-Agent":["PostmanRuntime/7.41.1"]}}
TRACE 12710 --- [logging-spring-webflux] [or-http-epoll-3] org.zalando.logbook.Logbook : {"origin":"remote","type":"request","correlation":"e87ba2b89603774a","protocol":"HTTP/1.1","remote":"/[0:0:0:0:0:0:0:1]:46364","method":"GET","uri":"http://localhost:8080/v1/trace/log","host":"localhost","path":"/v1/trace/log","scheme":"http","port":"8080","headers":{"Accept":["*/*"],"Accept-Encoding":["gzip, deflate, br"],"Connection":["keep-alive"],"Host":["localhost:8080"],"Postman-Token":["32f49d7d-7645-4899-8753-9a975d45b423"],"User-Agent":["PostmanRuntime/7.41.1"]}}
TRACE 12710 --- [logging-spring-webflux] [or-http-epoll-3] org.zalando.logbook.Logbook : {"origin":"local","type":"request","correlation":"df7b6ef5aab683fc","protocol":"HTTP/1.1","remote":"localhost","method":"POST","uri":"https://jsonplaceholder.typicode.com/posts","host":"jsonplaceholder.typicode.com","path":"/posts","scheme":"https","port":null,"headers":{"Content-Type":["application/json;charset=UTF-8"]},"body":{"title":"foo","body":"bar","userId":1}}
TRACE 12710 --- [logging-spring-webflux] [or-http-epoll-3] org.zalando.logbook.Logbook : {"origin":"remote","type":"response","correlation":"df7b6ef5aab683fc","duration":1672,"protocol":"HTTP/1.1","status":201,"headers":{"Access-Control-Allow-Credentials":["true"],"Access-Control-Expose-Headers":["Location"],"alt-svc":["h3=\":443\"; ma=86400"],"Cache-Control":["no-cache"],"CF-Cache-Status":["DYNAMIC"],"CF-RAY":["8b8ae2e19ee8215c-MAD"],"Connection":["keep-alive"],"Content-Length":["65"],"Content-Type":["application/json; charset=utf-8"],"Date":["Sun, 25 Aug 2024 10:26:51 GMT"],"Etag":["W/\"41-GDNaWfnVU6RZhpLbye0veBaqcHA\""],"Expires":["-1"],"Location":["https://jsonplaceholder.typicode.com/posts/101"],"Nel":["{\"report_to\":\"heroku-nel\",\"max_age\":3600,\"success_fraction\":0.005,\"failure_fraction\":0.05,\"response_headers\":[\"Via\"]}"],"Pragma":["no-cache"],"Report-To":["{\"group\":\"heroku-nel\",\"max_age\":3600,\"endpoints\":[{\"url\":\"https://nel.heroku.com/reports?ts=1724581611&sid=e11707d5-02a7-43ef-b45e-2cf4d2036f7d&s=FS6%2FM6Yxhm7kWXgAD%2FfAxbpSkrlt3JxXkcHZH1anpM4%3D\"}]}"],"Reporting-Endpoints":["heroku-nel=https://nel.heroku.com/reports?ts=1724581611&sid=e11707d5-02a7-43ef-b45e-2cf4d2036f7d&s=FS6%2FM6Yxhm7kWXgAD%2FfAxbpSkrlt3JxXkcHZH1anpM4%3D"],"Server":["cloudflare"],"Vary":["Origin, X-HTTP-Method-Override, Accept-Encoding"],"Via":["1.1 vegur"],"X-Content-Type-Options":["nosniff"],"X-Powered-By":["Express"],"X-Ratelimit-Limit":["1000"],"X-Ratelimit-Remaining":["999"],"X-Ratelimit-Reset":["1724581655"]},"body":{"title":"foo","body":"bar","userId":1,"id":101}}
TRACE 12710 --- [logging-spring-webflux] [or-http-epoll-3] org.zalando.logbook.Logbook : {"origin":"local","type":"response","correlation":"f21dadc406c11156","duration":6075,"protocol":"HTTP/1.1","status":200,"headers":{"Content-Length":["69"],"Content-Type":["application/json"],"transfer-encoding":["chunked"]},"body":{"data":{"title":"foo","body":"bar","userId":1},"message":"response"}}
TRACE 12710 --- [logging-spring-webflux] [or-http-epoll-3] org.zalando.logbook.Logbook : {"origin":"local","type":"response","correlation":"e87ba2b89603774a","duration":6203,"protocol":"HTTP/1.1","status":200,"headers":{"Content-Length":["69"],"Content-Type":["application/json"]},"body":{"data":{"title":"foo","body":"bar","userId":1},"message":"response"}}

Logback integration

Logback is an open-source Java-based logging framework that acts as a useful tool for dealing with application logging. It is intended as a successor to the popular log4j project. It was designed by Ceki Gülcü, log4j’s founder.

Logbook provides a module that bundles the Logstash Logback Encoder. We add the logbook-logstash dependency to the POM file:

<dependency>
    <groupId>org.zalando</groupId>
    <artifactId>logbook-logstash</artifactId>
    <version>3.9.0</version>
</dependency>

Next, we add the logback-spring.xml file under the resources folder with this content:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <include resource="org/springframework/boot/logging/logback/defaults.xml"/>
    <include resource="org/springframework/boot/logging/logback/console-appender.xml"/>

    <springProperty scope="context" name="application_name" source="spring.application.name"/>

    <logger name="com.bootlabs.logging" level="DEBUG"/>
    <logger name="org.springframework" level="ERROR"/>
    <logger name="org.springframework.web.reactive" level="ERROR"/>
    <!-- Adding Logbook in logs -->
    <logger name="org.zalando.logbook" level="TRACE"/>

    <variable name="LOG_LOCATION" value="./logs"/>

    <appender name="file-appender" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${LOG_LOCATION}/${application_name}.log</file>
        <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
            <!-- daily rollover -->
            <fileNamePattern>${LOG_LOCATION}/archived/${application_name}.%d{yyyy-MM-dd}.%i.log</fileNamePattern>
            <!-- each file should be at most 10MB, keep 60 days worth of history, but at most 1GB -->
            <maxHistory>60</maxHistory>
            <maxFileSize>10MB</maxFileSize>
            <totalSizeCap>1GB</totalSizeCap>
        </rollingPolicy>
    </appender>

    <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
        <layout class="ch.qos.logback.classic.PatternLayout">
            <Pattern>
                %black(%d{ISO8601}) %highlight(%-5level) [%blue(%t)] %yellow(%C): %msg%n%throwable
            </Pattern>
        </layout>
    </appender>

    <root level="info">
        <appender-ref ref="file-appender"/>
        <appender-ref ref="console"/>
    </root>
</configuration>

Now Spring Boot logs to the console and to a file named ${application_name}.log, rolling the file over into the archived folder once the configured size or date condition is met.

By default, the logs are formatted like this:

{
  "@timestamp": "2024-08-25T21:37:34.612883231+02:00",
  "@version": "1",
  "message": "{\"origin\":\"local\",\"type\":\"response\",\"correlation\":\"ae5c2f60066f2bfc\",\"duration\":114,\"protocol\":\"HTTP/1.1\",\"status\":200,\"headers\":{\"Content-Length\":[\"69\"],\"Content-Type\":[\"application/json\"]},\"body\":{\"data\":{\"title\":\"foo\",\"body\":\"bar\",\"userId\":1},\"message\":\"response\"}}",
  "logger_name": "org.zalando.logbook.Logbook",
  "thread_name": "reactor-http-epoll-2",
  "level": "TRACE",
  "level_value": 5000,
  "application_name": "logging-spring-webflux"
}

The Logbook JSON payload ends up as an escaped string inside the message attribute, which is hard to query. We need to declare a Logbook Sink bean that serializes the values as structured JSON using an ObjectMapper.

@Bean
public Sink sink(ObjectMapper objectMapper) {
    HttpLogFormatter formatter = new JsonHttpLogFormatter(objectMapper);
    return new LogstashLogbackSink(formatter);
}
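
This bean can live in the LogbookConfiguration class defined earlier; JsonHttpLogFormatter comes from the logbook-json module and LogstashLogbackSink from the logbook-logstash dependency we just added.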

Let’s try again. The Logbook JSON is now in a dedicated http field:

{
  "@timestamp": "2024-08-25T22:03:26.217003503+02:00",
  "@version": "1",
  "message": "200 OK GET http://localhost:8080/v1/trace/log",
  "logger_name": "org.zalando.logbook.Logbook",
  "thread_name": "reactor-http-epoll-2",
  "level": "TRACE",
  "level_value": 5000,
  "application_name": "logging-spring-webflux",
  "http": {
    "origin": "local",
    "type": "response",
    "correlation": "c00b0b4630533e95",
    "duration": 2074,
    "protocol": "HTTP/1.1",
    "status": 200,
    "headers": {
      "Content-Length": [
        "69"
      ],
      "Content-Type": [
        "application/json"
      ]
    },
    "body": {
      "data": {
        "title": "foo",
        "body": "bar",
        "userId": 1
      },
      "message": "response"
    }
  }
}

The log file is now ready to be ingested. Logbook provides many more features and configuration options, such as obfuscation of headers and query parameters, logging strategies, and path inclusion or exclusion; see the documentation for more details.
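
As an illustration (a sketch based on the builder API described in the Logbook README, not part of the original project), a custom Logbook bean could skip actuator traffic and mask the Authorization header:

@Bean
public Logbook logbook() {
    // Sketch only: Conditions and HeaderFilters are static helper classes from the Logbook core module.
    return Logbook.builder()
            .condition(Conditions.exclude(Conditions.requestTo("/actuator/**")))
            .headerFilter(HeaderFilters.authorization())
            .build();
}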

Setting up the Elastic Stack

Create a docker-compose.yml file at the root level of the project. It will contain the Elastic Stack services and our app.

version: "3.8"

services:
  api:
    build: .
    ports:
      - '8080:8080'
    container_name: webflux-api
    restart: always
    volumes:
      - ./data/logs:/app/logs
    networks:
      - elk-network

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.14.3
    ports:
      - 9200:9200
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    networks:
      - elk-network
    healthcheck:
      test: curl -s http://localhost:9200 >/dev/null || exit 1
      interval: 30s
      timeout: 10s
      retries: 50

  kibana:
    image: docker.elastic.co/kibana/kibana:8.14.3
    depends_on:
      elasticsearch:
        condition: service_healthy
    ports:
      - 5601:5601
    environment:
      - SERVERNAME=kibana
      - SERVER_NAME=kibana
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    networks:
      - elk-network
    healthcheck:
      test: curl -s -I http://localhost:5601 | grep -q 'HTTP/1.1 302 Found'
      interval: 30s
      timeout: 10s
      retries: 50

  logstash:
    image: docker.elastic.co/logstash/logstash:8.14.3
    depends_on:
      kibana:
        condition: service_healthy
    #user: root
    volumes:
      - "./data:/usr/share/logstash/app_webflux_logs/"
      - "./docker/conf/logstash.conf:/usr/share/logstash/pipeline/logstash.conf:ro"
    networks:
      - elk-network
    environment:
      - xpack.monitoring.enabled=false
      - ELASTIC_HOSTS=http://elasticsearch:9200

volumes:
  data:
    driver: local

networks:
  elk-network:
    driver: bridge
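
The api service is built from a Dockerfile at the root of the project, which the post does not show. A minimal sketch, assuming the Spring Boot fat jar is produced by mvn package under target/, could look like the following. The /app working directory matters, because the ./logs location configured in logback-spring.xml then resolves to /app/logs, the path mounted in docker-compose above.

# Hypothetical Dockerfile sketch; base image and jar name are assumptions.
FROM eclipse-temurin:17-jre
WORKDIR /app
COPY target/*.jar app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]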

Logstash pipeline config

Create a logstash.conf file inside the docker/conf folder (the path mounted into the Logstash container above) with the following content:

input {
  file {
    # https://www.elastic.co/guide/en/logstash/current/plugins-inputs-file.html
    # default is TAIL, which assumes more data will come into the file
    mode => "tail"
    type => "json_lines"
    path => "/usr/share/logstash/app_webflux_logs/logs/logging-spring-webflux*"
    sincedb_path => "/dev/null"
    start_position => "beginning"
  }
}

filter {
  json {
    source => "message"
  }
}

output {
  elasticsearch {
    index => "demo-webflux-logstash"
    hosts => "${ELASTIC_HOSTS}"
    data_stream => false
  }
}

The logstash pipeline config file has a separate section for each type of plugin we want to add to the event processing pipeline. Each section contains configuration options for one or more plugins. If we specify multiple filters, they are applied in the order they appear in the configuration file. If we specify multiple outputs, events are sent to each destination sequentially, in the order they appear in the configuration file.

The mode “tail” aims to track changing log files and emit new content as it’s appended to each file.

Then, run docker-compose to start the containers:

docker-compose up -d

Testing

Once we’ve run the docker-compose up command, we can check that Elasticsearch and Kibana are up and running.
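
For instance, from the host machine, using the ports mapped in docker-compose:

# Elasticsearch should answer with its cluster info
curl http://localhost:9200

# Kibana should respond (the health check above expects a 302 redirect)
curl -I http://localhost:5601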

We can now view the logs by clicking the Discover menu in the navigation pane. Let’s try to call the API endpoint.
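
For example:

curl http://localhost:8080/v1/trace/log

Note that in Kibana 8.x you may first need to create a data view matching the demo-webflux-logstash index (Stack Management → Data Views) before the log documents show up under Discover.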

Conclusion

Well done! In this post, we learned how to centralize Spring WebFlux logging using Zalando Logbook and the Elastic Stack.

The complete source code is available on GitHub.

You can reach out to me and follow me on Medium, Twitter, GitHub, and LinkedIn.

Support me through GitHub Sponsors.

Thank you for reading! See you in the next post.

References

👉 Link to Medium blog
